diff --git a/taskcompassscalingmultitaskpretrainingwithtaskprefix/68a17a92-d0f4-45da-a460-ec836d419fed_content_list.json b/taskcompassscalingmultitaskpretrainingwithtaskprefix/68a17a92-d0f4-45da-a460-ec836d419fed_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..70f424b0a994631f0434397ca146f1bbd2313809
--- /dev/null
+++ b/taskcompassscalingmultitaskpretrainingwithtaskprefix/68a17a92-d0f4-45da-a460-ec836d419fed_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f609d281b8e188a9d2d50dce6059b5e90e8361913bc77c14a1fcb65ddca177eb
+size 101106
diff --git a/taskcompassscalingmultitaskpretrainingwithtaskprefix/68a17a92-d0f4-45da-a460-ec836d419fed_model.json b/taskcompassscalingmultitaskpretrainingwithtaskprefix/68a17a92-d0f4-45da-a460-ec836d419fed_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..0f0ed676a42be52f86e08295ae6990bc4e1dcd8b
--- /dev/null
+++ b/taskcompassscalingmultitaskpretrainingwithtaskprefix/68a17a92-d0f4-45da-a460-ec836d419fed_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec62bfc0f8fd8c70987ccf2c6538f30243520deeab2f5faa06fcb2aef26e6f93
+size 129718
diff --git a/taskcompassscalingmultitaskpretrainingwithtaskprefix/68a17a92-d0f4-45da-a460-ec836d419fed_origin.pdf b/taskcompassscalingmultitaskpretrainingwithtaskprefix/68a17a92-d0f4-45da-a460-ec836d419fed_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..76ec6c7ecf8511d4688435ef39b4430d405bc9bf
--- /dev/null
+++ b/taskcompassscalingmultitaskpretrainingwithtaskprefix/68a17a92-d0f4-45da-a460-ec836d419fed_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:110e08b6d67fd31dc0da9243f06792074687dff34378723599c724590159b97f
+size 1341685
diff --git a/taskcompassscalingmultitaskpretrainingwithtaskprefix/full.md b/taskcompassscalingmultitaskpretrainingwithtaskprefix/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..dc565eaa77f647af30824212e80576d5cc02d082
--- /dev/null
+++ b/taskcompassscalingmultitaskpretrainingwithtaskprefix/full.md
@@ -0,0 +1,354 @@
+# Task Compass: Scaling Multi-task Pre-training with Task Prefix
+
+Zhuosheng Zhang $^{1*}$ , Shuohang Wang $^{2}$ , Yichong Xu $^{2}$ , Yuwei Fang $^{2}$ , Wenhao Yu $^{3*}$ , Yang Liu $^{2}$ , Hai Zhao $^{1}$ , Chenguang Zhu $^{2}$ and Michael Zeng $^{2}$
+
+$^{1}$ Shanghai Jiao Tong University, Shanghai, China
+
+$^{2}$ Microsoft Cognitive Services Research, Redmond, WA, USA
+
+3University of Notre Dame, Notre Dame, IN, USA
+
+1zhangzs@sjtu.edu.cn, zhaohai@cs.sjtu.edu.cn;
+
+2{shuowa, yicxu, yuwfan, yaliu10, chezhu, nzeng}@microsoft.com;3wyu1@nd.edu
+
+# Abstract
+
+Leveraging task-aware annotated data as supervised signals to assist with self-supervised learning on large-scale unlabeled data has become a new trend in pre-training language models. Existing studies show that multitask learning with large-scale supervised tasks suffers from negative effects across tasks. To tackle the challenge, we propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks. We conduct extensive experiments on 40 datasets, which show that our model can not only serve as the strong foundation backbone for a wide range of tasks but also be feasible as a probing tool for analyzing task relationships. The task relationships reflected by the prefixes align transfer learning performance between tasks. They also suggest directions for data augmentation with complementary tasks, which help our model achieve human-parity results on commonsense reasoning leaderboards. Code is available at https://github.com/cooelf/CompassMTL
+
+# 1 Introduction
+
+Recent years have witnessed a growing interest in leveraging a unified pre-trained language model (PrLM) to solve a wide range of natural language processing tasks (Tay et al., 2022; Chowdhery et al., 2022; Xie et al., 2022; Zhang et al., 2022). The pre-training recipe of a PrLM is driving from self-supervised learning (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019; Lan et al., 2020; Clark et al., 2020) to multi-task learning (MTL) with a mixture of standard self-supervised tasks and various supervised tasks,
+
+
+Figure 1: Input-output view. We append a task prefix for each data sequence to capture common patterns from the dataset and require the model to predict some randomly masked prefixes to capture task differences.
+
+which takes advantage of learning from both large-scale unlabeled corpus and high-quality human-labeled datasets (Raffel et al., 2019; Aribandi et al., 2021).1 Benefitting from supervision from related tasks, MTL approaches reduce the cost of curating deep learning models for an individual task and provide a shared representation that is generally applicable for a range of tasks (Wu et al., 2020b).
+
+In the research line of multi-task learning for PrLMs, a typical solution is to cast all tasks into a text-to-text format and utilize an encoder-decoder PrLM such as T5 to predict the target sequences (Raffel et al., 2019; Aribandi et al., 2021). Despite the extensive efforts on leveraging supervised tasks in strengthening PrLMs, the latest trend is extreme scaling of task numbers, with little attention paid to the relationships between tasks (Sanh et al., 2021; Wei et al., 2021). Aribandi et al. (2021) investigated co-training transfer effects amongst task-families and empirically found that tasks in different families may have side effects between each other, e.g., summarization tasks generally seem to hurt performance on other task families such as dialogue system (Mehri et al., 2020),
+
+natural language inference (Bowman et al., 2015), and commonsense reasoning (Lourie et al., 2021).
+
+When the task number scales up, the training of PrLMs would be more vulnerable to negative transfer due to the severe inconsistency of domain and data distribution between tasks (Wu et al., 2020b; Padmakumar et al., 2022). As one of the key concepts underlying MTL, task relationships potentially provide an effective basis for employing PrLMs in a more effective and interpretable way.
+
+To handle the issue of negative transfer during multi-task learning, early studies have taken task relationships into account by employing a dual-process model architecture that is composed of a shared encoder and task-specific layers. The two parts are supposed to integrate the common features of all the learning tasks and explore the task relationship in a predefined manner (Zheng et al., 2019; Liu et al., 2019a; Bai et al., 2020; Ma et al., 2021), respectively. However, these methods require additional modifications to model architecture and increase the model complexity and computation cost. Therefore, they are suboptimal for applying to PrLMs in terms of generality and computational bottlenecks.
+
+All the considerations above lay down our goal to investigate simple yet effective ways to measure the task relationship without additional cost and keep the generality of PrLMs. In this work, we propose a prefix-guided multi-task learning framework (CompassMTL) to explore the mutual effects between tasks (Figure 1) and improve model performance with complementary tasks. Targeting natural language understanding (NLU) tasks, we employ a discriminative $\mathrm{PrLM}^2$ as the backbone model and train the model on 40 tasks. Experimental results show that our model achieves human-parity performance on commonsense reasoning tasks. We further probe into the task relationship entailed in the tasks prefix representations, finding that the measured relationship highly correlates with task-to-task transfer performance, and it is also of referenced value for optimizing the PrLM on a target task with its complementary tasks during MTL, i.e., fewer tasks with better performance.
+
+In summary, our contributions are three folds:
+
+1) A unified discriminative multi-task PrLM for
+
+NLU tasks will be released as a strong counterpart for the dominant T5-based encoder-decoder PrLMs trained with MTL.
+
+2) A probing tool of using task prefix to explore the task relationships in large-scale MTL. We observe that the task relationships reflected by the prefixes manifest a correlation with transfer learning performance, and they help our model achieve better results with complementary tasks.
+3) State-of-the-art results on a variety of NLU tasks, especially human-parity benchmark performance on commonsense reasoning leaderboards, i.e., HellaSwag and $\alpha$ NLI.
+
+# 2 Background and Related Work
+
+# 2.1 Self-supervised Pre-training
+
+PrLMs are commonly pre-trained on large-scale corpora and then used for fine-tuning individual tasks. One of the most widely-used pre-training tasks is masked language modeling (MLM), which first masks out some tokens from the input sentences and then trains the model to predict them by the rest tokens. There are derivatives of MLM including permuted language modeling in XLNet (Yang et al., 2019) and sequence-to-sequence MLM in MASS (Song et al., 2019) and T5 (Raffel et al., 2019). Beyond the general-purpose pre-training, domain-adaptive pre-training and task-adaptive pretraining have attracted attention in recent studies.
+
+1) Domain-adaptive Pre-training. To incorporate specific in-domain knowledge, domain-aware pretraining is designed, which directly post-trains the original PrLMs using the domain-specific corpus. Popular models have been proposed in the dialogue domain (Whang et al., 2020; Wu et al., 2020a), as well as in the medical and science domains (Lee et al., 2020; Beltagy et al., 2019; Huang et al., 2019a; Yu et al., 2022).
+2) Task-adaptive Pre-training. The goal of task-adaptive pre-training is to capture task-specific skills by devising the pre-training tasks. The popular application scenarios include logical reasoning and dialogue-related tasks Kumar et al. (2020); Gu et al. (2020); Zhang and Zhao (2021); Li et al. (2021). For example, Whang et al. (2021) proposed various utterance manipulation strategies, including utterance insertion, deletion, and retrieval, to maintain dialog coherence.
+
+
+Figure 2: Comparison with existing paradigms of multi-task learning. Typical unified text-to-text methods include T5 (Raffel et al., 2019), ExT5 (Aribandi et al., 2021), FLAN (Wei et al., 2021), and T0 (Sanh et al., 2021).
+
+
+
+
+
+# 2.2 Multi-task Learning for PrLMs
+
+Our concerned MTL in the field of PrLMs is partially related to the studies of task-adaptive pretraining discussed above. The major difference is that the PrLMs in MTL are fed with human-annotated datasets instead of those automatically constructed ones for self-supervised tasks. Figure 2 overviews the paradigms of MTL PrLMs. Existing methods in this research line mostly vary in model architectures and training stages. For example, MT-DNN (Liu et al., 2019a) applied multi-task learning to train a shared model on all the target datasets in the fine-tuning stage, and there are several task-aware output modules to adapt the shared representations to each task. Recent studies, such as ExT5 (Aribandi et al., 2021), TO (Sanh et al., 2021), and FLAN (Wei et al., 2021), commonly applied an Encoder-Decoder architecture and convert a variety of tasks into the same text-to-text format and train those tasks jointly (Figure 2-a). We argue that they are not the optimal solution considering the model complexity and the gap between original and transformed task formats, especially for natural language understanding tasks that are in a discriminative manner, e.g., classification, multiple-choice, etc. Actually, there are studies (McCann et al., 2018; Keskar et al., 2019; Li et al., 2020; Khashabi et al., 2020) that transform traditional tasks into other formats like reading comprehension or question answering and achieve better results than prior methods. These studies motivate us to explore superior model backbones and data formats, especially for the application in NLU tasks.
+
+# 2.3 Modeling Task Relationships in MTL
+
+Modeling task relationships is a classic topic in deep learning studies. Bingel and Søgaard (2017) studied the research question about what task relations make gains in traditional natural language processing tasks and investigated when and why MTL works in sequence labeling tasks such as chunking, sentence compression, POS tagging, keyphrase detection, etc. Wu et al. (2020b) found that task data alignment can significantly affect the performance of MTL and proposed architecture with a shared module for all tasks and a separate output module for each task.
+
+Since these methods require additional modifications of model architecture, they are suboptimal for employment in PrLMs, considering computational bottlenecks and generality when task scaling. In the era of pre-trained models, Geva et al. (2021) analyzed the behavior transfer in PrLMs between related jointly-trained tasks such as QA and summarization and thus provided evidence for the extrapolation of skills as a consequence of multi-task training. ExT5 (Aribandi et al., 2021) evaluated the transfer performance among task families in a multi-task co-training setup and observed that negative transfer is common, especially when training across task families. Although there are recent studies that insert prompts to describe the task requirements in the data sequences (Liu et al., 2021; Su et al., 2022; Qin et al., 2021; Vu et al., 2022), it is still not clear whether the prompts help negative transfer or whether the prompts necessarily capture task relationships. In this work, we find that using task
+
+prefixes along with the MLM for prefix prediction effectively indicates task relationships and helps MTL with fewer datasets but better performance.
+
+# 3 Methodology
+
+# 3.1 Task Format
+
+According to prior studies (McCann et al., 2018; Keskar et al., 2019; Khashabi et al., 2020), the benchmark results on a task can be affected dramatically by training a model on different formats of the same dataset. In contrast to converting all tasks in a text-to-text manner, we choose to model our tasks in a multiple-choice-like format to minimize the format transformation for NLU tasks. Our transformation aims to ensure that each example in a task has a specific number of $k$ candidate options during the multi-task training stage. The original pair-wise input texts are regarded as context and question in the view of the multiple-choice problem. If there is only one text given, then the question will be kept empty. For the outliers, the data will be processed as follows (Examples are provided in Appendix A.1).
+
+1) If the number of candidate options $>k$ , the redundant options will be randomly discarded;
+2) If the number of candidate options $< k$ , add "N/A" placeholder options.
+3) If the ground truth is a list, randomly select a correct option from the gold list and randomly sample $k - 1$ negative options from the held-out set except the left items in the gold list.
+4) If the ground truth is a list and there is an empty choice, construct the truth option manually. For example, "there is no violation"; the negative examples are constructed as the same as 3).
+
+As a result, each training example will be formed as a sequence like {{Prefix}: context, question, option}, where [Prefix] indicates the task name in natural language such as [hellawag] presupposed to each data example.
+
+# 3.2 CompassMTL
+
+Our model is encoder-only, which is based on the DeBERTa architecture (He et al., 2021). The model is trained by using both the supervised task objective and the standard self-supervised denoising objective as described below.
+
+Suppose that we have a dataset $\mathcal{D} = \{(y_i, c_i, q_i, r)\}_{i=1}^N$ , where $c_i$ represents the context,
+
+$q_{i}$ represents the question, $r$ denotes a set of answer options $r = \{r_1,\dots ,r_k\}$ , and $y_{i}$ is the label. $N$ is the number of training data. Each data example is formed as $x = [\mathrm{CLS}][\mathrm{Prefix}]c_i[\mathrm{SEP}]q_i r_j[\mathrm{SEP}],^4$ $r_j\in r$ . The goal is to learn a discriminator $g(\cdot ,\cdot)$ from $\mathcal{D}$ . For the supervised task, the loss function is: $\mathcal{L}_{mtl} = -\sum_{i = 1}^{N}\sum_{j = 1}^{k}\log (g(c_i,q_i\circ r_j))$
+
+At the inference phase, given any new context $c_{i}$ , question $q_{i}$ and options $r$ , we use the discriminator to calculate $g(c_{i}, q_{i} \circ r_{j})$ as their matching score where $\circ$ denotes concatenation. The option with the highest score is chosen as the answer for the $i$ -th example.
+
+Let $\hat{x}_i$ denote the masked sequence where a certain proportion of tokens in $x_i$ are randomly replaced with a special [MASK] symbol. Using $\hat{x}_i$ as the input fed to the model in parallel with $x$ , the self-supervised denoising objective is computed in the way of MLM: $\mathcal{L}_{mlm} = -\sum_{i=1}^{N}\sum_{j\in\mathcal{M}}\log p_{\theta}(t_{i,j}|\hat{x}_i)$ , where $t_{i,j}$ is the $j$ -th token in $x_i$ and $\mathcal{M}$ denotes the index set of masked tokens for which the loss will be computed. To encourage the model to learn from both supervised and self-supervised signals, we combine $\mathcal{L}_{mtl}$ and $\mathcal{L}_{mlm}$ during training: $\mathcal{L} = \mathcal{L}_{mtl} + \lambda \mathcal{L}_{mlm}$ where $\lambda$ is a hyper-parameter to balance the weight of the training objectives.
+
+Compared with traditional MTL methods, CompassMTL is data-centric, without any modification of model architecture (Figure 2-b). It can be regarded as an efficient implementation of the traditional MTL method composed of a shared representation module and multiple task-aware modules. Since the data from the same datasets share the same task prefix, the prefix is supposed to reflect the common patterns from the dataset, which works in a similar operational principle to the shared representation module. During the training with our self-supervised objective, task prefixes will be randomly masked in a specific probability. The model is required to distinguish the task prefixes and predict the right prefix according to the input data. Therefore, the task differences will also be necessarily captured.
+
+# 3.3 Task Relationship Exploration
+
+Regarding the task prefixes as the compass to navigate the task relationships, it is possible to use our framework to analyze the relevance of
+
+
+Figure 3: Task taxonomy used in this work.
+
+
+
+
+
+tasks (Section 5.2). Our model for prefix probing experiments is slightly revised from CompassMTL, which only uses the MLM objective and is fed by the data without options to alleviate possible shortcuts in options. After the model is pre-trained with MTL, we fetch the prefix embeddings from the model embedding layer and calculate the Pearson correlation between each task pair with min-max normalization. Assuming that we have $n$ tasks, the process will result in $n \times n$ correlation scores to indicate the task relationships.
+
+For a target task, we can directly rank the top-related tasks according to the correlation scores and use those complementary tasks for MTL before fine-tuning a target task (Figure 2-c).
+
+# 4 Experiments
+
+# 4.1 Datasets
+
+There are 40 datasets used for training our multi-task model, some of which are collected from GLUE (Wang et al., 2019b), SuperGLUE (Wang et al., 2019a), Rainbow (Lourie et al., 2021), and LexGLUE (Chalkidis et al., 2021). Figure 3 illustrates the composition of our task families.
+
+GLUE GLUE (The General Language Understanding Evaluation benchmark) (Wang et al., 2019b) is a collection of 9 various tasks for sentence-level classification. We only use 8 of them: CoLA(Warstadt et al., 2019), SST-2 (Socher et al., 2013), MRPC (Dolan and Brockett, 2005), STS-B (Cer et al., 2017), QQP (Chen et al., 2018), QNLI (Rajpurkar et al., 2016), MNLI (Nangia et al., 2017) and RTE (Bentivogli et al., 2009).
+
+Rainbow Rainbow (Lourie et al., 2021) is a suite of commonsense question answering tasks including $\alpha$ NLI (Bhagavatula et al., 2020), CosmosQA (Huang et al., 2019b), HellaSwag (Zellers et al., 2019), PIQA (Bisk et al., 2020), SocialIQA (Sap et al., 2019), Winogrande (Sakaguchi et al., 2020).
+
+LexGLUE LexGLUE (Legal General Language Understanding Evaluation) (Chalkidis et al., 2021) is a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks, which contain 7 subtasks, namely ECtHR (Task A), ECtHR (Task B), SCOTUS, EUR-LEX, LEDGAR, UNFAIR-ToS, and CaseHOLD.
+
+Domain-specific Classification We use seven datasets that cover specific domains (biomedical and computer science publications, news, and reviews) following Gururangan et al. (2020). The datasets are CHEMPROT (Kringelum et al., 2016), RCT (Dernoncourt and Lee, 2017), ACL-ARC (Jurgens et al., 2018), HYPERPARTISAN (Kiesel et al., 2019), AGNEWS (Zhang et al., 2015), HELPFULNESS (McAuley et al., 2015), and IMDB (Maas et al., 2011).
+
+Multiple-choice QA The datasets include DREAM (Sun et al., 2019), QuAIL (Rogers et al., 2020), QuaRTz (Tafjord et al., 2019), WiQA (Tandon et al., 2019), QASC (Khot et al., 2020), SCiQ (Welbl et al., 2017), ARC (Clark et al., 2018). We follow Sanh et al. (2021) to organize this task family.
+
+Miscellaneous The other datasets are BookQ (Clark et al., 2019), CB (De Marneffe et al., 2019), CommonsenseQA v1/v2 (Talmor et al., 2019, 2021), and COPA (Roemmelo et al., 2011). BoolQ, CB, and COPA are also collected in SuperGLUE (Wang et al., 2019a). We select those tasks as they can be easily transformed into our unified format.
+
+# 4.2 Implementations
+
+Our model is implemented using Pytorch and based on the Transformers Library (Wolf et al., 2019). To save computation, we initialize our model with the released checkpoints of DeBERTa-V3-Large, and the hyper-parameter setting generally follows DeBERTa (He et al., 2021). Our experiments
+
+
Model
Arch.
Tasks
Params.
αNLI
CosmosQA
HellaSwag
PIQA
SocialIQA
Winogrande
Average
UNICORN
Enc-Dec
6
770M
79.5
83.2
83.0
82.2
75.5
78.7
80.4
ExT5
Enc-Dec
107
770M
82.3
85.9
89.0
85.0
79.7
82.5
84.1
ExDeBERTa
Enc only
40
567M
87.9
85.3
83.6
85.5
79.6
87.0
84.8
CompassMTL
Enc only
40
567M
91.7
87.8
95.6
87.3
81.7
89.6
89.0
w/ Tailor
Enc only
14
567M
92.5
88.8
96.1
88.3
82.2
90.5
89.7
+
+Table 1: Results on the Rainbow commonsense reasoning validation sets. The baseline models are UNICORN $_{large}$ (Lourie et al., 2021) and ExT5 $_{large}$ (Aribandi et al., 2021). ExDeBERTa is our imitation of ExT5-style (Aribandi et al., 2021) MTL training by using DeBERTa backbone trained on 40 datasets with a multi-task objective of self-supervised denoising and supervised task objective, after which is transferred to each individual task. "w/Tailor" denotes multi-task training with related datasets (14-subset) according to our discovery in Section 5.3.
+
+
Method
ECtHR (A)
ECtHR (B)
SCOTUS
EUR-LEX
LEDGAR
UNFAIR-ToS
CaseHOLD
μ-F1
m-F1
μ-F1
m-F1
μ-F1
m-F1
μ-F1
m-F1
μ-F1
m-F1
μ-F1
m-F1
μ/m-F1
BERT
71.2
63.6
79.7
73.4
68.3
58.3
71.4
57.2
87.6
81.8
95.6
81.3
70.8
RoBERTa
69.2
59.0
77.3
68.9
71.6
62.0
71.9
57.9
87.9
82.3
95.2
79.2
71.4
DeBERTa
70.0
60.8
78.8
71.0
71.1
62.7
72.1
57.4
88.2
83.1
95.5
80.3
72.6
Longformer
69.9
64.7
79.4
71.7
72.9
64.0
71.6
57.7
88.2
83.0
95.5
80.9
71.9
BigBird
70.0
62.9
78.8
70.9
72.8
62.0
71.5
56.8
87.8
82.6
95.7
81.3
70.8
Legal-BERT
70.0
64.0
80.4
74.7
76.4
66.5
72.1
57.4
88.2
83.0
96.0
83.0
75.3
CaseLaw-BERT
69.8
62.9
78.8
70.3
76.6
65.9
70.7
56.6
88.3
83.0
96.0
82.3
75.4
ExDeBERTa
-
-
-
-
-
-
-
-
-
-
-
-
74.8
CompassMTL
71.7
60.7
80.6
73.2
77.7
68.9
67.2
42.1
88.1
82.3
96.3
84.3
76.1
w/ Tailor
73.0
64.7
80.7
72.3
76.3
68.6
66.9
44.9
88.3
83.2
96.2
83.2
78.1
+
+Table 2: Results on LexGLUE test sets. The baseline results except ours in the last column are from Chalkidis et al. (2021). Since the LexGlue tasks except CaseHold are multi-label classification problems, the ExDeBERTa model is not directly applicable for those tasks without extra task-specific fine-tuning; thus, the results are not reported. "w/Tailor" denotes multi-task training with the seven datasets in the same LexGLUE family.
+
+are run on 8x32GB Tesla A100 GPUs. The maximum input sequence length is 512. Similar to Lourie et al. (2021), the implementation of CompassMTL includes two procedures. We first conduct multi-task pre-training on all the datasets and then continue to train on each target dataset alone to verify the performance. For multi-task pretraining, we use a peak learning rate of 6e-6 with a warm-up rate of 0.1. We run up to 6 epochs using a batch size of 128. The masking ratio of MLM is 0.25, and $\lambda$ is set to 0.1. To avoid large-scale datasets dominating the pre-training, the training data is randomly sampled by a limit of $10k$ on the maximum dataset size according to Raffel et al. (2019). For fine-tuning experiments, the initial learning rate is selected in $\{3\mathrm{e} - 6,6\mathrm{e} - 6,8\mathrm{e} - 5\}$ with a warm-up rate of 0.1. The batch size is selected in $\{16,32\}$ . The maximum number of epochs is chosen from $\{6,10\}$ . More fine-tuning details are available in Appendix A.2.
+
+# 4.3 Main Results
+
+Our main results are reported on the Rainbow and LexGLUE benchmark datasets for comparisons with public methods. As the statistics shown in
+
+Tables 1-2, we see that CompassMTL models outperform the related public models in general. Specifically, it is observed that our encoder-only models yield better performance than the T5-based encoder-decoder models under similar model sizes. Further, the comparison in the second column discloses the potential to achieve comparable or better performance by multi-task learning with related tasks (w/ Tailor). How to find the related tasks and use them to enhance model performance will be discussed in the following section.
+
+# 5 Analysis
+
+# 5.1 Ablation Study
+
+Table 3 presents our ablation study to dive into the effectiveness of different training objectives and the influence of task prefixes in our method. For the training objectives, MTL and MLM denote the training objectives of $\mathcal{L}_{mtl}$ and $\mathcal{L}_{mlm}$ , respectively. The results suggest that both supervised and self-supervised tasks contribute to the overall model performance, and the supervised task is more beneficial than the self-supervised task in our study. Further, to inspect the role of the task prefixes, we
+
+
+Figure 4: Heatmap of task relationships probed by prefix embeddings.
+
+ablate the model with three conditions: 1) must: the prefixes are masked with the probability of 1.0; 2) no: the prefixes are masked with the probability of 0.0; 3) only: only prefixes will be masked, i.e., the prefix of each example will be masked, while the other tokens are left as original. The results in Table 3 show that using prefixes (Prefix $_{\text{must}}$ and Prefix $_{\text{only}}$ ) indeed boosts the model performance generally.
+
+# 5.2 Relationship Probing
+
+Figure 4 illustrates the heatmap of task relationships probed by prefix embeddings. We see that the datasets inside the same task family (e.g., GLUE and Rainbow) correlate highly with each other. The LexGLUE tasks are less related to other tasks because the texts are mainly legal descriptions. In addition, the correlation scores also accord with the common practice of data augmentation. For example, the NLI datasets (MNLI, QNLI, RTE)
+
+
Model
Accuracy
Single
84.6
CompassMTL
89.4
- MTL
85.0
- MLM
88.8
Prefixmust
89.3
Prefixno
88.9
Prefixonly
89.1
+
+Table 3: Ablation Study of the training objectives and task prefixes. We calculate the average accuracy scores on the development sets of all the 40 datasets.
+
+share close relevance, and it is helpful to initialize parameters from an MNLI model to fine-tune RTE (Liu et al., 2019b; Qu et al., 2020).
+
+We are interested in whether the probed relationship scores coordinate with the model performance transferred between tasks. We first obtain transfer accuracy between tasks in a dual-task training setup (Aribandi et al., 2021). Assume that we have 13 source tasks from GLUE and Rainbow tasks and 5 target tasks ( $\alpha$ NLI, HellaSwag, MRPC, PIQA, QNLI, and RTE). We first train individual models using the mixture of training sets from each pair of source and target tasks, and then evaluate the model on the validation set of the target dataset. As a result, we have $5 \times 13$ transfer results. For each
+
+
Dataset
RTE
MRPC
QNLI
HellaSwag
αNLI
Avg.
Probing
0.19
0.22
0.38
0.12
0.51
0.28
Length
-0.12
0.43
-0.17
0.04
-0.07
0.02
Vocab
0.37
-0.27
-0.001
0.09
0.31
0.10
+
+Table 4: Pearson correlation between each relationship measure and the transfer accuracy.
+
+
Model
Tasks
RTE
MRPC
QNLI
HellaSwag
αNLI
Single
1
61.4
89.2
95.0
95.1
91.3
40-fullset
40
92.8
90.4
95.5
95.6
91.7
Top 5
5
92.4
91.9
95.3
95.6
91.6
Family
6/7
91.4
90.2
95.0
95.7
91.9
14-subset
14
91.8
90.3
95.6
96.1
92.5
+
+target dataset, we calculate Pearson correlation between relationship scores and transfer accuracy among the source datasets. In Table 4, we find that the relationship scores are positively bound up with the transfer performance. The results indicate the potential to find related tasks by the relationship scores. In other words, the relationship scores essentially reflect task relationships.
+
+Task relationships may also be reflected by shallow token distributions, such as vocabulary overlap or sentence length. To investigate if our relationship probing can be replaced by comparing the token distributions, we further analyze the correlation between the similarity of token distributions and dual-task transfer accuracy. For sentence length, we first calculate the absolute values of the average length difference between source and target datasets and then convert them to negative values (intuitively less difference in length, more close the relationship). The vocab overlap of the source and target datasets is also computed for comparison. The similarity between datasets reflects weak correlations with the transfer accuracy (2/5 and 3/5 datasets, respectively in Table 4). These results are less consistent than our probing method, which indicates that our method mines more complex patterns toward task relationships.
+
+# 5.3 Complementary Transfer
+
+To inspect whether using more datasets always leads to better performance and whether using the most related datasets can lead to competitive
+
+Table 5: Complementary transfer results using different mixtures of datasets for MTL. The last three rows represent the mixture in different granularity inspired by our relationship probing.
+
+
Model
SQuADv1.1
SQuADv2.0
NER
EM
F1
EM
F1
F1
Baseline
88.8
94.8
87.1
90.5
96.5
CompassMTL
89.7
95.1
88.5
91.3
96.9
+
+Table 6: Results on the SQuAD v1.1/V2.0 and CoNLL2003 (NER) development sets. The evaluation metrics are Exact-Match (EM) and F1 scores.
+
+
Model
HellaSwag
αNLI
Human Performance
95.60
92.90
Previous SOTA
94.87
92.20
Our Results
95.94
92.80
+
+Table 7: Leaderboard tests of HellaSwag and $\alpha$ NLI.
+
+results. In this part, we conduct a complementary transfer analysis by selecting a group of datasets to train an MTL model and fine-tuning the model on target datasets. Four choices of dataset mixture are compared: 1) 40-fullset: the same as our basic setting of CompassMTL in this work; 2) Top-5 ranked dataset according to based on our probed relationship scores; 3) Family: the datasets belonged to the same family with the target dataset, i.e., 6 datasets for Rainbow tasks and 7 datasets for GLUE tasks; 4) 14-subset: the mixture of Rainbow and GLUE datasets.
+
+Table 5 presents the comparison results. We observe that the top-5 ranked variant yields comparable, even better results than the others, which indicates that models trained with more datasets may not always bring benefits. The results also indicate that small-scale datasets (e.g., MRPC and RTE), which have relatively high average correlation scores with the other datasets, are more likely to benefit from the complementary transfer. With the tasks scaling up, the performance (family $\rightarrow$ 14-subset) may improve as more related tasks are involved in training.
+
+# 5.4 Human-parity on Commonsense Reasoning Leaderboards
+
+Table 7 presents our test evaluation on the official leaderboards of HellaSwag7 and $\alpha$ NLI8. The submissions are based on the ensemble of three models selected according to Section 5.3. Compared with public methods that use much larger PrLMs, model ensemble, and knowledge
+
+
Model
αNLI
CosmosQA
HellaSwag
PIQA
SocialIQA
Winogrande
Average
T5
68.5
69.6
56.6
67.7
65.1
62.4
65.0
UNICORN
65.3
72.8
56.2
73.3
66.1
61.8
65.9
CompassMTL
69.1
72.6
57.7
73.6
66.6
64.9
67.4
+
+Table 8: Results on the Rainbow validation sets by using T5-base as the backbone model.
+
+graphs, our models establish new state-of-the-art results and reach human-parity performance.
+
+# 5.5 Beyond The Unified Format
+
+To verify whether our model can be employed for tasks that are unavailable to be transformed into our unified format, we evaluate the effectiveness of CompassMTL by using the typical reading comprehension datasets SQuAD v1.1/2.0 (Rajpurkar et al., 2016, 2018) and named entity recognition (NER) dataset CoNLL 2003 (Tjong Kim Sang and De Meulder, 2003), which represent extractive question answering and sequence labeling task formats, respectively. We first replicate the baselines for fine-tuning QA and NER tasks using the Transformers toolkit. For comparison, we initialize the baseline parameters with our model weights to see if CompassMTL is better than the baselines. Results in Table 6 show that our model is generally effective across formats. The results also indicate that CompassMTL can serve as a strong off-the-shelf representation encoder that is applicable for new tasks without needing to be pretrained again.
+
+# 5.6 Implementation Using The T5 Backbone
+
+Although our method is implemented by the encoder-only backbone to compete in NLU tasks, it is supposed to be generally applicable to other kinds of PrLMs, such as encoder-decoder T5. To verify the effectiveness, we employ the pre-trained T5-base model (Raffel et al., 2019) as the backbone. We use the Rainbow datasets for MTL and convert the data into text-to-text format following the standard processing for T5 training, with task prefixes inserted before each data sequence. The baselines are the single-task T5 trained on each individual task and UNICORN (Lourie et al., 2021) trained on the Rainbow datasets. Results in Table 8 verify that our method is generally effective.
+
+# 6 Conclusions
+
+This work presents a task prefix guided multi-task method by making use of task prefix to explore the mutual effects between tasks and improve model performance with complementary tasks. Our released model can not only serve as the strong foundation backbone for a wide range of NLU tasks but also be used as a probing tool for analyzing task relationships. Our model shows generalizable advances over tasks in diverse formats and establishes human-parity results on commonsense reasoning tasks. Based on our pre-trained model, we find that the prefixes necessarily reflect task relationships, which correlate with transfer learning performance between tasks and suggest directions for data augmentation of complementary tasks. In summary, our work has the following prospects for future studies:
+
+1) Collaborative multi-task learning of PrLMs. The recipe of using task prefixes in conjunction with prefix prediction in MLM training has shown effective for large-scale MTL pre-training.
+
+2) Suggestive choice for data augmentation. The task relationships probed by the prefix embeddings have shown informative in finding the complementary tasks. Using complementary tasks helps obtain better performance for a target task, especially for small-scale task datasets.
+
+3) Guidance for skill-aware model evaluation. The discovery of task relationships may help determine redundant datasets that assess similar patterns of models. Recently, there has been a trend to evaluate the comprehensive skills of deep learning models by using a large number of datasets (Srivastava et al., 2022), the selection of distinctive datasets can be guided by our relationship discovery criteria to avoid evaluation redundancy and save computation.
+
+Limitations. We acknowledge the major limitation of this work is that our model may not readily apply to new tasks. It is based on the common assumption of MTL that the set of tasks is known at training time. Adaptation to new tasks could be future work.
+
+# References
+
+Armen Aghajanyan, Anchit Gupta, Akshit Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta. 2021. Muppet: Massive multi-task representations with pre-finetuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5799-5811.
+Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q Tran, Dara Bahri, Jianmo Ni, et al. 2021. Ext5: Towards extreme multitask scaling for transfer learning. arXiv preprint arXiv:2111.10952.
+Lu Bai, Yew-Soon Ong, Tiantian He, and Abhishek Gupta. 2020. Multi-task gradient descent for multitask learning. Memetic Computing, 12(4):355-369.
+Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615-3620, Hong Kong, China. Association for Computational Linguistics.
+Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth pascal recognizing textual entailment challenge. In ACL-PASCAL.
+Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. 2020. Abductive commonsense reasoning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Joachim Bingel and Anders Søgaard. 2017. Identifying beneficial task relations for multi-task learning in deep neural networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 164-169, Valencia, Spain. Association for Computational Linguistics.
+Yonatan Bisk, Rowan Zellers, Ronan LeBras, Jianfeng Gao, and Yejin Choi. 2020. PIQA: reasoning about physical commonsense in natural language. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7432-7439. AAAI Press.
+Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,
+
+pages 632-642, Lisbon, Portugal. Association for Computational Linguistics.
+Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.
+Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Martin Katz, and Nikolaos Aletras. 2021. Lexglue: A benchmark dataset for legal language understanding in english. arXiv preprint arXiv:2110.00976.
+Zihan Chen, Hongbo Zhang, Xiaoji Zhang, and Leqi Zhao. 2018. Quora question pairs.
+Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
+Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. *BoolQ: Exploring the surprising difficulty of natural yes/no questions.* In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, Volume 1 (Long and Short Papers), pages 2924–2936, Minneapolis, Minnesota. Association for Computational Linguistics.
+Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457.
+Marie-Catherine De Marneffe, Mandy Simons, and Judith Tonhauser. 2019. The commitmentbank: Investigating projection in naturally occurring discourse. In proceedings of Sinn und Bedeutung, volume 23, pages 107-124.
+Franck Dernoncourt and Ji Young Lee. 2017. PubMed 200k RCT: a dataset for sequential sentence classification in medical abstracts. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 308-313, Taipei, Taiwan. Asian Federation of Natural Language Processing.
+
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).
+Mor Geva, Uri Katz, Aviv Ben-Arie, and Jonathan Berant. 2021. What's in your head? emergent behaviour in multi-task transformer models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8201-8215.
+Xiaodong Gu, Kang Min Yoo, and Jung-Woo Ha. 2020. Dialogbert: Discourse-aware response generation via learning to recover and rank utterances. arXiv:2012.01775.
+Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics.
+Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021. Debertav3: Improving deberta using electra-style pre-training with gradient-disentangled embedding sharing. arXiv preprint arXiv:2111.09543.
+Kexin Huang, Jaan Altosaar, and R. Ranganath. 2019a. Clinicalbert: Modeling clinical notes and predicting hospital readmission. arXiv:1904.05342.
+Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019b. Cosmos QA: Machine reading comprehension with contextual commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2391-2401, Hong Kong, China. Association for Computational Linguistics.
+David Jurgens, Srijan Kumar, Raine Hoover, Dan McFarland, and Dan Jurafsky. 2018. Measuring the evolution of a scientific field through citation frames. Transactions of the Association for Computational Linguistics, 6:391-406.
+Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Unifying question answering, text classification, and regression via span extraction. arXiv preprint arXiv:1904.09286.
+
+Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UNIFIEDQA: Crossing format boundaries with a single QA system. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1896-1907, Online. Association for Computational Linguistics.
+Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2020. QASC: A dataset for question answering via sentence composition. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8082-8090. AAAI Press.
+Johannes Kiesel, Maria Mestre, Rishabh Shukla, Emmanuel Vincent, Payam Adineh, David Corney, Benno Stein, and Martin Potthast. 2019. SemEval 2019 task 4: Hyperpartisan news detection. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 829-839, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
+Jens Kringelum, Sonny Kim Kjaerulff, Søren Brunak, Ole Lund, Tudor I Oprea, and Olivier Taboureau. 2016. Chemprot-3.0: a global chemical biology diseases mapping. Database, 2016.
+Pawan Kumar, Dhanajit Brahma, Harish Karnick, and Piyush Rai. 2020. Deep attentive ranking networks for learning to order sentences. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8115-8122. AAAI Press.
+Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, D. Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics.
+Lu Li, Chenliang Li, and Donghong Ji. 2021. Deep context modeling for multi-turn response selection in dialogue systems. Information Processing & Management, 58(1):102415.
+Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020. A unified
+
+MRC framework for named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5849-5859, Online. Association for Computational Linguistics.
+Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586.
+Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496, Florence, Italy. Association for Computational Linguistics.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
+Nicholas Lourie, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. *Unicorn on rainbow: A universal commonsense reasoning model on a new multitask benchmark.* In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 13480-13488.
+Yanbao Ma, Hao Xu, Junzhou He, Kun Qian, and Tiebing Li. 2021. Adaptive transfer learning via fine-grained multi-task pre-training. In 2021 4th International Conference on Algorithms, Computing and Artificial Intelligence, pages 1-5.
+Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics.
+Julian J. McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. 2015. Image-based recommendations on styles and substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, Santiago, Chile, August 9-13, 2015, pages 43-52. ACM.
+Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730.
+Shikib Mehri, Mihail Eric, and Dilek Hakkani-Tur. 2020. Dialoglue: A natural language understanding benchmark for task-oriented dialogue. arXiv preprint arXiv:2009.13570.
+
+Nikita Nangia, Adina Williams, Angeliki Lazaridou, and Samuel Bowman. 2017. The RepEval 2017 shared task: Multi-genre natural language inference with sentence representations. In Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP, pages 1-10, Copenhagen, Denmark. Association for Computational Linguistics.
+Vishakh Padmakumar, Leonard Lausen, Miguel Ballesteros, Sheng Zha, He He, and George Karypis. 2022. Exploring the role of task transferability in large-scale multi-task learning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2542-2550, Seattle, United States. Association for Computational Linguistics.
+Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
+Yu jia Qin, Xiaozhi Wang, Yusheng Su, Yankai Lin, Ning Ding, Zhiyuan Liu, Juanzi Li, Lei Hou, Peng Li, Maosong Sun, et al. 2021. Exploring low-dimensional intrinsic task subspace via prompt tuning. arXiv preprint arXiv:2110.07867.
+Yanru Qu, Dinghan Shen, Yelong Shen, Sandra Sajeev, Weizhu Chen, and Jiawei Han. 2020. Coda: Contrast-enhanced and diversity-promoting data augmentation for natural language understanding. In International Conference on Learning Representations.
+Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, W. Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv: 1910.10683.
+Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789, Melbourne, Australia. Association for Computational Linguistics.
+Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
+
+Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In 2011 AAAI Spring Symposium Series.
+Anna Rogers, Olga Kovaleva, Matthew Downey, and Anna Rumshisky. 2020. Getting closer to ai complete question answering: A set of prerequisite real tasks. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 8722-8731.
+Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Winogrande: An adversarial winograd schema challenge at scale. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8732-8740.
+Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207.
+Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4463-4473, Hong Kong, China. Association for Computational Linguistics.
+Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.
+Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: masked sequence to sequence pre-training for language generation. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 5926-5936. PMLR.
+Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615.
+Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, Huadong Wang, Kaiyue Wen, Zhiyuan Liu, Peng Li, Juanzi Li, et al. 2022. On transferability of prompt tuning for natural language processing. In Annual Conference of the North American Chapter
+
+of the Association for Computational Linguistics (NAACL).
+Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. DREAM: A challenge data set and models for dialogue-based reading comprehension. Transactions of the Association for Computational Linguistics, 7:217-231.
+Oyvind Tafjord, Matt Gardner, Kevin Lin, and Peter Clark. 2019. QuaRTz: An open-domain dataset of qualitative relationship questions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5941-5946, Hong Kong, China. Association for Computational Linguistics.
+Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149-4158, Minneapolis, Minnesota Association for Computational Linguistics.
+Alon Talmor, Ori Yoran, Ronan Le Bras, Chandra Bhagavatula, Yoav Goldberg, Yejin Choi, and Jonathan Berant. 2021. Commonsenseqa 2.0: Exposing the limits of ai through gamification. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1).
+Niket Tandon, Bhavana Dalvi, Keisuke Sakaguchi, Peter Clark, and Antoine Bosselut. 2019. WIQA: A dataset for "what if..." reasoning over procedural text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6076-6085, Hong Kong, China. Association for Computational Linguistics.
+Yi Tay, Mostafa Dehghani, Vinh Q Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, and Donald Metzler. 2022 Unifying language learning paradigms. arXiv preprint arXiv:2205.05131.
+Erik F. Tjong Kim Sang and Fien De Meulder 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.
+Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, and Daniel Cer. 2022. Spot: Better frozen model adaptation through soft prompt transfer. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5039-5059.
+
+Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. Superglue: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 3261-3275.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
+Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.
+Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.
+Johannes Welbl, Nelson F. Liu, and Matt Gardner. 2017. Crowdsourcing multiple choice science questions. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 94-106, Copenhagen, Denmark. Association for Computational Linguistics.
+Taesun Whang, Dongyub Lee, Chanhee Lee, Kisu Yang, Dongsuk Oh, and Heuseok Lim. 2020. An effective domain adaptive post-training method for bert in response selection. *INTERSPEECH*.
+Taesun Whang, Dongyub Lee, Dongsuk Oh, Chanhee Lee, Kijong Han, Dong-hun Lee, and Saebyeok Lee. 2021. Do response selection models really know what's next? utterance manipulation strategies for multi-turn response selection. In The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21).
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
+Chien-Sheng Wu, Steven C.H. Hoi, Richard Socher, and Caiming Xiong. 2020a. TOD-BERT: Pre-trained natural language understanding for task-oriented dialogue. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 917-929, Online. Association for Computational Linguistics.
+Sen Wu, Hongyang R. Zhang, and Christopher Ré. 2020b. Understanding and improving information
+
+transfer in multi-task learning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I Wang, et al. 2022. Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text language models. arXiv preprint arXiv:2201.05966.
+Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 5754-5764.
+Wenhao Yu, Chenguang Zhu, Yuwei Fang, Donghan Yu, Shuohang Wang, Yichong Xu, Michael Zeng, and Meng Jiang. 2022. Dict-bert: Enhancing language model pre-training with dictionary. In *Findings of the Association for Computational Linguistics: ACL* 2022, pages 1907-1918.
+Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791-4800, Florence, Italy. Association for Computational Linguistics.
+Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 649-657.
+Zhihan Zhang, Wenhao Yu, Mengxia Yu, Zhichun Guo, and Meng Jiang. 2022. A survey of multi-task learning in natural language processing: Regarding task relatedness and training methods. arXiv preprint arXiv:2204.03508.
+Zhuosheng Zhang and Hai Zhao. 2021. Structural pretraining for dialogue comprehension. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume I: Long Papers), pages 5134-5145, Online. Association for Computational Linguistics.
+Zimu Zheng, Yuqi Wang, Quanyu Dai, Huadi Zheng, and Dan Wang. 2019. Metadata-driven task relation discovery for multi-task learning. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 4426-4432. ijcai.org.
+
+
Context
Question
Option(s)
[sciq] A wetland is an area that is wet for all or part of the year. Wetlands are home to certain types of plants.
What is an area of land called that is wet for all or part of the year?
["tundra", "plains", "grassland", "wetland"]
[commonsense_qa] revolving door
A revolving door is convenient for two direction travel, but it also serves as a security measure at a what?
[dream] M: I am considering dropping my dancing class. I am not making any progress.", "W: If I were you, I stick with it. It's definitely worth time and effort.
What does the man suggest the woman do?
[ "Consult her dancing teacher.", "Take a more interesting class.", "Continue her dancing class.", "N/A"]
[scotus] The Interstate Commerce Commission, acting under § 19a of the Interstate Commerce Act, ordered the appellant to furnish certain inventories, schedules, maps and charts of its pipe line property ...
[unfair_tos] you must provide accurate and complete data during the registration and update your registration data if it changes.
-
[ "there is no unfair contractual term", "Limitation of liability", "Unilateral termination", "Arbitration"]
+
+Table 9: Examples of transformed datasets.
+
+# A Appendix
+
+# A.1 Examples of transformed datasets
+
+Table 9 shows examples of transformed datasets. The first column presents the standard multiple-choice dataset, followed by four types of outlier datasets (Section 3.1) that are transformed into our unified format.
+
+# A.2 Fine-tuning Details
+
+According to Section 3.1, our training datasets are converted into a multiple-choice-like format for multi-task pre-training. During fine-tuning, because our evaluated GLUE and Rainbow tasks for public comparisons are either single-label classification or multiple-choice tasks, the conversion would not affect the performance according to our preliminary experiments as the predictions can be easily mapped to the original formats by choosing the best-ranked options. For the other tasks, such as the multi-label classification tasks in LexGLUE, where the conversion will result in the clip of ground-true labels, we use the original datasets for fine-tuning and initialize the corresponding baseline models with our pre-trained weights after MTL. The criteria for choosing the baseline models for different types of tasks basically follows the standard practice in literature (He et al., 2021; Chalkidis et al., 2021).
\ No newline at end of file
diff --git a/taskcompassscalingmultitaskpretrainingwithtaskprefix/images.zip b/taskcompassscalingmultitaskpretrainingwithtaskprefix/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..bcec78dd26b5018bb4442cc2644a7c29b5e0250b
--- /dev/null
+++ b/taskcompassscalingmultitaskpretrainingwithtaskprefix/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:44c7498e8a9343a0c96289696de6e980b251c0a12e03ecd215c460aa507b1588
+size 823257
diff --git a/taskcompassscalingmultitaskpretrainingwithtaskprefix/layout.json b/taskcompassscalingmultitaskpretrainingwithtaskprefix/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..447187941f965c97799465b86fb5dcbd2fbac66e
--- /dev/null
+++ b/taskcompassscalingmultitaskpretrainingwithtaskprefix/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7745c1161b00741e58d0a8219a60e454bf78b90fc08e3d8977a311e5c453c54
+size 460273
diff --git a/texteditingasimitationgame/24b4bbd1-c35f-4895-99bd-c96bd69e407b_content_list.json b/texteditingasimitationgame/24b4bbd1-c35f-4895-99bd-c96bd69e407b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..1e6f573b692e23c5bcd995d13c6d9a8b34a57607
--- /dev/null
+++ b/texteditingasimitationgame/24b4bbd1-c35f-4895-99bd-c96bd69e407b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b2bca7d643851f2ddaa827cd7f79eac3b588fb0326b1678fbb2936950042d1f
+size 87724
diff --git a/texteditingasimitationgame/24b4bbd1-c35f-4895-99bd-c96bd69e407b_model.json b/texteditingasimitationgame/24b4bbd1-c35f-4895-99bd-c96bd69e407b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..1879623988bc51a1449338002a51975fe6c294cb
--- /dev/null
+++ b/texteditingasimitationgame/24b4bbd1-c35f-4895-99bd-c96bd69e407b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5b66d02d019d218a7a0418018a76b82158db37510a883d2b477adeb044245950
+size 107418
diff --git a/texteditingasimitationgame/24b4bbd1-c35f-4895-99bd-c96bd69e407b_origin.pdf b/texteditingasimitationgame/24b4bbd1-c35f-4895-99bd-c96bd69e407b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..10399eed1b7ed06f065acfb21b4ca3a769a2f9ba
--- /dev/null
+++ b/texteditingasimitationgame/24b4bbd1-c35f-4895-99bd-c96bd69e407b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:518939324f554f2150ca2762450fe3b6839f647f857a5f0e75c424cba0353d56
+size 597326
diff --git a/texteditingasimitationgame/full.md b/texteditingasimitationgame/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..9b341be4012583e20db5dc10cace7baad3a48056
--- /dev/null
+++ b/texteditingasimitationgame/full.md
@@ -0,0 +1,347 @@
+# Text Editing as Imitation Game
+
+*Ning Shi\* Bin Tang Bo Yuan Longtao Huang Yewen Pu Jie Fu Zhouhan Lin
+
+$\spadesuit$ Alberta Machine Intelligence Institute, Dept. of Computing Science, University of Alberta
+
+Alibaba Group ★Shanghai Jiao Tong University
+
+$\clubsuit$ Autodesk Research $\diamond$ Beijing Academy of Artificial Intelligence Ning.shi@ualberta.ca,{tangbin.tang,qiufu.yb,kaiyang.hlt}@alibaba-inc.com yewen.pu@autodesk.com,fujie@baai.ac.cn,lin.zhouhan@gmail.com
+
+# Abstract
+
+Text editing, such as grammatical error correction, arises naturally from imperfect textual data. Recent works frame text editing as a multi-round sequence tagging task, where operations – such as insertion and substitution – are represented as a sequence of tags. While achieving good results, this encoding is limited in flexibility as all actions are bound to token-level tags. In this work, we reformulate text editing as an imitation game using behavioral cloning. Specifically, we convert conventional sequence-to-sequence data into state-to-action demonstrations, where the action space can be as flexible as needed. Instead of generating the actions one at a time, we introduce a dual decoders structure to parallel the decoding while retaining the dependencies between action tokens, coupled with trajectory augmentation to alleviate the distribution shift that imitation learning often suffers. In experiments on a suite of Arithmetic Equation benchmarks, our model consistently outperforms the autoregressive baselines in terms of performance, efficiency, and robustness. We hope our findings will shed light on future studies in reinforcement learning applying sequence-level action generation to natural language processing.
+
+# 1 Introduction
+
+Text editing (Malmi et al., 2022) is an important domain of processing tasks to edit the text in a localized fashion, applying to text simplification (Agrawal et al., 2021), grammatical error correction (Li et al., 2022), punctuation restoration (Shi et al., 2021), to name a few. Neural sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014) establishes itself as the primary approach to text editing tasks, by framing the problem as machine translation (Wu et al., 2016). Applying a seq2seq modeling has the advantage of simplic
+
+
+Figure 1: Three approaches - sequence tagging (left), end-to-end (middle), sequence generation (right) - to turn an invalid arithmetic expression "1 1 2" into a valid one $1 + 1 = 2$ . In end-to-end, the entire string "1 1 2" is encoded into a latent state, which the string $1 + 1 = 2$ is generated directly. In sequence tagging, a localized action (such as "INSERT_+"), meaning insert a "+" symbol after this token) is applied/tagged to each token; these token-level actions are then executed, modifying the input string. In contrast, sequence generation output an entire action sequence, generating the location (rather than tagging it), and the action sequence is executed, modifying the input string. Both token-level actions and sequence-level actions can be applied multiple times to polish the text further (up to a fixed point).
+
+ity, where the system can simply be built by giving input-output pairs consisting of pathological sequences to be edited, and the desired sequence output, without much manual processing efforts (Junczys-Dowmunt et al., 2018).
+
+However, even with a copy mechanism (See et al., 2017; Zhao et al., 2019; Panthaplackel et al., 2021), an end-to-end model can struggle in carrying out localized, specific fixes while keeping the rest of the sequence intact. Thus, sequence tagging is often found more appropriate when outputs highly overlap with inputs (Dong et al., 2019; Mallinson et al., 2020; Stahlberg and Kumar, 2020). In such cases, a neural model predicts a tag sequence - representing localized fixes such as insertion and substitution - and a programmatic interpreter implements these edit operations through. Here, each tag represents a token-level action and determines the operation on its attached token (Kohita et al., 2020). A model can avoid modifying the overlap by assigning no-op (e.g., KEEP), while the action space is limited to token-level modifications,
+
+such as deletion or insertion after a token (Awasthi et al., 2019; Malmi et al., 2019).
+
+In contrast, alternative approaches (Gupta et al., 2019) train the agent to explicitly generate freeform edit actions and iteratively reconstructs the text during the interaction with an environment capable of altering the text based on these actions. This sequence-level action generation (Branavan et al., 2009; Guu et al., 2017; Elgohary et al., 2021) allows higher flexibility of action design not limited to token-level actions, and is more advantageous given the narrowed problem space and dynamic context in the edit (Shi et al., 2020).
+
+The mechanisms of sequence tagging and sequence generation against end-to-end are exemplified in Figure 1. Both methods allow multiple rounds of sequence refinement (Ge et al., 2018; Liu et al., 2021) and imitation learning (IL) (Pomerleau, 1991). Essentially an agent learns from the demonstrations of an expert policy and later imitates the memorized behavior to act independently (Schaal, 1996). On the one hand, IL in sequence tagging functions as a standard supervised learning in its nature and thus has attracted significant interest and been widely used recently (Agrawal et al., 2021; Yao et al., 2021; Agrawal and Carpuat, 2022), achieving good results in the token-level action generation setting (Gu et al., 2019; Reid and Zhong, 2021). On the other hand, IL in sequence-level action generation is less well defined even though its principle has been followed in text editing (Shi et al., 2020) and many others (Chen et al., 2021). As a major obstacle, the training is on state-action demonstrations, where the encoding of the states and actions can be very different (Gu et al., 2018). For instance, the mismatch of the lengths dimension between the state and action makes it tricky to implement for an auto-regressive modeling that benefits from a single, uniform representation.
+
+To tackle the issues above, we reformulate text editing as an imitation game controlled by a Markov Decision Process (MDP). To begin with, we define the input sequence as the initial state, the required operations as action sequences, and the output target sequence as the goal state. A learning agent needs to imitate an expert policy, respond to seen states with actions, and interact with the environment until the success of the eventual editing. To convert existing input-output data into state-action pairs, we utilize trajectory generation (TG), a skill to leverage dynamic programming (DP) for
+
+an efficient search of the minimum operations given a predefined edit metric. We backtrace explored editing paths and automatically express operations as action sequences. Regarding the length misalignment, we first take advantage of the flexibility at the sequence-level to fix actions to be of the same length. Secondly, we employ a linear layer after the encoder to transform the length dimension of the context matrix into the action length. By that, we introduce a dual decoders (D2) structure that not only parallels the decoding but also retains capturing interdependencies among action tokens. Taking a further step, we propose trajectory augmentation (TA) as a solution to the distribution shift problem most IL suffers (Ross et al., 2011). Through a suite of three Arithmetic Equation (AE) benchmarks (Shi et al., 2020), namely Arithmetic Operators Restoration (AOR), Arithmetic Equation Simplification (AES), and Arithmetic Equation Correction (AEC), we confirm the superiority of our learning paradigm. In particular, D2 consistently exceeds standard autoregressive models from performance, efficiency, and robustness perspectives.
+
+In theory, our methods also apply to other imitation learning scenarios where a reward function exists to further promote the agent. In this work, we primarily focus on a proof-of-concept of our learning paradigm landing at supervised behavior cloning (BC) in the context of text editing. To this end, our contributions1 are as follows:
+
+1. We frame text editing into an imitation game formally defined as an MDP, allowing the highest degrees of flexibility to design actions at the sequence-level.
+2. We involve TG to translate input-output data to state-action demonstrations for IL.
+3. We introduce D2, a novel non-autoregressive decoder, boosting the learning in terms of accuracy, efficiency, and robustness.
+4. We propose a corresponding TA technique to mitigate distribution shift IL often suffers.
+
+# 2 Imitation Game
+
+We aim to cast text editing into an imitation game by defining the task as a recurrent sequence generation, as presented in Figure 2 (a). In this section, we describe the major components of our proposal, including (1) the problem definition, (2) the data translation, (3) the model structure, and (4) a solution to the distribution shift.
+
+
+Figure 2: (a) shows the imitation game of AOR. Considering input text $\mathbf{x}$ as initial state $\mathbf{s}_1$ , the agent interacts with the environment to edit "1 1 2" into "1 + 1 = 2" via action $\mathbf{a}_1$ to insert "+" at the first position and $\mathbf{a}_2$ to insert "=" at the thrid position. After $\mathbf{a}_3$ , the agent stops editing and calls the environment to return $\mathbf{s}_3$ as the output text $\mathbf{y}$ . Using the same example, (b) explains how to achieve shifted state $\mathbf{s}_2'$ by skipping action $\mathbf{a}_1^*$ and doing $\mathbf{a}_2'$ . Here we update $\mathbf{a}_2^*$ to $\mathbf{a}_2'$ accordingly due to the previous skipping. The new state $\mathbf{s}_2'$ was not in the expert demonstrations.
+
+
+(a)
+
+
+
+
+
+
+(b)
+
+# 2.1 Behavior cloning
+
+We tear a text editing task $\mathcal{X} \mapsto \mathcal{Y}$ into recurrent subtasks of sequence generation $\mathcal{S} \mapsto \mathcal{A}$ defined by an MDP tuple $\mathcal{M} = (\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{E}, \mathcal{R})$ .
+
+State $S$ is a set of text sequences $\mathbf{s} = s_{j\leq m}$ , where $s\in \mathcal{V}_S$ . We think of a source sequence $\mathbf{x}\in \mathcal{X}$ as the initial state $\mathbf{s}_1$ , its target sequence $\mathbf{y}\in \mathcal{Y}$ as the goal state $\mathbf{s}_T$ , and every edited sequence in between as an intermediate state $\mathbf{s}_t$ . The path $\mathbf{x}\mapsto \mathbf{y}$ can be represented as a set of sequential states $\mathbf{s}_{t < T}$ .
+
+Action $\mathcal{A}$ is a set of action sequences $\mathbf{a} = a_{i\leq n}$ , where $a\in \mathcal{V}_A$ . In Figure 3, "INSERT", "POS_3", and "=" are three action tokens belonging to the vocabulary space of action $\nu_{\mathcal{A}}$ . In contrast to token-level actions in sequence tagging, sentence-level ones set free the editing by varying edit metrics $\mathbf{E}$ (e.g., Levenshtein distance) as long as $\mathcal{X}\xrightarrow{\mathcal{A}_{\mathbf{E}}}\mathcal{Y}$ . It serves as an expert policy $\pi^{*}$ to demonstrate the path to the goal state. A better expert usually means better demonstrations and imitation results. Hence, depending on the task, a suitable $\mathbf{E}$ is essential.
+
+Transition matrix $\mathcal{P}$ models the probability $p$ that an action $\mathbf{a}_t$ leads a state $\mathbf{s}_t$ to the state $\mathbf{s}_{t + 1}$ . We know $\forall \mathbf{s},\mathbf{a}.p(\mathbf{s}_{t + 1}|\mathbf{s}_t,\mathbf{a}_t) = 1$ due to the nature of text editing. So we can omit $\mathcal{P}$ .
+
+Environment $\mathcal{E}$ responds to an action and updates the game state accordingly by $\mathbf{s}_{t + 1} = \mathcal{E}(\mathbf{s}_t,\mathbf{a}_t)$ with process control. For example, the environment can refuse to execute actions that fail to pass the verification and terminate the game if a maximum number of iterations has been consumed.
+
+Reward function $\mathcal{R}$ calculates a reward for each action. It is a major factor contributing to the success of reinforcement learning. In the scope of this paper, we focus on BC, the simplest form of IL. So we can also omit $\mathcal{R}$ and leave it for future work.
+
+# Algorithm 1 Trajectory Generation (TG)
+
+Input: Initial state $\mathbf{x}$ , goal state $\mathbf{y}$ , environment $\mathcal{E}$ , and edit metric $\mathbf{E}$ .
+Output: Trajectories $\tau$ .
+1: $\tau \gets \emptyset$
+2: $\mathbf{s}\gets \mathbf{x}$
+3: $ops\gets \mathrm{DP}(\mathbf{x},\mathbf{y},E)$
+4: for $op\in ops$ do
+5: $\mathbf{a} \gets \operatorname{Action}(op) \quad \triangleright$ Translate operation to action
+6: $\tau \gets \tau \cup [(s, a)]$
+7: $\mathbf{s} \gets \mathcal{E}(\mathbf{s}, \mathbf{a})$
+8: end for
+9: $\tau \gets \tau \cup [(\mathbf{s},\mathbf{a}_T)]\triangleright$ Append goal state and output action
+10: return $\tau$
+
+The formulation turns out to be a simplified $\mathcal{M}_{BC} = (\mathcal{S},\mathcal{A},\mathcal{E})$ . Interacting with the environment $\mathcal{E}$ , we hope a trained agent is able to follow its learned policy $\pi : S \mapsto \mathcal{A}$ , and iteratively edit the initial state $\mathbf{s}_0 = \mathbf{x}$ into the goal state $\mathbf{s}_T = \mathbf{y}$ .
+
+# 2.2 Trajectory generation
+
+A data set to learn $\mathcal{X} \mapsto \mathcal{Y}$ consists of input-output pairs. It is necessary to convert it into state-action ones so that an agent can mimic the expert policy $\pi^{*}: S \mapsto \mathcal{A}$ via supervised learning. A detailed TG is described in Algorithm 1.
+
+Treating a pre-defined edit metric $\mathbf{E}$ as the expert policy $\pi^{*}$ , we can leverage DP to efficiently find the minimum operations required to convert $\mathbf{x}$ into $\mathbf{y}$ in a left-to-right manner and backtrace this path to get specific operations.
+
+Operations are later expressed as a set of sequential actions $\mathbf{a}_{t\leq T}^{*}$ . Here we utilize a special symbol DONE to mark the last action $\mathbf{a}_T^*$ where $\forall a\in \mathbf{a}_T^*$ . $a =$ DONE. Once an agent performs $\mathbf{a}_T^*$ , the current state is returned by the environment as the final output.
+
+Given $\mathbf{s}_1^* = \mathbf{x}$ , we attain the next state $\mathbf{s}_2^* =$
+
+
+Figure 3: The conventional autoregressive decoder (a) compared with the proposed non-autoregressive D2 (b) in which the linear layer aligns the sequence length dimension for the subsequent parallel decoding.
+
+
+
+$\mathcal{E}(\mathbf{s}_1^*, \mathbf{a}_1^*)$ and continue the rest until achieving $\mathbf{s}_T^* = \mathbf{y}$ , resulting in a set of sequential states $\mathbf{s}_{t < T}^*$ .
+
+After one-to-one correspondence between states and actions, we collect a set of sequential expert's demonstrations $\tau^{*} = [(\mathbf{s}_{t\leq T}^{*},\mathbf{a}_{t\leq T}^{*})]$ . Repeating the same process, we eventually convert $\mathcal{X}\mapsto \mathcal{Y}$ into trajectories $\mathcal{T}^*:S\mapsto \mathcal{A}$ .
+
+# 2.3 Model architecture
+
+We form $S \mapsto \mathcal{A}$ as sequence generation. More precisely, a neural model (i.e., the agent) takes states as input and outputs actions. Training an imitation policy with BC corresponds to fitting a parametric model $\pi_{\theta}$ that minimizes the negative log-likelihood loss $l(\mathbf{a}^{*}, \pi_{\theta}(\mathbf{s}))$ . Most seq2seq models have an encoder-decoder structure.
+
+Encoder takes an embedded state $\operatorname{E}(\mathbf{s}) \in \mathbb{R}^{m \times d}$ and generates an encoded hidden state $\mathbf{h}_E \in \mathbb{R}^{m \times d}$ with $d$ being the hidden dimension.
+
+Autoregressive decoder in Figure 3 (a) conditions the current step on the encoded context and previously predictions to overcome the mismatch of sequence length. It calculates step by step
+
+$$
+h _ {D} ^ {i} = \mathrm {A R} (\mathrm {E} (a _ {< i}), \mathbf {h} _ {E}) \in \mathbb {R} ^ {d}, i = 0, \dots , n + 1,
+$$
+
+$\hat{a}_i = \mathrm{LogSoftmax}(h_D^i)\in \mathbb{R}^{|\mathcal{V}_A|},i = 0,\dots ,n + 1,$ and in the end, returns $\hat{\mathbf{a}}\in \mathbb{R}^{n\times |\mathcal{V}_A|}$ . The training is conducted as back-propogating $l(\mathbf{a}^{*},\hat{\mathbf{a}})$ . Note that $a_0^* = \mathtt{BOS}$ and $a_{n + 1}^* = \mathtt{EOS}$ encourage the decoder to learn to begin and end the autoregression.
+
+Non-autoregressive decoder instead provides hidden states in one time. It is feasible to apply techniques of non-autoregressive machine translation. However, one of the primary issues solved by that is the uncertainty of the target sequence length. When it comes to state-action prediction, thanks to the flexibility at the sequence-level, we are allowed to design actions on purpose to eliminate such uncertainty. Specifically, we enforce action sequences
+
+to be of fixed length. On this basis, we propose D2 as shown in Figure 3 (b). To address the misalignment of sequence length between state and action, we insert a fully connected feed-forward network between the encoder and $\mathrm{decoder}_0$ .
+
+$$
+\operatorname {F F N} \left(\mathbf {h} _ {E}\right) = \left(\mathbf {h} _ {E} ^ {\mathbf {T}} W + b\right) ^ {\mathbf {T}} \in \mathbb {R} ^ {n \times d}
+$$
+
+where $W \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^{d \times n}$ transform the length dimension from $m$ to $n$ so as to project $\mathbf{h}_E$ into $\mathbf{h}_F \in \mathbb{R}^{n \times d}$ . The alignment of the sequence length allows us to trivially pass $\mathbf{h}_F$ to $\mathrm{decoder}_0$ .
+
+$$
+\mathbf {h} _ {D _ {0}} = \mathrm {N A R} _ {0} (\mathbf {h} _ {F}, \mathbf {h} _ {E}) \in \mathbb {R} ^ {n \times d}
+$$
+
+$$
+\hat {\mathbf {a}} ^ {0} = \operatorname {L o g S o f t m a x} \left(\mathbf {h} _ {\mathrm {D} _ {0}}\right) \in \mathbb {R} ^ {n \times | \mathcal {V} _ {\mathcal {A}} |}
+$$
+
+For a clear comparison with the autoregressive decoder, we make minimal changes to the structure and keep modeling the dependence between two contiguous steps through $\mathrm{decoder}_1$ . To elaborate, we shift $\mathbf{a}^0$ one position to the right as $\acute{\mathbf{a}}^0$ by appending $a_0^*$ at the beginning and remove $a_n^0$ to maintain the sequence length. After that, we continue to feed $\acute{\mathbf{a}}^0$ to $\mathrm{decoder}_1$ .
+
+$$
+\mathbf {h} _ {D _ {1}} = \mathrm {N A R} _ {1} (\mathrm {E} (\mathbf {\acute {a}} ^ {0}), \mathbf {h} _ {E}) \in \mathbb {R} ^ {n \times d}
+$$
+
+$$
+\hat {\mathbf {a}} ^ {1} = \operatorname {L o g S o f t m a x} \left(\mathbf {h} _ {\mathrm {D} _ {1}}\right) \in \mathbb {R} ^ {n \times | \mathcal {V} _ {\mathcal {A}} |}
+$$
+
+At last, we conduct backpropagation with respect to the loss summation $l(\mathbf{a}^*, \hat{\mathbf{a}}^0) \oplus l(\mathbf{a}^*, \hat{\mathbf{a}}^1)$ . Conventional seq2seq architectures are often equipped with intermediate modules such as a full attention distribution over the encoded context (Bahdanau et al., 2015), which is omitted in the above formulation for simplicity. In the implementation, we always assume to train $\mathrm{decoder}_0$ and $\mathrm{decoder}_1$ separately to increase the model capacity, yet weight sharing is possible.
+
+# 2.4 Trajectory augmentation
+
+IL suffers from distribution shift and error accumulation (Ross et al., 2011). An agent's mistakes can easily put it into a state that the expert demonstrations do not involve and the agent has never seen during training. This also means errors can add up, so the agent drifts farther and farther away from the demonstrations. To tackle this issue, we propose TA that expands the expert demonstrations and actively exposes shifted states to the agent. We accomplish this by diverting intermediate states and consider them as initial states for TG. An example is offered in Figure 2 (b).
+
+Given expert states $\mathbf{s}_{t\leq T}^{*}$ and corresponding actions $\mathbf{a}_{t\leq T}^{*}$ , we utilize the divide-and-conquer technique to (1) break down the chain of state generation $\mathbf{s}_t^*\xrightarrow{\mathbf{a}_t^*}\mathbf{s}_{t + 1}^*$ into two by either executing $\mathbf{a}_t^*$ to stay on the current path or skipping $\mathbf{a}_t^*$ to branch the current path; (2) recursively calling this process until reaching the goal state $\mathbf{s}_T^*$ ; (3) merge intermediate states from branches and return from bottom to top in the end. As illustrated in Algorithm 2, we collect a set of shifted states
+
+$$
+\mathbf {S} ^ {\prime} = \mathrm {T A} (\emptyset , \mathbf {s} _ {1} ^ {*}, \mathbf {s} _ {t \leq T} ^ {*}, \mathbf {a} _ {t \leq T} ^ {*}, \mathcal {E}),
+$$
+
+regard them as initial states paired with the same goal state to produce extra trajectories
+
+$$
+\tau^{\prime} = \bigcup_{\mathbf{s}^{*}\in \mathbf{S}^{*}}\operatorname {TG}(\mathbf{s}^{*},\mathbf{s}_{T}^{*},\mathcal{E},\mathbf{E}),
+$$
+
+and finally yield the augmented expert demonstrations $\mathcal{T}^{*}\cup \mathcal{T}^{\prime}$ after looping through $\mathcal{X}$ .
+
+TA is advantageous because it (i) only exploits existing expert demonstrations to preserve the i.i.d assumption; (ii) is universally applicable to our proposed paradigm without a dependency on the downstream task; (iii) does not need domain knowledge, labeling work, and further evaluation.
+
+# 3 Experiments
+
+We adapt recurrent inference to our paradigm and evaluate them across AE benchmarks.
+
+# 3.1 Setup
+
+Data. Arithmetic Operators Restoration (AOR) is a short-to-long editing to complete an array into a true equation. It is also a one-to-many task as an array can be completed as multiple true equations differently. Arithmetic Equation Simplification (AES) aims to calculate the parenthesized parts and keep the equation hold, resulting in a long-to-short and
+
+Algorithm 2 Trajectory Augmentation (TA)
+Input: States S, state $\mathbf{s}_t$ , expert states $\mathbf{S}^*$ , actions A, and environment $\mathcal{E}$
+Output: Augmented states S.
+1: if $|\mathbf{A}| > 1$ then
+2: $\mathbf{a}_t\gets \mathbf{A}.pop(0)$
+3: $\mathbf{s}_{t + 1}\leftarrow \mathcal{E}(\mathbf{s}_t,\mathbf{a}_t)$
+4: $\mathbf{S}\gets \mathbf{S}\cup \mathrm{TA}(\mathbf{S},\mathbf{s}_{t + 1},\mathbf{S}^*,\mathbf{A},\mathcal{E})$ $\triangleright$ Execute action
+5: $\mathbf{A}\gets \mathrm{Update}(\mathbf{A},\mathbf{s}_t,\mathbf{s}_{t + 1})$
+6: $\mathbf{S}\gets \mathbf{S}\cup \mathrm{TA}(\mathbf{S},\mathbf{s}_t,\mathbf{S}^*,\mathbf{A},\mathcal{E})$ $\triangleright$ Skip action
+7: else if $\mathbf{s}_t\notin \mathbf{S}^*$ then
+8: $\mathbf{S}\gets \mathbf{S}\cup [\mathbf{s}_t]$ $\triangleright$ Merge shifted state
+9: end if
+10: return S
+
+many-to-one editing. Arithmetic Equation Correction (AEC) targets to correct potential mistakes in an equation. Diverse errors perturb the equation, making AEC a mixed many-to-many editing. To align with the previous work, we follow the same data settings $N$ , $L$ , and $D$ for data generation, as well as the same action design for trajectory generation. The edit metric $\mathbf{E}$ for AOR and AEC is Levenshtein, while $\mathbf{E}$ for AES is a self-designed one (SELF) that instructs to replace tokens between two parentheses with the target token. Examples are presented in Table 2. We refer readers to Shi et al. (2020) for an exhaustive explanation. As shown in Table 1, the data splits are $7\mathrm{K} / 1.5\mathrm{K} / 1.5\mathrm{K}$ for training, validation, and testing respectively.
+
+Evaluation. Sequence accuracy and equation accuracy are two primary metrics with token accuracy for a more fine-grained reference. In contrast to sequence accuracy for measuring whether an equation exactly matches the given label, equation accuracy emphasizes whether an equation holds, which is the actual goal of AE tasks. It is noted that there is no hard constraint to guarantee that all the predicted actions are valid. However, when the agent makes an inference mistake, the environment can refuse to execute invalid actions and keep the current state. This is also one of the beauties of reformulating text editing as a controllable MDP.
+
+Baselines. Recurrent inference (Recurrence) exhibits advantages over conventional end-to-end (End2end) and sequence tagging (Tagging) (Shi et al., 2020). However, for AES and AEC, it² allows feeding training samples to a data generator and exposing more variants to models. These variants, as source samples paired with corresponding target samples, are used as the augmented dataset. This is impractical due to the strong dependency on domain knowledge. Given an input “1 + (2 + 2) =
+
+
AOR (N=10, L=5, D=10K)
AES (N=100, L=5, D=10K)
AEC (N=10, L=5, D=10K)
Train/Valid/Test
Train TA
Traj. Len.
Train/Valid/Test
Train TA
Traj. Len.
Train/Valid/Test
Train TA
Traj. Len.
7,000/1,500/1,500
145,176
6
7,000/1,500/1,500
65,948
6
7,000/1,500/1,500
19,764
4
+
+Table 1: Data statistics of AE benchmarks.
+
+
Term
AOR (N=10, L=5, D=10K)
AES (N=100, L=5, D=10K)
AEC (N=10, L=5, D=10K)
Source x
36293
65+(25-20)-(64+32)+(83-24)=-25+58
-2*+410+8/8=8
Target y
-3-6/2+9=3
65+5-96+59=33
-2+10*8/8=8
State st*
-3-6/293
65+5-(64+32)+(83-24)=-25+58
-2+410+8/8=8
Action at*
[POS_6,+]
[POS_4, POS_8, 96]
[DELETE, POS_3, POS_3]
Next State st+1
-3-6/2+93
65+5-96+(83-24)=-25+58
-2+10+8/8=8
Shifted State st'
-3-6/29=3
65+5-(64+32)+59=(-25+58)
-2+410*8/8=8
+
+Table 2: Examples from AE with specific $N$ for integer size, $L$ for the number of integers,and $D$ for data size.
+
+5" and output "1 + 4 = 5" in AES, a variant "1 + (1 + 3) = 5" can be generated based on the knowledge $\mathbf{1} + \mathbf{3} = \mathbf{4}$ . Nevertheless, if this knowledge is not provided in the other training samples, the model should only know $\mathbf{2} + \mathbf{2} = \mathbf{4}$ .
+
+Models. As discussed, since the previously reported experiments are not practical, we re-run Recurrence source code for a more reasonable baseline (Recurrence*) that only has access to the fixed training set. Meanwhile, in our development environment, we reproduce Recurrence* within the proposed paradigm according to the compatibility in between. The encoder-decoder architecture inherits the same recurrent network as the backbone with long short-term memory units (Hochreiter and Schmidhuber, 1997) and an attention mechanism (Luong et al., 2015). The dimension of the bidirectional encoder is 256 in each direction and 512 for both the embedding layer and decoder. We apply a dropout of 0.5 to the output of each layer (Srivastava et al., 2014). This provides us a standard autoregressive baseline AR, as well as a more powerful AR* after increasing the number of encoder layers from 1 to 4. On the one hand, to construct a non-autoregressive baseline NAR, we replace the decoder of AR* with a linear layer that directly maps the context to a probability distribution over the action vocabulary. In addition, we add two more encoder layers to maintain a similar amount of trainable parameters. On the other hand, replacing the decoder of AR* with D2 leads to our model NAR*. We strictly unify the encoder for a fair comparison regarding the decoder. Model configurations are shared across AE tasks for a comprehensive assessment avoiding particular tuning against any of them.
+
+Training. We train on a single NVIDIA Titan RTX with a batch size of 256. We use the Adam opti
+
+mizer (Kingma and Ba, 2015) with a learning rate of $10^{-3}$ and an $\ell_2$ gradient clipping of 5.0 (Pascanu et al., 2013). A cosine annealing scheduler helps manage the training process and restarts the learning every 32 epochs to get it out of a potential local optimum. We adopt early stopping to wait for a lower validation loss until there are no updates for 512 epochs (Prechelt, 1998). Teacher forcing with a rate of 0.5 spurs up the training process (Williams and Zipser, 1989). In AES and AEC, the adaptive loss weighting guides the model to adaptively focus on particular action tokens in accordance with the training results. Reported metrics attached with standard deviation are the results of five runs using random seeds from [0, 1, 2, 3, 4].
+
+# 3.2 Results
+
+Baselines. As summarized in Table 3, prohibiting the access of Recurrence to domain knowledge outcomes a fair baseline and significantly weakens Recurrence* in AES and AEC. We also would like to point out that, even in the same impractical setting, our NAR* can achieve around $99.33\%$ and $67.49\%$ for AES and AEC with respect to equation accuracy, which is still much higher than that $(87.73\%)$ and $58.27\%$ for AES and AEC) reported in the previous work. In AOR, a one-to-many editing, no augmented source sequence is retrieved from the target side. We confirm that the slight accuracy drop of Recurrence* in AOR results from bias through multiple tests. Although AR is our reproduction of Recurrence*, the overall advancement of AR over Recurrence* proves the goodness of our framework and implementation. Participation of added three encoder layers in AR* improves model capacity and thus contributes to higher accuracy. A simple linear header already enables NAR to parallel the decoding; nevertheless, it dramatically reduces performance, especially in AES.
+
+
Method
AOR (N=10,L=5,D=10K)
AES (N=100,L=5,D=10K)
AEC (N=10,L=5,D=10K)
Tok. Acc. %
Seq. Acc. %
Eq. Acc. %
Tok. Acc. %
Eq. Acc. %
Tok. Acc. %
Seq. Acc. %
Eq. Acc. %
End2end
-
-
29.33
84.60
25.20
88.08
57.27
57.73
Tagging
-
-
51.40
87.00
36.67
84.46
46.93
47.33
Recurrence
-
-
58.53
98.63
87.73
83.64
57.47
58.27
Recurrence*
60.30 ± 1.30
27.31 ± 1.33
56.73 ± 1.33
79.82 ± 0.37
22.28 ± 0.52
82.32 ± 0.56
41.72 ± 0.74
42.13 ± 0.75
AR
61.85 ± 0.51
28.83 ± 1.14
59.09 ± 0.95
88.12 ± 2.37
37.05 ± 6.57
82.61 ± 0.53
45.81 ± 0.36
46.31 ± 0.31
AR*
62.51 ± 0.62
30.85 ± 0.41
61.35 ± 0.33
99.27 ± 0.32
93.57 ± 2.91
82.29 ± 0.39
45.99 ± 0.49
46.35 ± 0.52
NAR
59.72 ± 0.70
24.16 ± 1.16
51.64 ± 1.97
83.87 ± 1.60
29.49 ± 2.51
80.28 ± 0.76
44.91 ± 1.71
45.40 ± 1.78
NAR*
62.81 ± 0.89
30.13 ± 1.31
61.45 ± 1.61
99.51 ± 0.13
95.67 ± 0.93
81.82 ± 0.68
45.97 ± 1.07
46.43 ± 1.10
AR +TA
62.35 ± 0.61
32.28 ± 0.67
63.56 ± 1.06
88.05 ± 1.20
38.39 ± 3.45
83.94 ± 0.42*
49.36 ± 1.23
49.83 ± 1.21
AR* +TA
62.58 ± 0.63
33.01 ± 1.31
65.73 ± 1.38
99.44 ± 0.27
95.24 ± 2.38
83.39 ± 0.74
48.95 ± 0.65
49.47 ± 0.73
NAR +TA
61.30 ± 0.86
32.04 ± 1.99
63.75 ± 2.08
90.38 ± 2.21
47.91 ± 8.18
81.36 ± 0.40
48.01 ± 1.07
48.47 ± 1.15
NAR* +TA
63.48 ± 0.38*
34.23 ± 0.92*
67.13 ± 0.99*
99.58 ± 0.15*
96.44 ± 1.29*
82.70 ± 0.42
49.64 ± 0.59*
50.15 ± 0.55*
+
+Table 3: Evaluation results on AOR, AES, and AEC with specific $N$ , $L$ , and $D$ . The token and sequence accuracy for AOR were not reported, thus we leave these positions blank here. With or without TA, our proposed NAR* achieves the best performance in terms of equation accuracy across the board.
+
+
+Figure 4: The learning curve of AR* (left column) and NAR* (right column) across AE tasks (rows). The red and blue lines represent the training on actions w.r.t sequence accuracy. The orange line stands for the validation on returned states w.r.t equation accuracy. The dashed line in green marks the earlier stop epoch of NAR* than that of AR* during training.
+
+Non-autoregressive. What stands out is the dominance of $\mathrm{NAR}^*$ , achieving $61.45\%$ , $95.67\%$ , and $46.43\%$ in terms of equation accuracy for AOR, AES, and AEC, separately. Particularly in AES, its better performance over $\mathrm{AR}^*$ by more than $2.1\%$ equation accuracy underlines the success of $\mathrm{NAR}^*$ in capturing the interdependencies among target tokens. Its superiority with respect to equation accuracy boosting by around $66.18\%$ over NAR highlights the contributions of D2 again.
+
+Trajectory augmentation. As expected, the incorporation of TA consistently promotes the accuracy
+
+
+Figure 5: Inference time of AR* and NAR* to predict action (left) and return state (right) across AE tasks.
+
+of all models in our learning regime throughout AE tasks. Taking NAR as an example, training with TA brings it a substantial equation accuracy gain, remarkably up to $18.42\%$ in AES. Even more, it pushes the gap between $\mathrm{NAR}^*$ and the other baselines. The most notable advance comes from AOR, where $\mathrm{NAR}^*$ outperforms $\mathrm{AR}^*$ by a substantial margin of $5.68\%$ equation accuracy. It appears that TA is more effective for non-autoregressive models than autoregressive ones.
+
+# 4 Analysis
+
+We conduct extensive sensitivity analyses to better illustrate and understand our methods.
+
+# 4.1 Efficiency
+
+From the learning curve (Figure 4) and inference time (Figure 5) of AR* and NAR* in AE, in addition to a higher accuracy, we find NAR* needs less number of training epochs to converge and trigger the early stopping. The periodic fluctuation of the learning curve is the consequence of using a scheduler. When it comes to inference, NAR* saves much time for every step of action determini
+
+
Design
Action Sequence
Method
Tok. Acc. %
Eq. Acc. %
#1
[Pos.L, Pos.R, Tok.]
AR*
99.27 ± 0.32
93.57 ± 2.91
NAR*
99.51 ± 0.13
95.67 ± 0.93
AR* +TA
99.44 ± 0.27
95.24 ± 2.38
NAR* +TA
99.58 ± 0.15*
96.44 ± 1.29*
#2
[Pos.L, Tok., Pos.R]
AR*
99.08 ± 0.93
92.35 ± 7.21
NAR*
99.50 ± 0.27
95.55 ± 2.28
AR* +TA
99.52 ± 0.29
95.68 ± 2.49
NAR* +TA
99.54 ± 0.20*
95.97 ± 1.64*
#3
[Tok., Pos.L, Pos.R]
AR*
98.06 ± 0.79
83.79 ± 6.25
NAR*
99.53 ± 0.14
95.99 ± 0.81
AR* +TA
98.43 ± 0.49
87.29 ± 3.70
NAR* +TA
99.61 ± 0.06*
96.55 ± 0.46*
+
+nation and ends up returning the edited state faster. As AR* and NAR* share exactly the same encoder structure, we conclude that D2 contributes to the advanced efficiency.
+
+# 4.2 Action design
+
+Due to the liberty of sequence generation, the same operation can be represented as different action sequences. In AES, the operation, instructing to substitute tokens between left and right parentheses with the required token, can fit the three action designs in Table 4, where $\text{Pos}_{\mathsf{L}}$ , $\text{Pos}_{\mathsf{R}}$ , and $\text{Tok}$ denote the positions of two parentheses and the target token. Design #1 is the default one. A simple swap of action tokens offers designs #2 and #3.
+
+AR* severely suffers such perturbation, causing an equation accuracy decline by $9.78\%$ in #3. Contrastly, NAR* holds around its results and even slightly improves to $95.99\%$ in #3. Despite the joining of TA, AR* still goes down from $95.24\%$ in #1 to $87.29\%$ in #3, while NAR* stays nearly consistent across three designs. It is reasonable that AR* is sensitive to the order of action tokens because the position information helps the inference of the target token. This also reflects that NAR* can catch the position information but with little dependence on token order. Such robustness allows greater freedom of action design.
+
+# 4.3 Trajectory optimization
+
+A better edit metric $\mathbf{E}$ often means a smaller action vocabulary space $|\mathcal{V}_A|$ , shorter trajectory length $T_{\mathrm{max}}$ , and, therefore, an easier IL. Taking AES as an instance, a SELF-action, replacing tokens enclosed in parentheses with the target one, actually is the compression of several Levenshtein-actions
+
+Table 4: Evaluation of AR* and NAR* in AES across three action designs that vary from each other by token order. They direct to the same operation with Pos.L/Pos.R/Tok. denoting left parenthesis/right parenthesis/target token.
+
+
Edit Metric E
Tmax
Method
Tok. Acc. %
Eq. Acc. %
SELF
6
AR*
99.27 ± 0.32
93.57 ± 2.91
NAR*
99.51 ± 0.13
95.67 ± 0.93
Levenshtein
31
AR*
69.53 ± 2.29
18.37 ± 0.70
NAR*
67.58 ± 0.87
17.93 ± 0.07
+
+Table 5: Evaluation of AR* and NAR* trained with edit metrics SELF and Levenshtein in AES. $T_{\text{max}}$ refers to the maximum length of expert trajectories.
+
+
+Figure 6: Evaluation of $\mathrm{NAR^{*}}$ trained with edit metrics LCS and Levenshtein in AEC. Results are grouped by two trajectory lengths caused by whether the policy involves REPLACE.
+
+including multiple deletions and one substitution. Although either can serve as an expert policy, SELF causes a much shorter $T_{\mathrm{max}}$ as indicated in Table 5. The change from SELF to Levenshtein brings on a longer $T_{\mathrm{max}}$ and consequently a significant performance gap of $75.2\%$ and $77.74\%$ for AR* and NAR* in terms of equation accuracy. Doing one edit in 31 steps rather than 6 undoubtedly raises the difficulty of the imitation game.
+
+As one more exploration, we introduce Longest Common Subsequence (LCS) as an alternative $\mathbf{E}$ to AEC. Token replacement is not allowed in LCS but in Levenshtein. A replacement action has to be decomposed as one deletion and one insertion in LCS. From this, LCS has a small $|\mathcal{V}_A|$ , while Levenshtein has a shorter $T_{\mathrm{max}}$ . We train $\mathrm{NAR}^*$ with these two and report in Figure 6. For a clear comparison, the test set is divided into two groups. In w/o REPLACE, both yield the same $T_{\mathrm{max}}$ , but, in w/ REPLACE, Levenshtein takes a shorter $T_{\mathrm{max}}$ .
+
+
Decoder
AOR (N = 10, L = 5, D = 10K)
AES (N = 100, L = 5, D = 10K)
AEC (N = 10, L = 5, D = 10K)
Tok. Acc. %
Seq. Acc. %
Eq. Acc. %
Tok. Acc. %
Eq. Acc. %
Tok. Acc. %
Seq. Acc. %
Eq. Acc. %
Linear
61.84 ± 0.94
28.55 ± 1.57
57.72 ± 1.55
99.41 ± 0.26
95.01 ± 2.01
81.35 ± 0.92
42.47 ± 1.85
42.81 ± 1.87
Decoder0
61.78 ± 0.83
28.20 ± 1.57
58.36 ± 1.58
99.24 ± 0.23
93.49 ± 2.03
80.84 ± 0.66
43.97 ± 1.82
44.32 ± 1.82
Shared D2
61.74 ± 0.71
28.68 ± 0.94
58.05 ± 1.01
99.28 ± 0.24
93.85 ± 2.14
81.38 ± 1.04
43.64 ± 2.03
44.09 ± 2.02
D2 (NAR*)
62.81 ± 0.89
30.13 ± 1.31
61.45 ± 1.61
99.51 ± 0.13
95.67 ± 0.93
81.82 ± 0.68
45.97 ± 1.07
46.43 ± 1.10
Linear +TA
61.41 ± 0.28
31.75 ± 0.93
63.15 ± 0.96
99.42 ± 0.17
95.08 ± 1.47
81.54 ± 0.66
46.79 ± 2.26
47.33 ± 2.30
Decoder0 +TA
62.50 ± 1.24
32.48 ± 1.87
64.47 ± 1.88
99.47 ± 0.13
95.33 ± 1.13
82.02 ± 0.40
46.80 ± 2.04
47.32 ± 1.91
Shared D2 +TA
61.64 ± 0.87
31.21 ± 0.34
62.77 ± 0.85
99.53 ± 0.12
95.91 ± 1.25
81.80 ± 0.47
47.23 ± 1.07
47.61 ± 1.14
D2 (NAR*) +TA
63.48 ± 0.38*
34.23 ± 0.92*
67.13 ± 0.99*
99.58 ± 0.15*
96.44 ± 1.29*
82.70 ± 0.42*
49.64 ± 0.59*
50.15 ± 0.55*
+
+Table 6: Evaluation of agents equipped with same encoders but different decoders on AE benchmarks.
+
+In the former, LCS exceeds Levenshtein with or without TA. In the latter, the opposite is true, where Levenshtein outperforms LCS under the same condition. This support our assumption at the beginning that an appropriate $\mathbf{E}$ , leading to a small $|\mathcal{V}_A|$ and a short $T_{\mathrm{max}}$ , is conducive to IL, suggesting trajectory optimization an interesting future work.
+
+# 4.4 Dual decoders
+
+As an ablation study, we freeze the encoder of NAR* and vary its decoder to reveal the contributions of each component in D2. As listed in Table 6, replacing the decoder with a linear layer leads to Linear and removing the second decoder from NAR* results in Decoder $_0$ . Moreover, sharing the parameters between two decoders of NAR* gives the Shared D2. All of them can parallel the decoding process. We then borrow the setup of Section 3 and test them on AE.
+
+Among four decoders, $\mathrm{NAR}^*$ dominates three imitation games. The performance decrease caused by shared parameters is more significant than expected. Besides the reason that saved parameters limit the model capacity, another potential one is the input mismatch of two decoders. The input of $\mathrm{decoder}_0$ is the projected context from the linear layer after the encoder, yet that of $\mathrm{decoder}_1$ is the embedded prediction from the embedding layer. When incorporating TA, we find the same trend persists. The gap between $\mathrm{NAR}^*$ and the others is even more apparent. Since they share the same encoder, such a gap clarifies the benefits of D2.
+
+# 5 Conclusion
+
+We reformulate text editing as an imitation game defined by an MDP to allow action design at the sequence-level. We propose D2, a non-autoregressive decoder for state-action learning, coupled with TG for data translation and TA for distribution shift alleviation. Achievements on AE benchmarks evidence the advantages of our
+
+methods in performance, efficiency, and robustness. Sequence-level actions are arguably more controllable, interpretable, and similar to human behavior. Turning tasks into games that agents feel more comfortable with sheds light on future studies in the direction of reinforcement learning in the application of text editing. The involvement of a reward function, the optimization of the trajectories, the design of sequence-level actions, and their applications in more practical tasks, to name a few, are interesting for future work. Suggesting text editing as a new testbed, we hope our findings will shed light on future studies in reinforcement learning applying to natural language processing.
+
+# Limitations
+
+Each time the state is updated, the agent can get immediate feedback on the previous action and thus a dynamic context representation during the editing. This also means that the encoder (e.g., a heavy pretrained language model) will be called multiple times to refresh the context matrix. Consequently, as the trajectory grows, the whole task becomes slow even though we have paralleled the decoding process. Meanwhile, applying our methods in more realistic editing tasks (e.g., grammatical error correction) remains a concern and needs to be explored in the near future.
+
+# Acknowledgements
+
+We gratefully appreciate Che Wang (Watcher), Yichen Gong, and Hui Xue for sharing their pearls of wisdom. We also would like to express our special thanks of gratitude to Yingying Huo for the support, as well as EMNLP anonymous reviewers for their constructive feedback. This work was supported by Shining Lab and Alibaba Group.
+
+# References
+
+Sweta Agrawal and Marine Carpuat. 2022. An imitation learning curriculum for text editing with non-autoregressive models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7550-7563, Dublin, Ireland. Association for Computational Linguistics.
+Sweta Agrawal, Weijia Xu, and Marine Carpuat. 2021. A non-autoregressive edit-based approach to controllable text simplification. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 3757-3769, Online. Association for Computational Linguistics.
+Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, and Vihari Piratla. 2019. Parallel iterative edit models for local sequence transduction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4260-4270, Hong Kong, China. Association for Computational Linguistics.
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+S.R.K. Branavan, Harr Chen, Luke Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 82-90, Suntec, Singapore. Association for Computational Linguistics.
+Yangyi Chen, Jin Su, and Wei Wei. 2021. Multi-granularity textual adversarial attack with behavior cloning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4511-4526, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. EditNTS: An neural programmer-Interpreter model for sentence simplification through explicit editing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3393-3402, Florence, Italy. Association for Computational Linguistics.
+Ahmed Elgohary, Christopher Meek, Matthew Richardson, Adam Fourney, Gonzalo Ramos, and Ahmed Hassan Awadallah. 2021. NL-EDIT: Correcting semantic parse errors through natural language interaction. In Proceedings of the 2021 Conference of the North American Chapter of the
+
+Association for Computational Linguistics: Human Language Technologies, pages 5599-5610, Online. Association for Computational Linguistics.
+Tao Ge, Furu Wei, and Ming Zhou. 2018. Fluency boost learning and inference for neural grammatical error correction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1055-1065, Melbourne, Australia. Association for Computational Linguistics.
+Jiatao Gu, James Bradbury, Caiming Xiong, Victor O.K. Li, and Richard Socher. 2018. Nonautoregressive neural machine translation. In International Conference on Learning Representations.
+Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 11181-11191. Curran Associates, Inc.
+Rahul Gupta, Aditya Kanade, and Shirish Shevade. 2019. Deep reinforcement learning for syntactic error repair in student programs. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):930-937.
+Kelvin Guu, Panupong Pasupat, Evan Liu, and Percy Liang. 2017. From language to programs: Bridging reinforcement learning and maximum marginal likelihood. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1051-1062, Vancouver, Canada. Association for Computational Linguistics.
+Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
+Marcin Junczys-Dowmunt, Roman Grundkiewicz, Shubha Guha, and Kenneth Heafield. 2018. Approaching neural grammatical error correction as a low-resource machine translation task. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 595-606, New Orleans, Louisiana. Association for Computational Linguistics.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Ryosuke Kohita, Akifumi Wachi, Yang Zhao, and Ryuki Tachibana. 2020. Q-learning with language model for edit-based unsupervised summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
+
+pages 470-484, Online. Association for Computational Linguistics.
+Jiquan Li, Junliang Guo, Yongxin Zhu, Xin Sheng, Deqiang Jiang, Bo Ren, and Linli Xu. 2022. Sequence-to-action: Grammatical error correction with action guided sequence generation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10):10974-10982.
+Zhongkun Liu, Pengjie Ren, Zhumin Chen, Zhaochun Ren, Maarten de Rijke, and Ming Zhou. 2021. Learning to ask conversational questions by optimizing Levenshtein distance. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5638-5650, Online. Association for Computational Linguistics.
+Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics.
+Jonathan Mallinson, Aliaksei Severyn, Eric Malmi, and Guillermo Garrido. 2020. FELIX: Flexible text editing through tagging and insertion. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 1244–1255, Online. Association for Computational Linguistics.
+Eric Malmi, Yue Dong, Jonathan Mallinson, Aleksandr Chuklin, Jakub Adamek, Daniil Mirylenka, Felix Stahlberg, Sebastian Krause, Shankar Kumar, and Aliaksei Severyn. 2022. Text generation with text-editing models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Tutorial Abstracts, pages 1-7, Seattle, United States. Association for Computational Linguistics.
+Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, and Aliaksei Severyn. 2019. Encode, tag, realize: High-precision text editing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5054–5065, Hong Kong, China. Association for Computational Linguistics.
+Sheena Panthaplackel, Miltiadis Allamanis, and Marc Brockschmidt. 2021. Copy that! editing sequences by copying spans. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15):13622-13630.
+Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on International Conference
+
+on Machine Learning - Volume 28, ICML'13, page III-1310-III-1318. JMLR.org.
+Dean A. Pomerleau. 1991. Efficient Training of Artificial Neural Networks for Autonomous Navigation. Neural Computation, 3(1):88-97.
+Lutz Prechelt. 1998. Early stopping-but when? In Neural Networks: Tricks of the trade, pages 55-69. Springer.
+Machel Reid and Victor Zhong. 2021. LEWIS: Levenshtein editing for unsupervised text style transfer. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3932-3944, Online. Association for Computational Linguistics.
+Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 627-635. JMLR Workshop and Conference Proceedings.
+Stefan Schaal. 1996. Learning from demonstration. In Advances in Neural Information Processing Systems, volume 9. MIT Press.
+Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada. Association for Computational Linguistics.
+Ning Shi, Wei Wang, Boxin Wang, Jinfeng Li, Xiangyu Liu, and Zhouhan Lin. 2021. Incorporating External POS Tagger for Punctuation Restoration. In Proc. Interspeech 2021, pages 1987-1991.
+Ning Shi, Ziheng Zeng, Haotian Zhang, and Yichen Gong. 2020. Recurrent inference in text editing. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 1758-1769, Online. Association for Computational Linguistics.
+Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958.
+Felix Stahlberg and Shankar Kumar. 2020. Seq2Edits: Sequence transduction using span-level edit operations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5147-5159, Online. Association for Computational Linguistics.
+Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104-3112. Curran Associates, Inc.
+
+Ronald J Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. *Neural computation*, 1(2):270-280.
+Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
+Ziyu Yao, Frank F. Xu, Pengcheng Yin, Huan Sun, and Graham Neubig. 2021. Learning structural edits via incremental tree transformations. In International Conference on Learning Representations.
+Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019. Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 156-165, Minneapolis, Minnesota. Association for Computational Linguistics.
\ No newline at end of file
diff --git a/texteditingasimitationgame/images.zip b/texteditingasimitationgame/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4339c392ff64ef32c6262b85a03909c55a4f41a3
--- /dev/null
+++ b/texteditingasimitationgame/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:27ccf142e2466355339af81234aa61c9a67c89ad8833b4f1af36541f0221de82
+size 580950
diff --git a/texteditingasimitationgame/layout.json b/texteditingasimitationgame/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a8de17f90e7b5d8433aa578aee3bcf363c530161
--- /dev/null
+++ b/texteditingasimitationgame/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:11ea254ed700f441123a5032ff696b196e80428ad6393c3146b109fb0967edc8
+size 507780
diff --git a/texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/503598f3-dab7-4382-abf2-7075200db809_content_list.json b/texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/503598f3-dab7-4382-abf2-7075200db809_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8b330ec358b6481d1c1e92924e7d8446d8bc2b58
--- /dev/null
+++ b/texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/503598f3-dab7-4382-abf2-7075200db809_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:52d5cf1d09c169f4e01ce50744d165c0f5517997787ccf2fd120d14d0d2373c8
+size 104566
diff --git a/texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/503598f3-dab7-4382-abf2-7075200db809_model.json b/texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/503598f3-dab7-4382-abf2-7075200db809_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..4ed54958d537fd84a4ad178b7cd3a01165f42470
--- /dev/null
+++ b/texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/503598f3-dab7-4382-abf2-7075200db809_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:76a01b90b9cfb5ce3cc947a30dc4b8c056549489d39ef3f5f1c11d8b30db0698
+size 124554
diff --git a/texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/503598f3-dab7-4382-abf2-7075200db809_origin.pdf b/texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/503598f3-dab7-4382-abf2-7075200db809_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2166d30b439b5f5ab39403ca26921ac8b371fde3
--- /dev/null
+++ b/texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/503598f3-dab7-4382-abf2-7075200db809_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a087cf781e22e34e34976950eaf6f4ad9eba875cee7cfa01b1031bad305da20b
+size 4900377
diff --git a/texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/full.md b/texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f4f56ef730e1d774f80151707e5c06712907884b
--- /dev/null
+++ b/texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/full.md
@@ -0,0 +1,439 @@
+# TextHacker: Learning based Hybrid Local Search Algorithm for Text Hard-label Adversarial Attack
+
+Zhen $\mathbf{Y}\mathbf{u}^{1*}$
+
+Xiaosen Wang $^{1,2*}$ ,
+
+Wanxiang Che3,
+
+Kun He $^{1\dagger}$
+
+1 School of Computer Science and Technology,
+
+Huazhong University of Science and Technology, Wuhan, China
+
+$^{2}$ Huawei Singular Security Lab, Beijing, China
+
+3 Research Center for SCIR, Harbin Institute of Technology, Harbin, China
+
+{baising15,xiaosen}@hust.edu.cn, car@ir.hit.edu.cn, brooklet60@hust.edu.cn
+
+# Abstract
+
+Existing textual adversarial attacks usually utilize the gradient or prediction confidence to generate adversarial examples, making it hard to be deployed in real-world applications. To this end, we consider a rarely investigated but more rigorous setting, namely hard-label attack, in which the attacker can only access the prediction label. In particular, we find we can learn the importance of different words via the change on prediction label caused by word substitutions on the adversarial examples. Based on this observation, we propose a novel adversarial attack, termed Text Hard-label attacker (TextHacker). TextHacker randomly perturbs lots of words to craft an adversarial example. Then, TextHacker adopts a hybrid local search algorithm with the estimation of word importance from the attack history to minimize the adversarial perturbation. Extensive evaluations for text classification and textual entailment show that TextHacker significantly outperforms existing hard-label attacks regarding the attack performance as well as adversary quality. Code is available at https://github.com/JHLHUST/TextHacker.
+
+# 1 Introduction
+
+Despite the unprecedented success of Deep Neural Networks (DNNs), they are known to be vulnerable to adversarial examples (Szegedy et al., 2014), in which imperceptible modification on the correctly classified samples could mislead the model. Adversarial examples bring critical security threats to the widely adopted deep learning based systems, attracting enormous attention on adversarial attacks and defenses in various domains, e.g. Computer Vision (CV) (Szegedy et al., 2014; Goodfellow et al., 2015; Madry et al., 2018; Wang et al., 2021a) and Natural Language Processing (NLP) (Papernot et al., 2016; Liang et al., 2018; Ren et al., 2019; Wang et al., 2022; Yang et al., 2022), etc.
+
+Compared with adversarial attacks in CV, textual adversarial attacks are more challenging due to the discrete input space and lexicality, semantics and fluency constraints. Recently, various textual adversarial attacks have been proposed, including white-box attacks (Ebrahimi et al., 2018; Li et al., 2019; Wang et al., 2021c), score-based attacks (Alzantot et al., 2018; Zang et al., 2020b) and hard-label attacks (Saxena, 2020; Maheshwary et al., 2021). Among these methods, hard-label attacks that only obtain the prediction label are more realistic in real-world applications but also more challenging.
+
+Existing white-box attacks (Li et al., 2019; Wang et al., 2021c) and score-based attacks (Ren et al., 2019; Yang et al., 2020) usually evaluate the word importance using either the gradient or change on logits after modifying the given word to craft adversarial examples. In contrast, due to the limited information (i.e., only the prediction labels) for hard-label attacks, it is hard to estimate the word importance, leading to relatively low effectiveness and efficiency on existing hard-label attacks (Maheshwary et al., 2021; Ye et al., 2022).
+
+Zang et al. (2020a) have shown that estimating the word importance by reinforcement learning algorithm via the prediction confidence exhibits good attack performance for score-based attacks, but performs poorly for hard-label attacks. We speculate that it cannot effectively estimate the word importance via the prediction label since most of the times the label does not change when turning benign samples into adversaries. It inspires us to investigate the problem: How to effectively estimate the word importance using the prediction label? In contrast, Wang et al. (2021b) show that replacing some words with synonyms could easily convert adversarial examples into benign samples. Thus, we could obtain abundant and useful information (i.e., changes of prediction label) for word importance estimation by word substitutions on the adversarial examples during the attack process. Such learned
+
+word importance could in turn guide us to minimize the word perturbation between adversarial examples and original samples.
+
+Based on the above observation, we propose a novel adversarial attack, named Text Hard-label attacker (TextHacker). TextHacker contains two stages, namely adversary initialization and perturbation optimization. At the adversary initialization stage, we substitute each word in the input text with its synonym iteratively till we find an adversarial example. At the perturbation optimization stage, TextHacker highlights the importance of each word based on the prediction label of the initialized adversarial example after synonym substitutions. Then TextHacker adopts the hybrid local search algorithm with local search (Aarts et al., 2003) as well as recombination (Radcliffe, 1993) to optimize the adversarial perturbation using such word importance, and simultaneously updates the word importance based on the model output.
+
+To validate the effectiveness of the proposed method, we compare TextHacker with two hard-label attacks (Maheshwary et al., 2021; Ye et al., 2022) and two evolutionary score-based attacks (Alzantot et al., 2018; Zang et al., 2020b) for text classification and textual entailment. Empirical evaluations demonstrate that TextHacker significantly outperforms the baselines under the same amount of queries, achieving higher average attack success rate with lower perturbation rate and generating higher-quality adversarial examples.
+
+# 2 Related Work
+
+This section briefly introduces the textual adversarial attacks and hybrid local search algorithm.
+
+# 2.1 Textual Adversarial Attacks
+
+Existing textual adversarial attacks fall into two settings: a) white-box attacks (Liang et al., 2018; Li et al., 2019; Zhang et al., 2019; Meng and Wattenhofer, 2020; Wang et al., 2021c) allow full access to the target model, e.g. architecture, parameters, loss function, gradient, output, etc. b) black-box attacks only allow access to the model output. Black-box attacks could be further split into two categories, in which score-based attacks (Gao et al., 2018; Alzantot et al., 2018; Ren et al., 2019; Jin et al., 2020; Zang et al., 2020a,b; Garg and Ramakrishnan, 2020) could access the output logits (i.e., prediction confidence) while hard-label attacks (Saxena, 2020; Maheshwary et al., 2021; Ye
+
+et al., 2022) could only utilize the prediction labels.
+
+Intuitively, hard-label attacks are much harder but more applicable in the real world and gain increasing interests. TextDeceptor (Saxena, 2020) hierarchically identifies the significant sentence among the input text and the critical word in the chosen sentence for attack. Hard label black-box attack (HLBB) (Maheshwary et al., 2021) initializes an adversarial example via multiple random synonym substitutions and adopts a genetic algorithm to minimize the adversarial perturbation between the initialized adversarial example and original text. TextHoaxter (Ye et al., 2022) randomly initializes an adversarial example and optimizes the perturbation matrix in the continuous embedding space to maximize the semantic similarity and minimize the number of perturbed word between the current adversarial example and the original text.
+
+Existing hard-label attacks access the prediction labels which are only used to evaluate adversarial examples without exploiting more information about the victim model. In this work, we learn the importance of each word w.r.t. the model based on the attack history, which is used to enhance the effectiveness of the attack.
+
+# 2.2 Hybrid Local Search Algorithm
+
+Hybrid local search algorithm is a popular population based framework, which is effective on typical combinatorial optimization problems (Galinier and Hao, 1999). It usually contains two key components, i.e., local search and recombination. Given a population containing multiple initial solutions, the local search operator searches for a better one from the neighborhood of each solution to approach the local optima. The recombination operator crossovers the existing solutions to accept non-improved solutions so that it could jump out of the local optima. Then it adopts the fixed number of top solutions for the next iteration. Compared to other evolutionary algorithms, e.g. genetic algorithm (Anderson and Ferris, 1994), particle swarm optimization (Kennedy and Eberhart, 1995), etc., hybrid local search algorithm balances the local and global exploitation that helps explore the search space with much higher efficiency.
+
+In this work, we follow the two-stage attack strategy in HLBB (Maheshwary et al., 2021). At the optimization stage, we utilize the word importance learned from the attack history to guide the local search and recombination. Thus, our method can
+
+focus on more critical words in the neighborhood which helps us find the optimal adversarial example from the whole search space more efficiently.
+
+# 3 Methodology
+
+In this section, we first introduce the preliminary, symbols and definitions in TextHacker, then provide a detailed description of the proposed method.
+
+# 3.1 Preliminary
+
+Given the input space $\mathcal{X}$ containing all the input texts and the output space $\mathcal{Y} = \{y_1, y_2, \ldots, y_k\}$ , a text classifier $f: \mathcal{X} \to \mathcal{Y}$ predicts the label $f(x)$ for any input text $x = \langle w_1, w_2, \ldots, w_n \rangle \in \mathcal{X}$ , in which $f(x)$ is expected to be equal to its ground-truth label $y_{true} \in \mathcal{Y}$ . The adversary typically adds an imperceptible perturbation on the correctly classified input text $x$ to craft a textual adversarial example $x^{adv}$ that misleads classifier $f$ :
+
+$$
+f (x ^ {a d v}) \neq f (x) = y _ {t r u e}, \quad \mathrm {s . t .} \quad d (x ^ {a d v}, x) < \epsilon ,
+$$
+
+where $d(\cdot, \cdot)$ is a distance metric (e.g. the $\ell_p$ -norm distance or perturbation rate) that measures the distance between the benign sample and adversarial example, and $\epsilon$ is a hyper-parameter for the maximum magnitude of perturbation. We adopt the perturbation rate as the distance metric:
+
+$$
+d (x ^ {a d v}, x) = \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {1} (w _ {i} ^ {a d v} \neq w _ {i}),
+$$
+
+where $\mathbb{1}(\cdot)$ is the indicator function and $w_{i}\in x$ , $w_{i}^{adv}\in x^{adv}$ . Given a correctly classified text $x$ , we could reformulate the adversarial attack as minimizing the perturbation between benign sample and adversarial example while keeping adversarial:
+
+$$
+\underset {x ^ {a d v}} {\operatorname {a r g m i n}} d \left(x ^ {a d v}, x\right) \quad \text {s . t .} \quad f \left(x ^ {a d v}\right) \neq f (x). \tag {1}
+$$
+
+In this work, we propose a novel hard-label attack, named TextHacker, to craft textual adversarial examples by only accessing the prediction label $f(x)$ for any input sample $x$ .
+
+# 3.2 Symbols and Definitions
+
+- Candidate set $\mathcal{C}(w_i)$ . For each word $w_i \in x$ , we construct the candidate set $\mathcal{C}(w_i) = \{\hat{w}_i^0, \hat{w}_i^1, \dots, \hat{w}_i^m\}$ containing the word $w_i$ ( $\hat{w}_i^0 = w_i$ ) and its top $m$ nearest synonyms in the counter-fitted embedding space (Mrkšić et al., 2016). All the substitutions would be constrained in this set.
+
+- Weight table $\mathcal{W}$ . We construct a weight table $\mathcal{W}$ , a matrix with the shape of $(n, m + 1)$ , in which each item $\mathcal{W}_{i,j}$ represents the word importance of $\hat{w}_i^j \in \mathcal{C}(w_i)$ and $\mathcal{W}_{i,:} = \sum_{j=0}^{m} \mathcal{W}_{i,j}$ denotes the position importance of word $w_i \in x$ . The weight table $\mathcal{W}$ could guide the hybrid local search algorithm to determine the substitution at each iteration, which is initialized with all 0s.
+- $\delta$ -neighborhood $N_{\delta}(x)$ . Given an input sample $x$ , we define its $\delta$ -neighborhood as the set of texts in the input space $\mathcal{X}$ with at most $\delta$ different words from the sample $x$ :
+
+$$
+N _ {\delta} (x) = \left\{x ^ {k} \mid \sum_ {i = 1} ^ {n} \mathbb {1} \left(w _ {i} ^ {k} \neq w _ {i}\right) \leq \delta , x ^ {k} \in \mathcal {X} \right\},
+$$
+
+where $w_{i}^{k}\in x^{k},w_{i}\in x$ and $\delta$ is the maximum radius of the neighborhood. The neighborhood $N_{\delta}(x)$ reflects the search space for local search on input sample $x$ .
+
+- Fitness function $F\left( {x}^{\prime }\right)$ . Given an input sample ${x}^{\prime }$ and benign text $x$ ,we could define the fitness function as:
+
+$$
+F \left(x ^ {\prime}\right) = \mathbb {1} \left(f \left(x ^ {\prime}\right) \neq f (x)\right) \cdot \left(1 - d \left(x ^ {\prime}, x\right)\right). \tag {2}
+$$
+
+The fitness function could evaluate the quality of adversarial example to construct the next generation for TextHacker.
+
+# 3.3 The Proposed TextHacker Algorithm
+
+As illustrated in Figure 1, TextHacker contains two stages, i.e., adversary initialization to initialize an adversarial example and perturbation optimization to minimize the adversarial perturbation. In general, there are four operators used in TextHacker, namely WordSubstitution for adversary initialization, LocalSearch, WeightUpdate and Recombination for the hybrid local search algorithm at the perturbation optimization stage. The details of these operators are summarized as follows:
+
+- WordSubstitution $(x_{t},\mathcal{C})$ : Given an input text $x_{t}$ at $t$ -th iteration with the candidate set $\mathcal{C}$ of each word $w_{i}\in x_{t}$ , we randomly substitute each word $w_{i}\in x_{t}$ with a candidate word $\hat{w}_i^j\in \mathcal{C}(w_i)$ to craft a new text $x_{t + 1}$ . WordSubstitution aims to search for an adversarial example in the entire search space by random word substitutions.
+- LocalSearch $(x_{t}^{adv},\mathcal{C},\mathcal{W})$ : As shown in Figure 2, for an adversarial example $x_{t}^{adv}$ at $t$ -th
+
+
+Figure 1: The overall framework of the proposed TextHacker algorithm. At the adversary initialization stage, for a given input text $x$ , after generating the candidate set for each word $w_i \in x$ , we randomly substitute each word with its candidate words till we obtain an adversarial example $x_1^{adv}$ . At the perturbation optimization stage, we first utilize local search to construct an initial population $\mathcal{P}^0$ . Subsequently, we iteratively adopt recombination as well as local search to maximize the fitness function, and update the weight table after each local search.
+
+iteration with the candidate set $\mathcal{C}$ and weight table $\mathcal{W}$ , we randomly sample several (at most $\delta$ ) less important words $\hat{w}_i^{jt} \in x_t^{adv}$ with the probability $p_i$ from all the perturbed words in $x_t^{adv}$ :
+
+$$
+p _ {i} = \frac {1 - \sigma \left(\mathcal {W} _ {i , :}\right)}{\sum_ {i = 1} ^ {n} \left[ 1 - \sigma \left(\mathcal {W} _ {i , :}\right) \right]},
+$$
+
+where $\sigma(x) = 1 / (1 + e^{-x})$ is the sigmoid function. The coarse-grained learning strategies in WeightUpdate could easily make the gap between the word importance too large, resulting in probability distortion and getting stuck during the candidate word selection. To solve this problem, we utilize the sigmoid function with the saturation characteristic to reduce the excessive gap and make the probability more reasonable. Then, we substitute each chosen word $\hat{w}_i^{j_t}$ with the original word $\hat{w}_i^0$ or with an arbitrary word $\hat{w}_i^{j_{t+1}} \in \mathcal{C}(w_i)$ using the probability $p_{i,j_{t+1}}$ equally to generate a new sample $x_{t+1}^{adv}$ :
+
+$$
+p _ {i, j _ {t + 1}} = \frac {\sigma (\mathcal {W} _ {i , j _ {t + 1}})}{\sum_ {j _ {t + 1} = 0} ^ {m} \sigma (\mathcal {W} _ {i , j _ {t + 1}})}.
+$$
+
+We accept $x_{t+1}^{adv}$ if it is still adversarial, otherwise we return the input adversarial example $x_{t}^{adv}$ . LocalSearch greedily substitutes unimportant word with the original word or critical word using the weight table to search for better adversarial example from the $\delta$ -neighborhood of $x_{t}^{adv}$ .
+
+- WeightUpdate $(x_{t}^{adv}, x_{t+1}^{adv}, f, \mathcal{W})$ : Given an adversarial example $x_{t}^{adv}$ at $t$ -th iteration with the generated adversary $x_{t+1}^{adv}$ by local search, we update the word importance of each operated word
+
+
+Figure 2: The overview of the LocalSearch and WeightUpdate. For an adversary $x_{t}^{adv}$ , we sample several words with probability $p_{i}$ based on the weight table. Then, we substitute each sampled word with original word or its candidate word with probability $p_{i,j}$ to generate a new text $x_{t+1}^{adv}$ . Finally, we use the prediction label of the new text $x_{t+1}^{adv}$ to update the weight table.
+
+$\hat{w}_i^{j_t} \in x_t^{adv}$ and $\hat{w}_i^{j_{t+1}} \in x_{t+1}^{adv}$ , and the position importance of $w_i$ using the following rules:
+
+Rule I: For each replaced word $\hat{w}_i^{j_{t + 1}}$ , if $x_{t + 1}^{adv}$ is still adversarial, it has positive impact on the adversary generation. So we increase its weight $\mathcal{W}_{i,j_{t + 1}}$ , and vice versa.
+
+Rule II: For each operated position $i$ , if $x_{t+1}^{adv}$ is still adversarial, it has little impact on the adversary generation. So we decrease the position weight $\mathcal{W}_{i,:}$ , and vice versa.
+
+Specifically, if $x_{t+1}^{adv}$ is still adversarial, we assign the positive reward $r$ to each replaced word $\hat{w}_i^{j_{t+1}}$
+
+using Rule I, and reward $-2r$ to each $\hat{w}_i^{jt}$ to decrease the weight summation $\mathcal{W}_{i,:} = \sum_{j=0}^{m} \mathcal{W}_{i,j}$ in each operated position $i$ using Rule II:
+
+$$
+\mathcal {W} _ {i, j _ {t + 1}} ^ {\prime} = \mathcal {W} _ {i, j _ {t + 1}} + r, \quad \mathcal {W} _ {i, j _ {t}} ^ {\prime} = \mathcal {W} _ {i, j _ {t}} - 2 r,
+$$
+
+where $r$ is the predefined reward value and $\mathcal{W}'$ is the weight table after this update. Otherwise, we assign the reward $-r$ to each $\hat{w}_i^{j_{t+1}}$ and $2r$ to each $\hat{w}_i^{j_t}$ . WeightUpdate highlights the important words and positions by assigning different reward for each operated word, which helps the LocalSearch select more critical positions and synonyms to substitute.
+
+- Recombination $(\mathcal{P}^t, \mathcal{W})$ : For the $t$ -th generation population $\mathcal{P}^t$ that contains multiple adversarial examples, we combine two randomly sampled texts $x^a = \langle w_1^a, w_2^a, \ldots, w_n^a \rangle \in \mathcal{P}^t$ and $x^b = \langle w_1^b, w_2^b, \ldots, w_n^b \rangle \in \mathcal{P}^t$ to construct a recombined text $x^c = \langle w_1^c, w_2^c, \ldots, w_n^c \rangle$ , where each word $w_i^c$ is randomly sampled from $\{w_i^a, w_i^b\}$ based on their weights in the weight table $\mathcal{W}$ . We repeat the operation $|\mathcal{P}^t| / 2$ times, and then return all the recombined texts. Recombination crafts non-improved solutions by randomly mixing two adversarial examples, which globally changes the text to avoid poor local optima.
+
+In summary, as shown in Figure 1, at the adversary initialization stage, for an input text $x$ , we adopt WordSubstitution iteratively to search for an adversarial example. At the perturbation optimization stage, we initialize the weight table $\mathcal{W}$ and adopt the hybrid local search algorithm to minimize the adversary perturbation. Specifically, we first utilize the LocalSearch to construct an initial population. At each iteration, we adopt Recombination and LocalSearch to generate several adversarial examples using the weight table $\mathcal{W}$ . Then we utilize the fitness function in Equation (2) to filter adversarial examples for the next generation. After the adversary optimization, the adversary with the highest fitness would be regarded as the final adversarial example. The overall algorithm of TextHacker is summarized in Algorithm 1.
+
+# 4 Experiments
+
+In this section, we conduct extensive experiments on eight benchmark datasets and four models to validate the effectiveness of TextHacker.
+
+Algorithm 1: The TextHacker Algorithm
+Input: Input sample $x$ , target classifier $f$ , query budget $T$ , reward $r$ , population size $S$ , maximum number of local search $N$
+Output: Attack result and adversarial example
+1 $\triangleright$ Adversary Initialization
+2 Construct the candidate set $\mathcal{C}(w_i)$ for each $w_i \in x$
+3 $x_1 = x$ , $x_1^{adv} = \text{None}$
+4 for $t = 1 \rightarrow T$ do
+5 $\begin{array}{l} x_{t+1} = \text{WordSubstitution}(x_t, \mathcal{C}) \\ \text{if } f(x_{t+1}) \neq f(x) \text{ then} \\ x_1^{adv} = x_{t+1}; \text{break} \end{array}$
+6
+7
+8 if $x_1^{adv}$ is None then
+9 return False, None
+10 $\triangleright$ Perturbation Optimization
+11 Initialize the weight table $\mathcal{W}$ with all 0s
+12 $x_{i+1}^{adv} = \text{LocalSearch}(x_i^{adv}, \mathcal{C}, \mathcal{W})$
+13 $\mathcal{P}^1 = \{x_1^{adv}, \dots, x_i^{adv}, \dots, x_S^{adv}\}$
+14 $t = t + S - 1; g = 1$
+15 while $t \leq T$ do
+16 $\mathcal{P}^g = \mathcal{P}^g \cup \{\text{Recombination}(\mathcal{P}^g, \mathcal{W})\}$
+17 for each text $x_g^{adv} \in \mathcal{P}^g$ do
+18 With $x_1^{adv} = x_g^{adv}$ for $i = 1 \rightarrow N$ :
+19 $x_{i+1}^{adv} = \text{LocalSearch}(x_i^{adv}, \mathcal{C}, \mathcal{W})$ ;
+20 WeightUpdate $(x_i^{adv}, x_{i+1}^{adv}, f, \mathcal{W})$
+21 $\mathcal{P}^g = \mathcal{P}^g \cup \{x_{N+1}^{adv}\}$
+22 $t = t + N$
+23 Construct $\mathcal{P}^{g+1}$ with the top $S$ fitness in $\mathcal{P}^g$ based on Equation (2)
+24 Record global optima $x_{best}^*$ with the highest fitness
+25 $g = g + 1$
+26 return True, $x_{best}^*$ $\triangleright$ Attack succeeds
+
+# 4.1 Experimental Setup
+
+Datasets. We adopt five widely investigated datasets, i.e., AG's News (Zhang et al., 2015), IMDB (Maas et al., 2011), MR (Pang and Lee, 2005), Yelp (Zhang et al., 2015), and Yahoo! Answers (Zhang et al., 2015) for text classification. For textual entailment, we select SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018), where MultiNLI includes matched version (MNLI) and mismatched version (MNLIm).
+
+Baselines. We take the hard-label attacks HLBB (Maheshwary et al., 2021) and TextHoaxter (Ye et al., 2022) as our baselines. Since there are only few hard-label attacks proposed recently, we also adopt two evolutionary score-based attacks, i.e., GA (Alzantot et al., 2018) and PSO (Zang et al., 2020b) for reference, which extra utilize the prediction confidence for attack.
+
+Victim Models. We adopt WordCNN (Kim, 2014), WordLSTM (Hochreiter and Schmidhuber, 1997), and BERT base-uncased (Devlin et al.,
+
+
Model
Attack
AG's News
IMDB
MR
Yelp
Yahoo! Answers
Succ.
Pert.
Succ.
Pert.
Succ.
Pert.
Succ.
Pert.
Succ.
Pert.
BERT
GA
40.5
13.4
50.9
5.0
65.6
10.9
36.6
8.6
64.2
7.6
PSO
45.8
12.1
60.3
3.7
74.4
10.7
47.9
7.5
64.7
6.6
HLBB
54.7
13.4
77.0
4.8
65.8
11.4
57.1
8.2
82.0
7.7
TextHoaxter
52.0
12.8
78.8
5.1
67.1
11.1
58.3
8.5
83.1
7.6
TextHacker
63.2
11.9
81.5
3.4
73.1
11.4
63.2
6.7
87.2
6.3
Word CNN
GA
70.0
12.1
59.6
5.9
72.9
11.1
44.4
9.0
62.0
8.7
PSO
83.5
10.4
55.6
4.2
80.7
10.7
45.6
7.4
52.7
7.0
HLBB
74.0
11.7
74.0
4.2
71.1
11.2
67.1
7.6
78.7
7.8
TextHoaxter
73.5
11.5
76.5
4.6
71.1
10.7
68.1
8.0
78.6
7.8
TextHacker
81.7
10.2
77.8
3.0
78.3
11.1
75.4
6.4
84.5
6.3
Word LSTM
GA
45.5
12.4
50.8
5.7
67.2
11.2
40.7
8.1
51.2
8.6
PSO
54.2
11.6
42.5
4.5
73.0
10.9
44.5
6.7
43.3
7.3
HLBB
56.8
12.7
72.1
4.1
68.3
11.2
61.0
6.6
70.8
8.3
TextHoaxter
56.5
12.3
73.5
4.5
67.9
10.7
61.8
6.7
70.1
8.1
TextHacker
64.7
11.2
76.2
3.0
75.2
11.2
65.4
5.5
75.5
6.9
+
+2019) models for text classification and BERT base-uncased model for textual entailment.
+
+Evaluation Settings. For TextHacker, we set the neighborhood size $\delta = 5$ , reward $r = 0.5$ , population size $S = 4$ , maximum number of local search $N = 8$ . The parameter studies are given in Appendix A. For a fair comparison, we adjust the population size and adopt the same values for other parameters as in their original papers to achieve better performance for the score-based attacks of GA and PSO. All the evaluations are conducted on 1,000 randomly sampled texts from the corresponding testset. We set the synonym number $m = 4$ . The attack succeeds if the perturbation rate of the generated adversarial example is smaller than $25\%$ to ensure the semantic constraints of the adversarial examples. As the task complexity varies across datasets, we set different query budget $T$ (i.e., the maximum query number to the victim model) for different tasks (2,000 for text classification and 500 for textual entailment). The results are averaged on five runs to eliminate randomness.
+
+# 4.2 Evaluation on Attack Effectiveness
+
+We first conduct evaluations for text classification using five datasets on three models under the same query budget of 2,000. The results, including attack success rate and perturbation rate, are summarized in Table 1. We could observe that TextHacker consistently achieves higher attack success rate with lower perturbation rate across almost all the datasets and victim models than the hard-label
+
+Table 1: Attack success rate (Succ., %) ↑, perturbation rate (Pert., %) ↓ of various attacks on three models using five datasets for text classification under the query budget of 2,000. ↑ denotes the higher the better. ↓ denotes the lower the better. We bold the highest attack success rate and lowest perturbation rate among the hard-label attacks.
+
+
Attack
SNLI
MNLI
MNLIm
Succ.
Pert.
Succ.
Pert.
Succ.
Pert.
GA
67.2
14.6
67.6
12.6
66.9
12.2
PSO
70.7
15.0
72.0
12.9
70.8
12.4
HLBB
57.2
14.0
58.3
12.2
58.6
11.8
TextHoaxer
61.0
14.1
64.0
12.4
63.8
12.0
TextHacker
70.3
15.0
68.3
12.8
69.0
12.4
+
+Table 2: Attack success rate (Succ., %) ↑, perturbation rate (Pert., %) ↓ of TextHacker and the baselines on BERT using three datasets for textual entailment under the query budget of 500.
+
+attacks. Even for the score-based attacks of GA and PSO, TextHacker exhibits better attack performance on most datasets and victim models.
+
+To further validate the effectiveness of the proposed TextHacker, we also conduct evaluations on BERT for three textual entailment tasks. As shown in Table 2, under the same query budget of 500, TextHacker outperforms HLBB by a clear margin of $10.0\% - 13.1\%$ and TextHoaxter by $4.3\% - 9.3\%$ on three datasets with similar perturbation rate. Compared with the score-based attacks, TextHacker achieves lower attack success rate than PSO, but still gains better attack success rate than GA. It is acceptable since GA and PSO extra utilize the changes on prediction confidence introduced by synonym substitution, making the attack much easier than the hard-label attacks.
+
+In conclusion, under the same query budgets, the proposed TextHacker exhibits much better attack performance than existing hard-label attacks, for
+
+
+Figure 3: Attack success rate $(\%)$ ↑ of various attacks on BERT using IMDB dataset under various query budgets.
+
+either text classification or textual entailment, and achieves comparable or even better attack performance than the advanced score-based attacks.
+
+# 4.3 Evaluation on Attack Efficiency
+
+In practice, the victim could block the attack by simply denying the access if they detect overload access within a short period. Hence, the attack efficiency, which often refers to the query budget for victim model, plays a key role in evaluating the effectiveness of black-box attacks. On the other hand, the query budget significantly affects the attack performance of the algorithm. Thus, a good attack should exhibit consistent and superior attack performance under various query budgets.
+
+We report the attack success rate of TextHacker and the baselines under various query budgets on BERT using IMDB dataset in Figure 3. TextHacker, HLBB and TextHoaxter exhibit remarkably higher attack success rate than GA and PSO under the limited query budget ( $\leq 2,000$ ). We further analyze why GA and PSO perform poorly under the limited query budget in Appendix B. When we continue to increase the query budget, the attack success rate of GA and PSO starts to increase rapidly but is still lower than that of TextHacker, which maintains stable and effective performance. In general, TextHacker consistently exhibits better attack performance under various query budgets, which further demonstrates the superiority of TextHacker.
+
+# 4.4 Evaluation on Adversary Quality
+
+Adversarial examples should be indistinguishable from benign samples for humans but mislead the model prediction. Hence, textual adversarial examples should maintain the original meaning without
+
+
Attack
Succ.
Pert.
Sim.
Gram.
GA
50.9
5.0
79.3
0.9
PSO
60.3
3.7
81.8
0.7
HLBB
77.0
4.8
84.9
0.6
TextHoaxter
78.8
5.1
85.8
0.6
TextHacker
81.5
3.4
82.3
0.4
+
+Table 3: Attack success rate (Succ., %) ↑, perturbation rate (Pert., %) ↓, average semantic similarity (Sim., %) ↑, grammatical error increase rate (Gram., %) ↓ of TextHacker and the baselines on BERT using IMDB dataset under the query budget of 2,000.
+
+apparent typos or grammatical errors. Though existing word-level attacks adopt synonym substitution to maintain semantic consistency, it is still possible to introduce grammatical error and semantic inconsistency. Apart from the perturbation rate, we further evaluate the semantic similarity and grammatical error increase rate using the Universal Sequence Encoder (USE) (Cer et al., 2018) and Language-Tool $^{1}$ , respectively.
+
+We compare TextHacker with the baselines on BERT using IMDB dataset and summarize the results in Table 3. With the lowest perturbation rate, TextHacker exhibits better semantic similarity than the score-based attacks of GA and PSO but is lower than HLBB and TextHoaxter, which consider the semantic similarity of synonyms using the USE tool during the attack. However, USE tool is time-consuming and computationally expensive, resulting in HLBB and TextHoaxter running slower than TextHacker as shown in Table 4, and their CPU occupancy rate is seven times that of TextHacker. Also, TextHacker achieves the lowest grammatical error increase rate compared with the baselines. The human evaluation in Appendix C shows that the adversarial examples generated by TextHacker are of high quality and difficult to be detected by humans. These evaluations demonstrate the high lexicality, semantic similarity and fluency of the generated adversarial examples of TextHacker.
+
+# 4.5 Evaluation on Real-world Applications
+
+With the rapid development and broad application of DNNs, numerous companies have deployed many commercial Application Programming Interfaces (APIs) for various tasks, e.g. sentiment analysis, named entity recognition, etc. The user can obtain the prediction label by calling the service API, making it possible for hard-label attackers to attack. To validate the attack effectiveness
+
+
+Weight Table
+Figure 4: Visualization of the weight table in TextHacker and the word importance table from the victim model, representing the word importance of nouns, verbs, adjectives, adverbs, and their candidate words in the original text. The original words are highlighted in Cyan, with each row representing the candidate words. The substituted words are highlighted in Red with marker $\star$ . A darker color indicates a more important word.
+
+
+Original Text. Label: Positive A gripping movie, played with performance that are all understated and touching. Adversarial Text. Label: Negative A gripping films, played with representations that sunt all devaluted and touching.
+Word Importance Table
+
+
Attack
Succ.
Pert.
Sim.
Gram.
Time
HLBB
65.0
5.7
82.1
0.5
8.7
TextHoaxter
65.0
5.2
82.2
0.4
9.3
TextHacker
75.0
3.1
80.9
0.3
5.7
+
+Table 4: Attack success rate (Succ., %) ↑, perturbation rate (Pert., %) ↓, average semantic similarity (Sim., %) ↑, grammatical error increase rate (Gram., %) ↓, and running time per attack (Time, in minutes) ↓ of various hard-label attacks on Amazon Cloud APIs under the query budget of 2,000.
+
+of TextHacker in the real world, we evaluate the attack performance of TextHacker, HLBB, and TextHoaxter on Amazon Cloud sentiment analysis $\mathsf{API}^2$ . Besides, attacks that run faster in the real world are more available and convenient. So we also report the average running time per attack. Due to the high cost of commercial APIs, we sample 20 texts from IMDB dataset for the test. As shown in Table 4, TextHacker achieves higher attack success rate, generates higher quality adversarial examples and runs faster than HLBB and TextHoaxter when facing real world APIs under tight query budget.
+
+# 4.6 Visualization of Weight Table
+
+Existing attacks (Ren et al., 2019; Jin et al., 2020) usually take the model's output changes to different words as the word importance and perturb the top important words to generate adversarial examples. In this work, the weight table plays such a role, which learns the word importance from the attack history. Thus, the precise estimation of model's behavior is the key to generating better adversarial examples. To further explore TextHacker, we conduct comparison and visualization to analyze
+
+
Attack
Succ.
Pert.
Sim.
Gram.
Weight table
22.4
11.9
71.5
1.3
Hybrid local search
79.6
6.2
77.5
0.7
TextHacker
81.5
3.4
82.3
0.4
+
+Table 5: Ablation study on the hybrid local search algorithm and weight table in TextHacker on BERT using IMDB dataset under the query budget of 2,000.
+
+the difference between the weight table and the word importance table from the model. We generate the adversarial example of one benign text sampled from MR dataset by TextHacker. For the word importance table, we calculate the word importance of each word by the prediction confidence difference after replacing the original word with the candidate word on BERT. We map the values in the learned weight table and word importance table into [-1, 1] and illustrate their heatmaps in Figure 4. More case studies are presented in Appendix D. We find that the weight table is consistent with the word importance table for the most important words. It helps TextHacker optimize the adversarial perturbation more efficiently and hold on the most important words for better adversarial example. This is important and challenging in the hard-label attack setting, which also explains the superiority of TextHacker.
+
+# 4.7 Ablation Study
+
+To study the impact of different components of TextHacker, we conduct a series of ablation studies on BERT using IMDB dataset under the query budget of 2,000.
+
+The impact of weight table and hybrid local search. We design two variants to evaluate the impact of various components in TextHacker. a)
+
+
Attack
Succ.
Pert.
Sim.
Gram.
Local search → Mutation
79.1
6.1
77.5
0.7
Recombination → Crossover
81.3
3.7
81.9
0.4
TextHacker
81.5
3.4
82.3
0.4
+
+Table 6: Ablation study on the hybrid local search in TextHacker and genetic algorithm in HLBB on BERT using IMDB dataset under the query budget of 2,000.
+
+
Attack
Succ.
Pert.
Sim.
Gram.
Random-search
80.2
5.3
77.8
0.7
Random-flip
81.0
5.3
76.4
0.7
TextHacker
81.5
3.4
82.3
0.4
+
+Table 7: Ablation study on the hybrid local search in TextHacker and alternative strategies on BERT using IMDB dataset under the query budget of 2,000.
+
+weight table: we remove the hybrid local search and greedily substitute the sampled word with its synonyms iteratively based on the weight table. b) Hybrid local search: we utilize the hybrid local search to search for better adversaries without weight table. The experiments in Table 5 show the effectiveness and rationality of different components in TextHacker.
+
+Hybrid local search vs. genetic algorithms. Genetic algorithm in HLBB is inefficient in exploring the search space compared to the hybrid local search algorithm in TextHacker that balances the local and global exploitation. Compared with random synonym substitutions on mutation in HLBB, the local search replaces more critical words using word importance, making it reach the local optima faster. To further illustrate their differences, we replace local search with mutation and recombination with crossover respectively. The experiments in Table 6 demonstrate that the first change drops the success rate by $2.4\%$ and increases the perturbation rate by $2.7\%$ . The second change drops the success rate by $0.2\%$ and increases the perturbation rate by $0.3\%$ . This study validates the better performance of local search and recombination.
+
+Local search vs. alternative strategies. We replace the local search with two alternative strategies, namely random-search that randomly substitutes the sampled word with its synonyms, and random-flip that directly substitutes the sampled word with the original word. The experiments in Table 7 demonstrate that local search achieves better attack performance than random-search and random-flip, showing the superiority of the local search in TextHacker.
+
+# 5 Conclusion
+
+In this work, we propose a new text hard-label attack called TextHacker. TextHacker captures the words that have higher impact on the adversarial example via the changes on prediction label. By incorporating the learned word importance into the search process of the hybrid local search, TextHacker can reduce the adversarial perturbation between the adversarial example and benign text more efficiently to generate more natural adversarial examples. Extensive evaluations for two typical NLP tasks, namely text classification and textual entailment, using various datasets and models demonstrate that TextHacker achieves higher attack success rate and lower perturbation rate than existing hard-label attacks and generates higher-quality adversarial examples. We believe that TextHacker could shed new light on more precise estimation of the word importance and inspire more researches on hard-label attacks.
+
+# Limitations
+
+As shown in Table 3, adversarial examples generated by TextHacker have a slightly lower semantic similarity than HLBB and TextHoaxter from the automatic metric perspective. However, the quality (i.e., lexicality, semantic similarity and fluency) of adversarial examples depend not only on semantic similarity evaluation, but also on perturbation rate, grammatical error rate, human evaluation, etc. In our experiments, the quality in Table 3 and human evaluation experiment in Appendix C have demonstrated the higher quality and the harder detection by humans of the adversarial example generated by our TextHacker. In addition, the semantic similarity metric is usually measured by the USE tool which will lead to high computing resource occupancy and slow running speed of the attack algorithm, as described in Section 4.4. However, a faster and less resource-intensive attack attack is usually more suitable and convenient in the real world. Considering semantic similarity alone may not be a good choice for generating high quality adversarial examples. Hence, this limitation is acceptable.
+
+# Acknowledgement
+
+This work is supported by National Natural Science Foundation (62076105,U22B2017).
+
+# References
+
+Emile Aarts, Emile HL Aarts, and Jan Karel Lenstra. 2003. Local search in combinatorial optimization. In Princeton University Press.
+Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Conference on Empirical Methods in Natural Language Processing.
+Edward J Anderson and Michael C Ferris. 1994. Genetic algorithms for combinatorial optimization: the assemble line balancing problem. ORSA Journal on Computing.
+Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Conference on Empirical Methods in Natural Language Processing.
+Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. In arXiv preprint arXiv:1803.11175.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
+Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In Association for Computational Linguistics.
+Philippe Galinier and Jin-Kao Hao. 1999. Hybrid evolutionary algorithms for graph coloring. In Springer.
+Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In 2018 IEEE Security and Privacy Workshops (SPW).
+Siddhant Garg and Goutham Ramakrishnan. 2020. BAE: BERT-based adversarial examples for text classification. In Conference on Empirical Methods in Natural Language Processing.
+Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In International Conference on Learning Representations.
+Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. In Neural Computation.
+Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. In AAAI Conference on Artificial Intelligence.
+
+James Kennedy and Russell Eberhart. 1995. Particle swarm optimization. In Proceedings of ICNN'95-international conference on neural networks. IEEE.
+Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Conference on Empirical Methods in Natural Language Processing.
+Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2019. Textbugger: Generating adversarial text against real-world applications. In Network and Distributed System Security Symposium.
+Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2018. Deep text classification can be fooled. In International Joint Conference on Artificial Intelligence.
+Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Association for Computational Linguistics.
+Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations.
+Rishabh Maheshwary, Saket Maheshwary, and Vikram Pudi. 2021. Generating natural language attacks in a hard label black box setting. In AAAI Conference on Artificial Intelligence.
+Zhao Meng and Roger Wattenhofer. 2020. A geometry-inspired attack for generating natural language adversarial examples. In International Conference on Computational Linguistics.
+Nikola Mrksic, Diarmuid Řeaghdha, Blaise Thomson, Milica Gašić, Lina M. Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
+Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Association for Computational Linguistics.
+Nicolas Papernot, Patrick McDaniel, Ananthram Swami, and Richard Harang. 2016. Crafting adversarial input sequences for recurrent neural networks. In MILCOM IEEE Military Communications Conference.
+Nicholas J Radcliffe. 1993. Genetic set recombination. In Foundations of Genetic Algorithms.
+Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In Association for Computational Linguistics.
+
+Sachin Saxena. 2020. Textdeceptor: Hard label black box attack on text classifiers. In arXiv preprint arXiv:2008.06860.
+Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In International Conference on Learning Representations.
+Xiaosen Wang, Xuanran He, Jingdong Wang, and Kun He. 2021a. Admix: Enhancing the transferability of adversarial attacks. In International Conference on Computer Vision, pages 16138-16147.
+Xiaosen Wang, Hao Jin, Yichen Yang, and Kun He. 2021b. Natural language adversarial defense through synonym encoding. In Conference on Uncertainty in Artificial Intelligence.
+Xiaosen Wang, Yifeng Xiong, and Kun He. 2022. Randomized substitution and vote for textual adversarial example detection. In Conference on Uncertainty in Artificial Intelligence.
+Xiaosen Wang, Yichen Yang, Yihe Deng, and Kun He. 2021c. Adversarial training with fast gradient projection method against synonym substitution based text attacks. In AAAI Conference on Artificial Intelligence.
+Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In *North American Chapter of the Association for Computational Linguistics: Human Language Technologies*.
+Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, and Michael I Jordan. 2020. Greedy attack and gumbel attack: Generating adversarial examples for discrete data. In Journal of Machine Learning Research.
+Yichen Yang, Xiaosen Wang, and Kun He. 2022. Robust textual embedding against word-level adversarial attacks. In Conference on Uncertainty in Artificial Intelligence.
+Muchao Ye, Chenglin Miao, Ting Wang, and Fenglong Ma. 2022. Texthoaxer: Budgeted hard-label adversarial attacks on text. In AAAI Conference on Artificial Intelligence.
+Yuan Zang, Bairu Hou, Fanchao Qi, Zhiyuan Liu, Xiaojun Meng, and Maosong Sun. 2020a. Learning to attack: Towards textual adversarial attacking in real-world situations. In arXiv preprint arXiv:2009.09192.
+Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020b. Word-level textual adversarial attacking as combinatorial optimization. In Association for Computational Linguistics.
+
+Huangzhao Zhang, Hao Zhou, Ning Miao, and Lei Li. 2019. Generating fluent adversarial examples for natural languages. In Association for Computational Linguistics.
+Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems.
+
+# A Parameter Study
+
+To gain more insights into the effectiveness of our TextHacker, we conduct a series of parameter studies to explore the impact of hyper-parameters for the neighborhood size $\delta$ , population size $S$ , and maximum number of local search $N$ in TextHacker. We conduct parameter studies on BERT using IMDB dataset to determine the best hyperparameters and use the same hyper-parameters on all other datasets.
+
+On the neighborhood size. In Figure 5a, we study the impact of the neighborhood size $\delta$ . The small $\delta$ would restrict the search scope of local search, making it difficult to find the local optimal solution from the vast search space, resulting in low attack success rate and high perturbation rate under limited query budgets. As $\delta$ increases, the attack success rate increases and the perturbation rate decreases until $\delta = 5$ . When we continually increase $\delta$ , the vast search scope of local search makes it difficult to converge to local optima, resulting in an increase in perturbation rate. Thus, we set $\delta = 5$ in our experiments.
+
+On the population size. As shown in Figure 5b, we study the impact of population size $S$ . When $S = 1$ , the hybrid local search algorithm degrades to the non-population-based algorithm which exhibits high perturbation rate. With the increment on the value of $S$ , the perturbation rate decreases until $S = 4$ . When we continually increase $S$ , the local search operator costs many queries for each candidate solution in the population. This limits the number of iterations of the overall algorithm under tight query budget, leading to low attack success rate and high perturbation rate. Thus, we set $S = 4$ in our experiments.
+
+On the maximum number of local search. We finally study the impact of maximum number of local search $N$ , as shown in Figure 5c. When $N = 2$ , the recombination operator is performed for every two steps of the local search operator. It is difficult for local search operator to thoroughly explore the neighborhood space, resulting in low attack success rate and high perturbation rate. When $N$ is too large, there are too few recombination operations under tight budgets, making TextHacker insufficient to explore the entire search space, leading to unstable performance. Therefore, we adopt an intermediate value $N = 8$ to balance the local search and recombination in our experiments.
+
+
Attack
S = 4
S = 30
Succ.
Pert.
Succ.
Pert.
GA
88.2
9.4
35.5
3.4
PSO
75.6
6.4
47.3
2.8
HLBB
65.3
4.5
77.0
4.8
TextHacker
81.5
3.4
80.6
4.7
+
+Table 8: Attack success rate (Succ., %) ↑, perturbation rate (Pert., %) ↓ of TextHacker and the baselines on BERT using IMDB dataset under the query budget of 2,000 when the population size $S = 4$ and $S = 30$ .
+
+# B Why Do Population-based Baselines Perform Poor?
+
+To further analyze why the baselines perform poorly under tight budgets, we show the performance of our TextHacker and the population-based baselines on BERT using IMDB dataset under the same population size $S = 4$ and $S = 30$ (commonly used in GA, PSO and HLBB). Note that TextHoaxter is a non-population-based algorithm and is not considered in this experiment. As shown in Table 8, when $S = 4$ , the low population size makes it difficult to seriously explore the search space and find the optimal adversarial example for GA and PSO, resulting in high perturbation rate. When $S = 30$ , GA and PSO cost too many queries in each iteration. Thus, tight budget makes it difficult for them to fully explore the entire search space to find adversarial examples, resulting in low attack success rate. In contrast, the adversary initialization by random walks ensures high attack success rate of TextHacker and HLBB even under tight budgets. And the word importance learned by attack history helps TextHacker explore more efficiently and obtain lower perturbation rate.
+
+# C Human Evaluation
+
+Human beings are very sensitive and subjective to texts. Even minor synonym substitutions may change the feeling of people, resulting in different evaluations. Therefore, human evaluation is also necessary to evaluate the quality of adversarial examples. We perform the human evaluation on 20 benign texts and the corresponding adversarial examples generated by TextHacker, HLBB and TextHoaxer on BERT using MR dataset. Note that the texts in the MR dataset are shorter, averaging only 20 words per sentence, making it easier for humans to detect the adversarial examples. We invite 20 volunteers to label the adversarial exam
+
+
+(a) Parameter study for various $\delta$
+
+
+(b) Parameter study for various $S$
+Figure 5: The attack success rate $(\%)$ ↑ and perturbation rate $(\%)$ ↓ of TextHacker on BERT using IMDB dataset, when varying the neighborhood size $\delta$ , population size $S$ or maximum number of local search $N$ .
+
+
+(c) Parameter study for various $N$
+
+plies, i.e., positive or negative, and score for the similarity between the benign sample and its adversarial example from 1 (very similar) to 5 (very different). The survey results show that $84.5\%$ of the adversarial examples on TextHacker (vs. $79.0\%$ on HLBB and $81.5\%$ on TextHoaxter) are labeled the same as the original samples, and the average similarity score is 1.9 (vs. 2.4 on HLBB and 2.1 on TextHoaxter). It demonstrates that the adversarial examples generated by TextHacker are of higher quality and harder to be detected by humans than that of HLBB and TextHoaxter.
+
+# D More Visualizations of Weight Table
+
+Here we present more case studies as the extension of Section 4.6 in Figure 6, 7, 8, and the adversarial examples generated by various hard-label attacks in Table 9, 10, 11. These visualizations further verify the consistency between the weight table and the word importance table, proving the effectiveness of the learned weight table in TextHacker.
+
+Original Text. Label: Positive
+
+Adversarial Text. Label: Negative
+
+Both lead performances are oscar size quaid is utterly fearless as the tortured husband living a painful lie, and moore wonderfully underplays the long suffering heroine with an unflappable 50s dignity somewhere between jane wyman and june cleaver.
+
+Both lead performances are oscar size quaid is utterly fearless as the tortured husband living a painful lie, and moore marvellously underplays the long suffers heroine with an unflappable 50s decency somewhere between jane wyman and june cleaver.
+
+
+Weight Table
+
+
+Word Importance Table
+Figure 6: Visualization of the weight table in TextHacker and the word importance table from the victim model, representing the word importance of nouns, verbs, adjectives, adverbs, and their candidate words in the original text as shown in Table 9. The original words are highlighted in Cyan, with each row representing the candidate words. The substituted words are highlighted in Red with marker $\star$ . A darker color indicates a more important word.
+
+
Attack
Original Text & Adversarial Example
Prediction
Original Text
Both lead performances are Oscar size quaid is utterly fearless as the tortured husband living a painful lie, and moore wonderfully underplays the long suffering heroine with an unflappable 50s dignity somewhere between jane wyman and june cleaver.
Positive
HLBB
Both lead (leaded) performances are Oscar size quaid is utterly fearless (brave) as the tortured (tortures) husband (hobby) living a painful (agonizing) lie, and moore wonderfully underplays the long suffering (suffer) heroine (smack) with an unflappable 50s dignity (decency) somewhere between jane wyman and june cleaver.
Negative
TextHoaxter
Both lead performances are Oscar size quaid is utterly fearless as the tortured (tortures) husband (hobby) living a painful (agonizing) lie, and moore wonderfully underplays the long suffering (suffers) heroine (smack) with an unflappable (easygoing) 50s dignity somewhere (nowhere) between jane wyman and june cleaver.
Negative
TextHacker
Both lead performances are Oscar size quaid is utterly fearless as the tortured husband living a painful lie, and moore wonderfully (marvellously) underplays the long suffering (suffers) heroine with an unflappable 50s dignity (decency) somewhere between jane wyman and june cleaver.
Negative
+
+Table 9: The original text from MR dataset and the adversarial example generated by various hard-label attacks (HLBB, TextHoaxter and TextHacker) on BERT. We highlight the words replaced by the attacks in Red. The corresponding original words are highlighted in Cyan.
+
+
Original Text.
Label: Business
Skulls on your symbian phone? don't panic! petaling jaya : virus experts at british software security firm sophos plc have advised customers not to panic, following media reports of a trojan horse which infects cellphones.
Adversarial Text.
Label: Sports
Frantz on your symbian phone? don't panic! petaling jaya : virus experts at british software insurance firm sophos plc have advised customers not to panic, following media reports of a troy horse which injury cellphones.
+
+
+Weight Table
+
+
+Word Importance Table
+Figure 7: Visualization of the weight table in TextHacker and the word importance table from the victim model, representing the word importance of nouns, verbs, adjectives, adverbs, and their candidate words in the original text as shown in Table 10. The original words are highlighted in Cyan, with each row representing the candidate words. The substituted words are highlighted in Red with marker $\star$ . A darker color indicates a more important word.
+
+
Attack
Original Text & Adversarial Example
Prediction
Original Text
Skulls on your symbian phone? don’t panic! petaling jaya : virus experts at british software security firm sophos plc have advised customers not to panic, following media reports of a trojan horse which infects cellphones.
Business
HLBB
Skulls on your symbian phone? don’t panic! petaling jaya : virus (infection) experts at british software (sw) security firm sophos plc have advised customers not to panic, following media reports of a trojan (spartans) horse which infects (injury) cellphones (telephones).
Sports
TextHoaxter
Skulls on your symbian phone? don’t panic! petaling jaya (gaya) : virus experts at british software (sw) security (insurance) firm (resolute) sophos plc have advised customers not to panic, following media reports of a trojan (spartans) horse which infects cellphones.
Sports
TextHacker
Skulls (Frantz) on your symbian phone? don’t panic! petaling jaya : virus experts at british software security (insurance) firm sophos plc have advised customers not to panic, following media reports of a trojan (troy) horse which infects (injury) cellphones.
Sports
+
+Table 10: The original text from AG's News dataset and the adversarial example generated by various hard-label attacks (HLBB, TextHoaxter and TextHacker) on BERT. We highlight the words replaced by the attacks in Red. The corresponding original words are highlighted in Cyan.
+
+Original Text. Label: Entertainment & Music
+
+What movie is the saying odoyle rules in ?? I think it might have been billy madison but I'm not sure. Yes you're right Billy Madison.
+
+Adversarial Text. Label: Education & Reference
+
+What filmmaking is the saying odoyle regulation in ?? I think it might have been billy madison but I'm not sure. Yes you're right Billy Madison.
+
+
+Weight Table
+
+
+Word Importance Table
+Figure 8: Visualization of the weight table in TextHacker and the word importance table from the victim model, representing the word importance of nouns, verbs, adjectives, adverbs, and their candidate words in the original text as shown in Table 11. The original words are highlighted in Cyan, with each row representing the candidate words. The substituted words are highlighted in Red with marker $\star$ . A darker color indicates a more important word.
+
+
Attack
Original Text & Adversarial Example
Prediction
Original Text
What movie is the saying odoyle rules in ?? I think it might have been billy madison but I'm not sure. Yes you're right Billy Madison.
Entertainment & Music
HLBB
What movie (filmmaking) is the saying (proverb) odoyle rules in ?? I think it might have been billy madison but I'm not (no) sure (secure). Yes you're right Billy Madison.
Education & Reference
TextHoaxter
What movie (filmmaking) is the saying (proverb) odoyle rules in ?? I think it might (perhaps) have (ha) been (undergone) billy madison but I'm not sure. Yes you're right Billy Madison.
Education & Reference
TextHacker
What movie (filmmaking) is the saying odoyle rules (regulation) in ?? I think it might have been billy madison but I'm not sure. Yes you're right Billy Madison.
Education & Reference
+
+Table 11: The original text from Yahoo! Answers dataset and the adversarial example generated by various hard-label attacks (HLBB, TextHoaxter and TextHacker) on BERT. We highlight the words replaced by the attacks in Red. The corresponding original words are highlighted in Cyan.
\ No newline at end of file
diff --git a/texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/images.zip b/texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..5588935d69e15c607d2310f094801cf83e7d1163
--- /dev/null
+++ b/texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f07d0027d573d3ca60f1adda7f72af7273c886e46453aeceab08061eac3633af
+size 1166985
diff --git a/texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/layout.json b/texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..885ebf67e83c5c09d088cd08f282dd2a2ce81f90
--- /dev/null
+++ b/texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a89e344c0f4ec5bea7a2fa8a1dde41bdf6a404271bb803ea3ab0a1cb62bed13e
+size 591175
diff --git a/textonlytrainingforimagecaptioningusingnoiseinjectedclip/140abd8e-8322-4276-88e8-1cd1f845c2ae_content_list.json b/textonlytrainingforimagecaptioningusingnoiseinjectedclip/140abd8e-8322-4276-88e8-1cd1f845c2ae_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..05ebf6fb854e6cd9d8ffe34bd1d06d6a8532c495
--- /dev/null
+++ b/textonlytrainingforimagecaptioningusingnoiseinjectedclip/140abd8e-8322-4276-88e8-1cd1f845c2ae_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5428af4b693c307b338cec82d1c37c2dcc32f07224b08dfd132405d58cedc4c3
+size 54953
diff --git a/textonlytrainingforimagecaptioningusingnoiseinjectedclip/140abd8e-8322-4276-88e8-1cd1f845c2ae_model.json b/textonlytrainingforimagecaptioningusingnoiseinjectedclip/140abd8e-8322-4276-88e8-1cd1f845c2ae_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..997796973094d20eeb5981637b9b8d1b49ae00cb
--- /dev/null
+++ b/textonlytrainingforimagecaptioningusingnoiseinjectedclip/140abd8e-8322-4276-88e8-1cd1f845c2ae_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eb6c4f21af9a9365ec21c47f6b9fedc3b40c6bcc35f9f9f7e162e0e270c06b0e
+size 70980
diff --git a/textonlytrainingforimagecaptioningusingnoiseinjectedclip/140abd8e-8322-4276-88e8-1cd1f845c2ae_origin.pdf b/textonlytrainingforimagecaptioningusingnoiseinjectedclip/140abd8e-8322-4276-88e8-1cd1f845c2ae_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..fbe1f8a8eb1e19f478d3cd073c67d0c8be13b237
--- /dev/null
+++ b/textonlytrainingforimagecaptioningusingnoiseinjectedclip/140abd8e-8322-4276-88e8-1cd1f845c2ae_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c2fc4771f11d274f3941530a6058fb741bd2dbba755c29f8e074001c391a3e7c
+size 952576
diff --git a/textonlytrainingforimagecaptioningusingnoiseinjectedclip/full.md b/textonlytrainingforimagecaptioningusingnoiseinjectedclip/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f571fb26de6e53dbdab0218b122ba983aedec2ac
--- /dev/null
+++ b/textonlytrainingforimagecaptioningusingnoiseinjectedclip/full.md
@@ -0,0 +1,230 @@
+# Text-Only Training for Image Captioning using Noise-Injected CLIP
+
+David Nukrai
+
+Ron Mokady
+
+Amir Globerson
+
+Blavatnik School of Computer Science, Tel Aviv University
+
+# Abstract
+
+We consider the task of image-captioning using only the CLIP model and additional text data at training time, and no additional captioned images. Our approach relies on the fact that CLIP is trained to make visual and textual embeddings similar. Therefore, we only need to learn how to translate CLIP textual embeddings back into text, and we can learn how to do this by learning a decoder for the frozen CLIP text encoder using only text. We argue that this intuition is "almost correct" because of a gap between the embedding spaces, and propose to rectify this via noise injection during training. We demonstrate the effectiveness of our approach by showing SOTA zero-shot image captioning across four benchmarks, including style transfer. Code, data, and models are available at https://github.com/DavidHuji/CapDec.
+
+# 1 Introduction
+
+Vision and language are closely intertwined, as they are two ways of describing the world. This raises the potential for developing models that map images and text into a shared semantic space. Indeed, this approach has recently achieved great success with models like CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021). These models use parallel image-text data to train a joint representation, where the embeddings of image-text pairs are similar. Such models have been employed for various vision-language tasks.
+
+Image captioning is a key task in vision-language perception. Yet, training image captioning models typically requires large datasets of captioned images, and these are challenging to collect. Furthermore, it is not clear how one could adapt a pretrained vision-language model to generate captions in new styles. In this work, we present an approach to captioning that only requires CLIP and text data, and generates styled captions using only unpaired textual examples from that style. This
+
+alleviates the need for paired text-image data, and also allows for simple style transfer.
+
+A first approach one could consider for this setting is to train a decoder model to reconstruct texts from their respective CLIP embeddings, and at inference use this decoder to decode image embeddings. However, we observed that this approach fails at inference, and we conjecture this is due to the known domain gap between the image and text modalities (Liang et al., 2022). We propose a simple approach to mitigate this, by injecting noise into the embedding during training. This has the effect of creating a ball in embedding space that will map to the same caption, and corresponding image-embedding is more likely to be inside this ball, as illustrated in Fig. 1.a.
+
+We evaluate our "Captioning via Decoding" (CapDec) method extensively, showing that it works well on several image captioning tasks, including standard, cross-domain, and style-guided captioning. Overall, our main contributions are as follows: 1) A simple and intuitive approach to learning a captioning model based on CLIP and additional text training data, but no images for training. 2) Evaluation of CapDec on image captioning tasks, including generating captions in various styles, shows it outperforms other methods which use the same supervision.
+
+# 2 Related Work
+
+Image captioning methods (Chen and Zitnick, 2014; Chen et al., 2017; Yang et al., 2019; Herdade et al., 2019; Luo et al., 2021; Tsimpoukelli et al., 2021) typically extract visual features using a pre-trained network. These are passed to a textual decoder that produces the final captions. To bridge the gap between vision and language, other works employ pre-training to create a shared latent space of vision and text (Tan and Bansal, 2019; Laina et al., 2019; Lu et al., 2019; Li et al., 2020; Zhou et al., 2020; Zhang et al., 2021; Wang et al., 2021;
+
+
+
+
+Figure 1: Overview of our CapDec captioning approach. (a) An illustration of the CLIP joint embedding space. Embedded text is relatively close to its corresponding visual embedding, but with a certain gap. (b) CapDec trains a model that decodes the CLIP embedding of text $T$ back to text $T$ , after noise-injection. The encoders remain frozen. (c) At inference, CapDec simply decodes the embedding of an image using the trained decoder.
+
+
+
+Hu et al., 2022). However, all of these approaches require extensive training and large paired datasets that are hard to collect. Gan et al. (2017) and Zhao et al. (2020) have suggested style-guided captioning, but also employ training over paired data.
+
+CLIP (2021) marked a turning point in vision-language perception, and has been utilized for vision-related tasks by various distillation techniques (Gu et al., 2021; Song et al., 2022; Jin et al., 2021; Gal et al., 2021; Xu et al., 2021; Khandelwal et al., 2022). Recent captioning methods use CLIP for reducing training time (Mokady et al., 2021), improved captions (Shen et al., 2021; Luo et al., 2022a,b; Cornia et al., 2021; Kuo and Kira, 2022), and in zero-shot settings (Su et al., 2022; Tewel et al., 2022). However, zero-shot techniques often result in inferior performance, as the produced captions are not compatible with the desired target style, which is usually dictated by a dataset. In this work, we suggest a new setting, where we adapt CLIP to image captioning using only textual data. As a result, we can easily adapt captions to any desired caption style given instances of text in that style. Concurrent work by Su et al. (2022) efficiently produces high-quality captions with the minimal supervision of text-only pre-training by employing CLIP-induced score at inference. Our approach is arguably simpler and also outperforms Su et al. (2022) empirically. Note that Zhou et al. (2021) have also employed noise-injection, but for the opposite problem of CLIP-based text-free text-to-image generation.
+
+# 3 Method
+
+Text-Only Training. Our goal is to learn a model that produces a caption for a given image $I$ . Unlike supervised approaches, we assume that during training we only have access to a set of texts $\mathcal{T}$ . These can be obtained by harvesting a text corpus. We next introduce notation for the CLIP model. Given an image $I$ let $\phi(I) \in \mathbb{R}^d$ be its embedding, and given a text $T$ let $\psi(T) \in \mathbb{R}^d$ be its embedding. For converting a vector $\pmb{v} \in \mathbb{R}^d$ into a caption, we use a textual decoder $C(\pmb{v})$ consisting of a lightweight mapping network and a pretrained auto-regressive language model, as suggested in Mokady et al. (2021).
+
+We train the decoder as follows (except for the noise-injection which we introduce below). Each text $T \in \mathcal{T}$ is first mapped to CLIP space via $\psi(T)$ and then decoded back into a text via $C(\psi(T))$ . We would like this decoding to be similar to the original text $T$ . Namely, our training objective is a reconstruction of the input text from CLIP's textual embedding. At inference, given an image $I$ we simply apply the decoder to $\phi(I)$ , returning the caption $C(\phi(I))$ .
+
+Noise-Injected CLIP Embeddings. We observed that the above training scheme results in inaccurate captions during inference. We conjecture this is because the embeddings of the text and image modalities are separated by a domain gap, as shown in Liang et al. (2022). As a result, while text reconstruction is successful during training,
+
+(A) Image Captioning
+
+
Model
MS-COCO
Flickr30k
B@1
B@4
M
R-L
CIDEr
B@1
B@4
M
R-L
CIDEr
Fully Supervised Approaches
BUTD
77.2
36.2
27.0
56.4
113.5
-
27.3
21.7
-
56.6
UniVLP
-
36.5
28.4
-
116.9
-
30.1
23.0
-
67.4
ClipCap
74.7
33.5
27.5
-
113.1
-
21.7
22.1
47.3
53.5
Oscar
-
36.5
30.3
-
123.7
-
-
-
-
-
LEMON
-
40.3
30.2
-
133.3
-
-
-
-
-
Weakly or Unsupervised Approaches
ZeroCap
49.8
7.0
15.4
31.8
34.5
44.7
5.4
11.8
27.3
16.8
MAGIC
56.8
12.9
17.4
39.9
49.3
44.5
6.4
13.1
31.6
20.4
CapDec
69.2
26.4
25.1
51.8
91.8
55.5
17.7
20.0
43.9
39.1
+
+(B) Cross-Domain Captioning
+
+
Flickr30k ⇒ MS-COCO
MS-COCO ⇒ Flickr30k
B@1
B@4
M
R-L
CIDEr
B@1
B@4
M
R-L
CIDEr
MAGIC
41.4
5.2
12.5
30.7
18.3
46.4
6.2
12.2
31.3
17.5
CapDec
43.3
9.2
16.3
36.7
27.3
60.2
17.3
18.6
42.7
35.7
+
+Table 1: Results for image captioning. (A) We use captions from the COCO and Flickr30k to train CapDec and evaluate on the datasets the captions were taken from. We report results for fully supervised methods that train on captioned images, and on methods that use no training text (ZeroCap), or just training text and no images (CapDec and MAGIC). (B) Similar setting to (A), but in cross-domain setup where training text is taken from one dataset, and evaluation is done on the second dataset.
+
+inference fails when using image embeddings instead. If image-text pairs were available, we could attempt to learn a mapping between these domains. Nevertheless, as we aim for text-only training, we shall seek a different approach.
+
+Specifically, we assume that the visual embedding corresponding to a text embedding lies somewhere within a ball of small radius $\epsilon$ around the text embedding (see Fig. 1). We would like all text embeddings in this ball to decode to the same caption, which should also correspond to the visual content mapped to this ball. We implement this intuition by adding zero-mean Gaussian noise of STD $\epsilon$ to the text embedding before decoding it.
+
+The value of $\epsilon$ is calculated by estimating the spread of captions corresponding to the same image. Specifically, we set $\epsilon$ to the mean $\ell_{\infty}$ norm of embedding differences between five captions that correspond to the same image. We estimated this based on captions of only 15 MS-COCO images. Since this calculation requires very few captions and there is no need to recalculate it for every new dataset, we do not view it as additional supervision.
+
+Our overall training objective is thus to minimize:
+
+$$
+\sum_ {T \in \mathcal {T}} \ell (C (\boldsymbol {\psi} (T) + \boldsymbol {n}), T), \tag {1}
+$$
+
+where $\pmb{n} \in \mathbb{R}^d$ is a random standard Gaussian noise with standard-deviation $\epsilon$ and $\ell$ is an auto-regressive cross-entropy loss for all tokens in $T$ . We train just the parameters of the textual decoder $C$ , while the encoder $\psi(\cdot)$ is kept frozen. The noise is sampled independently at each application of the encoder.
+
+# 4 Results
+
+We next evaluate CapDec on several captioning tasks, demonstrating state-of-the-art results. See supplementary for additional details.
+
+Image Captioning. We compare CapDec caption quality to several baselines with different supervision levels, as presented in Tab. 1(A). Here, all methods were trained end evaluated over the same dataset, using the commonly used MS-COCO (Lin et al., 2014; Chen et al., 2015) and Flickr30k (Young et al., 2014). We begin by evaluating fully supervised techniques: BUTD (Anderson et al., 2018), UniVLP (Zhou et al., 2020), ClipCap (Mokady et al., 2021), Oscar (Li et al., 2020), and Lemon (Hu et al., 2022). As expected, these achieve a better score than CapDec, as they exploit the additional supervision of image-text pairs. Nevertheless, compared to the unsupervised ap
+
+
Model
Romantic
Humorous
B@1
B@3
M
C
B@1
B@3
M
C
StyleNet
13.3
1.5
4.5
7.2
13.4
0.9
4.3
11.3
MemCap
21.2
4.8
8.4
22.4
19.9
4.3
7.4
19.4
CapDec + Image-Text Pre-training
27.9
8.9
12.6
52.2
29.4
8.8
13.2
55.1
CapDec + Text-Only Pre-training
23.0
4.6
9.1
27.4
22.7
4.3
9.7
29.0
CapDec
21.4
5.0
9.6
26.9
24.9
6.0
10.2
34.1
+
+Table 2: Style-Guided captioning results on FlickrStyle10K (Gan et al., 2017).
+
+proaches of MAGIC (Su et al., 2022) and ZeroCap (Tewel et al., 2022), CapDec achieves superior scores. Note that ZeroCap does not require any training data, while MAGIC requires text data similar to our setting.
+
+Cross-Domain Captioning. We test our generalization ability by training on one dataset while evaluating on another, as in Su et al. (2022). Again, as can be seen in Tab. 1(B), CapDec outperforms MAGIC (Su et al., 2022), which uses the same supervision as CapDec.
+
+Style-Guided Captioning. Several works (Zhao et al., 2020; Gan et al., 2017) have studied the task of adapting a captioning model to a new style, such as "romantic" or "humorous". Since collecting paired examples for each style requires great effort, these consider the setting where the new style is only learned from text. This is easy to do in our setting, since we can train the decoder on any given style text. Fig. 2 shows captions generated with CapDec in several styles (same setting and data as in Zhao et al. (2020)). Tab. 2 reports quantitative results for this setting, showing CapDec outperforms other baselines. To further analyze our approach, we present our results without pretraining (i.e., training on styled data only), with a text-only pre-training over COCO, and with text-image pre-training over COCO (similar to (Zhao et al., 2020)). As can be seen, we outperform (Zhao et al., 2020) even with considerably less supervision at pre-training. Moreover, both other variations improve results, demonstrating that CapDec can effectively use additional training data where available.
+
+The Effect of Noise Level. A key element of CapDec is noise injection before decoding. To demonstrate the effect of noise, we report results as a function of the noise variance $\epsilon^2$ in Fig. 3. It can be seen that too little or too much noise is
+
+
+Humorous: two golden retrievers fight for supremacy at a beach contest
+
+
+Humorous: a person hikes down a snowy mountain to reach outer space
+
+
+Humorous: a hobbit walks through a tunnel to find a gateway to the future
+
+
+Humorous: little girl giggles as she slides down a playground slide thinking she's a monkey
+Romantic: a climber hikes down a snowy mountain to conquer the high
+
+
+Romantic: two tan dogs play tag in the water celebrating their friendship
+Romantic: a person walks through a water tunnel to reach his destiny
+Romantic: a little girl in a pink diaper shows off for her favorite toy at the park
+Figure 2: Example for styled captions of CapDec on FlickrStyle10K (Gan et al., 2017).
+Figure 3: The effect of the noise variance on MS-COCO performance.
+
+suboptimal. We note that the noise variance we chose, $\epsilon^2 = 0.016$ , is based only on text, and not on the results in Fig. 3 which are shown for analysis purposes only.
+
+# 5 Noise Injection Analysis
+
+Noise-injection is a well-known technique for improving generalization (Reed and MarksII, 1999; Bishop, 1995; An, 1996; Vincent et al., 2010), and can be viewed as a data augmentation mechanism
+
+
+Figure 4: Analysis of performance of different methods as a function of the noise level (see Sec.5). We show the CiDER metric (higher is better), as other metrics show similar trends. CapDec here is the same as in Fig.3
+
+(Goodfellow et al., 2016). In our case, the use of noise was also meant to address the modality-gap observed in Liang et al. (2022). In order to examine the specific effect of noise, we perform additional evaluations on COCO and show the results in Fig.4.
+
+Text-Reconstruction: We encode COCO captions using CLIP text embedding and decode them using the learned CapDec model. This does not involve images at all, and is meant to test whether noise injection simply serves as regularization for text auto-encoding. Fig.4 shows that adding noise does not help, and thus suggests that noise is not merely functioning as augmentation.
+
+ClipCap: Recall that ClipCap is trained on joint text-image pairs (Mokady et al., 2021). Here we trained ClipCap by adding noise to the image embeddings during training. It can be seen that noise does not improve performance, again suggesting that improvement is due to its specific role in domain-gap correction.
+
+Modalities Offset: Given sufficient training paired-data, one could presumably learn the modalities-gap and correct for it. Here we test a simple approximation of the gap, that does not require image-text data to be paired, by calculating the shift between the mean of text embeddings and the mean of image embeddings in COCO. Then, given an image, we add the shift to its embedding to "correct" for this gap, and apply the CapDec trained decoder to the resulting embedding. Had this mapping been perfect, CapDec would not have needed additional noise injection. The results in Fig.4 show that the offset-correction does outperform CapDec at $\epsilon^2 < 0.01$ , but underperforms overall. This suggests that the gap was not perfectly estimated, and that noise injection still serves to
+
+mitigate it. We leave it for future research to consider a more complex or fully-supervised model that learns the modality-gap explicitly.
+
+# 6 Conclusion
+
+The image captioning task has been extensively studied, with considerable progress in recent years. However, the number of available training datasets, containing image-text pairs, is still rather limited. Consequently, image captioning models inherit the limitations of their training data, such as biases (Hendricks et al., 2018) or confinement to neutral style (Gan et al., 2017). In this work, we suggest a new paradigm, where a generic vision-language model (e.g., CLIP) is adapted to image captioning using a text-only dataset. Furthermore, we demonstrate a simple and intuitive technique to overcome the inherent domain gap of CLIP (Liang et al., 2022). For future work, we plan to study text-only training for other tasks, such as visual question answering and visual scene graph generation.
+
+# 7 Ethics Statement
+
+Image captioning models are notorious for their internal biases (Hendricks et al., 2018). These biases are usually inherited from the training data itself. We observe that since balancing a text-only dataset is much more feasible than collecting balanced text-image pairs, CapDec can be used to mitigate those biases. For instance, consider the problem of a dataset containing significantly more images of snowboarding men than women. Collecting more images requires substantial effort while replacing "man" with "woman" (and their synonyms) in all captions is quite simple. Therefore, our text-only training might mitigate some of the inherited bias.
+
+# 8 Limitations
+
+We observe that although CapDec achieves superior results compared to the baselines that use only text at training, it is still outperformed by fully supervised baselines. Since CLIP captures rich semantics in its latent space, we believe that text-only training can be further improved up to the almost same quality as supervised techniques in future work. In addition, note that CapDec relies on CLIP and a language model both of which were pre-trained on large English corpora. Therefore, we find the important task of extending CapDec's capabilities to other languages to be a significant challenge.
+
+# Acknowledgments
+
+This work was supported by the Blavatnik Interdisciplinary Research Center (ICRC). We thank to Amir Hertz for sharing relevant code parts from his work on ClipCap (Mokady et al., 2021).
+
+# References
+
+Guozhong An. 1996. The effects of adding noise during backpropagation training on a generalization performance. Neural computation, 8(3):643-674.
+Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077-6086.
+Chris M Bishop. 1995. Training with noise is equivalent to Tikhonov regularization. Neural computation, 7(1):108-116.
+Long Chen, Hanwang Zhang, Jun Xiao, Liqiang Nie, Jian Shao, Wei Liu, and Tat-Seng Chua. 2017. Scacnn: Spatial and channel-wise attention in convolutional networks for image captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5659-5667.
+Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dólar, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325.
+Xinlei Chen and C Lawrence Zitnick. 2014. Learning a recurrent visual representation for image caption generation. arXiv preprint arXiv:1411.5654.
+Marcella Cornia, Lorenzo Baraldi, Giuseppe Fiameni, and Rita Cucchiara. 2021. Universal captioner: Long-tail vision-and-language model training through content-style separation.
+Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation, pages 376-380.
+Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, and Daniel Cohen-Or. 2021. Stylegan-nada: Clip-guided domain adaptation of image generators. arXiv preprint arXiv:2108.00946.
+Chuang Gan, Zhe Gan, Xiaodong He, Jianfeng Gao, and Li Deng. 2017. Stylenet: Generating attractive visual captions with styles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3137-3146.
+Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep learning. MIT press.
+Xiuye Gu, Tsung-Yi Lin, Weicheng Kuo, and Yin Cui. 2021. Open-vocabulary object detection via vision and language knowledge distillation. arXiv preprint arXiv:2104.13921.
+
+Lisa Anne Hendricks, Kaylee Burns, Kate Saenko, Trevor Darrell, and Anna Rohrbach. 2018. Women also snowboard: Overcoming bias in captioning models. In Proceedings of the European Conference on Computer Vision (ECCV), pages 771-787.
+Simao Herdade, Armin Kappeler, Kofi Boakye, and Joao Soares. 2019. Image captioning: Transforming objects into words. arXiv preprint arXiv:1906.05963.
+Xiaowei Hu, Zhe Gan, Jianfeng Wang, Zhengyuan Yang, Zicheng Liu, Yumao Lu, and Lijuan Wang. 2022. Scaling up vision-language pre-training for image captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17980-17989.
+Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904-4916. PMLR.
+Ying Jin, Yinpeng Chen, Lijuan Wang, Jianfeng Wang, Pei Yu, Zicheng Liu, and Jenq-Neng Hwang. 2021. Is object detection necessary for human-object interaction recognition? arXiv preprint arXiv:2107.13083.
+Andrej Karpathy and Li Fei-Fei. 2015. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3128-3137.
+Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi, and Aniruddha Kembhavi. 2022. Simple but effective: Clip embeddings for embodied ai. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14829-14838.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.
+Chia-Wen Kuo and Zsolt Kira. 2022. Beyond a pretrained object detector: Cross-modal textual and visual context for image captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17969-17979.
+Iro Laina, Christian Rupprecht, and Nassir Navab. 2019. Towards unsupervised image captioning with shared multimodal embeddings. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7414-7424.
+Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. 2020. Oscar: Object-semantics aligned pre-training for vision-language tasks. In European Conference on Computer Vision, pages 121-137. Springer.
+
+Weixin Liang, Yuhui Zhang, Yongchan Kwon, Serena Yeung, and James Zou. 2022. Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning. arXiv preprint arXiv:2203.02053.
+Chin-Yew Lin and Franz Josef Och. 2004. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 605-612.
+Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer.
+Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
+Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. arXiv preprint arXiv:1908.02265.
+Yunpeng Luo, Jiayi Ji, Xiaoshuai Sun, Liujuan Cao, Yongjian Wu, Feiyue Huang, Chia-Wen Lin, and Rongrong Ji. 2021. Dual-level collaborative transformer for image captioning. arXiv preprint arXiv:2101.06462.
+Ziyang Luo, Yadong Xi, Rongsheng Zhang, and Jing Ma. 2022a. A frustratingly simple approach for end-to-end image captioning.
+Ziyang Luo, Yadong Xi, Rongsheng Zhang, and Jing Ma. 2022b. I-tuning: Tuning language models with image for caption generation. arXiv preprint arXiv:2202.06574.
+Ron Mokady, Amir Hertz, and Amit H Bermano. 2021. Clipcap: Clip prefix for image captioning. arXiv preprint arXiv:2111.09734.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318.
+Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer imageto-sentence models. In Proceedings of the IEEE international conference on computer vision, pages 2641-2649.
+Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020.
+
+Russell Reed and Robert J MarksII. 1999. Neural smithing: supervised learning in feedforward artificial neural networks. Mit Press.
+Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, and Kurt Keutzer. 2021. How much can clip benefit vision-and-language tasks? arXiv preprint arXiv:2107.06383.
+Haoyu Song, Li Dong, Wei-Nan Zhang, Ting Liu, and Furu Wei. 2022. Clip models are few-shot learners: Empirical studies on vqa and visual entailment. arXiv preprint arXiv:2203.07190.
+Yixuan Su, Tian Lan, Yahui Liu, Fangyu Liu, Dani Yogatama, Yan Wang, Lingpeng Kong, and Nigel Collier. 2022. Language models can see: Plugging visual controls in text generation. arXiv preprint arXiv:2205.02655.
+Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490.
+Yoad Tewel, Yoav Shalev, Idan Schwartz, and Lior Wolf. 2022. Zerocap: Zero-shot image-to-text generation for visual-semantic arithmetic. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17918-17928.
+Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, SM Eslami, Oriol Vinyals, and Felix Hill. 2021. Multimodal few-shot learning with frozen language models. Advances in Neural Information Processing Systems, 34.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.
+Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566-4575.
+Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, Pierre-Antoine Manzagol, and León Bottou. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of machine learning research, 11(12).
+Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. 2021. Simvlm: Simple visual language model pretraining with weak supervision. arXiv preprint arXiv:2108.10904.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le
+
+Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+
+Mengde Xu, Zheng Zhang, Fangyun Wei, Yutong Lin, Yue Cao, Han Hu, and Xiang Bai. 2021. A simple baseline for zero-shot semantic segmentation with pre-trained vision-language model. arXiv preprint arXiv:2112.14757.
+
+Xu Yang, Hanwang Zhang, and Jianfei Cai. 2019. Learning to collocate neural modules for image captioning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4250-4260.
+
+Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67-78.
+
+Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. 2021. Vinvl: Revisiting visual representations in vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5579-5588.
+
+Wentian Zhao, Xinxiao Wu, and Xiaoxun Zhang. 2020. Memcap: Memorizing style knowledge for image captioning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 12984-12992.
+
+Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason Corso, and Jianfeng Gao. 2020. Unified vision-language pre-training for image captioning and vqa. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 13041-13049.
+
+Yufan Zhou, Ruiyi Zhang, Changyou Chen, Chunyuan Li, Chris Tensmeyer, Tong Yu, Jiumiang Gu, Jinhui Xu, and Tong Sun. 2021. Lafite: Towards language-free training for text-to-image generation. arXiv preprint arXiv:2111.13792.
+
+# A Appendix
+
+# A.1 Implementation Details
+
+We use the RN-50x4 backbone for CLIP image encoder, and GPT-2 (large) as our language model (implementation of Wolf et al.(Wolf et al., 2020)). Following ClipCap (Mokady et al., 2021), for the decoder architecture, we use a transformer-based (Vaswani et al., 2017) mapping network where we set the CLIP embedding length of $K = 40$ with additional $K = 40$ constants tokens and use 8 multi-head self-attention layers with 8 heads each.
+
+For optimization, we employed AdamW (Kingma and Ba, 2015) with weight decay as introduced by Loshchilov et al. (Loshchilov and Hutter, 2017), with a learning rate of $2e^{-5}$ and 5000 warm-up steps.
+
+# A.2 Datasets and Evaluation Metrics
+
+When evaluating over MS-COCO (Chen et al., 2015) and Flickr30k (Plummer et al., 2015), we followed Karpathy(Karpathy and Fei-Fei, 2015) split, similar to (Su et al., 2022) and (Mokady et al., 2021). Considering the FlickrStyle10K (Gan et al., 2017) dataset, we followed (Zhao et al., 2020), and split the dataset randomly to $6/7$ , and $1/7$ of training and test sets, correspondingly. For qualitative evaluation, we employ the commonly used BLEU (Papineni et al., 2002) $(\mathbf{B}@\mathbf{1},\mathbf{B}@\mathbf{4})$ , METEOR (Denkowski and Lavie, 2014) (M), ROUGE-L (Lin and Och, 2004) (R-L), and CIDEr (Vedantam et al., 2015) (C) metrics.
+
+# A.3 Qualitative Comparison
+
+All qualitative scores were reproduced or obtained from the works of (Su et al., 2022) and (Zhao et al., 2020) after carefully validating we use the same splits. Our metrics implementation is adapted from the official implementation of (Li et al., 2020).
\ No newline at end of file
diff --git a/textonlytrainingforimagecaptioningusingnoiseinjectedclip/images.zip b/textonlytrainingforimagecaptioningusingnoiseinjectedclip/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..dd457e17b19e61abef1ecc48c1c5baeae4313eef
--- /dev/null
+++ b/textonlytrainingforimagecaptioningusingnoiseinjectedclip/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b017442b458ea715a44a91c793a2c0f4876ff6cfc0c6005482bde9822d7ceb18
+size 239839
diff --git a/textonlytrainingforimagecaptioningusingnoiseinjectedclip/layout.json b/textonlytrainingforimagecaptioningusingnoiseinjectedclip/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..66e70c7e46067f495e58610c7d6c533b6df0238d
--- /dev/null
+++ b/textonlytrainingforimagecaptioningusingnoiseinjectedclip/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9801ee85bb69425f29841b723ad8f3b0fb1840d5e91f6b1c01a4cc34a6f5994
+size 269888
diff --git a/textualenhancedcontrastivelearningforsolvingmathwordproblems/8686ea23-2acd-4e21-bd8d-e449d28ae474_content_list.json b/textualenhancedcontrastivelearningforsolvingmathwordproblems/8686ea23-2acd-4e21-bd8d-e449d28ae474_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f126a5ebe3ab928d8caf10ec97ff26a3aea731cb
--- /dev/null
+++ b/textualenhancedcontrastivelearningforsolvingmathwordproblems/8686ea23-2acd-4e21-bd8d-e449d28ae474_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ac10b39b40fc237dbab8fb4a9b058cbad7bf8e6a93b7f22abca92abf509f5f1a
+size 77453
diff --git a/textualenhancedcontrastivelearningforsolvingmathwordproblems/8686ea23-2acd-4e21-bd8d-e449d28ae474_model.json b/textualenhancedcontrastivelearningforsolvingmathwordproblems/8686ea23-2acd-4e21-bd8d-e449d28ae474_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..63c035f569da18d0eb2476f461661f5b61b64065
--- /dev/null
+++ b/textualenhancedcontrastivelearningforsolvingmathwordproblems/8686ea23-2acd-4e21-bd8d-e449d28ae474_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e5b0ced8b0712bf897e153ded97f1f3cc342615a1a45b00c656a80d294550294
+size 93947
diff --git a/textualenhancedcontrastivelearningforsolvingmathwordproblems/8686ea23-2acd-4e21-bd8d-e449d28ae474_origin.pdf b/textualenhancedcontrastivelearningforsolvingmathwordproblems/8686ea23-2acd-4e21-bd8d-e449d28ae474_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c49e12c196e5d66b6d044e94375a97f27571cd99
--- /dev/null
+++ b/textualenhancedcontrastivelearningforsolvingmathwordproblems/8686ea23-2acd-4e21-bd8d-e449d28ae474_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7f5643decb97e16df047f363c42591bdc36cb1e2c949ec7a965facc0a12ff1df
+size 782269
diff --git a/textualenhancedcontrastivelearningforsolvingmathwordproblems/full.md b/textualenhancedcontrastivelearningforsolvingmathwordproblems/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..89e08feca6280c720eebb593ed3c6ce0031bb8f5
--- /dev/null
+++ b/textualenhancedcontrastivelearningforsolvingmathwordproblems/full.md
@@ -0,0 +1,318 @@
+# Textual Enhanced Contrastive Learning for Solving Math Word Problems
+
+Yibin Shen $^{1}$ , Qianying Liu $^{2}$ , Zhuoyuan Mao $^{2}$ , Fei Cheng $^{2}$ and Sadao Kurohashi $^{2}$
+
+1 Meituan
+
+2 Graduate School of Informatics, Kyoto University
+
+shenyibin@meituan.com; {ying,zhuoyuanmao}@nlp.ist.i.kyoto-u.ac.jp; {feicheng, kuro}@i.kyoto-u.ac.jp
+
+# Abstract
+
+Solving math word problems is the task that analyses the relation of quantities and requires an accurate understanding of contextual natural language information. Recent studies show that current models rely on shallow heuristics to predict solutions and could be easily misled by small textual perturbations. To address this problem, we propose a Textual Enhanced Contrastive Learning framework, which enforces the models to distinguish semantically similar examples while holding different mathematical logic. We adopt a self-supervised manner strategy to enrich examples with subtle textual variance by textual reordering or problem re-construction. We then retrieve the hardest to differentiate samples from both equation and textual perspectives and guide the model to learn their representations. Experimental results show that our method achieves state-of-the-art on both widely used benchmark datasets and also exquisitely designed challenge datasets in English and Chinese.
+
+# 1 Introduction
+
+Solving Math Word Problems (MwPs) is the task of automatically performing logical inference and generating a mathematical solution from a natural language described math problem. Solving MwPs is a challenging task that cannot rely on shallow keyword matching but requires a comprehensive understanding of contextual information. For example, as shown in Figure 1, while the first problem shares high token-level overlapping with the third problem, the underlying mathematical logic is different. While on the other hand, the first and second problems have very low similarity at the textual level, while the equation solution is the same. The challenge of the task is that the underlying
+
+Problem: $P = (T,E)$
+
+T: Dave bought $n_0$ boxes of chocolate candy and gave $n_1$ to his little brother. If each box has $n_2$ pieces inside it, how many pieces did Dave still have?
+
+$$
+E: (n _ {0} - n _ {1}) * n _ {2}
+$$
+
+Different Textual, Similar logic: $P^{+} = (T^{+}, E^{+})$
+
+$T^{+}$ : A new building needed $n_0$ windows. The builder had already installed $n_1$ of them. If it takes $n_2$ hours to install each window how long will it take him to install the rest?
+
+$$
+E ^ {+}: \left(n _ {0} - n _ {1}\right) * n _ {2}
+$$
+
+Similar Textual, Different Logic: $P^{-} = (T^{-}, E^{-})$
+
+$T^{-}$ : For hallowen Faye got $n_0$ pieces of candy. She ate $n_1$ pieces the first night and then her sister gave her $n_2$ more pieces. How many pieces of candy does Faye have now?
+
+$$
+E ^ {-}: n _ {0} - n _ {1} + n _ {2}
+$$
+
+Figure 1: Example of positive data point $P^{+} = (T^{+}, E^{+})$ and negative data point $P^{-} = (T^{-}, E^{-})$ for an anchor $P = (T, E)$ .
+
+mathematical logic would change even with minor modifications in the text. While neural network based models have greatly boosted the performance on benchmarks datasets, Patel et al. (2021) argued that state-of-the-art (SOTA) models use shallow heuristics to solve a majority of word problems, and struggle to solve challenge sets that have only small textual variations between examples.
+
+Motivated by recent progress in contrastive learning methods, which is a flexible framework that has been successfully employed to representation learning in various fields (Chopra et al., 2005; Fang and Xie, 2020; Gao et al., 2021), we propose Textual Enhanced Contrastive Learning, which is an end-to-end framework that uses both textual and mathematical logic information to build effective representations. For each anchor data point, we find the hard example triplet pair, which consists of a textual-different but logic-similar positive data point $P^{+}$ , and a textual-similar but logic-different
+
+negative data point $P^{-}$ . Our method aims to learn an embedding space where the vector representations of $P$ and $P^{+}$ in Figure 1 are mapped close together, since they hold the same mathematical logic even though the textual expression is entirely different; on the other hand, because $P$ and $P^{-}$ have similar textual expression but different mathematical logic, their vector representations could be separated apart.
+
+To build such triplet pairs, we use a retrieval-based method to search in the training data. We consider the equation annotation as the representation of the mathematical logic in the example, and retrieve a positive and negative bag of data points according to equation similarity. Then we further use textual similarity to choose the hard examples in the bags, where positive examples have low textual similarity with the anchor and vice versa. Given such hard sample data, Contrastive Learning could empower the representations by leading the model to distinguish these potential disorienting examples in the training stage.
+
+Such approaches to retrieving triplet pairs from human-annotated training data via label annotation are considered as supervised contrastive learning. Another research line of contrastive learning is self-supervised contrastive learning, which does not require labeled data and use data augmentation methods to generate the positive or negative data points (Chen et al., 2020; He et al., 2020; Grill et al., 2020). In the task of solving MwPs, we can leverage self-supervised supervision by generating new examples via performing synchronized changes to text and equations. The generated data is naturally hard sample data, because the textual expression is similar to the origin example, while the equation could be either changed or the same. Specifically, we leverage Reversed Operation-based Data Augmentation (Liu et al., 2021) and a Question Reordering-based augmentation to form new data points. By enhancing the model to detect the small perturbations in the augmented examples, contrastive learning forces the model to learn more effective representations of contextual information.
+
+While previous studies also used Contrastive Learning to improve representations for solving MwPs (Li et al., 2022), their method is limited to supervised contrastive learning, ignores textual information during constructing the contrastive learning pairs and requires two step pre-training and re-training. Our method pushes the model to
+
+learn better text representations and understand the most minor textual variance from these textual enhanced hard samples from both supervised and self-supervised perspectives.
+
+We conduct experiments on two widely used datasets, the English dataset ASdiv-A (Miao et al., 2020) and the Chinese dataset Math23K (Wang et al., 2017). To further investigate how our method improves the ability of the model to detect small textual perturbations, we collect a Chinese challenge set Hard Example (HE)-MWP. We perform experiments on two challenge sets of MwPs, the English Asdiv-Adv-SP dataset (Kumar et al., 2021) and the Chinese HE-MWP dataset. Experimental results show that our method achieves consistent gains under different languages and settings, demonstrating the effectiveness of our method.
+
+# 2 Related Work
+
+# 2.1 Solving Math Word Problems
+
+There are various research lines in solving math word problems. Early studies majorly rely on rule-based methods (Bobrow, 1964; Charniak, 1969). Statistical machine learning methods were developed to map math word problems to specific equation templates (Kushman et al., 2014; Roy and Roth, 2015; Koncel-Kedziorski et al., 2015; Roy and Roth, 2017). Another research line uses semantic parsing-based methods to transform the input text into structured representations that could be parsed to obtain the answer (Roy and Roth, 2018; Shi et al., 2015; Zou and Lu, 2019). Recent studies focus on using a sequence-to-sequence (seq2seq) framework that takes in the text descriptions of the MwPs and predicts the answer equation. To improve the framework, various studies have investigated task designing task specialized encoder and decoder architectures (Wang et al., 2018, 2019; Xie and Sun, 2019; Liu et al., 2019; Guan et al., 2019; Zhang et al., 2020b,a; Shen and Jin, 2020), using pre-trained models (Tan et al., 2021; Liang et al., 2021) and leveraging auxiliary tasks (Liu et al., 2021; Shen et al., 2021; Li et al., 2022; Shen et al., 2022). Various auxiliary tasks have been introduced to improve model performance. Shen et al. (2021) introduced a reranking loss that reranks the beam search predictions. Huang et al. (2021) introduced a memory augmented subtask that gives guidance during the decoding stage. The closest study to our research is (Li et al., 2022), which uses equations as searching schema to build positive-
+
+negative pairs, and then perform contrastive learning. However, their research ignores textual information while building contrastive learning triplet pairs and limits supervised contrastive learning.
+
+MWP solvers have achieved relatively high performance on benchmark datasets. However, the extent to which these solvers truly understand language and numbers remains unclear. Various studies either use data augmentation to help the model improve robustness and performance on hard cases or develop adversarial examples and challenge sets to evaluate the robustness of MWP solvers against textual variance. Liu et al. (2021) proposed a data augmentation method that reverses the mathematical logic in the problem to generate a new example. Patel et al. (2021) constructed a challenge set of the math word problem in which the problem text only has a small variance. Kumar et al. (2021) investigated adversarial attack on MWP solvers. The challenge sets and adversarial attacks show that current MWP solvers use shallow heuristics to solve a majority of word problems and fail to detect subtle textual variance.
+
+# 2.2 Contrastive Learning
+
+Contrastive Learning was first adopted in Computer Vision to learn representations of images via self-supervision without human annotation (Chen et al., 2020; He et al., 2020; Grill et al., 2020). Self-supervised contrastive learning is applied in NLP to learn sentence representations. Back translation (Fang and Xie, 2020) and dropout (Gao et al., 2021) are used to construct positive-negative contrastive learning triplets. These perturbation-based techniques are not suitable for MWP solvers, that MWPs are sensitive to small textual variance and the perturbation might introduce noise.
+
+Khosla et al. (2020) first introduced supervised contrastive learning in Computer Vision by modifying the loss to allow supervision from label annotations. In NLP, various studies have introduced natural language inference (NLI) datasets as supervised annotations for contrastive learning (Reimers and Gurevych, 2019; Gao et al., 2021). The agreement of equation annotations of MwPs can be considered a form of NLI, that our supervised contrastive learning could be considered a transformation of these methods.
+
+# 3 Methodology
+
+We use Contrastive Learning to obtain text features with high differentiation of small perturbations, so that for each anchor data point $P = (T,E)$ , where $T$ stands for the text and $E$ stands for the equation, we construct a pair of examples positive data point $P^{+} = (T^{+},E^{+})$ and negative data point $P^{-} = (T^{-},E^{-})$ , and then use contrastive learning loss to map the representation of $P$ and $P^{+}$ closer and vice versa. The pipeline of the triplet pairs retrieval is shown in Figure 2. We first construct a candidate pool, which consists of supervised training data $\{P_i\}$ and augmented self-supervised data $\{P_{i}^{\text{aug}}\}$ as shown in the blue part of Figure 2. The self-supervised data is generated by two methods, Reversed Operation based Data Augmentation (RODA) and Question Reordering (QR), which is explained in Section 3.1. Then we perform two-step retrieval to retrieve the triplet pairs as described in Section 3.2. We first use an equation-based retrieval strategy to extract positive candidate set $\{\widetilde{P}^{+}\}$ and negative candidate set $\{\widetilde{P}^{-}\}$ , and then further introduce textual information by choosing one example from the candidate set via a text-based retrieval strategy. Finally, we train the MWP solving model that maps $T$ to $E$ by considering both the contrastive learning and solution equation generation objective, as described in Section 3.3.
+
+# 3.1 Enriching Candidate Pool via Self-Supervised Augmentation
+
+The self-supervised examples are challenging for the model to distinguish; while the perturbation in the text expression is extremely subtle, the corresponding mathematical logic could still change. Compared to the supervised examples that are retrieved from the training data, these self-supervised samples place a higher demand on the model's ability to detect subtle changes and understand contextual information. We generate task-orientated augmented examples from training set data point $P = (T,E)$ via two methods that obtain reliable new text-equation examples by modifying the text and equation in the same logic at the same time. We split the sentences with punctuation marks to a question followed by various declarative sentences $T = \{S_{1},S_{2},\dots,S_{k - 1},Q_{k}\}$ . The question sentence is always the last sentence for Asdiv, and we check whether interrogative pronouns are in the last sentence for Math23K.
+
+
+Figure 2: Overview of the contrastive learning triplet pairs retrieval procedure.
+
+# 3.1.1 Question Reordering
+
+We move the question to the front of the MWP to form a reordered new MWP similar to Kumar et al. (2021). Given a problem text $T = \{S_1, S_2, \dots, S_{k-1}, Q_k\}$ , we move the question $Q_k$ to the front of the problem text to form a new problem text $T^{QR} = \{Q_k, S_1, \dots, S_{k-1}\}$ while the rest of the text remains the same. We simultaneously edit the equation $E^{QR}$ so that the variables match with the new text order. The new example $P^{QR} = (T^{QR}, E^{QR})$ could either be a positive example that holds the same equation as $P$ or a negative example that holds a different equation since the variable order might change during the reordering. The high textual similarity but rotated variable order pushes the model to learn representations that can differ from these small textual perturbations.
+
+# 3.1.2 Reversed Operation based Data Augmentation
+
+We perform RODA (Liu et al., 2021) that generates a new example by asking a question about one of the original given variable. Given a problem text $T = \{S_{1}, S_{2}, \dots, S_{k-1}, Q_{k}\}$ where the question $Q$ asks about an unknown variable $n_{ans}$ , RODA chooses a known variable $n$ in one of the declarative sentence $S_{i}$ , and then generates a problem text which asks about this variable. To generate such an example, $S_{i}$ is transformed to a question $Q_{S_{i}}$ which asks a question of $n$ , while $Q$ is transformed to a declarative sentence $S_{k}$ describing $n_{ans}$ . We reorder the problem text by swapping the two sentences, that a new problem text $T^{RODA} = \{S_{1}, \dots, S_{k}, \dots, S_{k-1}, Q_{i}\}$ is generated. Simultaneously we edit the equation by resolv
+
+ing the equation expression $E^{RODA}$ of $n$ given $n_{ans}$ . While $P^{RODA} = (T^{RODA}, E^{RODA})$ has a very similar textual description of $P$ , the underlying equation could be completely different, which could benefit the model via contrastive learning. RODA requires text parsing and transformation rules to modify the text and equation. For Chinese, it can cover $93\%$ of the examples, and for English, it covers $60\%$ of the examples. The generated text has a 0.83 out of 1 coherent score reported by human evaluation by Liu et al. (2021).
+
+# 3.2 Triplet Pair Retrieval
+
+We construct the positive and negative triplet pairs from both textual and logical perspectives. For a given problem $P$ , the positive sample $P^{+}$ is considered to be a problem with similar equation expressions but relatively different text descriptions; the negative sample $P^{-}$ is considered to be a problem with highly textual similarity but different equation expression. However, it requires a time-consuming bruce-forth enumeration of all possible example pairs to find such optimal positive and negative samples. Considering the computational complexity, we break down the retrieval to a two-step pipeline. We adopt a heuristic searching algorithm to construct positive and negative samples $(P^{+}, P^{-})$ as follows:
+
+1. Construct a similarity matrix $M$ of all equation expressions $\{E_1, E_2, \ldots, E_n\}$ in the training set, where $M_{ij}$ is the similarity of equation expression $E_i, E_j$ .
+2. For a given anchor $P$ , Retrieve a positive candidate set $\{\widetilde{P}^{+}\}$ and a negative candidate set
+
+$\{\widetilde{P}^{-}\}$ of samples from the training set of the data via equation expression similarity.
+
+3. Extract the best positive example $P^{+}$ and the best negative example $P^{-}$ via textual similarity.
+
+We investigate various strategies to retrieve $(P^{+}, P^{-})$ from both equation-based and text-based perspectives.
+
+# 3.2.1 Equation-based Retrieval Strategy
+
+To evaluate the equation similarity during the retrieval, we design an equation similarity metric $\text{Sim}_{eq}$ based on length-wise normalized tree edit distance (TED). TED is defined as the minimum-cost sequence of node operations that transform one tree into another and is a well-known distance measure for hierarchical data. We define the TED of two equation expressions $E_1, E_2$ as the TED of their abstract syntax tree. The similarity of two equation expressions $E_1, E_2$ is defined as:
+
+$$
+S i m _ {e q} (E _ {1}, E _ {2}) = 1 - \frac {T E D (E _ {1} , E _ {2})}{| E _ {1} | + | E _ {2} |}
+$$
+
+Given this equation similarity metric, we design two retrieval strategies.
+
+Exact Match The positive candidate set $\{\widetilde{P}^{+}\}$ is constructed of the examples that meets $Sim_{eq}(E,E_i) = 1$ , which means their equation expression satisfies $E = E_{j}$ . If only the anchor itself holds this equation expression, the positive candidate set $\{\widetilde{P}^{+}\}$ has only the anchor $P$ . The negative candidate set $\{\widetilde{P}^{-}\}$ is constructed of the examples that meets $\argmax_{E_i \neq E} (Sim(E,E_i))$ , which holds the closest equation considering the anchor.
+
+Nearest Neighbour The positive candidate set is constructed of the examples that meets $\operatorname{argmax}_{E_i, T_i \neq T} (Simeq(E, E_i))$ . If no other example holds the same equation expression as the anchor, the positive candidate set $\{\widetilde{P}^+\}$ takes the examples that has the nearest neighbour equation expression. The negative candidate set $\{\widetilde{P}^-\}$ is constructed of the examples that meets $\operatorname{argmax}_{E_i \neq E^+} (Simeq(E, E_i))$ , which holds the closest equation considering the positive example.
+
+The positive and negative candidate sets are then further screened by the text-based strategy.
+
+# 3.2.2 Text-based Retrieval Strategy
+
+To lead the model to differentiate mathematical logic from similar textual expressions, we use textual-based information to select the $(P^{+}, P^{-})$ pair. We select the lowest textual similarity score example from the positive candidate set $\{\widetilde{P}^{+}\}$ , which is the example with different textual expression but the same mathematical logic; and select the highest textual similarity score example from the negative candidate set $\{\widetilde{P}^{-}\}$ , which is the example with similar textual expression but different mathematical logic. We design two similarity measurement metrics for this stage.
+
+BERTSim Sentence-BERT (SBERT) is a strong sentence representation baseline model (Reimers and Gurevych, 2019). We calculate the cosine similarity of the SentBERT representation of the two sentences to obtain the similarity score:
+
+$$
+S i m _ {t e x t} ^ {B E R T S i m} = \frac {S B E R T (T _ {1}) \cdot S B E R T (T _ {2})}{| | S B E R T (T _ {1}) | | | | | S B E R T (T _ {2}) | |}
+$$
+
+The value range of $Sim_{text}^{BERTSim}$ is from $[-1, 1]$ .
+
+Bi-direction BLEU BLEU is a widely used evaluation metric for text generation that measures the similarity between the generated text and the reference. We design a Bi-direction BLEU since BLEU is a not symmetrical similarity metric, which is defined as:
+
+$$
+S i m _ {t e x t} ^ {B i B L E U} = \frac {B L E U (T _ {1} , T _ {2}) + B L E U (T _ {2} , T _ {1})}{2}
+$$
+
+The value range of $Sim_{text}^{BiBLEU}$ is from [0, 1].
+
+# 3.3 Training Procedure
+
+We show the training procedure in Figure 3. The training loss consists of the MWP solving loss $\mathcal{L}_{\text {solver }}$ and the contrastive learning loss $\mathcal{L}_{cl}$ .
+
+MWP Solving Model We follow Li et al. (2022) and use the strong baseline model BERT-GTS as MWP solving model. The pre-trained language model BERT, which provides strong textual representations, is leveraged as the encoder. For the decoder, we use Goal-driven tree-structured MwP solver (GTS) (Xie and Sun, 2019). GTS directly generates the prefix notation of the solution equation by using a recursive neural network to encode subtrees based on the representations of its children
+
+
Dataset
Math23K
Asdiv-A
HE-MWP
Adv-Asdiv
Language
zh
en
zh
en
Domain
general
general
challenge
challenge
#Train
21,162
1,218
-
-
#Dev/#Test
1,000 / 1,000
- / -
- / 400
- / 239
#Equation Templates
3,104
66
231
66
+
+Table 1: Statistics and details of the datasets.
+
+
+Figure 3: Overview of the training procedure.
+
+nodes with the gate mechanism. With the subtree representations, this model can well structured information of the generated part to predict a new token.
+
+Contrastive learning. Contrastive learning is performed on triplets pairs $(P, P^{+}, P^{-})$ by pulling the representations of $T$ and $T^{+}$ together and pushing apart the representations of $T$ and $T^{-}$ . We follow the contrastive learning framework in Chen et al. (2020), which takes an in-batch cross-entropy objective. Let $x_{i}$ denote the encoder representation of $P$ , the training objective for $(x_{i}, x_{i}^{+}, x_{i}^{-})$ within the batch of $N$ triplet pairs is:
+
+$$
+\begin{array}{l} \mathcal {L} _ {c l} = \\ - \frac {1}{N} \sum_ {i = 1} ^ {N} l o g \frac {e ^ {c o s (x _ {i} , x _ {i} ^ {+}) / \tau}}{\sum_ {j = 1} ^ {N} (e ^ {c o s (x _ {i} , x _ {j} ^ {+}) / \tau} + e ^ {c o s (x _ {i} , x _ {j} ^ {-}) / \tau})} \\ \end{array}
+$$
+
+where $\tau$ is the temperature hyperparameter.
+
+Assume the prediction target equation of $P$ is $y$ , the final training objective is to minimize the sum of the MWP solution equation generation negative log likelihood loss $\mathcal{L}_{\text{solver}}$ and the contrastive learning loss $\mathcal{L}_{cl}$ :
+
+$$
+\mathcal {L} = \mathcal {L} _ {\text {s o l v e r}} + \alpha * \mathcal {L} _ {c l}
+$$
+
+# 4 Experiments
+
+# 4.1 Datasets
+
+We perform experiments on four datasets, including two widely used datasets to verify the generalization ability of our method and two challenge test sets to show further how our method can enhance the robustness of the model. We show detailed statistics of the datasets in Table 1.
+
+Math23K is a Chinese dataset that contains 23,161 math word problems of elementary school level (Wang et al., 2017). We use the standard train-test split setting of this dataset for the experiment.
+
+Asdiv-A is the arithmetic subset of ASDiv which has 1,218 MwPs mostly up to grade level 4 (Miao et al., 2020). Experiments of this dataset are evaluated by 5-cross validation.
+
+HE-MWP Since no challenge dataset has been developed for Chinese MWP solving, and existing challenge datasets have limited types of equation templates, we use RODA and QR on Math23K validation set to generate examples that are semantically similar to the original input but deceive the model into generating an incorrect prediction. We randomly sample a subset of 600 examples from the RODA result of the development set of Math23K and then manually delete the examples that the text is not coherent. Then we randomly select 400 examples out of this cleaned subset.
+
+Adv-Asdiv-SP is a challenge set of Asdiv-A, which is constructed of adversarial examples (Kumar et al., 2021). These adversarial examples are generated by sentence paraphrasing.
+
+Results of the challenge datasets are tested on the highest performance models trained on the corresponding benchmark datasets.
+
+There exists other MWP datasets, which are relatively less challenging such as ALG514, DRAW1K and MAWPS (Kushman et al., 2014; Upadhyay and Chang, 2017; Koncel-Kedziorski et al., 2016), or
+
+
Model
Cand. Pool
Math23K
Asdiv-A
HE-MWP
Adv-Asdiv-SP
GTS (Xie and Sun, 2019)
-
75.6
68.5
-
21.2
G2T (Zhang et al., 2020b)
-
77.4
71.0
-
23.8
pattern CL (Li et al., 2022)
train
83.2
-
-
-
BERT-GTS
-
82.9
73.4
55.5
59.9
w/ supervised CL
train
84.1
74.2
57.2
63.7
w/ RODA CL
RODA+train
84.3
74.3
64.1
64.1
w/ QR CL
QR+train
84.2
74.4
62.5
66.2
w/ CL
RODA+QR+train
85.0
74.6
69.5
66.9
+
+noisy such as Dolphin18K (Huang et al., 2016) or use semantic parsing as annotation such as MathQA (Amini et al., 2019). We use the two benchmarks Math23K and Asdiv-A because they are both clean and challenging with mathematical equation annotations.
+
+# 4.2 Implementation Details
+
+We use two language-specific BERT-base models as the problem encoder2. For both models, the maximum text length of the encoder is fixed at 256, and the maximum equation generation length of the decoder is fixed at 45. The decoder embedding size is 128. The batch size is 16, with learning rate of 5e-5. We tune the hyperparameters temperature $\tau$ in the set of $\{0.05, 0.1, 0.2\}$ and $\alpha$ in the range [0.1, 0.9]. Experiments of the Chinese datasets are conducted on V100 and RTX 3090 with approximately 6 hours of runtime. Experiments of the English datasets are conducted on 1080Ti with approximately 1-hour runtime.
+
+# 5 Results and Analysis
+
+# 5.1 Pre-examination on Retrieval Strategy
+
+We conduct a breakdown analysis on the most complex dataset Math23K of different retrieval strategies. We investigate the performance of different retrieval strategies for supervised contrastive learning. As shown in Table 3, for the equation-based retrieval strategy, the exact match equation strategy is more effective than the nearest neighbour strategy. This shows that the positive sample for the anchor must have accurate same mathematical logic for contrastive learning to benefit the performance. Both text-based retrieval strategies can improve the MWP solving performance compared
+
+Table 2: Results on MWP datasets. All experiments only compute MWP solving loss on the training set. The candidate pool only affects the choice of positive and negative examples in the CL loss.
+
+
Eq Strategies
Text Strategies
EM
NN
Random
83.2
82.3
BERTSim
83.6
83.1
Bi-BLEU
84.1
83.2
BERT-GTS
82.9
+
+Table 3: Results of different retrieval strategies for supervised contrastive learning. EM denotes exact match. NN denotes nearest neighbour. Random denotes randomly choosing an example from the candidate set. BERTSim and Bi-BLEU denotes choosing the examples by similarity metric.
+
+to the random choosing baseline, demonstrating the effectiveness of introducing textual information for contrastive training. With textual-based retrieval, the extracted positive and negative examples would form hard examples that can push the model to differ textual-similar but logic different examples. Bi-BLEU also has a slightly higher performance than BERTSim. In the following experiments, we use the best combination of EM and Bi-BLEU as retrieval strategies.
+
+# 5.2 Main Results
+
+We show the results of our method compared with other baselines in Table 2. In addition to our baseline BERT-GTS model, we also investigate three strong baseline models. GTS (Xie and Sun, 2019) uses an LSTM encoder and the same decoder as BERT-GTS that generates the abstract syntax trees through a tree structure decoder in a goal-driven manner. G2T (Zhang et al., 2020b) is a graph-to-tree model that uses a graph-based encoder for representing the relationships and order information among the quantities. Pattern CL (Li et al., 2022) proposes a pattern-based contrastive learning, that considers the equation similarity with supervised
+
+
+Figure 4: T-SNE Visualization results of BERT-GTS w/o (left) and w/ CL (right).
+
+
+Figure 5: T-SNE visualization for the case study on BERT-GTS w/o (left) and w/ CL (right).
+
+contrastive learning. We can see from the results that our method outperforms previous studies in all datasets. Compared to Pattern CL which ignores textual information, our method allows the model to have a stronger ability to bridge text descriptions to mathematical logic even using the same candidate pool. The self-supervised methods outperform the supervised settings, especially on challenge datasets, demonstrating the effectiveness of leading the model to learn contextual representations of small textual perturbations.
+
+On benchmark datasets, we achieve $2.1\%$ points of improvement on Math23K and $1.2\%$ points of improvement on Asdiv-A. One major reason is that RODA can only generate 394 examples for the English dataset Asdiv-A. In contrast, it can generate 47,318 examples for the Chinese dataset Math23K because English has more strict grammar than Chinese. On challenge datasets, we achieve $14\%$ points of improvement on HE-MWP dataset and $7.0\%$ points of improvement on Adv-Asdiv-SP dataset. For HE-MWP ablation, RODA is more effective since it could introduce new mathematical logic examples. For Adv-Asdiv-SP, since QR is similar to paraphrasing techniques, it gains more improvement with self-supervised supervision.
+
+# 5.3 Visualization and Case Study
+
+We show T-SNE visualization results of the representations of examples from the top-five frequent equation templates in Math23K: $n_1 * n_2 / n_3$ , $n_1 * n_2$ , $n_1 / n_2$ , $n_2 / n_1$ and $n_1 * (1 - n_2)$ , which refers to
+
+
Text
用一张长n1厘米,宽n2厘米的长方形纸围成一个最大的圆柱,圆柱的侧面积为多少平方厘米?
EN
Given a piece of paper n1 centimeters long and n2 centimeters wide, How many square centimeters is the lateral area of the largest cylinder enclosed by the rectangle?
w/o CL
π * n2()
w/ CL
n1 * n2()
+
+Table 4: Case study on Math23K example. w/o CL denotes the BERT-GTS baseline. w/ CL denotes using contrastive learning.
+
+orange, red, blue, green and purple in Figure 4. We can see that compared to the BERT-GTS baseline on the left subfigure, in the right subfigure, the text representations of the same equations are pulled closer via our contrastive learning, and the representation of different equations are separated apart, which shows that our method can benefit the representation learning.
+
+We further investigate how our method improves the representation via case study. In Table 4 the BERT-GTS baseline could not infer from the textual description that the side area of a cylinder is the area of a rectangle but rather uses shallow heuristics when the word "cylinder" is encountered and generates the constant $\pi$ . By constructing positive and negative sample pairs from both expressions and textual descriptions and changing the representation space via contrastive learning, the model is not misled by the keywords and correctly infers that the mathematical logic is to calculate the area of a rectangle so that the model with contrastive learning generated the correct result. We also show T-SNE visualization of the representation in Figure 5. The red dots are examples with the keyword rectangle and hold the equation $n_1 * n_2$ . The blue dots are the examples that hold the equation $\pi * n_1$ or $\pi * n_2$ . The green dot is the studied case. We can see that while BERT-GTS fails to separate the representation of the case from the cylinder or circle-related equations, contrastive learning helps the model to differentiate such confusing examples, learn better representations, and predict the answer correctly.
+
+# 5.4 Combination with Data Augmentation
+
+While the high-quality and challenging augmented examples have shown remarkable effectiveness for contrastive learning, a question remains whether contrastive learning is still effective when these augmented examples are directly used as training data.
+
+
Model
Acc
baseline
82.9
+QR aug w/o CL
84.9
+QR aug w/ CL
85.2
+RODA aug w/o CL
84.8
+RODA aug w/ CL
86.4
+
+Table 5: Results of using augmented example for both training and contrastive learning.
+
+Thus, we further investigate using the augmented examples as anchors. We use the augmented examples and the original data as training data and perform supervised contrastive learning in the training data. As shown in Table 5, we can see that while the augmented examples improve the performance, contrastive learning can further boost the performance, achieving SOTA results on Math23K.
+
+# 6 Conclusion
+
+In this paper, we propose a Textual Enhanced Contrastive Learning framework, which leverages both supervised and self-supervised supervision to help the model understand contextual information and bridge subtle textual variance to mathematical logic. We use two novel task-specific data augmentation methods to enrich the candidate pool with examples with minor textual variance for contrastive learning triplet pair retrieval. We design a two-stage retrieval method to find hard example triplet pairs with both equation and textual information and investigate various retrieval strategies. Experimental results show that our method gained improvement on both benchmark datasets and challenge datasets in English and Chinese. We also conduct visualization for representation distribution on different equations and also on a case study, which shows our method can benefit the representation learning. With the combination of data augmentation, our method still improves the performance and achieves SOTA results on Math23K dataset.
+
+# Limitations
+
+While our framework extracts contrastive learning triplet pairs with light computational complexity, we observe that such a two-stage retrieval strategy might not be optimal under certain circumstances.
+
+We build the framework assuming that methods with similar mathematical logic (i.e., high equation similarity) would form challenging negative examples. However, especially for self-supervision, such
+
+an assumption can block out the augmented small variance examples from consideration for triplet pairs because their equation might not be the most similar one. This is more severe when using RODA for self-supervised augment. The generated examples of RODA usually have relatively low equation similarity with the origin example. However, the RODA examples remain challenging as we can see the performance of HE-MWP still has a gap of $15\%$ points compared to the original Math23K datasets.
+
+A strategy that considers equation and textual similarity in the same stage could be introduced to fill this gap. However, such strategies cannot avoid the heavy computational complexity caused by calculating the metric of all data pairs. This could be reduced by recent studies in rapid embedding retrieval algorithms such as FAISS (Johnson et al., 2019) by transforming the equation similarity to embedding similarity via embedding training methods. This remains as future work in this paper.
+
+# Acknowledgements
+
+This work is partially supported by JST SPRING Grant No.JPMJSP2110 and KAKENHI No.22J13719, Japan.
+
+# References
+
+Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. Mathqa: Towards interpretable math word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2357-2367.
+Daniel G. Bobrow. 1964. Natural language input for a computer problem solving system. Technical report, Cambridge, MA, USA.
+Eugene Charniak. 1969. Computer solution of calculus word problems. In Proceedings of the 1st International Joint Conference on Artificial Intelligence, IJCAI'69, pages 303-316, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
+Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 1597-1607. PMLR.
+Sumit Chopra, Raia Hadsell, and Yann LeCun. 2005. Learning a similarity metric discriminatively, with
+
+application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pages 539-546. IEEE.
+Hongchao Fang and Pengtao Xie. 2020. CERT: contrastive self-supervised learning for language understanding. CoRR, abs/2005.12766.
+Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November; 2021, pages 6894-6910. Association for Computational Linguistics.
+Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Ávila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko. 2020. Bootstrap your own latent - A new approach to self-supervised learning. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
+Wenyv Guan, Qianying Liu, Guangzhi Han, Bin Wang, and Sujian Li. 2019. An improved coarse-to-fine method for solving generation tasks. In Proceedings of the The 17th Annual Workshop of the Australasian Language Technology Association, pages 178-185, Sydney, Australia. Australasian Language Technology Association.
+Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 9726-9735. Computer Vision Foundation / IEEE.
+Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin, and Wei-Ying Ma. 2016. How well do computers solve math word problems? large-scale dataset construction and evaluation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 887-896.
+Shifeng Huang, Jiawei Wang, Jiao Xu, Da Cao, and Ming Yang. 2021. Recall and learn: A memory-augmented solver for math word problems. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 786-796.
+Jeff Johnson, Matthijs Douze, and Herve Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535-547.
+Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. In Advances in Neural
+
+Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
+Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. 2015. Parsing algebraic word problems into equations. Transactions of the Association for Computational Linguistics, 3:585-597.
+Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. 2016. Mawps: A math word problem repository. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1152-1157.
+Vivek Kumar, Rishabh Maheshwary, and Vikram Pudi. 2021. Adversarial examples for evaluating math word problem solvers. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, Virtual Event / Punta Cana, Dominican Republic, 16-20 November; 2021, pages 2705-2712. Association for Computational Linguistics.
+Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 271-281.
+Zhongli Li, Wenxuan Zhang, Chao Yan, Qingyu Zhou, Chao Li, Hongzhi Liu, and Yunbo Cao. 2022. Seeking patterns, not just memorizing procedures: Contrastive learning for solving math word problems. In *Findings of the Association for Computational Linguistics: ACL* 2022, pages 2486-2496.
+Zhenwen Liang, Jipeng Zhang, Jie Shao, and Xiangliang Zhang. 2021. Mwp-bert: A strong baseline for math word problems.
+Qianying Liu, Wenyu Guan, Sujian Li, Fei Cheng, Daisuke Kawahara, and Sadao Kurohashi. 2021. Roda: Reverse operation based data augmentation for solving math word problems. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30:1-11.
+Qianying Liu, Wenyv Guan, Sujian Li, and Daisuke Kawahara. 2019. Tree-structured decoding for solving math word problems. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2370-2379, Hong Kong, China. Association for Computational Linguistics.
+Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. 2020. A diverse corpus for evaluating and developing english math word problem solvers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 975-984. Association for Computational Linguistics.
+
+Arkil Patel, Satwik Bhattachamishra, and Navin Goyal. 2021. Are NLP models really able to solve simple math word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 2080-2094. Association for Computational Linguistics.
+Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3980-3990. Association for Computational Linguistics.
+Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1743-1752, Lisbon, Portugal. Association for Computational Linguistics.
+Subhro Roy and Dan Roth. 2017. Unit dependency graph and its application to arithmetic word problem solving. In Thirty-First AAAI Conference on Artificial Intelligence.
+Subhro Roy and Dan Roth. 2018. Mapping to declarative knowledge for word problem solving. Transactions of the Association of Computational Linguistics, 6:159-172.
+Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin Jiang, Ming Zhang, and Qun Liu. 2021. Generate & rank: A multi-task framework for math word problems. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 2269-2279.
+Yibin Shen and Cheqing Jin. 2020. Solving math word problems with multi-encoders and multi-decoders. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2924-2934, Barcelona, Spain (Online). International Committee on Computational Linguistics.
+Yibin Shen, Qianying Liu, Zhuoyuan Mao, Zhen Wan, Fei Cheng, and Sadao Kurohashi. 2022. Seeking diverse reasoning logic: Controlled equation expression generation for solving math word problems. arXiv preprint arXiv:2209.10310.
+Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. 2015. Automatically solving number word problems by semantic parsing and reasoning. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1132-1142.
+Minghuan Tan, Lei Wang, Lingxiao Jiang, and Jing Jiang. 2021. Investigating math word problems using pretrained multilingual language models.
+
+Shyam Upadhyay and Ming-Wei Chang. 2017. Annotating derivations: A new evaluation strategy and dataset for algebra word problems. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 494-504.
+Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan Song, Long Guo, and Heng Tao Shen. 2018. Math-dqn: Solving arithmetic word problems via deep reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
+Lei Wang, Dongxiang Zhang, Jipeng Zhang, Xing Xu, Lianli Gao, Bing Tian Dai, and Heng Tao Shen. 2019. Template-based math word problem solvers with recursive neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7144-7151.
+Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017. Deep neural solver for math word problems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 845-854. Association for Computational Linguistics.
+Zhipeng Xie and Shichao Sun. 2019. A goal-driven tree-structured neural model for math word problems. In *IJCAI*, pages 5299–5305.
+Jipeng Zhang, Roy Ka-Wei Lee, Ee-Peng Lim, Wei Qin, Lei Wang, Jie Shao, and Qianru Sun. 2020a. Teacher-student networks with multiple decoders for solving math word problem. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pages 4011-4017. International Joint Conferences on Artificial Intelligence Organization. Main track.
+Jipeng Zhang, Lei Wang, Roy Ka-Wei Lee, Yi Bin, Yan Wang, Jie Shao, and Ee-Peng Lim. 2020b. Graph-to-tree learning for solving math word problems. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3928-3937.
+Yanyan Zou and Wei Lu. 2019. Text2math: End-to-end parsing text into math expressions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5330-5340.
\ No newline at end of file
diff --git a/textualenhancedcontrastivelearningforsolvingmathwordproblems/images.zip b/textualenhancedcontrastivelearningforsolvingmathwordproblems/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..70ff84358a5940fe9aaf75ae1fa7deec39fb99d8
--- /dev/null
+++ b/textualenhancedcontrastivelearningforsolvingmathwordproblems/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7f5afef43deeefde0ab7b8a604ca7f627f4b2d8ed28f376d6f48fe4a9eb5be1d
+size 321315
diff --git a/textualenhancedcontrastivelearningforsolvingmathwordproblems/layout.json b/textualenhancedcontrastivelearningforsolvingmathwordproblems/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6e5f6754b248023f3591474159a976be1e9b1040
--- /dev/null
+++ b/textualenhancedcontrastivelearningforsolvingmathwordproblems/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fb4b25aa18dd606d0feee987488bebdd6748075ca006002cb07e81c822cba4fa
+size 403693
diff --git a/thechallengesoftemporalalignmentontwitterduringcrises/9ebccff9-b096-4e5a-b002-3e1f13279edb_content_list.json b/thechallengesoftemporalalignmentontwitterduringcrises/9ebccff9-b096-4e5a-b002-3e1f13279edb_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..292cb664e4f61e3f5bb12984e9b4588f0aa25ca0
--- /dev/null
+++ b/thechallengesoftemporalalignmentontwitterduringcrises/9ebccff9-b096-4e5a-b002-3e1f13279edb_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a60fc78e9a787922c58a4094e2f574cb1fc1e119d57889374f1ae974303a602d
+size 100116
diff --git a/thechallengesoftemporalalignmentontwitterduringcrises/9ebccff9-b096-4e5a-b002-3e1f13279edb_model.json b/thechallengesoftemporalalignmentontwitterduringcrises/9ebccff9-b096-4e5a-b002-3e1f13279edb_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..dedf4a342aed24a1ece46c19288b5647954bb502
--- /dev/null
+++ b/thechallengesoftemporalalignmentontwitterduringcrises/9ebccff9-b096-4e5a-b002-3e1f13279edb_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3534933977ebed80dfa010d3e650643ef8ce74812905962fbd3f4d7027dfcd53
+size 116130
diff --git a/thechallengesoftemporalalignmentontwitterduringcrises/9ebccff9-b096-4e5a-b002-3e1f13279edb_origin.pdf b/thechallengesoftemporalalignmentontwitterduringcrises/9ebccff9-b096-4e5a-b002-3e1f13279edb_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b3e8ccb3918ce84bc8d184f537a267bec970404c
--- /dev/null
+++ b/thechallengesoftemporalalignmentontwitterduringcrises/9ebccff9-b096-4e5a-b002-3e1f13279edb_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ac62b01ddac947994ded9b853f36547f4c81d052ace6d7937c246346a1426389
+size 434766
diff --git a/thechallengesoftemporalalignmentontwitterduringcrises/full.md b/thechallengesoftemporalalignmentontwitterduringcrises/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..0928f6928138de767f74043133b7d5bc0f379ce2
--- /dev/null
+++ b/thechallengesoftemporalalignmentontwitterduringcrises/full.md
@@ -0,0 +1,355 @@
+# The challenges of temporal alignment on Twitter during crises
+
+Aniket Pramanick, Tilman Beck, Kevin Stowe and Iryna Gurevych
+
+Ubiquitous Knowledge Processing Lab (UKP Lab)
+
+Department of Computer Science and Hessian Center for AI (hessian.AI)
+
+Technical University Darmstadt
+
+www.ukp.tu-darmstadt.de
+
+# Abstract
+
+Language use changes over time, and this impacts the effectiveness of NLP systems. This phenomenon is even more prevalent in social media data during crisis events where meaning and frequency of word usage may change over the course of days. Contextual language models fail to adapt temporally, emphasizing the need for temporal adaptation in models which need to be deployed over an extended period of time. While existing approaches consider data spanning large periods of time (from years to decades), shorter time spans are critical for crisis data. We quantify temporal degradation for this scenario and propose methods to cope with performance loss by leveraging techniques from domain adaptation. To the best of our knowledge, this is the first effort to explore effects of rapid language change driven by adversarial adaptations, particularly during natural and human-induced disasters. Through extensive experimentation on diverse crisis datasets, we analyze under what conditions our approaches outperform strong baselines while highlighting the current limitations of temporal adaptation methods in scenarios where access to unlabeled data is scarce.
+
+# 1 Introduction
+
+Patterns of language use change constantly over time, often in predictable and analyzable ways (Hamilton et al., 2016a; Kulkarni et al., 2015; Sommerauer and Fokkens, 2019). As language changes, the performance of NLP systems can be negatively impacted (Lazaridou et al., 2021). In most scenarios, training corpora are derived from a snapshot of data at some moment of time in the past, which puts the reliability of model performance on future data into question. Yet, there lacks a concrete reasoning or evidence that temporal adaptation elevates
+
+model performance. Despite the popularity of large language models and their usefulness in many NLP domains (Devlin et al., 2019), the representation of temporal knowledge in those models so far remains an open challenge.
+
+The increased interest in temporal adaptation (i.e. scenarios in which the training and test datasets are drawn from different periods of time) has led to the curation of a number of datasets such as NYT Annotated Corpus (Sandhaus, 2008) and Amazon Reviews (Ni et al., 2019) that have been the focus of most of the recent work in this area. However, these benchmark datasets are curated in such a way that they can only capture temporal change of language over long periods of time (from years to decades), giving access to a large amount of data. In the contrary, on social media, language changes can happen rapidly (Kulkarni et al., 2015; Eisenstein, 2013). Word usage and topics can even change over the span of a single day (Golder and Macy, 2011), especially during very dynamic scenarios like crisis or disastrous events (Reynolds and Seeger, 2005; Del Tredici et al., 2019). We denote these phenomena induced by linguistic and semantic changes over time as temporal drift.
+
+Accounting for temporal drift is critical in crisis situations in which information patterns can vary greatly between the phases of emergency management for crisis. For this purpose, we study short text classification in crisis situations. Given the time-critical nature of crisis scenarios, gathering annotations is too time-consuming and transfer learning is challenging due to the innate differences among the type of events (hurricane vs. earthquake) and the respective information needs. Thus, we offer a study investigating the impact of temporal drift on crisis datasets spanning shorter time periods (days/weeks), as well as datasets with relatively few samples (ranging from $\sim 1\mathrm{k}$ to $22\mathrm{k}$ ).
+
+Assessing rapid temporal drift is a challenging problem due to different linguistic phenomena
+
+# Tweets about Hurricane Sandy in 2012
+
+
+Figure 1: The blue line indicates the frequency of tweets during the hurricane Sandy (Stowe et al., 2018). The displayed tweets demonstrate challenging linguistic phenomena for a text classification model, e.g. semantic shifts (#irene as reference to a hurricane rather than a person) or neologisms (pre-sandy).
+
+which often require extensive knowledge about the temporal structure of the context. In Figure 1, we provide examples from Stowe et al. (2018) dataset, which were collected from the 2012 New York City landfall of Hurricane Sandy.
+
+Existing approaches like continual learning (Gururangan et al., 2020; Loureiro et al., 2022a) or learning time-specific models (Agarwal and Nenkova, 2022) cannot be applied to this scenario as access to a large set of unlabeled data from the temporal target distribution is missing. Unlike existing approaches, which react to incoming annotated data to update their models, we use temporal metadata as a training signal such that the existing contextualized representations are adapted temporally. More specifically, we are the first to apply projection methods (Wang et al., 2014) and domain adaptation approaches (Ganin et al., 2016; Bamler and Mandt, 2017) to learn time-aware contextualized embeddings. Our results highlight the challenges of integrating temporal information into contextualized embeddings with improvements being dependent on factors like dataset size - and thereby emphasizing that temporal adaptation remains a challenge in scenarios where we do not have access to large unlabeled data.
+
+In summary, we make the following contributions:
+
+1. We investigate temporal drift during crisis events and its adversarial effect on task performance. To the best of our knowledge, this
+
+is the first study of temporal effects on text classification performance in crisis scenarios, when temporal drift is rapid and access to data is scarce.
+
+2. We investigate the role of the domain of data in temporal drift and propose a simple metric to quantify the impact of temporal degradation on task performance.
+3. We propose methods that adapt future data to known models, improving performance with no additional labeled data.
+4. Through experiments on a multitude of diverse text classification datasets collected during crisis events, we analyse the effectiveness of our proposed methods over strong baselines.
+
+# 2 Related Work
+
+Analyzing semantic change of text over time has been of great interest since the pioneering work by Hamilton et al. (2016b) and others (Kutuzov et al., 2018; Rudolph and Blei, 2018; Martinc et al., 2020; Gonen et al., 2020). However, its influence on downstream task performance has only recently gained attention. Most importantly, the advent of contextualized word embeddings and large pretrained language models has led researchers to re-evaluate the role of temporality in language modeling (Jawahar and Seddah, 2019; Lazaridou et al., 2021; Hofmann et al., 2021; Kulkarni et al., 2021)
+
+and text classification (Bjerva et al., 2020; Florio et al., 2020; Röttger and Pierrehumbert, 2021; Agarwal and Nenkova, 2022).
+
+The performance degradation due to temporal factors has been confirmed in several studies and across multiple domains. Jaidka et al. (2018) analyzed the temporal performance degradation of age and gender classification models based on user's social media posts. Based on features derived from Latent Dirichlet Allocation and word embeddings, they find that models perform best if test and training data come from the same time span. Florio et al. (2020) investigated temporal effects on Hate Speech detection in Italian social media over the period of five months. Their results suggest that models trained on data temporally closer to the test data perform better with transformer based models. Loureiro et al. (2022b) studied semantic shifts in social media and proposed a dataset annotated with words that have undergone a semantic shift over the past two years. Loureiro et al. (2022a) focus on Twitter as text domain and contribute pretrained language models which have been further trained on time-specific data from Twitter.
+
+Bjerva et al. (2020) propose to use sequential subspace alignment (SSA) to adapt contextualized word embeddings for language change over time. Their results suggest that SSA applied on past data is able to outperform baselines which have access to data from all time-steps. Röttger and Pierrehumbert (2021) compared time-agnostic domain adaptation with temporal domain adaptation which considers the temporal order of the data. They found that, while temporal adaptation clearly outperforms domain adaptation in language modeling, this does not necessarily translate onto downstream classification performance due to updated tokens not being relevant for the task. Agarwal and Nenkova (2022) found the temporal model performance deterioration to be less significant when using language representations which have been pretrained on temporally closer data.
+
+Finally, Luu et al. (2022) have made the effort of conducting a large-scale study of temporal misalignment, the generalized scenario where training and evaluation data are drawn from different periods of time. Across multiple NLP classification tasks and domains they identify performance degradation with varying degrees but with social media and news being the most affected domains.
+
+We contribute to the existing line of work by
+
+quantifying the temporal effects on downstream task performance over short time periods (days and weeks) during crisis events. In such a scenario and in contrast to previous work, we do not assume access to large corpora of unlabeled data for temporal adaptation via continuous pretraining. Our proposed approaches temporally adapt pretrained contextualized embeddings to learn time-aware embeddings and we evaluate their effects on downstream classification tasks.
+
+# 3 Methods Overview
+
+Luu et al. (2022) describe three distinct stages of a typical NLP system which consist of a pretraining stage, a domain (or temporal) adaptation stage and a fine-tuning stage. Separating the adaptation and fine-tuning stages makes the implicit assumption that there is access to unlabeled data from the (temporal) target distribution which has been proven to be beneficial for temporal adaptation (Luu et al., 2022). In contrast, we are looking at the dynamic setting during crisis events. Temporal alignment through continuous pre-training is not feasible due to the lack of unlabeled data and time constraints imposed by the application scenario (e.g. crisis monitoring). The latter also limits the feasibility of an online learning setup which requires new annotations in a continuous stream. Finally, transfer learning is difficult due to inherent differences in information needs (i.e. the type of labels) and domains (e.g. hurricane vs. earthquake).
+
+Therefore, in this section we adapt and evaluate methods which are specifically designed for combining temporal adaptation and fine-tuning. Their training procedures are adapted to incorporate temporal information about the data along with the textual input. We describe each approach in the following:
+
+# Adapted Language Modelling (ALM)
+
+Similar to previous work (see Section 2), we explore temporal adaptation via pretraining but use only the available training data. We therefore continue with the language modeling objective of our respective pretrained language model on the training data and use the resulting fine-tuned model (FT) for downstream task training. Following Dhingra et al. (2022), we investigate a variation for temporal modelling (TM) by concatenating time as textual information to the input to encourage the language model to learn temporally relevant features during pretraining.
+
+# DCWE: Dynamic Contextualized Word Embeddings
+
+Hofmann et al. (2021) introduced a principled way to impart extra-linguistic knowledge into contextualized word embeddings by involving a prior distribution. This enables us to integrate temporal information into the embeddings during training. More specifically, for each temporal snapshot (e.g. days, months, years, etc.) present in the training data, an additional set of parameters is learned which acts as a temporal offset added to the original word embeddings. This way the model is able to maintain the semantic meaning of a word embedded in its temporal context. We adapt this idea to our setting by introducing additional parameters for shifting the pre-trained contextualized embeddings. Given a sequence of words/tokens $W = [w_{1}, w_{2}, \dots, w_{n}]$ and their corresponding pre-trained embeddings $H = [h_{1}, h_{2}, \dots, h_{n}]$ . To account for the temporal effect on the word meanings, we model word embeddings as a function of temporal context $t$ associated to $W$ .
+
+$$
+h _ {i} ^ {*} = f \left(h _ {i}, t\right) \tag {1}
+$$
+
+Since meanings of most of the words in the vocabulary are temporally stable, we can place a Normal prior on $h_i^*$ .
+
+$$
+h _ {i} ^ {*} \sim \mathcal {N} \left(h _ {i}, \lambda^ {- 1} I\right) \tag {2}
+$$
+
+Hence, we write as $h_i^* = h_i + d_i$ , where the offset $d_i$ is normally distributed as $d_i \sim \mathcal{N}(0, \lambda^{-1}I)$ . However, pre-trained LMs make this temporal adaptation easily applicable to any task by adding only a regularization term $L_{temporal}$ on top of the task specific loss $L_{task}$ .
+
+$$
+L _ {\text {t e m p o r a l}} = \frac {\lambda}{n} \sum_ {i = 1} ^ {n} \left(\left\| d _ {i} \right\| _ {2} ^ {2} + K \left\| d _ {i} - d _ {i - 1} \right\| _ {2} ^ {2}\right) \tag {3}
+$$
+
+For training the model, the overall loss $L = L_{task} + L_{temporal}$ is minimized. Similarly to Hofmann et al. (2021), we use $K = 10^{3}$ from Bamler and Mandt (2017), to enforce that $h_i^*$ s change smoothly over time.
+
+# LMSOC: Socio-temporally Sensitive Language Modeling
+
+Similar to DCWE, Kulkarni et al. (2021) propose a method to learn extra-linguistic context using
+
+graph representation learning algorithms and then primes with language models to generate language representations grounded in a socio-temporal context. We model the temporal order information as a linear chain graph and adapt this method to our setting by appending temporal graph embeddings to the initial layers of the pre-trained language model. During fine-tuning of the language model, the graph embeddings are kept frozen to inductively yield temporally-aware embeddings.
+
+# TAPH: Time Aware Projection on Hyperplanes
+
+Time adds an additional context or dimension to the knowledge making temporal scoping an imperative part while deriving context embeddings. Therefore, we model temporal information as a hyperplane and define a projection operation (Wang et al., 2014) on it. To build a time-invariant classification model, we project the sentence-embedding (Reimers and Gurevych, 2019) of each text on a hyperplane to obtain a time-aware sentence embedding. We describe the method in more detail.
+
+Let $X = [x_{1}, x_{2}, \ldots, x_{n}]$ be a given sequence of words and $H$ be its sentence embedding. Since the temporal span of our data is short, we assume that the temporal hyperplane $w_{t}$ represents the time frame of the training data. We derive time-aware sentence embeddings $H_{t}$ using our defined projection operation as follows:
+
+$$
+H _ {t} = H - w _ {t} ^ {\top} H w _ {t} \tag {4}
+$$
+
+While training the model, we learn the hyperplane representation $w_{t}$ in addition to fine-tuning the pre-trained embeddings in an end-to-end fashion. However during inference, we assume that we could 'teleport' the data to the past by projecting their sentence embeddings on the hyperplane $w_{t}$ in order to revert their temporal changes. We then use these embeddings in the downstream tasks.
+
+# TDA: Temporal Domain Adaptation
+
+Temporal Adaptation can also be interpreted as a variant of domain adaptation with the difference that the language change happens within the same domain, e.g. induced by external events or the general dynamic characteristics of the source infrastructure (e.g. social media platforms or news outlets). We adapt a widely used domain adaptation method (Ramponi and Plank, 2020) to our
+
+setting. We learn time-aware word representations by adding an additional classification layer during training to predict the time of each text and apply the Gradient Reversal method (Ganin et al., 2016). In this way, the input does not change during the forward pass but this additional layer affects the model parameters during back-propagation of error by an additional penalizing factor. $^4$ This acts as an adversarial training objective forcing the model to adapt to the temporal structure of the data.
+
+# 4 Experimental Setup
+
+# 4.1 Data
+
+We identify a collection of social media data during crisis with observable temporal phases (pre-, acute- and post-crisis), rapid change in language and a natural change in distribution over time - enabling us to evaluate how well temporally adapted models generalize over time. We use three datasets sampled from Twitter: Sandy, T26, and Humaid. We provide an overview here and refer to the Appendix A for dataset details.
+
+Sandy The dataset by Stowe et al. (2018) collected during hurricane Sandy in 2012 contains approximately 22,000 tweets spanning 17 days centered on landfall in New York City, annotated for binary relevance to the storm and its effects.5 The tweets were collected by first identifying users impacted by the event, then retroactively pulling their data from before, during, and after the event. As opposed to keyword collection, this provides a relatively broad collection of both relevant and nonrelevant tweets and a more complete dataset for evaluating temporal drift, as each tweet doesn't necessarily contain the same keyword(s).
+
+T26 The CrisisLex T26 (T26) dataset (Olteanu et al., 2015) includes labeled tweets for 26 different crisis events, labeled by informativeness into four different categories: (1) related to the crisis and informative, (2) related to the crisis but not informative, (3) not related to the crisis, and (4) not applicable category. This collection reflects a wide variety of events covering natural and human-created emergencies, with the added difficulty that the individual datasets are relatively small, with
+
+
+Figure 2: Overview of the data splits used in our experiments. Bins in blue are used during training, bins in yellow for testing, grey bins are not used. The PROGRESSIVE setting comprises multiple experiments with increasing training data size and a single test data bin moving forward temporally.
+
+each event containing only approximately 1,000 tweets.
+
+Humaid The Humaid dataset (Alam et al., 2021) is similar to T26, containing data about 19 different events with dataset sizes ranging from 575 to 9467 tweets. They are annotated with 11 different classes designed to capture fine-grained information related to disaster events.
+
+# 4.2 Data Splits
+
+We follow previous work (Lazaridou et al., 2021; Agarwal and Nenkova, 2022) and create time-based data splits to assess the temporal performance degradation. Specifically, we use three variants of dataset splits: CONTROL, TEMPORAL and PROGRESSIVE. We illustrate this in Figure 2.
+
+TEMPORAL Setup First, we split the entire data into two halves which cover equally-sized time periods. We call these first temporal half and the second temporal half, respectively. In the TEMPORAL setting, we use all the data from the first temporal half as the training data and a test set which is comprised of a randomly sampled $50\%$ of data from the second temporal half of a dataset. This evaluates the model's temporal generalization capabilities on test data from a temporally distant distribution than the training data.
+
+CONTROL Setup To assess whether TEMPORAL setup constrains model's generalization capabilities, we compare its performance with a CONTROL
+
+setup. Here, we evenly spread the training data over time frames, exposing the model to the full knowledge of all time. In this setting, the training data comprises of $50\%$ of instances from the first temporal half, along with $50\%$ instances from the second temporal half, matching the total training data from the TEMPORAL setup. We use the same test set as in TEMPORAL setup while ensuring that there is no overlap between the train and test split from the second temporal half.
+
+Under the assumption that a temporal gap between training and target distribution leads to performance decay, we expect that the CONTROL setup will yield better scores, as the model has access to training instances from the same temporal distribution as the test data.
+
+PROGRESSIVE Setup As described previously, semantic changes are likely to occur in short time spans within crisis-related data streams. Therefore, to investigate a more fine-grained analysis of temporal performance decay, we simulate a scenario in which an event is progressing, we have access to all the previous data, and need to take decisions about the incoming data. In this setup, we split the entire dataset into ten temporally ordered bins with even samples. Then, for each test bin $B_{t}$ , we use all preceding bins $B_{0}$ to $B_{t-2}$ for training. To identify the best performing model across all training epochs, we use bin $B_{t-1}$ for development.
+
+# 4.3 Baseline
+
+For a consistent performance comparison, all proposed models use bert-base-cased as their underlying backbone model for deriving pretrained embeddings.
+
+For the FT setup (see Section 3), we use the available training data for each dataset to run masked language modelling for three epochs to adapt the model to the data. We then fine-tune for the downstream task on the relevant training data using the updated pre-trained model. This will indicate whether the domain is the issue, or whether there is additional temporal effects. In the temporal modeling setup (TM) setup, we follow Dhingra et al. (2022) and presuppose the textual representation of the timestamp for each tweet to the tweet text, then train an additional three epochs of masked language modelling. We then fine-tune for the downstream task on the relevant training data.
+
+Finally, we apply another baseline where we use the timestamp text as second input to the model
+
+during supervised training, separated via a special token (i.e. [SEP] for BERT). We refer to this baseline as SEP.
+
+# 4.4 Hyperparameters and Infrastructure
+
+For a fair comparison, we run all experiments using the same hyperparameters and data splits. We use a learning rate of $1e - 4$ , batch size of 64, weight decay of $1e - 3$ and no warmup due to the limited amount of training data. We use Adam (Loshchilov and Hutter, 2019) as optimization algorithm and train for three epochs. Based on the performance on the development split, we load the best performing model at the end of the training procedure.
+
+We repeat each experiment using five different seeds and take the most frequent prediction across all runs as the final prediction by a model. All models are implemented in Python 3.6 using PyTorch 1.10.2 (Paszke et al., 2019) and the HuggingFace (Wolf et al., 2020) framework (4.18) as model backend. We used a computation cluster containing a mixture of NVIDIA Tesla P100 (16GB), NVIDIA A100 (40GB) and NVIDIA V100 (32GB) GPUs.
+
+# 4.5 Evaluation
+
+We report binary-F1 Score for Sandy and macro-F1 score for multi-class classification task on T26 and Humaid datasets. The comparison of the CONTROL and TEMPORAL setting serves two purposes; first, to quantify the degradation of model performance due to temporal drift and second, to estimate the temporal adaptation ability for our approaches. We expect that models considering temporal information should experience less performance degradation between these two settings compared to the baseline model.
+
+Additionally, we evaluate the mean model performance in the PROGRESSIVE setting for a more fine-grained analysis of temporal degradation.
+
+Temporal Rigidity: While analyzing the effects of temporal drift on model performance, it is necessary to quantify the degradation of model performance due to this phenomenon. We quantify the temporal adaptability of a model using a metric called Temporal Rigidity (TR) score, that summarizes the performance deterioration of a model from aligned to misaligned test data. Higher values of TR imply that the model is not able to adapt itself temporally.
+
+We denote $f_{M}(B_{i}, B_{j})$ as the F1 performance score of a model $M$ when trained using data sam
+
+pled from bin $B_{i}$ and evaluated using data sampled from bin $B_{j}$ . We define TR as:
+
+$$
+T R = \frac {1}{N} \sum_ {i \neq j} \frac {\left| f _ {M} \left(B _ {i} , B _ {j}\right) - f _ {M} \left(B _ {i} , B _ {i}\right) \right|}{\left| i - j \right|} \tag {5}
+$$
+
+In Eqn.5 the normalization factor is given as $N = |\{(i,j):i\neq j\} |$ . Unlike Luu et al. (2022), who do not take temporal proximity of bins into account. We use $\frac{1}{|i - j|}$ as the penalizing factor for the model when training and test bins are temporally close but the performance degradation is significant.
+
+Crisis Phases: Additionally, we utilize the well-known temporal structures of the crisis events (Reynolds and Seeger, 2005; Yang et al., 2013) to analyze model performance. The temporal structure of the Sandy dataset is annotated using pre-, acute- and post-crisis labels. For each model we cluster the time-aware embeddings using KMeans algorithm $(k = 3)$ and report the Normalized Mutual Information score (NMI). NMI gives the correlation between the time-aware embeddings and the temporal structure of the underlying data.
+
+# 5 Results and Analysis
+
+In this section, we attempt to answer the following questions:
+
+Q1. To what degree is temporal performance degradation present in short-term Twitter data during crisis events? (Section 5.1)
+Q2. Does temporal adaptation improve model performance? (Section 5.2)
+Q3. Does the domain of the data play a role in temporal drift? (Section 5.3)
+Q4. How do the proposed models perform when trained continually? (Section 5.4)
+
+# 5.1 Temporal Performance Degradation
+
+In order to estimate the degree of temporal performance degradation in the crisis scenario, we compare the classification performance of the baseline model in the CONTROL and TEMPORAL setting. Table 1 provides the averaged performance difference for all datasets. Given that we only change the temporal distribution of the training data, the effect is substantial with a difference in F1 up to 6.52 points for the Sandy dataset and slightly less
+
+
Data
Sandy
T26
Humaid
CONTROL - TEMPORAL
6.52
4.37
4.10
+
+pronounced on the T26 (4.37) and Humaid (4.10) dataset collections. Therefore, we conclude that, even in short-term scenarios like crisis events on Twitter, temporal distribution of the training data influences the classification performance.
+
+# 5.2 Performance Comparison
+
+Table 1: Temporal Performance Degradation: Averaged F1 performance difference of the CONTROL to TEMPORAL setting for the BERT baseline model. Overall results show that contextualized language models fail to adapt temporally. Refer Section 5.1 for details.
+
+
Method
Sandy
CONT
TEMP
DIFF
BERT
87.70
81.18
6.52
BERT+TM
82.55
70.48
12.07
BERT+SEP
87.79
79.65
8.14
BERT+LMSOC
73.78
67.24
6.54
BERT+DCWE
86.92
79.95
6.97
BERT+TAPH
87.40
82.02
5.38
BERT+TDA
87.10
82.53
4.57
BERTFT
86.96
81.84
5.12
BERTFT+LMSOC
74.89
67.90
6.99
BERTFT+DCWE
86.85
79.53
7.32
BERTFT+TAPH
87.12
82.60
4.52
BERTFT+TDA
86.71
83.43
3.28
+
+Table 2: Temporal Adaptation Evaluation on Sandy: Text classification performance measured in binary F1. Overall, TDA outperforms other approaches in TEMPORAL setting, with and without temporal adaptation (FT). Refer Section 5.2 and 5.3 for details.
+
+We summarize the results on Sandy in Table 2. Overall we find that TDA outperforms all other methods in TEMPORAL setting. We obtain around $1.6\%$ absolute increase over the baseline. We also observe that the difference between model performance in CONTROL and TEMPORAL setting (DIFF) is lowest for TDA $(30.8\%)$ lower than the baseline) indicating the higher robustness of the model. TAPH achieves $1\%$ absolute improvement in performance over the baseline in TEMPORAL setting (DIFF is lower by $16.9\%$ ).
+
+
Method
T26
Humaid
BERT+TM
4 / 26
3 / 19
BERT+SEP
5 / 26
3 / 19
BERT+DCWE
0 / 26
1 / 19
BERT+TAPH
6 / 26
0 / 19
BERT+TDA
10 / 26
4 / 19
BERTFT+DCWE
0 / 26
0 / 19
BERTFT+TAPH
5 / 26
0 / 19
BERTFT+TDA
8 / 26
0 / 19
+
+The $T26$ and Humaid datasets contain data for a multitude of events. Therefore, we aggregate model performances in Table 3 and provide detailed results per event in the Appendix A.2. We see that model performance varies greatly between the Sandy dataset and the others. This is due to two main reasons: (i) Data Size: Most of the event datasets in $T26$ and Humaid are very small, the temporal adaptation methods do not get enough training data to learn the parameters involved in temporal reasoning. To support our argument, we observe, in "Boston Bombings (2013)" dataset of $T26$ , which contains 81,172 annotated tweets, TDA outperforms the baseline by an absolute increases of $6.17\%$ and TAPH comes second with an absolute performance improvement of $2.9\%$ under TEMPORAL setting, a performance pattern which is similar to Sandy dataset. (ii) Data Quality: Unlike Sandy, $T26$ and Humaid have been collected using keyword-based search. This data collection technique has two main drawbacks: firstly, it restricts the data size and secondly, harms the completeness of the dataset collecting tweets that contain same keywords. All the improvements we report are statistically significant ( $p < 0.05$ , using McNemar's Test).
+
+Learning from Temporal Information: To understand the cause of the performance improvement of the models, we utilize the annotated temporal structure of the Sandy dataset. In Table 4 we report two additional metrics: TR Score and NMI, in TEMPORAL setting. Compared to the baseline, TDA is lowest (15.74% decrease) which suggests that TDA
+
+performs most robustly over time across all models. TAPH comes in second with a $9.26\%$ decrease in TR Score from the baseline. NMI scores show similar patterns, with TDA achieving the highest score. We conclude that TDA learns the most meaningful time-aware embeddings.
+
+Table 3: Performance Comparison on T26 and Humaid: The number of datasets for which the specific temporal adaptation method outperforms its baseline counterpart in the TEMPORAL setting. Refer Section 5.2 and 5.3 for details.
+
+
Method
Sandy
TR
NMI
BERT
0.108
0.051
BERT+TM
0.130
0.050
BERT+DCWE
0.111
0.105
BERT+TAPH
0.098
0.185
BERT+TDA
0.091
0.194
+
+Table 4: Temporal Information Learning: Comparison of methods on TR (lower is better) and NMI scores (higher is better). Refer section 5.2 for details.
+
+# 5.3 Effect of Domain of Data
+
+To understand whether the data domain is the main issue behind performance degradation or temporal effects indeed play a significant role, we perform additional experiments. We fine-tune the initial bert-base-cased embeddings for an additional three epochs with Masked Language Modeling Task (MLM) on the training data, before applying the Temporal Adaptation methods. We report the results for Sandy dataset in Table 2. For all models, there remains a substantial performance difference between the CONTROL and TEMPORAL settings which demonstrates the influence of temporal drift on performance. Similar to previous work (Agarwal and Nenkova, 2022), we observe that additional pre-training improves performance for most of the models. Still, TDA outperforms the baseline and TAPH comes in second.
+
+# 5.4 Effect of Continual Learning:
+
+Continual Learning requires continuous annotation of incoming data, which is not feasible during crisis events. However, for the analytical completeness of this paper, we simulate continual learning in the PROGRESSIVE setting to show the effectiveness of our proposed methods. In this setting, initially the models get access to very small amount of data to learn from, which affects model performance. Performance improves as the size of training data increases gradually. In Table 5 we report the model
+
+
Method
Sandy
BERT
68.67
BERT+TM
60.13
BERT+DCWE
67.39
BERT+TAPH
69.13
BERT+TDA
69.50
+
+Table 5: Continual Learning Effects: Average model performance across all bins in PROGRESSIVE setting, in terms of F1 Score. Refer section 5.4 for details.
+
+
+Figure 3: Representative example shows that in comparison with other models TDA correctly puts maximum attention weight on the word katrina (another storm) in the temporal context of the hurricane while computing the contextual embeddings. Refer Section 6 for details.
+
+performance averaged over all the bins. The results show that TDA outperforms and improves the BERT baseline by $1.2\%$ .
+
+# 6 Discussion
+
+Adapting temporally by training on timestamp patterns as text presupended to the input (BERT+TM) underperforms in all experiments. We argue that the added information affects all tokens equally via the self-attention mechanism although only some tokens will experience a semantic shift relevant for text classification in the crisis scenario.
+
+Similarly, the LMSOC and DCWE adaptation approaches cannot outperform the baseline without any temporal adaptation. The additional parameters for computing the temporal offset are not well
+
+tuned for predicting temporal distributions which have not been observed during training.
+
+Figure 3 shows that TDA correctly learns to put maximum attention weight on the word Katrina (i.e. reference to a previous hurricane) in the temporal context of hurricane. We provide representative examples of tweets in Appendix A that all other models but TDA fail to classify correctly. Forcing the model to learn time-invariant embeddings during training using an adversarial signal leads to TDA performing better over all other approaches. Although, TAPH does not fall far behind, it approximates temporal information to create time-static bins. The discrete approximation of temporal information is the main reason behind its performance drop.
+
+# 7 Conclusion
+
+The usage of natural language inevitably changes over time which influences performance of text classification models applied on data from different temporal distributions. We show that this effect is also prevalent for rapid temporal drift using social media during crisis events as an example. With the rise of pretrained contextualized embeddings, a dominant approach is to continue language modeling on data temporally closer to the target distribution. However, during crisis events such data is not available and annotated data is often scarce.
+
+We investigate approaches which work without any additional data besides the input text and its temporal metadata. Our results show that under ideal conditions, i.e. high data quality and sufficient annotated instances, they outperform strong baselines. However, most crucially, our work highlights a critical gap of temporal adaptation for rapid temporal drift, namely if unlabeled data for alignment is missing and annotated data is scarce. Our work opens the door for future research on methods which do not rely on pretraining in unlabeled target domain data. In this sense, crisis data provides an interesting use case for evaluation. We release all our code and models, fostering future work in this area.
+
+# Limitations
+
+While existing approaches account for temporal change of language over long periods of time, in social media this change can happen over the span of a single day during dynamic scenarios like crisis or disastrous events. In this work we study rapid tem
+
+poral drift prevalently observed in social media during a crisis. We observe that often data from social media are collected using keyword based search and data sampling techniques, where data containing the same set of keywords are collected. Since data collected using such techniques are both limited by size and vocabulary, as well by the issues inherent in keyword collection, the datasets naturally affect the performance of the methods described in the paper. Moreover, there exists differences among the types of crisis events (hurricane vs. earthquake) and their respective information needs. Hence, it is difficult to find a solution that works in all scenarios. Additionally, we highlight that evaluation of all the models was done on datasets annotated in presence of a crisis and that may not exactly reflect their performance in a real-world setting without annotated data, especially when differences among the types of crises are relevant. In a nutshell, we observe that during real-world crises, pre-trained language models turn out to be a good solution when access to unlabeled data is scarce and sufficient annotated data is unavailable.
+
+# Acknowledgements
+
+We thank Ilia Kuznetsov, Jan Buchmann, Luke Bates and the anonymous reviewers for their valuable feedback and Firoj Alam for providing us full access to the HumAID dataset. This work has been funded by the German Research Foundation (DFG) as part of the Research Training Group KRITIS No. GRK 2222. It has further been funded by the project "Open Argument Mining" (GU 798/25-1), associated with the Priority Program "Robust Argumentation Machines (RATIO)" (SPP-1999), and by the LOEWE initiative (Hesse, Germany) within the emergenCITY center.
+
+# References
+
+Oshin Agarwal and Ani Nenkova. 2022. Temporal effects on pre-trained models for language processing tasks. Transactions of the Association for Computational Linguistics, 10:904-921.
+Firoj Alam, Umair Qazi, Muhammad Imran, and Ferda Ofli. 2021. Humaid: Human-annotated disaster incidents data from twitter with deep learning benchmarks. In Proceedings of the Fifteenth International AAAI Conference on Web and Social Media, ICWSM. AAAI Press, pages 933-942.
+Robert Bamler and Stephan Mandt. 2017. Dynamic word embeddings. In Proceedings of the 34th International Conference on Machine Learning, ICML
+
+2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 380-389. PMLR.
+Johannes Bjerva, Wouter Kouw, and Isabelle Augenstein. 2020. Back to the future — sequential alignment of text representations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7440-7447, United States. AAAI Press.
+Marco Del Tredici, Raquel Fernandez, and Gemma Boleda. 2019. Short-term meaning shift: A distributional exploration. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2069-2075, Minneapolis, Minnesota. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Bhuwan Dhingra, Jeremy R. Cole, Julian Martin Eisenschlos, Daniel Gillick, Jacob Eisenstein, and William W. Cohen. 2022. Time-Aware Language Models as Temporal Knowledge Bases. Transactions of the Association for Computational Linguistics, 10:257-273.
+Jacob Eisenstein. 2013. What to do about bad language on the internet. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 359-369, Atlanta, Georgia. Association for Computational Linguistics.
+Komal Florio, Valerio Basile, Marco Polignano, Pierpaolo Basile, and Viviana Patti. 2020. Time of Your Hate: The Challenge of Time in Hate Speech Detection on Social Media. Applied Sciences, 10(12).
+Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Lavi-olette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. The journal of machine learning research, 17(1):2096-2030.
+Scott A. Golder and Michael W. Macy. 2011. Diurnal and seasonal mood vary with work, sleep, and daylength across diverse cultures. Science, 333(6051):1878-1881.
+Hila Gonen, Ganesh Jawahar, Djame Seddah, and Yoav Goldberg. 2020. Simple, interpretable and stable method for detecting words with usage change across corpora. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,
+
+pages 538-555, Online. Association for Computational Linguistics.
+Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics.
+William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016a. Diachronic word embeddings reveal statistical laws of semantic change. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1489-1501, Berlin, Germany. Association for Computational Linguistics.
+William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016b. Diachronic word embeddings reveal statistical laws of semantic change. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1489-1501, Berlin, Germany. Association for Computational Linguistics.
+Valentin Hofmann, Janet Pierrehumbert, and Hinrich Schütze. 2021. Dynamic contextualized word embeddings. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6970-6984, Online. Association for Computational Linguistics.
+Kokil Jaidka, Niyati Chhaya, and Lyle Ungar. 2018. Diachronic degradation of language models: Insights from social media. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 195-200, Melbourne, Australia. Association for Computational Linguistics.
+Ganesh Jawahar and Djamé Seddah. 2019. Contextualized diachronic word representations. In Proceedings of the 1st International Workshop on Computational Approaches to Historical Language Change, pages 35-47, Florence, Italy. Association for Computational Linguistics.
+Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. Statistically significant detection of linguistic change. In Proceedings of the 24th International Conference on World Wide Web, pages 1384-1397, Florence, Italy. Association for Computing Machinery.
+Vivek Kulkarni, Shubhanshu Mishra, and Aria Haghighi. 2021. LMSOC: An approach for socially sensitive pretraining. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2967-2975, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+
+Andrey Kutuzov, Lilja Øvrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic word embeddings and semantic shifts: a survey. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1384-1397, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
+Angeliki Lazaridou, Adhiguna Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liska, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d'Autume, Tomás Kočisky, Sebastian Ruder, Dani Yogatama, Kris Cao, Susannah Young, and Phil Blunsom. 2021. Mind the Gap: Assessing Temporal Generalization in Neural Language Models. In Advances in Neural Information Processing Systems.
+Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
+Daniel Loureiro, Francesco Barbieri, Leonardo Neves, Luis Espinosa Anke, and Jose Camacho-collados. 2022a. TimeLMs: Diachronic language models from Twitter. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 251-260, Dublin, Ireland. Association for Computational Linguistics.
+Daniel Loureiro, Aminette D'Souza, Areej Nasser Muhajab, Isabella A. White, Gabriel Wong, Luis Espinosa Anke, Leonardo Neves, Francesco Barbieri, and Jose Camacho-Collados. 2022b. Tempowic: An evaluation benchmark for detecting meaning shift in social media.
+Kelvin Luu, Daniel Khashabi, Suchin Gururangan, Karishma Mandyam, and Noah A. Smith. 2022. Time waits for no one! analysis and challenges of temporal misalignment. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5944-5958, Seattle, United States. Association for Computational Linguistics.
+Matej Martinc, Petra Kralj Novak, and Senja Pollak. 2020. Leveraging contextual embeddings for detecting diachronic semantic shift. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4811-4819, Marseille, France. European Language Resources Association.
+Jianmo Ni, Jiacheng Li, and Julian McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 188-197, Hong Kong, China. Association for Computational Linguistics.
+Alexandra Olteanu, Sarah Vieweg, and Carlos Castillo. 2015. What to expect when the unexpected happens:
+
+Kevin Stowe, Jennings Anderson, Martha Palmer, Leysia Palen, and Ken Anderson. 2018. Improving classification of Twitter behavior during hurricane events. In Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media, pages 67-75, Melbourne, Australia. Association for Computational Linguistics.
+
+Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1).
+
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+
+Seungwon Yang, Haeyong Chung, Xiao Lin, Sunshin Lee, Liangzhe Chen, Andrew Wood, Andrea L. Kavanaugh, Steven D. Sheetz, Donald J. Shoemaker, and Edward A. Fox. 2013. Phasevis: What, when, where, and who in visualizing the four phases of emergency management through the lens of social media. In Proceedings of the 10th International IS-CRAM Conference, pages 912–917, Baden-Baden, Germany.
+
+# A Appendix: Data
+
+# A.1 Data Statistics
+
+The Sandy dataset spans 18 days with 23k tweets. The Humaid datasets range from 560 to 9399 tweets, from 1 to 81 days. The T26 datasets range from 1000 to 1442 tweets, over 7 to 56 days. In Table 6 and 7 we show the dataset statistics for the T26 datasets and Humaid datasets, respectively. Note that the Typhoon Pablo event from the original T26 dataset had only seven unlabelled tweets that could be successfully recovered: we therefore remove it from all experiments.
+
+# A.2 Detailed Results for T26 and Humaid
+
+In Tables 8 and 9 we provide the detailed evaluation results of the proposed approaches on T26 and Humaid.
+
+Progressive Events
+
+
Event
Dates (MM.DD.YY)
Total Days
Tweets
Colorado Floods (2013)
09.08.13 - 10.01.13
19
1,231
Sardinia Floods (2013)
11.16.13 - 11.28.13
13
824
Philipinnes Floods (2012)
08.07.12 - 08.15.12
13
1,341
Alberta Floods (2013)
06.20.13 - 07.16.13
24
4,040
Manila Flood (2013)
08.17.13 - 08.27.13
11
1,068
Queensland loods (2013)
01.17.13 - 02.05.13
19
727
Typhoon Yolanda (2013)
05.11.13 - 12.30.13
53
253
Australia bushfire (2013)
10.12.13 - 11.03.13
22
1,244
Colorado wildfires (2012)
06.08.12 - 07.08.12
31
2,901
Singapore haze (2013)
06.14.13 - 07.04.13
18
1,572
+
+Instantaneous Events
+
+
Italy earthquakes (2012)
05.18.12 - 06.14.12
28
5,219
Costa Rica earthquake (2012)
09.05.12 - 09.21.12
18
1,641
Bohol earthquake (2013)
10.14.13 - 10.25.13
12
1,131
Guatemala earthquake (2012)
11.06.12 - 11.25.12
20
2,233
LA airport shootings (2013)
11.01.13 - 11.12.13
12
1,737
Boston bombings (2013)
04.15.13 - 06.11.13
46
81,172
West Texas explosion (2013)
04.18.13 - 05.15.13
27
8,152
Venezuela refinery explosion (2012)
12.08.24 - 12.09.05
13
2,007
Brazil nightclub fire (2013)
01.27.13 - 02.11.13
16
2,644
Savar building collapse (2013)
04.23.13 - 06.01.13
39
2,646
Spain train crash (2013)
07.24.13 - 08.07.13
14
2,288
Lac Megantic train crash (2013)
07.06.12 - 07.26.12
21
1,755
NY train crash (2013)
12.01.13 - 12.08.13
9
667
Glasgow helicopter crash (2013)
11.29.13 - 12.29.13
30
1,541
Russia meteor (2013)
02.14.13 - 03.05.13
19
4,289
+
+Table 6: Summary of the T26datasets. The progressive and instantaneous splits were done manually based on the type of crisis event.
+Progressive Events
+
+
Event (Year)
Dates (MM.DD.YY)
Total Days
Nr. Tweets
Canada Wildfires (2016)
17.04.16 - 25.12.16
253
2,258
Hurricane Matthew (2016)
04.10.16 - 05.12.16
74
1,659
Sri Lanka Floods (2017)
31.05.17 - 03.07.17
34
575
Hurricane Harvey (2017)
17.08.17 - 19.09.17
34
9,164
Hurricane Irma (2017)
06.09.17 - 21.09.17
16
9,467
Hurricane Maria (2017)
16.09.17 - 02.10.17
17
7,328
Maryland Floods (2018)
28.05.18 - 07.06.18
11
747
Greece Wildfires (2018)
24.07.18 - 18.08.18
26
1,526
Kerala Floods (2018)
17.08.18 - 12.09.18
27
8,056
Hurricane Florence (2018)
11.09.18 - 17.11.18
68
6,359
California Wildfires (2018)
10.11.18 - 07.12.18
28
7,444
Cyclone Idai (2019)
15.03.19 - 16.04.19
33
3,944
Midwestern U.S. Floods (2019)
25.03.19 - 03.04.19
26
1,930
Hurricane Dorian (2019)
30.08.19 - 02.09.19
4
7,660
+
+Instantaneous Events
+
+
Ecuador Earthquake (2016)
17.04.16 - 25.12.16
253
1,594
Italy Earthquake (2016)
24.08.16 - 29.08.16
6
1,240
Kaikoura Earthquake (2016)
01.09.16 - 22.11.16
83
2,217
Mexico Earthquake (2017)
20.09.17 - 06.10.17
17
2,036
Pakistan Earthquake (2019)
24.09.19 - 26.09.19
3
1,991
+
+Table 7: Summary of the Humaid datasets. The progressive and instantaneous splits were done manually based on the type of crisis event.
+
+
Event
Humaid
BERT
BERT+TM
BERT+SEP
BERT+DCWE
BERT+TAPH
BERT+TDA
CONT
TEMP
CONT
TEMP
CONT
TEMP
CONT
TEMP
CONT
TEMP
CONT
TEMP
Progressive Events
Colorado Floods (2013)
0.309
0.309
0.309
0.309
0.309
0.309
0.309
0.309
0.309
0.309
0.309
0.309
Sardinia Floods (2013)
0.255
0.315
0.310
0.287
0.239
0.285
0.179
0.298
0.179
0.211
0.299
0.288
Philipinnes Floods (2012)
0.276
0.270
0.307
0.269
0.213
0.278
0.213
0.213
0.213
0.213
0.213
0.269
Alberta Floods (2013)
0.314
0.202
0.307
0.200
0.300
0.202
0.202
0.202
0.202
0.202
0.296
0.202
Manila Floods (2013)
0.369
0.369
0.367
0.366
0.337
0.372
0.190
0.350
0.308
0.355
0.380
0.374
Queensland Floods (2013)
0.423
0.353
0.486
0.342
0.361
0.331
0.374
0.351
0.318
0.314
0.472
0.355
Typhoon Yolanda (2013)
0.211
0.211
0.235
0.260
0.317
0.399
0.211
0.211
0.211
0.211
0.211
0.211
Australia Bushfire (2013)
0.447
0.450
0.583
0.585
0.449
0.522
0.426
0.421
0.422
0.461
0.577
0.547
Colorado Wildfires (2012)
0.569
0.370
0.584
0.370
0.541
0.335
0.533
0.370
0.446
0.330
0.567
0.222
Singapore Haze (2013)
0.363
0.348
0.360
0.340
0.352
0.344
0.357
0.332
0.361
0.349
0.360
0.351
Instantaneous Events
Italy Earthquakes (2012)
0.332
0.321
0.316
0.285
0.331
0.304
0.287
0.267
0.274
0.316
0.326
0.318
Costa Rica Earthquake (2012)
0.582
0.240
0.564
0.132
0.603
0.102
0.554
0.102
0.537
0.102
0.543
0.102
Bohol Earthquake (2013)
0.585
0.579
0.566
0.566
0.574
0.568
0.569
0.574
0.574
0.571
0.582
0.577
Guatemala Earthquake (2012)
0.568
0.484
0.401
0.437
0.274
0.274
0.425
0.274
0.274
0.274
0.474
0.434
LA Airport Shootings (2013)
0.534
0.475
0.518
0.465
0.210
0.378
0.376
0.312
0.309
0.192
0.356
0.382
Boston Bombings (2013)
0.358
0.340
0.362
0.349
0.360
0.356
0.378
0.300
0.363
0.352
0.354
0.361
West Texas Explosion (2013)
0.411
0.398
0.405
0.396
0.412
0.396
0.396
0.392
0.407
0.405
0.407
0.409
Venezuela Refinery Explosion (2012)
0.368
0.347
0.359
0.336
0.360
0.344
0.339
0.339
0.361
0.335
0.362
0.343
Brazil Nightclub Fire (2013)
0.426
0.431
0.425
0.413
0.416
0.416
0.422
0.302
0.424
0.412
0.431
0.315
Savar Building Collapse (2013)
0.426
0.352
0.424
0.347
0.404
0.348
0.413
0.227
0.411
0.180
0.413
0.200
Spain Train Crash (2013)
0.463
0.446
0.490
0.539
0.481
0.447
0.355
0.402
0.324
0.460
0.456
0.449
Lac Megantic Train Crash (2013)
0.319
0.318
0.326
0.318
0.310
0.174
0.289
0.270
0.301
0.210
0.325
0.319
NY Train Crash (2013)
0.490
0.573
0.520
0.566
0.490
0.565
0.490
0.490
0.490
0.490
0.490
0.742
Glasgow Helicopter Crash (2013)
0.554
0.292
0.527
0.290
0.543
0.292
0.502
0.309
0.390
0.298
0.491
0.321
Russia Meteor (2013)
0.392
0.412
0.372
0.339
0.412
0.412
0.296
0.324
0.324
0.305
0.321
0.316
+
+Table 8: Results for the T26 datasets. The progressive and instantaneous splits were done manually based on the type of crisis event.
+
+
Event
Humaid
BERT
BERT+TM
BERT+SEP
BERT+DCWE
BERT+TAPH
BERT+TDA
CONT
TEMP
CONT
TEMP
CONT
TEMP
CONT
TEMP
CONT
TEMP
CONT
TEMP
Progressive Events
Canada Wildfires (2016)
0.419
0.414
0.420
0.410
0.353
0.319
0.235
0.244
0.248
0.249
0.376
0.367
Hurricane Matthew (2016)
0.355
0.261
0.396
0.257
0.317
0.131
0.317
0.131
0.335
0.118
0.369
0.273
Sri Lanka Floods (2017)
0.092
0.092
0.092
0.092
0.092
0.092
0.092
0.092
0.092
0.092
0.092
0.092
Hurricane Harvey (2017)
0.635
0.663
0.639
0.669
0.637
0.645
0.589
0.586
0.578
0.587
0.583
0.581
Hurricane Irma (2017)
0.624
0.618
0.639
0.614
0.610
0.579
0.566
0.549
0.568
0.553
0.579
0.545
Hurricane Maria (2017)
0.620
0.628
0.640
0.621
0.603
0.602
0.507
0.575
0.501
0.581
0.600
0.529
Maryland Floods (2018)
0.183
0.147
0.197
0.141
0.173
0.077
0.208
0.166
0.188
0.101
0.198
0.155
Greece Wildfires (2018)
0.216
0.199
0.219
0.198
0.212
0.106
0.214
0.104
0.214
0.106
0.232
0.176
Kerala Floods (2018)
0.470
0.422
0.421
0.420
0.480
0.382
0.354
0.348
0.341
0.347
0.379
0.346
Hurricane Florence (2018)
0.663
0.510
0.664
0.500
0.658
0.481
0.590
0.435
0.586
0.417
0.649
0.421
California Wildfires (2018)
0.601
0.484
0.624
0.567
0.571
0.485
0.544
0.455
0.558
0.470
0.575
0.485
Cyclone Idai (2019)
0.372
0.350
0.370
0.350
0.352
0.331
0.287
0.298
0.319
0.294
0.347
0.300
Midwestern U.S. Floods (2019)
0.300
0.405
0.300
0.362
0.277
0.301
0.137
0.229
0.192
0.217
0.251
0.261
Hurricane Dorian (2019)
0.560
0.554
0.550
0.559
0.568
0.557
0.553
0.527
0.552
0.470
0.554
0.533
Instantaneous Events
Ecuador Earthquake (2016)
0.298
0.186
0.310
0.158
0.260
0.148
0.309
0.163
0.236
0.146
0.311
0.182
Italy Earthquake (2016)
0.395
0.266
0.403
0.260
0.350
0.090
0.118
0.090
0.175
0.090
0.401
0.274
Kaikoura Earthquake (2016)
0.434
0.353
0.426
0.350
0.283
0.251
0.205
0.164
0.229
0.196
0.484
0.266
Mexico Earthquake (2017)
0.340
0.318
0.341
0.300
0.283
0.262
0.269
0.258
0.245
0.264
0.289
0.281
Pakistan Earthquake (2019)
0.273
0.205
0.260
0.200
0.243
0.168
0.203
0.168
0.190
0.162
0.350
0.215
+
+Table 9: Results for the Humaid datasets. The progressive and instantaneous splits were done manually based on the type of crisis event.
+
+
Tweet
Analysis
Rep. Michael Grimm says situation in Staten is-land is "another Katrina situation"
TDA correctly identifies Katrina as the name of the storm in the temporal context of hurricane Sandy, while other models fails.
#queenscom ingtogether Eric Ulrich brought the keg donated by Russos on the bay.
Adversarial signal forces TDA to lean time-invariant embedding for the word #queenscom-ingtogether.
+
+Table 11: Representative examples showing tweets that TDA model correctly classifies while other models fail. Refer Section 6 for details.
\ No newline at end of file
diff --git a/thechallengesoftemporalalignmentontwitterduringcrises/images.zip b/thechallengesoftemporalalignmentontwitterduringcrises/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1dc7bcc499bc4b7e63f1c395c6d6a4d43b1ffb94
--- /dev/null
+++ b/thechallengesoftemporalalignmentontwitterduringcrises/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4c8a6d2b543b7164f4167561b030d1c6a890c158d65a4b20465ec32034502123
+size 990804
diff --git a/thechallengesoftemporalalignmentontwitterduringcrises/layout.json b/thechallengesoftemporalalignmentontwitterduringcrises/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..bf05236bd1ecf87972e30e61a078617485df74be
--- /dev/null
+++ b/thechallengesoftemporalalignmentontwitterduringcrises/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7d068cef98b0e5a4976bebf5a148cecec284b367fa5ed7ec84ac67b5785cb2a4
+size 380071
diff --git a/thecuriouscaseofabsolutepositionembeddings/5dfe178e-adb0-45b0-a63b-f83192bfd3a1_content_list.json b/thecuriouscaseofabsolutepositionembeddings/5dfe178e-adb0-45b0-a63b-f83192bfd3a1_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..24739870fb4831f635669ff02c4d9f07482d3147
--- /dev/null
+++ b/thecuriouscaseofabsolutepositionembeddings/5dfe178e-adb0-45b0-a63b-f83192bfd3a1_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:244726a92dace303e20a934256ed906d7eb5f3f51b546f5a65785c37f738d461
+size 148000
diff --git a/thecuriouscaseofabsolutepositionembeddings/5dfe178e-adb0-45b0-a63b-f83192bfd3a1_model.json b/thecuriouscaseofabsolutepositionembeddings/5dfe178e-adb0-45b0-a63b-f83192bfd3a1_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..a5ce5260ea386e9d15d5a0e473c732998c7e9ab1
--- /dev/null
+++ b/thecuriouscaseofabsolutepositionembeddings/5dfe178e-adb0-45b0-a63b-f83192bfd3a1_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5f6f8f78900a9053beb054a6261724f8f59a8652b4cfd2e74a6190410ce1e036
+size 164821
diff --git a/thecuriouscaseofabsolutepositionembeddings/5dfe178e-adb0-45b0-a63b-f83192bfd3a1_origin.pdf b/thecuriouscaseofabsolutepositionembeddings/5dfe178e-adb0-45b0-a63b-f83192bfd3a1_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c3facc7ce0f6b57861914a473361bd0ecf558292
--- /dev/null
+++ b/thecuriouscaseofabsolutepositionembeddings/5dfe178e-adb0-45b0-a63b-f83192bfd3a1_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d0085e0c7bfcc58134cfd85b59059e83c433046cc050c80b9591310715892ea
+size 902636
diff --git a/thecuriouscaseofabsolutepositionembeddings/full.md b/thecuriouscaseofabsolutepositionembeddings/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..180e3af1f00e2c276c01f90cb8d7f48fc5eed1b6
--- /dev/null
+++ b/thecuriouscaseofabsolutepositionembeddings/full.md
@@ -0,0 +1,690 @@
+# The Curious Case of Absolute Position Embeddings
+
+Koustuv Sinha\* Amirhossein Kazemnejad \*
+
+Siva Reddy‡ Joelle Pineau†‡ Dieuwke Hupkes† Adina Williams†
+
+$\ddagger$ McGill University / Mila - Quebec AI; † Meta AI
+
+{koustuv.sinha,amirhossein.kazemnejad}@mail.mcgill.ca
+
+# Abstract
+
+Transformer language models encode the notion of word order using positional information. Most commonly, this positional information is represented by absolute position embeddings (APEs), that are learned from the pretraining data. However, in natural language, it is not absolute position that matters, but relative position, and the extent to which APEs can capture this type of information has not been investigated. In this work, we observe that models trained with APE over-rely on positional information to the point that they break-down when subjected to sentences with shifted position information. Specifically, when models are subjected to sentences starting from a non-zero position (excluding the effect of priming), they exhibit noticeably degraded performance on zero- to full-shot tasks, across a range of model families and model sizes. Our findings raise questions about the efficacy of APEs to model the relativity of position information, and invite further introspection on the sentence and word order processing strategies employed by these models.
+
+# 1 Introduction
+
+Recently, Transformer (Vaswani et al., 2017) language models (TLMs) have been widely used for natural language applications. Such models incorporate positional encodings: vectors encoding information about the order of words in context. Many models, such as RoBERTa (Liu et al., 2019), GPT3 (Brown et al., 2020) and OPT (Zhang et al., 2022), utilize absolute position embeddings (APEs) that directly encode absolute (linear) word order. APEs appear to contribute to the performance of such models; although when they are removed, some models become sensitive to ablative word scrambles (Sinha et al., 2021), while others work optimally (Haviv et al., 2022). Thus, what precisely APEs contribute remains unclear.
+
+Zero starting position
+
+Who could Thomas observe without distracting Nathan?
+
+Non-zero starting position
+Figure 1: Transformer models with absolute positional embeddings have different representations for sentences starting from non-zero positions.
+
+0 1 2 3 4 5 6 7
+Who could Thomas observe without distracting Nathan?
+100 101 102 103 104 105 106 107 →
+
+It is conceivable that APEs may enable the model to handle the relative distances between words. If models were somehow learning relative position information despite using absolute positional embeddings, we would expect sentence encodings to be the same in most cases, regardless of where they appear in the context window. For example, the meaning of "smoking kills" should be constant in "Kim said smoking kills" (positions 2-3) and "It was commonly believed by most adult Americans in the 90s that smoking kills" (positions 13-14), despite the fact that these words appear in different absolute positions. Given this, our central question is: do APEs enable the model to learn the relative distances between the words in a sentence?
+
+Prior work has attempted to explore the consequences of APEs using probing methods (Wang et al., 2021). APEs have been found to not capture the meaning of absolute or relative positions (Wang and Chen, 2020). APEs have also been found to bias model output with positional artefacts (Luo et al., 2021), leading to better performance on token to position de-correlation (Ke et al., 2021). Haviv et al. (2022) even find that causal TLMs perform adequately even without an explicit APEs. However, a systematic study on relativity of positional encodings is still needed.
+
+To better understand the relativity of absolute
+
+position embeddings, we first need to ascertain the robustness of relative position understanding for a given input. TLMs are typically trained in a batch containing multiple sentences, with a limited sequence window size, which is typically much larger than an average sentence. We hypothesize that a systematic model should encode the same sentence equally throughout this context window. However, evaluating the encoding of a sentence starting from any position in this window in isolation is hard, as the representation of the sentence would depend on the prior context (Misra et al., 2020; Kassner and Schütze, 2020).
+
+In this work, we subject models from several different architectures and sizes to phase shifting. In this paradigm, the sentences exposed to the model are provided contiguous position identifiers starting from a non-zero position (Figure 1). Such inspection allows us to gauge the model's sentence encodings on different positions, emulating subwindow sentence representation, while factoring out the influence of prior context. We investigate several zero shot, few shot and full shot tasks by shifting the start positions of the sentences. We observe the following:
+
+- TLMs display different sub-window sentence representation capabilities, resulting in decreased zero shot task performance and variability in sentence perplexities.
+- Autoregressive models, including the recently published OPT (Zhang et al., 2022), show erratic zero and few-shot performance on subwindow representations, highlighting the brittleness of in-context learning evaluation.
+- Masked Language Models (MLMs) encode sentences in non-standard positions better than their autoregressive counterparts.
+- During fine-tuning models suffer drastically on cross phase-shifted evaluation, suggesting position specific overfitting.
+
+We aim to raise awareness about issues with APEs, which are still widely used in pre-training large language models. Our results highlight the severity of position shortcuts taken by the model during pretraining and fine-tuning, and imply that TLMs may have vastly varying sub-window sentence representation capability than previously assumed. We will
+
+release the code and analysis used in this work on Github.
+
+# 2 Approach
+
+Position encodings used by TLMs come in three broad categories: fixed sinusoidal embeddings as proposed by Vaswani et al. (2017), absolute or learned popularized by BERT (Devlin et al., 2019) family of masked language models, and relative positions (Shaw et al., 2018) used by T5 (Raffel et al., 2020). Wang et al. (2021) presents a comprehensive overview of current encoding strategies.
+
+Despite being an older method, absolute positional embeddings (APEs) are reportedly better than its relative counterparts on several tasks (Ravishankar et al., 2021), and are still used by majority of the large pre-trained TLMs, including the recently released OPT (Zhang et al., 2022). APEs compute token representation after adding the input token to the position embedding for the corresponding position: $x_{i} = \theta_{W}[w_{i}] + \theta_{P}[i]$ , where, $\theta_W\in \mathbf{R}^{|V|\times d}$ is the token vocabulary of size $|V|$ , embedding dimension $d$ , and the absolute position embedding matrix $\theta_P\in \mathbf{R}^{|T|\times d}$ , where $T$ is the maximum context window size of the model. Now, a sentence $S = [w_{1},w_{2}\dots w_{n}]$ containing $n$ tokens, is mapped during inference to positions 1,2,... $n$ contiguously for all models.
+
+TLMs offer various sizes of context window, which is the maximum sequence length in tokens it can train and infer on. Since this context window is usually larger than the average sentence length, multiple sentences can be packed together to "fill" the context window during pre-training. This allows TLMs to learn that sentences can start from various positions in their context window. If models trained with APEs do encode relativity of position, then the sentence representations should be roughly equal throughout the context window, regardless of their starting position.
+
+# 2.1 Phase Shift Methodology
+
+To understand the relativity of APEs, we examine the model performance under phase shift conditions. Phase shift2 involves right-shifting the absolute positions of all tokens in the sentence by an equal distance $k$ , such that the tokens are now
+
+
+Figure 2: Acceptability Scores in BLiMP (Warstadt et al., 2020) dataset across different phase shifts. RoBERTa only supports context window of size $T = 512$ , so we capped the scores to phase shift $k = 300$ to allow for sentences of maximum length in BLiMP to be evaluated.
+
+mapped to new positions $1 + k, 2 + k, \ldots, n + k$ , or $x_{i} = \theta_{W}[w_{i}] + \theta_{P}[i + k]$ . As such, phase shifting changes only the absolute position, but preserves the relative distances between tokens in the a sentence. Theoretically, we can shift the positions within the context window as long as $k + n \leq T$ . For example, given phase shift $k = 100$ , and sentence length of $n$ , we could have the following vector of position ids:
+
+$$
+\vec {p} = [ 1 0 1, 1 0 2, 1 0 3, \dots , n + 1 0 0 ]
+$$
+
+While computing the task scores and perplexities of the models, we observed that all of the models exhibit poor task performance on phase shifts. Due to the non-shiftable nature of the [CLS] token in masked language models (MLMs), we first fix the position of [CLS] token to start position during phase shifting, which results in significantly improved performance for all models:
+
+$$
+\vec {p} = [ 1, 1 0 2, 1 0 3, \dots , n + 1 0 0 ]
+$$
+
+Furthermore, we observed yet another marked improvement in task performance when we use special tokens in the beginning of the sentence: typically the end-of-sentence ([EOS]) token in case of MLM models (RoBERTa, BART). An explanation for this ambiguity in results is that typically when models are pre-trained, multiple sentences are packed together in the context window by delimiting the start of each sentence with an [EOS]
+
+
+Figure 3: Distribution of sentences in BLiMP (Warstadt et al., 2020) having the lowest perplexities (i.e., are deemed most acceptable) for each phase shift.
+
+
+
+token $^{3}$ . Thus, in all of our results, we opt with this configuration (adding an [EOS] token before the sentence) to ensure fairer evaluation for all model families. Concretely, the input to a model uses the following template $^{4}$ :
+
+$$
+[ C L S ] [ E O S ] < s e n t e n c e >
+$$
+
+# 3 Impact of phase shifts on grammatical acceptability
+
+First, we investigate the impact of phase shifting on the model performance. We compute the perplexities of several publicly available models—RoBERTa (Liu et al., 2019), BART (Lewis et al., 2020), GPT2 (Radford et al., 2019) and OPT (Zhang et al., 2022)—to evaluate the grammatical acceptability capabilities of the model, using the BLiMP (Warstadt et al., 2020) benchmark. We compute the task score by comparing grammatical and ungrammatical sentence perplexities, and applying the phase shift in increasing values of $k$ to the sentences and models (Figure 2).
+
+We observe that the task performance of all models, except for RoBERTa, drastically suffers from phase shifting. Autoregressive models in particular display worse results. This is likely due to a mismatch of position information learned due to
+
+
+Figure 4: Aggregate performance of OPT family on six NLP tasks when various phase shifts are applied.
+
+
+
+
+
+the causal language modelling objective vs the position information provided to the model during phase shift (Haviv et al., 2022). We also compare the perplexities of each sentence across different phase shifts and plot the frequency of sentences having the lowest perplexity in each $k$ (Figure 3). We observe in GPT2 that more than $70\%$ of the sentences have their best perplexity in $k = 0$ , highlighting a severe zero-position bias. $\mathrm{OPT}_{350\mathrm{M}}$ has better sub-window sentence representation capacity than similarly sized GPT2, which is also evident from the acceptability results in Figure 2.
+
+# 4 Impact of phase shifts on in-context learning
+
+More recently, zero-shot and few-shot inference, commonly referred to as in-context learning, have become a de facto standard in evaluating pretrained language models (Brown et al., 2020). In this approach, the model's predictions are produced by conditioning it on certain prompts, such as instructions (zero-shot setting) or a few examples of input-output pairs (few-shot setup). In both cases, the model faces an extended input text, and we suspect it will be affected by deficiencies of APE. To evaluate this hypothesis, we employ an experimental setup similar to §3. Under zero-shot and five-shot inference regimes, we assess the model performance on standard NLP tasks when it is fed with inputs in increasing values of phase shifts. We choose OPT model family, because it is available in a wide range of sizes (125M to 30B parameters), allowing allows us to examine the behavior of APE at different scales. Moreover, our evaluations take into account four tasks reported in the original pa
+
+
+Figure 5: Distribution of prompts with best accuracy across all six tasks.
+
+per: Winogrande (Sakaguchi et al., 2020), COPA (Gordon et al., 2012), PIQA (Bisk et al., 2020), and ARC (Clark et al., 2018) as well as two classification datasets from GLUE benchmark (Wang et al., 2019): MRPC and RTE. We provide an aggregated view of the models' performance on all six accuracy-dominated benchmarks in Figure 4. The detailed plots for each task are in Appendix B.
+
+In most tasks, the performance deteriorates when the model process inputs in any other phase shift than zero, especially in zero-shot inference. More importantly, the model's performance is not always adversely affected by phase shifts. In fact, Figure 5 shows that non-zero starting positions result in the best accuracy for many prompts. This erratic performance is present in all model sizes, and scaling the number of parameters does not help. Furthermore, one can see larger models are more affected by shifted starting position, which suggests that absolute positional embedding might need more data or training as the number of parameters increases.
+
+
+Figure 6: GLUE task heatmap with varying fine-tuning train and test phase shifts, averaged across all models. Darker colors represent better task performance.
+
+# 5 Impact of phase-shifts on fine-tuning
+
+Finally, we investigate the effect of phase shift in fine-tuning. We ask whether the models can generalize to out-of-phase sentences for a given task. We train RoBERTa, BART, GPT2 and OPT models on CoLA, RTE and MRPC tasks from the GLUE benchmark (Wang et al., 2019) and evaluate them on phase-shifts. We choose these three relatively small tasks in order to decrease the number of gradient updates to position embeddings during fine-tuning. We perform a cross-phase analysis by training and evaluating across different phase shifts $(k = 0, 100, 200, 300)$ for all models on the same set of datasets, and show the averaged performance. We observe for all models, the task performance drops during out-of-phase evaluation (non-diagonals in Figure 6).
+
+The drop in performance of evaluating out-of-phase sentences might just be simply attributed to overfitting on position information during finetuning. However, we observe that for all tasks, training and evaluating on the same phase-shift is worse when $k \neq 0$ (diagonals in Figure 6). Out-of-phase training appears to be worst for CoLA, which suffers drastically when fine-tuning on different phase shifts. These results highlight a potential task data bias with respect to different positions.
+
+# 6 Conclusion
+
+In this work, we investigate the abilities of APEs in encoding the relative positions of the tokens in an input. We observe that TLMs using APEs encode sentences differently based on the starting position of the sentence in the context window. This result has major implications in the way we perceive the sentence processing capabilities of TLMs. Specifically, we observe that the representation of the same sentence varies depending on where it is in the context window, such that it impacts zero shot, few shot and full shot task performance of sub-window sentences. Future work could leverage
+
+the start position in building robust and positiongeneralizable models. We hope our work can inform the community on the pitfalls of using APEs, and inspire development and adoption of alternative relative position embedding based approaches.
+
+# Limitations
+
+Our work primarily focuses on evaluating the relative position encoding of APEs. We do not focus on the relative position embeddings (Shaw et al., 2018; Raffel et al., 2020) (RPE) as our method of phase-shift analysis is not applicable to those classes of models. RPEs employ a window based position information computation on the fly, which does not require it to store embeddings uniquely for each position. Thus, a phase shift in RPE would not change the sentence processing pipeline, as the model recomputes the position information based on the shifted window. Thus, we need different tools to study the relative position encoding of RPE than the one proposed in this paper.
+
+We also acknowledge that our study is primarily focused on English language data from BLiMP and GLUE. It is likely the same results would hold in a multi-lingual model, however, since many languages are less word order inflexible than English, that should be investigated in a follow-up work.
+
+# Ethical Consideration
+
+Our work aims at understanding the difference in sentence representation by shifting position information. In practice, this could yield un-intended results from a TLM deployed in production. Since we observe a large variation in results, we would advise for caution when deploying TLMs in sensitive real world applications, as the relative positioning of a given sentence might evoke different responses from the model. We hope our work can be useful to motivate the use of better positional encoding schemes in pre-training TLMs in future.
+
+# Acknowledgements
+
+We would like to thank Kanishka Misra, Shagun Sodhani, Stephen Roller and Kushal Arora for their feedback on the initial versions of this draft. We are also grateful for anonymous reviewers' feedback. Siva Reddy acknowledges the support by the Facebook CIFAR AI Chair program.
+
+# References
+
+Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru, Giri Anantharaman, Xian Li, Shuhui Chen, Halil Akin, Mandeep Baines, Louis Martin, Xing Zhou, Punit Singh Koura, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Mona T. Diab, Zornitsa Kozareva, and Ves Stoyanov. 2021. Efficient large scale language modeling with mixtures of experts. CoRR, abs/2112.10684.
+Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. CoRR, abs/2004.05150.
+Yonatan Bisk, Rowan Zellers, Ronan LeBras, Jianfeng Gao, and Yejin Choi. 2020. PIQA: reasoning about physical commonsense in natural language. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7432-7439. AAAI Press.
+Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. 2021. GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow. If you use this software, please cite it using these metadata.
+Sidney Black, Stella Biderman, Eric Hallahan, Quentin Gregory Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Martin Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. 2022. GPT-neox-20b: An open-source autoregressive language model. In *Challenges & Perspectives in Creating Large Language Models*.
+Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
+Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek
+
+Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan First, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways.
+Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the AI2 reasoning challenge. CoRR, abs/1803.05457.
+Róbert Csordás, Kazuki Irie, and Juergen Schmidhuber. 2021. The devil is in the detail: Simple tricks improve systematic generalization of transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 619-634, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978-2988, Florence, Italy. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).
+Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2021. A framework for few-shot language model evaluation.
+Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing
+
+textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 1-9, Prague. Association for Computational Linguistics.
+Andrew Gordon, Zornitsa Kozareva, and Melissa Roemmele. 2012. SemEval-2012 task 7: Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 394–398, Montréal, Canada. Association for Computational Linguistics.
+Adi Haviv, Ori Ram, Ofir Press, Peter Izsak, and Omer Levy. 2022. Transformer Language Models without Positional Encodings Still Learn Positional Information. ArXiv preprint, abs/2203.16634.
+Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni. 2020. Compositionality decomposed: How do neural networks generalise? (extended abstract). In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pages 5065-5069. International Joint Conferences on Artificial Intelligence Organization. Journal track.
+Nora Kassner and Hinrich Schütze. 2020. Negated and misprimed probes for pretrained language models: Birds can talk, but cannot fly. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7811-7818, Online. Association for Computational Linguistics.
+Guolin Ke, Di He, and Tie-Yan Liu. 2021. Rethinking positional encoding in language pre-training. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
+Shun Kiyono, Sosuke Kobayashi, Jun Suzuki, and Kentaro Inui. 2021. SHAPE: Shifted absolute position embedding for transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3309-3321, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Brenden M. Lake and Marco Baroni. 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In ICML.
+Hector J. Levesque, Ernest Davis, and L. Morgenstern. 2011. The winograd schema challenge. In KR.
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,
+
+pages 7871-7880, Online. Association for Computational Linguistics.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
+Ziyang Luo, Artur Kulmizev, and Xiaoxi Mao. 2021. Positional artefacts propagate through masked language model embeddings. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5312-5327, Online. Association for Computational Linguistics.
+Brian W. Matthews. 1975. Comparison of the predicted and observed secondary structure of t4 phage lysozyme. Biochimica et biophysica acta, 405 2:442-51.
+Kanishka Misra. 2022. minicons: Enabling flexible behavioral and representational analyses of transformer language models. ArXiv preprint, abs/2203.13112.
+Kanishka Misra, Allyson Ettinger, and Julia Rayz. 2020. Exploring BERT's sensitivity to lexical cues using tests from semantic priming. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4625-4635, Online. Association for Computational Linguistics.
+Santiago Ontanon, Joshua Ainslie, Zachary Fisher, and Vaclav Cvicek. 2022. Making transformers solve compositional tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3591-3607, Dublin, Ireland. Association for Computational Linguistics.
+Ofir Press, Noah Smith, and Mike Lewis. 2022. Train short, test long: Attention with linear biases enables input length extrapolation. In International Conference on Learning Representations.
+Ofir Press, Noah A. Smith, and Mike Lewis. 2021. Shortformer: Better language modeling using shorter inputs. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5493-5505, Online. Association for Computational Linguistics.
+Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
+
+Maithra Raghu, Thomas Unterthiner, Simon Kornblith, Chiyuan Zhang, and Alexey Dosovitskiy. 2021. Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems, 34:12116-12128.
+Vinit Ravishankar, Andrey Kutuzov, Lilja Øvrelid, and Erik Velldal. 2021. Multilingual ELMo and the effects of corpus sampling. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 378-384, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.
+Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Winogrande: An adversarial winograd schema challenge at scale. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8732-8740. AAAI Press.
+Julian Salazar, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. 2020. Masked language model scoring. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2699-2712, Online. Association for Computational Linguistics.
+Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464-468, New Orleans, Louisiana. Association for Computational Linguistics.
+Koustuv Sinha, Robin Jia, Dieuwke Hupkes, Joelle Pineau, Adina Williams, and Douwe Kiela. 2021. Masked language modeling and the distributional hypothesis: Order word matters pre-training for little. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2888-2913, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
+
+Ben Wang. 2021. Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX. https://github.com/kingoflolz/mesh-transformer-jax.
+Benyou Wang, Lifeng Shang, Christina Lioma, Xin Jiang, Hao Yang, Qun Liu, and Jakob Grue Simonsen. 2021. On position embeddings in BERT. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
+Yu-An Wang and Yun-Nung Chen. 2020. What do position embeddings learn? an empirical study of pre-trained language model positional encoding. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6840-6849, Online. Association for Computational Linguistics.
+Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2020. BLiMP: The benchmark of linguistic minimal pairs for English. Transactions of the Association for Computational Linguistics, 8:377-392.
+Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. Opt: Open pre-trained transformer language models. ArXiv, abs/2205.01068.
+
+# A Experiment Details
+
+# A.1 Models
+
+We used 11 publicly available pretrained language models in this work, ranging across different architecture families: Encoder, Sequence-to-Sequence, and Auto regressive models. All of them use absolute positional embeddings (APE) that is learned during pretraining. In §4, we follow the standard practice for in-context learning evaluation (Brown et al., 2020; Black et al., 2022; Gao et al., 2021) and use autoregressive models. In our initial experiments, we found GPT2 to have a similar behaviour to OPT models, and since the OPT models are available in a wider range of sizes, we primarily focus on them for these experiments. In fine-tuning (§5) and acceptability (§3) experiments, we assess all model families. However, because of the computational costs associated with these experiments, we opt for model variants with $< 1$ B parameters. The details of all models can be found in Table 1. We use HuggingFace (Wolf et al., 2020) model hub to load, fine-tune train, and run inference for all models.
+
+# A.2 Datasets
+
+We use BLiMP (Warstadt et al., 2020) for the grammatical acceptability experiments in §3 as it is typically employed in a inference-only setting and does not require additional training. For §5, we take three tasks from the standard language understanding benchmark GLUE (Wang et al., 2019) which is often used for finetuning language models: MRPC, RTE, and COLA. In addition to these three tasks, we use four other datasets, COPA, PIQA, WinoGrande, and ARC, on which the OPT family have previously demonstrated good performance (Zhang et al., 2022). Table 2 shows the statistics of all datasets, and the following provides a brief description of them:
+
+- BLiMP (Warstadt et al., 2020) is a challenge set designed to measures the model's ability to distinguish between acceptable and unacceptable English sentences. This benchmark consists of synthetic examples created based on expert-crafted grammars, where each instance comes with two versions: one acceptable and one unacceptable.
+- COPA (Gordon et al., 2012) is an open-domain commonsense causal reasoning task, where the model is given a premise and must
+
+correctly identify its cause or effect. COPA consists of short hand-crafted sentences and is provided as a multi-choice task.
+
+PIQA (Bisk et al., 2020) is a physical commonsense benchmark dataset, challenging language models' idea of the physical world. Given a physical goal, a model must choose the most plausible solution between two choices. This benchmark is used in the multi-choice format.
+- WinoGrande (Sakaguchi et al., 2020) is a commonsense reasoning benchmark based on the Winograd Schema Challenge (WSC) (Levesque et al., 2011) with increased hardness and scale. The dataset is provided as a pronoun resolution problem, where the model must recover an ambiguous pronoun in a given context.
+- ARC (Clark et al., 2018) is collected from grade-school-level science questions commonly asked in exams. This question-answering dataset is provided in a multi-choice QA format suitable for evaluating pretrained language models. We use the "easy" subset of this benchmark.
+- MRPC (Dolan and Brockett, 2005) is a paraphrase identification dataset collected from online news websites and has become a standard benchmark in the NLP community. We follow the previous works and treat the data as a text classification task.
+- RTE (Giampiccolo et al., 2007) is one of original subtasks in the GLUE benchmark and comprises textual entailment challenges. We follow the standard format and use Natural Language Inference (NLI) protocol for this dataset.
+- CoLA (Warstadt et al., 2019) is a linguistic acceptability dataset, where each example is an English sentence annotated with a binary label showing whether it is a grammatical sentence. This is a text classification dataset and we follow the standard protocol and report Matthews correlation coefficient (Matthews, 1975).
+
+
Model
Type
Pretraining Objective
Context Size
First Position
# Layers
Hidden Size
# Params
RoBERTa family (Liu et al., 2019)
RoBERTaBASE
encoder-only
Masked Language Modeling
514
2
12
768
123M
RoBERTaLARGE
encoder-only
Masked Language Modeling
514
2
24
1024
325M
BART family (Lewis et al., 2020)
BARTBASE
encoder-decoder
Masked Language Modeling
1024
2
6
768
140M
BARTLARGE
encoder-decoder
Masked Language Modeling
1024
2
12
1024
400M
GPT2 family (Radford et al., 2019)
GPT2
decoder-only
Next Token Prediction
1024
0
12
768
125M
GPT2MEDIUM
decoder-only
Next Token Prediction
1024
0
24
1024
345M
OPT family (Zhang et al., 2022)
OPT125M
decoder-only
Next Token Prediction
2048
2
12
768
125M
OPT350M
decoder-only
Next Token Prediction
2048
2
24
1024
350M
OPT2.7M
decoder-only
Next Token Prediction
2048
2
32
2560
2.7B
OPT13B
decoder-only
Next Token Prediction
2048
2
40
5120
13B
OPT30B
decoder-only
Next Token Prediction
2048
2
48
7168
30B
+
+Table 1: Details of the models we used in this paper.
+
+
Dataset
# Train
# Test/Validation
BliMP
-
67000
COPA
400
100
PIQA
16113
1838
WinoGrande
40398
1267
ARC (Easy)
2251
2376
MRPC
3668
408
RTE
2490
277
CoLA
8551
1043
+
+Table 2: Dataset statistics we used in this work.
+
+
Parameter
Value
Learning rate
{0.0001, 0.0002, 0.0003}
Batch size
{16, 32}
# Train Epochs
10
Early Stopping
On
Early Stopping Tolerance
3
Optimizer
AdamW
Learning Rate Schedule
Linear
Weight Decay
0.0
Warm Up
6% of initial training steps
+
+Table 3: Summary of hyperparameters used in finetuning experiments.
+
+# A.3 Grammatical acceptability
+
+We use all 67 subsets (a total of 67K data instances) of BLiMP (Warstadt et al., 2020). A model achieves a score of 1 if it successfully assigns a lower perplexity to the grammatical version of each example. We report the average score across the entire dataset for starting positions that are shifted in the intervals of 10. The inputs are fed to the models in the format explained in §2.1. Recall that perplexities are ill-defined in case of Masked Language Models. Thus, we follow the formulation of Salazar et al. (2020) to compute a pseudoperplexity for RoBERTa and BART. We adopt the Minicons (Misra, 2022) library to compute the perplexities, which provides a unified interface for models hosted in HuggingFace (Wolf et al., 2020).
+
+# A.4 Prompting
+
+For evaluating zero-shot inference and in-context learning, we make use of EleutherAI Language Model Evaluation Harness (Gao et al., 2021), an open-source library that is used for evaluating autoregressive pretrained language models (Black et al., 2022). In the zero-shot setting, each ex
+
+ample is converted to a prompt using task-specific templates. Then, the prompt is fed to the language model to elicit the answer. Similarly, in the few-shot setup, a prompt is created from the concatenation of few dataset examples base on the same template and are prepended as a context to validation instances. In our experiments, we use default templates provided by the EleutherAI Language Model Evaluation Harness, which can be found in Table 4. The task performance is computed over the validation set of due to the lack of public test sets, except for ARC, where we evaluate the models on the test set. We set the number of few-shots examples to be five and randomly sample them from the training set of each dataset. We report the few-shot results averaged over five random seeds. Note that feeding inputs to the models still follows the same protocol introduced in §2.1.
+
+# A.5 Fine-tuning
+
+We fine-tune all models on CoLA, RTE and MRPC tasks from the GLUE benchmark on different values of phase shift $k$ , and evaluate across all pos
+
+
The water in the teapot started to boil therefore the teapot whistled.
PIQA
Prompt
Question: <question>\n Answer: <possible-answer>
Example
Question: How can I quickly clean my blender without washing? \n Answer: Put some ice, water, and a half cup of baking soda in the blender and puree for 3 min.
WinoGrande
Prompt
<context> because <replaced-pronoun> <continuation>
Example
Angela was better suited to conduct the science experiment than Katrina because Katrina was less disciplined.
ARC
Prompt
Question: <question>\n Answer: <possible-answer>
Example
Question: Amanda is learning about different adaptations of animals. Which is an example of a behavioral adaptation? \n Answer: migration of songbirds
MRPC
Prompt
Sentence 1: <sentence1>\n Sentence 2: <sentence2>\n Question: Do both sentences mean the same thing? \n Answer: <label>
Example
Sentence 1: Inamed shares closed down nearly 12 percent on Nasdaq, where it was one of the top percentage losers. \n Sentence 2: Inamed shares dropped as much as about 16 percent on Nasdaq, where it was one of the top percentage losers. \n Question: Do both sentences mean the same thing? \n Answer: yes
RTE
Prompt
<premise>\n Question: <sentence2>. True or False? \n Answer: <label>
Example
United States astronaut Sunita Williams, currently on board the International Space Station, has today broken the record for... \n Question: Anousheh Ansari paid to go in space. True or False? \n Answer: False
CoLA
Prompt
<sentence> \n Question: Does this sentence make sense? \n Answer: <label>
Example
Brandon read every book that Megan did. \n Question: Does this sentence make sense? \n Answer: yes
+
+Table 4: Prompt templates used in EleutherAI Language Model Evaluation Harness library (Gao et al., 2021)
+
+sible phase shifts. Since RoBERTa only supports 512 positions, and maximum sentence length in these datasets amount to 128, we train models upto $k = 300$ . For each fine-tuning experiment, we first run a hyperparameter sweep varying learning rate (0.0001, 0.0002, 0.0003) and training batch size (16, 32) (amounting to 6 runs) with $6\%$ warmup steps, similar to the setting by Liu et al. (2019). We also set the weight decay to zero in order to not harm the existing positional encodings which are not used during training. Table 3 summarizes all of the parameters. Finally, we choose the best hyperparams and repeat the experiment over five different seeds (42 to 46), and present an aggregate over the results. Table 5 lists the outcome of hyperparameters tuning.
+
+In Figure 7, we further show the difference in fine-tuned models when trained on no phase shift $(k = 0)$ and evaluated on different phase shifts $(k = 100,200,300)$ . In-line with our experimental results from §3, we observe worse generalization results from BART.
+
+# B Detailed results on phase shifting with prompts
+
+We displayed a holistic view of zero-shot and five-shot experiments in Figure 4, covering the accuracies averaged over all six datasets. In this section, we now report and analyze the result of each dataset individually. Figure 9 and Figure 10 showcase models' performance in zero-shot and five-shot configurations. The same pattern can be seen across all model sizes in COPA, WinoGrande, PIQA, ARC (Easy), and RTE. Concretely, the zero-shot abilities of the models sharply decrease as we increase the starting position. Moreover, five-shot inference, typically referred to as in-context learning, is also subject to decreased performance, ranging from $-2\%$ to $-40\%$ . However, the degradation is not as severe as with zero-shot setting. Only MRPC exhibits stable phase shift performance, but even in this case, larger models are still adversely affected. Due to the exceptionally poor performance of OPT family on CoLA, we exclude these results from our analysis (Figure 10).
+
+The erratic behaviour observed in majority of evaluated datasets makes it evident that models struggle to encode the relative distances of words as
+
+
+Figure 7: GLUE downstream task results on CoLA, RTE and MRPC. The dashed lines represent the model performance with no phase shifts. The shaded area show the standard deviation from five random seeds.
+
+their understanding of inputs heavily change with various phase shifts. It is important to note that our findings demonstrate models' unstable functioning as opposed to solely highlighting their failure. Indeed, Figure 5 shows that one can extract better and improved accuracies with non-zero starting positions. Namely, $\mathrm{OPT}_{30\mathrm{B}}$ has the best zero-shot performance on phase shift $k = 300$ in the case of MRPC; the same pattern can also be observed in RTE five-shot for $\mathrm{OPT}_{13\mathrm{B}}$ on phase shift $k = 300$ . Another noteworthy observation is that the performance drop is often a non-monotonic function of phase shifts. i.e., for some prompts, the model might be more accurate for $k = 1000$ than for $k = 0$ . This observation suggests that some positional biases might be learned during pre-training and are well-captured by APE. So, increasing values of $k$ in some occasions lands the model attention in a "sweet spot" in the processing window, such that the model benefits from some positional biases learned during pre-training.
+
+We observe the presence of erratic behavior across a fairly wide range of model sizes in the OPT family. Additionally, it can be seen that larger models are more prone to fail at encoding relative positions than their smaller counterparts. One possible explanation for this is that in order for the models to encode relative positional information, they need to view all combinations of words and sentences in every position. This coverage rarely occurs in natural data, resulting in data sparsity issues. Hence, models with a large number of parameters may require more data/training to learn the relative ordering of words.
+
+# C Variation of best perplexity across phase shifts
+
+In this section, we investigate the perplexity of individual sentences from the BLiMP dataset across each phase shift for each model. We plot the distribution of sentences achieving lowest perplexity in each phase shift for the range of models in Figure 8. We observe several modes of phase shift for RoBERTa and BART models where they have the least perplexity on phase shifts other than the standard (zero position). In the case of GPT2 and OPT, the distribution is more skewed towards zero, indicating they almost always achieve the lowest perplexity in the zero position, i.e. when there is no phase shift.
+
+# D Code and reproducibility
+
+For all of the experiments in this work, we used open-source libraries (Wolf et al., 2020; Gao et al., 2021; Misra, 2022) and models with publicly available checkpoints. The code to reproduce the results can be accessed from https://github.com/kazemnejad/lm_pos_investigations. Furthermore, Listing 1 provides a short, easy-to-use code snippet to modify starting position in HuggingFace models. (We will also release a singularity image with all dependencies to facilitate reproducibility.) We ran our experiments on a mix of NVIDIA A100 40G and NVIDIA RTX8000 48G GPUs. In particular, almost all experiments required only one of such GPUs. The exception was only in the prompting section, where the $\mathrm{OPT}_{30\mathrm{B}}$ model required two NVIDIA RTX8000 48G GPUs to fit the model and inputs of batch size 1.
+
+
+
+
+
+
+
+
+Phase Shifts (k)
+Figure 8: Distribution of sentences having the lowest perplexities for each phase shift
+
+# E Attention analysis
+
+We further perform attention analysis on GPT2, RoBERTa and BART to visualize whether the model's attention pattern changes with phase shifts.
+
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+Download and load the pretrained model
+tokenizer = AutoTokenizer.from_pretrained("GPT2-medium")
+model = AutoModelForCausalLM.from_pretrained("GPT2-medium")
+
+text = "The capital of France is"
+inputs = tokenizer(text, return_tensors="pt")
+
+Create unshifted position ids from the attention_mask, which $\hookrightarrow$ is equivalent to
+ $\hookrightarrow$ torch.arange(inputsh["input_ids"].shape[-1])
+inputs["position_ids"] $=$ inputs["attention_mask"].cumsum(-1)
+ $\hookrightarrow$ -1
+print(inputsh["position_ids'])
+#>>>tensor([[0,1,2,3,4]])
+
+output1 = model(**inputs, return_dict=True)
+next_token_id = torch.argmax(output1.logits[0, -1])
+print(tokenizer.decode(next_token_id))
+# >>> Paris
+
+Add special tokens
+special_tokens = torch.LongTensor([[tokenizer.bos_token_id, $\hookrightarrow$ tokenizer.eos_token_id])
+specialattention_mask $=$ torch.LongTensor([1,1])
+inputs['input_ids'] $=$ torch.cat([[special_tokens, $\hookrightarrow$ inputs['input_ids'] [0]).unsqueeze(0)
+inputs['attention_mask'] $=$ torch.cat([[special attention_mask, $\hookrightarrow$ inputs['attention_mask'] [0]).unsqueeze(0)
+
+Recompute position ids inputs["position_ids"] $=$ inputs["attention_mask"].cumsum(-1) $\longleftrightarrow$ -1
+
+Shift the position ids by 10
+inputs["position_ids"] += 9
+inputs["position_ids."[0, 0] = 0
+print(inputs["position_ids")]
+# >>> tensor([[0, 10, 11, 12, 13, 14, 15]])
+
+output2 $=$ model(\*\*inputs,return_dict=True) next_token_id $\equiv$ torch.argmax(output2.logits[0,-1]) print(tokenizer Decode(next_token_id)) #>>>the
+
+Listing 1: Python code example to shift the starting position of a sentence from $k = 0$ to $k = 10$ .
+
+Following the experimental protocol of Raghu et al. (2021), we first collect a summary of attention weights computed with token distances for each token-pair in a sentence. This summary metric is then further normalized for sentence length. The values of this metric show whether the attention is local (low values)—focused on small token distances—or global (high values)—i.e. focused on the whole sentence.
+
+We compute this attention summary metric on a sample of 5000 sentences drawn from the BLiMP
+
+
+WinoGrande
+
+
+
+
+PIQA
+
+
+
+
+ARC (Easy)
+
+
+
+
+
+
+
+
+Figure 9: Zero-shot and Few-shot performance of OPT family with various phase shifts for each individual dataset (Part 1)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 10: Zero-shot and Few-shot performance of OPT family with various phase shifts for each individual dataset (Part 2)
+
+
+CoLA
+
+
+MRPC
+RoBERTa (base)
+
+
+RTE
+
+
+
+
+RoBERTa (large)
+
+
+
+
+
+
+BART (base)
+
+
+
+
+Eval Phase Shift
+Figure 11: Individual heatmap for each GLUE task and model with varying train (fine-tune) and test phase. (Part 1)
+
+
+BART (large)
+Eval Phase Shift
+
+
+Eval Phase Shift
+
+
+CoLA
+
+
+MRPC
+
+
+RTE
+
+
+
+
+GPT2 (Medium)
+
+
+
+
+
+
+OPT (125M)
+
+
+
+
+Eval Phase Shift
+
+
+OPT (350M)
+Eval Phase Shift
+
+
+Eval Phase Shift
+Figure 12: Individual heatmap for each GLUE task and model with varying train (fine-tune) and test phase. (Part 2)
+
+
Model
Phase shifts
k=0
k=100
k=200
k=300
Learning Rate
Batch Size
Learning Rate
Batch Size
Learning Rate
Batch Size
Learning Rate
Batch Size
CoLA
RoBERTaBASE
0.00002
32
0.00002
16
0.00002
16
0.00002
16
RoBERTaLARGE
0.00003
32
0.00003
32
0.00001
32
0.00002
16
BARTBASE
0.00002
32
0.00003
16
0.00002
16
0.00002
32
BARTLARGE
0.00002
16
0.00003
32
0.00003
16
0.00003
32
GPT2
0.00002
16
0.00003
32
0.00003
16
0.00003
16
GPT2MEDIUM
0.00002
32
0.00001
16
0.00003
16
0.00003
16
OPT125M
0.00002
16
0.00001
16
0.00001
32
0.00001
16
OPT350M
0.00001
16
0.00001
32
0.00002
32
0.00001
16
MRPC
RoBERTaBASE
0.00002
32
0.00003
16
0.00003
32
0.00001
32
RoBERTaLARGE
0.00002
32
0.00001
16
0.00002
32
0.00002
16
BARTBASE
0.00001
16
0.00003
32
0.00002
16
0.00003
16
BARTLARGE
0.00002
16
0.00003
16
0.00002
16
0.00003
16
GPT2
0.00002
16
0.00003
16
0.00002
16
0.00003
16
GPT2MEDIUM
0.00002
16
0.00003
16
0.00003
16
0.00003
16
OPT125M
0.00003
16
0.00002
32
0.00002
16
0.00003
32
OPT350M
0.00003
32
0.00001
16
0.00001
32
0.00001
32
RTE
RoBERTaBASE
0.00002
16
0.00003
16
0.00002
16
0.00002
16
RoBERTaLARGE
0.00003
32
0.00001
32
0.00003
32
0.00001
32
BARTBASE
0.00003
16
0.00003
32
0.00002
32
0.00003
16
BARTLARGE
0.00003
32
0.00003
16
0.00002
16
0.00003
16
GPT2
0.00001
16
0.00003
16
0.00003
16
0.00003
16
GPT2MEDIUM
0.00002
16
0.00003
16
0.00001
16
0.00002
32
OPT125M
0.00003
16
0.00001
32
0.00001
16
0.00001
32
OPT350M
0.00001
16
0.00001
16
0.00001
32
0.00001
16
+
+Table 5: Result of hyperparameter sweep for finetuning experiments.
+
+dataset (Warstadt et al., 2020). We then plot the summary values per layer and sort according to the values for each attention head, as per Raghu et al. (2021). The idea is to discover whether this attention summary metric is drastically different under different phase shift conditions.
+
+We do observe drastic differences in attention patterns in all layers for GPT2 (Figure 13) and GPT2-Medium (Figure 14). Comparing this with of RoBERTa (base) (Figure 15) and RoBERTa (large) (Figure 16), we can corroborate our findings from §3—RoBERTa is much more robust to phase shifts. Consequently, BART (Figure 17 and Figure 18) also displays differences in attention patterns, but they are not as drastic as GPT2.
+
+# F Extended Related Work
+
+Positional encoding has been always an important part of the Transformer architecture, and since it original introduction different variants of it have been deployed by pretrained models (see Table 6 for a summary of positional encoding used by some of popular state-of-the-art models.)
+
+Positional encodings have garnered a niche community over the past several years. Wang and Chen (2020) investigate whether position embeddings learn the meaning of positions and how do they affect the learnability for different downstream tasks.
+
+Wang et al. (2021) explore different positional encodings and establish monotonicity, translation and symmetry properties of different methods, including APEs. They also report that learned APE's demonstrate superior performance for text classification, further adding to the evidence APE's enable exploitation of positional biases. Luo et al. (2021) report that masked language model embeddings consists of positional artefacts which bias the model output. More related to our work, Kiyono et al. (2021) train a Transformer model from scratch using shifted positional embeddings for machine translation, and observe improved performance in extrapolation and intrapolation setup. Haviv et al. (2022) reports a surprising finding that autoregressive Transformer models trained without explicit positional information still perform on-par with their counterparts having access to positional information. This result is attributed to the causal attention structure induced by the autoregressive training only, as this effect is not observed with masked language models, as highlighted by both Haviv et al. (2022) and Sinha et al. (2021). Ke et al. (2021) proposes a novel technique to de-correlate the position encodings and token embeddings, and achieve better downstream performance than baselines. Ravishankar et al. (2021) find relative positional encoding does not improve over APE in
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 13: Attention globality distributions of GPT2 across different heads (sorted according to value) and averaged over all layers and 5000 data points. Blue curve stands for the no phase shift condition, and orange, green and red curves represent $k = 100, 200$ and 300 respectively.
+
+
+
+
+
+
+
+multi-lingual setting.
+
+On the other hand, multiple works have shown the advantage of explicit relative positional encoding for length extrapolation. Csordás et al. (2021) show Transformers equipped with variants of relative positional encoding (Dai et al., 2019; Shaw et al., 2018) significantly outperform their absolute counterparts when it comes to length generalization. In the same line of work, Ontanon et al. (2022) also find that for numerous synthetic benchmarks, the best extrapolation performance can only be obtained by relative positional encoding. Press et al. (2022) take the experiments beyond synthetic datasets and show that APE's struggle in generalization to longer sequence of natural language. All of these amount to the evidence that points to APE's as one of the potential reasons Transformers are known to fail in length generalization and productivity (Hupkes et al., 2020; Lake and Baroni, 2018). Although the benefits of using explicit relative positional bias is mentioned in various works, they typically come at the cost of slowing the training down: (Press et al., 2022) report that training T5 (which uses a relative variant of positional encod
+
+ing) is almost twice as slow as training a model with sinusoidal absolute embedding. Thus, the gained runtime efficiency allows longer training of the APE model, which in turn enables the further extrapolation capabilities. These works suggest that we have a lot left to explore about positional encoding and highlight the fact that the consequences of particular choices is still an open field of ongoing research.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 14: Attention globality distributions of GPT2-Medium across different heads (sorted according to value) and averaged over all layers and 5000 data points. Blue curve stands for the no phase shift condition, and orange, green and red curves represent $k = 100, 200$ and 300 respectively.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 15: Attention globality distributions of RoBERTa (base) across different heads (sorted according to value) and averaged over all layers and 5000 data points. Blue curve stands for the no phase shift condition, and orange, green and red curves represent $k = 100, 200$ and 300 respectively.
+
+
+
+
+
+
+
+
Name
Release Year
Positional Encoding Type
BERT (Devlin et al., 2019)
2019
Learned Absolute
RoBERTa (Liu et al., 2019)
2019
Learned Absolute
GPT2 (Radford et al., 2019)
2019
Learned Absolute
BART (Lewis et al., 2020)
2020
Learned Absolute
LongFormer (Beltagy et al., 2020)
2020
Learned Absolute
T5 (Raffel et al., 2020)
2020
Relative Learned Bias
GPT3 (Brown et al., 2020)
2020
Learned Absolute
GPT-Neo (Black et al., 2021)
2021
Learned Absolute
Fairseq-Dense (Artetxe et al., 2021)
2021
Fixed Absolute
ShortFormer (Press et al., 2021)
2021
Fixed Absolute
GPT-J (Wang, 2021)
2021
Rotary
GPT-NeoX (Black et al., 2022)
2022
Rotary
OPT (Zhang et al., 2022)
2022
Learned Absolute
PaLM (Chowdhery et al., 2022)
2022
Rotary
+
+Table 6: Positional encoding of commonly used pretrained language models.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 16: Attention globality distributions of RoBERTa (large) across different heads (sorted according to value) and averaged over all layers and 5000 data points. Blue curve stands for the no phase shift condition, and orange, green and red curves represent $k = 100, 200$ and 300 respectively.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 17: Attention globality distributions of BART (base) across different heads (sorted according to value) and averaged over all layers and 5000 data points. Blue curve stands for the no phase shift condition, and orange, green and red curves represent $k = 100, 200$ and 300 respectively.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 18: Attention globality distributions of BART (large) across different heads (sorted according to value) and averaged over all layers and 5000 data points. Blue curve stands for the no phase shift condition, and orange, green and red curves represent $k = 100, 200$ and 300 respectively.
+
+
+
+
+
+
\ No newline at end of file
diff --git a/thecuriouscaseofabsolutepositionembeddings/images.zip b/thecuriouscaseofabsolutepositionembeddings/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4d348a0315e2bcc72473ddf45973723f032421e7
--- /dev/null
+++ b/thecuriouscaseofabsolutepositionembeddings/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:471760327acf0ed272824c52af36d5aa476e7a79199ff2c332a66b1a79000cfc
+size 2016389
diff --git a/thecuriouscaseofabsolutepositionembeddings/layout.json b/thecuriouscaseofabsolutepositionembeddings/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8d2cf4533cd572c002a1f8ab847377e7873ec591
--- /dev/null
+++ b/thecuriouscaseofabsolutepositionembeddings/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:91bea649f77f3f633d2d8f48d10272781167673c41da90e2b4131da236309bec
+size 745412
diff --git a/theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/4bbdabc5-6f81-495a-a5b0-93929344a445_content_list.json b/theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/4bbdabc5-6f81-495a-a5b0-93929344a445_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a780d291eb4fa0ddb3f52a5258d932025a85ff98
--- /dev/null
+++ b/theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/4bbdabc5-6f81-495a-a5b0-93929344a445_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1241e756e0dbefc360b3c044d8516fe0397f3a93b2cac5da693c656c7b8572ef
+size 59754
diff --git a/theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/4bbdabc5-6f81-495a-a5b0-93929344a445_model.json b/theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/4bbdabc5-6f81-495a-a5b0-93929344a445_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ac635e3bec0f6843bd81811c8c312a8cce190ebe
--- /dev/null
+++ b/theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/4bbdabc5-6f81-495a-a5b0-93929344a445_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bb3c6839891a2f6c202265bdc972dee484e2e2c37f88197215899cd48f7e4e41
+size 74517
diff --git a/theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/4bbdabc5-6f81-495a-a5b0-93929344a445_origin.pdf b/theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/4bbdabc5-6f81-495a-a5b0-93929344a445_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..15d478f311edad6e2fb9062082994a0e5dce643c
--- /dev/null
+++ b/theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/4bbdabc5-6f81-495a-a5b0-93929344a445_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f98b455792a701f198c52ebc89e1bd1e6bfb8dd4e6416f859eb865d08b119237
+size 556585
diff --git a/theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/full.md b/theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..04b2b248616548d5bd68ca2d190705b335e54616
--- /dev/null
+++ b/theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/full.md
@@ -0,0 +1,257 @@
+# The Effects of Corpus Choice and Morphosyntax on Multilingual Space Induction
+
+Vinit Ravishankar $^{\S}$ Joakim Nivre†
+
+$^{\S}$ Department of Informatics, University of Oslo
+
+†RISE Research Institutes of Sweden
+
+$^{\dagger}$ Dept. of Linguistics and Philology, Uppsala University
+
+$^\S$ vinitr@ifi.uio.no †joakim.nivre@ri.se
+
+# Abstract
+
+In an effort to study the inductive biases of language models, numerous studies have attempted to use linguistically motivated tasks as a proxy of sorts, wherein performance on these tasks would imply an inductive bias towards a specific linguistic phenomenon. In this study, we attempt to analyse the inductive biases of language models with respect to natural language phenomena in the context of building multilingual embedding spaces. We sample corpora from 2 sources in 15 languages and train language models on pseudo-bilingual variants of each corpus, created by duplicating each corpus and shifting token indices for half the resulting corpus. We evaluate the cross-lingual capabilities of these LMs, and show that while correlations with language families tend to be weak, other corpus-level characteristics, such as type-token ratio, tend to be more strongly correlated. Finally, we show that multilingual spaces can be built, albeit less effectively, even when additional destructive perturbations are applied to the training corpora, implying that (effectively) bag-of-words models also have an inductive bias that is sufficient for inducing multilingual spaces.
+
+# 1 Introduction
+
+A variety of proxies and analytical methods have been used to study the inductive biases of language models towards natural language. This work includes targeted syntactic evaluation (Gulordava et al., 2018; Linzen et al., 2016), language model responses to formulaic synthetic languages (Ravfogel et al., 2019; White and Cotterell, 2021), as well as attempts to correlate differences in language modeling performance to language features over a wide range of languages (Cotterell et al., 2018).
+
+In this paper, we combine two strands that have, of late, been fairly active research threads. The first of these concerns the inductive biases of language models towards languages that exhibit a specific
+
+grammar; the second addresses the inductive biases of these models towards multilingualism, which in this context refers to a model's ability to build a multilingual space (rather than distinct monolingual spaces), when trained on corpora consisting of text in multiple languages.
+
+Prior work in this domain is focused on either a) quantifying language model performance across a variety of languages, or b) studying the effects of different architectural components on the quality of the induced multilingual space. We attempt to unite the two strands of research by studying transformer-based masked language models in an effort to quantify the extent to which the grammar of the language being modelled affects the model's ability to build a multilingual space. We use Dufter and Schütze's (2021) metrics, namely word translation and sentence retrieval, as a proxy for the utility of this space. Our main findings are:
+
+- Masked language models are capable of building multilingual spaces even when destructive perturbations, like lemmatisation and shuffling, are applied to the training corpora.
+- Multilingual performance is only weakly correlated with languages and language families.
+- Multilingual performance correlates better with corpus-level statistics like type-token ratio, and the frequency of hapax legomena.
+
+# 2 Related Work
+
+Language modelling There has been a considerable amount of research addressing inductive biases that language models may have towards specific grammatical patterns, or towards natural languages with specific structures. An early study by Cotterell et al. (2018) demonstrates, over 21 languages, that certain languages are harder to model than others; the authors find that model performance correlates with the richness of a language's (inflectional) morphology. Later work by Mielke et al. (2019) shows
+
+contradictory findings; the authors extend these experiments to 69 languages and find that morphological complexity does not correlate as strongly with performance as simpler factors like vocabulary size and sentence length do.
+
+Other work involves studying how language modelling is affected by manually altering corpora. Ravfogel et al. (2019) train RNN-based models on English, altered to display different word orders and different degrees of morphological agreement; White and Cotterell (2021) generate corpora of natural language sentences, with constituents permuted based on Boolean switches, and show that recurrent language models show little variance in performance across word orders, compared to transformers.
+
+Multilingualism Moving beyond monolingual language modelling, we examine the numerous works analysing what precisely multilingual language models need, in order to form an adequate multilingual space, which is quantified by measuring a model's performance on some multilingual task. Pires et al. (2019) show that subword overlap tends to improve multilingual alignment, though overlap is by no means necessary, as languages with different scripts can exist in the same multilingual space. Deshpande et al. (2021) show that while structurally similar languages do not necessarily need subword overlap, dissimilar languages rely heavily on overlap; they also show that well-aligned non-contextual word embedding spaces allow for better transfer.
+
+On the other hand, Artetxe et al. (2020) have somewhat contradictory results, and show that neither shared vocabulary items nor joint pre-training are essential to build a multilingual encoder. K et al. (2020) and Dufter and Schütze (2021) analyse encoders from an architectural point of view. The former work shows that model depth (and not the number of attention heads) contributes to transfer performance, even when the number of parameters is kept constant. The latter points out that multilingual spaces exist because languages are forced to share parameters, and that even in the absence of shared subwords and special tokens, position embeddings play a significant role in building these spaces. Dufter and Schütze (2021) go on to show that the removal of shared position embeddings is sufficient to reduce a model's multilingual performance (as measured on word translation and sentence retrieval) to approximately random. This,
+
+we show, is not universally the case.
+
+# 3 Methodology
+
+# 3.1 General approach
+
+In order to evaluate the quality of our models' multilingual spaces, we use word translation and sentence retrieval as proxy tasks; this contrasts with, for example, Deshpande et al. (2021), who use (zero-shot) transfer performance instead. We avoid this largely due to performance constraints: small models are unlikely to be parameterised enough to handle transfer.
+
+To create synthetic multilingual (more precisely, bilingual) corpora, we follow the approach of K et al. (2020) and Dufter and Schütze (2021). Starting from a monolingual corpus, we shift the vocabulary index for every token in the original corpus up by the model's vocabulary size. For instance, the token convenient, with token index 42, would have a "mirror": convenient, with token index 2090. This effectively gives us a parallel second half, which has the same structure as the original language, but a guarantee of no vocabulary overlap.
+
+While this is a somewhat unrealistic simulation – after all, multilingual models are trained on languages with different structures – we use our formulation in order to a) have a simplified test bed where the structure of the language plays a role, but the structural differences between the two languages are ignored; and b) to avoid the complexity of the experimental space from exploding, when each language can conceivably be paired with every other language.
+
+# 3.2 Data
+
+In an effort to have a reasonably comprehensive search space of languages, we experiment over two corpora (Wikipedia and Commoncrawl) and fifteen languages – namely Arabic, Czech, Danish, German, English, Spanish, Finnish, French, Hebrew, Italian, Dutch, Polish, Portuguese, Russian and Swedish. While Indo-European languages are still rather overrepresented in our data, these languages exhibit a wide range of head-dependent entropies (Levshina, 2019). This is also part of the reason we avoid completely synthetic corpora: while it is trivial to generate synthetic corpora from some descriptive grammar, the stochasticity and random variation inherent to most natural languages is harder to synthetically model. Both corpora have been parsed into Universal Dependencies
+
+# Default
+
+he spent most of his childhood in sunamganj with his mother . david s. mack ( born 1941 ) is an american businessman . he spent most of his childhood in sunamganj with his mother . david s. mack ( born 1941 ) is an american businessman .
+
+# Lemmatised
+
+the episode be generally well receive.
+the software be sell and support only in japan.
+the episode be generally well receive.
+the software be sell and support only in japan.
+
+# Shuffled
+
+most his with in of childhood spent sunamganj . mother his hes. american . born is david 1941 ) businessman an ( mackmost his with in of childhood spent sunamganj . mother his hes. american . born is david 1941 ) businessman an ( mack
+
+# Corrupted
+
+be generally . receive well episode the software be the sell in and support . japan only be generally . receive well episode the software be the sell in and support . japan only
+
+Table 1: Sample sentences extracted from real corpora, with each of our modifications applied. Note that while the original and lemmatised corpora are sampled differently, the shuffled and corrupted corpora are modified variants of the former.
+
+(UD) (Nivre et al., 2016, 2020; de Marneffe et al., 2021).
+
+From each of the large corpora (Wikipedia and Common Crawl), we sample five corpora of 20k sentences for each language, with different random seeds, and split them into train and validation splits of 15k and 5k tokens, respectively. We employ a number of simple heuristics to filter out sentences that we suspect to be titles, or other noisy text. We generate two variants of each corpus: one that we tokenise with a BPE tokenizer, and another that retains UD-style tokenisation. The motivation behind this is to control for subwords: the absence of subword tokenisation is harder for our models to recover from, as they must be able to cluster tokens that have the same morphological affixes without explicit access to these affixes.
+
+For our BPE segmented corpora, we use a model vocabulary of size 2048; this vocabulary is derived by training a fastBPE tokenizer on the respective training corpus. For UD-style tokenisation, we also use a vocab with 2048 unique tokens. We handle unknown tokens by replacing them with tokens; we also filter out sentences that have over $90\%$ OOV tokens in the process of sentence selection, to avoid noise. As both our corpora are fairly noisy, we also apply a set of heuristics to eliminate corpus noise; for instance, we filter out sentences based on the number of title-cased tokens in them, to avoid scraping Wikipedia titles.
+
+# 3.3 Perturbations
+
+To adequately isolate the effects of word order and morphology, we apply three modifications to each combination of tokenisation method and corpus, giving us a total of $2 * 2 * 4 = 16$ corpora per language; with 15 languages and 5 seeds, this equates to $16 * 15 * 5 = 1200$ experiments in all.
+
+Original Our original, unmodified corpus, presented with both UD- and BPE-based tokenisation.
+
+Shuffled We modify our corpus by shuffling every sentence at a word level. Note that the shuffling procedure takes place before BPE segmentation, similar to Sinha et al. (2021). Ideally, given no word-order context, our masked language models should only be able to rely on morphological information, or bag-of-words distributions, in order to build a multilingual space. This also has a similar effect to removing positional embeddings from the transformer, as described in Sinha et al. (2021). Positional embeddings act as an ordering mechanism in masked language modelling; without them, a corpus is similar to our shuffled corpus.
+
+Lemmatised We use the LEMMA Universal Dependencies field to generate our corpus, instead of the usual FORM field. The motivation here is to eliminate all morphological information; the difference between this and avoiding BPE tokenisation is that lemmatisation prevents unique word forms from having separate vocab indices.
+
+Corrupted This corpus is both lemmatised and shuffled. Given this precondition, and UD-style tokenisation, there ought to be no information accessible to our model, beyond bag-of-word lemma statistics. We therefore expect word translation and sentence retrieval to be close to 0 in this setting.
+
+# 3.4 Models and Evaluation
+
+To evaluate our models' multilingual capabilities, we first train lower-capacity language models on each corpus. Each model is trained on the task of masked language modelling, on the concatenation of both halves (original and shifted) of a corpus. We use Dufter and Schütze (2021)'s BERT variant, which downsizes the original BERT model; we use
+
+
+Figure 1: Results for our four perturbations, with and without BPE, with data from Common Crawl (top) and Wikipedia (bottom). Scores (sentence retrieval on the X-axis, word translation on the Y-axis) are averaged over layers 0 and 8.
+
+single-headed, 12 layer transformer, with a head dimensionality of 64 and a feed-forward dimensionality of 256. This allows us to rapidly train a model on our corpora (in approximately 30-60 minutes per model). We set the random seed of each model to the same as the random seed used to generate the corpus we train it on; i.e. the model with seed 0, for English, is trained on the English corpus that was generated using a random seed of 0. Models are trained on V100 GPUs, each for approximately 1 hour.
+
+Finally, we evaluate word translation and sentence retrieval scores for these models by using the deterministic gold labels, obtained by simply adding the vocab size (for translation) and by di
+
+viding the corpus into two halves and generating a sequential mapping (for retrieval). Note that this evaluation does not involve fine-tuning language models: we use the cosine similarity between either a word or a sentence and its fake parallel, for word translation and sentence retrieval resepectively. For word translation, we ensure that non-initial subwords are not included in the evaluation; while this is not ideal, none of our languages are morphologically prefixing, implying that the bulk of the semantic content is in the initial subword.
+
+# 4 Results
+
+We present results per language and experiment on Common Crawl (top) and Wikipedia (bottom) in
+
+Figure 1. We begin by making a few general observations before moving on to study correlations with morphosyntactic and corpus factors.
+
+'Fails' are frequent We note, first, that across most of our experiments, we have several 'fails', where our model effectively has near 0 retrieval and translation capacity. While this observation in isolation is somewhat meaningless – the model might have failed to learn effectively, either due to the random seed or due to the hyperparameters – the sheer number of experiments we run for each scenario makes these results more meaningful, when used as a comparison between training scenarios, as evidence that a certain scenario is likelier to result in a fail than another.
+
+BPE makes word translation harder Despite controlling for non-initial subwords, using BPE tokenisation results in a drop in translation score for all our experiments. We hypothesise that this is due to common word-initial subwords being distributionally 'overloaded'; they are more likely to appear in a wider range of contexts than whole tokens are, due to the variety in consecutive subwords.
+
+Multilingualism is robust to lemmatisation Perhaps somewhat unsurprisingly, lemmatisation does not significantly affect model scores, indicating that our model relies more on word order to build multilingual spaces. Interestingly, removing BPE segmentation results in an increase in fails on lemmatised corpora.
+
+# Bag-of-words is enough for (some) experiments
+
+Our most unexpected observation is that for both shuffling and corrupting, for both BPE and non-BPE, several experiments do appear to result in fairly successful retrieval/translation models, often with an accuracy higher than $50\%$ on either task. This is surprising, given that a) this appears to contradict the findings of Dufter and Schütze (2021) about position embeddings being critical for multilingual spaces, and b) it implies that a simple bag-of-words model is enough to build a multilingual space. We attempt, in the following sections, to tease out what factors might enable this transfer. It is plausible that some part of this signal stems from the fact that the shuffling operation was carried out prior to BPE segmentation (Abdou et al., 2022); we discuss this further in Section 5.4.
+
+# 5 Analysis
+
+# 5.1 Clustering
+
+In order to find potential explanations for our results, we automatically cluster our scores, using retrieval and translation scores as our cluster metrics. To determine whether either languages (given that we have five experiments per language) or language families tend to actually represent logical, meaningful clusters, we set the number of clusters to be equivalent to the number of families, and use the adjusted Rand score (Vinh et al., 2010) to measure the distance between two clusterings – clusterings based on language/family, and learnt clusterings.
+
+We present these results in Table 2. First, clustering by language family shows little to no correlation with score-based clusters. Clusters of corpora in a single language ('language-based' clusters) are slightly clearer: while similarities are relatively low for all our BPE-based clusters, when we switch to UD tokenisation, the default and lemmatised cases begin to form more typologically relevant clusters, resembling languages. While these are by no means perfect overlaps, they are almost twice as realistic as for BPE-based tokenisation, implying that there exist language-specific features that correlate somewhat to the model's ability to form multilingual spaces. To investigate these findings in greater detail, we look for language-specific features – both corpus-specific features, and vocabulary features – and look for correlations that might explain our results.
+
+# 5.2 Corpus correlations
+
+We analyse our corpora, and measure correlations of model performance to a range of descriptive statistics, applied to the corpora that the models were trained on. For a single 'performance' metric, we follow Dufter and Schütze (2021) in defining a model's ML score as the average of its word translation and sentence retrieval scores, at layers 0 and 7. We measure correlations with:
+
+- The number of training tokens
+- The type-token ratio
+- The number of one-letter types
+- The number of one-letter tokens
+- Average type length (in characters);
+Average token length
+Average sentence length
+- Frequency of hapax, dis and tris legomena
+
+
+Figure 2: Spearman correlations $(\alpha = 0.001)$ . Greyed-out values indicate insufficient evidence.
+
+
Language
Family
BPE
UD
BPE
UD
Default
0.17/0.05
0.35/0.25
0.07/0.05
0.04/0.08
Lemmatised
0.16/0.11
0.38/0.14
0.10/0.04
0.14/0.07
Shuffled
0.15/0.13
0.03/0.01
0.07/0.10
0.02/0.05
Corrupted
0.14/0.12
0.05/0.02
0.13/0.09
0.01/0.02
+
+Table 2: Cluster similarities (adjusted Rand score) between language, or language family clusters, and $k$ -means clustering, with a random seed of 42. Results on Wikipedia and Common Crawl are separated with a backslash.
+
+We present these statistics in Figure 2. A clear difference between doing nothing/lemmatising and shuffling/corrupting leaps out. With UD tokenisation, none of our corpus metrics correlates well with model performance, while BPE tokenisation consistently throws out a range of correlations. There is also a clear difference between Wikipedia and Common CWEl; in general, we find that correlations tend to be either weaker or less significant with Common CWEl than with Wikipedia. We hypothesise that this is due to Wikipedia being both more homogeneous and less noisy as a corpus.
+
+Type-token ratio is a strong predictor For the default (and, to some extent, lemmatised) models, we find that type-token ratio has a strong positive correlation to ML-score (particularly retrieval), implying that lexical diversity enables better transfer. This is perhaps unsurprising – infrequent types might act as ‘anchors’, allowing easier transfer for their surrounding contexts. This is somewhat backed up by the disappearance of this metric in
+
+shuffled models.
+
+Avg. token length predicts BPE performance Over our scrambled corpora, for both Wikipedia and Common Crawl, $^{1}$ it appears that average token length correlates strongly to downstream performance. The fact that this occurs for BPE tokenisation and not UD implies that this is likely a proxy for the number of BPE splits, rather than a realistic cross-linguistic measure; the more aggressive the BPE, the poorer the model. This is also somewhat backed up by the fact that the number of tokens inversely correlates to BPE performance; the shorter the average BPE split, the more the actual number of tokens in a corpus, for a given language.
+
+Sentence length often correlates negatively This finding is consistent across all our BPE models; longer sentence lengths (in tokens) imply poorer multilingual scores. This is likely at least partially related to the previous observation - the
+
+
+(a) Sentence retrieval
+
+
+(b) Word translation
+Figure 3: Spearman correlations, with a more relaxed $\alpha = 0.01$ . X-axis indicates vocabulary statistics. Y-axis indicates tokenisation method. Correlations are on Common Crawl data, with the appropriate metric averaged at layers 0 and 7.
+
+longer the average token, the less aggressive the BPE, and the less aggressive the BPE, the shorter the average sentence.
+
+Hapax/dis/tris ratios Results generally tend to correlate positively with the ratio of hapax legomena to the total number of tokens, when BPE tokenisation is used. This difference is likely due to the presence of more morphemic hapaxes in BPE-tokenised models: UD tokenisation is likely to result in a long tail of rarer morphological forms of rarer tokens. Curiously, this correlation, albeit weaker, is reversed for dis and tris legomena.
+
+# 5.3 Vocabulary correlations
+
+Next, we examine ML score correlations with different properties of the size 2048 UD/BPE vocabulary for each model. Note that as each model is trained with a unique corpus, each model has a unique vocabulary. Our features include:
+
+- Average token length; for non-initial wordpieces, we do not include the length of the prefix.
+- Counting complexity, using UniMorph (Kirov et al., 2020) to count the number of distinct morphological features in a given language.
+- The frequency of single-letter vocab items.
+- The frequency of digits in the vocab.
+- The frequency of punctuation in the vocab.
+
+We present these correlations in two heatmaps in Figures 3a and 3b. Some of our observations back
+
+up the observations in the previous section (eg. token length correlates inversely with ML score).
+
+Counting complexity is complex. Gratifyingly, the counting complexity metric (Sagot, 2013) appears to match Cotterell et al. (2018)'s observation, and is positively correlated with both retrieval and (to a larger extent) translation. Strangely, however, this correlation also appears to hold for both corrupted corpora; this is odd, as these corpora are lemmatised, implying the absence of inflectional morphology. It is plausible that this effect is still visible (albeit weakened) due to differences in the distribution of function words and stems, when compared with a language with actual differences in counting complexity; a language with strong case-marking, for instance, is likely to have a very different distribution of adpositions than a language without. This finding also backs up Mielke et al. (2019), who suggest that vocabulary-level measures may correlate better.
+
+Specific tokens may act as anchors For the task of word translation, we notice that positive correlations tend to occur with the frequency of noninitial subwords, the frequency of digits, and the frequency of single-letter tokens. This effect, visible across all three categories, might indicate that these tokens act as anchors, enabling easier transfer in their contexts.
+
+No clear patterns exist for retrieval We notice no clear factors contributing to retrieval. While the number of unused tokens does appear to correlate
+
+
+Figure 4: Retrieval/translation scores for (learnt) absolute position, (fixed) sinusoidal position and no position. English in bold black for easier comparison with Duffer and Schütze (2021).
+
+in the lemmatised models, this is mild and is likely to be an effect of the vocab size being effectively smaller.
+
+# 5.4 Ablation experiments
+
+While somewhat tangential to our original research question, we attempted to modify the positional embedding bias in our model. Dufter and Schütze (2021) show that positional embeddings are critical to building a multilingual space; Sinha et al. (2021) show that positional embeddings are critical to building monolingual language models, a finding backed up in other work (Abdou et al., 2022; Papadimitriou et al., 2022), where the authors also emphasise the importance of meaningful word order. These observations are somewhat contradictory to our findings, where shuffling corpora at a token-level still allows for successful multilingual space induction.
+
+To resolve this, we train two additional models, on a corrupted variant of Common Crawl, presented in Figure 4. The first of these has its learnt, absolute position embeddings (Devlin et al., 2019) replaced with sinusoidal embeddings, as in the original transformer paper (Vaswani et al., 2017), and the other has them removed entirely. While we
+
+would expect to see model performance drop considerably without position embeddings, this is often not the case at all; there is no real visible difference in performance across either of the tasks, implying that certain 'clues' are perhaps sufficient to build a multilingual space, even when a functional monolingual space might not exist for any of the languages.
+
+Having said that, we note that English (annotated in black) is not one of the easier languages to build multilingual spaces for, even with absent position embeddings; as such, our English results are more similar to the results reported by Dufter and Schütze (2021).
+
+# 6 Conclusion
+
+In this work, we attempted to measure the variance in the ability of masked language models to build multilingual spaces with the underlying typology of the language. In doing so, we have shown that these models are capable of building multilingual spaces even when sentences are lemmatised and scrambled at a token level, showing that multilingualism can exist even when transformers act, functionally, like bag-of-words models. This does not, however, necessarily imply the ability to effectively model language (Abdou et al., 2022), but merely the ability to align two disjoint linguistic spaces.
+
+We have also shown that, on the one hand, the ability to build a multilingual space is only weakly correlated to language (given multiple corpora) and to language family, and that, on the other hand, certain corpus-level metrics (specifically, type-token ratios and the presence of hapax legomena) are relatively good predictors of multilingual space quality, while others (such as the number of tokens or the average sentence length) are negatively correlated.
+
+Our work is not without its caveats. For one, a lot of our correlating factors muddy the waters between what is an inherent property of the language itself, and what is a property of the corpus we use. While we use texts from the same domain in all our languages, both Wikipedia and Common Crawl are widely inconsistent across language, unless explicitly made comparable (Otero and López, 2010). Further, as discussed earlier, our scenario is not strictly realistic: first, this is a bilingual setup meant to approximate a multilingual one; second, both our languages have exactly the same structure; third, our language models are very underparame
+
+terised relative to full-scale models. It is unlikely that our observations would hold true in a real-world scenario; given, however, that our aim was to study the inductive biases of masked language models, using full-scale models would defeat the purpose somewhat, as the sheer volume of training data would have overridden these biases. Having said that, we present this work as an attempt to add to the often conflicting pool of papers attempting to shed some light on how language models acquire language.
+
+# Limitations
+
+This work has several limitations, some of which we have addressed. To reiterate, in order to enable some degree of cross-linguistic diversity in this analysis, our bilingual setup is only an approximation of a true multilingual setup. Conversely, we are limited in the data we have access to: for inclusion in this study, languages had to have large and relatively noiseless dependency-parsed corpora available; as such, we are somewhat biased towards over-representing Indo-European languages.
+
+# Ethical considerations
+
+The research presented in this work is compatible with the ACL ethics policy; the data we use is a toy subset of openly available corpora, and our models are very underparameterised, relative to the current state-of-the-art. Given the sheer number of models we train, our main experimental findings require approximately 1200 GPU hours for training, approximately equivalent to the amount of time required to train a full-scale BERT model on the same V100 GPUs.[2]
+
+# References
+
+Mostafa Abdou, Vinit Ravishankar, Artur Kulmizev, and Anders Søgaard. 2022. Word Order Does Matter and Shuffled Language Models Know It. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6907-6919.
+Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the Cross-lingual Transferability of Monolingual Representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623–4637, Online. Association for Computational Linguistics.
+
+Ryan Cotterell, Sebastian J. Mielke, Jason Eisner, and Brian Roark. 2018. Are All Languages Equally Hard to Language-Model? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 536-541, New Orleans, Louisiana. Association for Computational Linguistics.
+Marie-Catherine de Marneffe, Christopher D. Manning, Joakim Nivre, and Daniel Zeman. 2021. Universal Dependencies. Computational Linguistics, 47(2):255-308.
+Ameet Deshpande, Partha Talukdar, and Karthik Narasimhan. 2021. When is BERT Multilingual? Isolating Crucial Ingredients for Cross-lingual Transfer. arXiv:2110.14782 [cs].
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [cs].
+Philipp Dufter and Hinrich Schütze. 2021. Identifying Necessary Elements for BERT's Multilinguality. arXiv:2005.00396 [cs].
+Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. arXiv:1803.11138 [cs].
+Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-Lingual Ability of Multilingual BERT: An Empirical Study. arXiv:1912.07840 [cs].
+Christo Kirov, Ryan Cotterell, John Sylak-Glassman, Geraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sabrina J. Mielke, Arya D. McCarthy, Sandra Kübler, David Yarowsky, Jason Eisner, and Mans Hulden. 2020. UniMorph 2.0: Universal Morphology. arXiv:1810.11101 [cs].
+Natalia Levshina. 2019. Token-based typology and word order entropy: A study based on universal dependencies. Linguistic Typology, 23:533-572.
+Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies. Transactions of the Association for Computational Linguistics, 4:521-535.
+Sebastian J. Mielke, Ryan Cotterell, Kyle Gorman, Brian Roark, and Jason Eisner. 2019. What Kind of Language Is Hard to Language-Model? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4975-4989, Florence, Italy. Association for Computational Linguistics.
+Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies
+
+v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666.
+Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajic, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4034-4043, Marseille, France. European Language Resources Association.
+Pablo Gamallo Otero and Isaac González López. 2010. Wikipedia as multilingual source of comparable corpora. In Proceedings of the 3rd Workshop on Building and Using Comparable Corpora, LREC, pages 21-25. CiteSeer.
+Isabel Papadimitriou, Richard Futrell, and Kyle Mahowald. 2022. When classifying arguments, bert doesn't care about word order... except when it matters. Proceedings of the Society for Computation in Linguistics, 5(1):203-205.
+Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How Multilingual is Multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy. Association for Computational Linguistics.
+Shauli Ravfogel, Yoav Goldberg, and Tal Linzen. 2019. Studying the Inductive Biases of RNNs with Synthetic Variations of Natural Languages. page 11.
+Benoit Sagot. 2013. Comparing complexity measures. In Computational approaches to morphological complexity.
+Koustuv Sinha, Robin Jia, Dieuwke Hupkes, Joelle Pineau, Adina Williams, and Douwe Kiela. 2021. Masked language modeling and the distributional hypothesis: Order word matters pre-training for little. arXiv preprint arXiv:2104.06644.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. arXiv:1706.03762 [cs].
+Nguyen Xuan Vinh, Julien Epps, and James Bailey. 2010. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11(95):2837-2854.
+Jennifer C. White and Ryan Cotterell. 2021. Examining the Inductive Bias of Neural Language Models with Artificial Languages. arXiv:2106.01044 [cs].
\ No newline at end of file
diff --git a/theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/images.zip b/theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..bbbdd19691a86890d17e47959892f57525838466
--- /dev/null
+++ b/theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c32404587ee6fe00282bb61b9e7b58e22f0bb1aef4e15671fa650da37b0f553b
+size 213990
diff --git a/theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/layout.json b/theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..922c863300b14b2431775cdacd1c638257ad9403
--- /dev/null
+++ b/theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fcd9eaa3e3f6989c781c2c5b90a8390413fc7702b0ed3906d28d511af07a54cb
+size 250512
diff --git a/theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/ade8f037-4c0e-4118-b6fb-59ca86898416_content_list.json b/theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/ade8f037-4c0e-4118-b6fb-59ca86898416_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..32b4ebc3eb3fdf748f84c1290e4fdf0543cdc8ba
--- /dev/null
+++ b/theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/ade8f037-4c0e-4118-b6fb-59ca86898416_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5f6c6f26274654e00a60326a40ca23359ff62ae64e20ed66401eed7e7f13656c
+size 45157
diff --git a/theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/ade8f037-4c0e-4118-b6fb-59ca86898416_model.json b/theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/ade8f037-4c0e-4118-b6fb-59ca86898416_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..23e4a6f95ce92e1669f113abbde191d6a9808ad0
--- /dev/null
+++ b/theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/ade8f037-4c0e-4118-b6fb-59ca86898416_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ae9d1a95223d9e861bb6ea4ddbe1351037b8d3b5dc087dd4312515ebe62c15cb
+size 54406
diff --git a/theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/ade8f037-4c0e-4118-b6fb-59ca86898416_origin.pdf b/theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/ade8f037-4c0e-4118-b6fb-59ca86898416_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b5ede379a03c8c7c49d8440b7e4f8b3c5799aee3
--- /dev/null
+++ b/theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/ade8f037-4c0e-4118-b6fb-59ca86898416_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bfb2fa85b51094011ae96176968f55c56531cdea9c4a2ee274c27f1675743551
+size 1057308
diff --git a/theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/full.md b/theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..40f6db7a5c61da4b4e243be7ba943ca0a4b6c1ea
--- /dev/null
+++ b/theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/full.md
@@ -0,0 +1,217 @@
+# The Undesirable Dependence on Frequency of Gender Bias Metrics Based on Word Embeddings
+
+Francisco Valentini
+
+ICC (UBA-CONICET)
+
+Maestría en Data Mining (UBA)
+
+Buenos Aires, Argentina
+
+ft.valentini@gmail.com
+
+Diego Fernandez Slezak ICC (UBA-CONICET)
+
+Buenos Aires, Argentina
+
+dfslezak@dc.uba.ar
+
+Germán Rosati
+
+Escuela IDAES (UNSAM)
+
+Buenos Aires, Argentina
+
+grosati@unsam.edu.ar
+
+Edgar Altszyler
+
+ICC (UBA-CONICET)
+
+Maestria en Data Mining (UBA)
+
+Buenos Aires, Argentina
+
+ealtszyler@dc.uba.ar
+
+# Abstract
+
+Numerous works use word embedding-based metrics to quantify societal biases and stereotypes in texts. Recent studies have found that word embeddings can capture semantic similarity but may be affected by word frequency. In this work we study the effect of frequency when measuring female vs. male gender bias with word embedding-based bias quantification methods. We find that Skip-gram with negative sampling and GloVe tend to detect male bias in high frequency words, while GloVe tends to return female bias in low frequency words. We show these behaviors still exist when words are randomly shuffled. This proves that the frequency-based effect observed in unshuffled corpora stems from properties of the metric rather than from word associations. The effect is spurious and problematic since bias metrics should depend exclusively on word co-occurrences and not individual word frequencies. Finally, we compare these results with the ones obtained with an alternative metric based on Pointwise Mutual Information. We find that this metric does not show a clear dependence on frequency, even though it is slightly skewed towards male bias across all frequencies.
+
+# 1 Introduction
+
+Word embeddings are one of the most commonly used techniques to measure semantic closeness between words in a corpus. In recent years, they have been widely used in Computational Social Science applications to measure societal biases and stereotypes (Caliskan et al., 2017; Garg et al., 2018; Kozlowski et al., 2019; Lewis and Lupyan, 2020; Charlesworth et al., 2021).
+
+For practical purposes, we consider bias to be the degree to which the language used to describe
+
+groups or things is different. Bias is typically measured by computing the difference between the mean similarity of words of two context groups $A$ and $B$ with respect to a target word $x$ :
+
+$$
+\operatorname {B i a s} _ {\mathrm {W E}} = \underset {a \in A} {\text {m e a n}} \cos \left(v _ {x}, v _ {a}\right) - \underset {b \in B} {\text {m e a n}} \cos \left(v _ {x}, v _ {b}\right), \tag {1}
+$$
+
+where $v_{i}$ is the word embedding of word $i$ and $\cos(v_{i}, v_{j})$ is the cosine similarity between vectors.
+
+Gender bias has long been one of the most studied biases with this method. In this context, $A$ and $B$ are usually defined as gendered nouns and pronouns (Caliskan et al., 2017; Lewis and Lupyan, 2020). A representative example is Garg et al. (2018), where the female vs. male bias of professions in historical corpora is found to correlate with the percentage of women employed in each profession over time.
+
+Not as widely used as word embeddings, Pointwise Mutual Information (PMI) is a metric of word similarity that can also be used to study biases (Gálvez et al., 2019; Aka et al., 2021; Valentini et al., 2021). Valentini et al. (2021) define the PMI-based bias metric as
+
+$$
+\operatorname {B i a s} _ {\mathrm {P M I}} = \operatorname {P M I} (x, A) - \operatorname {P M I} (x, B),
+$$
+
+where
+
+$$
+\operatorname {P M I} (x, Y) = \log \frac {P (x , Y)}{P (x) P (Y)}.
+$$
+
+$P(x,Y)$ is the probability of co-occurrence between the word $x$ with any one in $Y$ in a window of a predefined number of words, and $P(x)$ and $P(Y)$ are the probability of occurrence of the word $x$ and any word in $Y$ , respectively. Valentini et al.
+
+(2021) show that $\mathrm{Bias}_{\mathrm{PMI}}$ can be expressed as
+
+$$
+\operatorname {B i a s} _ {\mathrm {P M I}} = \log \frac {P (x \mid A)}{P (x \mid B)}. \tag {2}
+$$
+
+That is, $\text{Bias}_{\text{PMI}}$ can be interpreted as how much more likely it is to find words in $x$ in the context of words in $A$ than in the context of words in $B$ , in a log scale. Thus, it captures exclusively first-order associations and can be computed via maximum likelihood using co-occurrence counts from the text (Valentini et al., 2021).
+
+Some recent works have studied the relationship between word frequencies and word embeddings. In particular, embeddings seem to encode word frequency even after normalization (Schnabel et al., 2015), vector norm depends on word frequency (Wilson and Schakel, 2015), top principal component directions encode frequency in different ways (Mu and Viswanath, 2018) and vectors of high-frequency and low-frequency words lie in different regions of the embedding space (Gong et al., 2018).
+
+These studies are nevertheless inconclusive in the sense that they do not determine clearly to what extent the association observed is caused by actual attributes of corpora or by undesirable properties of embedding training. Hence, an answer to the origin of the relation between embeddings and frequency is yet to be found. What is more, the repercussions of this effect in applications relevant to Computational Social Sciences such as bias quantification have not yet been explored.
+
+We make three main contributions. First, we show that frequency has an association with gender bias when measured with word embedding-based metrics: both Skip-gram with negative sampling (SGNS) and GloVe-based bias metrics tend to detect male bias in high frequency words, while GloVe also yields female bias on average in low-frequency words. Second, we show that the dependence of the embedding-based gender bias on frequency holds when tokens in the corpus are randomly shuffled. This proves that the dependence on frequency is an artifact of the metric itself i.e. that embedding-based bias metrics can encode frequency spuriously. Third, we find that the PMI-based gender bias metric does not present this frequency-based effect but is slightly skewed towards male bias across all frequency ranges. $^{1}$
+
+Our analyses are restricted to the English language and are based on a binary understanding of
+
+gender (see Limitations).
+
+# 2 The effect of frequency on gender bias
+
+Our objective is to study the association between gender bias and frequency in the widely used embedding-based metrics and in the alternative PMI-based metric. Therefore, in a first experiment, we analyze this in two pretrained word embeddings, GloVe (Pennington et al., 2014) and Word2Vec with SGNS (Mikolov et al., 2013).
+
+We do this by studying the distribution of bias in different bins of frequency of words in the vocabulary. Bias is computed with equation 1 with the female and male context words lists used in Caliskan et al. (2017), and we refer to this as female bias or gender bias. For each frequency bin, we also compute the ratio between the mean and the sample standard deviation (SD). These are Cohen's $d$ effect sizes of the mean of each group under the null hypothesis of absence of bias on average (Cohen, 1988). Here we use it as a normalized magnitude of the deviation of the distribution from zero. We use this methodology to assess the association between gender bias and frequency hereinafter. See Appendix B for further details.
+
+There is a clear association between gender $\mathrm{Bias}_{\mathrm{WE}}$ with the pretrained embeddings and target word frequency (Figure 1). GloVe embeddings present a monotonic relationship between frequency and gender bias, such that the top $10^{3.5}$ words tend to have male bias with large effect sizes, whereas less frequent words have mean female bias with medium to large effect sizes. In the SGNS embeddings the effect is small and positive in less frequent words, but in the top 100 words there is a large shift towards male bias.
+
+Even if there is literature which has studied the relationship between frequency and word vectors (see section 1), this result is still startling: a priori, we wouldn't expect the gender bias of words to correlate so strongly with frequency, because word similarity should be more closely related to semantics and co-occurrences in the training corpus than with the individual frequencies of words.
+
+To validate this behavior, we train GloVe and SGNS embeddings from scratch with the English Wikipedia and study the association between gender bias and word frequency. We compare this with the results obtained with $\mathrm{Bias}_{\mathrm{PMI}}$ (equation 2).
+
+
+
+
+Figure 1: Female bias distribution vs. words' frequency rank in pretrained GloVe (top panel) and Word2Vec with SGNS (bottom panel). Words are grouped into bins according to their rank in a log-scale, so that the most frequent words are in the leftmost bin and the less frequent, in the rightmost. We use frequency ranks as raw frequencies are not available for pretrained embeddings. Blue dots represent the means and blue values are the effect sizes (mean to SD ratio). The plots are not comparable in either axis because the corpus, the vocabulary and the training methodology of each set of embeddings are different.
+
+# 2.1 Comparing BiaswE with BiasPMI
+
+Methods and data We measure the gender bias of words in the vocabulary of the 2021 English Wikipedia with $\text{Bias}_{\text{PMI}}$ and $\text{Bias}_{\text{WE}}$ and assess the association with word frequency. We train SGNS and GloVe vectors to compute $\text{Bias}_{\text{WE}}$ , whereas the frequency counts from the corpus are used to compute $\text{Bias}_{\text{PMI}}$ . Refer to appendices A and B for details on the corpus and the methods, respectively.
+
+Results The relation between $\mathrm{Bias}_{\mathrm{WE}}$ and frequency in pretrained embeddings (Figure 1) holds qualitatively when training embeddings from scratch (top and middle panels in Figure 2). GloVe embeddings yield a negative relationship between female bias and frequency, while in SGNS we find an average male bias with medium to large effect sizes in high frequency words.
+
+When using $\text{Bias}_{\text{PMI}}$ no frequency bin is
+
+strongly biased on average (bottom panel in Figure 2). There is however a slight skew towards male bias, such that all frequency ranges present small negative effect sizes. Furthermore, the variability of bias tends to increase as the frequency of target words decreases: this behavior is attributable to the fact that PMI is usually high and noisy in low frequency words (Jurafsky and Martin, 2009).
+
+This analysis is not enough to determine that the effect of frequency on embedding-based bias metrics is a spurious artifact generated by the embeddings. It still might be the case that higher frequency words are actually more male-biased than lower frequency words due to second-order or higher associations, thus yielding plots like those on the top and middle panels of Figure 2. We conduct the following study to assess this.
+
+# 2.2 The undesirable dependency on frequency
+
+Methods and data We create five randomly shuffled, independent versions of the Wikipedia corpus where tokens are randomly located across the text. In these corpora words keep their frequency but lose their context because co-occurrences are completely random. We estimate bias with $\text{Bias}_{\text{WE}}$ and $\text{Bias}_{\text{PMI}}$ in each of the corpora and consider the average of the five values as the gender bias of each word. We analyze the relationship between the gender bias metrics and frequency in this setting. By shuffling the words in the corpus, contexts become meaningless, thus any association found between bias and frequency in this setting is explained only by the frequencies of the words. We highlight that it is problematic and undesirable for a metric to detect biases in a setting where they do not exist.
+
+Results In this controlled experimental setup, $\mathrm{Bias}_{\mathrm{WE}}$ presents a strong association with target word frequency (Figure 3): average male bias grows as frequency increases for both SGNS and GloVe, with large effect sizes from around frequencies $10^{4}$ onwards. Low frequency words present female bias on average when measured with GloVe, while they tend to have a slight male bias with small effect sizes when using SGNS.
+
+Conversely, measuring bias with PMI in the shuffled corpora does not produce a clear dependence on frequency. The average bias is roughly constant for all frequencies, with small negative effect sizes; that is, there is a slight skew towards male bias across all frequencies.
+
+
+Figure 2: Female bias vs. frequency in Wikipedia. Bias is measured with $\text{Bias}_{\text{WE}}$ using GloVe (top panel), $\text{Bias}_{\text{WE}}$ using SGNS (middle panel), and $\text{Bias}_{\text{PMI}}$ (bottom panel). Words in the vocabulary are grouped in bins according to their frequencies in log-scale. Blue dots represent the means and blue values are the effect sizes (mean to SD ratio).
+
+# 3 Discussion and Conclusion
+
+In this work we revealed the existence of a spurious frequency-based distortion in gender bias metrics based on the cosine similarity between word embeddings. Both SGNS and GloVe-based gender bias metrics tend to detect male bias in high frequency words, while GloVe also yields female bias on average in low frequency words.
+
+To determine whether this effect is indeed an undesirable artifact of the embedding-based metric we assessed the relation between gender bias and frequency in shuffled corpora, where words lose their context but keep their frequency. Results reveal that the dependence on frequency is caused by the metric and does not originate from actual properties of the texts. This shows that popular gender
+
+
+Figure 3: Female bias vs. frequency in shuffled Wikipedia. The bias of each word is computed as the average of five estimates, one for each of five shuffles performed. Words are grouped in bins according to their frequencies. Blue dots represent the means and blue values are the effect sizes (mean to SD ratio).
+
+bias measurements can detect bias even when there is none. Additionally, we found that an alternative PMI-based bias metric does not show a clear dependence on frequency, even though it shows a slight tendency towards male bias.
+
+According to these results, we consider the PMI-based bias metric has an advantage over the embedding-based metrics, which adds to the advantages of interpretability and hypothesis testing (Valentini et al., 2021). However, as PMI captures exclusively first-order associations and is unable to capture synonyms, it may be required to include several terms associated to the context words in order to measure some biases.
+
+Male nouns and pronouns are usually more frequent than female ones in large corpora (Twenge et al., 2012; Gálvez et al., 2019). For example,
+
+in the Wikipedia corpus, $he$ appears 11.8 million times, while the frequency of $she$ is 3.5 million (refer to Appendix B for the frequencies of the other gendered context words).
+
+The disparity in frequencies of male and female contexts is a type of bias in itself and can be measured by counting word occurrences. In contrast, the bias we study refers to the stereotyped contexts in which male and female entities are portrayed, and should be independent of individual word frequencies.
+
+When words are shuffled, the biases associated with the contexts of female and male context words are eliminated, but the disparities in frequencies are maintained. We propose that bias metrics capture this disparity in frequencies of female and male context words. In the case of the embedding-based metric, this hypothesis is supported by the existing evidence that embeddings encode word frequency in addition to semantics.
+
+We believe the random-shuffling experiment is general enough to show that the frequency effect would still exist with other word lists, types of biases and domains, as long as the frequencies of the context words differ. This result is important because the context words' frequencies are disregarded when measuring biases with embeddings.
+
+Our findings have important implications for bias measurement applications, as they cast doubt on the reliability of widely used bias metrics when the frequencies of the words involved are very different. We believe that more effort should be put into designing new bias detection methods that do not suffer from this weakness.
+
+# Limitations
+
+We use sets of context words typically used in the gender bias literature. These words imply a binary understanding of gender, excluding other gender representations from the bias measurement. Moreover, we focus exclusively on the English Wikipedia corpus and do not apply methods on corpora of other domains, which might yield different distributions of gender bias.
+
+We report results using default hyperparameters. This intends to mimic the typical experimental setup found in the Computational Social Science literature. Hyperparameters are left at their default values because there is no ground truth for biases, i.e. there are no annotations indicating the level of bias of words.
+
+The studies conducted in this work can be adapted to other languages, other biases and other corpora. We hope further research can assess the frequency-based distortion in these settings as well as the influence of hyperparameter choices.
+
+# References
+
+Osman Aka, Ken Burke, Alex Bauerle, Christina Greer, and Margaret Mitchell. 2021. Measuring model biases in the absence of ground truth. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. ACM.
+Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.
+Tessa ES Charlesworth, Victor Yang, Thomas C Mann, Benedek Kurdi, and Mahzarin R Banaji. 2021. Gender stereotypes in natural language: Word embeddings show robust consistency across child and adult language corpora of more than 65 million words. Psychological Science, 32(2):218-240.
+Jacob Cohen. 1988. Statistical power analysis for the behavioral sciences. Routledge.
+Ramiro H. Gálvez, Valeria Tiffenberg, and Edgar Altszyler. 2019. Half a century of stereotyping associations between gender and intellectual ability in films. Sex Roles, 81(9):643-654.
+Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635-E3644.
+Chengyue Gong, Di He, Xu Tan, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2018. Frage: Frequency-agnostic word representation. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc.
+Daniel Jurafsky and James H. Martin. 2009. Speech and Language Processing (2nd Edition). Prentice-Hall, Inc., USA.
+Austin C. Kozlowski, Matt Taddy, and James A. Evans. 2019. The geometry of culture: Analyzing the meanings of class through word embeddings. American Sociological Review, 84(5):905-949.
+Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211-225.
+Molly Lewis and Gary Lupyan. 2020. Gender stereotypes are reflected in the distributional structure of 25 languages. Nature Human Behaviour, 4(10):1021-1028.
+
+Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc.
+Jiaqi Mu and Pramod Viswanath. 2018. All-but-the-top: Simple and effective postprocessing for word representations. In International Conference on Learning Representations.
+Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
+Radim Rehurek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta. ELRA. http://is.muni.cz/publication/884893/en.
+Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 298-307, Lisbon, Portugal. Association for Computational Linguistics.
+Jean M Twenge, W Keith Campbell, and Brittany Gentile. 2012. Male and female pronoun use in us books reflects women's status, 1900-2008. Sex roles, 67(9):488-493.
+Francisco Valentini, Germán Rosati, Damián Blasi, Diego Fernandez Slezak, and Edgar Altszyler. 2021. On the interpretation and significance of bias metrics in texts: a pmi-based approach. arXiv preprint arXiv:2104.06474.
+Benjamin J Wilson and Adriaan MJ Schakel. 2015. Controlled experiments for word embeddings. arXiv preprint arXiv:1510.02675.
+
+# A Corpus
+
+We use the April 2021 Wikipedia dump2 and remove articles with less than 50 tokens. We remove non-alpha-numeric symbols and apply sentence splitting. The corpus contains around 1.7 billion tokens and 78.1 million documents (sentences) after pre-processing.
+
+# B Methods
+
+We measure female vs. male gender using gendered nouns and pronouns (Caliskan et al., 2017; Lewis and Lupyan, 2020), namely, $\mathrm{A} = \{\text{female, woman, girl, sister, she, her, hers, daughter}\}$ and $\mathrm{B} = \{\text{male, man, boy, brother, he, him, his, son}\}$ .
+
+Tables 1 and 2 display the frequency of each of these words in the pre-processed Wikipedia corpus.
+
+
Word
Frequency
her
3,720,408
she
3,517,570
daughter
294,043
female
282,159
woman
236,954
sister
179,511
girl
141,616
hers
5,706
+
+Table 1: Frequencies of female context words in the Wikipedia corpus
+
+
Word
Frequency
he
11,815,189
his
9,603,118
him
1,811,552
son
541,828
man
443,881
brother
287,544
male
181,471
boy
124,326
+
+We exclude words with fewer than 100 occurrences, which yields a vocabulary of 222,144 words. Table 3 displays the distribution of these words according to their frequencies, excluding the female and male context words.
+
+Table 2: Frequencies of male context words in the Wikipedia corpus
+
+
Frequency
# words
[102, 102.5]
116,340
(102.5, 103]
54,187
(103, 103.5]
26,617
(103.5, 104]
13,144
(104, 104.5]
6,579
(104.5, 105]
3,255
(105, 105.5]
1,448
(105.5, 106]
441
(106, 108.12]
117
+
+Table 3: Number of words in each frequency range in the Wikipedia corpus
+
+dings trained on Wikipedia 2014 and Gigaword 5 (Pennington et al., 2014), and Word2Vec SGNS embeddings trained on Google News (Mikolov et al., 2013), both with 300 dimensions.
+
+All methods employed in sections 2.1 and 2.2 (GloVe, SGNS and PMI) use a window size of 10 and remove out-of-vocabulary tokens before the corpus is processed into word-context pairs (Levy et al., 2015).
+
+For SGNS we use the Word2Vec implementation available in the Gensim library (Rehurek and Sojka, 2010) with default hyperparameters. GloVe is trained with Pennington et al. (2014)'s implementation with 100 iterations.
+
+For PMI, we count co-occurrences with the GloVe module (Pennington et al., 2014) and set the smoothing parameter $\epsilon$ to 0.01, so that it can be computed whenever there are no co-occurrences between the target word and any of the context words.
+
+All computations were performed on a desktop machine with 4 cores Intel Core i5-4460 CPU @ 3.20GHz and 32 GB RAM. Training took around 30 minutes per iteration with GloVe and 2 hours per epoch with SGNS.
\ No newline at end of file
diff --git a/theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/images.zip b/theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..dea6320dfe8b3f0c42fc8cb40401468dc9f69501
--- /dev/null
+++ b/theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:86e9fb2ff428f1a0c7d6404f6c89ed60b868a75994b4acbf1df54a0fd61210c2
+size 316871
diff --git a/theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/layout.json b/theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e22cfa6a9be577497b86d51cbc127b2bb29f0870
--- /dev/null
+++ b/theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8383bfb40374785b102e01ac1cbd299d262f684b9fdbdf4a3d5b460a70ab661c
+size 220070
diff --git a/thinkbeyondwordsexploringcontextrelevantvisualcommonsensefordiversedialoguegeneration/d466fe7e-85ed-4516-9603-8a7f428a0d24_content_list.json b/thinkbeyondwordsexploringcontextrelevantvisualcommonsensefordiversedialoguegeneration/d466fe7e-85ed-4516-9603-8a7f428a0d24_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f490604973c093a648b41db3b47f9339a8fb4946
--- /dev/null
+++ b/thinkbeyondwordsexploringcontextrelevantvisualcommonsensefordiversedialoguegeneration/d466fe7e-85ed-4516-9603-8a7f428a0d24_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67a154c8b3d76bf8dfb1c6b1240e6d5ff877183ab7ae983d2f3e5751627696ac
+size 84352
diff --git a/thinkbeyondwordsexploringcontextrelevantvisualcommonsensefordiversedialoguegeneration/d466fe7e-85ed-4516-9603-8a7f428a0d24_model.json b/thinkbeyondwordsexploringcontextrelevantvisualcommonsensefordiversedialoguegeneration/d466fe7e-85ed-4516-9603-8a7f428a0d24_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..9f1b2ff366c893807fe6afc54884190100e82167
--- /dev/null
+++ b/thinkbeyondwordsexploringcontextrelevantvisualcommonsensefordiversedialoguegeneration/d466fe7e-85ed-4516-9603-8a7f428a0d24_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d190f28db36eda5aa816de12f001aa236534004fcaaad60294ce814f65f18a1
+size 101568
diff --git a/thinkbeyondwordsexploringcontextrelevantvisualcommonsensefordiversedialoguegeneration/d466fe7e-85ed-4516-9603-8a7f428a0d24_origin.pdf b/thinkbeyondwordsexploringcontextrelevantvisualcommonsensefordiversedialoguegeneration/d466fe7e-85ed-4516-9603-8a7f428a0d24_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ac6a9f6e945455600f98946d4e134c00ca7c4bf9
--- /dev/null
+++ b/thinkbeyondwordsexploringcontextrelevantvisualcommonsensefordiversedialoguegeneration/d466fe7e-85ed-4516-9603-8a7f428a0d24_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:23af6e3867e179c050ec7753cf97a5f9bbbdbdba216d3c16ea8d4a370cf657ba
+size 1989145
diff --git a/thinkbeyondwordsexploringcontextrelevantvisualcommonsensefordiversedialoguegeneration/full.md b/thinkbeyondwordsexploringcontextrelevantvisualcommonsensefordiversedialoguegeneration/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..66c43866598154d3665d94f65ef912a7d1bceda6
--- /dev/null
+++ b/thinkbeyondwordsexploringcontextrelevantvisualcommonsensefordiversedialoguegeneration/full.md
@@ -0,0 +1,402 @@
+# Think Beyond Words: Exploring Context-Relevant Visual Commonsense for Diverse Dialogue Generation
+
+Yiting Liu $^{1,2}$ , Liang Li $^{1*}$ , Beichen Zhang $^{2}$ , Qingming Huang $^{1,2}$
+
+1 Key Laboratory of Intelligent Information Processing,
+
+Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS)
+
+2 University of Chinese Academy of Sciences
+
+{liuyiting21s, liang.li}@ict.ac.cn
+
+beichen.zhang@vip1.ict.ac.cn, qmhuang@ucas.ac.cn
+
+# Abstract
+
+Commonsense knowledge has been widely considered for building intelligent open-domain dialogue agents, aiming to generate meaningful and diverse responses. Previous works in this field usually lack the ability to effectively obtain and utilize auxiliary commonsense from the external visual world. In this paper, we argue that exploiting logical information in images related to context can be effective to enrich and steer the generation process. In view of this, we propose VICTOR, a context-relevant Visual Commonsense enhanced dialogue generatorTOR for generating coherent and informative responses. To obtain the associated visual commonsense, we devise a novel approach that expands topic words on the knowledge graph and maps them into daily scenarios. During the generation, the model adopts multimodal fusion mechanism to integrate visual and textual information, and adaptively combine their decoding distributions for better response generation. The experimental results on two public datasets show that our proposed method outperforms the latest competitive methods in terms of coherence and diversity.
+
+# 1 Introduction
+
+Building intelligent dialogue systems is a longstanding goal of artificial intelligence and has attracted increasing research attention in recent years. An ideal conversation agent is supposed to generate diverse and informative responses without sacrificing their relevance to the dialogue context. To avoid general and dull dialogue generation (Li et al., 2016), some approaches modify model architecture to manipulate latent variables and target distributions (Lin et al., 2020; Wang et al., 2021), yet these works limit themselves to original conversations without considering useful auxiliary information.
+
+Another series of solutions augment the training corpus with extra information like emotions or per
+
+# OTTers
+
+User A: I love going to the gym.
+
+User B: I like taking my dogs for a run. I love animals and I want to help them.
+
+
+
+
+
+
+
+
+
+# DailyDialog
+
+User A: How was your trip to Brazil?
+
+User B: I had no idea how seriously they take soccer!
+
+
+Figure 1: Examples from two pure language dialogue datasets, where the underlined green part is the output that needs to be generated1. The hidden visual memory, which contains associative commonsense and needs to be explored, can be essential for humans to make proper responses during the conversation.
+
+
+
+
+
+sonality (Mazare et al., 2018; Song et al., 2019). Following this line, works like Su et al. (2020); Majumder et al. (2021) introduce more general non-conversation text like forum comments and stories to help generate richer responses. However, these works only consider information stored in pure text, ignoring the grounding information from the external visual world, which is essential for generating really meaningful language (Harnad, 1990; Bisk et al., 2020).
+
+As shown in Figure 1, it is natural that when making a conversation, we do not only focus on current context. We also expand or transition the topics by using associative memory gained from the physi
+
+cal world, so that the chat can be more engaging and last longer. In this work, we introduce visual commonsense as the logical semantic information stored in visual scenes from daily life. Considering that images representing everyday scenarios are typically logical and grounded in commonsense, it is reasonable to introduce them into open-domain conversation as additional information. Liang et al. (2021); Shen et al. (2021) are pioneering works that introduce visual information into the general open-domain response generation. However, these works only connect visual information by simply matching the context representation with images, without explicitly considering the topic transition of conversation. This may lead to monotonous and narrow semantics of the responses. Besides, the semantic gap between modalities makes it difficult for these methods to effectively integrate visual features. Furthermore, they ignore the balance between the contributions of two modalities in the decoding stage.
+
+To alleviate the above issues, we present VIC-TOR, a context-relevant visual commonsense enhanced dialogue generator, which consists of three components: visual commonsense retriever, multimodal fusion block, and self-adaptive response generator. The visual commonsense retriever first extracts concept words from context. Then, in order to acquire explicit commonsense knowledge, it explores related concepts by multi-hop searching on knowledge graphs. Each of these related concepts will be considered globally and mapped into the corresponding images, which then produce captions to narrow the semantic gap. In this way, we obtain visual commonsense with rich associative semantic information.
+
+To facilitate diverse dialogue, our multimodal fusion block incorporates auxiliary visual knowledge at each decoding step. It encodes visual commonsense with a transformer block and utilizes a coattention mechanism to fuse two modalities. The response generator is based on GPT-2 model. It takes knowledge pairs gained from knowledge graphs as guidance to encourage consistent responses with relevant topics. Finally, at each decoding step, the generator uses soft probability to adaptively combine the distributions based on the textual and visual information. We demonstrate the effectiveness of our approach on two public datasets in comparison with various representative baselines.
+
+Our contributions are summarized as follows:
+
+- We present a novel approach to retrieve visual scenes based on dialogue. It expands concepts on knowledge graphs and maps them to unpaired image data, so as to acquire context-related visual commonsense with high quality.
+- We propose VICTOR, a new conversation agent that fuses multimodal information to enrich and steer the generation process. It adaptively balances textual information from context and external visual commonsense, generating diverse responses while maintaining their coherence with contexts.
+- We conduct extensive experiments on two open-domain dialogue datasets. The results show the effectiveness of our proposed method, and verify the potential of exploiting multimodal information for intelligent conversation agents.
+
+# 2 Related Work
+
+# 2.1 Controllable dialogue response generation
+
+The goal of open-domain dialogue systems is to establish engaging conversations with users. To satisfy the human need for communication and affection, an ideal conversation agent always has a higher requirement in consistency, semantics and diversity (Huang et al., 2020). Therefore, constraints on conversation attributes like persona (Mazare et al., 2018; Zhang et al., 2018) and sentiment (Song et al., 2019; Shen and Feng, 2020), and external non-conversation data like documents and knowledge base (Li et al., 2020; Majumder et al., 2020) are introduced to control the dialogue response and improve the interactivity of the conversation model. However, most of these works use additional constraints or guiding information in the form of pure text, neglecting the rich commonsense knowledge stored in the visual scene.
+
+# 2.2 Multimodal open-domain dialogue
+
+Along with the thriving of multimodal learning for tasks like captioning (Tu et al., 2022; Li et al., 2022) and entity mapping (Li et al., 2018; Liu et al., 2022), the use of visual information for improving language tasks has also shown great potential in areas such as machine translation (Caglayan et al., 2019; Fang and Feng, 2022) and semantic parsing (Shi et al., 2019; Kojima et al., 2020). However, its exploration for enhancing dialogue generation is still limited.
+
+Early attempts on this issue assume the conversation to be grounded on a given image (Mostafazadeh et al., 2017; Shuster et al., 2020). Yang et al. (2021) tries to recover the latent image of the conversation using conditional variational auto-encoding framework (Sohn et al., 2015). Recent researches (Liang et al., 2021; Shen et al., 2021) have taken it a step further by matching context with extra image data. Distinct from these existing works, our method expands original topics from context by searching from commonsense knowledge base, and uses corresponding images to explore valid visual information for response generation.
+
+# 3 The Proposed Method
+
+In this section, we first introduce our task formulation for open-domain dialogue generation with visual commonsense, and then illustrate the three main components of our proposed VICTOR model.
+
+# 3.1 Task Formulation
+
+Let $\mathcal{D}_T = \{(C_1,R_1),(C_2,R_2),\ldots ,(C_n,R_n)\}$ denotes the parallel conversational corpus. $C_i$ is the context and $R_{i}$ is the corresponding response. $\mathcal{D}_I$ denotes our collected image data. We assume that for each dialogue context $C_i$ we can find an image subset $V_{i} = \{v_{i1},v_{i2},\dots,v_{im}\}$ containing visual commonsense to assist the response generation, where $V_{i}\subseteq \mathcal{D}_{I}$ . Thus our goal is to learn a generation model $P(R_{i}|C_{i},V_{i})$ from $\mathcal{D}_T$ and $\mathcal{D}_I$ .
+
+# 3.2 Visual Commonsense Retrieval
+
+As shown in Figure 2, we design a static approach to retrieve related visual commonsense for each conversation context.
+
+Concepts Expansion Since an engaging conversation requires dialogue agents to be able to pro-actively introduce new relevant topics, we expand the topic concepts by searching from ConceptNet (Speer et al.), a commonsense knowledge base. Following Ji et al. (2020), we first perform fuzzy matching with lemmatized form of surface texts to extract topic concepts from provided conversation context. After removing stopwords, we keep verbs and nouns as our original topic concepts $T_{o}$ .
+
+We consider the original concepts as the initial nodes, and iteratively search for their directed
+
+
+Figure 2: Retrieval process: Extracting and expanding the context concepts, and mapping them to corresponding images.
+
+neighbours in ConceptNet for $H$ hops ( $H$ iterations). During each hop, we preserve top $N$ of the neighbouring concept nodes by the standard of their incoming degree. Hence, we got the expanded topic concepts $T_{e} = \{t_{1}, t_{2}, \ldots, t_{m}\}$ .
+
+Image Mapping Our attempt is to utilize commonsense knowledge existing in the corresponding visual scenarios of the conversation topics. It is intuitive to consider the connection among chosen concepts rather than mapping them separately into visual space. Since there is no large scale aligned dialogue-image dataset available, we train our concept-image matching model from MSCOCO (Lin et al., 2014), a commonly used image-captioning dataset containing sentence-image pairs. Following Tan and Bansal (2020), we align each token in the caption $s$ to the paired image, and perform token-level matching.
+
+To extract feature representation of text and image, we adopt pretrained language and visual model (here we use $\mathrm{BERT}_{\mathrm{BASE}}$ (Devlin et al., 2018) and ResNeXt (Xie et al., 2017) respectively) to operate the encoding process. We then project the feature vectors of the two modalities into aligning space, and normalize them to norm-1 vectors of the same
+
+
+Figure 3: The overall framework of VICTOR.
+
+dimension $d$ ..
+
+$$
+H _ {s} = f _ {\text {m a p}} (\operatorname {B E R T} (s)) \in \mathbb {R} ^ {L \times d},
+$$
+
+$$
+\begin{array}{l} h _ {v} = f _ {\text {m a p}} (\operatorname {B E N A} (v)) \in \mathbb {R} ^ {d} \\ \end{array} \tag {1}
+$$
+
+where the mapping function $f_{map}(.)$ is a multilayer perceptron followed by normalization function, $L$ is the sentence length. Thus we get the aligned textual and visual representation $H_{s} = \{h_{si}\}$ and $h_v$ .
+
+The relevance score of two modalities will be measured by the inner product of their representations. Finally, hinge loss is adopted to optimize the matching model:
+
+$$
+\operatorname {s c o r e} \left(w _ {i}, v\right) = h _ {s i} ^ {\top} h _ {v},
+$$
+
+$$
+\mathcal {L} _ {\text {h i n g e}} (s, v, v ^ {-}) = \sum_ {i = 1} ^ {L} \max \{0, \tag {2}
+$$
+
+$$
+\alpha - \mathrm {s c o r e} (w _ {i}, v) + \mathrm {s c o r e} (w _ {i}, v ^ {-}) \}
+$$
+
+where $v^{-}$ is the randomly selected negative image sample, $\alpha$ is the margin between the similarities of a positive and a negative pair.
+
+After training the token-image matching model, it takes the expanded topic concepts $T_{e}$ to retrieve their matched images. We keep the top $K$ images for each concept word regarding their relevance scores. Thus we get the corresponding visual scenes $V = \{v_{1}, v_{2}, \ldots, v_{m}\}$ , which contains the desired commonsense knowledge.
+
+# 3.3 Multimodal Information Fusion
+
+A commonly-used captioning model3 pretrained on MSCOCO dataset is adopted to caption the former retrieved image for each concept. The assumption is that caption-styled visual information is easier for the model to exploit than roughly extracted visual features. Then we concatenate these captions using token [cap]. Thus we get the corresponding visual commonsense $V_{c} = \{u_{1},\dots ,u_{z}\}$ , where $z$ is its total length. After that, we utilize transformer block(TB) (Vaswani et al., 2017) to obtain the representation of the visual commonsense. Formally, the representation of each $V_{c}$ is calculated by:
+
+$$
+e _ {i} = w _ {i} W _ {e m b} + P E (i),
+$$
+
+$$
+I _ {n} ^ {v} = \left[ e _ {1}, \dots , e _ {z} \right], \tag {3}
+$$
+
+$$
+H ^ {v} = \operatorname {T B} (I _ {n} ^ {v}, I _ {n} ^ {v}, I _ {n} ^ {v}),
+$$
+
+where $W_{emb} \in \mathbb{R}^{d_{voc} \times d_h}$ is the word embedding matrix from the generator, $d_{voc}$ is the size of the vocabulary. $\mathrm{PE}(.)$ is the position embedding to make use of the sentence order.
+
+Afterward, we apply the fusion module to incorporate the context information and visual knowledge, so as to determine the external information desired by current context. Formally, at each decoding step $t$ , the response generator will produce the hidden state $\widetilde{h_t^c}$ which encodes the current context (details will be described in the next section).
+
+We leverage the hidden state as a context query, and use multi-head attention layer to capture the correlated visual information $h_t^{vc}$ from $H^v$ :
+
+$$
+h _ {t} ^ {v c} = \mathrm {M u l t i H e a d} (\widetilde {h} _ {t} ^ {c}, H ^ {v}, H ^ {v}) \qquad (4)
+$$
+
+At the $t$ -th decoding step, based on the extracted commonsense information, the decoding distribution over the vocabulary decided by visual knowledge can be produced by:
+
+$$
+P _ {V} (s _ {t} | s _ {< t}, V) = \mathrm {s o f t m a x} (\mathrm {L i n e a r} (h _ {t} ^ {v c})) \quad (5)
+$$
+
+# 3.4 Self-adaptive Response Generator
+
+The generation network is based on GPT-2 (Radford et al., 2019), a pretrained multi-layer transformer decoder which learns the language granularity from large amounts of open Web text data.
+
+As shown in Figure 3, given a dialogue context $C$ , the decoding process of each step $t$ is as follows: By using GPT-2 model, we first obtain the hidden state $h_t$ of the current context. To encourage the generated response to use topic knowledge, we explicitly consider the extracted concepts here. As we get the expanded concepts set by searching for neighbours on external knowledge bases earlier, we can obtain the related concepts pairs $T_{pr} = \{[t_1^{hd}, t_1^{tl}], [t_2^{hd}, t_2^{tl}], \ldots, [t_k^{hd}, t_k^{tl}]\}$ , where $t_i^{tl}$ is the tail concept found by neighbouring search from head concept $t_i^{hd}$ . Inspired by Nie et al. (2019), we first embed the two concepts of each pair and thereafter concatenate them to obtain the related-concepts embedding. Then we use $h_t$ to query from embedded pairs by applying single-layer multi-head attention layer, getting topic-aware $\widetilde{h}_t^c$ :
+
+$$
+h _ {t} ^ {c} = \mathrm {G P T} (H _ {\leq t} ^ {c}),
+$$
+
+$$
+E ^ {T _ {p r}} = \operatorname {L i n e a r} \left(\operatorname {C o n c a t} \left(\left\{[ t _ {i} ^ {h d}, t _ {i} ^ {t l} ] \right\} W _ {e m b}\right)\right), \tag {6}
+$$
+
+$$
+\widetilde {h _ {t} ^ {c}} = \mathrm {M u l t i H e a d} (h _ {t} ^ {c}, E ^ {T _ {p r}}, E ^ {T _ {p r}})
+$$
+
+the probability distribution of the $t$ -th token decided by textual knowledge will then be computed as follows:
+
+$$
+P _ {L M} (s _ {t} | s _ {< t}, T _ {e}) = \mathrm {s o f t m a x} (\mathrm {L i n e a r} (\widetilde {h _ {t} ^ {c}})) \quad (7)
+$$
+
+Since different conversation turns may require various information, it is crucial to balance the textual information from context, which constrains the direction of this conversation, and previously obtained visual knowledge, which indicates related commonsense from real world grounding. Thus we utilize a weighted average score $\beta$ to decide
+
+the different levels of contribution of these two knowledge sources, for generating ideal responses. Instead of fixing a manual hyperparameter to adjust the balance, we adopt self-adaptive weight (See et al., 2017) based on the current hidden states of the context:
+
+$$
+\beta_ {t} = \sigma (\operatorname {L i n e a r} \left(\widetilde {h _ {t} ^ {c}}\right)), \tag {8}
+$$
+
+then we can obtain the following combined decoding distribution:
+
+$$
+\begin{array}{l} P \left(s _ {t} \mid s _ {< t}\right) = \beta_ {t} P _ {L M} \left(s _ {t} \mid s _ {< t}, T _ {e}\right) \tag {9} \\ + \left(1 - \beta_ {t}\right) P _ {V} \left(s _ {t} \mid s _ {< t}, V\right) \\ \end{array}
+$$
+
+Finally, following the standard practice of dialogue response generation, we optimize our proposed model with the cross entropy loss:
+
+$$
+\mathcal {L} _ {c e} = - \sum_ {i = 1} ^ {L} \log \left(P \left(s _ {t} \mid s _ {< t}\right)\right) \tag {10}
+$$
+
+# 4 Experimental Settings
+
+# 4.1 Datasets
+
+We conduct our experiments on two open-domain dialogue corpus, OTters (Sevegnani et al., 2021) and DailyDialog (Li et al., 2017). OTters is a dialogue dataset of human one-turn topic transitions. Unlike other common dialogue datasets which contain a large number of short, generic responses, each utterance in OTters has a specific topic and is therefore more informative. OTters is slightly different from other dialogue corpus in form: given one turn conversation $[u_a, u_b]$ , where each utterance has a different topic, the goal is to generate a transition response $u_t$ to serve as a smooth link between them. This dataset is exactly suitable for testing our model, since the response generation requires associative commonsense knowledge. During the experiments, we concatenate $[u_a, u_b]$ using separator tokens as the model inputs, and treat $u_t$ as the outputs. To test the generalization ability of our model and make a fair comparison with other baselines, we also evaluate VICTOR on the commonly used DailyDialog dataset. The examples of both datasets are shown in Figure 1.
+
+For image retrieval, we train our mapping model on MSCOCO dataset. We randomly sample 100K images from the Open Images dataset (Kuznetsova et al., 2020) as our candidate image set $\mathcal{D}_I$ , then we retrieve images from it following section 3.2.
+
+# 4.2 Comparison Methods
+
+To demonstrate the effectiveness of our proposed model, we compare it with the following representative methods: (1) Seq2seq: a classic encoder-decoder framework (Sutskever et al., 2014) with global attention(Luong et al., 2015). (2) GPT-2: a pretrained GPT-2 model (Radford et al., 2019) fined-tuned on the task datasets. (3) GRF: a GPT-based generation model (Ji et al., 2020), which performs multi-hop reasoning on knowledge graphs using graph convolution network (GCN) (Kipf and Welling, 2016). (4) GVT: a variational transformer(Lin et al., 2020) that uses CVAE to model the discourse-level diversity with a global latent variable. (5) AdaLabel: an adaptive label smoothing approach(Wang et al., 2021) that diversifies dialogue generation by adaptively estimating the soft target label distribution.
+
+Among these comparison methods, Seq2seq is a standard generation model, GPT-2 is a commonly used pretrained language model, GVT and AdaLabel are both transformer-based models for diverse dialogue generation, GRF and AdaLabel are state-of-the-art approaches for the datasets we use.
+
+# 4.3 Evaluation Metrics
+
+Automatic Evaluation We hypothesize that our proposed approach, which leverages external topic-aware visual commonsense, can increase the diversity of the generated responses, while maintaining relevance to their corresponding contexts. For fluency, we use Perplexity (Serban et al., 2015) to measure the confidence of the generated responses. A relatively low perplexity indicates better fluency. For relevance, we adopt widely used BLEU (Papineni et al., 2002) (here we use BLEU-1 and BLEU-4) and Rouge-L (Lin, 2004) to measure the n-gram overlaps between ground truth references and the generated responses. To measure the diversity, we report the percentage of distinct uni-grams and bigrams (Dist-1 and Dist-2 respectively) (Li et al., 2016) in all generated responses.
+
+Human Evaluation Considering that the automatic metrics are not always accurate to evaluate the responses (Liu et al., 2016), we further conduct manual evaluation following previous works Wu et al. (2021); Zou et al. (2021). Specifically, we randomly sample 200 testing pairs from each test set. Given a dialogue context, three annotators are asked to conduct pair-wise comparison between the responses generated by VICTOR and three strong
+
+basielines, including state-of-the-art methods (1200 comparisons with three baselines on two datasets in total). For each comparison, three annotators are required to compare the responses from the following perspectives: fluency, context coherence, informativeness. The annotators need to judge which response is better independently. If the two responses are both proper or inappropriate, the comparison of this pair is treated as "draw". Ultimately, we average the results of three annotators and calculate their Fleiss' kappa scores (Fleiss, 1971).
+
+# 4.4 Implementation Details
+
+During the topic-expansion, we set the number of hops $H = 2$ and preserve top $N = 5$ concepts per hop. For the retrieval model, the concatenation of the last 4 layers of BERT output and image features from ResNeXt-101-32x8d are used as embedding of each modality. We set the hidden size $d$ of the aligning space to 256 and the hinge loss margin $\alpha$ to 0.5. We test the performance of retrieving different number of top-scored images for each cocept, and set $K = 1$ for its best result (see section 5.4). The pretrained captioning model is combined with a ResNet-101 encoder and a LSTM decoder.
+
+For the generator, we base our model on gpt2-small $^{4}$ (Transformer with 12 layers, 768 hidden size, 12 heads). The multi-head transformer block for encoding visual commonsense has the structure of 6 layers, 768 hidden size and 6 heads. To train the model, we use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 1e-6. At the inference stage, the maximum decoding length of the response is set to 40, and we adopt beam search decoding with a beam size of 3. All our experiments are implemented with PyTorch, and the entire model is trained on RTX3090 GPUs.
+
+# 5 Results and Analysis
+
+# 5.1 Automatic Evaluations
+
+As shown in Table 1, our proposed model VICTOR outperforms baselines on most automatic metrics in these two datasets. In the aspect of relevance, it beats baselines in all related metrics, indicating responses generated by VICTOR can be coherent with the help of context-related knowledge. Meanwhile, enhanced by the extracted visual commonsense, VICTOR also achieves the best performance in Dist-1/2, showing it can generate diverse and informative responses. Besides, we can see
+
+
Dataset
OTTers
DailyDialog
Model
PPL
B-1
B-4
RG
D-1
D-2
PPL
B-1
B-4
RG
D-1
D-2
Seq2seq
52.85
13.88
1.14
14.1
6.18
15.37
47.24
12.54
2.55
22.16
6.39
25.95
GPT-2
16.37
14.36
2.30
18.72
18.03
40.48
19.65
16.31
2.53
21.93
7.41
23.85
GRF
17.8
17.52
2.95
18.81
21.78
47.86
19.88
16.08
3.31
22.19
11.19
35.72
GVT
37.74
14.83
0.91
13.13
18.49
48.11
34.19
22.27
9.64
22.92
6.57
36.11
AdaLabel
33.16
17.27
1.71
18.21
16.79
39.88
30.92
24.12
8.46
27.65
9.95
39.12
VICTOR
16.29
21.49
4.82
20.09
24.41
56.64
22.21
29.15
14.89
30.21
14.14
46.47
+
+Table 1: Automatic evaluation results(%) The metrics Perplexity, BLEU-1/4,Rouge-L,Dist-1/2 are abbreviated as PPL,B-1/4, RG,D-1/2 respectively. The best results are highlighted in bold.
+
+
Opponent
Win
Loss
Draw
Kappa
(a)
VIC vs. GRF
35.5%
14.2%
50.3%
0.54
VIC vs. GVT
54.5%
7.8%
37.7%
0.43
VIC vs. AdaLabel
61.3%
11.5%
27.2%
0.54
(b)
VIC vs. GRF
39.5%
23.5%
37.0%
0.59
VIC vs. GVT
57.8%
15.5%
26.7%
0.56
VIC vs. AdaLabel
42.0%
12.8%
45.2%
0.66
+
+Table 2: Human evaluation results on (a) Otters and (b) DailyDialog datasets. VICTOR is abbreviated as VIC.
+
+that although AdaLabel can generate relatively diverse responses, the lack of context related external knowledge prevents it from keeping high relevance to the context. This problem can be particularly acute with OTters dataset, since most dialogues in it are topic specific. The same problem also affects GVT model, without the assistance of commonsense knowledge, it performs rather poorly on OTters dataset. Although the GRF baseline integrates information from knowledge bases, its performance is worse than our model on both relevance and diversity. This indicates the superiority of considering commonsense information in visual scenes rather than just pure textual knowledge.
+
+# 5.2 Human Evaluations
+
+The human evaluation results are shown in Table 2. Not surprisingly, VICTOR consistently outperforms all the strong baselines and achieves significant improvements on both datasets. We also analyze the bad cases and find that the baselines still suffer from the general or irrelevant responses. The evaluation result indicates that VICTOR can generate more coherent and informative responses
+
+that are attractive to annotators. This validates the benefits of the context-relevant visual commonsense and the fusion mechanism. We also employ Fleiss' kappa scores to measure the reliability between different annotators, and results show that annotators reach a moderate agreement.
+
+# 5.3 Ablation Study
+
+To investigate the effectiveness of each part of VICTOR, we conduct ablation studies on two datasets by removing or replacing particular modules from the original model. Here, we have three variants: (1) w/o. VC: removing visual commonsense extraction and multimodal fusion block. (2) w/o. AW: removing the adaptive weight of the response generator and replacing it with a fixed weight of 0.5. (3) w. RF: replacing the caption-styled visual commonsense with ResNeXt features of the same image, which can be obtained by using the pretrained image encoder from our retrieval model.
+
+The ablation results are shown in Table 3. We observe that without fusing visual commonsense, the performance of variant-1 drops sharply with respect to relevance and diversity metrics. The result verifies the effectiveness of integrating context-relevant visual knowledge into response generation. Besides, although variant-2 maintains a relatively high diversity, the values of relevance metrics drop largely due to the fixed balancing weight of the generator. This indicates that adaptively deciding the contribution of language and visual knowledge plays an important role in the generation process for different conversation turns. We also witness a small drop in performance of the variant-3, which uses ResNeXt features instead of the image captions as the visual commonsense source. As shown in previous researches (Jin et al., 2022; Feng et al., 2021), this phenomenon can be explained by the fact that captions of everyday scenarios, which dampen the reporting bias of the general text cor
+
+
Dataset
OTTers
DailyDialog
Model
PPL
B-1
B-4
RG
D-1
D-2
PPL
B-1
B-4
RG
D-1
D-2
VICTOR
16.29
21.49
4.82
20.09
24.41
56.64
22.21
29.15
14.89
30.21
14.14
46.47
w/o. VC
16.52
15.39
2.59
18.91
21.02
43.16
28.89
19.01
6.87
22.04
7.19
28.53
w/o. AW
23.18
15.92
3.3
19.16
24.2
50.75
26.36
23.23
10.47
29.88
14.01
43.61
w. RF
19.04
18.53
4.09
19.72
24.27
53.81
22.35
28.86
13.05
28.61
11.94
42.2
+
+Table 3: Ablation study results(%) on two datasets.
+
+
K
0
1
2
3
rand_3
B-1
15.39
21.49
19.32
19.02
20.34
D-1
21.02
24.41
23.26
22.60
23.61
+
+Table 4: Influence of the number of retrieved images on OTters dataset(%) $K$ means concatenating captions of top $K$ images as visual commonsense, $K = 0$ is equivalent to not using visual information, rand_3 means randomly choosing from top 3 images.
+
+pus, are better carriers of logical commonsense and contain less noise than roughly extracted image features.
+
+# 5.4 Number of images
+
+We further study the effect of visual commonsense by varying the number of retrieved images and conducting experiments on OTters dataset. As shown in Table 4, all the results obtained with the help of visual commonsense are better than those without, while choosing top 1 image helps achieve best performance. This can be explained that each selected image refers to key information of all core concepts, resulting in partial semantic overlap, thus additional selection of more images may introduce more unnecessary noise, which is not helpful to the generation.
+
+# 5.5 Case Study
+
+To further investigate the quality of responses generated by VICTOR, and compare the results with other baselines intuitively, we show two dialogue cases from the two datasets in Figure 4. As we can see, the retrieval process can obtain proper expanded concepts from knowledge graphs and retrieve related images. The corresponding captions with logical commonsense will then bring auxiliary visual information into the generation process. In these two cases, although all four models have generated fluent and informative responses, compared with the other three strong baselines, responses generated by VICTOR are clearly more consistent with the context and more engaging. Again, the
+
+results prove the effectiveness of exploring context-relevant visual commonsense for dialogue generation.
+
+# 6 Conclusion
+
+In this work, we propose a novel context-relevant visual commonsense enhanced approach for open domain dialogue generation. The model effectively extracts relevant visual commonsense and integrates the multimodal knowledge, and adaptively measures the contribution of different modalities, so as to produce better responses. Extensive experiments on two pure language dialogue datasets show that the proposed VICTOR model significantly outperforms previous approaches, indicating that VICTOR can generate more diverse and informative responses, while maintaining coherence with the context. For future work, we will continue to investigate the advantages of introducing external visual knowledge into the dialogue system. We notice that the current use of visual modality in this field may be too rough. Further study should focus on how to extract more specific and more necessary information from images or videos for enhancing response generation. Besides, enabling dialogue agents to handle multimodal inputs and outputs is also a relevant hot research problem.
+
+# Limitations
+
+We discussed the limitations of this work. The proposed method trains the visual retriever and the generation model separately, which may affect the overall optimization of the system to a certain extent. Besides, limited by the performance of the retrieval modules, the extracted visual commonsense is not always an effective extension of the context content. This indicates that there is still room for improvement in the acquisition and utilization of high-quality visual knowledge for dialogue generation.
+
+
User A: I enjoy going on the sand.
+User B: I love books.
User A: what sort of things do you grow in your garden?
Original concepts: enjoy, sand, book
Original concepts: garden, grow
Extended concepts: enjoy, sand, book, sun, beach, swim, read ...
Captions:
+a group of people sitting on top of a beach.
+a wooden boat floating on top of water.
+a picture of a man reading a book.
+...
Captions:
+a bunch of trees in a park.
+a building with a plant growing out of it.
+a tree with flowers in it.
+...
Responses:
+Ground truth: I love reading books while sunbathing in the sand.
+GRF: I love going on the sand with my family.
+GVT: I really enjoy spending time with my family. I love to play tennis.
+AdaLabel: I have been into my parents.
+VICTOR: I enjoy going to the beach. I love to read when it is sunny outside.
Responses:
+Ground truth: I grow a variety of things, so that something is in bloom all year round.
+GRF: Well, I grow lots of plants.
+GVT: Quite a number. I don't think that we need anything else.
+AdaLabel: I grow up something in bloom all day. I like that.
+VICTOR: I grow all kinds of flower and there are many trees next to my house.
+
+Figure 4: Two cases from OTters and DailyDialog datasets respectively.
+
+# Ethics Statement
+
+We honor and support the ACL code of Ethics. Dialogue response generation aims to build a dialogue system which better interacts with users. The generation of the responses does not involve any bias towards the participants. All datasets used in this work are from previously published works, and in our view, do not have any attached privacy or ethical issues.
+
+# Acknowledgements
+
+This work was supported in part by the National Key R&D Program of China under Grant 2018AAA0102000, and in part by the National Natural Science Foundation of China: U21B2038, 61931008, 61732007, and CAAI-Huawei MindSpore Open Fund, Youth Innovation Promotion Association of CAS under Grant 2020108, CCF-Baidu Open Fund.
+
+# References
+
+Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, et al. 2020. Experience grounds language. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8718-8735.
+
+Ozan Caglayan, Pranava Swaroop Madhyastha, Lucia Specia, and Loic Barault. 2019. Probing the need for visual context in multimodal machine translation. In Proceedings of the 2019 Conference of the North
+
+American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4159-4170.
+
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
+
+Qingkai Fang and Yang Feng. 2022. Neural machine translation with phrase-level universal visual representations. arXiv preprint arXiv:2203.10299.
+
+Steven Y Feng, Kevin Lu, Zhuofu Tao, Malihe Alikhani, Teruko Mitamura, Eduard Hovy, and Varun Gangal. 2021. Retrieve, caption, generate: Visual grounding for enhancing commonsense in text generation models. arXiv preprint arXiv:2109.03892.
+
+Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin, 76(5):378.
+
+Stevan Harnad. 1990. The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1-3):335-346.
+
+Minlie Huang, Xiaoyan Zhu, and Jianfeng Gao. 2020. Challenges in building intelligent open-domain dialog systems. ACM Transactions on Information Systems (TOIS), 38(3):1-32.
+
+Haozhe Ji, Pei Ke, Shaohan Huang, Furu Wei, Xiaoyan Zhu, and Minlie Huang. 2020. Language generation with multi-hop reasoning on commonsense knowledge graph. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 725-736.
+
+Woojeong Jin, Dong-Ho Lee, Chenguang Zhu, Jay Pujara, and Xiang Ren. 2022. Leveraging visual knowledge in language tasks: An empirical study on intermediate pre-training for cross-modal knowledge
+
+transfer. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2750-2762.
+Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
+Thomas N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
+Noriyuki Kojima, Hadar Averbuch-Elor, Alexander M Rush, and Yoav Artzi. 2020. What is learned in visually grounded neural syntax acquisition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2615-2635.
+Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, et al. 2020. The open images dataset v4. International Journal of Computer Vision, 128(7):1956-1981.
+Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and William B Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110-119.
+Liang Li, Xingyu Gao, Jincan Deng, Yunbin Tu, Zheng-Jun Zha, and Qingming Huang. 2022. Long short-term relation transformer with global gating for video captioning. IEEE Transactions on Image Processing, 31:2726-2738.
+Liang Li, Shuhui Wang, Shuqiang Jiang, and Qingming Huang. 2018. Attentive recurrent neural network for weak-supervised multi-label image classification. In Proceedings of the 26th ACM international conference on Multimedia, pages 1092-1100.
+Linxiao Li, Can Xu, Wei Wu, Yufan Zhao, Xueliang Zhao, and Chongyang Tao. 2020. Zero-resource knowledge-grounded dialogue generation. Advances in Neural Information Processing Systems, 33:8475-8485.
+Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. Dailydialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 986-995.
+Zujie Liang, Huang Hu, Can Xu, Chongyang Tao, Xi-ubo Geng, Yining Chen, Fan Liang, and Daxin Jiang. 2021. Maria: A visual experience powered conversational agent. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5596-5611.
+
+Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.
+Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer.
+Zhaojiang Lin, Genta Indra Winata, Peng Xu, Zihan Liu, and Pascale Fung. 2020. Variational transformers for diverse response generation. arXiv preprint arXiv:2003.12738.
+Chia-Wei Liu, Ryan Lowe, Iulian Vlad Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122-2132.
+Xuejing Liu, Liang Li, Shuhui Wang, Zheng-Jun Zha, Zechao Li, Qi Tian, and Qingming Huang. 2022. Entity-enhanced adaptive reconstruction network for weakly supervised referring expression grounding. IEEE Transactions on Pattern Analysis and Machine Intelligence.
+Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421.
+Bodhisattwa Prasad Majumder, Taylor Berg-Kirkpatrick, Julian McAuley, and Harsh Jhamtani. 2021. Unsupervised enrichment of person-grounded dialog with background stories. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 585-592.
+Bodhisattwa Prasad Majumder, Harsh Jhamtani, Taylor Berg-Kirkpatrick, and Julian McAuley. 2020. Like hiking? you probably enjoy nature: Personagrounded dialog with commonsense expansions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9194-9206.
+Pierre-Emmanuel Mazare, Samuel Humeau, Martin Raison, and Antoine Bordes. 2018. Training millions of personalized dialogue agents. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2775-2779.
+Nasrin Mostafazadeh, Chris Brockett, William B Dolan, Michel Galley, Jianfeng Gao, Georgios Spithourakis, and Lucy Vanderwende. 2017. Image-grounded conversations: Multimodal context for natural question and response generation. In Proceedings of
+
+the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 462-472.
+Liqiang Nie, Wenjie Wang, Richang Hong, Meng Wang, and Qi Tian. 2019. Multimodal dialog system: Generating responses via adaptive decoders. In Proceedings of the 27th ACM International Conference on Multimedia, pages 1098-1106.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
+Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083.
+Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2015. Hierarchical neural network generative models for movie dialogues. arXiv preprint arXiv:1507.04808, 7(8):434-441.
+Karin Sevegnani, David M Howcroft, Ioannis Konstas, and Verena Rieser. 2021. Otters: One-turn topic transitions for open-domain dialogue. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2492-2504.
+Lei Shen and Yang Feng. 2020. Cdl: Curriculum dual learning for emotion-controllable response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 556-566.
+Lei Shen, Haolan Zhan, Xin Shen, Yonghao Song, and Xiaofang Zhao. 2021. Text is not enough: Integrating visual impressions into open-domain dialogue generation. In Proceedings of the 29th ACM International Conference on Multimedia, pages 4287-4296.
+Haoyue Shi, Jiayuan Mao, Kevin Gimpel, and Karen Livescu. 2019. Visually grounded neural syntax acquisition. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1842-1861.
+Kurt Shuster, Samuel Humeau, Antoine Bordes, and Jason Weston. 2020. Image-chat: Engaging grounded conversations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2414-2429.
+
+Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. Advances in neural information processing systems, 28.
+Zhenqiao Song, Xiaqing Zheng, Lu Liu, Mu Xu, and Xuan-Jing Huang. 2019. Generating responses with a specific emotion in dialog. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3685-3695.
+Robyn Speer, Catherine Havasi, et al. Representing general relational knowledge in conceptnet 5.
+Hui Su, Xiaoyu Shen, Sanqiang Zhao, Zhou Xiao, Pengwei Hu, Randy Zhong, Cheng Niu, and Jie Zhou. 2020. Diversifying dialogue generation with nonconversational text. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7087-7097.
+Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. Advances in neural information processing systems, 27.
+Hao Tan and Mohit Bansal. 2020. Vokenization: Improving language understanding with contextualized, visual-grounded supervision. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2066-2080.
+Yunbin Tu, Liang Li, Li Su, Shengxiang Gao, Chenggang Yan, Zheng-Jun Zha, Zhengtao Yu, and Qingming Huang. 2022. I2transformer: Intra-and interrelation embedding transformer for tv show captioning. IEEE Transactions on Image Processing.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
+Yida Wang, Yinhe Zheng, Yong Jiang, and Minlie Huang. 2021. Diversifying dialog generation via adaptive label smoothing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3507-3520.
+Sixing Wu, Ying Li, Dawei Zhang, Yang Zhou, and Zhonghai Wu. 2021. Topicka: Generating commonsense knowledge-aware dialogue responses towards the recommended topic fact. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, pages 3766-3772.
+Saining Xie, Ross Girshick, Piotr Dolkar, Zhuowen Tu, and Kaiming He. 2017. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1492-1500.
+
+Ze Yang, Wei Wu, Huang Hu, Can Xu, Wei Wang, and Zhoujun Li. 2021. Open domain dialogue generation with latent images. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14239-14247.
+Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204-2213.
+Yicheng Zou, Zhihua Liu, Xingwu Hu, and Qi Zhang. 2021. Thinking clearly, talking fast: Concept-guided non-autoregressive generation for open-domain dialogue systems. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2215-2226.
\ No newline at end of file
diff --git a/thinkbeyondwordsexploringcontextrelevantvisualcommonsensefordiversedialoguegeneration/images.zip b/thinkbeyondwordsexploringcontextrelevantvisualcommonsensefordiversedialoguegeneration/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..5608d71b05a090e3b73e67fa70793211e9910301
--- /dev/null
+++ b/thinkbeyondwordsexploringcontextrelevantvisualcommonsensefordiversedialoguegeneration/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2aaffeb77a3a718a0c8cbace662d89331808505ea500f765159e1ef8fa3531ed
+size 572696
diff --git a/thinkbeyondwordsexploringcontextrelevantvisualcommonsensefordiversedialoguegeneration/layout.json b/thinkbeyondwordsexploringcontextrelevantvisualcommonsensefordiversedialoguegeneration/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..358fad057f347d2fb5058261fc9222486cb74391
--- /dev/null
+++ b/thinkbeyondwordsexploringcontextrelevantvisualcommonsensefordiversedialoguegeneration/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:81f146b4cb9f00a8f0e79a1ade985219669ead213f2db488047130774b070ff3
+size 385317
diff --git a/thinkingaboutgpt3incontextlearningforbiomedicaliethinkagain/54f93e7e-e036-47d0-acde-1e95767553fd_content_list.json b/thinkingaboutgpt3incontextlearningforbiomedicaliethinkagain/54f93e7e-e036-47d0-acde-1e95767553fd_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8a1d93066fd90605d1110f929120d35da08b11e4
--- /dev/null
+++ b/thinkingaboutgpt3incontextlearningforbiomedicaliethinkagain/54f93e7e-e036-47d0-acde-1e95767553fd_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5455e75d5200eaabd9ad91c469d74941efb31181dea9eac39e679088bf7279b3
+size 114081
diff --git a/thinkingaboutgpt3incontextlearningforbiomedicaliethinkagain/54f93e7e-e036-47d0-acde-1e95767553fd_model.json b/thinkingaboutgpt3incontextlearningforbiomedicaliethinkagain/54f93e7e-e036-47d0-acde-1e95767553fd_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8e3621c30b961d356f7fd3d3fb6b3f08a37cbb92
--- /dev/null
+++ b/thinkingaboutgpt3incontextlearningforbiomedicaliethinkagain/54f93e7e-e036-47d0-acde-1e95767553fd_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a53bcdd935b3fae2e55dc77be134bcc4b0bac725d6a6fcc1cdc14ca77beae78
+size 132827
diff --git a/thinkingaboutgpt3incontextlearningforbiomedicaliethinkagain/54f93e7e-e036-47d0-acde-1e95767553fd_origin.pdf b/thinkingaboutgpt3incontextlearningforbiomedicaliethinkagain/54f93e7e-e036-47d0-acde-1e95767553fd_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..62ed04420703fbeaac4b3bf63f1b023479829ce9
--- /dev/null
+++ b/thinkingaboutgpt3incontextlearningforbiomedicaliethinkagain/54f93e7e-e036-47d0-acde-1e95767553fd_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67285b9c51e10adc2ea4434ce150495a041a4f51aac6e2af4e25da648d9d1b2b
+size 1276267
diff --git a/thinkingaboutgpt3incontextlearningforbiomedicaliethinkagain/full.md b/thinkingaboutgpt3incontextlearningforbiomedicaliethinkagain/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..d5ec16a915faf69d72cd8f522c6134ced4c52dd5
--- /dev/null
+++ b/thinkingaboutgpt3incontextlearningforbiomedicaliethinkagain/full.md
@@ -0,0 +1,381 @@
+# Thinking about GPT-3 In-Context Learning for Biomedical IE? Think Again
+
+Bernal Jiménez Gutiérrez1, Nikolas McNeal1, Clay Washington1, You Chen2, Lang Li1, Huan Sun1, Yu Su1
+
+1The Ohio State University, 2Vanderbilt University
+
+{jimenezgutierrez.1,mcneal.121,Washington.534,sun.397,su.809}@osu.edu
+
+lang.li@osumc.edu, you.chen@vumc.org
+
+# Abstract
+
+Large pre-trained language models (PLMs) such as GPT-3 have shown strong in-context learning capabilities, which are highly appealing for domains such as biomedicine that feature high and diverse demands of language technologies but also high data annotation costs. In this paper, we present the first systematic and comprehensive study to compare the few-shot performance of GPT-3 in-context learning with fine-tuning smaller (i.e., BERT-sized) PLMs on two representative biomedical information extraction (IE) tasks: named entity recognition and relation extraction. We follow the true few-shot setting (Perez et al., 2021) to avoid overestimating models' few-shot performance by model selection over a large validation set. We also optimize GPT-3's performance with known techniques such as contextual calibration and dynamic in-context example retrieval. However, our results show that GPT-3 still significantly underperforms compared to simply fine-tuning a smaller PLM. In addition, GPT-3 in-context learning also yields smaller gains in accuracy when more training data becomes available. More in-depth analyses further reveal issues of in-context learning that may be detrimental to IE tasks in general. Given the high cost of experimenting with GPT-3, we hope our study provides helpful guidance for biomedical researchers and practitioners towards more practical solutions such as fine-tuning small PLMs before better in-context learning is available for biomedical IE. $^{1}$
+
+# 1 Introduction
+
+Given the overwhelming pace of biomedical research and clinical text production, transforming large amounts of biomedical text into structured information has become increasingly important for researchers and practitioners alike. In recent years,
+
+
+Figure 1: Main findings: (Left) fine-tuning BERT-sized PLMs substantially outperforms GPT-3 in-context learning under true few-shot setting. (Right) Feature comparison for consideration of practical applications.
+
+
GPT-3 In-Context
PLM Fine-Tuning
No Training
✔
✔
No External Costs
✘
✔
More Data = Better Performance
✘
✔
+
+pre-trained language models (PLMs), both general-domain and biomedicine-specific ones, have remarkably boosted performance on biomedical information extraction (IE) tasks (Lee et al., 2019; Peng et al., 2019; Gu et al., 2021; Alsentzer et al., 2019; Beltagy et al., 2019).
+
+The latest round of PLMs such as GPT-3 (Brown et al., 2020), Megatron-Turing NLG (Smith et al., 2022), the Switch Transformer (Fedus et al., 2022), among others, feature hundreds of billions of parameters and have achieved impressive performance in many NLP tasks using in-context learning—a new few-shot learning paradigm first introduced by Brown et al. (2020). In-context learning allows PLMs to use their natural language generation capabilities to solve any task almost like how humans would—by completing a piece of text or prompt. This paradigm allows large PLMs to solve various NLP problems without updating their parameters, potentially resulting in massive savings in both data annotation and engineering overhead compared with standard model training. Even more impressively, GPT-3 in-context learning yields competitive performance against fully supervised baselines in many NLP tasks by adding only a handful of demonstrative examples in the prompt (Brown et al., 2020).
+
+The variety of potential biomedical information extraction applications, the high cost of biomedical annotations, and the complexity of model training make in-context learning particularly appealing for
+
+biomedical applications. To investigate its practicality, we present the first systematic and comprehensive comparative study of GPT-3 in-context learning and BERT-sized (Devlin et al., 2019) PLM fine-tuning in the few-shot setting on named entity recognition (NER) and relation extraction (RE), two representative and highly valued biomedical IE tasks. For consistency and comprehensiveness, we use all the biomedical NER and RE tasks compiled in the BLURB benchmark (Gu et al., 2021). We operate under the true few-shot setting introduced by Perez et al. (2021) to avoid overestimating models' few-shot performance via model selection over a large validation set.
+
+We optimize GPT-3's in-context learning performance for biomedical information extraction by leveraging multiple recent techniques. Firstly, inspired by studies that show the importance of optimal prompt selection (Perez et al., 2021; Schick and Schütze, 2021; Gao et al., 2021), we formulate a prompt structure which allows us to construct prompt designs and select optimal ones systematically. Secondly, similar to Liu et al. (2022), we introduce a k-nearest neighbor (kNN) module for in-context example retrieval. Finally, for NER, we also use logit biases to ensure that the generated tokens are from the input sentence; for RE, we use contextual calibration (Zhao et al., 2021) to reduce contextual bias.
+
+Even when equipped with these latest techniques, which indeed improve GPT-3's performance as shown in ablation studies, we find that fine-tuning BERT-sized PLMs substantially outperforms GPT-3 in-context learning across all biomedical information extraction datasets when using the same small training set (e.g., 100 labeled examples). We also find that fine-tuning small PLMs yields a more reliable return in terms of data annotation: as training data size increases, fine-tuning performance steadily improves while in-context learning performance lags behind. In-depth analyses further reveal that in-context learning struggles with the null class, e.g., sentences that contain no named entity (for NER) or entity pairs that hold none of the target relations (for RE), which is likely detrimental to IE tasks in general. In summary, our findings suggest that fine-tuning PLMs is still a more cost-effective option than GPT-3 in-context learning for biomedical IE tasks, at least before qualitatively better methods for in-context learning are discovered.
+
+# 2 Approach
+
+In this section, we describe the two paradigms we explored under the true few-shot setting for NER and RE: BERT-sized PLM fine-tuning and GPT-3 in-context learning.
+
+# 2.1 Tasks
+
+We use named entity recognition (NER) and relation extraction (RE) as two representative and highly valued tasks to comprehensively evaluate the potential of GPT-3 in-context learning in biomedical IE.
+
+# 2.2 True Few-Shot Setting
+
+Recent work has questioned the performance of few-shot learning in very large PLMs like GPT-3 as well as small PLM fine-tuning, arguing that large validation sets have played a strong biasing role in model and prompt selection (Perez et al., 2021). To avoid overestimating the few-shot learning performance of PLMs, we follow their proposed true few-shot setting. In this setting, all model selection decisions are made systematically on the few-shot training set rather than on a large validation set. For our main experiments, we use cross-validation on 100 training examples to choose the prompt structure, the number of few-shot examples per prompt and the fine-tuning hyperparameters.
+
+# 2.3 BERT-Sized PLM Fine-Tuning
+
+We follow the standard PLM fine-tuning process for NER and RE used in Gu et al. (2021). We use 5-fold cross-validation on the 100 example training set mentioned above to select the best performing values for learning rate, batch size, warm-up ratio, weight decay, and stopping checkpoint for all of our fine-tuning experiments. The hyperparameter values we select from are specified in Appendix C.
+
+Named Entity Recognition. For NER, we use the BIO tag token classification formulation and fine-tune a separate model for each entity type.
+
+Relation Extraction. For RE, we mask the object and subject entities in the input sentence and use the [CLS] token to classify the relation between them.
+
+# 2.4 GPT-3 In-Context Learning
+
+In this section, we first describe how we reformulate the NER and RE tasks for in-context learning. We then provide thorough descriptions of our
+
+
+Figure 2: Overall architecture for GPT-3 in-context learning for both NER and RE (left). One-shot learning example prompt for NER (middle) and RE (right). Different colors indicate different prompt design elements: orange for overall task instructions, red for sentence introduction and purple for the retrieval message portion. The current input sentence and the completion by GPT-3 are highlighted.
+
+
+Classify the interaction between drugs based on the provided sentences.
+Sentence: It is recommended not to exceed a single Vardenafil dose when used with ritonavir. How do Vardenafil and ritonavir interact in the previous sentence? Interaction: advice
+Sentence: Therefore, concomitant use of toradol and probenecid is contraindicated. How do toradol and probenecid interact in the previous sentence? Interaction: advice
+
+prompt design and in-context example retrieval approaches as well as other recent techniques we use to improve GPT-3's in-context learning performance for biomedical IE.
+
+# 2.4.1 Task Linearization
+
+As shown in the examples in Figure 2, in order to use in-context learning, we must first reformulate NER and RE as language generation tasks.
+
+For NER, we extract all entity spans from the original sentence and combined them using a separator (entities are only added once), as was done in previous work (Raval et al., 2021). GPT-3 will then be expected to generate a list of entities joined by the chosen separator when conditioned on the current input and its context, as shown in Figure 2 (middle).
+
+For relation extraction, we draw inspiration from Huguet Cabot and Navigli (2021) and transform every example into a prompt as shown in Figure 2 (right). For all our prompt templates shown in Appendix D, we add the subject and object entities, in their verbatim lexical form in the original sentence, to the prompt.
+
+# 2.4.2 Prompt Design
+
+Given the importance of prompt selection in obtaining strong performance from GPT-3 in-context learning (Perez et al., 2021; Schick and Schütze, 2022, 2021; Gao et al., 2021), we provide a systematic and task-agnostic process for constructing GPT-3 prompts. As shown in the examples in Figure 2, we identify three main parts of a prompt: overall task instructions, a sentence introduction and a retrieval message. The overall task instruction provides broad instructions for the task as concisely as possible. The sentence introduction describes the
+
+input text (i.e., scientific article excerpt, tweet, sentence, etc.). Finally, the retrieval message directly precedes the expected completion and is meant to reiterate what is needed for the task. For relation extraction, similar to Schick et al. (2020), we also define a label verbalizer which maps relation categories to plausible natural language phrases to facilitate generation.
+
+For each task, we manually create a set of alternatives for each prompt section and select their best combination. We use leave-one-out cross-validation (LOOCV) to choose the best combination of the prompt alternatives as well as the number of in-context examples included in the prompt. To keep costs reasonable, we compare 8 prompt alternatives for each dataset. A list of all the options for each dataset can be found in Appendix D.
+
+# 2.4.3 Logit Biases
+
+In order to prevent GPT-3 from generating tokens that are not in the original sentence, we use the logit bias option from the OpenAI Completion API. This option allows us to add a fixed value to the final probability of a specified set of tokens, restricting the possible tokens that GPT-3 can generate. Specifically, we add a value of 10 to all tokens present in the original sentence, our chosen separator and the newline token (used to designate the end of the entity list). Additionally, any predicted entities that do not match any span in the original sentence are discarded during post-processing.
+
+# 2.4.4 Contextual Calibration
+
+During preliminary studies, we found that each set of few-shot in-context examples biased GPT-3
+
+
Task
Train
Dev
Test
Eval. Metric
BC5CDR-disease
NER
4,182
4,244
4,424
F1 entity-level
BC5CDR-chem
NER
5,203
5,347
5,385
F1 entity-level
NCBI-disease
NER
5,134
787
960
F1 entity-level
JNLPBA
NER
46,750
4,551
8,662
F1 entity-level
BC2GM
NER
15,197
3,061
6,325
F1 entity-level
DDI
RE
25,296
2,496
5,716
Micro F1
ChemProt
RE
18,035
11,268
15,745
Micro F1
GAD
RE
4,261
535
534
Micro F1
+
+Table 1: Dataset statistics.
+
+towards certain labels regardless of the test input. Previous work (Zhao et al., 2021) proposes to address these biases by calibrating the output using a linear transformation which equalizes all label probabilities generated by GPT-3 when conditioned on a null prompt (a version of the original prompt in which the test input is replaced by a null value such as "N/A"). This linear transformation is then used to update the output probabilities of the true few-shot prompt, thereby removing the context induced biases. We adopt this approach for RE and create the null prompt by replacing the original sentence as well as the subject and object entities in the retrieval message with "N/A".
+
+# 2.4.5 Retrieval Module
+
+Several studies (Liu et al., 2022; Rubin et al., 2022; Shin et al., 2021) suggest that choosing few-shot in-context examples for each test example dynamically instead of using a fixed set of in-context examples yields strong improvements for GPT-3 in-context learning. Following Liu et al. (2022), we use a k-nearest neighbor (kNN) retrieval module to select the most similar examples in our training set as the few-shot in-context prompt for each test example. We opt for RoBERTa-large as the encoder for our kNN retrieval module after preliminary experiments showing its advantages over other alternatives including biomedical PLMs (Lee et al., 2019; Gu et al., 2021), sentence-transformer models (Reimers and Gurevych, 2019) and a BM25 baseline (Robertson and Zaragoza, 2009).
+
+# 3 Experiments
+
+# 3.1 Datasets
+
+We use all NER and RE datasets exactly as they are used in the BLURB benchmark (Gu et al., 2021) to evaluate biomedical IE. Table 1 lists the datasets and their statistics. For processing and train/dev/test splits, we refer the interested reader to Section 2.3 of Gu et al. (2021).
+
+# 3.1.1 Named Entity Recognition
+
+BC5CDR. The BioCreative V Chemical-Disease Relation corpus (Li et al., 2016) contains PubMed abstracts with both disease and chemical annotations; we evaluate models on each entity type separately following previous work (Gu et al., 2021).
+
+NCBI-disease. The Natural Center for Biotechnology Information Disease corpus (Doogan et al., 2014) contains disease name and concept annotations for 793 PubMed abstracts.
+
+JNLPBA. The Joint Workshop on Natural Language Processing in Biomedicine and its Applications dataset (Collier and Kim, 2004) contains 2,000 abstracts from MEDLINE selected and annotated by hand for gene related entities.
+
+BC2GM. The Biocreative II Gene Mention corpus (Smith et al., 2008) contains 17,500 sentences from PubMed abstracts labeled for gene entities.
+
+# 3.1.2 Relation Extraction
+
+DDI. The DDI dataset (Herrero-Zazo et al., 2013) consists of sentences from MEDLINE and DrugBank labeled with drug-drug interactions categorized into 4 true and one vacuous relation.
+
+ChemProt. ChemProt (Krallinger et al., 2017) is a dataset consisting of 1,820 PubMed abstracts with annotated chemical-protein interactions categorized into 5 true and one vacuous relation.
+
+GAD. The Genetic Association Database corpus (Bravo et al., 2015) consists of scientific excerpts and abstracts distantly annotated with gene-disease associations.
+
+# 3.2 Compared Methods
+
+In our main experiments, we compare three pretrained language models, PubMedBERT-base (Gu et al., 2021),3 BioBERT-large (Lee et al., 2019) and RoBERTa-large (Liu et al., 2019), fine-tuned on 100 training examples, with GPT-3 in-context learning where each test example's in-context prompt was retrieved from the same 100 training examples.4 Both PubMedBERT and BioBERT were pre-trained on a large corpus of PubMed articles; PubMedBERT was pre-trained from scratch with a biomedical-specific vocabulary while BioBERT was initialized from a BERT checkpoint. We use RoBERTa-large as a strong representative for general-domain PLMs. We refer the interested
+
+
PubMedBERT-base
BioBERT-large
RoBERTa-large
GPT-3 In-Context
Precision / Recall / F1
Precision / Recall / F1
Precision / Recall / F1
Precision / Recall / F1
BC5CDR-disease
67.43.7/ 67.51.2/ 67.42.4
62.95.0/ 69.03.0/ 65.84.1
66.91.7/ 68.74.7/ 67.71.8
57.92.3/ 35.02.9/ 43.62.2
BC5CDR-chem
86.11.9/ 88.64.8/ 87.31.3
84.82.6/ 87.33.3/ 86.01.1
82.11.8/ 87.31.0/ 84.61.3
74.72.5/ 71.42.2/ 73.00.3
NCBI-disease
68.54.7/ 67.62.4/ 68.02.9
59.610.6/ 67.06.1/ 63.08.7
64.33.7/ 68.76.7/ 66.45.1
55.26.7/ 49.06.1/ 51.41.4
JNLPBA
56.92.9/ 67.91.7/ 61.92.4
57.41.9/ 73.71.8/ 64.61.8
57.22.9/ 75.12.4/ 65.02.7
44.71.0/ 52.43.7/ 48.32.1
BC2GM
55.40.4/ 57.97.2/ 56.53.2
53.60.8/ 59.22.0/ 56.21.0
49.72.1/ 56.35.3/ 52.72.2
43.08.2/ 40.82.3/ 41.42.7
NER Average
66.91.0/ 69.90.9/ 68.20.8
63.71.8/ 71.30.4/ 67.10.9
64.01.6/ 71.20.5/ 67.20.9
55.13.6/ 49.70.6/ 51.51.3
DDI
19.92.0/ 79.13.0/ 31.82.7
17.31.4/ 75.41.2/ 28.21.9
25.52.2/ 77.93.5/ 38.42.6
9.61.1/ 48.61.9/ 16.11.6
ChemProt
17.92.2/ 62.03.9/ 27.72.9
19.06.8/ 60.68.2/ 28.78.7
22.00.3/ 69.71.2/ 33.40.4
15.90.8/ 68.91.9/ 25.91.3
GAD
63.76.6/ 57.27.9/ 60.27.4
63.25.8/ 72.75.7/ 67.65.8
64.14.0/ 78.511.5/ 70.35.6
51.40.9/ 92.34.2/ 66.01.8
RE Average
33.82.0/ 66.12.8/ 39.92.2
33.20.6/ 69.62.3/ 41.51.4
37.21.8/ 75.44.5/ 47.41.9
25.60.1/ 70.01.4/ 36.00.4
+
+Table 2: Comparison of the true few-shot performance of fine-tuned BERT-sized PLMs with GPT-3 in-context learning on biomedical IE datasets from the BLURB benchmark (Gu et al., 2021). We run all experiments on at most 1,000 test examples from each dataset and use 3 different 100-example training sets to account for data variance (standard deviation found in subscripts).
+
+reader to Appendix E for results on the base versions of BioBERT and RoBERTa.
+
+Implementation Details. We choose 100 training examples for our experiments as a reasonable number of annotated examples with which to start training an IE model for a new task. $^{5}$ For the RE tasks, we use a balanced set of 100 examples evenly distributed over all relation types. All BERT-sized PLMs are fine-tuned using the HuggingFace Transformers library (Wolf et al., 2020). For our GPT-3 experiments, we use a maximum of 10 and 5 in-context examples for NER and RE respectively to remain within GPT-3's input length limit. Due to the high cost of GPT-3, we evaluate all methods on at most 1,000 test examples from each dataset, using the same subset for all methods. For RE, the test examples are sampled in a stratified fashion to reflect the original test set distribution of relation types. Model and prompt design selection are done following the true few-shot framework we described in §2.2. To account for training data variance, we run all experiments using 3 different 100-example training sets and report the mean and standard deviation.
+
+# 4 Results & Discussion
+
+# 4.1 Main Results
+
+Our main experimental results can be found in Table 2. We first note that fine-tuned BERT-sized PLMs outperform GPT-3 in-context learning across all datasets, often by a large margin (on average $15.6 - 16.7\%$ for NER and $3.9 - 11.4\%$ for RE in F1).
+
+For NER, even though GPT-3's precision already drops by an average of 10 points, recall drops by twice as much. This indicates that entity underprediction is an important factor for GPT-3's poor in-context learning performance. In contrast, GPT-3's precision decreases much more steeply in the RE tasks due in part to the poor performance on the none relation class. In §4.4, we explore the reasons behind these issues in greater depth.
+
+Drilling down into the fine-tuning results, we note that BERT-sized PLMs obtain reasonable performance on the NER tasks, considering the extremely small size of the training sets. We obtain strong performance in the mid 80s for the drug extraction task (BC5CDR-chem) due to the high lexical regularity of drug names (e.g., suffixes like “-ate”, “-ine” or “-ol” are very frequent). On other biomedical NER datasets such as disease and gene extraction, performance stalls in the high and low 60s, respectively. This performance gap is likely due to the higher lexical diversity present in gene and disease names and is also observed in PLMs fine-tuned on the full training sets, which typically achieve scores in the low or mid 80s compared to low 90s for disease recognition (Gu et al., 2021). It is also worth noting that the base version of PubMedBERT outperforms the larger versions of the general-domain RoBERTa model and biomedicine-specific BioBERT model, suggesting that pre-training on domain-specific text and vocabulary from scratch is especially beneficial for NER, reinforcing the findings in Gu et al. (2021).
+
+Given the higher complexity of the task, it is not surprising that performance deteriorates for all evaluated methods on RE tasks (especially for DDI and ChemProt since they contain more relation types
+
+and higher class imbalance). In contrast with the NER task and previous work using larger training sets (Gu et al., 2021), RoBERTa-large notably outperforms PubMedBERT-base and BioBERT-large in the RE task. This suggests that, in the low-resource setting, larger-scale general-domain pretraining offsets the advantage of domain-specific pre-training in tasks which require more advanced syntactic and semantic understanding such as RE.
+
+# 4.2 Ablation Studies for GPT-3
+
+In Tables 3 and 4, we present ablation studies demonstrating the effectiveness of the techniques used to improve GPT-3 performance. These studies are done on a subset of 250 validation examples from one representative dataset for each task. We follow the LOOCV process discussed in §2.4.2 and use the same experimental setup as the main experiments with the exception of using only one 100-example training set instead of three.
+
+We ablate the kNN module for both tasks, replacing it with a module which randomly assigns examples from the training set to each test example's in-context prompt. As we can see in both Table 3 and 4, removing the kNN module reduces GPT-3 in-context learning performance. Performance drops more steeply for RE than NER, indicating that NER is more resilient to different in-context examples. This is to be expected given that there are only a limited number of completions to choose from in the RE task and thus having similar examples (with likely the same class label as the test example) would favorably bias GPT-3 towards predicting that class label. For NER, conversely, the diversity of entities is large and so it is rare that a training sentence would have similar completions to a given test example in the low-resource setting.
+
+
F1
Precision
Recall
Best Model
46.3
42.5
50.9
-kNN Module
45.3
42.7
48.2
-Logit Biases
42.6
66.7
31.3
-Both
38.7
60.2
28.5
+
+Table 3: NER ablation study on BC5CDR-disease.
+
+
F1
Precision
Recall
Best Model
26.1
16.1
68.0
-kNN Module
18.6
11.5
48.0
-Calibration
23.6
14.6
62.0
-Both
16.9
10.9
38.0
+
+Table 4: RE ablation study on DDI.
+
+In our NER-specific ablation study, we find that
+
+Figure 3: Data efficiency curves for BC5CDR disease NER dataset (left) and DDI RE dataset (right).
+
+RoBERTa-large BioBERT-large
+
+
+PubMedBERT-base GPT-3 In-Context
+
+removing the logit bias option leads to a large drop in performance even though precision improves. This boost in precision is due to our post-processing which removes predicted entities that are not in the original sentence and eliminates false positives. However, since invalid entities are generated instead of the valid spans which could be correct, recall drops. When ablating the kNN module and removing the logit bias option, we see an even greater drop, indicating that they are complementary. As for our RE-specific ablation study, removing the calibration module results in a drop in both precision and recall, with or without the kNN module, verifying its effectiveness.
+
+# 4.3 Data Efficiency
+
+In practice, choosing an optimal machine learning model requires considering not only a model's overall performance but also, crucially, its data efficiency, i.e., how performance improves w.r.t the amount of labeled data added. Previous work shows that GPT-3 in-context learning performance improves as dataset size increases when using kNN retrieval (Liu et al., 2022). Thus, we explore whether adding more training examples to sample from leads to performance improvements via more relevant in-context examples. In this experiment, we expand the training dataset to 200 and 500 training examples for one representative dataset from each task: BC5CDR-disease and DDI. For the BERT-sized PLMs, we carry out the same cross-validation procedure for model selection as in the main experiments. For GPT-3, we utilize the same optimal prompt design obtained from the main experiments to keep costs manageable. As shown in Figure 3, for NER, we find that performance for in-context learning improves at a similar rate as the small PLMs, keeping the large gap between them constant. On the other hand, for RE, GPT-3's performance quickly falls behind. This behavior can be partially explained by the fact that none
+
+relation examples are more challenging to retrieve by leveraging simple lexical features than their positive class counterparts.6 Overall, GPT-3 in-context learning does not seem to yield a high return for more data annotation to compensate for its lower few-shot performance, so fine-tuning BERT-sized PLMs is likely still a better choice in the medium to high-data regime.
+
+# 4.4 Detailed Error Analysis
+
+In this section, our in-depth analysis reveals the difficulty of in-context learning in handling the null class, $^{7}$ such as sentences that contain no entities (for NER) and entity pairs that hold none of the target relations (for RE). Such issues do not seem to be specific to biomedical applications but are likely detrimental for IE tasks in general.
+
+# 4.4.1 NER Error Analysis
+
+When applying an NER (or similar span extraction tasks such as slot filling) model in practice on an input sentence, it may, more often than not, contain no relevant entity at all (what we call null class examples). For example, up to $50\%$ sentences in the BC5CDR-disease dataset contain no disease. However, existing work on GPT-3 in-context learning has ignored this issue. For instance, Zhao et al. (2021) chose to remove all examples that contain no relevant slots from their slot filling experiment. Unfortunately, as we will show, such null class examples turn out to be a major culprit of in-context learning's poor performance.
+
+
Original BC5CDR-disease
F1
Precision
Recall
GPT-3 In-Context
43.6
57.9
35.0
RoBERTa-large
67.7
66.9
68.7
Modified BC5CDR-disease
F1
Precision
Recall
GPT-3 In-Context
59.8
60.3
59.3
RoBERTa-large
70.4
68.0
72.9
+
+To explore the effect of such null examples, we compare GPT-3 in-context learning with fine-tuned RoBERTa-large on a modified BC5CDR-disease dataset in which all sentences containing no disease entities are removed. As shown in Table 5,
+
+recall for GPT-3 improves by around $24\%$ , compared to only $4\%$ for RoBERTa-large, indicating that including null examples in a prompt biases GPT-3 much more strongly to predict few entities than adding them to the fine-tuning data.
+
+Table 5: Evaluation on modified BC5CDR-disease where sentences with no disease entity are removed.
+
+
Number of Entities
P(nul1) 2-Shot
P(nul1) 3-Shot
Absolute Δ
% Increase
Zero (nu11)
19.4
49.1
29.7
153%
One or More
15.8
40.9
25.1
159%
+
+Table 6: We compare the null token probability assigned by GPT-3 to examples with zero and non-zero entities in the BC5CDR-disease training dataset. We run GPT-3 on 2-shot and 3-shot prompts (the 3-shot prompts contain one extra null example to examine its effect). We present the average over 3 randomly chosen prompts.
+
+We hypothesize that this bias is due, at least in part, to the fact that GPT-3 in-context learning must infer that relevant entities should only be predicted if they are present in the given sentence, in contrast with smaller PLMs using the token-classification formulation. In order to examine this hypothesis more closely, we simplify our experimental setting to isolate the effect that an additional null example has on GPT-3's predictions. We run GPT-3 on the BC5CDR-disease training dataset without the k-NN retrieval module, instead using the same randomly chosen two-shot prompt (containing an example with no entities and one with at least one) across all examples. We then add one more random example without entities to every prompt and compare the probability of a null prediction in each setting.8 As shown in Table 6, we find that, while adding the second null example increases the null probability slightly more for zero entity examples than ones with entities in absolute terms, accounting for the lower initial null probability that is assigned to examples with one entity or more reverses this effect. The absence of a significantly larger increase on the null probability for examples with zero entities over others suggests that GPT-3 struggles to infer the appropriate prediction constraint for this task and rather increases the null probability somewhat uniformly across examples.
+
+
Label
Example
Model
Correct
Effect
Concurrent use of phenothiazines may antagonize the anorectic effect of diethylpropion.
RoBERTa-large
✓
Concurrent use of phenothiazines may antagonize the anorectic effect of diethylpropion.
GPT-3
✓
None
Other strong inhibitors of CYP3A4 (e.g., itraconazole, clarithromycin, nefazodone, troleandomycin, ritonavir, nelfinavir) would be expected to behave similarly.
RoBERTa-large
✓
Other strong inhibitors of CYP3A4 (e.g., itraconazole, clarithromycin, nefazodone, troleandomycin, ritonavir, nelfinavir) would be expected to behave similarly.
GPT-3
X(Mechanism)
+
+Table 7: We compare LIME-based saliency scores for two DDI examples predicted by GPT-3 in-context learning and RoBERTa-large. Masking out words highlighted in blue changes the model's current prediction (the color's intensity indicates the effect of removing each word on the final prediction). The drugs shown in bold are the head and tail entities for the relation being queried. The second example shows that GPT-3 in-context learning is more prone to spurious surface-level signals and thus suffers in correctly predicting the none class.
+
+# 4.4.2 RE Error Analysis
+
+We similarly examine the effect of the null class for RE, which is denoted as the none relation in the DDI dataset analyzed. As seen in Table 2, GPT-3 in-context learning achieves high recall but low precision on RE datasets that have multiple relation types such as DDI and ChemProt. Based on the confusion matrices derived from LOOCV (Appendix F.1), the none relation in DDI is rarely predicted by GPT-3. This bias against the none class greatly degrades the model's precision given that the DDI dataset is, rightfully so, heavily skewed towards this class.
+
+In order to further understand this bias, we use LIME (Ribeiro et al., 2016)9 to analyze the predictions for both GPT-3 and RoBERTa on an effect example and a none example.10 The first example in Table 7 was labeled correctly by both models by relying on "anorectic effect" as a relevant signal. For none examples, however, correct predictions often require the use of more implicit structural understanding rather than reliance on surface level signals, as can be seen in the second example in Table 7. In this none example, we note that RoBERTa-large's prediction is strongly affected by the phrase "of CYP3A4 (e.g.,)" which helps express that the drugs within the parenthesis are examples of the same drug class and therefore do not interact with each other. This suggests that RoBERTa correctly leverages the linguistic structure of the sentence. On the other hand, GPT-3's incorrect mechanism prediction appears to be supported by the phrase "expected to behave similarly", which is not relevant to the relation between the drugs being queried. This suggest that GPT-3 in-context learning is more prone to spurious surface-level signals and thus suff
+
+fers in predicting the none class.
+
+# 4.4.3 General Limitation or Domain Shift?
+
+Our analysis suggests that GPT-3's in-context learning faces a broader issue concerning the higher complexity of null examples compared to positive examples. However, given that there is little work thoroughly studying GPT-3 for general domain IE, we leave it for future work to determine to what extent our findings stem from this null class limitation, the biomedical domain shift, or some other unforeseen reasons.
+
+# 5 Related Work
+
+In-Context Learning. GPT-3 in-context learning (Brown et al., 2020) has been found to be competitive against supervised baselines in a broad range of tasks including text classification, natural language inference, machine translation, question answering, table-to-text generation and semantic parsing (Brown et al., 2020; Zhao et al., 2021; Liu et al., 2022; Shin et al., 2021). Many techniques have been introduced to bolster its performance by removing biases through calibration (Zhao et al., 2021; Malkin et al., 2022) as well as by optimizing prompt retrieval (Liu et al., 2022; Rubin et al., 2022; Shin et al., 2021), prompt ordering (Lu et al., 2022) and prompt design (Perez et al., 2021).
+
+Previous work exploring GPT-3's in-context learning performance for information extraction tasks is limited. Zhao et al. (2021) evaluate smaller GPT-3 models on a modified slot filling task in which all examples have at least one entity of interest. Additionally, Epure and Hennequin (2021) evaluate the in-context learning performance of GPT-2 on open-domain NER datasets, modified to keep a specific ratio of empty to non-empty examples. Our prompt design for biomedical NER draws heavily from both of these works.
+
+As far as we know, our work is among the first to comprehensively evaluate GPT-3's in-context learning performance on IE tasks.
+
+Prompt Design. Apart from work on in-context learning, several other research directions study how to reformulate NLP tasks as language generation tasks. Schick and Schütze (2021) reformulate text classification and natural language inference tasks using a diverse set of manually constructed cloze-style templates as prompts to improve few-shot learning in smaller pretrained language models. Gao et al. (2021) explore a similar setting but leverage an external language model to generate such templates. Both of these demonstrate the importance of using a variety of prompt designs.
+
+In a related direction, Huguet Cabot and Navigli (2021) achieve state-of-the-art performance on relation extraction benchmarks by reformulating it as an end-to-end sequence-to-sequence task. In the biomedical domain, several works (Raval et al., 2021; Phan et al., 2021; Parmar et al., 2022) follow the multi-task sequence-to-sequence paradigm introduced by Raffel et al. (2020) and outperform previous methods on many tasks such as side effect extraction, NER, RE, natural language inference and question answering. Our prompt design is heavily inspired by many of these efforts to reformulate IE tasks as sequence-to-sequence tasks.
+
+True Few-Shot Learning. Perez et al. (2021) argue that previous work overestimates the few-shot learning abilities of PLMs by using large validation sets for model and prompt selection. This setting has been adopted by many works in this direction in an effort to more accurately estimate few-shot performance (Logan IV et al., 2022; Schick and Schütze, 2022; Lu et al., 2022).
+
+Biomedical In-Context Learning. Previous work evaluating GPT-3's in-context learning abilities on biomedical NLP tasks suggests that using the GPT-3 API directly yields poor performance in the biomedical domain (Moradi et al., 2021). Their work provides experimental results on 5 biomedical NLP datasets on distinct tasks including relation extraction. In our study, we aim to provide a comprehensive and in-depth evaluation on biomedical IE by using an established multi-dataset biomedical NLP benchmark and leverage recent in-context
+
+learning techniques to obtain the highest possible performance to our knowledge and ability. However, our results ultimately provide more evidence for the inadequacy of GPT-3 in-context learning for biomedical IE tasks, which cannot be easily overcome with existing techniques. Interestingly, a concurrent work (Agrawal et al., 2022) finds that GPT-3 perform well on a different set of clinical IE tasks, including one on biomedical evidence extraction that is clinical in nature. More work is needed to ascertain the cause of this surprising gap in IE performance between the clinical and biomedical domains for in-context learning.
+
+# 6 Conclusions
+
+In this work, we explored the potential of GPT-3 in-context learning for the high impact task of biomedical information extraction (IE). Given that such a paradigm would provide significant advantages for biomedical IE applications, we spent considerable efforts exploring available techniques that have been proven effective for other in-context learning settings. We showed, however, that current techniques do not enable GPT-3 in-context learning to surpass BERT-sized PLM fine-tuning on a range of benchmark datasets for biomedical NER and RE. Additionally, we discussed some potentially general limitations of in-context learning in biomedical IE to be explored in future work: its difficulty in handling the null class, such as entityless NER examples and vacuous relation examples for RE. Apart from posing this question for further study, we hope our work provides helpful guidance to biomedical researchers and practitioners towards more promising and cost-effective tools for low-resource IE such as small PLM fine-tuning or perhaps even directly fine-tuning GPT-3.
+
+# Limitations
+
+While we have uncovered a large performance gap between current GPT-3 in-context learning techniques and standard fine-tuning in the true few-shot setting, there are several important limitations that are worth discussing. Our limited budget restricted our study to a small set of prompt styles and number of examples in the prompt. Although our experiments suggest otherwise, it is possible that having a larger prompt design search space or using more examples per prompt could narrow the gap between small PLM fine-tuning and GPT-3 in-context learning. Additionally, it is still unclear
+
+to what degree using larger validation sets, at the cost of compromising the few-shot assumption, for prompt selection could improve GPT-3's in-context learning performance. Perhaps more notably, the kNN retrieval module used in this study relies on whole sentence embeddings, as commonly done in the existing literature. However, intuitively, tasks like relation extraction require a more focused view around the target entity pair. We speculate that developing a better retrieval module that is able to incorporate such task-specific inductive biases will likely be beneficial for in-context learning, but we leave it for future work. Finally, it is important to note that while contextual calibration (Zhao et al., 2021) is shown to work well in some text classification tasks, it is unclear whether other more recent methods such as that by Malkin et al. (2022) could better address GPT-3's text generation biases or if more task-specific calibration mechanisms are necessary for IE tasks.
+
+# Acknowledgements
+
+The authors would like to thank colleagues from the OSU NLP group for their thoughtful comments. This research was supported in part by OSU TDAI, NSF OAC 2118240, NSF OAC 2112606, NSF IIS 1815674, NSF CAREER 1942980, and Ohio Supercomputer Center (Center, 1987).
+
+# References
+
+Monica Agrawal, Stefan Hegselmann, Hunter Lang, Yoon Kim, and David Sontag. 2022. Large language models are zero-shot clinical information extractors.
+Emily Alsentzer, John Murphy, William Boag, WeiHung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly Available Clinical BERT Embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72-78, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
+Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A Pretrained Language Model for Scientific Text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615-3620, Hong Kong, China. Association for Computational Linguistics.
+Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. O'Reilly Media, Inc.
+
+Alex Bravo, Janet Pinero, Núria Queralt-Rosinach, Michael Rautschka, and Laura I Furlong. 2015. Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research. BMC Bioinformatics, 16(1):55.
+Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.
+Ohio Supercomputer Center. 1987. Ohio Supercomputer Center.
+Nigel Collier and Jin-Dong Kim. 2004. Introduction to the Bio-entity Recognition Task at JNLPBA. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP), pages 73-78, Geneva, Switzerland. COLING.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. 2014. NCBI disease corpus: A resource for disease name recognition and concept normalization. Journal of Biomedical Informatics, 47:1-10.
+Elena V. Epure and Romain Hennequin. 2021. A Realistic Study of Auto-regressive Language Models for Named Entity Typing and Recognition. CoRR, abs/2108.11857.
+William Fedus, Barret Zoph, and Noam Shazeer. 2022. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. Journal of Machine Learning Research, 23(120):1-39.
+Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making Pre-trained Language Models Better Few-shot Learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816-3830, Online. Association for Computational Linguistics.
+
+Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021. Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing. ACM Trans. Comput. Healthcare, 3(1).
+María Herrero-Zazo, Isabel Segura-Bedmar, Paloma Martínez, and Thierry Declerck. 2013. The DDI corpus: An annotated corpus with pharmacological substances and drug-drug interactions. Journal of Biomedical Informatics, 46(5):914-920.
+Pere-Lluis Huguet Cabot and Roberto Navigli. 2021. REBEL: Relation extraction by end-to-end language generation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2370-2381, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Martin Krallinger, Obdulia Rabal, Saber A Akhondi, Martin Pérez Pérez, Jesús Santamaría, Gael Pérez Rodríguez, Georgios Tsatsaronis, Ander Intxaurrondo, José Antonio López, Umesh Nandal, et al. 2017. Overview of the biocreative VI chemicalprotein interaction track. In Proceedings of the sixth BioCreative challenge evaluation workshop, volume 1, pages 141-146.
+Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.
+Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. BioCreative V CDR task corpus: a resource for chemical disease relation extraction. Database (Oxford), 2016:baw068.
+Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022. What makes good in-context examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 100–114, Dublin, Ireland and Online. Association for Computational Linguistics.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. CoRR, abs/1907.11692.
+Robert Logan IV, Ivana Balazevic, Eric Wallace, Fabio Petroni, Sameer Singh, and Sebastian Riedel. 2022. Cutting down on prompts and parameters: Simple few-shot learning with language models. In *Findings of the Association for Computational Linguistics: ACL* 2022, pages 2824–2835, Dublin, Ireland. Association for Computational Linguistics.
+
+Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8086-8098, Dublin, Ireland. Association for Computational Linguistics.
+Nikolay Malkin, Zhen Wang, and Nebojsa Jojic. 2022. Coherence boosting: When your pretrained language model is not paying enough attention. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.
+Milad Moradi, Kathrin Blagec, Florian Haberl, and Matthias Samwald. 2021. GPT-3 Models are Poor Few-Shot Learners in the Biomedical Domain. CoRR, abs/2109.02555.
+Mihir Parmar, Swaroop Mishra, Mirali Purohit, Man Luo, Murad Mohammad, and Chitta Baral. 2022. In-BoXBART: Get instructions into biomedical multitask learning. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 112-128, Seattle, United States. Association for Computational Linguistics.
+Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 58-65, Florence, Italy. Association for Computational Linguistics.
+Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True Few-Shot Learning with Language Models.
+Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, and Grégoire Altan-Bonnet. 2021. SciFive: a text-to-text transformer model for biomedical literature. CoRR, abs/2106.03598.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research, 21(140):1-67.
+Shivam Raval, Hooman Sedghamiz, Enrico Santus, Tuka Alhanai, Mohammad Ghassemi, and Emmanuele Chersoni. 2021. Exploring a Unified Sequence-To-Sequence Transformer for Medical Product Safety Monitoring in Social Media. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3534–3546, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on
+
+Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.
+Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 1135-1144.
+Stephen Robertson and Hugo Zaragoza. 2009. The Probabilistic Relevance Framework: BM25 and Beyond. Found. Trends Inf. Retr., 3(4):333-389.
+Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2022. Learning to retrieve prompts for in-context learning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2655-2671, Seattle, United States. Association for Computational Linguistics.
+Timo Schick, Helmut Schmid, and Hinrich Schütze. 2020. Automatically Identifying Words That Can Serve as Labels for Few-Shot Text Classification. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5569-5578, Barcelona, Spain (Online). International Committee on Computational Linguistics.
+Timo Schick and Hinrich Schütze. 2021. It's not just size that matters: Small language models are also few-shot learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339-2352, Online. Association for Computational Linguistics.
+Timo Schick and Hinrich Schütze. 2022. True few-shot learning with Prompts—A real-world perspective. Transactions of the Association for Computational Linguistics, 10:716-731.
+Richard Shin, Christopher Lin, Sam Thomson, Charles Chen, Subhro Roy, Emmanouil Antonios Platanios, Adam Pauls, Dan Klein, Jason Eisner, and Benjamin Van Durme. 2021. Constrained language models yield few-shot semantic parsers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7699-7715, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Larry Smith, Lorraine K Tanabe, Rie Johnson Nee Ando, Cheng-Ju Kuo, I-Fang Chung, Chun-Nan Hsu, Yu-Shi Lin, Roman Klinger, Christoph M Friedrich, Kuzman Ganchev, Manabu Torii, Hong-fang Liu, Barry Haddow, Craig A Struble, Richard J Povinelli, Andreas Vlachos, William A Baumgartner, Jr, Lawrence Hunter, Bob Carpenter, Richard Tzong-Han Tsai, Hong-Jie Dai, Feng Liu, Yifei Chen,
+
+Chengjie Sun, Sophia Katrenko, Pieter Adriaans, Christian Blaschke, Rafael Torres, Mariana Neves, Preslav Nakov, Anna Divoli, Manuel Maña-López, Jacinto Mata, and W John Wilbur. 2008. Overview of BioCreative II gene mention recognition. Genome Biology, 9(S2):S2.
+Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, Elton Zheng, Rewon Child, Reza Yazdani Aminabadi, Julie Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, and Bryan Catanzaro. 2022. Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model. CoRR, abs/2201.11990.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate Before Use: Improving Few-shot Performance of Language Models. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 12697-12706. PMLR.
+
+# A Experimental Setup Details
+
+# A.1 Named Entity Recognition
+
+We follow the BIO tag formulation for NER and use standard fine-tuning process for PLMs used in Gu et al. (2021). Given a sentence containing $n$ tokens $X = [x_{1},\dots,x_{n}]$ , an NER system attempts to predict a tag for each token: $Y = [y_{1},\dots,y_{n}]$ which can then be translated into a set of $k$ entities. An encoder $H$ is used to obtain a contextualized representation for the sentence $X$ : $H(X) = [\vec{v_1},\dots,\vec{v_n}]$ . Each embedding $\vec{v_i}$ is then used to predict $y_{i}$ using a linear layer. The encoder $H$ and the linear layer are then fine-tuned using a standard cross entropy objective. We use NLTK (Bird et al., 2009) to tokenize all NER sentences.
+
+# A.2 Relation Extraction
+
+For RE we use the simplest formulation in standard fine-tuning, the subject and object entities for each relation are replaced in the original sentence by new special tokens [ENT1] and [ENT2]. An encoder $H$ is then used as in NER to obtain a contextualized representation $H(X) = [\vec{v_1},\dots,\vec{v_n} ]$ of the now masked sentence. As is standard for text classification tasks, the [CLS] token embedding is then used to predict each relation type. As with the NER task, a standard cross entropy loss is used to fine-tune the encoder and linear layer.
+
+# B Computational Cost
+
+For our experiments, we used 4 NVIDIA GeForce RTX 2080 Ti GPUs. The number of parameters for each model we used as well as the total GPU hours and costs associated with using GPT-3 are listed in Table 8.
+
+
# of Parameters (millions)
Total GPU Hours
Total Cost
RoBERTa-large
354
338
-
PubMedBERT-base
100
138
-
BioBERT-large
345
344
-
GPT-3 (davinci)
175,000
-
~$1,500
+
+# C Fine-Tuning Hyperparameters
+
+We run 5-fold cross validation for each 100 sample training subset to choose between all hyperparameters listed in Table 9.
+
+Table 8: Total GPU Hours and GPT-3 costs associated with our experiments.
+
+
Learning Rate
Batch Size
Warmup Ratio
Weight Decay
Early Stopping Checkpoint
Search Space
1e-5
5
2e-5
16
0.0
0.0
10
3e-5
32
0.06
0.01
15
5e-5
0.1
20
25
+
+Table 9: Hyperparameter search grid used with k-fold cross-validation to obtain the optimal hyperparameters for all PLM fine-tuning experiments.
+
+# D Prompt Designs
+
+We run leave-one-out cross validation for each 100 sample training subset to choose between all choices listed in Table 10. Prompt design selections were completely independent for each training subset to maintain the true few-shot learning setting.
+
+# E Base Models
+
+As expected, the base models added to Table 11 underperform their large counterparts on almost all datasets. Consistent with previous work (Gu et al., 2021), benefiting from the biomedical-specific vocabulary, PubMedBERT-base handily outperforms other base models on the NER task (as well as some large models on several tasks). However, on the RE tasks, RoBERTa models perform the best. Since RE tasks require more holistic understanding of the whole sentence, this suggests that RoBERTa provides more general linguistic knowledge than other PLMs specific to biomedicine.
+
+# F DDI Error Analysis
+
+# F.1 Confusion Matrices
+
+Figure 4 shows the error distribution for both GPT-3 and RoBERTa-large in a 100 example training set for the DDI relation extraction dataset. We obtain these by combining all folds from 5-fold and leave-one-out cross-validation for RoBERTa-large and GPT-3 respectively. From the figure, we can see that GPT-3 in-context learning rarely predicts the none class which indicates two drugs bare no relation to each other. We note that RoBERTa-large also suffers from a larger error rate for the none class than other classes, indicating that this class is challenging for both models, however, the gap is
+
+
NER
Overall Instructions
Sentence Introduction
Retrieval Message
# of In-Context Examples
Label Verbalizer
BC5CDR-disease
""
Sentence:
Diseases:
5
N/A
List the diseases mentioned in the following sentences.
Scientific Article Excerpt:
10
BC5CDR-chemical
""
Sentence:
Drugs:
5
List the drugs mentioned in the following sentences.
Scientific Article Excerpt:
10
NCBI-disease
""
Sentence:
Diseases:
5
List the diseases mentioned in the following sentences.
Scientific Article Excerpt:
10
JNLPBA
""
Sentence:
Genes:
5
List the genes mentioned in the following sentences.
Scientific Article Excerpt:
10
BC2GM
""
Sentence:
Genes:
5
List the genes mentioned in the following sentences.
Classify the interaction between drugs based on the provided scientific article excerpts.
Scientific Article Excerpt:
How do <DRUG1> and <DRUG2> interact according to the previous sentence? Which word best describes their interaction: advice, effect, mechanism, other or none? Interaction:
Classify the effect drugs have on the genes mentioned in the following scientific article excerpts.
Scientific Article Excerpt:
What effect does the drug <DRUG>have on gene <GENE>according to the previous sentence? Choose from the following: none, activator, inhibitor, agonist, antagonist or substrate. Effect:
GAD
""
Sentence:
Gene: <GENE>Disease: <DISEASE>Interaction:
5
0 > no1 > yes
Determine if there is any interaction between the diseases and genes mentioned in the provided scientific article excerpts.
Scientific Article Excerpt:
Based on the previous sentence, is there any interaction between gene <GENE>and disease <DISEASE?>
+
+Table 10: For each element in our proposed prompt design (overall task instruction, sentence introduction and retrieval message), we list every option used for each dataset. For our main experiments, we used LOOCV on 100 training examples to select among 8 combinations of our 3 design elements and the number of in-context examples added to the prompt for each task. We also list the label verbalization used for each relation extraction dataset.
+
+
PubMedBERT-base
BioBERT-base
RoBERTa-base
BioBERT-large
RoBERTa-large
GPT-3 In-Context
Precision / Recall / F1
Precision / Recall / F1
Precision / Recall / F1
Precision / Recall / F1
Precision / Recall / F1
Precision / Recall / F1
BC5CDR-disease
67.43./7 67.51.2/67.42.4
60.65.1/66.15.7/63.02.0
60.42.8/61.94.4/61.23.6
62.95.0/69.03.0/65.84.1
66.91.7/68.74.7/67.71.8
57.92.3/35.02.0/43.62.2
BC5CDR-chem
86.11.9/88.64.8/87.31.3
77.83.3/85.14.6/81.20.5
74.63.8/84.12.1/79.01.2
84.82.6/87.33.3/86.01.1
82.11.8/87.31.0/84.61.3
74.72.5/71.42.2/73.03.3
NCBI-disease
68.54.7/67.62.4/68.02.9
58.85.4/65.92.7/62.14.0
60.63.2/61.94.6/61.23.5
59.610.6/67.06.1/63.08.7
64.33.7/68.76.7/66.45.1
55.26.7/49.06.1/51.41.4
JNLPBA
56.92.9/67.91.7/61.92.4
49.10.2/66.71.9/56.60.8
54.62.7/71.42.6/61.92.7
57.41.9/73.71.8/64.61.8
57.22.9/75.12.4/65.02.7
44.71.0/52.43.7/48.32.1
BC2GM
55.40.4/57.97.2/56.53.2
46.42.5/57.31.0/51.31.9
46.23.0/53.70.4/49.71.6
53.60.8/59.22.0/56.21.0
49.72.1/56.35.3/52.72.2
43.08.2/40.82.3/41.42.7
NER Average
66.91.0/69.90.9/68.20.8
58.60.9/68.21.6/62.81.0
59.32.8/66.61.7/62.62.2
63.71.8/71.30.4/67.10.9
64.01.6/71.20.5/67.20.9
55.13.6/49.70.6/51.51.3
DDI
19.92.0/79.13.0/31.82.7
18.90.6/78.34.4/30.50.9
19.61.3/68.83.9/30.51.6
17.31.4/75.41.2/28.21.9
25.52.2/77.93.5/38.42.6
9.61.1/48.61.9/16.11.6
ChemProt
17.92.2/62.03.9/27.72.9
18.70.9/59.45.0/28.40.9
18.10.7/56.81.6/27.50.7
19.06.8/60.68.2/28.78.7
22.00.3/69.71.2/33.40.4
15.90.8/68.91.9/25.91.3
GAD
63.76.6/57.27.9/60.27.4
60.55.0/62.814.3/61.28.1
60.21.4/71.20.1/64.49.1
63.25.8/72.75.7/67.65.8
64.14.0/78.511.5/70.35.6
51.40.9/92.34.2/66.01.8
RE Average
33.82.0/66.12.8/39.92.2
32.71.7/66.85.1/40.02.7
35.14.6/68.010.7/43.57.7
33.20.6/69.62.3/41.51.4
37.21.8/75.44.5/47.41.9
25.60.1/70.01.4/36.00.4
+
+Table 11: Main experimental results from Table 2 with additional results from BioBERT and RoBERTa base models for appropriate comparison.
+
+
+Figure 4: Confusion matrices on 100 validation examples from DDI for GPT-3 (left) and RoBERTa-large (right).
+
+
+
+much smaller for RoBERTa than GPT-3 in-context learning.
+
+# F.2 Qualitative Analysis
+
+In Table 12, we present 3 positive and 3 none DDI examples respectively to help illustrate the more challenging nature of the none class as well as RoBERTa-large's superior ability to attend to more relevant implicit signals. In all three positive examples, the saliency scores attributed by LIME for RoBERTa and GPT-3 agree closely, suggesting that both models are able to leverage relevant surface level signals. The feature attribution for the none examples, however, suggests that GPT-3 continues leveraging surface level signals when more complex sentence level information is needed which RoBERTa seems to extract and use more effectively.
+
+The first none example shows GPT-3's prediction is affected by several irrelevant features such as other drugs in the drug list ("channel", "quinidine" and "carbamazepine"), the initial phrase explaining that specific studies have not been performed and the word "metabolized". In contrast, RoBERTa is unaffected by the removal of drugs from the drug list and is correctly affected by important signals such as the removal of "CYP3A4 (eg.", similar to
+
+the example in Table 7. For the second none example, GPT-3's incorrect prediction is most strongly affected by the words "binding", "diuretic" and "gastrointestinal" while for RoBERTa the effect of removing words is more uniformly distributed over the phrase "binding thiazide diuretics and reducing diuretic absorption from the gastrointestinal tract". This indicates that RoBERTa's prediction relies on broader phrase level information rather than word level signals. In the last example, we note that removing the phrase "with L-tryptophan" from the sentence would create an interaction between the drugs being queried by yielding the phrase "Using these medicines may increase the chance of side effects". The fact that RoBERTa's prediction is strongly affected by the removal of this phrase indicates that its decision boundary uses more complex linguistic signals than GPT-3 which leverages single words such as "inhibitors", "Using" and "increase" to arrive at its prediction.
+
+# F.3 LIME Details
+
+We choose LIME (Ribeiro et al., 2016) to perform our RE error analysis because it enables us to obtain faithful local explanations for GPT-3 in-context learning which are directly comparable with the ones from RoBERTa or other small PLMs. We
+
+
Label
Example
Model
Correct
Advice
Concomitant use of bromocriptine mesylate with other ergot alkaloids is not recommended.
RoBERTa-large
✓
Concomitant use of bromocriptine mesylate with other ergot alkaloids is not recommended.
GPT-3
✓
Advice
Consequently, it is recommended not to exceed a single 2.5 mg Vardenafil dose in a 72-hour period when used in combination with ritonavir.
RoBERTa-large
✓
Consequently, it is recommended not to exceed a single 2.5 mg Vardenafil dose in a 72-hour period when used in combination with ritonavir.
GPT-3
✓
Effect
However, reports suggest that NSAIDs may diminish the antihypertensive effect of ACE inhibitors.
RoBERTa-large
✓
However, reports suggest that NSAIDs may diminish the antihypertensive effect of ACE inhibitors.
GPT-3
✓
None
Although specific studies have not been performed, coadministration with drugs that are mainly metabolized by CYP3A4 (eg, calcium channel blockers, dapsone, disopyramide, quinine, amiodarone, quinidine, warfarin, tacrolimus, cyclosporine, ergot derivatives, pimozide, carbamazepine, fentanyl, alfentanyl, alprazolam, and triazolam) may have elevated plasma concentrations when coadministered with saquinavir;
RoBERTa-large
✓
Although specific studies have not been performed, coadministration with drugs that are mainly metabolized by CYP3A4 (eg, calcium channel blockers, dapsone, disopyramide, quinine, amiodarone, quinidine, warfarin, tacrolimus, cyclosporine, ergot derivatives, pimozide, carbamazepine, fentanyl, alfentanyl, alprazolam, and triazolom) may have elevated plasma concentrations when coadministered with saquinavir;
GPT-3
X (Other)
None
- Cholestyramine and colestipol resins: Cholestyamine and colestipol resins have the potential of binding thiazide diuretics and reducing diuretic absorption from the gastrointestinal tract
RoBERTa-large
✓
- Cholestyramine and coolestipol resins: Cholestyamine and colestipol resins have the potential of binding thiazide diuretics and reducing diuretic absorption from the gastrointestinal tract
GPT-3
X (Mechanism)
None
Monoamine oxidase (MAO) inhibitors such as isocarboxazid (e.g., Marplan), phenelzine (e.g., Nardil), procarbazine (e.g., Matulane), selegiline (e.g., Eldepryl), and tranylcypromine (e.g., Parnate): Using these medicines with L-tryptophan may increase the chance of side effects.
RoBERTa-large
✓
Monoamine oxidase (MAO) inhibitors such as isocarboxazid (e.g., Marplan), phenelzine (e.g., Nardil), procarbazine (e.g., Matulane), selegiline (e.g., Eldepryl), and tranylcypromine (e.g., Parnate): Using these medicines with L-tryptophan may increase the chance of side effects.
GPT-3
X (Effect)
+
+Table 12: LIME-based saliency scores for more DDI examples. We present 3 examples with true drug-drug interactions predicted correctly by both models and 3 none examples predicted correctly by RoBERTa-large but incorrectly by GPT-3 in-context learning. As in Table 7, masking out words highlighted in blue changes the model's current prediction and the color's intensity indicates the strength of the effect on the final prediction. The drugs shown in bold are the head and tail entities for the relation being queried.
+
+use a modified version of the original LIME implementation $^{11}$ (Ribeiro et al., 2016) to carry out our analysis in Appendix F.2 and §4.4.2. Due to resource constraints, we modify the token removal method in the original implementation from randomly masking out tokens to a sliding window of 3 tokens. This allows us to look at how phrase removal changes predictions while still using a reasonable number of neighbor examples. Since we use this tool for analyzing relation extraction only, we do not remove the entities that are being queried. For GPT-3 in-context learning, we keep the few-shot prompts constant and use BLANK as the replacement token given that GPT-3 does not have a mask token. We do not observe a large difference in the saliency scores when this mask token was changed. In our visualizations, the saliency score for each word is normalized by the largest score found for that example in order to make effects more apparent.
\ No newline at end of file
diff --git a/thinkingaboutgpt3incontextlearningforbiomedicaliethinkagain/images.zip b/thinkingaboutgpt3incontextlearningforbiomedicaliethinkagain/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f13b5c34c78f0c30d89e6f987196a03d54c6becc
--- /dev/null
+++ b/thinkingaboutgpt3incontextlearningforbiomedicaliethinkagain/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4ae2577bb57afbe449938748cc4c94eca5a6cabb9bb83fabd65664b2e592bd1a
+size 876883
diff --git a/thinkingaboutgpt3incontextlearningforbiomedicaliethinkagain/layout.json b/thinkingaboutgpt3incontextlearningforbiomedicaliethinkagain/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8e7347e46b610736a9f861452061c654742f1b20
--- /dev/null
+++ b/thinkingaboutgpt3incontextlearningforbiomedicaliethinkagain/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a9bce8a56fc0b71adca9bcea7306336624eb67230c5d27351b5e04feba809c0f
+size 405214
diff --git a/thirdpartyalignerforneuralwordalignments/85fcd1aa-3515-4ede-bcbd-5f0ed9069bc0_content_list.json b/thirdpartyalignerforneuralwordalignments/85fcd1aa-3515-4ede-bcbd-5f0ed9069bc0_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7f417fd611c974d00727cf4c4709db2ca2b30c21
--- /dev/null
+++ b/thirdpartyalignerforneuralwordalignments/85fcd1aa-3515-4ede-bcbd-5f0ed9069bc0_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8582b5e236defc31d7b6d0d5a3e1cebe36418f31ac7f6ff51033ef3c797d6e37
+size 86996
diff --git a/thirdpartyalignerforneuralwordalignments/85fcd1aa-3515-4ede-bcbd-5f0ed9069bc0_model.json b/thirdpartyalignerforneuralwordalignments/85fcd1aa-3515-4ede-bcbd-5f0ed9069bc0_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..55bf43ecaa92149d6a7156fc7ea8726861d9c765
--- /dev/null
+++ b/thirdpartyalignerforneuralwordalignments/85fcd1aa-3515-4ede-bcbd-5f0ed9069bc0_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:03a4b709ed18f9183c631c5e0bccda4449234616ed26def7849379e57da5d25a
+size 104097
diff --git a/thirdpartyalignerforneuralwordalignments/85fcd1aa-3515-4ede-bcbd-5f0ed9069bc0_origin.pdf b/thirdpartyalignerforneuralwordalignments/85fcd1aa-3515-4ede-bcbd-5f0ed9069bc0_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b56426df90c6b0647e6b81f7ada866906b002fdc
--- /dev/null
+++ b/thirdpartyalignerforneuralwordalignments/85fcd1aa-3515-4ede-bcbd-5f0ed9069bc0_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3d117094f69bcd8010b8c577bf829b44469153d5bc3237e07d219790c02d8ebb
+size 1028808
diff --git a/thirdpartyalignerforneuralwordalignments/full.md b/thirdpartyalignerforneuralwordalignments/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..9279e58a37d390adeab8272b7842fd5c3ebc1fa4
--- /dev/null
+++ b/thirdpartyalignerforneuralwordalignments/full.md
@@ -0,0 +1,353 @@
+# Third-Party Aligner for Neural Word Alignments
+
+Jinpeng Zhang $^{1*}$ , Chuanqi Dong $^{1*}$ , Xiangyu Duan $^{1\dagger}$ , Yuqi Zhang $^{2}$ , Min Zhang $^{1}$
+
+$^{1}$ Institute of Aritificial Intelligence, School of Computer Science and Technology, Soochow University
+
+2 Alibaba DAMO Academy
+
+{jpzhang1,cqdong}@stu.suda.edu.cn; xiangyuduan@suda.edu.cn;
+
+chenwei.zyq@alibaba-inc.com; minzhang@suda.edu.cn
+
+# Abstract
+
+Word alignment is to find translationally equivalent words between source and target sentences. Previous work has demonstrated that self-training can achieve competitive word alignment results. In this paper, we propose to use word alignments generated by a third-party word aligner to supervise the neural word alignment training. Specifically, source word and target word of each word pair aligned by the third-party aligner are trained to be close neighbors to each other in the contextualized embedding space when fine-tuning a pre-trained cross-lingual language model. Experiments on the benchmarks of various language pairs show that our approach can surprisingly do self-correction over the third-party supervision by finding more accurate word alignments and deleting wrong word alignments, leading to better performance than various third-party word aligners, including the currently best one. When we integrate all supervisions from various third-party aligners, we achieve state-of-the-art word alignment performances, with averagely more than two points lower alignment error rates than the best third-party aligner. We released our code at https://github.com/sdongchuanqi/Third-Party-Supervised-Aligner.
+
+# 1 Introduction
+
+Word alignment is to find the correspondence between source side and target side words in a sentence pair (Brown et al., 1993). It is widely applied in a variety of natural language processing (NLP) tasks, including learning translation lexicons (Ammar et al., 2016; Cao et al., 2019), cross-lingual transfer (Yarowsky et al., 2001; Padó and Lapata, 2009; Tiedemann, 2014; Agić et al., 2016; Mayhew et al., 2017; Nicolai and Yarowsky, 2019), and semantic parsing (Herzig and Berant, 2018). In
+
+
+Figure 1: Gold and third-party word alignments, and cosine similarities between contextualized embeddings of subwords in a parallel sentence pair before and after fine-tuning with third-party supervision.
+
+particular, word alignment plays a key role in many neural machine translation (NMT) related methods, such as imposing lexical constraints in the decoding process (Arthur et al., 2016; Hasler et al., 2018), improving automatic post-editing (Pal et al., 2017), guiding learned attention (Liu et al., 2016), and automatic analysis or evaluation of NMT models (Tu et al., 2016; Bau et al., 2018; Stanovsky et al., 2019; Neubig et al., 2019; Wang et al., 2020).
+
+Word alignment is usually inferred by GIZA++ (Och and Ney, 2003) or FastAlign (Dyer et al., 2013), which are based on the statistical IBM word alignment models (Brown et al., 1993). Recently, neural methods are applied for inferring the word alignment. They use NMT-based framework to induce alignments through using attention weights or feature importance measures, and surpass the statistical word aligners such as GIZA++ on a variety of language pairs (Li et al., 2019; Garg et al., 2019; Zenkel et al., 2019, 2020; Chen et al., 2020; Song et al., 2020a,b; Chen et al., 2021).
+
+Inspired by the success of the large-scale crosslingual language model (CLM) pre-training (De
+
+vlin et al., 2019; Conneau and Lample, 2019; Conneau et al., 2020), the pre-trained contextualized word embeddings are also explored for the word alignment task by either extracting alignments based on the pre-trained contextualized embeddings (Sabet et al., 2020) or fine-tuning the pretrained CLMs by self-training to get new contextualized embeddings appropriate for extracting word alignments (Dou and Neubig, 2021). Based on careful design of self-training objectives, the fine-tuning approach achieves competitive word alignment results (Dou and Neubig, 2021).
+
+In this paper, we use simple supervision instead of the self-training to fine-tune the pre-trained CLMs. The simple supervision is derived from a third-party word aligner. Given a parallel corpus, the third-party word aligner predicts the word alignments over the corpus, which are used as the supervision signal for the fine-tuning. In particular, for each aligned word pair of a parallel sentence pair, the contextualized embeddings of the source and target words are trained to have high cosine similarity to each other in the embedding space.
+
+As illustrated by Figure 1, the cosine similarities between the source and target words of the correct word alignments are not quite high before the finetuning. The third-party word aligner can provide some correct word alignments (e.g. "that" "must" "be" associated alignments) along with wrong ones (e.g. "primary" "objective" associated alignments) as the supervision. Although the supervision is not perfect, it is still helpful for driving the contextualized embeddings of the source and target words of a correct word alignment closer in the embedding space after the fine-tuning. Surprisingly, with imperfect third-party supervision in fine-tuning, the heat map of the cosine similarities exhibits clearer split between the correct and wrong word alignments than not fine-tuning. Wrong alignments of the third-party aligner are rectified after fine-tuning (e.g. "primary" "objective" associated alignments), and the incorrect alignment before fine-tuning (e.g. "be"associated alignment) is also rectified after finetuning.
+
+We perform experiments on word alignment benchmarks of five different language pairs. The results show that the proposed third-party supervising approach outperforms all third-party word aligners. When we integrate all supervisions from various third-party word aligners, we achieve state-of-the-art performances across all benchmarks, with an
+
+
+Figure 2: The framework of the fine-tuning with the third-party supervision.
+
+average word error rate two points lower than that of the best third-party word aligner.
+
+# 2 Approach
+
+Formally, the word alignment task can be defined as finding a set of word pairs in the sentence pair $\langle \mathbf{s}, \mathbf{t} \rangle$ , where $\mathbf{s}$ denotes the source sentence " $s_1, \ldots, s_n$ ", and $\mathbf{t}$ denotes the corresponding target sentence " $t_1, \ldots, t_m$ " parallel to $\mathbf{s}$ . The set of the word pairs is:
+
+$$
+A = \{\langle s _ {i}, t _ {j} \rangle | s _ {i} \in \mathbf {s}, t _ {j} \in \mathbf {t} \}.
+$$
+
+In each word pair $\langle s_i, t_j \rangle$ , $s_i$ and $t_j$ are translationally equivalent to each other within the context of the sentence pair.
+
+In the following, we will describe how we obtain the word alignments by fine-tuning the pre-trained CLMs. Different to previous work that fine-tunes by self-training (Dou and Neubig, 2021), we supervise the fine-tuning process with third-party word alignments.
+
+# 2.1 Third-Party Supervision
+
+The large-scale CLM pre-training has gained impressive performances across various NLP tasks (Libovický et al., 2019; Hu et al., 2020). As the outcome of the pre-trained CLMs, the contextualized word embeddings can represent words in semantic context across different languages. By further fine-tuning the CLMs, the contextualized embeddings of the source and target words of a word alignment in the embedding space can become closer, which makes it easier for identifying word alignments according to the simple geometry of the embedding space for each pair of parallel sentences.
+
+We propose to fine-tune the pre-trained CLMs with supervision from a third-party word aligner. Figure 2 shows the overall fine-tuning framework. For a source sentence $s_1, s_2, s_3, s_4$ and its corresponding target sentence $t_1, t_2, t_3$ , we stack CLM over them to obtain the contextualized word embeddings $\mathbf{hs} = "hs_1, hs_2, hs_3, hs_4"$ and $\mathbf{ht} = "ht_1, ht_2, ht_3"$ for the source and target sides, respectively. Since CLM models sentences of different languages in the same contextualized embedding space, it is easy to constitute a similarity matrix by directly computing the cosine similarities between $\mathbf{hs}$ and $\mathbf{ht}$ . The similarity matrix is:
+
+$$
+M = \mathbf {h} \mathbf {s} \times \mathbf {h} \mathbf {t} ^ {\mathrm {T}}
+$$
+
+In the matrix, word pairs with higher similarities are deemed as word alignments. Let $A'$ denotes the word alignments generated by the third-party word aligner. CLM is fine-tuned with the supervision of $A'$ so that $M$ is consistent with $A'$ . Although the third-party supervision $A'$ is not perfect, we observe that the fine-tuning can proceed with self-correction of imperfect $A'$ in the experiments.
+
+The supervision is bidirectional:
+
+$$
+P _ {s 2 t} (i, j) = \frac {e ^ {M _ {i , j}}}{\sum_ {j = 1} ^ {n} e ^ {M _ {i , j}}}
+$$
+
+$$
+P _ {t 2 s} (i, j) = \frac {e ^ {M _ {i , j}}}{\sum_ {i = 1} ^ {m} e ^ {M _ {i , j}}}
+$$
+
+$$
+\begin{array}{l} \mathcal{L} = \frac{1}{m}\sum_{i = 1}^{m}\sum_{\substack{j\\ s.t.}}P_{s2t}(i,j) \\ + \frac {1}{n} \sum_ {j = 1} ^ {n} \sum_ {\substack {i \\ s. t.}} \left\langle s _ {i}, t _ {j} \right\rangle \in A ^ {\prime} P _ {t 2 s} (i, j) \tag{1} \\ \end{array}
+$$
+
+where $P_{s2t}$ denotes the probability of source-to-target alignment between $s_i$ and $t_j$ , which is computed by softmax over the $i$ th row of $M$ . Correspondingly, $P_{t2s}$ denotes the probability of target-to-source alignment between $t_j$ and $s_i$ , which is computed by softmax over the $j$ th column of $M^1$ . $m$ and $n$ denote the lengths of the source and target sentences, respectively. We aim to maximize $\mathcal{L}$ , which sums the bidirectional probabilities subject to $A'$ supervision.
+
+Through the above training objective, CLM is fine-tuned to generate the contextualized embeddings suitable for building the similarity matrices to extract word alignments.
+
+# 2.2 Word Alignment Prediction
+
+Given a new pair of parallel sentences in the test set, we can predict its word alignments based on the CLM fine-tuned on the parallel training corpus. In particular, for the sentence pair, the source-to-target probability matrix $M_{s2t}$ which consists of probabilities of $P_{s2t}$ , and the target-to-source probability matrix $M_{t2s}$ which consists of probabilities of $P_{t2s}$ , are computed using the fine-tuned CLM at first, then the set of word alignments $A$ can be deduced according to the intersection of the two matrices:
+
+$$
+A = \{\langle s _ {i}, t _ {j} \rangle | P _ {s 2 t} (i, j) > c \& P _ {t 2 s} (i, j) > c \}
+$$
+
+where $c$ is a threshold. Only the word pairs whose source-to-target alignment probability and target-to-source alignment probability are both greater than $c$ are deemed as the predicted word alignments.
+
+# 2.3 Integrating various Third-Party Supervisions
+
+Different third-party word aligners exhibit different behaviors in the word alignment results. We integrate the word alignments produced by various aligners into one set of supervisions for the finetuning process to test if they can be combined to improve the performance further. At first, we group all third-party aligners' output alignments into one union. Then we utilize the union in two category of methods: filtering and weighting.
+
+The filtering method abandons word alignments in the union which have low consistency between various aligners, and only keep the alignments that majority of the aligners consent to. The remaining word alignments are used to supervise the fine-tuning process. Since different aligners get different performances, we assign credit to each aligner by using its performance on the development set (i.e., negative alignment error rate of the development set), then we normalize the credits of all aligners by softmax. Consequently, each word alignment $\langle s_i,t_j\rangle$ in the union has a total credit:
+
+$$
+Credit_{total}(i,j) = \sum_{\substack{k = 1\\ s.t. \langle s_{i},t_{j}\rangle \in A_{k}^{\prime}}}^{K}Credit_{k}(i,j)
+$$
+
+where $A_{k}^{\prime}$ denotes the set of word alignments of the $k$ th third-party word aligner, $K$ denotes the number of the third-party word aligners, and $Credit_{k}$ is the credit of the $k$ th aligner after softmax. $Credit_{total}$ represents the degree of agreement between various aligners. Only word alignments whose $Credit_{total}$ are greater than a threshold $f$ are kept for the subsequent fine-tuning.
+
+Different to the filtering method, the weighting method considers all word alignments in the union, though it put weights over them in the fine-tuning. $Credit_{total}$ used in the filtering method is also adopted in the weighting method:
+
+$$
+w _ {i, j} = \frac {1}{1 + e ^ {- \lambda (C r e d i t _ {t o t a l} (i , j) - f)}}
+$$
+
+where $w_{i,j}$ is the weight of the word pair $\langle s_i,t_j\rangle$ When Credittotal exceeds the threshold $f$ ,the weight tends to 1, otherwise it tends to O. $\lambda$ is the hyper-parameter that controls the effect of the supervision integration. $w_{i,j}$ is inserted into the fine-tuning objective $\mathcal{L}$ in euation (1) by simply replacing $P_{s2t}(i,j)$ with $w_{i,j}P_{s2t}(i,j)$ , and replacing $P_{t2s}(i,j)$ with $w_{i,j}P_{t2s}(i,j)$
+
+# 2.4 Handling Subwords
+
+Subwords (Sennrich et al., 2016; Wu et al., 2016) are widely used in pre-training CLMs. The fine-tuning process is conducted on the contextualized embeddings of the subwords. So we run all third-party word aligners at the subword level to get subword alignments, which are used for supervising the fine-tuning. During testing, we get the subword alignments for the test set sentence pairs at first, then convert the subword alignments to the word alignments by following previous work (Sabet et al., 2020; Zenkel et al., 2020), which consider two words to be aligned if any of their subwords are aligned.
+
+# 3 Experiments
+
+We test the proposed third-party supervised finetuning approach on word alignment tasks of five language pairs: Chinese-English (Zh-En), German-English (De-En), English-French (En-Fr), Romanian-English (Ro-En) and Japanese-English (Ja-En).
+
+# 3.1 Datasets
+
+We use the benchmark datasets of the five language pairs. They are utilized in two ways. For all third-party aligners, whole training corpus for each language pair is used by each third-party aligner. For our approach, only a fraction of the whole training corpus for each language pair is used in the fine-tuning phase.
+
+Regarding the datasets for all third-party aligners, the configuration is the same to previous works. The Zh-En training-set is from the LDC corpus which consists of 1.2M sentence pairs, and the test and development sets are obtained from the TsinghuaAligner website (Liu et al., 2005). For the De-En, En-Fr, Ro-En datasets, we follow the experimental setup of previous work (Zenkel et al., 2019, 2020) and use their pre-processing scripts (Zenkel et al., 2019) to get the training and test sets. The Ja-En dataset is obtained from the Kyoto Free Translation Task (KFTT) word alignment data (Neubig et al., 2011), in which the sentences with less than 1 or more than 40 words are removed. The Japanese sentences are tokenized by KyTea tokenizer (Neubig et al., 2011).
+
+Regarding the datasets for our fine-tuning approach, we only use the first 80,000 sentence pairs of the whole training corpus for each language pair. Basically, the third-party supervision for these sentence pairs are extracted from the word alignments of the whole training corpus induced by the third-party aligner. We also test training the third-party aligner just on the 80,000 sentence pairs to provide the third-party supervision, the results are presented in section 3.8. Besides, we also vary the data size for the fine-tuning as shown in the experimental section 3.5.
+
+Table 1 presents the statistics of these datasets. Since De-En, En-Fr, and En-Ro have no manually aligned development sets, we take the last 1,000 sentences of the training data as the development sets(Ding et al., 2019), in which the aligner is self-tuned on the alignments predicted by itself in the last iteration. Other development sets and all test sets are manually aligned. All training sets do not contain manually labeled word alignments.
+
+
TRAIN
FINE-TUNE
DEV
TEST
Zh-En
1,252,977
80,000
450
450
De-En
1,918,317
80,000
1,000
508
En-Fr
1,129,104
80,000
1,000
447
Ro-En
447,856
80,000
1,000
248
Ja-En
329,882
80,000
653
582
+
+Table 1: Number of sentence pairs in the benchmark datasets.
+
+# 3.2 Settings
+
+Pre-trained Cross-lingual Language Models. For fine-tuning, we investigate two types of pretrained CLMs, namely mBERT and XLM(Conneau and Lample, 2019). mBERT is pre-trained over Wikipedia texts of 104 languages with the same settings to Dou and Neubig (2021). For XLM, we have tried its two released models: 1) XLM-15(MLM+TLM) which is pre-trained with MLM and TLM objectives and supports 15 languages. 2) XLM-100(MLM), which is trained with MLM and supports 100 languages. Specifically, for ZhEn, De-En, and En-Fr, which are among the 15 languages, we use XLM-15(MLM+TLM) same to Dou and Neubig (2021). For Ro-En and Ja-En which are not covered by XLM-15(MLM+TLM), we choose XLM-100 (MLM) instead with a modification that XLM-100 (MLM) is further trained on the parallel training corpora of Ro-en and Ja-En with the TLM objectives to be consistent with XLM-15(MLM+TLM). In the following, unless with clear specification, XLM stands for XLM-15 or XLM-100 in appropriate circumstances.
+
+The contextualized word embeddings are extracted from the hidden states of the $i$ th layer of the pre-trained CLMs, where $i$ is an empirically-chosen hyper-parameter based on the development set performances. For XLM-15, we use its 5th layer to extract the contextual embeddings (Hewitt and Manning, 2019; Tenney et al., 2019), while for XLM-100, we use its 9th layer. For mBERT, we use its 8th layer. We directly use the subwords in the pre-trained CLMs, i.e., BPE subwords in XLM and word piece subwords in mBERT.
+
+Training Setup and Hyper-parameters. We fine-tune XLM and mBERT models for 10 epochs over the parallel fine-tuning corpus for each language pair, with a batch size of 8. We use AdamW(Loshchilov and Hutter, 2017) with learning rate of 1e-5. The dropout rate is set to 0.1. The training process typically takes 2 to 3 hours. The hyper-parameters are tuned based on the develop
+
+ment set performances. Regarding the threshold $c$ in the word alignment prediction, it is set to 1e-6 for Ro-En and 0.1 for the others. Regarding the hyper-parameters in integrating the various third-party supervisions, $f$ is set to 0.45 and $\lambda$ is set to 0.5 for all language pairs.
+
+# 3.3 Third-Party Word Aligners
+
+We explore various third-party word aligners ranging from statistical approaches to neural approaches to supervise the fine-tuning process. The aligners include:
+
+- FastAlign (Dyer et al., 2013) $^4$ : a popular statistical word aligner which is an effective reparameterization of IBM model 2.
+- GIZA++(Och and Ney, 2003) $^5$ : another popular statistical word aligner implementing the IBM models. We use traditional settings of 5 iterations each for model 1, HMM model, model 3 and model 4.
+- Eflomal(Östling and Tiedemann, 2016)6: an efficient statistical word aligner using a Bayesian model with Markov Chain Monte Carlo inference.
+- SimAlign(Sabet et al., 2020)7: a word aligner that directly uses static and contextualized embeddings of BERT to extract word alignments. We use its Argmax model with default settings.
+- AwesomeAlign(Dou and Neubig, 2021)8: a neural word aligner that fine-tunes CLMs by self-training to produce contextualized embeddings suitable for word alignment.
+- MaskAlign(Chen et al., 2021): a neural word aligner based on self-supervision which parallel masks each target token and predicts it conditioned on both sides remaining tokens to better model the alignment.
+
+For some language pairs that are not reported in the papers of the above third-party aligners, we run their released tools on the benchmark datasets to get the corresponding results. Specifically, for
+
+4https://github.com/clab/fast_align
+5https://github.com/moses-smt/mgiza
+6https://github.com/robertostling/eflomal
+7https://github.com/cisnlp/simalign
+$^{8}$ https://github.com/neulab/awesome-align
+9https://github.com/THUNLP-MT/Mask-Align
+
+Zh-En, we run FastAlign, Eflomal, and SimAlign. For Ja-En, we run FastAlign, GIZA++, Eflomal, SimAlign, and MaskAlign. Because the evaluation in AwesomeAlign for Zh-En ignores manually labeled possible alignments, which is inconsistent to other works, we run AwesomeAlign for Zh-En to re-evaluate with considering the manually labeled possible alignments.
+
+# 3.4 Main Results
+
+The alignment error rate (AER) (Och and Ney, 2003) is used to evaluate the performances. Main results are summarized in Table 2. Compared to all third-party word aligners, which are also set as the baselines, our proposed approach achieves the state-of-the-art performances across the five language pairs, with an average AER of more than two points lower than the best third-party word aligner.
+
+Table 2 presents the results of fine-tuning XLM. The results of fine-tuning mBERT is reported in Table 3. Both fine-tuning approaches perform better than the third-party word aligners. Since fine-tuning CLMs is conducted in the subword level, we need to adapt the third-party aligners for subwords. Given the parallel corpus of each language pair, we directly use the dictionary of the CLM to get the subwords of the corpus, then run each third-party aligner on such corpus which is subword segmented. Such adapted results are reported in both tables with the subscript "adapted" to each third-party aligner10. For neural aligners such as MaskAlign which already uses subwords, the adaptation is still needed since the subwords of the pretrained CLM are different.
+
+Regarding the plain contextualized embeddings in XLM and mBERT, they can be directly aligned between source and target languages by mining the closest neighbors in the universal embedding space, as shown in the "w/o Fine-tuning" rows in Table 2 and 3 (Dou and Neubig, 2021). When we further fine-tune these embeddings supervised by the subword alignments produced by each adapted individual third-party aligner, we obtain significant improvement over each individual third-party aligner. When compare fine-tuning to without finetuning ("w/o Fine-tuning" rows), we found that
+
+
+
+
+Figure 3: The effect of the different sizes of parallel corpora for the fine-tuning.
+
+
+
+
+
+fine-tuning generally performs better than without fine-tuning, except for fine-tuning with the supervision of FastAlignadapted. Since FastAlignadapted performs remarkably worse than without fine-tuning, it is hard for FastAlignadapted to provide effective supervision for the fine-tuning. Since AwsomeAlignadapted already fine-tunes the CLMs by self-training, continuing to fine-tune CLMs with the supervision of AwsomeAlignadapted does not gain improvements. At last, when we integrate all supervisions from various third-party aligners, we achieve state-of-the-art AER. Details of integrating all supervisions are presented in section 3.7.
+
+# 3.5 The Effect of The Fine-tuning Corpus Size
+
+Figure 3 presents the performance variance when the size of parallel corpus for the fine-tuning varies. As the fine-tuning corpus becomes larger, AER becomes lower across all five language pairs. The full corpus is identical to that used in training the third-party aligners. The curve for En-Fr is presented in the appendix due to space limit. Usually 80k sentence pairs can provide good supervisions for the fine-tuning, with limited margin to the performance of using the full corpus. Note that the performance of using 2k sentence pairs for fine-tuning is less than two points worse than that of using the full corpus, even just 0.4 points worse in En-Fr.
+
+# 3.6 Self-Correction Effect
+
+Although the supervision from the third-party aligner is not perfect, we observe a self-correction effect that as the fine-tuning proceeds, more accurate word alignments other than the third-party alignments are identified as they become closer
+
+
Zh-En
De-En
En-Fr
Ro-En
Ja-En
AVG
Baseline
FastAlign (Dyer et al., 2013)
27.3
27.0
10.5
32.1
51.1
29.6
GIZA++ (Och and Ney, 2003)
18.5
20.6
5.9
26.4
48.0
23.9
Eflomal (Östling and Tiedemann, 2016)
23.4
22.6
8.2
25.1
47.5
25.4
SimAlign (Sabet et al., 2020)
19.6
19.0
6.0
30.5
48.6
26.4
AwesomeAlign (Dou and Neubig, 2021)
13.3
15.6
4.4
23.0
38.4
18.9
MaskAlign (Chen et al., 2021)
13.8
14.4
4.4
19.5
40.8
18.6
Fine-tuning XLM
w/o Fine-tuning
18.0
16.2
4.9
27.1
42.8
21.8
FastAlignadapted
23.0
27.0
11.2
32.2
49.3
28.5
w/ FastAlignadapted Supervision
21.4
24.2
9.8
27.4
46.6
25.9
GIZA++adapted
18.8
19.3
6.3
29.0
43.9
23.5
w/ GIZA++adapted Supervision
13.3
15.2
5.1
23.8
39.2
19.3
Eflomaladapted
27.0
26.0
13.1
27.8
47.6
28.3
w/ Eflomaladapted Supervision
14.0
18.4
6.1
23.6
43.7
21.2
SimAlignadapted
21.3
17.3
5.1
33.3
48.2
25.0
w/ SimAlignadapted Supervision
14.7
14.8
4.5
26.5
44.0
20.9
AwesomeAlignadapted
13.7
17.2
4.7
24.2
40.4
20.0
w/ AwesomeAlignadapted Supervision
13.6
17.4
4.6
24.4
40.2
20.0
MaskAlignadapted
15.7
15.3
4.6
19.2
41.6
19.3
w/ MaskAlignadapted Supervision
12.1
13.9
4.3
18.8
34.3
16.7
w/ Integrated Supervision
11.3
13.9
4.0
18.6
33.4
16.2
+
+Table 2: AER results of the baseline systems and the systems of fine-tuning XLM with the third-party supervisions. The lower AER, the better. AVG denotes the average AER over the five language pairs.
+
+
Zh-En
De-En
En-Fr
Ro-En
Ja-En
AVG
Fine-tuning mBERT
w/o Fine-tuning
17.9
17.4
5.6
27.3
45.2
22.7
FastAlignadapted
22.9
27.2
11.7
31.9
49.0
28.5
w/ FastAlignadapted Supervision
21.1
26.3
10.1
25.9
47.0
26.1
Eflomaladapted
27.2
25.9
13.0
26.8
48.5
28.3
w/ Eflomaladapted Supervision
16.3
21.0
7.2
22.3
44.4
22.2
GIZA++adapted
18.3
19.9
6.3
27.6
42.6
22.9
w/ GIZA++adapted Supervision
13.5
17.5
5.2
23.2
37.7
19.4
SimAlignadapted
19.6
19.0
5.9
30.5
48.6
24.7
w/ SimAlignadapted Supervision
16.6
16.2
5.4
24.0
43.9
21.2
AwesomeAlignadapted
13.3
15.2
4.3
23.3
38.5
18.9
w/ AwesomeAlignadapted Supervision
13.4
15.0
4.5
23.0
38.2
18.8
MaskAlignadapted
15.7
15.9
4.3
20.3
41.6
19.6
w/ MaskAlignadapted Supervision
11.5
15.2
3.9
19.5
34.6
16.9
w/ Integrated Supervision
11.0
14.8
3.8
19.3
33.2
16.4
+
+Table 3: AER results of fine-tuning mBERT with the third-party supervisions.
+
+in the embedding space, and some wrong word alignments of the third-party aligner get departed farther in the space, which we deem that they do not influence the fine-tuning process.
+
+Figure 4 presents the self-correction effect. In this subsection, we include the test set into the fine-tuning set for the new fine-tuning to check the predicted alignments against gold alignments. MaskAlign and XLM are used in this study. At first, we extract MaskAlign results of the test set as part of the supervision for the fine-tuning. As the fine-tuning steps forward, on the test set, we compute the precision of newly predicted alignments not included in the third-party alignments, denoted as "New", and the rate of the deleted alignments
+
+(certain third-party alignments not included in the predicted alignments) which are truly wrong alignments amongst all deleted alignments, denoted as "Del". Besides, we compute the precision of remaining alignments in the third-party alignments, denoted as "Remain". Figure 4 shows that "New" and "Del" increase as the fine-tuning proceeds, supporting the AER decrease in the experiment. "Remain" almost keeps horizontal, indicating the stability of the fine-tuning process. The effect of En-Fr is shown in the appendix.
+
+
+Figure 4: Self-correction effect in the fine-tuning process.
+
+
Zh-En
De-En
En-Fr
Ro-En
Ja-En
AVG
Intersection
13.0
17.0
4.8
23.4
39.4
19.5
Union
11.8
15.2
4.1
20.4
35.1
17.3
Union-Filtering
11.0
14.8
3.8
19.3
33.2
16.4
Union-Weighting
11.2
14.7
3.8
19.2
33.7
16.5
+
+# 3.7 Results of Integrating Various Third-Party Supervisions
+
+Table 4 presents the comparison between the performances of different integration methods. Finetuning mBERT is applied in this study for its computation efficiency. First, we intersect the word alignments from all third-party aligners as supervisions. Since aligners perform differently, AER is impacted by the worst aligner which results in small number of word alignments in the intersection. In contrast, when we get the union of all third-party alignments, its performance is much better, but it still contain noises hampering AER results. When we use filtering and weighting methods to deal with the noises, the integration gets the best performances, and surpasses all third-party aligners.
+
+Ablation studies are shown in Table 5. Removing one aligner from the integration causes different performance variances. It shows that removing MaskAlign impact the integration performance most, since it is best aligner in most language pairs.
+
+# 3.8 Training Third-Party Aligners on The Same Parallel Corpus for The Fine-tuning
+
+Although the fine-tuning approach only needs a small fraction of the whole parallel corpus for each language pair, e.g. 80k sentence pairs for the fine
+
+Table 4: AER of different integration methods.
+
+
Zh-En
De-En
En-Fr
Ro-En
Ja-En
AVG
w/o FastAlign
11.2
14.9
3.9
19.1
33.7
16.5
w/o Eflomal
11.0
15.1
4.0
19.6
34.0
16.7
w/o GIZA++
11.2
15.1
3.8
19.4
35.2
16.7
w/o SimAlign
11.3
15.0
3.9
19.3
33.8
16.6
w/o MaskAlign
12.3
15.1
4.2
22.5
36.4
18.1
w/o AwesomeAlign
11.5
15.4
4.0
19.4
34.4
16.9
All
11.0
14.8
3.8
19.3
33.2
16.4
+
+Table 5: Ablation studies of the integration method using Union-Filtering.
+
+
Zh-En
De-En
En-Fr
Ro-En
Ja-En
AVG
XLM
MaskAlignadapted
23.8
27.8
7.5
23.1
66.1
29.7
Fine-tuning
13.6
14.5
4.2
20.9
36.9
18.0
mBERT
MaskAlignadapted
19.8
25.5
7.6
20.3
61.0
26.8
Fine-tuning
11.6
14.7
4.8
19.9
35.3
17.3
+
+Table 6: AER of fine-tuning XLM and mBERT with the third-party supervision, which is generated by $\mathbf{MaskAlign}_{\mathrm{adapted}}$ trained on the small parallel corpus same to that used in the fine-tuning.
+
+tuning, its supervision is extracted from the alignments of the third-party aligner which is trained on the whole parallel corpus. In this subsection, we check if only using the small corpus, which is used in the fine-tuning, for training the third-party aligner can seriously impact the word alignment performance. Table 6 shows the result. Training MaskAlign on small corpus seriously drags down AER performances when compared to training on full corpus, with averagely over 7 points worse than "MaskAlignadapted" in Table 2 and 3. Surprisingly, fine-tuning with such worse supervision can still achieve remarkably better performances, even surpassing or performing comparable to the strongest baseline system MaskAlign in Table 2. The reason for this phenomenon is that MaskAlignadapted generates fewer but more accurate alignments, which is effective enough for the supervision. We also use 40k sentence pairs for this study. Please refer to Appendix C for the study.
+
+# 4 Conclusion
+
+We propose an approach of using a third-party aligner for neural word alignments. Different to previous work based on careful design of self-training objectives, we simply use the word alignments generated by the third-party aligners to supervise the training. Although the third-party word alignments are imperfect as the supervision, we observe that the training process can do self-correction over the third-party word alignments
+
+by detecting more accurate word alignments and deleting wrong word alignments based on the geometry similarity in the contextualized embedding space, leading to better performances than the third-party aligners. The integration of various third-party supervisions improves the performance further, achieving state-of-the-art word alignment performance on benchmarks of multiple language pairs.
+
+# Limitations
+
+The proposed third-party supervised fine-tuning approach is not applicable to using the best word alignments, which are generated by the integrated supervision in this paper, as the new supervision signal to continue the fine-tuning. Such continual fine-tuning does not obtain significant improvement, which indicates the ineffectiveness of continual fine-tuning with the supervision of self predicted alignments.
+
+# Ethics Statement
+
+The data used in our experiments are either freely downloadable from web or obtained via the LDC license. The codes of the third-party aligners and the pre-trained CLMs are freely downloadable from web.
+
+# Acknowledgments
+
+The authors would like to thank the anonymous reviewers for the helpful comments. This work was supported by National Natural Science Foundation of China (Grant No. 62276179, 62036004), and was also partially supported by the joint research project of Alibaba and Soochow University.
+
+# References
+
+Zeljko Agić, Anders Johannsen, Barbara Plank, Héctor Martínez Alonso, Natalie Schluter, and Anders Søgaard. 2016. Multilingual projection for parsing truly low-resource languages. Transactions of the Association for Computational Linguistics, 4:301-312.
+Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A. Smith. 2016. Massively multilingual word embeddings. arXiv 1602.01925.
+Philip Arthur, Graham Neubig, and Satoshi Nakamura. 2016. Incorporating discrete translation lexicons into neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1557-1567, Austin, Texas. Association for Computational Linguistics.
+
+Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2018. Identifying and controlling important neurons in neural machine translation. In Proceedings of the International Conference on Learning Representations.
+Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311.
+Steven Cao, Nikita Kitaev, and Dan Klein. 2019. Multilingual alignment of contextual word representations. In Proceedings of the International Conference on Learning Representations.
+Chi Chen, Maosong Sun, and Yang Liu. 2021. Mask align: Self-supervised neural word alignment. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4781-4791, Online. Association for Computational Linguistics.
+Yun Chen, Yang Liu, Guanhua Chen, Xin Jiang, and Qun Liu. 2020. Accurate word alignment induction from neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 566-576, Online. Association for Computational Linguistics.
+Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettle-moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
+Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. Advances in neural information processing systems, 32.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Shuoyang Ding, Hainan Xu, and Philipp Koehn. 2019. Saliency-driven word alignment interpretation for neural machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 1-12, Florence, Italy. Association for Computational Linguistics.
+
+Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2112-2128, Online. Association for Computational Linguistics.
+Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A simple, fast, and effective reparameterization of ibm model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648.
+Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, and Matthias Paulik. 2019. Jointly learning to align and translate with transformer models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4453-4462, Hong Kong, China. Association for Computational Linguistics.
+Eva Hasler, Adrià de Gispert, Gonzalo Iglesias, and Bill Byrne. 2018. Neural machine translation decoding with terminology constraints. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 506-512, New Orleans, Louisiana. Association for Computational Linguistics.
+Jonathan Herzig and Jonathan Berant. 2018. Decoupling structure and lexicon for zero-shot semantic parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1619-1629, Brussels, Belgium. Association for Computational Linguistics.
+John Hewitt and Christopher D Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138.
+Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In International Conference on Machine Learning, pages 4411-4421. PMLR.
+Xintong Li, Guanlin Li, Lemao Liu, Max Meng, and Shuming Shi. 2019. On the word alignment from neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1293-1303.
+Jindrich Libovický, Rudolf Rosa, and Alexander Fraser. 2019. How language-neutral is multilingual bert? arXiv preprint arXiv:1911.03310.
+
+Lemao Liu, Masao Utiyama, Andrew Finch, and Eichiro Sumita. 2016. Neural machine translation with supervised attention. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3093-3102, Osaka, Japan. The COLING 2016 Organizing Committee.
+Yang Liu, Qun Liu, and Shouxun Lin. 2005. Log-linear models for word alignment. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 459-466, Ann Arbor, Michigan. Association for Computational Linguistics.
+Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
+Stephen Mayhew, Chen-Tse Tsai, and Dan Roth. 2017. Cheap translation for cross-lingual named entity recognition. In Proceedings of the 2017 conference on empirical methods in natural language processing, pages 2536-2545.
+Graham Neubig, Zi-Yi Dou, Junjie Hu, Paul Michel, Danish Pruthi, and Xinyi Wang. 2019. compare-mt: A tool for holistic comparison of language generation systems. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 35-41, Minneapolis, Minnesota. Association for Computational Linguistics.
+Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. Pointwise prediction for robust, adaptable japanese morphological analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 529-533.
+Garrett Nicolai and David Yarowsky. 2019. Learning morphosyntactic analyzers from the bible via iterative annotation projection across 26 languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1765-1774.
+Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational linguistics, 29(1):19-51.
+Robert Östling and Jörg Tiedemann. 2016. Efficient word alignment with markov chain monte carlo. The Prague Bulletin of Mathematical Linguistics.
+Sebastian Padó and Mirella Lapata. 2009. Cross-lingual annotation projection for semantic roles. Journal of Artificial Intelligence Research, 36:307-340.
+Santanu Pal, Sudip Kumar Naskar, Mihaela Vela, Qun Liu, and Josef van Genabith. 2017. Neural automatic post-editing using prior alignment and reranking. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 349-355,
+
+Valencia, Spain. Association for Computational Linguistics.
+
+Ben Peters, Vlad Niculae, and André F. T. Martins. 2019. Sparse sequence-to-sequence models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1504-1519, Florence, Italy. Association for Computational Linguistics.
+
+Masoud Jalili Sabet, Philipp Duffer, François Yvon, and Hinrich Schütze. 2020. SimAlign: High quality word alignments without parallel training data using static and contextualized embeddings. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1627-1643, Online. Association for Computational Linguistics.
+
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
+
+Kai Song, Kun Wang, Heng Yu, Yue Zhang, Zhongqiang Huang, Weihua Luo, Xiangyu Duan, and Min Zhang. 2020a. Alignment-enhanced transformer for constraining nmt with pre-specified translations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8886-8893.
+
+Kai Song, Xiaqing Zhou, Heng Yu, Zhongqiang Huang, Yue Zhang, Weihua Luo, Xiangyu Duan, and Min Zhang. 2020b. Towards better word alignment in transformer. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28:1801-1812.
+
+Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679-1684, Florence, Italy. Association for Computational Linguistics.
+
+Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. Bert rediscovers the classical nlp pipeline. arXiv preprint arXiv:1905.05950.
+
+Jörg Tiedemann. 2014. Rediscovering annotation projection for cross-lingual parser induction. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1854-1864.
+
+Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 76-85, Berlin, Germany. Association for Computational Linguistics.
+
+Shuo Wang, Zhaopeng Tu, Shuming Shi, and Yang Liu. 2020. On the inference calibration of neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3070-3079, Online. Association for Computational Linguistics.
+
+Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
+
+David Yarowsky, Grace Ngai, and Richard Vicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the First International Conference on Human Language Technology Research.
+
+Thomas Zenkel, Joern Wuebker, and John DeNero. 2019. Adding interpretable attention to neural translation models improves word alignment. arXiv preprint arXiv:1901.11359.
+
+Thomas Zenkel, Joern Wuebker, and John DeNero. 2020. End-to-end neural word alignment outperforms GIZA++. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1605-1617, Online. Association for Computational Linguistics.
+
+# A The Corpus Size Effect and The Self-Correction Effect on En-Fr
+
+The corpus size effect is presented in Figure 5. It shows the trend same to Figure 3, though the trend is not so significant for En-Fr. The self-correction effect is presented in Figure 6. The effect is the same to those in the other four language pairs.
+
+
+Figure 5: The effect of the different sizes of the parallel corpus for En-Fr fine-tuning.
+
+
+Figure 6: The self-correction effect in En-Fr fine-tuning process.
+
+
MaskAlign
Fine-Tuning mBERT
P
R
P
R
Zh-En
81.9
87.3
88.8
88.1
De-En
89.2
78.9
90.9
79.6
En-Fr
95.1
96.6
96.5
95.4
Ro-En
81.8
77.6
87.3
74.6
Ja-En
74.4
48.1
80.6
55.0
+
+# B Precision and Recall of Predicted Word Alignments
+
+Besides AER, we also evaluate the word alignment predictions by computing precision and recall using the gold alignments in the test sets. MaskAlign is used in this study due to its best performance among the third-party aligners. Its word alignments are used to supervise the fine-tuning of mBERT. The precision and recall are reported in Table 7. It shows that precision is always significantly improved after the fine-tuning, while recall improvement is not so significant. On En-Fr and Ro-En, recall is slightly worse after the fine-tuning.
+
+# C 40k Sentence Pairs for Both Training The Third-Party Aligner and Fine-tuning
+
+Table 7: Precision and Recall of MaskAlign predictions and the results of fine-tuning mBERT supervised by MaskAlign.
+
+
Zh-En
De-En
En-Fr
Ro-En
Ja-En
AVG
w/o Fine-tuning
17.9
17.4
5.6
27.3
45.2
22.7
MaskAlignadapted
82.2
36.4
19.9
51.1
90.6
55.9
Fine-tuning
17.3
16.8
5.2
23.6
44.2
21.4
+
+We use smaller parallel corpus, which consists of 40k sentence pairs for both training the third
+
+Table 8: AER of fine-tuning mBERT with the third-party supervision, which is generated by MaskAlignadapted trained on 40k sentence pairs.
+
+
80k
40k
P
R
P
R
Zh-En
89.8
72.3
87.7
9.9
De-En
92.9
62.0
94.5
48.0
En-Fr
92.4
92.4
92.2
67.3
Ro-En
85.0
70.4
87.3
34.0
Ja-En
81.9
24.9
84.8
5.0
+
+Table 9: Precision and Recall of MaskAlignadapted predictions with different sizes of parallel training corpus
+
+party aligner and fine-tuning. Table 8 shows the result. AER of $\mathsf{MaskAlign}_{\mathsf{adapted}}$ deteriorates sharply compared to training it on 80k sentence pairs shown in Table 6, but fine-tuning with such worse alignments as the supervision still gets better AER than without fine-tuning. We investigate the precision and recall of $\mathsf{MaskAlign}_{\mathsf{adapted}}$ listed in Table 9, and find that it always obtains high precision, and these fewer but accurate alignments are useful supervision information for the fine-tuning.
\ No newline at end of file
diff --git a/thirdpartyalignerforneuralwordalignments/images.zip b/thirdpartyalignerforneuralwordalignments/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..aca73ae9875b674622717a2f4ca788acd48d1642
--- /dev/null
+++ b/thirdpartyalignerforneuralwordalignments/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ae900a086e8b045849cc23d4eb4cfccfe2bf57e6559d7e65a68c14888be369a2
+size 555438
diff --git a/thirdpartyalignerforneuralwordalignments/layout.json b/thirdpartyalignerforneuralwordalignments/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e3fde7d4a6db7b5ec7b59efd0a5758b7e57faa4a
--- /dev/null
+++ b/thirdpartyalignerforneuralwordalignments/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:56fc10d2ca979eb81b166350f2cd33e3e7f95ffad6820695e5bdf926dd30001e
+size 394432
diff --git a/timeawarepromptingfortextgeneration/eee9b421-382b-4aeb-b370-75d53ce6984e_content_list.json b/timeawarepromptingfortextgeneration/eee9b421-382b-4aeb-b370-75d53ce6984e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f84b28faf5c752ea2dac920cc01eb556050fda4f
--- /dev/null
+++ b/timeawarepromptingfortextgeneration/eee9b421-382b-4aeb-b370-75d53ce6984e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:43631da3bc65df9e0e330abf387543261c5ccfc2f487ab6bf1108f01ebb92f9d
+size 112240
diff --git a/timeawarepromptingfortextgeneration/eee9b421-382b-4aeb-b370-75d53ce6984e_model.json b/timeawarepromptingfortextgeneration/eee9b421-382b-4aeb-b370-75d53ce6984e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..86aa12dcc827b9aab36de09ac9464147610affa5
--- /dev/null
+++ b/timeawarepromptingfortextgeneration/eee9b421-382b-4aeb-b370-75d53ce6984e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d66666538ea5c1ad270fac522c8b6be7443aacbf58a7c0b999dda6086e29578a
+size 133057
diff --git a/timeawarepromptingfortextgeneration/eee9b421-382b-4aeb-b370-75d53ce6984e_origin.pdf b/timeawarepromptingfortextgeneration/eee9b421-382b-4aeb-b370-75d53ce6984e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..dd44d572bbaea9dae96dfe437dc9e043336240ad
--- /dev/null
+++ b/timeawarepromptingfortextgeneration/eee9b421-382b-4aeb-b370-75d53ce6984e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3450242050a9d34be411ee371ae12d419be1b788f75186144d70398e994ac535
+size 523853
diff --git a/timeawarepromptingfortextgeneration/full.md b/timeawarepromptingfortextgeneration/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..0df07c1be154595af336a5e9929ec855f0de8781
--- /dev/null
+++ b/timeawarepromptingfortextgeneration/full.md
@@ -0,0 +1,426 @@
+# Time-aware Prompting for Text Generation
+
+Shuyang Cao and Lu Wang
+
+Computer Science and Engineering
+
+University of Michigan
+
+Ann Arbor, MI
+
+{caoshuy, wangluxy}@umich.edu
+
+# Abstract
+
+In this paper, we study the effects of incorporating timestamps, such as document creation dates, into generation systems. Two types of time-aware prompts are investigated: (1) textual prompts that encode document timestamps in natural language sentences; and (2) linear prompts that convert timestamps into continuous vectors. To explore extrapolation to future data points, we further introduce a new data-to-text generation dataset, TEMPWIKIBIO, containing more than 4 millions of chronologically ordered revisions of biographical articles from English Wikipedia, each paired with structured personal profiles. Through data-to-text generation on TEMPWIKIBIO, text-to-text generation on the content transfer dataset, and summarization on XSum, we show that linear prompts on encoder and textual prompts improve the generation quality on all datasets. Despite having less performance drop when testing on data drawn from a later time, linear prompts focus more on nontemporal information and are less sensitive to the given timestamps, according to human evaluations and sensitivity analyses. Meanwhile, textual prompts establish the association between the given timestamps and the output dates, yielding more factual temporal information in the output.
+
+# 1 Introduction
+
+Temporal information, such as publication and modification dates of documents, is an inherent attribute of documents. Both document writers and readers are aware of this information when organizing and consuming document content. For example, an event reported by a news article is likely to happen right on the publication date. However, state-of-the-art generation models are fine-tuned from large pre-trained models without incorporating temporal information (Lewis et al., 2020; Zhang et al., 2020), creating a gap between document processing by humans and automatic models. Though
+
+previous work has split datasets according to temporal information and shown deteriorated performance of large pre-trained models as knowledge becomes outdated on future data (Lazaridou et al., 2021; Jang et al., 2022), it is unclear how informing models of temporal information affects generation tasks.
+
+Therefore, this work aims to study the effects of presenting temporal information to generation models. Concretely, to include timestamps in model inputs, we consider prepending two types of time-aware prompts to the encoder or the decoder. First, textual prompts encode timestamps within natural language descriptions, as commonly used by the recent prompt engineering work (Radford et al., 2019; Raffel et al., 2019). We further explore linear prompts that map timestamps to continuous vectors via linear projections.
+
+Concretely, we fine-tune BART (Lewis et al., 2020) with time-aware prompts and conduct experiments on two text-to-text generation tasks: (1) content transfer (Prabhumoye et al., 2019) that generates the continuation of a passage using information from a given document, and (2) summarizing news articles with XSum (Narayan et al., 2018). To study time-aware prompts' capability of extrapolating to future dates, we introduce TEMPWIKIBIO, a data-to-text dataset containing timestamped revisions of biographical articles from English Wikipedia, each paired with an infobox as input. The revisions record changes of personal profiles from 2004 to 2021. $^{1}$ For all experimented datasets, dated events are critical. We first evaluate model outputs with automatic metrics to examine the effects of temporal information on output informativeness. Human judges are then asked to additionally rate the factuality of model outputs and determine if the improvement or degradation is due to date changes
+
+
BART: Joseph Melville Broughton Jr. (November 17, 1888 – March 6, 1949) was the 60th Governor of the U.S. state of North Carolina from 1941 to 1945.
+Linear Prompt: Joseph Melville Broughton Jr. (November 17, 1888 – March 6, 1949) was the 60th Governor of the U.S. state of North Carolina from 1941 to 1945 and a United States Senator from 1948 until his death in 1949.
BBC News: The group made a loss of $219m (£175.1m) compared with the same time last year ... This segment posted another very strong quarter ...
Original Publication Date: 2017-02-09
+BART: News Corp has reported a loss for the first three months of the year.
+Linear Prompt: News Corp has reported a loss for the first three months of the year.
+Textual Prompt: News Corp has reported a loss for the three months to December.
Date Perturbation: 6 months after ↦ 2017-08-09
+Textual Prompt: News Corp has reported a loss for the second quarter.
Date Perturbation: 1 month after ↦ 2017-03-09
+Textual Prompt: News Corp has reported a loss for the first three months of the year.
+
+Figure 1: Sample system outputs on TEMPWIKIBIO for data-to-text generation and on XSum for summarization. We highlight relevant temporal information in the input and corresponding correct (incorrect) information in the model outputs. Linear prompts could encourage selecting important dates on TEMPWIKIBIO, but the temporal information encoded in the linear prompts can not be captured by the model, leading to incorrect dates when resolving with the provided dates is required on XSum; while the model with textual prompts is sensitive to the provided dates and generates correct date, it lacks world knowledge (e.g., seasonal earning is only reported after the season) to handle the last case after perturbing the original publication date.
+
+in the outputs. Finally, we perform a sensitivity analysis by making perturbations to the original dates (e.g., setting the dates to one year before) and providing models with the perturbed dates, and then inspect the changes of outputs. We find that:
+
+- Time-aware prompts improve the model performance over the no-prompt baseline in $87.5\%$ of comparisons on different metrics and datasets. Linear prompts work better on the data-to-text dataset, while textual prompts work better on the text-to-text datasets, partly due to modal compatibility.
+- The improvement in output informativeness and factuality by linear prompts is less fre
+
+sequently related to modifying temporal information in the outputs than textual prompts, according to human judges. Moreover, models with linear prompts are less sensitive to the given temporal information, suggesting that linear prompts assist the processing of non-temporal content.
+
+- Textual prompts associate the provided dates with the dates to be generated in the outputs, producing more factual time-related information. However, models with textual prompts could generate incorrect dates when complicated world knowledge is required to perform reasoning, as shown in the last example in Figure 1.
+
+# 2 Related Work
+
+Temporal Generalization in NLP. Early work on temporal generalization focuses on detecting the shifts of n-gram frequencies over time (Michel et al., 2011) and detecting word meaning changes (Wijaya and Yeniterzi, 2011; Kulkarni et al., 2015). Besides linguistic shifts, model degradation on downstream tasks has been reported when tested on samples at a different time from the training data (Huang and Paul, 2018; Lukes and Søgaard, 2018; Lazaridou et al., 2021; Agarwal and Nenkova, 2021). In this work, we study the temporal generalization of our time-aware prompts, since they are constructed with temporal information.
+
+Prompt Engineering. Prompts have been a common tool for controllable generation (Fan et al., 2018; Radford et al., 2019; Keskar et al., 2019; Raffel et al., 2019). Instructions are also constructed as prompts to allow large models to perform new tasks that are unseen in training (Brown et al., 2020; Sanh et al., 2022). More recently, prompts, either hand-crafted (Schick and Schütze, 2021; Gao et al., 2021) or learned (Li and Liang, 2021), are found to benefit model learning and improve few-shot performance on downstream tasks. Our textual time-aware prompts extend the year-level prompts in Dhingra et al. (2021) with months and days to incorporate fine-grained temporal information, and we further explore representing timestamps with linear prompts which have been mainly used for length-controlled generation (Kikuchi et al., 2016).
+
+
+Figure 2: A linear prompt treats year/month/day as separate scalars and projects them into continuous prompt vectors to be used on encoder or decoder. The vector's scale reflects their temporal orderings. Note that the dimension of prompt vectors is the same as the embedding dimension in the actual model.
+
+# 3 Time-aware Prompts
+
+We study two types of prompts that are prepended to the encoder/decoder of a seq2seq model, to inform the model of temporal information.
+
+Textual Prompt. Given a document's timestamp, we first convert it to "day month year" with the day and the year in digits and the month in its textual form (e.g., "18 January 2015"), a format commonly used by mainstream media such as BBC news. We test three textual prompts and use the one that results in the highest ROUGE score on the development set of XSum, i.e., "Today is [timestamp]." ("Today is 18 January 2015.") Other textual prompts are detailed in Appendix A. Compared to only inserting the year information (Dhingra et al., 2021), textual prompts in our paper provide more fine-grained temporal information.
+
+Linear Prompts treat the concept of time as an axis, with each timestamp being mapped to a point on it. Concretely, we use the year, month, and day as scalars and transform them into prompt vectors through linear projections, as illustrated in Figure 2. The process of linear projections can also be viewed as changing the scales of vectors for the year, month, and day. While prior work has controlled the output lengths by changing the scales of memory cells in an LSTM (Kikuchi et al., 2016), representing temporal information with scales of vectors has yet been studied.
+
+
+Figure 3: Numbers of samples and the corresponding time periods of revisions in different splits of TEMP-WIKIBIO. Changed attributes of the sample revisions are shaded in yellow. There is no overlapping subject between training/development sets and the two test sets.
+
+# 4 TEMPWIKIBIO Data Collection
+
+To study how well time-aware prompts can extrapolate to future data, we collect TEMPWIKIBIO, which has 4,277,450 revisions of infobox-paragraph pairs from 2004 to 2021 for 695,929 Wikipedia biography articles, extending WIKIBIO (Lebret et al., 2016), which only includes the latest revision per article by 2015. Importantly, the profile (e.g., titles, awards, etc) of a person shown in the infobox changes over time (Figure 3).
+
+Concretely, for each biography, we pick its latest revision every X days since the first revision, where X is sampled uniformly from [270, 450], to diversify the timestamps included in the data. We then extract the infobox and the lead paragraph per revision. As illustrated in Figure 3, two test sets are created: test-same time contains articles that are published at the same time as the training set, while test-future consists of samples that are created (or revised) after training and development sets. We further ensure that the subjects of biographies in both test sets are not in training or development sets. On average, each revision has 15.3 attributes in the infobox and 43.2 words in the first paragraph. Details of data collection are included in Appendix B.
+
+# 5 Experiments and Results
+
+For data-to-text generation on TEMPWIKIBIO, we linearize the infobox and use it as the input to BART. Common data-to-text metrics (Gehrmann et al., 2021) are used, including BLEU-4 (Papineni et al., 2002), METEOR (Lavie and Agarwal, 2007), TER (Snover et al., 2006), and BERTScore (Zhang* et al., 2020). For text-to-text summarization on XSum (Narayan et al., 2018) that consists of news articles and their corresponding summaries from BBC News, we report ROUGE scores (Lin, 2004)
+
+
Prompt
B-4 (↑)
MTR (↑)
TER (↓)
BS (↑)
Test Future
-
30.48*
49.88*
69.66*
55.34*
ENC:T
30.76*
50.10*
69.33*
55.38*
ENC:L
31.22*
50.32*
68.66*
55.79*
DEC:T
31.05*
50.46*
69.98*
55.35*
DEC:L
30.69*
49.94
69.03*
55.52*
Test Same Time
-
30.81*
50.15*
70.26*
55.51*
ENC:T
31.27*
50.55*
69.78*
55.82*
ENC:L
31.50*
50.56*
69.33*
56.00*
DEC:T
31.43*
50.73*
70.51
55.62
DEC:L
30.91
50.11
69.86
55.59
+
+Table 1: Results on TEMPWIKIBIO. Textual (T) and linear (L) prompts are used on encoder (ENC) or decoder (DEC). B: BLEU; MTR:METEOR; BS:BERTScore. The best result per metric is in boldface and the second best is in italics. Improvement over the no-prompt baseline is shaded. *: significantly better than the baseline with approximate randomization test $(p < 0.001)$ .
+
+
Prompt
Content Transfer
XSum
R-L
MTR
BS
R-1
R-2
QEval
-
27.52
27.55
29.86
45.23
22.11
47.73
ENC:T
28.15*
28.40*
30.59*
45.63*
22.38*
47.76
ENC:L
27.82*
27.99*
30.33*
45.32
22.22
47.76
DEC:T
28.41*
28.84*
30.90*
45.59*
22.45*
47.70
DEC:L
27.62
27.68
30.13*
44.94
21.91
47.51
+
+Table 2: Results on the content transfer and XSum datasets. R: ROUGE; QEval: QuestEval.
+
+and QuestEval (Scialom et al., 2021), a QA-based faithfulness evaluation metric that checks if questions created from the summary can be addressed by reading the document with a QA model, and vice versa. The content transfer dataset (Prabhumoye et al., 2019) considers sentences containing citations of news sources in Wikipedia articles as the target for generation. Many target sentences incorporate important dates from the cited articles, thus making it suitable to test our time-aware prompt design. To generate each target sentence, the context passage, which contains three sentences preceding the target sentence, and the cited news article are provided as input. ROUGE-L, METEOR, and BERTScore are computed for evaluation on the content transfer dataset.
+
+Automatic Evaluation. Overall, models with time-aware prompts obtain better performance than the no-prompt baseline. Models with time-aware prompts win 49 of all 56 comparisons against the baseline on different metrics and datasets, indicating that adding time-aware prompts encourages the models to generate more informative outputs.
+
+Linear prompts tend to work better on the encoder when the input is structured data, achieving the best overall performance on TEMPWIKIBIO (Table 1). Linear prompts also show less performance degradation than textual prompts when testing future samples. However, the better extrapolation performance by the model with linear prompts might be due to its lower sensitivity to the provided dates, as later revealed in our analysis.
+
+When the input and output are both in natural language, textual prompts are more suitable, as evidenced by the better performance than linear prompts on all metrics on both the content transfer dataset and XSum (Table 2). We think that on text-to-text generation tasks, textual prompts benefit from modal compatibility and have an advantage of connecting the salient content with the timestamps.
+
+Human Evaluation. We hire three fluent English speakers to evaluate 80 sets of paragraphs generated for TEMPWIKIBIO samples at a future time and 80 sets of sentences generated for content transfer samples by two models with time-aware prompts. The judges compare the output by each model against the output by the no-prompt baseline on two aspects: informativeness — whether the model output covers salient information in the input; and factuality — whether the model output is factually correct. For each set, the judges only know which output is generated by the baseline, and outputs by other models are randomly sorted. Besides the three-way label (win/tie/lose), we ask the judges to determine if the difference in each aspect between each pair of comparisons is dated-related.
+
+As shown in Figure 4, only a small portion of improvements by linear prompts is date-related, especially on the content transfer dataset where none of the outputs by the model with linear prompts is rated as having better factuality due to the inclusions or changes of dates, suggesting that linear prompts might help process other content. By contrast, the model with textual prompts focuses more on the temporal information and brings more factual dates into the outputs on the content transfer dataset.
+
+Analysis via Date Perturbation. We probe into date sensitivity, to understand the mechanisms behind the two types of prompts. Specifically, the original timestamps of 2000 samples randomly selected from the test set of each dataset are per
+
+
+Figure 4: Percentages of samples that win or lose over the no-prompt baseline, on Test-Future of TEMPWIKIBIO and the content transfer dataset. While both time-aware prompts improve informativeness and factuality on TEMPWIKIBIO, textual prompts are more often rated as having date-related improvements in informativeness and factuality on the content transfer dataset. Krippendorff's $\alpha$ : 0.85 (informativeness); 0.64 (factuality).
+
+turbed and provided to the models. As indicated by the greater edit distances between the outputs produced with the perturbed dates and original dates (Figure 5), models with textual prompts are more sensitive to the given dates than models with linear prompts. Human inspection of the outputs by the model with linear prompts and perturbed dates also finds that their changes from the original outputs are not related to the temporal information, echoing that the improvements in informativeness and factuality are less date-related according to human judges. For the model with textual prompts, we observe a need for learning complicated world knowledge to generate correct dates more frequently when provided perturbed dates, as shown in Figure 1 and Figure 9 in Appendix G.
+
+In addition, the greater differences of ROUGE-L scores suggest a more significant dependency on the temporal information by the content transfer dataset, where the publication dates of documents are often required to generate the outputs (Figure 8 in Appendix G), calling for the inclusion of metadata during data collection.
+
+# 6 Conclusion
+
+We study two types of time-aware prompts for injecting document timestamps into generation models. Experiments on TEMPWIKIBIO, our newly collected data-to-text generation dataset, and two text-to-text generation tasks show that linear prompts mostly enhance the processing of content other than dates for more informative and factual outputs. Textual prompts build the association
+
+
+Figure 5: Edit distances and differences of BLEU-4/ROUGE-L between outputs with perturbed dates and original dates on TEMPWIKIBIO and content transfer. Linear prompts are not sensitive to the given dates. Results on XSum are in Appendix C.
+
+between the given temporal information and the generated temporal information, producing outputs with more factual dates.
+
+# Acknowledgements
+
+This work is supported in part by National Science Foundation through grant IIS-2046016, and Oracle Cloud credits and related resources provided by the Oracle for Research program. We thank the anonymous reviewers for their valuable suggestions.
+
+# Ethical Consideration
+
+Our work assumes the timestamps of documents can be accurately obtained and the models are always provided with the accurate creation dates. However, this might not be the case for some documents, especially the ones that are first published in a paper format and later digitized into electronic versions. Informing generation models of inaccurate timestamps could lead to incorrect content generation and other unpredictable behaviors, where fabricated facts might be picked up by end users, potentially causing harm to the public.
+
+# Limitation
+
+Though we show that textual time-aware prompts help models generate more factually consistent outputs, we find that models with temporal prompts could generate incorrect temporal information due to the lack of world knowledge (Figure 9 of Appendix G). In this work, we do not further study methods that can incorporate extra world knowledge to address this issue.
+
+During model evaluation, we investigate the effects of time-aware prompts on the generated temporal information via human evaluation, which includes 160 outputs by each model (320 in total). We believe automatic metrics that verify the correctness of temporal information in the outputs can better validate the improvements by our models. However, such automatic metrics do not exist. A potential design of temporal information evaluation metrics is to combine event and temporal expression extraction systems. We made several attempts at this design, but the performance of the event and temporal expression extraction systems we tested needs further improvement.
+
+To obtain the timestamp of each sample, we rely on the automatic web archive (Appendix B). However, this approach for timestamp retrieval only applies to datasets that are based on web sources (e.g., news articles and blog posts). In addition, less popular web sources are less likely to be archived by automatic web archive service, which makes retrieving their timestamps more complicated and prevents the adoption of our methods.
+
+# References
+
+Oshin Agarwal and Ani Nenkova. 2021. Temporal effects on pre-trained models for language processing tasks. CoRR, abs/2111.12790.
+Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. "O'Reilly Media, Inc."
+Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.
+Bhuwan Dhingra, Jeremy R. Cole, Julian Martin Eisenschlos, Daniel Gillick, Jacob Eisenstein, and William W. Cohen. 2021. Time-aware language models as temporal knowledge bases.
+
+Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, and William Cohen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4884–4895, Florence, Italy. Association for Computational Linguistics.
+Angela Fan, David Grangier, and Michael Auli. 2018. Controllable abstractive summarization. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 45-54, Melbourne, Australia. Association for Computational Linguistics.
+Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816-3830, Online. Association for Computational Linguistics.
+Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Anuoluwapo Aremu, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna-Adriana Clinciu, Dipanjan Das, Kaustubh Dhole, Wanyu Du, Esin Durmus, Ondrej Dusek, Chris Chinenye Emezue, Varun Gangal, Cristina Garbacea, Tatsunori Hashimoto, Yufang Hou, Yacine Jernite, Harsh Jhamtani, Yangfeng Ji, Shailza Jolly, Mihir Kale, Dhruv Kumar, Faisal Ladhak, Aman Madaan, Mounica Maddela, Khyati Mahajan, Saad Mahamood, Bodhisattwa Prasad Majumder, Pedro Henrique Martins, Angelina McMillan-Major, Simon Mille, Emiel van Miltenburg, Moin Nadeem, Shashi Narayan, Vitaly Nikolaev, Andre Niyongabo Rubungo, Salomey Osei, Ankur Parikh, Laura Perez-Beltrachini, Niranjan Ramesh Rao, Vikas Raunak, Juan Diego Rodriguez, Sashank Santhanam, João Sedoc, Thibault Sellam, Samira Shaikh, Anastasia Shimorina, Marco Antonio Sobrevilla Cabezudo, Hendrik Strobelt, Nishant Subramani, Wei Xu, Diyi Yang, Akhila Yerukola, and Jiawei Zhou. 2021. The GEM benchmark: Natural language generation, its evaluation and metrics. In Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), pages 96–120, Online. Association for Computational Linguistics.
+Xiaolei Huang and Michael J. Paul. 2018. Examining temporality in document classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 694-699, Melbourne, Australia. Association for Computational Linguistics.
+Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun KIM, Stanley Jungkyu Choi, and Minjoon Seo. 2022. Towards continual knowledge learning of language models. In International Conference on Learning Representations.
+
+Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for controllable generation.
+Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. 2016. Controlling output length in neural encoder-decoders. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1328-1338, Austin, Texas. Association for Computational Linguistics.
+Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. Statistically significant detection of linguistic change. In Proceedings of the 24th international conference on world wide web, pages 625-635.
+Alon Lavie and Abhaya Agarwal. 2007. METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 228-231, Prague, Czech Republic. Association for Computational Linguistics.
+Angeliki Lazaridou, Adhi Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liska, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d'Autume, Tomas Kocisky, Sebastian Ruder, et al. 2021. Mind the gap: Assessing temporal generalization in neural language models. Advances in Neural Information Processing Systems, 34.
+R'emi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1203-1213, Austin, Texas. Association for Computational Linguistics.
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
+Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582-4597, Online. Association for Computational Linguistics.
+Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
+
+Jan Lukes and Anders Søgaard. 2018. Sentiment analysis under temporal shift. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 65-71, Brussels, Belgium. Association for Computational Linguistics.
+Jean-Baptiste Michel, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K. Gray, null null, Joseph P. Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, Steven Pinker, Martin A. Nowak, and Erez Lieberman Aiden. 2011. Quantitative analysis of culture using millions of digitized books. Science, 331(6014):176-182.
+Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797-1807, Brussels, Belgium. Association for Computational Linguistics.
+Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. *fairoseq: A fast, extensible toolkit for sequence modeling*. In *Proceedings of NAACL-HLT* 2019: Demonstrations.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
+Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Belgium, Brussels. Association for Computational Linguistics.
+Shrimai Prabhumoye, Chris Quirk, and Michel Galley. 2019. Towards content transfer through grounded text generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2622-2632, Minneapolis, Minnesota. Association for Computational Linguistics.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.
+Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine
+
+Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multi-task prompted training enables zero-shot task generalization. In International Conference on Learning Representations.
+
+Timo Schick and Hinrich Schütze. 2021. It's not just size that matters: Small language models are also few-shot learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339-2352, Online. Association for Computational Linguistics.
+
+Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, Alex Wang, and Patrick Gallinari. 2021. QuestEval: Summarization asks for fact-based evaluation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6594-6604, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+
+Matthew Snover, Bonnie Dorr, Rich Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers, pages 223-231, Cambridge, Massachusetts, USA. Association for Machine Translation in the Americas.
+
+Derry Tanti Wijaya and Reyyan Yeniterzi. 2011. Understanding semantic change of words over centuries. In Proceedings of the 2011 International Workshop on DETecting and Exploiting Cultural Diversity on the Social Web, DETECT '11, page 35-40, New York, NY, USA. Association for Computing Machinery.
+
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+
+Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In International Conference on Machine Learning, pages 11328-11339. PMLR.
+
+
Prompt Design
R-1
R-2
R-L
1 (Date: ...)
45.70
22.57
37.47
2 (Today is ...)
45.71
22.55
37.49
3 (The following ...)
45.61
22.54
37.32
+
+Table 3: ROUGE scores on the dev set of XSum by models with different textual prompts on the encoder. Textual prompt "Today is [converted timestamp]." achieves the highest average ROUGE score and is used in our main experiments.
+
+Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. *Bertscore: Evaluating text generation with bert*. In International Conference on Learning Representations.
+
+# A Details of Prompts
+
+Other Textual Prompts. The three designs of textual prompts we try in our pilot study are:
+
+1. Date: [converted timestamp].
+2. Today is [converted timestamp].
+3. The following text is written on [converted timestamp].
+
+As shown in Table 3, the second design "Today is [converted timestamp]." yields the highest average ROUGE score on the development set of XSum. Note that performances by the three prompt designs do not vary greatly.
+
+# B Details of Datasets
+
+# B.1 TEMPWIKIBIO
+
+Data Collection. We use the English Wikipedia dump $^2$ processed on February 1, 2022 and collect revisions before 2021 that have complete infoboxes. To identify biographies from all articles, we use the article titles and IDs in WIKIBIO (Lebret et al., 2016), which are originally from WikiProject for biographies. $^3$ We then extract attributes of the infobox and the first paragraph of the article from each remaining revision with mwparserfromhell. $^4$ Revisions that do not contain complete infoboxes are discarded. We further discard the first five revisions of each article to avoid including revisions with less comprehensive information about the person. To limit the number of revisions for each
+
+
Year
Train
Dev
Test-Same Time
Test-Future
2004
56
3
1
-
2005
2,167
118
10
-
2006
24,509
1,234
132
-
2007
76,637
3,794
421
-
2008
133,467
6,770
802
-
2009
194,504
9,882
1,209
-
2010
248,768
12,578
1,685
-
2011
309,642
15,801
2,266
-
2012
371,008
18,893
2,927
-
2013
365,260
18,619
3,039
-
2014
397,155
20,572
3,531
-
2015
441,800
22,597
4,286
-
2016
535,693
27,771
5,462
-
2017
460,233
23,733
4,602
-
2018
445,955
22,872
4,169
-
2019
-
-
-
11,698
2020
-
-
-
11,998
2021
-
-
-
11,132
+
+article, we pick the latest revision every X days, where X is sampled uniformly from [270, 450] after picking a revision to diversify the timestamps of selected revisions.
+
+For the test split, the number of articles is downsampled to $10\%$ of the original number of articles created in the test time period to achieve a reasonable running time of decoding.
+
+Time Statistics. The numbers of revisions made in different years are shown in Table 4.
+
+Copyright Policy. We comply with the Wikipedia copyright policy5 to collect the TEMPWIKIBIO. TEMPWIKIBIO will be released under the CC BY-SA 3.0 license.6 The usage of TEMPWIKIBIO is limited by the copyright policy of Wikipedia.
+
+# B.2 Content Transfer
+
+The content transfer dataset (Prabhumoye et al., 2019) extracts sentences with citations to news outlets in Wikipedia as the sentences to be generated. The cited source documents then become the external source documents where the generation should be grounded. The three sentences preceding each sentence to be generated in the original Wikipedia article are taken as the context passage. As the source documents come from many different news
+
+Table 4: The numbers of TEMPWIKIBIO revisions made in different years, grouped by different splits.
+
+
Year
Train
Dev
Test
unknown
1,719
14
250
1996
186
1
19
1997
100
1
8
1998
34
0
3
1999
399
3
29
2000
1,012
13
92
2001
831
11
99
2002
3,561
57
294
2003
6,672
150
553
2004
4,119
63
443
2005
4,112
66
441
2006
7,323
89
581
2007
12,398
151
937
2008
17,920
176
1,562
2009
26,090
298
2,499
2010
30,505
311
2,612
2011
40,035
400
3,256
2012
71,152
708
6,219
2013
94,216
799
7,274
2014
84,661
830
7,193
2015
55,248
588
5,229
2016
49,927
572
4,455
2017
44,243
509
3,489
2018
22,200
215
2,316
2019
1,337
22
147
Total
580,000
5,000
50,000
+
+Table 5: Numbers of source documents for content transfer published in different years, grouped by different splits.
+
+sources, instead of constructing an extraction template for each new source, we query the Wayback Machine7 for the date when each source document was first archived to obtain the timestamp.
+
+Time Statistics. In Table 5, we report the numbers of source documents in the content transfer dataset published in different years.
+
+Copyright Policy. The content transfer dataset is publicly available with the usage limited by the MIT License.
+
+# B.3 XSum
+
+We conduct experiments on text summarization with XSum (Narayan et al., 2018), which contains articles from BBC News. During the construction of the dataset, the first sentence of each article is taken as the summary of the remaining content. The timestamp of each news article is extracted from its corresponding HTML file.
+
+Time Statistics. In Table 6, we report the numbers of articles in XSum published in different
+
+
Year
Train
Dev
Test
2009
0
0
1
2010
1,142
60
62
2011
2,820
154
153
2012
5,450
304
319
2013
7,939
409
420
2014
15,409
810
855
2015
49,041
2,792
2,736
2016
70,922
3,928
3,983
2017
51,322
2,875
2,805
Total
204,045
11,332
11,334
+
+Table 6: Numbers of XSum articles published in different years, grouped by different splits.
+
+
Prompt
Parent
# Date
Test Future
-
56.30
1.33
ENC:T
56.40
1.33
ENC:L
56.57*
1.33
DEC:T
56.52*
1.35
DEC:L
56.36
1.32
Test Same Time
-
57.57
1.25
ENC:T
57.74*
1.25
ENC:L
57.82*
1.25
DEC:T
57.88*
1.26
DEC:L
57.57
1.24
+
+years.
+
+Copyright Policy. XSum dataset is publicly available $^{10}$ with the usage limited by the MIT License.
+
+# C Additional Results
+
+TEMPWIKIBIO. We additionally evaluate the model outputs on TEMPWIKIBIO with PARENT (Dhingra et al., 2019) and report the average number of dates in each model output. As shown in Table 7, the trend of PARENT scores is similar to other metrics, where the model with linear prompts on the encoder achieves the best result on samples drawn at a future time, while the model with textual prompts on the decoder achieves the best result on samples drawn at the same time period of the training and development sets.
+
+Content Transfer and XSum. We report BLEU-4 on content transfer and ROUGE-L on XSum in
+
+Table 7: Additional results on TEMPWIKIBIO. Textual (T) and linear (L) prompts are used on the encoder (ENC) or decoder (DEC). The best result is in boldface and the second best is in italics. Improvement over the no-prompt baseline is shaded. *: significantly better than the baseline with approximate randomization test $(p < 0.001)$ .
+
+
Prompt
Content Transfer
XSum
BLEU-4
# Date
ROUGE-L
# Date
-
11.05
0.530
37.04
0.275
ENC:T
11.52*
0.617
37.34*
0.289
ENC:L
11.27*
0.548
37.15
0.272
DEC:T
11.66*
0.610
37.34*
0.293
DEC:L
11.10
0.536
36.84
0.287
+
+Table 8: Additional results on the content transfer and XSum datasets.
+
+
+Figure 6: Edit distances and differences of ROUGE-2 between outputs with perturbed dates and original dates on XSum. Similar to the results on TEMPWIKIBIO and content transfer, linear prompts are not sensitive to the given dates.
+
+Table 8. Textual prompts yield the best performance on the two datasets.
+
+Sensitivity Analyses. We show the results of sensitivity analyses with date perturbation on XSum in Figure 6. Linear prompts again do not show sensitivity to the given dates. Compared to feeding the model with dates that are a year later or earlier, greater drops of ROUGE-2 are observed when feeding the models with dates that are 6 months later or earlier. This suggests that XSum emphasizes resolving date relations of months and days.
+
+# D Experiments with T5
+
+We also conduct experiments with the T5 pre-trained model (Raffel et al., 2019). As T5-large has 370 million more parameters than bart.large that has 400 million parameters, we use T5-base, which has 220 million parameters. We do not experiment on XSum, where the performance by T5 is shown to be much lower than BART (Gehrmann et al., 2021).
+
+Results on TEMPWIKIBIO and content transfer are shown in Tables 9 and 10. While linear prompts still work better on TEMPWIKIBIO samples drawn
+
+
Prompt
B-4 (↑)
MTR (↑)
TER (↓)
BS (↑)
Test Future
-
25.24
50.29
70.89
42.94
ENC:T
25.19
50.27
71.03
42.86
ENC:L
25.25
50.32
70.79
42.96
Test Same Time
-
24.33
50.00
72.30
42.54
ENC:T
24.38
50.06
72.19
42.58
ENC:L
24.35
50.01
72.21
42.57*
+
+Table 9: Results on TEMPWIKIBIO with T5 as the base model. Textual (T) and linear (L) prompts are used on encoder (ENC) or decoder (DEC). B: BLEU; MTR: METEOR; BS: BERTScore. The best result per metric is in boldface. Improvement over the no-prompt baseline is shaded.
+
+
Prompt
R-L
B-4
MTR
BS
-
22.71
7.65
22.31
24.04
ENC:T
23.17*
8.00*
23.11*
24.73*
ENC:L
22.68
7.61
22.30
23.96
+
+Table 10: Results on the content transfer dataset with T5 as the base model. R: ROUGE. $*$ : significantly better than the baseline with approximate randomization test $(p < 0.001)$ .
+
+at a future time and textual prompts work better on text-to-text content transfer, the improvements by linear prompts are less substantial. We conjecture that T5 is pre-trained with natural language prefixes for multiple tasks and prefers textual prompts.
+
+# E Details of Human Evaluation
+
+Figures 10 and 11 include the instructions provided to annotators for our human evaluation. All annotators are college students based in the U.S. The purpose of the annotation study and the usage of collected data are explained to the annotators before the annotation begins. We compensate each annotator with $15 per hour.
+
+# F Details of Implementation
+
+For experiments with BART (Lewis et al., 2020), we use bart.large. $^{11}$ For experiments with T5 (Raffel et al., 2019), we use T5-base. $^{12}$ Fairseq (Ott et al., 2019) $^{13}$ is used for model training and decoding with BART. HuggingFace Transformer (Wolf et al., 2020) is used for decoding with T5. Experiments are conducted with NVIDIA A6000 GPU with 48GB memory.
+
+Training Settings. For training on all datasets with BART, we first follow the hyperparameter setting provided by the original BART training script for $\mathrm{XSum}^{14}$ except that we set the total number of update steps to 30,000 for TEMPWIKIBIO and 35,000 for the content transfer dataset. In addition, we adjust the accumulated batch size for training on TEMPWIKIBIO to have 65,536 tokens in each batch. We then tune the learning rates on TEMPWIKIBIO and the content transfer dataset by searching through $1\times 10^{-5}$ , $3\times 10^{-5}$ , and $5\times 10^{-5}$ with the model without prompts. Based on the BLEU-4 scores on the development sets, we choose $5\times 10^{-5}$ for TEMPWIKIBIO and $3\times 10^{-5}$ for the content transfer dataset. Each model is trained for one run with one random seed due to the high computational cost of fine-tuning large models. For experiments with T5, we follow the default parameters suggested by HuggingFace.
+
+Decoding Settings. We use beam search with beam sizes of 4, 4 and 6 for decoding on TEMP-WIKIBIO, content transfer, and XSum. The maximum decoding lengths are set to 100, 100, and 60 for TEMP-WIKIBIO, content transfer, and XSum.
+
+Running Time. When using 4 GPUs, training on TEMPWIKIBIO, content transfer, and XSum takes 11, 7, and 2 hours. Meanwhile, decoding on TEMPWIKIBIO, content transfer, and XSum respectively takes 2 hours, 1 hour, and 15 minutes with 1 GPU.
+
+Evaluation. We use sacreBLEU (Post, 2018) $^{15}$ for calculating the BLEU and TER scores. To obtain the METEOR (Lavie and Agarwal, 2007) score, we use NLTK (Bird et al., 2009). The official BERTScore (Zhang* et al., 2020) $^{16}$ , QuestEval (Scialom et al., 2021) $^{17}$ , and PARENT (Dhingra et al., 2019) $^{18}$ libraries are used. For ROUGE scores (Lin, 2004), we use the Python implementation by Google. $^{19}$
+
+# G Example Outputs
+
+In Figures 7, 8, and 9, we show example outputs by different systems on TEMPWIKIBIO, content transfer, and XSum.
+
+Date: 2020-12-07 Infobox Attributes: name[Jan Hellström] birth_date[February 21, 1960] birthplace[Söderköping, Sweden] position[Striker] national_years[1986-1989]
+Reference: Jan Hellström (born February 21, 1960 in Söderköping, Östergötland) is a former Swedish footballer, who played as a striker.
+BART: Jan Hellström (born 21 February 1960 in Söderköping) is a former Swedish footballer.
+Textual Prompt: Jan Hellström (born 21 February 1960 in Söderköping) is a former Swedish footballer.
+Linear Prompt: Jan Hellström (born 21 February 1960 in Söderköping) is a Swedish former footballer who played as a striker.
+Date: 2021-05-30 Infobox Attributes: name[George Daniel Brown] birth_date[1836] death_date[1902] birth_place[London, England] masters[DNP] open[5th: 1860, 1863]
+Reference: George Daniel Brown (1836-1902) was an English professional golfer. He also specialized as a ball and club maker. Brown finished fifth in the 1860 Open Championship and repeated the feat in the 1863 Open Championship. In total, Brown had three top-10 finishes in The Open Championship.
+BART: George Daniel Brown (1836-1902) was an English professional golfer who played in the late 19th century. Brown had two top-10 finishes in the Open Championship. His best performance came in the 1860 Open Championship, when he finished in fifth place.
+Textual Prompt: George Daniel Brown (1836-1902) was an English professional golfer. Brown placed fifth in the 1860 and 1863 Open Championships.
+Linear Prompt: George Daniel Brown (1836-1902) was an English professional golfer. Brown placed fifth in the 1860 and 1863 Open Championships.
+Date: 2021-02-08 Infobox Attributes: name[Robert A. King] coach_year1[1890] coach-team1[Wabash] overall_record[0-3]
+Reference: Robert A. King was an American football coach. He was the fifth head football coach at Wabash College in Crawfordsville, Indiana, serving for one season, in 1890, and compiling a record of 0-3.
+BART: Robert A. King was the third head college football coach for the Wabash College Little Giants located in Crawfordsville, Indiana and he held that position for the 1890 season. His career coaching record at WABash was 0 wins, 3 losses, and 0 ties. As of the conclusion of the 2007 season, this ranks him 18th at Wabat
+Textual Prompt: Robert A. King was the third head college football coach for the Wabash College Little Giants located in Crawfordsville, Indiana and he held that position for the 1890 season. His career coaching record at WABash was 0 wins, 3 losses, and 0 ties. As of the conclusion of the 2007 season, this ranks him 18th at Wabat
+Linear Prompt: Robert A. King was an American football coach. He served as the head football coach at Wabash College for one season in 1890, compiling a record of 0-3.
+
+Figure 7: Example system outputs on TEMPWIKIBIO. Textual prompts are on the decoder and linear prompts are on the encoder.
+
+Context: The second season of "Faking It", an American single-camera romantic comedy, premiered on September 23, 2014, and concluded on November 2, 2015, on the MTV network.
+
+Date: 2014-06-17 Source Document: A day before its season finale, the comedy from showrunner Carter Covington has earned a sophomore run of 10 episodes. MTV is going to be Faking It for another year. The youth-skewing cable network has renewed the comedy starring Katie Stevens and Rita Volk, The Hollywood Reporter has confirmed. MTV will unspool 10 new episodes – up two from season one – in 2015. From showrunner Carter Covington, the half-hour comedy series was the first new scripted entry picked up to series under new network topper Susanne Daniels. "Faking It has proved to be the perfect companion show to Awkward, retaining nearly 90 percent of its lead in each week," Daniels said in a release announcing the news Monday. "We're excited about Carter Covington's delicious plans for season two." The comedy, which centers on two best friends who are mistakenly outed as lesbians and catapult to instant popularity, opened in April to 1.17 million total viewers. Through its first seven episodes, the comedy has averaged 948,000 total viewers. MTV says the show is the highest-rated new series launch this year with a 1.5 rating among viewers 12-34 and 1.4 million viewers each week when factoring in three days of delayed viewing. The season finaleairs Tuesday. For MTV, Faking It comes as Daniels is looking to double the network's roster of original scripted series. In addition to veterans Awkward and Teen Wolf, MTV will also launch comedy Happyland and dramas Finding Carter and Eye Candy. On the pilot side, MTV is readying its adaptation of Screamand has buzzy book adaptation Shannarain development.
+
+Reference: On June, 2014, the series was renewed for a second season of 10 episodes, which was later extended to 20 episodes.
+
+BART: The series was renewed for a second season of 10 episodes on September 23, 2014.
+
+Textual Prompt: On June, 2014, MTV renewed "Faking It" for a second season of 10 episodes.
+
+Linear Prompt: The series was renewed for a second season of 10 episodes on September 23, 2014.
+
+Context: Furthermore, the magazine said of the phrase, "So when your square friend uses it, take a little bit of pleasure in knowing they're referencing a stoner comedy – or a drag reality show referencing a stoner comedy – even if they have no idea." In 2014, VH1 began airing a television show called "Bye Felicia", and pop singer Jordin Sparks released a mixtape titled "#ByeFelicia". According to Google Trends, the phrase reached its highest usage in mid-2015.
+
+Date: 2017-12-15 Source Document: Outgoing White House official Omarosa Manigault Newman says Roberts' 'Bye, Felicia' dig was "petty" and a "black woman civil war." A link has been sent to your friend's email address. A link has been posted to your Facebook feed. To find out more about Facebook commenting please read the Conversation Guidelines and FAQs Following an appearance on 'Good Morning America' about her resignation from the White House, anchor Robin Roberts muttered the phrase about Omarosa Manigault Newmans. USA TODAY Omarosa Manigault Newman clapped back at Robin Roberts after the Good Morning America host took a dig at her on Thursday's show, telling Inside Edition that her comments were "petty" and akin to "a black woman civil war." The White House public liaison and assistant to the president, who is leaving her post on Jan. 20, had told GMA's Michael Strahan, "When I can tell my story — and it is a profound story — I know the world will want to hear." Later, a skeptical Roberts commented, "She said she has a story to tell? I'm sure she'll be selling that story." Then she invoked Friday's classic two-word dismissal for persons unlikely to be missed by anyone: "Bye, Felicia." Do you know how awful you have to be to annoy Robin Roberts? pic.twitter.com/h0gxAhFRFD For anyone unfamiliar with the expression, we'll let its inventor, Ice Cube, explain: "It's the phrase to get ANYBODY out of your face that's saying something stupid."
+
+Reference: On December 14, 2017, Robin Roberts, a host of ABC TV's "Good Morning America", used the phrase to conclude a segment about Omarosa Manigault Newman's departure from the Presidency of Trump administration staff.
+
+BART: In January 2017, Omarosa Manigault Newman responded to Robin Roberts' use of the phrase on "Good Morning America", calling it a "black woman civil war".
+
+Textual Prompt: In December 2017, "Good Morning America" host Robin Roberts used the phrase during an interview with Omarosa Manigault Newman about her resignation from the White House.
+
+Linear Prompt: In January 2017, "Good Morning America" host Robin Roberts used the phrase to refer to Omarosa Manigault Newman, who had just resigned from her position in the White House.
+
+Figure 8: Example system outputs on the content transfer dataset. Textual prompts are on the decoder and linear prompts are on the encoder. The publication dates are frequently required in the outputs.
+
+Date: 2016-07-04 XSum Document: The cloning of the first animal from an adult cell was a remarkable scientific achievement. It promised new treatments for debilitating diseases. But it also raised fears of cloned human beings, designer babies and a dystopian future. Twenty years on, neither the hopes nor the fears have been realised. So what is Dolly's legacy? I first saw Dolly in 1997 at the Roslin Institute just outside Edinburgh. She stood apart from the other sheep in the pens at this agricultural research centre. She stood prouder, her fleece seemed like a lion's mane and there was an aura about her. Dolly's creation had echoes of Mary Shelley's classic novel Frankenstein, in which inanimate tissue was brought to life by electricity. Dolly was created from DNA taken from a cell taken from an sheep. The technique involved putting the DNA into an empty eggshell and then zapping it with electricity. This created an embryo. Researchers at Roslin then implanted the embryo into the womb of a sheep which grew into Dolly - an exact genetic copy of the sheep from which the skin cell was taken. It took 277 attempts to clone Dolly and there were many miscarriages on the way. There were also genuine concerns that it would not be long before cloned humans would be walking the Earth - people would try to clone themselves to achieve a kind of immortality or they might try to resurrect a beloved dead relative. The airwaves were filled with conversations about what it meant to be human, whether the clones would be exactly the same as the person from which they were cloned and what kind of world the scientists were tumbling us into. When I met the researchers at Roslin they were acutely aware of public suspicion. And they knew it was important to be clear, open and honest about their work. Dolly's creator, Prof Sir Ian Wilmut, could not be any more different from fictional scientists such as Dr Frankenstein or indeed Dr Moreau, who developed human-like hybrids in a story by HG Wells. By contrast, Prof Wilmut is a quietly-spoken man whose motivation was to improve animal welfare and human health. This bespectacled scientist with his gentle candour was ideally complemented by the enthusiastic and eloquent Dr Simon Best, who was brought in to commercialise the technology to improve human health ...
+
+Reference: The birth of Dolly the sheep seemed one of those moments in scientific research that would change the world forever.
+
+BART:The birth of Dolly the sheep in 1997 sent shockwaves through the scientific community.
+
+Textual Prompt: It is 20 years since the birth of Dolly the sheep.
+
+Linear Prompt: It is 20 years since the birth of Dolly the sheep.
+
+Date: 2016-04-05 XSum Document: Sunderland's Rachael Laws, 25, has been called up to replace the 32-year-old Liverpool keeper, who has a knee injury. England face Belgium in Rotherham on Friday, before travelling to Zenica to play Bosnia on Tuesday. The Lionesses have won both their qualifying games so far. Goalkeepers: Karen Bardsley (Manchester City), Rachael Laws (Sunderland), Carly Telford (Notts County) Defenders: Laura Bassett (Notts County), Lucy Bronze (Manchester City), Gilly Flaherty (Chelsea), Alex Greenwood (Liverpool), Steph Houghton (Manchester City), Alex Scott (Arsenal), Casey Stoney (Arsenal), Demi Stokes (Manchester City), Amy Turner (Notts County) Midfielders: Katie Chapman (Chelsea), Jordan Nobbs (Arsenal), Jo Potter (Birmingham City), Jill Scott (Manchester City), Fara Williams (Arsenal) Forwards: Eniola Aluko (Chelsea), Karen Carney (Chelsea), Gemma Davison (Chelsea), Toni Duggan (Manchester City), Fran Kirby (Chelsea), Ellen White (Notts County).
+
+Textual Prompt: Manchester City goalkeeper Karen Bardsley has been ruled out of England's Euro 2017 qualifiers against Belgium and Bosnia-Herzegovina.
+
+Perturbed Date: 2012-04-05 Textual Prompt: Manchester City goalkeeper Karen Bardsley has been ruled out of England's Euro 2012 qualifiers against Belgium and Bosnia-Herzegovina.
+
+Perturbed Date: 2011-04-05 Textual Prompt: Manchester City goalkeeper Karen Bardsley has been ruled out of England's Euro 2012 qualifiers against Belgium and Bosnia-Herzegovina.
+
+Perturbed Date: 2021-04-05 Textual Prompt: Manchester City goalkeeper Karen Bardsley has been ruled out of England's Euro 2021 qualifiers against Belgium and Bosnia-Herzegovina.
+
+Perturbed Date: 2020-04-05 Textual Prompt: Manchester City goalkeeper Karen Bardsley has been ruled out of England's Euro 2020 qualifiers against Belgium and Bosnia-Herzegovina.
+
+Figure 9: Example system outputs on XSum for text summarization. Textual prompts are on the decoder and linear prompts are on the encoder. In the first example, Dolly the sheep was actually born on July 5, 1996. In the second example, the Women's Euro is held every four years. Therefore, it could only be Euro 2013, 2017, or 2021.
+
+# Annotation Instruction
+
+The annotation task consists of 80 groups of paragraphs produced by two systems that briefly describe the career of a person. In addition to the paragraphs, each group includes a infobox listing out important information about the person. You will also find a reference paragraph and a baseline paragraph for each group.
+
+Please read each system-produced paragraph and compare it with the baseline paragraph on two aspects: informativeness and factuality. For each aspect, if the system-produced paragraph is better, please label "win"; if the system-produced paragraph is worse, please label "lose"; if the two paragraphs are similar, please label "tie".
+
+When you label "win" or "lose", if the better or worse aspect is due to date mentions, please label "win (date)" and "lose (date)" correspondingly.
+
+The explanation of the two aspects is shown below along with an example.
+
+# Example
+
+# Vincent Trapp
+
+# Personal information
+
+Born
+
+26 January 1861
+
+Melbourne, Australia
+
+Died
+
+21 October 1929 (aged 68)
+
+Melbourne, Australia
+
+# Domestic team information
+
+Years
+
+Team
+
+1881-1884
+
+Victoria
+
+Source:Cricinfo,23 July 2015
+
+Baseline: Vincent Trapp (26 January 1861) was an Australian cricketer. He played two first-class cricket matches for Victoria between 1881 and 1884.
+
+System1: Vincent Trapp (26 January 1861 – 21 October 1929) was an Australian cricketer. He played two first-class cricket matches for Victoria between 1881 and 1884.
+
+System2: Vincent Trapp (26 January 1861 - 21 October 1929) was an Australian cricketer. He played for Victoria between 1881 and 1884.
+
+System3: Vincent Trapp (26 January 1861) was an Australian cricketer. He played two first-class cricket matches for Victoria between 1881 and 1884, according to Cricinfo.
+
+Informativeness: Whether the paragraph synthesizes salient information about the person.
+
+In this example, both system 1 and 2 are better than the baseline as they mention the death date of the person, which is an important information, while system 3 that additionally talks about the source of the information ties with the baseline. Moreover, system 1 and system 2 should be labeled with "win (date)" as the information is related to a date.
+
+Factuality: Whether the content of the paragraph is factually correct.
+
+In this example, both system 1 and 3 tie with the baseline. System 2 is better than the baseline as it avoids mentioning "two first-class cricket matches" which is an incorrect information. System 2 should only be labeled with "win".
+
+Figure 10: Guideline for human evaluation on WIKIRevision.
+
+
Annotation Instruction
The annotation task consists of 80 groups of sentences produced by two systems that continue the given context passage using the information in the given source documents. In addition to the sentences, each group includes the context passage and the source document. You will also find a reference sentence and a baseline sentence for each group.Pleases read each system-produced sentence and compare it with the baseline sentence on two aspects: informativeness and factuality. For each aspect, if the system-produced sentence is better, please label "win"; if the system-produced sentence is worse, please label "lose"; if the two sentences are similar, please label "tie".When you label "win" or "lose", if the better or worse aspect is due to date mentions, please label "win (date)" and "lose (date)" correspondingly.The explanation of the two aspects is shown below along with an example.
Example
Context Passage: The Burleigh Waters Library opened in 1991. For decades a local urban myth maintained that sharks were seen as far south in the canal waterways as Burleigh Waters. Alleged sightings and stories were locally spread, but balanced with scepticism.Source Document: Publication date: 20 February 2003. The Queensland government has warned people not to swim in coastal canal systems after the second fatal shark attack in as many months on the Gold Coast yesterday. An 84-year-old man from Burleigh Waters died after he was attacked by a 2.5 metre bull whaler while swimming in Burleigh Lake just before 6.30am (AEST) ...Baseline: An 84-year-old man from Burleigh Waters was attacked by a 2.5 metre bull whaler while swimming in Burleigh Lake.System1: An 84-year-old man from Burleigh Waters died after he was attacked by a 2.5 metre bull whaler while swimming in Burleigh Lake.System2: In February 2003, an 84-year-old man from Burleigh Waters died after he was attacked by a 2.5 metre bull whaler while swimming in Burleigh Lake.System3: In 2013, an 84-year-old man from Burleigh Waters died after he was attacked by a 2.5 metre bull whaler while swimming in Burleigh Lake.Informativeness: Whether the sentence synthesizes salient information of the source document.In this example, all systems are better than the baseline as they mention the man died after the attack, which is an important information. Moreover, system 2 should be labeled with "win (date)" as it also mentions the date of the event.Factuality: Whether the content of the sentence is factually correct.In this example, both system 1 and 2 tie with the baseline. System 3 is worse than the baseline as it mentions an incorrect date. System 3 should be labeled with "lose (date)".
+
+Figure 11: Guideline for human evaluation on the content transfer dataset.
\ No newline at end of file
diff --git a/timeawarepromptingfortextgeneration/images.zip b/timeawarepromptingfortextgeneration/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..3ccb1cd6f977ebbeffe54236ac2ea8d2423c8689
--- /dev/null
+++ b/timeawarepromptingfortextgeneration/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:06c069f620a545cfda5548ff94f04dddcc2d539103c00063d7f3aa7f7e4ca1a6
+size 904738
diff --git a/timeawarepromptingfortextgeneration/layout.json b/timeawarepromptingfortextgeneration/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8b26e1f91566d9eb1345410390d558e6b7f2c486
--- /dev/null
+++ b/timeawarepromptingfortextgeneration/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:21f15c335e519fa56f12d514299f877ee3982119d2f23fca2aeb8cd95a64fb3c
+size 434847
diff --git a/tokenlevelsequencelabelingforspokenlanguageunderstandingusingcompositionalendtoendmodels/4ca6df95-9cb0-45d7-ad45-746cb6218901_content_list.json b/tokenlevelsequencelabelingforspokenlanguageunderstandingusingcompositionalendtoendmodels/4ca6df95-9cb0-45d7-ad45-746cb6218901_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..399f5742b69d25937375e23a838da70ffcb90d76
--- /dev/null
+++ b/tokenlevelsequencelabelingforspokenlanguageunderstandingusingcompositionalendtoendmodels/4ca6df95-9cb0-45d7-ad45-746cb6218901_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c329c61ff20e0ac9dbbd9be7581a519b8ddd8cc787e2cc0ba782443479289221
+size 87470
diff --git a/tokenlevelsequencelabelingforspokenlanguageunderstandingusingcompositionalendtoendmodels/4ca6df95-9cb0-45d7-ad45-746cb6218901_model.json b/tokenlevelsequencelabelingforspokenlanguageunderstandingusingcompositionalendtoendmodels/4ca6df95-9cb0-45d7-ad45-746cb6218901_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..79cfc0b69d0d4cb05c882b8ed5164a5928e08205
--- /dev/null
+++ b/tokenlevelsequencelabelingforspokenlanguageunderstandingusingcompositionalendtoendmodels/4ca6df95-9cb0-45d7-ad45-746cb6218901_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:119f0e746428cdc893c7c094b23d511fc730ee952058e63abbdc2a328c8c673b
+size 104671
diff --git a/tokenlevelsequencelabelingforspokenlanguageunderstandingusingcompositionalendtoendmodels/4ca6df95-9cb0-45d7-ad45-746cb6218901_origin.pdf b/tokenlevelsequencelabelingforspokenlanguageunderstandingusingcompositionalendtoendmodels/4ca6df95-9cb0-45d7-ad45-746cb6218901_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a751bc3b20fd9e25e091bf10172623ca116b57d6
--- /dev/null
+++ b/tokenlevelsequencelabelingforspokenlanguageunderstandingusingcompositionalendtoendmodels/4ca6df95-9cb0-45d7-ad45-746cb6218901_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:46ad81361884705437f4461328b5fae626de465cc15fb6c310df1914770fb56e
+size 781129
diff --git a/tokenlevelsequencelabelingforspokenlanguageunderstandingusingcompositionalendtoendmodels/full.md b/tokenlevelsequencelabelingforspokenlanguageunderstandingusingcompositionalendtoendmodels/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..37c5f8dfb804a2f8b998ab2b763d5234c9385260
--- /dev/null
+++ b/tokenlevelsequencelabelingforspokenlanguageunderstandingusingcompositionalendtoendmodels/full.md
@@ -0,0 +1,373 @@
+# Token-level Sequence Labeling for Spoken Language Understanding using Compositional End-to-End Models
+
+Siddhant Arora* Siddharth Dalmia* Brian Yan
+
+Florian Metze Alan W Black Shinji Watanabe
+
+Language Technologies Institute, Carnegie Mellon University, USA
+
+{siddhana,sdalmia}@cs.cmu.edu
+
+# Abstract
+
+End-to-end spoken language understanding (SLU) systems are gaining popularity over cascaded approaches due to their simplicity and ability to avoid error propagation. However, these systems model sequence labeling as a sequence prediction task causing a divergence from its well-established token-level tagging formulation. We build compositional end-to-end SLU systems that explicitly separate the added complexity of recognizing spoken mentions in SLU from the NLU task of sequence labeling. By relying on intermediate decoders trained for ASR, our end-to-end systems transform the input modality from speech to token-level representations that can be used in the traditional sequence labeling framework. This composition of ASR and NLU formulations in our end-to-end SLU system offers direct compatibility with pre-trained ASR and NLU systems, allows performance monitoring of individual components and enables the use of globally normalized losses like CRF, making them attractive in practical scenarios. Our models outperform both cascaded and direct end-to-end models on a labeling task of named entity recognition across SLU benchmarks. $^{1}$
+
+# 1 Introduction
+
+Sequence labeling (SL) is a class of natural language understanding (NLU) tasks. These systems tag each word in a sentence to provide insights into the sentence structure and meaning (Jurafsky and Martin, 2009). An SL system that processes unstructured text, first encodes the context and relationships of words in the sentence using an encoder and then labels each token (Lample et al., 2016; Dozat et al., 2017; Akbik et al., 2018). However, when dealing with spoken utterances, sequence labeling introduces an additional complexity of
+
+also recognizing the mentions of the labels (Kubala et al., 1998; Zhai et al., 2004).
+
+SL in spoken language understanding (SLU) has been approached by two schools of thought, (1) that seek to recognize the spoken words using an Automatic Speech Recognition (ASR) engine and then tag the mentions using an NLU engine in a cascaded manner (Palmer and Ostendorf, 2001; Horlock and King, 2003; Béchet et al., 2004), and (2) that seek to recognize and tag the mentions directly from speech in an end-to-end (E2E) framework (Arora et al., 2022; Ghannay et al., 2018). Prior work has shown that cascaded systems suffer due to error propagation (Tran et al., 2018) from the ASR into the NLU engine, which can be overcome in an E2E framework. However, unlike cascaded models, E2E systems cannot utilize the vast abundance of NLU research (Shon et al., 2022) as they re-define the SL problem as a complex sequence prediction problem where the sequence contains both the tags and its mentions.
+
+Inspired by the principles of task compositionality in SL for SLU, we seek to bring both schools of thought together. Our conjecture is that we can build compositional E2E systems that first convert the spoken utterance to a sequence of token representations (Dalmia et al., 2021), which can then be used to train token-wise classification systems as per the NLU formulation. By also conditioning our token-wise classification on speech, our compositional E2E system allows recovery from errors made while creating token representations. We instantiate our formulation on a popular SL task of named entity recognition (NER) and (1) present the efficacy of our compositional E2E NER-SLU system on benchmark SLU datasets (Bastianelli et al., 2020; Shon et al., 2022) surpassing both the cascaded and direct E2E systems §5.2. (2) Our compositional model consists of ASR and NLU components compatible with pre-trained ASR and NER-NLU models §5.3. (3) Our E2E systems ex
+
+hibit transparency towards categorizing errors by enabling the evaluation of individual components of our model in isolation §5.4.
+
+The paper first describes the traditional SL formulation (§2), and discusses shortcomings in current SLU formulations (§3). Section §4 presents our compositional E2E model that can overcome these shortcomings. We then evaluate these approaches towards the SL task of NER (§5).
+
+# 2 Sequence Labeling (SL)
+
+SL systems tag each word, $w_{i}$ , of a text sequence, $S = \{w_{i} \in \mathcal{V} | i = 1, \dots, N\}$ of length $N$ and vocabulary $\mathcal{V}$ , with a label from a label set $\mathcal{L}$ , $\{w_{i} \rightarrow y_{i} | y_{i} \in \mathcal{L}\}$ . This produces a label sequence, $Y = \{y_{i} \in \mathcal{L} | i = 1, \dots, N\}$ of the same length $N$ . Using decision theory, sequence labeling models seek to output $\hat{Y}$ from a set of all possible tag sequence $\mathcal{L}^N$ ,
+
+$$
+\hat {Y} = \underset {Y \in \mathcal {L} ^ {N}} {\operatorname {a r g m a x}} P (Y | S) \tag {1}
+$$
+
+where $P(Y|S)$ is the posterior distribution. This posterior can be modeled using various techniques like the traditional HMM (Morwal et al., 2012) and MEMM (McCallum et al., 2000) based modeling and more recently CRF (Ma and Hovy, 2016) and token classification (Devlin et al., 2019) based approaches. We discuss the latter two in detail:
+
+Conditional Random Field: Lafferty et al. (2001) aims to directly compute the posterior of the entire label sequence $Y$ given the sentence $S$ :
+
+$$
+P (Y | S) = \frac {e ^ {F (Y , S)}}{\sum_ {Y ^ {\prime} \in \mathcal {L} ^ {N}} e ^ {F \left(Y ^ {\prime} , S\right)}} \tag {2}
+$$
+
+where $F(Y, S)$ is global score of the tag sequence $Y$ given $S$ . This is modeled using a linear chain CRF which computes the global score as a sum of local scores $f(\cdot)$ for each position in $Y$ as follows
+
+$$
+F (Y, S) = \sum_ {l = 1} ^ {N} f \left(y _ {l - 1}, y _ {l}, S\right) \tag {3}
+$$
+
+Lample et al. (2016) and Yan et al. (2019) use contextualized neural encoders like LSTMs and transformers to model context of the entire sequence $S$ for every word $w_{l}$ . This allows for effective modeling of $f(.)$ by using encoder representations for each word as the emissions, and maintaining a
+
+separate transition score $t_{y_{l - 1}\to y_l}$ to give $F(Y,S)$ :
+
+$$
+\mathbf {h} _ {1: N} = \operatorname {e n c o d e r} \left(w _ {1: N}\right) \tag {4}
+$$
+
+$$
+t _ {y _ {l - 1} \rightarrow y _ {l}} = \text {t r a n s i t i o n S c o r e s} (| \mathcal {L} |, | \mathcal {L} |) \tag {5}
+$$
+
+$$
+F (Y, S) = \sum_ {l = 1} ^ {N} \left(\mathbf {h} _ {l, y _ {l}} + t _ {y _ {l - 1} \rightarrow y _ {l}}\right) \tag {6}
+$$
+
+Token Classification Model: Since the advent of strong contextual modeling using transformer based models, sequence labeling can also be treated as token classification (Devlin et al., 2019), a simplification over MEMM estimations (McCallum et al., 2000), with the assumption that the current tag is conditionally independent to previous tag.
+
+$$
+P (Y | S) = \prod_ {l = 1} ^ {N} P \left(y _ {l} \mid \mathbf {h} _ {l}\right) \tag {7}
+$$
+
+These models are still effective as $\mathbf{h}_l$ is able to model the full context $S$ for every word $w_l$ .
+
+In cases like NER, where an entity can span multiple words, these problems are modeled using BIO tags (Ramshaw and Marcus, 1995), where begin (B), inside (I) tags are added for entities and an outside (O) tag for non-entity words, extending the tag set vocabulary from $\mathcal{L}$ to $\mathcal{L}' = \{l_B \oplus l_I | l \in \mathcal{L}\} \cup \{\mathrm{O}\}$ . When modeled using sub-word tokens the tags can be aligned to the first sub-word token of the word and the remaining ones can be marked with a special token $\varnothing$ giving $\mathcal{L}'' = \mathcal{L}' \cup \{\varnothing\}$ .
+
+# 3 Sequence Labeling in SLU
+
+Sequence Labeling in SLU introduces an added complexity of recognizing mentions on top of text-based SL tasks (§2) as they aim to predict the tag and its mentions directly from a spoken sequence. Given a sequence of $d$ dimensional speech feature of length $T$ frames, $X = \{\mathbf{x}_t\in \mathbb{R}^d |t = 1,\dots ,T\}$ , these systems seek to estimate the label sequence $\hat{Y}$ from
+
+$$
+\hat {Y} = \underset {Y \in \mathcal {L} ^ {*}} {\operatorname {a r g m a x}} P (Y | X) \tag {8}
+$$
+
+where $P(Y|X)$ have been modeled as:
+
+Cascaded SLU (Béchet et al., 2004; Parada et al., 2011; Zhou et al., 2015) models $P(Y|X)$ from $P(Y|S)$ using an NLU framework (§2) and $P(S|X)$ using an ASR model (Povey et al., 2011; Chan et al., 2016; Graves, 2012), assuming condi
+
+tional independence of $Y|S$ from $X$
+
+$$
+\begin{array}{l} P (Y | X) = \sum_ {S} P (Y | S, X) P (S | X) (9) \\ \approx \max _ {S} P (Y | S) P (S | X) (10) \\ \approx P (Y | \hat {S}) \max _ {S} P (S | X) (11) \\ \hat {S} = \underset {S \in \mathcal {V} *} {\operatorname {a r g m a x}} P (S | X) (12) \\ \end{array}
+$$
+
+Once $\hat{S}$ is estimated, $\hat{Y}$ can be estimated using Eq 1. Although this enables realizing $\hat{Y}$ using two well studied frameworks, the independence assumption doesn't allow recovery from errors in estimating $\hat{S}$ .
+
+Direct End-to-End SLU (Arora et al., 2022; Shon et al., 2022; Ghannay et al., 2018) systems avoid cascading errors by directly modeling $P(Y|X)$ in a single monolithic model. To achieve this while being able to recognize the spoken mentions, these systems enrich $Y$ with transcripts $S$ , $Y^{e} = \{y_{i}^{e}\in \mathcal{V}\cup \mathcal{L}|i = 1,\dots ,N^{\prime}\}$ , where $N^{\prime}$ is the length of $Y^{e}$ . This can be modeled using an autoregressive decoder as:
+
+$$
+P (Y | X) = \prod_ {i = 1} ^ {N ^ {\prime}} P \left(y _ {i} ^ {e} \mid y _ {1: i - 1} ^ {e}, X\right) \tag {13}
+$$
+
+However this new formulation cannot utilize the well studied sequence labeling framework §2. Additionally, this applies an extra burden of labeling along with alignment on the decoder and makes understanding the errors made by these systems particularly difficult. For example, Eq 13 gives non-zero likelihood to a corrupt sequence with only labels and no words as $y^{e} \in \{\mathcal{V} \cup \mathcal{L}\}$ .
+
+# 4 Compositional End-to-End SLU
+
+We propose to bring the two paradigms together in a compositional end-to-end system, by extending over the cascaded SLU formulation using searchable intermediate framework (Dalmia et al., 2021):
+
+$$
+\begin{array}{l} P (Y | X) = \sum_ {S} P (Y | S, X) P (S | X) (14) \\ \approx \max _ {S} \underbrace {P (Y \mid S , X)} _ {\text {S U B} _ {\mathrm {N L U}} \text {N E T}} \underbrace {P (S \mid X)} _ {\text {S U B} _ {\mathrm {A S R}} \text {N E T}} (15) \\ \end{array}
+$$
+
+This system can be realized with two sub-networks as shown in Figure 1, where:
+
+
+Figure 1: Schematics of our compositional E2E SLU architecture with ASR and NLU sub-nets. The ASR subnet consists of an encoder and decoder. The NLU subnet consists of an encoder that conditions on both speech information via encoder $_{\mathrm{ASR}}$ and the text information via decoder $_{\mathrm{ASR}}$ ’s hidden representation $\mathbf{h}^{\mathrm{ASR}}$ followed by token classification or CRF layer.
+
+$\mathbf{SUB}_{\mathbf{ASR}}\mathbf{NET}$ : Models $P(S|X)$
+
+$$
+\mathbf {h} _ {1: T} ^ {E} = \operatorname {e n c o d e r} _ {\mathrm {A S R}} \left(X _ {1: T}\right) \tag {16}
+$$
+
+$$
+\mathbf {h} _ {l} ^ {\mathrm {A S R}} = \operatorname {d e c o d e r} _ {\mathrm {A S R}} \left(\mathbf {h} _ {1: T} ^ {E}, w _ {1: l - 1}\right) \tag {17}
+$$
+
+$$
+P \left(w _ {l} \mid X, w _ {1: l - 1}\right) = \operatorname {s o f t m a x O u t} \left(\mathbf {h} _ {l} ^ {\mathrm {A S R}}\right) \tag {18}
+$$
+
+$$
+P (S | X) = \prod_ {l = 1} ^ {N} P \left(w _ {l} | X, w _ {1: l - 1}\right) \tag {19}
+$$
+
+SUBNLUNET: Models $P(Y|S,X)$
+
+$$
+\mathbf {h} _ {1: N} ^ {\mathrm {N L U}} = \operatorname {e n c o d e r} _ {\mathrm {N L U}} \left(\mathbf {h} _ {1: N} ^ {\mathrm {A S R}}, \mathbf {h} _ {1: T} ^ {E}\right) \tag {20}
+$$
+
+$$
+P (Y \mid S, X) = \operatorname {C R F} \left(\mathbf {h} _ {1: N} ^ {\mathrm {N L U}}\right) \quad \mathbf {O R} \tag {21}
+$$
+
+$$
+P (Y \mid S, X) = \operatorname {T o k e n C l a s s i f i c a t i o n} \left(\mathbf {h} _ {1: N} ^ {\mathrm {N L U}}\right) \tag {22}
+$$
+
+The end-to-end differentiability is maintained by using $\mathbf{h}_{1:N}^{\mathrm{ASR}}$ in Eq 20. During inference, we approximate the Viterbi max of $S$ using beam search to give $\hat{\mathbf{h}}_{1:N}^{\mathrm{ASR}}$ . Then $\hat{Y}$ can be found using Viterbi search with no approximation as the output length is known and the solution is tractable.
+
+This composition allows incorporating the ASR modeling and text-based sequence labeling framework §2. It also brings transparency to end-to-end modeling as we can also monitor performance of individual sub-nets in isolation. Further, encoderNLU can attend to speech representations $\mathbf{h}_{1:T}^{\mathrm{E}}$ using cross attention (Dalmia et al., 2021) enabling the direct use of speech cues for NLU. This speech attention mechanism can allow the model to recover from intermediate errors made during ASR stage.
+
+Recently, there has been some works (Rao et al., 2020; Saxon et al., 2021) that explore compositional SLU models which utilize the ASR and NLU formulations. Saxon et al. (2021) uses discrete outputs from the ASR module that are made differ
+
+entiable using various approaches like Gumbel-softmax (Jang et al., 2017). Rao et al. (2020) also uses the ASR decoder hidden representations in the NLU module by concatenating it with token embeddings of the ASR discrete output. However, this approach requires the ASR and NLU submodule to have a shared vocabulary space, limiting the usage of pretrained ASR and LM in this architecture. Moreover, the benefits of our proposed compositional framework are not explored in these works.
+
+# 5 Spoken Named Entity Recognition
+
+To show the effectiveness of our compositional E2E SLU model we build spoken NER systems on two publically available SLU datasets, SLUE (Shon et al., 2022) and SLURP (Bastianelli et al., 2020) (dataset and preparation details in §A.2). We compare our compositional E2E system with cascaded and direct E2E systems. We also compare with another compositional E2E system that predicts the enriched transcript (§3) using a decoder like (Dalmia et al., 2021) instead of label sequence (i.e. $Y^{e}$ instead of $Y$ in Eq. 15) using a token level classification sub-network. We refer to this baseline model as "Compositional E2E SLU with Direct E2E formulation".
+
+SLURP is evaluated using SLU-F1 (Bastianelli et al., 2020) which weighs the entity labels with the word and character error rate of the predicted mentions and SLUE using F1 (Shon et al., 2022) which evaluates getting both the mention and the entity label exactly right. We also compute Label-F1 for both datasets which considers only the entity label. We report micro-averaged F1 for all results.
+
+# 5.1 Model Configurations
+
+We build all our systems using ESPnet-SLU (Arora et al., 2022) which is an open-source SLU toolkit built on ESPnet (Watanabe et al., 2018), a flagship toolkit for speech processing. We use encoder-decoder based architecture for our baseline E2E system. We use Conformer encoder blocks (Gulati et al., 2020) and Transformer decoder blocks (Vaswani et al., 2017) with CTC multi-tasking (Arora et al., 2022). The baseline compositional model with Direct E2E SLU formulation consists of a conformer encoder and transformer decoder in it's ASR component and transformer encoder and transformer decoder in it's NLU component. Our proposed compositional model with the NLU formulation, as shown in Figure 1, replaces the
+
+
Model
SLURP
SLUE
SLU F1
Label F1
F1
Label F1
Direct E2E SLU (Arora et al.)
71.9
-
54.7
67.6
Casacaded SLU (Ours)
73.3
80.9
48.6
63.9
Direct E2E SLU (Ours)
77.1
84.0
54.7
67.6
Compositional E2E SLU w/ Direct E2E formulation (§3)
77.2
84.6
50.0
68.0
w/ Proposed NLU formulation (§4)
CRF w/ Speech Attention (SA)
77.7
85.2
59.4
73.6
Token Classification w/ SA
78.0
85.3
60.3
73.7
w/o Speech Attention
77.7
84.9
59.0
73.6
+
+Table 1: Results presenting the micro F1 performance of our proposed compositional E2E models using CRF and Token Classification modeling. Cascaded, direct E2E and our compositional E2E with direct E2E formulation are shown for comparison. We also provide an ablation of our model with and without Speech Attention (SA).
+
+NLU component in Direct E2E formulation with a transformer encoder followed by a linear layer. For the cascaded systems, we build systems that have the same size as that of our ASR and NLU sub-networks. All models were tuned separately using validation sets with the same hyperparameter search space. Full descriptions of model and training parameters are in §A.3.
+
+# 5.2 Performance of Compositional E2E SLU
+
+Table 1 shows that our proposed compositional E2E models with the token-level NLU formulation outperform both cascaded and direct E2E models on all benchmarks using both CRF and Token Classification. In order to understand gains of our proposed model, we examine the performance of our compositional system with direct E2E formulation (§3). While being comparable to direct E2E models, they still lag behind our proposed models showing the efficacy of modeling SL tasks as a token-level tagging (§2) in an E2E SLU framework.
+
+We further analyze our compositional systems that don't attend to speech representations. We observe a performance drop as these models are not able to recover from errors made while "recognising" entity mentions. For example, in an utterance that says "change the bedroom lights to green", though the ASR component incorrectly predicts the transcript as "change the color of lights to green", the NLU component w/ Speech Attention is able to recover the entity type HOUSE PLACE.
+
+# 5.3 Utilizing External Sub-Net models
+
+Components of our compositional E2E SLU model have functions similar to an ASR and NLU model (Eq 16-22). This allows fine-tuning our models using sub-systems, pre-trained on large amounts
+
+
Model
SLURP
SLUE
SLU F1
Label F1
F1
Label F1
Direct E2E SLU
77.1
84.0
54.7
67.6
w/ NLU fine-tuning
Incompatible
w/ ASR fine-tuning
73.5
81.2
64.0
80.6
Compositional E2E SLU (w/ SA)
78.0
85.3
60.3
73.7
w/ NLU finetuning (w/o SA)
77.7
84.9
62.4
76.4
w/ ASR finetuning (w/ SA)
81.4
88.8
71.6
85.2
Compositional E2E SLU (w/ SA)
78.0
85.3
60.3
73.7
w/ External ASR Transcripts (\( S^{ext} \))
81.0
88.1
70.1
81.2
+
+Table 2: Results presenting the compatibility of our models with pre-trained ASR and NLU systems by (1) finetuning pre-trained components and (2) directly utilizing transcripts from an external ASR model.
+
+
SLURP
SLUE
ASR (%WER ↓)
NLU (SLU-F1 ↑)
ASR (%WER ↓)
NLU (F1 ↑)
Pure ASR & NLU models
16.1
82.4
30.4
58.1
Compositional E2E SLU
CRF w/ Speech Attention (SA)
16.3
88.3
27.4
75.6
Token Classification w/o SA
16.0
87.9
27.6
74.1
Token Classification w/ SA
16.1
88.7
27.5
75.6
+
+Table 3: Results showcasing the transparency of our compositional E2E models by evaluating the individual sub-networks ASR (%WER) and NLU (F1) in isolation.
+
+of available sub-task data. Table 2 shows that our compositional model has better compatibility with ASR and NLU fine-tuning over direct E2E systems, thereby increasing their performance gap, particularly for SLUE, an under-resourced SLU dataset.
+
+Further our models have the ability to use transcripts from a strong external model ( $S^{\mathrm{ext}}$ ) directly during inference, by instantiating our models with these transcripts to produce $\mathbf{h}^{\mathrm{ASR}}$ and then evaluate $P(Y|S^{\mathrm{ext}},X)$ . Table 2 shows using transcripts from an external ASR with no fine-tuning steps can achieve similar performance to ASR fine-tuning.
+
+# 5.4 Transparency in Compositional E2E SLU
+
+Following Eq 15, we can estimate ASR performance by calculating $\hat{S}$ using beam search and NLU performance by estimating $\hat{Y}$ from $P(Y|S^{\mathrm{GT}},X)$ , where $S^{\mathrm{GT}}$ is the ground truth transcripts. Table 3 shows the performances of individual components of our model along with performances of ASR and NLU only models suggesting that we can effectively monitor the performance of these components, helping practitioners analyze and debug them. For instance, while our models with and without speech attention have comparable performance on ASR, using speech attention improves NLU power. Further the one-to-one alignment of transcripts and sequence labels can provide further categorization of errors, as shown in §A.4.
+
+# 5.5 CRF vs Token Classification
+
+For practical SLU the likelihoods of our compositional model $P(Y|S,X)$ , should be correlated with errors in label sequence $Y$ . We found that in SLURP our compositional E2E SLU, while using locally normalized token classification shows no correlation (Corr=0.13,p=0), using CRF exhibits moderate correlation (Corr=0.43,p=0). This makes globally normalized models attractive for real-world scenarios like automated data auditing and human in-the-loop ML (Mitchell et al., 2018) despite their marginal addition in computation cost.
+
+# 6 Conclusion
+
+We propose to combine text based sequence labeling framework into the speech recognition framework to build a compositional end-to-end model for SLU. Our compositional E2E models not only show superior performance over cascaded and direct end-to-end SLU systems, but also bring the power of both these systems in a single framework. These models can utilise pretrained sub task components and exhibit transparency like cascaded systems, while avoiding error propagation like direct end-to-end systems.
+
+# Limitations
+
+Our compositional model relies on the availability of transcripts for training. This although is a limitation, it is a safe assumption for sequence labeling tasks for spoken language understanding. We can see from §3 that the task for sequence labeling in SLU also requires the model to recognize the words being spoken along with the sequence labels, implying the need for at least a partial transcript for training direct end-to-end SLU systems.
+
+# Broader Impact
+
+With our compositional end-to-end SLU model, we strive to bring the research from the text based sequence labeling directly into speech based spoken language understanding. Our aim is to avoid reinvention of the wheel, but rather come up with innovative ways to build end-to-end models by converting a complex problem into simpler ones that have seen substantial research in the past. Additionally we believe the increased capacity for error analysis in our compositional end-to-end system can help towards building better practical systems during deployment. Our compositional end-to-end
+
+systems can effectively utilize pre-trained ASR and NLU systems, thereby avoiding the need for collecting large labeled datasets for SLU. This framework also saves compute by utilizing pre-trained ASR systems directly during inference to improve downstream performances with no fine-tuning.
+
+# Acknowledgement
+
+We thank Aakanksha Naik and the anonymous reviewers for their feedback. This work used the Extreme Science and Engineering Discovery Environment (XSEDE) (Towns et al., 2014), which is supported by NSF grant number ACI-1548562. Specifically, it used the Bridges system (Nystrom et al., 2015), which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC).
+
+# References
+
+Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638-1649, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
+Siddhant Arora, Siddharth Dalmia, Pavel Denisov, Xuankai Chang, Yushi Ueda, Yifan Peng, Yuekai Zhang, Sujay Kumar, Karthik Ganesan, Brian Yan, Ngoc Thang Vu, Alan W. Black, and Shinji Watanabe. 2022. ESPnet-SLU: Advancing spoken language understanding through espnet. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2022, Virtual and Singapore, 23-27 May 2022, pages 7167-7171. IEEE.
+Emanuele Bastianelli, Andrea Vanzo, Pawel Swietojanski, and Verena Rieser. 2020. SLURP: A spoken language understanding resource package. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020. Association for Computational Linguistics.
+Frédéric Béchet, Allen L. Gorin, Jeremy H. Wright, and Dilek Hakkani-Tür. 2004. Detecting and extracting named entities from spontaneous speech in a mixed-initiative spoken dialogue context: How may I help you?sm, tm. Speech Commun., 42(2):207-225.
+William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4960-4964.
+Guoguo Chen, Shuzhou Chai, Guan-Bo Wang, Jiayu Du, Wei-Qiang Zhang, Chao Weng, Dan Su, Daniel
+
+Povey, Jan Trmal, Junbo Zhang, Mingjie Jin, Sanjeev Khudanpur, Shinji Watanabe, Shuaijiang Zhao, Wei Zou, Xiangang Li, Xuchen Yao, Yongqing Wang, Zhao You, and Zhiyong Yan. 2021a. Gigaspeech: An evolving, multi-domain ASR corpus with 10,000 hours of transcribed audio. In Interspeech 2021, 22nd Annual Conference of the International Speech Communication Association, Brno, Czechia, 30 August - 3 September 2021, pages 3670-3674. ISCA.
+Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, and Furu Wei. 2021b. WavLM: Large-scale self-supervised pre-training for full stack speech processing. CoRR, abs/2110.13900.
+Jonathan H. Clark, Dan Garrette, Iulia Turc, and John Wieting. 2022. Canine: Pre-training an efficient tokenization-free encoder for language representation. Transactions of the Association for Computational Linguistics, 10:73-91.
+Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, Maël Primet, and Joseph Dureau. 2018. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. CoRR, abs/1805.10190.
+Siddharth Dalmia, Brian Yan, Vikas Raunak, Florian Metze, and Shinji Watanabe. 2021. Searchable hidden intermediates for end-to-end models of decomposable sequence tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1882-1896, Online. Association for Computational Linguistics.
+Miguel Del Rio, Natalie Delworth, Ryan Westerman, Michelle Huang, Nishchal Bhandari, Joseph Palakapilly, Quinten McNamara, Joshua Dong, Piotr Želasko, and Miguel Jette. 2021. Earnings-21: A Practical Benchmark for ASR in the Wild. In Proc. Interspeech 2021.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.
+Timothy Dozat, Peng Qi, and Christopher D. Manning. 2017. Stanford's graph-based neural dependency parser at the CoNLL 2017 shared task. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 20-30, Vancouver, Canada. Association for Computational Linguistics.
+
+S. Ghannay, A. Caubrière, Y. Esteve, N. Camelin, E. Simonnet, A. Laurent, and E. Morin. 2018. End-to-end named entity and semantic concept extraction from speech. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 692-699.
+Alex Graves. 2012. Sequence transduction with recurrent neural networks. CoRR, abs/1211.3711.
+Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, and Ruoming Pang. 2020. Conformer: Convolution-augmented transformer for speech recognition. In Interspeech 2020, 21st Annual Conference of the International Speech Communication Association, Virtual Event, Shanghai, China, 25-29 October 2020, pages 5036-5040. ISCA.
+James Horlock and Simon King. 2003. Discriminative methods for improving named entity extraction on speech data. In Proc. 8th European Conference on Speech Communication and Technology (Eurospeech 2003), pages 2765-2768.
+Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In 5th International Conference on Learning Representations, ICLR 2017, Toulouse, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+Dan Jurafsky and James H. Martin. 2009. Speech and language processing: an introduction to natural language processing, computational linguistics, and speech recognition, 2nd Edition. Prentice Hall series in artificial intelligence. Prentice Hall, Pearson Education International.
+Francis Kubala, Richard Schwartz, Rebecca Stone, and Ralph Weischedel. 1998. Named entity extraction from speech. In Proceedings of DARPA Broadcast News Transcription and Understanding Workshop, pages 287-292. Citeseer.
+Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and tokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018: System Demonstrations, Brussels, Belgium, October 31 - November 4, 2018, pages 66-71. Association for Computational Linguistics.
+John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01, page 282-289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
+Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North
+
+American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270, San Diego, California. Association for Computational Linguistics.
+Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074, Berlin, Germany. Association for Computational Linguistics.
+Andrew McCallum, Dayne Freitag, and Fernando C. N. Pereira. 2000. Maximum entropy markov models for information extraction and segmentation. In Proceedings of the Seventeenth International Conference on Machine Learning, ICML '00, page 591-598, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
+T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, B. Yang, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling. 2018. Never-ending learning. Commun. ACM, 61(5):103-115.
+Sudha Morwal, Nusrat Jahan, and Deepti Chopra. 2012. Named entity recognition using hidden markov model (hmm). International Journal on Natural Language Computing, 1:15-23.
+Minh Nguyen and Zhou Yu. 2021. Improving named entity recognition in spoken dialog systems by context and speech pattern modeling. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 45-55, Singapore and Online. Association for Computational Linguistics.
+Nicholas A. Nystrom, Michael J. Levine, Ralph Z. Roskies, and J. Ray Scott. 2015. Bridges: a uniquely flexible HPC resource for new communities and data analytics. In Proceedings of the 2015 XSEDE Conference: Scientific Advancements Enabled by Enhanced Cyberinfrastructure, St. Louis, MO, USA, July 26 - 30, 2015, pages 30:1-30:8. ACM.
+David D. Palmer and Mari Ostendorf. 2001. Improving information extraction by modeling errors in speech recognizer output. In Proceedings of the First International Conference on Human Language Technology Research.
+Carolina Parada, Mark Dredze, and Frederick Jelinek. 2011. OOV sensitive named-entity recognition in speech. In INTERSPEECH 2011, 12th Annual Conference of the International Speech Communication Association, Florence, Italy, August 27-31, 2011, pages 2085-2088. ISCA.
+Daniel S. Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D. Cubuk, and Quoc V. Le.
+
+2019. Specaugment: A simple data augmentation method for automatic speech recognition. In *Interspeech* 2019, 20th Annual Conference of the International Speech Communication Association, Graz, Austria, 15-19 September 2019, pages 2613-2617. ISCA.
+Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024-8035.
+Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society.
+Lance A. Ramshaw and Mitch Marcus. 1995. Text chunking using transformation-based learning. In Third Workshop on Very Large Corpora, VLC@ACL 1995, Cambridge, Massachusetts, USA, June 30, 1995.
+Milind Rao, Anirudh Raju, Pranav Dheram, Bach Bui, and Ariya Rastrow. 2020. Speech to semantics: Improve ASR and NLU jointly via all-neural interfaces. In *Interspeech* 2020, 21st Annual Conference of the International Speech Communication Association, Virtual Event, Shanghai, China, 25-29 October 2020, pages 876-880. ISCA.
+Michael Saxon, Samridhi Choudhary, Joseph P. McKenna, and Athanasios Mouchtaris. 2021. End-to-end spoken language understanding for generalized voice assistants. In *Interspeech* 2021, 22nd Annual Conference of the International Speech Communication Association, Brno, Czechia, 30 August - 3 September 2021, pages 4738-4742. ISCA.
+Suwon Shon, Ankita Pasad, Felix Wu, Pablo Brusco, Yoav Artzi, Karen Livescu, and Kyu J. Han. 2022. SLUE: new benchmark tasks for spoken language understanding evaluation on natural speech. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2022, Virtual and Singapore, 23-27 May 2022, pages 7927-7931. IEEE.
+John Towns, Timothy Cockerill, Maytal Dahan, Ian T. Foster, Kelly P. Gaither, Andrew S. Grimshaw, Victor Hazlewood, Scott A. Lathrop, David Lifka, Gregory D. Peterson, Ralph Roskies, J. Ray Scott, and Nancy Wilkins-Diehr. 2014. XSEDE: accelerating scientific discovery. Comput. Sci. Eng., 16(5):62-74.
+
+Trang Tran, Shubham Toshiwal, Mohit Bansal, Kevin Gimpel, Karen Livescu, and Mari Ostendorf. 2018. Parsing speech: a neural approach to integrating lexical and acoustic-prosodic information. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 69-81. Association for Computational Linguistics.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
+Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, Adithya Renduchintala, and Tsubasa Ochiai. 2018. Espnet: End-to-end speech processing toolkit. In *Interspeech* 2018, 19th Annual Conference of the International Speech Communication Association, Hyderabad, India, 2-6 September 2018, pages 2207-2211. ISCA.
+Hang Yan, Bocao Deng, Xiaonan Li, and Xipeng Qiu. 2019. Tener: Adapting transformer encoder for named entity recognition.
+Dian Yu, Michelle Cohn, Yi Mang Yang, Chun-Yen Chen, Weiming Wen, Jiaping Zhang, Mingyang Zhou, Kevin Jesse, Austin Chau, Antara Bhowmick, Shreenath Iyer, Giritheja Sreenivasulu, Sam Davidson, Ashwin Bhandare, and Zhou Yu. 2019. Gunrock: A social bot for complex and engaging long conversations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019 - System Demonstrations. Association for Computational Linguistics.
+Lu-Feng Zhai, Pascale Fung, Richard M. Schwartz, Marine Carpuat, and Dekai Wu. 2004. Using n-best lists for named entity recognition from chinese speech. In Proceedings of HLT-NAACL 2004: Short Papers, Boston, Massachusetts, USA, May 2-7, 2004. The Association for Computational Linguistics.
+Liyuan Zhou, Hanna Suominen, and Leif Hanlen. 2015. Evaluation data and benchmarks for cascaded speech recognition and entity extraction. In Proceedings of the Third Edition Workshop on Speech, Language & Audio in Multimedia, SLAM '15, page 15-18, New York, NY, USA. Association for Computing Machinery.
+
+# A Appendix
+
+# A.1 Applications of SLU
+
+SLU is an essential component of many commercial devices like voice assistants, home assistants (Yu et al., 2019; Coucke et al., 2018) and spoken dialog systems (Nguyen and Yu, 2021) that map speech to executable commands on a daily basis. One of the key applications of SLU is to extract key mentions like entities from a user command to take appropriate actions. As a result, several datasets (Bastianelli et al., 2020; Shon et al., 2022; Del Rio et al., 2021) have been proposed to build understanding systems for spoken utterances.
+
+# A.2 Dataset Description
+
+We evaluated our proposed approach on publicly available SLU datasets, namely SLUE (Shon et al., 2022) and SLURP (Bastianelli et al., 2020) datasets on the task of Named Entity Recognition (NER) from naturally available speech. SLURP is a linguistically diverse and challenging spoken language understanding benchmark that consists of single-turn user conversation with a home assistant, annotated with both intent and entities. Similar to the approach followed in our prior work (Bastianelli et al., 2020; Arora et al., 2022), we bootstrap our train set with 43 hours of synthetic data for all our experiments. We evaluate our approach using SLU-F1 (Bastianelli et al., 2020), a metric for spoken entity prediction, and Label F1, which considers only entity-tag predictions.
+
+SLUE is a recently released SLU benchmark that focuses on Spoken Language Understanding from limited labeled training data. Specifically, it consists of SLUE VoxPopuli dataset that can be used for building systems for ASR and NER. Similar to (Shon et al., 2022), we evaluate our systems using two micro-averaged F1 scores, the first score that evaluates both named entity and tag pairs is referred to as F1, and the second that evaluates only entity-tag phrases is referred to as Label-F1. Note that the released test sets are blind without ground truth labels, and hence we compare different methods using the development set.
+
+The dataset download and evaluation links for SLURP can be found here - https://github.com/pswietojanski/slurp and for SLUE here - https://github.com/ asappresearch/slue-toolkit. The datasets have been processed and prepared using ESP-net, SLURP - https://github.com/esprit/
+
+Table 4: Overview of the two publicly available SLU datasets (Shon et al., 2022; Bastianelli et al., 2020) used for our experiments.
+
+
Dataset
Size (utterances / hours)
Train
Dev
Test
SLURP
11,514 / 40.2
2,033 / 6.9
2,974 /10.3
SLUE-VoxPopuli
5,000 / 14.5
1,753 / 5.0
1,842 / 4.9
+
+espnet/tree/master/egs2/slurp_structure and SLUE - https://github.com/espnet/espnet/tree/master/egs2/slue-voxpopuli
+
+# A.3 Experimental Setup
+
+Our models are implemented in PyTorch (Paszke et al., 2019), and the experiments are conducted using the ESPnet-SLU toolkit (Arora et al., 2022).
+
+# A.3.1 Speech Preprocessing
+
+Speech inputs are globally mean-variance normalized 80 dimensional logmel filterbanks using a $16\mathrm{kHz}$ sampling and window of 512 frames and a 128 hop length. We apply speed perturbation for the under-resourced dataset of SLUE of 0.9 and 1.1 to increase the samples. We also apply specaugmentation (Park et al., 2019) on both datasets. We also remove all examples smaller than 0.1 seconds and larger than 20 seconds from the training data.
+
+# A.3.2 Text Processing
+
+For the cascaded system, we process ASR transcripts $S$ using bpe tokenization (Kudo and Richardson, 2018) and train ASR models to generate bpe subtokens. We use bpe size of 500 for SLURP and 1000 for SLUE dataset. For the direct E2E models, we predict the enriched label sequence $Y^{e}$ using the same bpe size as the ASR models in cascaded sequence. Similarly, compositional models also use the same bpe size to generate the ASR transcripts.
+
+For creating the BIO tags we modify the data preparation such that we take the entities for each utterance and create a "label utterance". This consists of one-to-one mapping of the label tags with the words and Begin (B), Inside (I) and Outside (O) marked for each label. After performing BPE tokenization we add $\varnothing$ for every substring of the word. We have attached the data preparation code.
+
+# A.3.3 Model and Training Hyperparameters
+
+We run parameter search for both direct end-to-end and our compositional end-to-end systems using the same model search space (Table 5). In this
+
+section, we will describe our best architecture for both direct and compositional E2E systems.
+
+Direct E2E SLU systems After searching through hyperparameter space, our Direct E2E SLU systems consists of 12-layer Conformer (Gulati et al., 2020) encoder and a 6-layer Transformer (Vaswani et al., 2017) decoder with 8 attention heads for SLURP dataset. We use a dropout of 0.1, output dim of 512 and feedforward dim of 2048, giving a total parameter size of $109.3\mathrm{M}$ .
+
+For SLUE dataset, we found 12-layer Conformer with 4 attention heads and decoder is a 6-layer Transformer with 4 attention heads to give best validation performance. We use a dropout of 0.1, output dim of 256 and feedforward dim of 1024 in encoder and 2048 in the decoder, giving a total parameter size of $31.2\mathrm{M}$ .
+
+Compositional E2E SLU systems Our Compositional model which uses Direct E2E SLU formulation consists of 12-layer conformer block for encoder, 6-layer transformer block for decoder in it's ASR component and 4-layer transformer encoder and 6-layer transformer decoder in it's NLU component. Each of these attention blocks consist of 8 attention heads, dropout of 0.1, output dim of 512, feedforward dim of 2048, giving a total of 153.9M parameters in SLURP dataset. For SLUE dataset, each attention block has 4 attention heads, dropout of 0.1, output dim of 256, feedforward dimension of 1024 in encoder and 2048 in decoder, giving a total parameter size of 46.8M.
+
+Our Composition model with Proposed NLU formulation replaces NLU component in Direct E2E formulation with 8-layer transformer encoder followed by linear layer. All these attention blocks consist of 8 attention heads, dropout of 0.1, output dim of 512, feedforward dim of 2048, giving a total of $142.9\mathrm{M}$ parameters in SLURP dataset. For SLUE dataset, each of these attention blocks have 4 attention heads, dropout of 0.1, output dim of 256, feedforward dimension of 1024 in encoder and 2048 in decoder, giving a total parameter size of $43.8\mathrm{M}$ . Our NLU component can further attend to speech representations using cross attention (Dalmia et al., 2021). We further implement CRF loss using publicly available python library2.
+
+The loss from the ASR $(\mathcal{L}^{\mathrm{asr}})$ and NLU $(\mathcal{L}^{\mathrm{nlu}})$ subnet are combined combined as follows
+
+$$
+\mathcal {L} = \mathcal {L} ^ {\mathrm {a s r}} + \alpha \mathcal {L} ^ {\mathrm {n l u}}
+$$
+
+
Hyperparameter
Value
Output Size
[256, 512]
Attention Heads
[4, 8]
Number of blocks
[4, 6, 8, 12]
Hidden Dropout
[0.1, 0.2]
Attention dropout
[0.1, 0.2]
Position dropout
[0.1, 0.2]
Activation dropout
[0.1, 0.2]
Src Activation dropout
[0.1, 0.2]
Batch size
[50, 64]
LR schedule
[inv. sqrt., exp. lr.]
Max learning rate
[0.001, 0.002, 0.003]
Warmup steps
[5000, 15000, 25000]
Number of steps
[50, 70, 100]
Adam eps
1e-9
Adam betas
(0.9, 0.98)
Weight decay
0.000001
+
+We search alpha values over [0.3, 0.4, 0.5, 0.6] and found 0.6 to be best for SLURP and 0.3 for SLUE.
+
+Table 5: Model and Training Search for SLU Models.
+
+
Model
SLURP
SLUE
SLU F1
Label F1
F1
Label F1
Casacaded SLU (Ours)
76.9
83.9
48.6
63.9
Direct E2E SLU (Ours)
79.2
85.4
54.7
67.6
Compositional E2E SLU
w/ Direct E2E formulation (§3)
79.3
86.6
50.0
68.0
w/ Proposed NLU formulation (§4)
CRF w/ Speech Attention (SA)
79.9
87.0
59.4
73.6
Token Classification w/ SA
79.8
86.9
60.3
73.7
w/o Speech Attention
79.7
87.0
59.0
73.6
+
+Table 6: Results presenting the micro F1 performance for all models using CRF and Token Classification modeling on development set for SLURP and SLUE
+
+# A.3.4 Decoding Hyperparameters
+
+We keep the same decoding parameter of beam size and penalty as that of Arora et al. (2022). For direct E2E systems and our models CTC weight of 0.1 worked best. We searched over CTC weight of [0, 0.1, 0.3, 0.5].
+
+# A.3.5 Development Results
+
+We use F1 scores on the validation data to select the best hyperparameters. Table 6 presents the validation performances for our models.
+
+# A.3.6 Compute Infrastructure
+
+Our models were trained using mixed precision training on either a100, v100 or A6000 on our compute infrastructure depending on their availability. Depending on the GPU and the file i/o latency, the training time ranged from 4-7 hours for SLUE, while for SLURP the training time ranged from 12-18 hours.
+
+
Hypothesis
Reference
ASR Correct Entity Correct
EVENT DATE event reminder mona tuesday
EVENT DATE event reminder mona tuesday
ASR Correct Entity Incorrect
MOVIE TYPE NEWS TOPIC is there anything happening on jazz scene around edinburgh
MOVIE TYPE MOVIE TYPE PLACE NAME is there anything happening on jazz scene around edinburgh
ASR Incorrect Entity Correct
EVENT NAME PERSON DATE TIME create meeting with paul for tomorrow at ten am
EVENT NAME PERSON DATE TIME put meeting with pawel for tomorrow ten am
ASR Incorrect Entity Incorrect
EVENT NAME DATE set a birthday event for ninety
EVENT NAME PERSON set a birthday event for martin
+
+Figure 2: Qualitative examples of our compositional E2E SLU model for various error categories. We can observe that in the first case, the model is correctly able to predict both entity types and mentions even when the name "mona" is not a common name for an event. In the second case, even though it predicts the correct ASR transcript, it mislabels "Edinburgh" as a news topic since the phrase "is there anything happening" usually occurs with news topics. In the third case, even though it makes a mistake in the person name, the model correctly tags it as a person. Finally, the model incorrectly generates the word "ninety," and this error gets propagated to the NLU component through token representations which then predicts entity type "date". This analysis shows that the alignment between ASR and NLU outputs can help us gain better insights into model performance.
+
+# A.3.7 External ASR and NLU components
+
+For the experiments in Table 2, we used ASR and NLU models trained on external data. For the ASR fine-tuning we used an ESPnet model trained on the GigaSpeech dataset (Chen et al., 2021a). This model has the same architecture as the baseline direct E2E model on SLURP. We initialize both the encoder and decoder for direct E2E SLU and the ASR sub-net for the compositional E2ESLU model. For NLU fine-tuning we used Canine (Clark et al., 2022), a character based BERT language model, which exhibits strong performance on named entity recognition while being able to model token sizes comparable to our SLU systems.4 We initialize our NLU sub-network without speech attention with Canine and keep the model parameters fixed during training. For finding the best parameters we only tuned the learning rate and LR schedule from Table 5 and report the best numbers among CRF and Token Classification loss.
+
+For using External ASR Transcripts, we trained an ASR system initialized using GigaSpeech and WavLM (Chen et al., 2021b) respectively. They were then fine-tuned on the respective datasets. These systems achieve $10.0\%$ WER and $9.2\%$ WER on SLURP and SLUE respectively.
+
+# A.4 Error Categorization
+
+The predictions made by our compositional E2E SLU model can be categorized into different buckets on the basis of the errors by ASR or NER component. Table 7 demonstrates this behavior by categorizing the errors of our compositional E2E model
+
+
Entity Correct
Entity Incorrect
Model
# Examples
Model
# Examples
ASR Correct
w/ SA
8520
w/ SA
465
w/o SA
8501
w/o SA
474
ASR Incorrect
w/ SA
1568
w/ SA
1343
w/o SA
1585
w/o SA
1336
+
+Table 7: Number of examples per error category of our compositional E2E SLU systems with/without Speech Attention on SLURP test set. There are four categories depending on whether mistakes are made by ASR or NLU component. Note that the first quadrant lists # of correct examples, while the rest list incorrect ones. Direct E2E systems cannot offer such categorizations particularly for incorrect entities as there is no alignment between ASR and NLU outputs.
+
+trained with and without speech attention. Most of the performance differences between compositional E2E SLU model w/ and w/o speech attention are caused by the kinds of errors where the ASR predictions are inaccurate, but the NLU module is nevertheless able to recover the correct entity type from the utterance. This confirms our intuition that cross attention on speech representations can help the NLU module to recover from mistakes made during "recognizing" spoken mentions. We also present anecdotes for each of these error categories in Figure 2. This further emphasizes the transparency in our compositional E2E SLU models. Due to the lack of one-to-one alignment between ASR and Sequence Labeling, such analysis is not possible in direct E2E SLU systems, making it particularly difficult to categorize errors when the entity prediction is wrong.
\ No newline at end of file
diff --git a/tokenlevelsequencelabelingforspokenlanguageunderstandingusingcompositionalendtoendmodels/images.zip b/tokenlevelsequencelabelingforspokenlanguageunderstandingusingcompositionalendtoendmodels/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..c04b30f0454607a83e0b31784e3a7b8adb5aa34a
--- /dev/null
+++ b/tokenlevelsequencelabelingforspokenlanguageunderstandingusingcompositionalendtoendmodels/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8611689a61466569438782408fcb1399d456f706227134bcb9b6c4836cc6c2c6
+size 400080
diff --git a/tokenlevelsequencelabelingforspokenlanguageunderstandingusingcompositionalendtoendmodels/layout.json b/tokenlevelsequencelabelingforspokenlanguageunderstandingusingcompositionalendtoendmodels/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..f157b5e7cb780ac3eff81c556d3e8050cd5d4d68
--- /dev/null
+++ b/tokenlevelsequencelabelingforspokenlanguageunderstandingusingcompositionalendtoendmodels/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c2003556fab42535340e03a35e23c829ef64c04f8e8a59f1faf9e0acef227c76
+size 390776
diff --git a/topicawareresponsegenerationintaskorienteddialoguewithunstructuredknowledgeaccess/afa2265e-5c82-4806-bc75-684654d30a71_content_list.json b/topicawareresponsegenerationintaskorienteddialoguewithunstructuredknowledgeaccess/afa2265e-5c82-4806-bc75-684654d30a71_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..46873bdcbc7b7d4cfd2f496fecc9dbb55bd197f0
--- /dev/null
+++ b/topicawareresponsegenerationintaskorienteddialoguewithunstructuredknowledgeaccess/afa2265e-5c82-4806-bc75-684654d30a71_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:306612c6025469448f32e8902a9de05fa2c49b0e4abc1ad01556d3afebdfc376
+size 90548
diff --git a/topicawareresponsegenerationintaskorienteddialoguewithunstructuredknowledgeaccess/afa2265e-5c82-4806-bc75-684654d30a71_model.json b/topicawareresponsegenerationintaskorienteddialoguewithunstructuredknowledgeaccess/afa2265e-5c82-4806-bc75-684654d30a71_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..02a08956fb8d8bf40c22e1eebf204f48b690bc99
--- /dev/null
+++ b/topicawareresponsegenerationintaskorienteddialoguewithunstructuredknowledgeaccess/afa2265e-5c82-4806-bc75-684654d30a71_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:174265721c3bf55bab6fda4a5af21f2d5eef8f9659cbd683b571fa7aee143f0f
+size 108825
diff --git a/topicawareresponsegenerationintaskorienteddialoguewithunstructuredknowledgeaccess/afa2265e-5c82-4806-bc75-684654d30a71_origin.pdf b/topicawareresponsegenerationintaskorienteddialoguewithunstructuredknowledgeaccess/afa2265e-5c82-4806-bc75-684654d30a71_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0b118f010ba6899ef4942d62ff4e4635c2e79cc0
--- /dev/null
+++ b/topicawareresponsegenerationintaskorienteddialoguewithunstructuredknowledgeaccess/afa2265e-5c82-4806-bc75-684654d30a71_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:949c30097426898ccc0bf091ccb9590148c0c07387023724ff968fca3d4f6c13
+size 592433
diff --git a/topicawareresponsegenerationintaskorienteddialoguewithunstructuredknowledgeaccess/full.md b/topicawareresponsegenerationintaskorienteddialoguewithunstructuredknowledgeaccess/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..df192a8303ccd9d6ee65621f0f600405f8520f0b
--- /dev/null
+++ b/topicawareresponsegenerationintaskorienteddialoguewithunstructuredknowledgeaccess/full.md
@@ -0,0 +1,395 @@
+# Topic-Aware Response Generation in Task-Oriented Dialogue with Unstructured Knowledge Access
+
+Yue Feng† * Gerasimos Lampouras‡ Ignacio Iacobacci‡
+
+†University College London, London, UK
+
+$^{\ddagger}$ Huawei Noah's Ark Lab, London, UK
+
+† yue.feng.20@ucl.ac.uk
+
+‡ {gerasimos.lampouras, ignacio.iacobacci}@huawei.com
+
+# Abstract
+
+To alleviate the problem of structured databases' limited coverage, recent task-oriented dialogue systems incorporate external unstructured knowledge to guide the generation of system responses. However, these usually use word or sentence level similarities to detect the relevant knowledge context, which only partially capture the topical level relevance. In this paper, we examine how to better integrate topical information in knowledge grounded task-oriented dialogue and propose "Topic-Aware Response Generation" (TARG), an end-to-end response generation model. TARG incorporates multiple topic-aware attention mechanisms to derive the importance weighting scheme over dialogue utterances and external knowledge sources towards a better understanding of the dialogue history. Experimental results indicate that TARG achieves state-of-the-art performance in knowledge selection and response generation, outperforming previous state-of-the-art by 3.2, 3.6, and 4.2 points in EM, F1 and BLEU-4 respectively on Doc2Dial, and performing comparably with previous work on DSTC9; both being knowledge-grounded task-oriented dialogue datasets.
+
+# 1 Introduction
+
+Task-oriented (or goal-oriented) dialogue systems aim to accomplish a particular task (e.g. book a table, provide information) through natural language conversation with a user. The system's available actions are often described by a pre-defined domain-specific schema while relevant knowledge is retrieved from structured databases or APIs (Feng et al., 2022b; Rastogi et al., 2020). As such, task-oriented dialogue systems are often limited on which actions can be taken and what information can be retrieved (Kim et al., 2020). To relax these restrictions, some dialogue systems (also referred
+
+
+Figure 1: An example of knowledge-grounded dialogue.
+
+to as goal-oriented chatbots) adopt open-domain language that is by definition unconstrained by predefined actions (Feng et al., 2020), and dynamically extract any required knowledge from in-domain unstructured collections in the form of entity descriptions, FAQs, and documents. Access to external knowledge sources has also been shown to help dialogue systems generate more specific and informative responses, which helps with the "common response" problem (Zhang et al., 2018; Ren et al., 2020; Feng et al., 2021a, 2022a; Shi et al., 2022).
+
+Figure 1 shows an example of a task-oriented dialogue that exploits external unstructured knowledge sources. Given a history of previous dialogue turns, with each turn consisting of one user and system utterance, and access to in-domain unstructured knowledge sources (either a document collection or a set of candidate facts), the dialogue system needs to generate an appropriate system response for the current turn. Recent research (Zhang et al., 2018; Ren et al., 2020) tackles the task by decomposing it into two sub-tasks: to initially determine the relevant knowledge (if any) that needs to be extracted/selected from external resources, and to
+
+subsequently generate the response based on the selected knowledge and the dialogue history.
+
+When retrieving knowledge from unstructured sources, different sources may need to be accessed in different dialogue turns; this is to be expected in most conversation scenarios. In the example of Figure 1, the first turn is grounded on the first knowledge candidate, and subsequent turns are grounded on later candidates. If we consider that each knowledge source belongs to a different topic or domain (e.g. "how you apply", "publications", "application is denied" in our example), we can observe that as the knowledge selection shifts across sources during the course of the dialogue, a corresponding shift occurs between topics. Previous work has not actively exploited this, but we posit that attending the topic shifts in the dialogue history can provide signals that help distinguish relevant from irrelevant sources for knowledge selection, and that such topical information can help the model derive an importance weighting scheme over the dialogue history for better response generation.
+
+In this paper, we model topic shifts in selected knowledge sources to improve topic-aware knowledge selection and response generation in task-oriented dialogue, and propose "Topic-Aware Response Generation" (TARG), an end-to-end model for knowledge selection and response generation. Our approach incorporates multiple topic-aware attention mechanisms to derive the importance weighting scheme over previous utterances and knowledge sources, aiming for a better understanding of the dialogue history. In addition, TARG is built on top of recent breakthroughs in language representation learning by finetuning on the pretrained language model BART (Lewis et al., 2020).
+
+We conduct extensive experiments with two task-oriented dialogue datasets, namely Doc2Dial (Feng et al., 2020) and DSTC9 (Gunasekara et al., 2020). Our results indicate that $\mathrm{TARG}^1$ is able to accurately select the appropriate knowledge source, and as a result generate more relevant and fluent responses, outperforming previous state-of-the-art by 3.2, 3.6, and 4.2 points in EM, F1 and BLEU-4 respectively on Doc2Dial, and performing comparably with previous work on DSTC9. Furthermore, we present an ablation study and a case study accompanied by analysis of the learned attention mechanisms.
+
+# 2 Related Work
+
+As we briefly mentioned in the introduction, the majority of previous work decomposed knowledge-grounded dialogue generation into two sub-tasks: knowledge selection and response generation.
+
+To determine the relevant candidate for knowledge selection, the use of keyword matching (Ghazvininejad et al., 2018), information retrieval (Young et al., 2018) and entity diffusion (Liu et al., 2018) methods have been proposed. More specifically, keyword matching methods (Bordes et al., 2017) focus on calculating a weight for each keyword in the knowledge candidate and then determine their relevance based on the weighted sum of the keywords' representations. On the other hand, some information retrieval techniques compute traditional tfidf scores to detect the knowledge candidate in the most relevant document to the user's query (Song et al., 2018; Dinan et al., 2018), while others leverage the power of neural networks to learn a candidate ranking function directly through an end-to-end learning process (Yan and Zhao, 2018; Zhao et al., 2019; Gu et al., 2019, 2020). Another approach uses entity diffusion networks (Wang et al., 2020) that perform fact matching and knowledge diffusion to ground both knowledge candidates and dialogues.
+
+For response generation, the related work has adapted both response retrieval and language generation approaches. Specifically for response retrieval, deep interaction networks (Sun et al., 2020) have been employed to learn better-suited representations to ground candidate responses against external knowledge, while language generation approaches have been adapted to attend to ground knowledge during inference (Peng et al., 2020), with some further employing copy mechanisms over both dialogue context and external knowledge (Yavuz et al., 2019), or leveraging a reading comprehension model to similarly extract relevant spans (Qin et al., 2019; Wu et al., 2021).
+
+Recently, pre-trained language models such as BERT (Devlin et al., 2019) or RoBERTa (Liu et al., 2019), which have demonstrated significant improvements on numerous natural language processing tasks, have also been applied to improve model the semantic representation in knowledge selection and response generation (Zhao et al., 2020; Li et al., 2020; Feng et al., 2020, 2021b; Ye et al., 2022). Alternatively, other approaches combine the generative capability of auto-regressive decoders
+
+
+Figure 2: Overview of Topic-Aware Response Generation (TARG).
+
+
+
+such as GPT-2 (Budzianowski and Vulić, 2019) or T5 (Raffel et al., 2020), to better generate the system response.
+
+Broader dialogue research has explored the topic-aware signal present in the dialogue history, but such work did not consider external knowledge nor its topics. Briefly, Xing et al. (2017) proposed a topic-aware seq-to-seq approach for open-domain dialogue that attends over LDA topics inferred from the dialogue history, while Zhang et al. (2020) calculates the relevance between topic distributions of the dialogue history and the immediate context and attends over them to generate the next system response. In retrieval-based dialogue systems, Xu et al. (2021b) performs topic-aware segmentation of the context to better inform dialogue modeling.
+
+We briefly discuss more recent work in our experiments section, as we compare it against our approach. To the best of our knowledge no other work has explicitly modelled the topic shifts in both dialogue history and external knowledge to inform knowledge selection and response generation in knowledge-ground task-oriented dialogue systems.
+
+# 3 Our Approach
+
+As we mentioned in the introduction, our proposed approach (TARG) exploits topic-aware mechanisms to derive an importance weighting scheme over different utterances in the dialogue history, with the goal to better inform knowledge selection and response generation. For a brief overview of TARG, please consult Figure 2. The input in our task consists of the dialogue history of previous user and system utterances, and a set of external knowledge candidates (hereafter referred to as factoids for brevity). The goal is to generate the next
+
+system utterance in the dialogue, which may or may not be grounded in one of the factoids; some of the dialogue history utterances may also be grounded on factoids but not necessarily all of them are.
+
+Briefly, to generate the next turn's system utterance, TARG initially generates BART-based representations for every previous user and system utterance in the dialogue history, for every available factoid, and for both utterances' and factoids' corresponding topics. For each utterance / factoid pair, TARG extracts matching features by calculating feature interaction over their encoded representations. TARG subsequently weights the matching features by topic-aware attention mechanisms, and aggregates them in a tensor. Finally, a knowledge selection layer outputs a relevance score over factoids, and the decoder generates the system utterance based on the most relevant factoid's encoding.
+
+# 3.1 Utterance and Factoid Encoder
+
+We use a BART encoder to generate representations for every utterance in the dialogue history (up to a maximum history length) and factoid in external knowledge. We similarly, but separately, generate representations for their corresponding topics. Our work assumes that the corresponding topic of factoids can be derived in some way from the available data, e.g. the topic can be interpreted as the title of the factoid's originating document or its annotated domain. While we do not explore the possibility in this paper, the topic could also potentially be inferred using topic modelling techniques. The topic of each utterances is considered the same as that of their corresponding factoids (if any). Since not all dialogue turns are necessarily grounded in external knowledge, in absence of a corresponding
+
+factoid, the topic is set to a generic "non-relevant" pseudo-topic. This process results in the semantics and topic of every utterance or factoid being represented explicitly by separate embeddings.
+
+Specifically, in order to generate the semantic embeddings $s_u$ and $s_k$ of every utterance and factoid respectively, the token sequence $X = ([\mathrm{CLS}], x_1, \dots, x_N, [\mathrm{SEP}], [\mathrm{MODE}], [\mathrm{SEP}])$ is passed through a BART encoder, where the sub-word tokens of the text are denoted as $x_1, \dots, x_N$ . [CLS] and [SEP] are start-of-text and separator pseudo-tokens respectively, while [MODE] is one of [SYS]/[USER]/[KLG] to indicate whether the text belongs to a system utterance, user utterance, or factoid respectively. The state of the [CLS] is used as the utterance's / factoid's semantic embedding. Similarly, to generate the topic embeddings $t_u$ and $t_k$ of every utterances and factoid, the BART encoder sequence input is $T = ([\mathrm{CLS}], x_1, \dots, x_N, [\mathrm{SEP}], [\mathrm{MODE}], [\mathrm{SEP}], [\mathrm{POSIT}], [\mathrm{SEP}])$ , where [POSIT] is the position of the corresponding dialogue history utterance (zero if the text belongs to a factoid). The state of the [CLS] is used as the topic embedding.
+
+# 3.2 Topic-aware Attention
+
+In the next step, TARG calculates feature interactions over the semantic embeddings to extract matching features, which are subsequently weighted by a number of topic-aware attention mechanisms. These attention mechanisms operate over the topic embeddings of utterances and factoids to calculate topic-aware utterance / factoid pair matching representations. The motivation is to incorporate a more flexible way to weight and aggregate matching features of different dialogue history utterances with topic-aware attention, so that the model learns to better attend over them.
+
+Specifically, we design three different types of topic-aware attention that are calculated between each topic embedding $t_{k}^{i}$ , corresponding to the $i$ -th factoid, and the topic embeddings of all utterances in dialogue history $T_{u}$ , as follows:
+
+Dot Product. We concatenate the utterance topic embeddings $t_u^j \in \mathbb{R}^H$ with the factoid topic embedding, and compute the dot product between parameter $w_d \in \mathbb{R}^{2H}$ and the resulting vector:
+
+$$
+A _ {d} ^ {i} = \operatorname {s o f t m a x} \left(\exp \left([ t _ {u} ^ {j}, t _ {k} ^ {i} ] w _ {d}\right), \forall t _ {u} ^ {j} \in T _ {u}\right) \tag {1}
+$$
+
+Bilinear. We compute the bilinear interaction between $t_{u}^{j}$ and $t_{k}^{i}$ and then normalize the result:
+
+$$
+A _ {b} ^ {i} = \operatorname {s o f t m a x} \left(\exp \left(t _ {u} ^ {j} W _ {b} t _ {k} ^ {i \top}\right), \forall t _ {u} ^ {j} \in T _ {u}\right) \tag {2}
+$$
+
+where $W_{b}\in \mathbb{R}^{H\times H}$ is a bilinear interaction matrix.
+
+Outer Product. We compute the outer product between $t_{u}^{j}$ and $t_{k}^{i}$ , then project this feature vector through a fully connected layer and a softmax:
+
+$$
+A _ {o} ^ {i} = \operatorname {s o f t m a x} \left(\exp \left(\left(t _ {u} ^ {j} \times t _ {k} ^ {i}\right) w _ {o}\right), \forall t _ {u} ^ {j} \in T _ {u}\right) \tag {3}
+$$
+
+where $w_{o}\in \mathbb{R}^{H}$ is a parameter and $\times$ is the outer product.
+
+In parallel, we calculate the feature interaction matrix $F_{i}\in \mathbb{R}^{N\times H}$ between the semantic embeddings of all utterances $s_u^j$ and the factoid $s_k^i$ . $N$ is the number of dialogue utterances. Every row $F_{i,j}$ of $F_{i}$ is calculated as follows:
+
+$$
+F _ {i, j} = v _ {f} ^ {\top} \tanh \left(s _ {u} ^ {j} W _ {f} s _ {k} ^ {i \top} + b _ {f}\right) \tag {4}
+$$
+
+with $W_{f}\in \mathbb{R}^{H\times H},b_{f}\in \mathbb{R},v_{f}\in \mathbb{R}^{H}$ being model parameters.
+
+To obtain a unified utterance / factoid pair representation $k_{i}$ for each factoid $i$ , we concatenate the weighted sums of all utterances / factoid interaction embeddings with the different attention mechanisms. The final topic-aware utterance / factoid pair representation across all factoids is $K \in \mathbb{R}^{3H \times M}$ , where $M$ is the number of factoids. The $i$ -th column vector $k_{i}$ is calculated as follows:
+
+$$
+k _ {i} = \left[ A _ {d} ^ {i \top} F _ {i}, A _ {b} ^ {i \top} F _ {i}, A _ {o} ^ {i \top} F _ {i} \right] \tag {5}
+$$
+
+# 3.3 Relevant Knowledge Selection
+
+For the purpose of knowledge selection, TARG treats all external knowledge as a single document, by simply concatenating all available factoids. To account for the possibility that the system response shouldn't be grounded on any external knowledge, a "non-relevant" pseudo-factoid is included.
+
+The relevant knowledge selector takes the topic-aware representations of these sequential factoids as input and predicts a span over the overall document that the system response should be grounded on. Through this process, several knowledge candidates may appear in the selected span.
+
+The grounded span is derived by predicting the start and the end indices of the span in the document. We obtain the probability distribution of the
+
+start index and end index over the entire document by the following equations:
+
+$$
+p ^ {s} = \operatorname {s o f t m a x} \left(W _ {s} ^ {\top} K + b _ {s} ^ {\top}\right), \tag {6}
+$$
+
+$$
+p ^ {e} = \operatorname {s o f t m a x} \left(W _ {e} ^ {\top} K + b _ {e} ^ {\top}\right), \tag {7}
+$$
+
+where $W_{s},W_{e}\in \mathbb{R}^{3H},b_{s},b_{e}\in \mathbb{R}^{M}$ are trainable weight vectors.
+
+# 3.4 System Response Generation
+
+The system response generator decodes the response by attending on the selected knowledge span. Since the span may contain several factoids, we first use a Convolution Neural Network (CNN) to fuse the information. We apply this CNN even when only a single factoid is present in the span for consistency. The CNN receives the topic-aware utterance / factoid pair embeddings of the selected span, and outputs the fusion embedding $f \in \mathbb{R}^{H}$ :
+
+$$
+f = \operatorname {C N N} \left(K _ {:, s: e}\right), \tag {8}
+$$
+
+where $s$ and $e$ are the start and end indexes.
+
+We employ a BART decoder for the system response generator, which takes the fusion embedding $f$ as its initial hidden state. At each decoding step $t$ , the decoder receives the embedding of the previous item $w_{t-1} \in \mathbb{R}^H$ , the previous hidden state $h_{t-1} \in \mathbb{R}^H$ , and the topic-aware utterance / factoid pair embeddings of the selected span $K_{s:e,:}$ , and produces the current hidden state $h_t \in \mathbb{R}^H$ :
+
+$$
+h _ {t} = \operatorname {B A R T} \left(w _ {t - 1}, h _ {t - 1}, K _ {:, s: e}\right). \tag {9}
+$$
+
+A linear transformation layer produces the generated word distribution $p_v$ over the vocabulary:
+
+$$
+p _ {v} = \operatorname {s o f t m a x} \left(V W _ {v} h _ {t} + b _ {v}\right), \tag {10}
+$$
+
+where $V\in \mathbb{R}^{L\times H}$ is the word embeddings of the vocabulary, $L$ is the vocabulary size, and $W_{v}\in \mathbb{R}^{H\times H}$ and $b_{v}\in \mathbb{R}$ are transformation parameters.
+
+# 3.5 Optimization
+
+For each turn, our model selects the relevant knowledge and generates the current turn's response. We optimize the knowledge selector and response generator via their cross-entropy losses $\mathcal{L}_s, \mathcal{L}_g$ :
+
+$$
+\mathcal {L} _ {s} = - \frac {1}{N M} \sum_ {n = 0} ^ {N} \sum_ {m = 0} ^ {M} \left[ \log \left(p _ {y _ {n m} ^ {s}} ^ {s}\right) + \log \left(p _ {y _ {n m} ^ {e}} ^ {e}\right) \right], \tag {11}
+$$
+
+$$
+\mathcal {L} _ {g} = - \frac {1}{N M} \sum_ {n = 0} ^ {N} \sum_ {m = 0} ^ {M} \log P \left(Y _ {n m} \mid D _ {n m}, K _ {n m}\right), \tag {12}
+$$
+
+
Domain
#Dials
#Docs
avg # per doc
tk
sp
p
sec
ssa
1192
109
795
70
17
5
va
1330
138
818
70
20
9
dmv
1305
149
944
77
18
10
studentaid
966
91
1007
75
20
9
all
4793
487
888
73
18
8
+
+Table 1: Number of dialogues, documents and average of content elements per document (tk: tokens, sp: spans, p: paragraphs, sec: sections) per domain in Doc2Dial.
+
+
Domain
#Dials
#Snippets
#per-snip
tk
sent
Hotel
-
1219
9
1.00
Restaurant
-
1650
7
1.00
Train
-
26
15
1.20
Taxi
-
5
19
1.15
all
10,438
2900
8
1.00
+
+Table 2: Number of dialogues, snippets and average number of content elements per snippet (tk: tokens, sent: sentences) per domain in the DSTC9 dataset.
+
+where $N$ is the number of samples, $M$ is the number of dialogue turns, $y_{nm}^{s} / y_{nm}^{e}$ and $p^s /p^e$ respectively represent the ground truth and predicted start/end positions at $m$ -th dialogue turn of sample $n$ , $D_{nm}$ is the input dialogue context, $K_{nm}$ is the input knowledge, and $Y_{nm}$ is the ground truth system response at $m$ -th dialogue turn of sample $n$ . We compute the joint loss $\mathcal{L}$ as follows:
+
+$$
+\mathcal {L} = \lambda \cdot \mathcal {L} _ {s} + (1 - \lambda) \cdot \mathcal {L} _ {g}, \tag {13}
+$$
+
+where $\lambda \in [0,1]$ is a balance coefficient.
+
+f
+
+# 4 Experiments
+
+# 4.1 Datasets
+
+We evaluate our proposed approach on two benchmark data sets on task-oriented dialogue: Doc2Dial (Feng et al., 2020) and DSTC9 (Gunasekara et al., 2020). Doc2Dial is a leaderboard dataset with a withheld test set used for ranking participating systems, which includes conversation dialogues between an assisting system and an end user, with an accompanying set of documents wherein distinct factoids are clearly annotated; further annotations indicate which dialogue utterances are grounded on which factoids of the associated documents. The Doc2Dial dataset includes many
+
+cases of conversations that are grounded on factoids from different documents. By considering the title of each document as a distinct topic, each of these conversations can be interpreted to involve many interconnected topics under a general inquiry, making it an ideal dataset for our approach.
+
+The DSTC9 dataset also includes conversation dialogues, but the external knowledge is in the form of FAQ documents, in essence containing question answering pairs on a specific domain; we consider each pair as a distinct factoid and their domain as the topic. In practice, these FAQs are to be used to answer follow-up user questions that are out of the coverage of a dialogue system's database. Similarly to Doc2Dial, the "topic" in the DSTC9 dataset is also varied throughout the conversations.
+
+As mentioned before, we interpret the title of the factoid's originating document or its annotated domain as the topic of the factoid. However, this assumption would be reasonable only if the factoids are relatively short. Table 1 and Table 2 presents the statistics of the Doc2Dial and DSTC9 datasets, and we can observe that on average the knowledge factoids are indeed relatively short in both datasets.
+
+Information on the evaluation measures and implementation details can be found in the Appendix.
+
+# 4.2 Baselines
+
+In the following experiments, we compare our approach against previously published state-of-the-art approaches on the Doc2Dial and DSTC9 datasets. We have not re-implemented these approaches, but report their already published results for the datasets for which they are available.
+
+Base-D2D (Feng et al., 2020): This is the baseline provided by the Doc2Dial challenge. It consists of an extractive question answering model using a BART (Devlin et al., 2019) encoder to predict the grounding span in the document and a BART model to generate system responses. Base-D2D-ST directly uses the topic of the previous turn as the topic of current turn.
+
+JARS (Khosla et al., 2021): A transformer-based (Lan et al., 2019) extractive question-answering model that extracts relevant spans from the documents. They focus on knowledge selection and do not perform response generation.
+
+
Model
Knowledge Selection
Response Generation
EM
F1
BLEU-4
Base-D2D
37.2
52.9
17.7
Base-D2D-ST
27.6
35.2
12.1
JARS
42.1
57.8
-
CAiRE
45.7
60.1
22.3
RWTH
46.6
62.8
24.4
TARG
49.8
66.4
28.6
+
+Table 3: Performance of TARG and related work on Doc2Dial. Bold denotes best results in that metric.
+
+CAiRE (Xu et al., 2021a): An ensemble approach of fine-tuned RoBERTa (Liu et al., 2019) models, trained with a meta-learning objective over dataaugmented datasets.
+
+RWTH (Daheim et al., 2021): They use a biaffine classifier to model spans, followed by an ensemble for knowledge selection, and a cascaded model that grounds the response prediction on the predicted span for response generation.
+
+Base-DSTC (Gunasekara et al., 2020): The baseline provided by the DSTC9 challenge is a response generation model obtained by fine-tuning the GPT-2 (Budzianowski and Vulić, 2019) model with a standard language modeling objective. Base-DSTC-ST directly uses the topic of the previous turn as the topic of current turn.
+
+KDEAK (Chaudhary et al., 2021): A model which formulates knowledge selection as a factorized retrieval problem with three modules performing domain, entity and knowledge level analyses. The response is generated using a GPT-2 model attending on any relevant retrieved knowledge.
+
+RADGE (Tang et al., 2021): A multi-task method that exploits correlations between dialogue history and keywords extracted from the API through finetuning a sequence of ELECTRA models (Clark et al., 2020).
+
+EGR (Bae et al., 2021): An approach that uses relevance similarity to score factoids, and later reranks them with a rule-based algorithm based on entity names parsed from the dialogue. The response is generated with a BART model.
+
+# 4.3 Experimental Results
+
+Tables 3 and 4 show our results on Doc2Dial and DSTC9 respectively. Observe that TARG performs significantly better than related work in both knowledge selection and response generation on
+
+
Model
Knowledge Selection
Response Generation
MRR@5
Recall@5
BLEU-1
BLEU-2
BLEU-3
BLEU-4
ROUGE-1
ROUGE-2
ROUGE-L
Base-DSTC
0.726
0.877
0.303
0.173
0.100
0.065
0.338
0.136
0.303
Base-DSTC-ST
0.612
0.743
0.251
0.132
0.083
0.047
0.262
0.104
0.244
KDEAK
0.853
0.896
0.355
0.230
0.153
0.104
0.397
0.190
0.357
RADGE
0.937
0.966
0.350
0.217
0.135
0.089
0.393
0.175
0.355
EGR
0.894
0.934
0.361
0.226
0.140
0.096
0.397
0.179
0.353
TARG
0.935
0.972
0.366
0.224
0.156
0.111
0.408
0.183
0.360
+
+Table 4: Performance of TARG and related work on the DSTC9 dataset. Bold denotes best results in that metric.
+
+the Doc2Dial dataset, outperforming the second best system by 3.2, 3.6, and 4.2 points in EM, F1 and BLEU-4 respectively.
+
+On the DSTC9 dataset, TARG outperforms the related work in most metrics, though by narrow margins. Due to the smaller differences, we consider TARG to be performing on par with state-of-the-art on DSTC9. The performance gains of TARG can be explained by the topic-aware mechanism as it provides a more flexible way to weight and aggregate different dialogue history turns. This indicates that better understanding of the dialogue history is crucial for predicting the relevant factoids and generating a reasonable response.
+
+The main difference between datasets is the frequency of topic shifts. The average number of topics per dialogue is 8.83 and 2.58 on Doc2Dial and DSTC9 respectively. This difference can be partially explained by how we infer each dataset's topic, e.g. since the topic in DSTC9 is the domain of each question-answer pair, and multiple pairs belong to the same domain, the topic shifts are considerably more limited than in the Doc2Dial dataset. We further examined how BLEU scores are effected if we isolate DSTC9 dialogues that have more than the average number of topics. Specifically, we evaluated TARG on DSTC9 dialogues which exclusively have 2, 3, and 4 topics, and the BLEU is 0.363, 0.372, and 0.378 respectively. This indicates that more topic shifts provide more signal for the model to exploit.
+
+An additional difference between the datasets is that the topic for each factoid in Doc2Dial can be considered to be fine-grained, e.g. "VA clothing allowance", "About your eligibility", and "How to get these benefits", while in the DSTC9 dataset, the topic for each factoid can be considered coarse-grained, e.g. "Restaurant", "Hotel", "Taxi", and "Train". These differences collectively show that the lower performance on DSTC9 is due to its
+
+
+Figure 3: Ablation study for knowledge selection.
+
+
+
+
+Figure 4: Ablation study for response generation.
+
+
+
+coarse-grained topics, and the lower number of average topic shifts. This suggests that a further division of the documents on more fine-grained topics and introducing more topic shifts in DSTC9 dialogs would help TARG perform better. However, we cannot straightforwardly examine how these two improvements interact with each other, and leave such analysis for future work.
+
+# 5 Discussion
+
+# 5.1 Ablation Study
+
+Here we conduct an ablation study of TARG, to explore the effects of the BART model, topic-aware attention, as well as the different topic attention mechanisms. The results indicate that all these mechanisms are necessary to the performance of knowledge selection and response generation.
+
+Effect of BART: To investigate the effectiveness of using BART in the utterance / factoid encoder and system response generator, we replace BART with a bi-directional LSTM and rerun the model for
+
+
Dialogue History Turns
Knowledge Candidates (Factoids)
U1
U: I wanted to know about career options.
Topic
Context
S1
S: Do you love working with animals?
T1
Exploring Your Career Options
Love working with animals? How about computers? Find possible careers to match your interests.
U2
U: No, what else you got?
S2
S: Do you like working with computers?
T2
Resources for Parents of Students
Are you a parent planning ahead for your child's higher education? Review our resources for parents to learn more about saving early, and finding tax breaks.
U3
U: I use them but wouldn't care to work on computer related things. Do you have any info for the parents to look at?
S3
S: Is this information for a parent that is planning ahead for a child's higher education?
U4
U: yes it is.
S4
S: We have resources for parents to learn more about saving early, and finding tax breaks.
U5
U: Do you have any info on how college can help me?
Generated Response
T3
Preparing for College
Check out Reasons to Attend a College or Career School. Learning About Budgeting Resources for Parents of Students.
Ground Truth
Yes, you can look at our Reasons to Attend a College or Career School section.
TARG
Please look at Reasons to Attend a College or Career School.
RWTH
Yes, Budgeting Resources for Parents of Students.
Doc2Dial-baseline
Review our resources for parents.
+
+
+Figure 5: Case study on Doc2Dial. Dialogue history turns are grounded to knowledge candidates of the same color.
+
+
+Figure 6: Visualization of learned topic-aware attention of dialogue history utterances U-X and S-X (for user and system utterance) for each topic T-X in the example in Figure 5. Lighter spots mean higher attention scores.
+
+
+
+Doc2Dial and DSTC9. As shown in Figures 3 and 4, the performance of the BiLSTM-based model TARG-w/oBART decreases significantly in knowledge selection, and especially in response generation as is indicated by the drop in BLEU. As expected, this indicates that the BART model can create and utilize more accurate representations for dialogue history and unstructured knowledge.
+
+Effect of topic-aware attention: Next we remove the topic-aware attention mechanisms (TARGw/oAtt). Figures 3 and 4 again show that the respective performances deteriorate considerably. This shows that topic-aware attention helps derive an important weighting scheme over the utterances leading to better understanding of dialogue history.
+
+Effect of topic attention mechanisms: Here we compare TARG against TARG-dot, TARG-bilinear,
+
+
Model
Knowledge Selection
Response Generation
EM
F1
BLEU
TARG-dot
0.468
0.642
0.261
TARG-bilinear
0.481
0.652
0.268
TARG-outer
0.489
0.655
0.275
TARG
0.498
0.664
0.286
+
+Table 5: Ablation over different attention mechanisms.
+
+and TARG-outer which use exclusively doc product attention, bilinear attention, and outer product attention respectively. Table 5 shows that dot product attention underperforms compared to bilinear and outer product attention while bilinear attention's performance is comparable with outer product attention. In addition, any isolated attention mechanism performs considerably worse than their fusion, supporting its utilization. We conjecture that this is due to how different attention mechanisms focus on different topic features.
+
+# 5.2 Analysis on Topic Shift
+
+To facilitate a better understanding of how topic shifts occur in our model, we present a case study from the Doc2Dial dataset. On the top of Figure 5 are the previous turns of dialogue history, while on the right is a subset of the available factoids. We can observe how the topic changes throughout the turns of dialogue history (by consulting the corresponding factoid topic), from "Exploring Your Career Options" in turns 1 and 2, to "Resources for
+
+Parents of Students" in turns 3 and 4, and finally "Preparing for College" in turn 5.
+
+On the bottom of Figure 5, we present responses generated by our proposed model TARG, the best of the previous work RWTH, the Doc2Dial-baseline, and the ground truth. Observing the responses and comparing with the ground truth, Doc2Dial-baseline seems to generate irrelevant response, picking the wrong topics from the candidates on the right, i.e. "Resources for Parents of Students". RWTH picks right topic, but it selects wrong factoid "Review our resources for parents" to generate response. TARG generates the more relevant and fluent response of the three, as its topic-aware attention informs knowledge selection to pick the topic and factoid that more naturally follows the dialogue history, i.e. "Reasons to Attend a College or Career School". Furthermore, TARG's BART decoder ensures the fluency of the output.
+
+Figure 6 presents a visualization of TARG's learned topic-aware attention over the dialogue utterances and topics of the case study. This includes Dot Product Attention, Bilinear Attention, and Outer Product Attention. We can see that topic-aware attention captures reasonable dialogue utterance weights for each topic, with the weighing moving from topic T1 to T2 and to T3 as attentions are calculated over the dialogue history utterances. This supports our claim that modeling the topic shifts can be helpful for knowledge selection, and consequently response generation, through better understanding of the dialogue history.
+
+# 6 Conclusion
+
+In this paper, we proposed TARG: "Topic-Aware Response Generation", a topic-aware model which incorporates multiple topic-aware attention mechanisms to derive the importance weighting scheme over both dialogue utterances and unstructured external knowledge, and through that facilitate better dialogue history understanding. Our proposed method achieves state-of-the-art results in both knowledge selection and response generation, outperforming previous state-of-the-art by 3.2, 3.6, and 4.2 points in EM, F1 and BLEU-4 respectively on Doc2Dial, and performing comparably with previous work on DSTC9. To provide further insights, we also presented an ablation study of our model that supported the importance of our method's various components, and discussed a case study accompanied by an analysis of the attention mechanisms.
+
+# Limitations
+
+The main limitation of the proposed method is its reliance on annotated or easily inferrable topics in the external knowledge sources. Future work should explore how this method can be applied when such topics are absent, e.g. by inferring topics through Latent Dirichlet Analysis. Our analysis also shows that our method performs better when these topics are fine-grained and a large number of topic shifts are expected in the dialogue. A more technical limitation of our model is that due to the limited input context size of the pre-trained language model we used, its scalability to long dialogue context is difficult. Finally, due to data availability, we only conducted experiments on English dialogues. While little in our method should be affected by the limited morphology of the English language, our results should be confirmed to hold on more structurally complicated languages.
+
+# Acknowledgements
+
+The authors would like to thank the reviewers for their suggestions on how to improve the paper. They would also like to thank the MindSpore team for providing technical support34.
+
+# References
+
+Hyunkyung Bae, Minwoo Lee, AhHyeon Kim, Cheongjae Lee Hwanhee Lee, Cheoneum Park, Donghyeon Kim, and Kyomin Jung. 2021. Relevance similarity scorer and entity guided reranking for knowledge grounded dialog system. The AAAI Conference on Artificial Intelligence. (AAAI).
+Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. The International Conference on Learning Representations. (ICLR).
+Paweł Budzianowski and Ivan Vulić. 2019. Hello, it's gpt-2-how can i help you? towards the use of pretrained language models for task-oriented dialogue systems. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 15-22.
+Mudit Chaudhary, Borislav Dzodzo, Sida Huang, Chun Hei Lo, Mingzhi Lyu, Lun Yiu Nie, Jinbo Xing, Tianhua Zhang, Xiaoying Zhang, Jingyan Zhou, et al. 2021. Unstructured knowledge access in task-oriented dialog modeling using language inference, knowledge retrieval and knowledge-integrative response generation. The AAAI Conference on Artificial Intelligence. (AAAI).
+
+Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pretraining text encoders as discriminators rather than generators. In The International Conference on Learning Representations. (ICLR).
+Nico Daheim, David Thulke, Christian Dugast, and Hermann Ney. 2021. Cascaded span extraction and response generation for document-grounded dialog. Annual Meeting of the Association for Computational Linguistics. (ACL).
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In The Conference of the North American Chapter of the Association for Computational Linguistics. (NAACL).
+Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. In *The International Conference on Learning Representations*. (ICLR).
+Song Feng, Hui Wan, Chulaka Gunasekara, Siva Patel, Sachindra Joshi, and Luis Lastras. 2020. Doc2dial: A goal-oriented document-grounded dialogue dataset. In The Conference on Empirical Methods in Natural Language Processing. (EMNLP), pages 8118-8128.
+Yue Feng, Zhen Han, Mingming Sun, and Ping Li. 2022a. Multi-hop open-domain question answering over structured and unstructured knowledge. In *Findings of the Association for Computational Linguistics.* (NAACL), pages 151–156.
+Yue Feng, Aldo Lipani, Fanghua Ye, Qiang Zhang, and Emine Yilmaz. 2022b. Dynamic schema graph fusion network for multi-domain dialogue state tracking. In Annual Meeting of the Association for Computational Linguistics. (ACL), pages 115-126.
+Yue Feng, Zhaochun Ren, Weijie Zhao, Mingming Sun, and Ping Li. 2021a. Multi-type textual reasoning for product-aware answer generation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1135-1145.
+Yue Feng, Yang Wang, and Hang Li. 2021b. A sequence-to-sequence approach to dialogue state tracking. In Annual Meeting of the Association for Computational Linguistics. (ACL), pages 1714-1725.
+Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In The AAAI Conference on Artificial Intelligence. (AAAI), volume 32.
+Jia-Chen Gu, Zhen-Hua Ling, Xiaodan Zhu, and Quan Liu. 2019. Dually interactive matching network for personalized response selection in retrieval-based chatbots. In The Conference on Empirical Methods in Natural Language Processing. (EMNLP), pages 1845-1854.
+
+Jia-Chen Gu, Zhenhua Ling, Quan Liu, Zhigang Chen, and Xiaodan Zhu. 2020. Filtering before iteratively referring for knowledge-grounded response selection in retrieval-based chatbots. In *The Conference on Empirical Methods in Natural Language Processing* (EMNLP), pages 1412-1422.
+Chulaka Gunasekara, Seokhwan Kim, Luis Fernando D'Haro, Abhinav Rastogi, Yun-Nung Chen, Mihail Eric, Behnam Hedayatnia, Karthik Gopalakrishnan, Yang Liu, Chao-Wei Huang, et al. 2020. Overview of the ninth dialog system technology challenge: Dstc9. arXiv preprint arXiv:2011.06486.
+Sopan Khosla, Justin Lovelace, Ritam Dutt, and Adithya Pratapa. 2021. Team jars: Dialdoc subtask 1-improved knowledge identification with supervised out-of-domain pretraining. In Annual Meeting of the Association for Computational Linguistics. (ACL).
+Seokhwan Kim, Mihail Eric, Karthik Gopalakrishnan, Behnam Hedayatnia, Yang Liu, and Dilek Hakkani-Tur. 2020. Beyond domain apis: Task-oriented conversational modeling with unstructured knowledge access. In Annual Meeting of the Special Interest Group on Discourse and Dialogue. (SIGDIAL), pages 278-289.
+Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. The International Conference on Learning Representations. (ICLR).
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Annual Meeting of the Association for Computational Linguistics. (ACL), pages 7871-7880.
+Linxiao Li, Can Xu, Wei Wu, Yufan Zhao, Xueliang Zhao, and Chongyang Tao. 2020. Zero-resource knowledge-grounded dialogue generation. The Conference on Neural Information Processing Systems. (NIPS).
+Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, Barcelona, Spain. Annual Meeting of the Association for Computational Linguistics. (ACL).
+Shuman Liu, Hongshen Chen, Zhaochun Ren, Yang Feng, Qun Liu, and Dawei Yin. 2018. Knowledge diffusion for neural dialogue generation. In Annual Meeting of the Association for Computational Linguistics. (ACL), pages 1489-1498.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. The Computing Research Repository. (CoRR).
+
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Annual Meeting of the Association for Computational Linguistics. (ACL).
+Baolin Peng, Chunyuan Li, Jinchao Li, Shahin Shayandeh, Lars Liden, and Jianfeng Gao. 2020. Soloist: Few-shot task-oriented dialog with a single pretrained auto-regressive model. The Computing Research Repository. (CoRR).
+Lianhui Qin, Michel Galley, Chris Brockett, Xiaodong Liu, Xiang Gao, William B Dolan, Yejin Choi, and Jianfeng Gao. 2019. Conversing by reading: Contentful neural conversation with on-demand machine reading. In Annual Meeting of the Association for Computational Linguistics. (ACL), pages 5427-5436.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
+Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. In *The AAAI Conference on Artificial Intelligence*. (AAAI), volume 34, pages 8689-8696.
+Pengjie Ren, Zhumin Chen, Christof Monz, Jun Ma, and Maarten de Rijke. 2020. Thinking globally, acting locally: Distantly supervised global-to-local knowledge selection for background based conversation. In The AAAI Conference on Artificial Intelligence. (AAAI), volume 34, pages 8697-8704.
+Zhengxiang Shi, Yue Feng, and Aldo Lipani. 2022. Learning to execute or ask clarification questions. Findings of the North American Chapter of the Association for Computational Linguistics. (NAACL).
+Yiping Song, Cheng-Te Li, Jian-Yun Nie, Ming Zhang, Dongyan Zhao, and Rui Yan. 2018. An ensemble of retrieval-based and generation-based humancomputer conversation systems. In The International Joint Conference on Artificial Intelligence. (IJCAI).
+Yajing Sun, Yue Hu, Luxi Xing, Jing Yu, and Yuqiang Xie. 2020. History-adaption knowledge incorporation mechanism for multi-turn dialogue system. In The AAAI Conference on Artificial Intelligence. (AAAI), volume 34, pages 8944-8951.
+Liang Tang, Qinghua Shang, Kaokao Lv, Zixi Fu, Shijiang Zhang, Chuanming Huang, , and Zhuo Zhang. 2021. RADGE: Relevance learning and generation evaluating method for task-oriented conversational system-anonymous version. The AAAI Conference on Artificial Intelligence. (AAAI).
+Jian Wang, Junhao Liu, Wei Bi, Xiaojiang Liu, Kejing He, Ruifeng Xu, and Min Yang. 2020. Improving knowledge-aware dialogue generation via knowledge
+
+base question answering. In The AAAI Conference on Artificial Intelligence. (AAAI), volume 34, pages 9169-9176.
+Zeqiu Wu, Michel Galley, Chris Brockett, Yizhe Zhang, Xiang Gao, Chris Quirk, Rik Koncel-Kedziorski, Jianfeng Gao, Hannaneh Hajishirzi, Mari Ostendorf, et al. 2021. A controllable model of grounded response generation. In The AAAI Conference on Artificial Intelligence. (AAAI), volume 35, pages 14085-14093.
+Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In The AAAI Conference on Artificial Intelligence. (AAAI).
+Yan Xu, Etsuko Ishii, Genta Indra Winata, Zhaojiang Lin, Andrea Madotto, Zihan Liu, Peng Xu, and Pascale Fung. 2021a. Caire in dialdoc21: Data augmentation for information-seeking dialogue system. In Annual Meeting of the Association for Computational Linguistics. (ACL).
+Yi Xu, Hai Zhao, and Zhuosheng Zhang. 2021b. Topic-aware multi-turn dialogue modeling. In The AAAI Conference on Artificial Intelligence. (AAAI).
+Rui Yan and Dongyan Zhao. 2018. Coupled context modeling for deep chit-chat: towards conversations between human and computer. In *The SIGKDD Conference on Knowledge Discovery and Data Mining* (SIGKDD), pages 2574-2583.
+Semin Yavuz, Abhinav Rastogi, Guan-Lin Chao, and Dilek Hakkani-Tur. 2019. Deepcopy: Grounded response generation with hierarchical pointer networks. In Annual Meeting of the Special Interest Group on Discourse and Dialogue. (SIGDIAL), pages 122-132.
+Fanghua Ye, Yue Feng, and Emine Yilmaz. 2022. Assist: Towards label noise-robust dialogue state tracking. In Findings of the Association for Computational Linguistics. (ACL), pages 2719-2731.
+Tom Young, Erik Cambria, Iti Chaturvedi, Hao Zhou, Subham Biswas, and Minlie Huang. 2018. Augmenting end-to-end dialogue systems with commonsense knowledge. In The AAAI Conference on Artificial Intelligence. (AAAI).
+Hainan Zhang, Yanyan Lan, Jiafeng Guo, Jun Xu, and Xueqi Cheng. 2018. Reinforcing coherence for sequence to sequence model in dialogue generation. In The International Joint Conference on Artificial Intelligence. (IJCAI), pages 4567-4573.
+Hainan Zhang, Yanyan Lan, Liang Pang, Hongshen Chen, Zhuoye Ding, and Dawei Yin. 2020. Modeling topical relevance for multi-turn dialogue generation. The International Joint Conference on Artificial Intelligence. (IJCAI).
+Xueliang Zhao, Chongyang Tao, Wei Wu, Can Xu, Dongyan Zhao, and Rui Yan. 2019. A document-grounded matching network for response selection
+
+in retrieval-based chatbots. The International Joint Conference on Artificial Intelligence. (IJCAI).
+Xueliang Zhao, Wei Wu, Can Xu, Chongyang Tao, Dongyan Zhao, and Rui Yan. 2020. Knowledge-grounded dialogue generation with pre-trained language models. In The Conference on Empirical Methods in Natural Language Processing. (EMNLP), pages 3377-3390.
+
+# A Implementation Details
+
+We use a pre-trained BART-base model to encode utterances and factoids. The max sentence length is set to 50 and the max number of dialogue turns is set to 15. The hidden size of attentions are all set to 768. The size of the convolution and pooling kernels are set to (3, 3, 3). The joint loss $\lambda$ is 0.5. The dropout probability is 0.1. The batch size is set to 8. We optimize with Adam and an initial learning rate of 3e-5.
+
+# B Evaluation Measures
+
+We make use of the following automatic evaluation metrics in our experiments. For each dataset, we calculate the metrics used by the respective challenges for consistency.
+
+Exact Match (EM): This measures what part of the predicted knowledge span matches the ground truth factoid exactly.
+
+Token-Level F1: We cast the predicted spans and ground truth factoids as bags of tokens, and compute F1 between them.
+
+MRR@5: A metric based on the rank of the first ground truth factoid in a system's top-5 ranking.
+
+Recall@5: This metric counts how many ground truth factoids occur in a system's top-5 ranking.
+
+BLEU-X (Papineni et al., 2002): BLEU-X estimates a generated response's via measuring its n-gram precision against the ground truth. X denotes the maximum size of the considered n-grams (i.e. unigrams, bigrams, trigrams, 4-grams).
+
+ROUGE-X (Lin, 2004): ROUGE-X measures n-gram recall between generated and ground truth response. ROUGE-L measures the longest common word subsequence.
+
+# C Analysis of Knowledge Selection
+
+We further conduct an analysis on how the selected knowledge span differs from turn to turn as this also indicates a shift in topic. Table 6 shows the average number of knowledge span changes as observed in the grounded truth and in the predicted output of Base-D2D and TARG, on the Doc2Dial dataset. We can see that the knowledge span changes are frequent in the ground truth, and that TARG's average knowledge span changes is closer to that of the ground truth. This indicates that TARG can more accurately follow the knowledge span changes in the dataset than Base-D2D.
+
+We further investigate the number of the selected factoids per turn in Doc2Dial, i.e. the average
+
+
Model
Knowledge Changes
Factoid
Ground Truth
9.22
1.46
Base-D2D
8.73
1.23
TARG
9.02
1.54
+
+Table 6: Average number of knowledge changes per dialogue and average number of factoid per turn in Doc2Dial.
+
+number of factoids covered by the predicated spans. As shown in Table 6, we can again see that TARG's behavior is closer to that of the ground truth.
\ No newline at end of file
diff --git a/topicawareresponsegenerationintaskorienteddialoguewithunstructuredknowledgeaccess/images.zip b/topicawareresponsegenerationintaskorienteddialoguewithunstructuredknowledgeaccess/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..5450cf42fae5aab14dbf0f143eba2ed7b61b9251
--- /dev/null
+++ b/topicawareresponsegenerationintaskorienteddialoguewithunstructuredknowledgeaccess/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:485a6fd0a98d09f02a5c99ec49a6a5a3eaaee546cd75a1e18b465f532efb793a
+size 545172
diff --git a/topicawareresponsegenerationintaskorienteddialoguewithunstructuredknowledgeaccess/layout.json b/topicawareresponsegenerationintaskorienteddialoguewithunstructuredknowledgeaccess/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e4ad8aee2105235e808b877fd660775442e160ae
--- /dev/null
+++ b/topicawareresponsegenerationintaskorienteddialoguewithunstructuredknowledgeaccess/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c3cd1ac3ef709352077e361339f043ac0c9a133a1e5c2153baa65c7ac0235461
+size 403628
diff --git a/topictaxonomyexpansionviahierarchyawaretopicphrasegeneration/c4109109-511f-4fc5-978e-5778a062cd50_content_list.json b/topictaxonomyexpansionviahierarchyawaretopicphrasegeneration/c4109109-511f-4fc5-978e-5778a062cd50_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..49b22a7fb323fc3735f0afba62882a51f7f1a294
--- /dev/null
+++ b/topictaxonomyexpansionviahierarchyawaretopicphrasegeneration/c4109109-511f-4fc5-978e-5778a062cd50_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d02f9234f8d54a2069b563d5304baca85bc5d1c7d7743eaf09e30918fd0a462f
+size 106901
diff --git a/topictaxonomyexpansionviahierarchyawaretopicphrasegeneration/c4109109-511f-4fc5-978e-5778a062cd50_model.json b/topictaxonomyexpansionviahierarchyawaretopicphrasegeneration/c4109109-511f-4fc5-978e-5778a062cd50_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f2b3d55dd57d81c6cf32a90bf9031d081f1507a2
--- /dev/null
+++ b/topictaxonomyexpansionviahierarchyawaretopicphrasegeneration/c4109109-511f-4fc5-978e-5778a062cd50_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1bca3f48d6c4ebfc9fe174e11bdc2e485e5ac1339985d350f447783e2531e83d
+size 125214
diff --git a/topictaxonomyexpansionviahierarchyawaretopicphrasegeneration/c4109109-511f-4fc5-978e-5778a062cd50_origin.pdf b/topictaxonomyexpansionviahierarchyawaretopicphrasegeneration/c4109109-511f-4fc5-978e-5778a062cd50_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..1fe563252dbf4144d83f3df91baacdf9c2d41e9f
--- /dev/null
+++ b/topictaxonomyexpansionviahierarchyawaretopicphrasegeneration/c4109109-511f-4fc5-978e-5778a062cd50_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3203d5aafbb889032e5207361110a321aec0bdf73b108546104f5763d3052404
+size 1162079
diff --git a/topictaxonomyexpansionviahierarchyawaretopicphrasegeneration/full.md b/topictaxonomyexpansionviahierarchyawaretopicphrasegeneration/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..ae32e86113624c7d7cfd40b3ba3fecc4db8cc86f
--- /dev/null
+++ b/topictaxonomyexpansionviahierarchyawaretopicphrasegeneration/full.md
@@ -0,0 +1,410 @@
+# Topic Taxonomy Expansion via Hierarchy-Aware Topic Phrase Generation
+
+Dongha Lee1, Jiaming Shen2, Seonghyeon Lee3, Susik Yoon1, Hwanjo Yu3*, and Jiawei Han1
+
+1University of Illinois at Urbana-Champaign (UIUC), Urbana, IL, United States
+
+2Google Research, New York, NY, United States
+
+3Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea {donghal,susik,hanj} @illinois.edu, jmshen@google.com, {sh0416,hwanjoyu} @postech.ac.kr
+
+# Abstract
+
+Topic taxonomies display hierarchical topic structures of a text corpus and provide topical knowledge to enhance various NLP applications. To dynamically incorporate new topic information, several recent studies have tried to expand (or complete) a topic taxonomy by inserting emerging topics identified in a set of new documents. However, existing methods focus only on frequent terms in documents and the local topic-subtopic relations in a taxonomy, which leads to limited topic term coverage and fails to model the global topic hierarchy. In this work, we propose a novel framework for topic taxonomy expansion, named TopicExpan, which directly generates topic-related terms belonging to new topics. Specifically, TopicExpan leverages the hierarchical relation structure surrounding a new topic and the textual content of an input document for topic term generation. This approach encourages newly-inserted topics to further cover important but less frequent terms as well as to keep their relation consistency within the taxonomy. Experimental results on two real-world text corpora show that TopicExpan significantly outperforms other baseline methods in terms of the quality of output taxonomies.
+
+# 1 Introduction
+
+Topic taxonomy is a tree-structured representation of hierarchical relationship among multiple topics found in a text corpus (Zhang et al., 2018; Shang et al., 2020; Meng et al., 2020). Each topic node is defined by a set of semantically coherent terms related to a specific topic (i.e., topic term cluster), and each edge implies the "general-specific" relation between two topics (i.e., topic-subtopic). With the knowledge of hierarchical topic structures, topic taxonomies have been successfully utilized in many text mining applications, such as text summarization (Petinot et al., 2011; Bairi et al., 2015) and categorization (Meng et al., 2019; Shen et al., 2021).
+
+
+Figure 1: An example of topic taxonomy expansion. The known (i.e., existing) topics and novel topics are in single-line and double-line boxes, respectively.
+
+Recently, automated expansion (or completion) of an existing topic taxonomy has been studied (Huang et al., 2020; Lee et al., 2022), which helps people to incrementally manage the topic knowledge within fast-growing document collections. This task has two technical challenges: (1) identifying new topics by collecting topic-related terms that have novel semantics, and (2) inserting the new topics at the right position in the hierarchy. In Figure 1, for example, a new topic node painter that consists of its topic-related terms [baroque painter, realist painter, portraitist, ...] is inserted at the child position (i.e., subtopic) of the existing topic node artist, without breaking the consistency of topic relations with the neighbor nodes.
+
+The existing methods for topic taxonomy expansion, however, suffer from two major limitations: (1) Limited term coverage – They identify new topics from a set of candidate terms, while relying on entity extraction tools (Zeng et al., 2020) or phrase mining techniques (Liu et al., 2015; Shang et al., 2018; Gu et al., 2021) to obtain the high-frequency candidate terms in a corpus. Such extraction techniques will miss a lot of topic-related terms that have low frequency, and thus lead to an incomplete set of candidate terms (Zeng et al., 2021). (2) Inconsistent topic relation – As they insert new topics by considering only the first-order relation
+
+between two topics (i.e., a topic and its subtopic), the newly-inserted topics are likely to have inconsistent relations with other existing topics. The expansion strategy based on the first-order topic relation is inadequate to capture the holistic structure information of the existing topic taxonomy.
+
+As a solution to both challenges, we present TopicExpan, a new framework that expands the topic taxonomy via hierarchy-aware topic term generation. The key idea is to directly generate topic-related terms from documents by taking the topic hierarchy into consideration. From the perspective of term coverage, this generation-based approach can identify more multi-word terms even if they have low frequency in the given corpus (Zeng et al., 2021), compared to the extraction-based approach only working on the extracted candidate terms that frequently appear in the corpus. To combat the challenge of relation inconsistency, we utilize graph neural networks (GNNs) to encode the relation structure surrounding each topic (Kipf and Welling, 2017; Shen et al., 2021) and generate topic-related terms conditioned on these relation structure encodings. This allows us to accurately capture a hierarchical structure beyond the first-order relation between two topics.
+
+To be specific, TopicExpan consists of the training step and the expansion step. The training step is for optimizing a neural model that topic-conditionally generates a term from an input document. Technically, for topic-conditional term generation, the model utilizes the relation structure of a topic node as well as the textual content of an input document. The expansion step is for discovering novel topics and inserting them into the topic taxonomy. To this end, TopicExpan places a virtual topic node underneath each existing topic node, and then it generates the topic terms conditioned on the virtual topic by utilizing the trained model. In the end, it performs clustering on the generated terms to identify multiple novel topics, which are inserted at the position of the virtual topic node.
+
+Contributions. The main contributions of this paper can be summarized as follows: (1) We propose a novel framework for topic taxonomy expansion, which tackles the challenges in topic term coverage and topic relation consistency via hierarchy-aware topic term generation. (2) We present a neural model to generate a topic-related term from an input document topic-conditionally by capturing the hierarchical relation structure surrounding
+
+each topic based on GNNs. (3) Our comprehensive evaluation on two real-world datasets demonstrates that output taxonomies of TopicExpan show better relation consistency as well as term coverage, compared to that of other baseline methods.
+
+# 2 Related Work
+
+Topic Taxonomy Construction. To build a topic taxonomy of a given corpus from scratch, the state-of-the-art methods have focused on finding out discriminative term clusters in a hierarchical manner (Zhang et al., 2018; Meng et al., 2020; Shang et al., 2020). Several recent studies have started to enrich and expand an existing topic taxonomy by discovering novel topics from a corpus and inserting them into the taxonomy (Huang et al., 2020; Lee et al., 2022). They leverage the initial topic taxonomy as supervision for learning the hierarchical relation among topics. To be specific, they discover new subtopics that should be inserted at the child of each topic, by using a relation classifier trained on (parent, child) topic pairs (Huang et al., 2020) or performing novel subtopic clustering (Lee et al., 2022). However, all the methods rely on candidate terms extracted from a corpus and also consider only the first-order relation between two topics, which degrades the term coverage and relation consistency of output topic taxonomies.
+
+GNN-based Taxonomy Expansion. Recently, there have been several attempts to employ GNNs for expanding a given entity taxonomy (Mao et al., 2020; Shen et al., 2020; Zeng et al., 2021). Their goal is to figure out the correct position where a new entity should be inserted, by capturing structural information of the taxonomy based on GNNs. They mainly focus on an entity taxonomy that shows the hierarchical semantic relation among fine-grained entities (or terms), requiring plenty of nodes and edges in a given taxonomy to effectively learn the inter-entity relation. In contrast, a topic taxonomy represents coarse-grained topics (or high-level concepts) that encode discriminative term meanings as well as term co-occurrences in documents (Figure 1), which allows its node to correspond to a topic class of documents. That is, it is not straightforward to apply such methods to a topic taxonomy with much fewer nodes and edges, and thus how to enrich a topic taxonomy with GNNs remains an important research question.
+
+Keyphrase Generation. The task of keyphrase prediction aims to find condensed terms that con
+
+
+Figure 2: The overall process of TopicExpan. (Left) It trains a unified model via multi-task learning of topic-document similarity prediction and topic-conditional phrase generation. (Right) It selectively collects the phrases conditionally-generated for a virtual topic node, and then it identifies multiple novel topics from phrase clusters.
+
+
+
+cisely summarize the primary information of an input document (Liu et al., 2020). The state-of-the-art approach to this problem is modeling it as the text generation task, which sequentially generates word tokens of a keyphrase (Meng et al., 2017; Zhou et al., 2021). They adopt neural architectures as a text encoder and decoder, such as an RNN/GRU (Meng et al., 2017; Wang et al., 2019) and a transformer (Zhou et al., 2021). Furthermore, several methods have incorporated a neural topic model into the generation process (Wang et al., 2019; Zhou et al., 2021) to fully utilize the topic information extracted in an unsupervised way. Despite their effectiveness, none of them has focused on topic-conditional generation of keyphrases from a document, as well as hierarchical modeling of topic relations.
+
+# 3 Problem Formulation
+
+Notations. A topic taxonomy $\mathcal{T} = (\mathcal{C},\mathcal{R})$ is a tree structure about topics, where each node $(\in \mathcal{C})$ represents a single conceptual topic and each edge $(\in \mathcal{R})$ implies the hierarchical relation between a topic and its subtopic. A topic node $c_{j}\in \mathcal{C}$ is described by the set of topic-related terms, denoted by $\mathcal{P}_j$ (i.e., term cluster for the topic $c_{j}$ ), where the most representative term (i.e., center term) serves as the topic name. Each document $d_{i} = [v_{i1},\ldots ,v_{iL}]$ and each term $p_k = [v_{k1},\dots,v_{kT}]$ in a given corpus $\mathcal{D}$ is the sequence of $L$ and $T$ word tokens in the vocabulary set $v\in \mathcal{V}$ , respectively. Here, each term is regarded as a phrase that consists of one or
+
+more word tokens, so the terms "phrase" and "term" are used interchangeably in this paper.
+
+Problem Definition. Given a text corpus $\mathcal{D}$ and an initial topic taxonomy $\mathcal{T}$ , the task of topic taxonomy expansion aims to discover novel topics by collecting the topic-related terms from $\mathcal{D}$ and insert them at the right position in $\mathcal{T}$ (Figure 1).
+
+# 4 TopicExpan: Proposed Framework
+
+# 4.1 Overview
+
+TopicExpan consists of (1) the training step that trains a neural model for generating phrases topic-conditionally from documents (Figure 2 Left) and (2) the expansion step that identifies novel topics for each new position in the taxonomy by using the trained model (Figure 2 Right). The detailed algorithm is described in Section A.1.
+
+Training Step. TopicExpan optimizes parameters of its neural model to maximize the total likelihood of the initial taxonomy $\mathcal{T}$ given the corpus $\mathcal{D}$ .
+
+$$
+\begin{array}{l} P (\mathcal {T}; \mathcal {D}) = \prod_ {c _ {j} \in \mathcal {C}} \prod_ {p _ {k} \in \mathcal {P} _ {j}} P (p _ {k} | c _ {j}; \mathcal {D}) \\ = \prod_ {c _ {j} \in \mathcal {C}} \prod_ {p _ {k} \in \mathcal {P} _ {j}} \sum_ {d _ {i} \in \mathcal {D}} P \left(p _ {k}, d _ {i} \mid c _ {j}\right) \tag {1} \\ \approx \prod_ {c _ {j} \in \mathcal {C}} \prod_ {d _ {i} \in \mathcal {D}} \prod_ {p _ {k} \in \mathcal {P} _ {j} \cap d _ {i}} P (p _ {k} | d _ {i}, c _ {j}) P (d _ {i} | c _ {j}). \\ \end{array}
+$$
+
+In the end, the total likelihood is factorized into the topic-conditional likelihoods of a document and a phrase, i.e., $P(d_{i}|c_{j})$ and $P(p_{k}|d_{i},c_{j})$ , for all the positive triples $(c_{j},d_{i},p_{k})$ collected from $\mathcal{T}$ and $\mathcal{D}$ . That is, each triple satisfies the condition that its phrase $p_k$ belongs to the topic $c_{j}$ (i.e., $p_k\in \mathcal{P}_j$ ) and also appears in the document $d_{i}$ .
+
+To maximize Equation (1), we propose a unified model for estimating $P(d_{i}|c_{j})$ and $P(p_{k}|d_{i},c_{j})$ via
+
+
+Figure 3: The topic encoder architecture. It computes topic representations by encoding a topic relation graph.
+
+the tasks of topic-document similarity prediction and topic-conditional phrase generation, respectively. In Figure 2 Left, for each positive triple $(c_j,d_i,p_k)$ , the former task increases the similarity between the topic $c_{j}$ and the document $d_{i}$ . This similarity indicates how confidently the document $d_{i}$ includes any sentences or mentions about the topic $c_{j}$ . At the same time, the latter task maximizes the decoding probability of the phrase $p_k$ (i.e., generates the phrase) conditioned on the topic $c_{j}$ and the document $d_{i}$ . The model parameters are jointly optimized for the two tasks, and each of them will be discussed in Section 4.3.
+
+Expansion Step. TopicExpan expands the topic taxonomy by discovering novel topics and inserting them into the taxonomy. To this end, it utilizes the trained model to generate the phrases $p$ that have a high topic-conditional likelihood $P(p|c^*; \mathcal{D})$ for a new topic $c^*$ from a given corpus $\mathcal{D}$ . In Figure 2 Right, it first places a virtual topic node $c_j^*$ at a valid insertion position in the hierarchy (i.e., a child position of a topic node $c_j$ ), and then it collects the phrases relevant to the virtual topic by generating them from documents $d_i \in \mathcal{D}$ . Finally, it identifies multiple novel topics by clustering the collected phrases into semantically coherent but distinguishable clusters, which are inserted as the new topic nodes at the position of the virtual node. The details will be presented in Section 4.4.
+
+# 4.2 Encoder Architectures
+
+For modeling the two likelihoods $P(d_{i}|c_{j})$ and $P(p_{k}|d_{i},c_{j})$ , we introduce a topic encoder and a document encoder, which respectively computes the representation of a topic $c_{j}$ and a document $d_{i}$ .
+
+# 4.2.1 Topic Encoder
+
+There are two important challenges of designing the architecture of a topic encoder: (1) The topic
+
+encoder should be hierarchy-aware so that the representation of each topic can accurately encode the hierarchical relation with its neighbor topics, and (2) the representation of each topic needs to be discriminative so that it can encode semantics distinguishable from that of the sibling topics. Hence, we adopt graph convolutional networks (GCNs) (Kipf and Welling, 2017) to capture the semantic relation structure surrounding each topic.
+
+We first construct a topic relation graph $\mathcal{G}$ by enriching the edges of the given hierarchy $\mathcal{T}$ to model heterogeneous relations between topics, as shown in Figure 3. The graph contains three different types of inter-topic relations: (1) downward, (2) upward, and (3) sideward. The downward and upward edges respectively capture the top-down and bottom-up relations (i.e., hierarchy-awareness). We additionally insert the sideward edges between sibling nodes that have the same parent node. Unlike the downward and upward edges, the sideward edges pass the information in a negative way to make topic representations discriminative among the sibling topics. The topic representation of $c_{j}$ at the $m$ -th GCN layer is computed by
+
+$$
+\boldsymbol {h} _ {j} ^ {(m)} = \phi \left(\sum_ {(i, j) \in \mathcal {G}} \alpha_ {r (i, j)} \cdot \boldsymbol {W} _ {r (i, j)} ^ {(m - 1)} \cdot \boldsymbol {h} _ {i} ^ {(m - 1)}\right), \tag {2}
+$$
+
+where $\phi$ is the activation function, $r(i,j) \in \{\text{down}, \text{up}, \text{side}\}$ represents the relation type of an edge $(i,j)$ , and $\alpha$ indicates either positive or negative aggregation according to its relation type; i.e., $\alpha_{\text{down}} = \alpha_{\text{up}} = +1$ and $\alpha_{\text{side}} = -1$ . The GloVe word vectors (Pennington et al., 2014) for each topic name are used as the base node features (i.e., $h_j^{(0)}$ ) after being averaged for all tokens in the topic name. Using a stack of $M$ GCN layers, we finally obtain the representation of a target topic node $c_j$ (i.e., the topic node that we want to obtain its representation) by $c_j = h_j^{(M)}$ .
+
+The topic encoder should also be able to obtain the representation of a virtual topic node, whose topic name is not determined yet, during the expansion step. For this reason, we mask the base node features of a target topic node regardless of whether the node is virtual or not, as depicted in Figures 3(a) and (b). In other words, with the name of a target topic masked, the topic representation encodes the relation structure of its $M$ -hop neighbor topics.
+
+# 4.2.2 Document Encoder
+
+For the document encoder, we employ a pretrained language model, BERT (Devlin et al., 2019). It
+
+models the interaction among the tokens based on the self-attention mechanism, thereby obtaining each token's contextualized representation, denoted by $\left[\pmb{v}_{i1},\dots,\pmb{v}_{iL}\right]$ . A document representation $d_{i}$ is obtained by mean pooling in the end.
+
+# 4.3 Learning Topic Taxonomy
+
+In the training step, TopicExpan optimizes model parameters by using positive triples as training data $\mathcal{X} = \{(c_j,d_i,p_k)|p_k\in \mathcal{P}_j\cap d_i,\forall c_j\in \mathcal{C},\forall d_i\in$ $\mathcal{D}\}$ via multi-task learning of topic-document similarity prediction and topic-conditional phrase generation (Sections 4.3.1 and 4.3.2).
+
+# 4.3.1 Topic-Document Similarity Prediction
+
+The first task is to learn the similarity between a topic and a document. We define the topic-document similarity score by bilinear interaction between their representations, i.e., $c_{j}^{\top} M \mathbf{d}_{i}$ where $M$ is the trainable interaction matrix. The topic-conditional likelihood of a document in Equation (1) is optimized by using this topic-document similarity score, $P(d_{i} | c_{j}) \propto \exp(c_{j}^{\top} M \mathbf{d}_{i})$ .
+
+The loss function is defined based on InfoNCE (Oord et al., 2018), which pulls positively-related documents into the topic while pushing away negatively-related documents from the topic.
+
+$$
+\mathcal {L} _ {s i m} = - \sum_ {\left(c _ {j}, d _ {i}, p _ {k}\right) \in \mathcal {X}} \log \frac {\exp \left(\boldsymbol {c} _ {j} ^ {\top} \boldsymbol {M} \boldsymbol {d} _ {i} / \gamma\right)}{\sum_ {i ^ {\prime}} \exp \left(\boldsymbol {c} _ {j} ^ {\top} \boldsymbol {M} \boldsymbol {d} _ {i ^ {\prime}} / \gamma\right)}, \tag {3}
+$$
+
+where $\gamma$ is the temperature parameter. For each triple $(c_{j},d_{i},p_{k})$ , we use its document $d_{i}$ as positive and regard documents from all the other triples in the current mini-batch as negatives.
+
+# 4.3.2 Topic-Conditional Phrase Generation
+
+The second task is to generate phrases from a document being conditioned on a topic. For the phrase generator, we employ the architecture of the transformer decoder (Vaswani et al., 2017).
+
+For topic-conditional phrase generation, the context representation, $Q(c_{j},d_{i})$ , needs to be modeled by fusing the textual content of a document $d_{i}$ as well as the relation structure of a topic $c_{j}$ . To leverage the textual features while focusing on the topic-relevant tokens, we compute topic-attentive token representations and pass them as the input context of the transformer decoder. Precisely, the topic-attention score of the $l$ -th token in the document $d_{i}$ , $\beta_{l}(c_{j},d_{i})$ , is defined by its similarity with the topic.
+
+$$
+\beta_ {l} \left(c _ {j}, d _ {i}\right) = \exp \left(\boldsymbol {c} _ {j} ^ {\top} \boldsymbol {M} \boldsymbol {v} _ {i l}\right) / \sum_ {l ^ {\prime} = 1} ^ {L} \exp \left(\boldsymbol {c} _ {j} ^ {\top} \boldsymbol {M} \boldsymbol {v} _ {i l ^ {\prime}}\right) \tag {4}
+$$
+
+$$
+\boldsymbol {Q} (c _ {j}, d _ {i}) = \left[ \beta_ {1} \left(c _ {j}, d _ {i}\right) \cdot \boldsymbol {v} _ {i 1}, \dots , \beta_ {L} \left(c _ {j}, d _ {i}\right) \cdot \boldsymbol {v} _ {i L} \right],
+$$
+
+where the interaction matrix $M$ is weight-shared with the one in Equation (3). Then, the sequential generation process of a token $\hat{v}_t$ is described by
+
+$$
+\begin{array}{l} \boldsymbol {s} _ {t} = \operatorname {D e c o d e r} \left(\hat {v} _ {< t}; \boldsymbol {Q} \left(c _ {j}, d _ {i}\right)\right) \\ \hat {v} _ {t} \sim \operatorname {S o f t m a x} (\operatorname {F F N} (s _ {t})). \tag {5} \\ \end{array}
+$$
+
+FFN is the feed-forward networks for mapping a state vector $s_t$ into vocabulary logits. Starting from the first token [BOP], the phrase is acquired by sequentially decoding a next token $\hat{v}_t$ until the last token [EOP] is obtained; the two special tokens indicate the begin and the end of the phrase.
+
+The loss function is defined by the negative log-likelihood, where the phrase $p_k = [v_{k1}, \ldots, v_{kT}]$ in a positive triple $(c_j, d_i, p_k)$ is used as the target sequence of word tokens.
+
+$$
+\mathcal {L} _ {\text {g e n}} = - \sum_ {(c _ {j}, d _ {i}, p _ {k}) \in \mathcal {X}} \sum_ {t = 1} ^ {T} \log P (v _ {k t} | v _ {k (< t)}, c _ {j}, d _ {i}). \tag {6}
+$$
+
+To sum up, the joint optimization of Equations (3) and (6) updates all the model parameters in an end-to-end manner, including the similarity predictor, the phrase generator, and both encoders.
+
+# 4.4 Expanding Topic Taxonomy
+
+In the expansion step, TopicExpan expands the topic taxonomy by utilizing the trained model to generate the phrases for a virtual topic, which is assumed to be located at a valid insertion position in the hierarchy. For thorough expansion, it considers a child position of every existing topic node as the valid position. That is, for each virtual topic node $c_{j}^{*}$ (referring to a new child of a topic node $c_{j}$ ) one at a time, it performs topic phrase generation and clustering (Sections 4.4.1 and 4.4.2) to discover multiple novel topic nodes at the position.
+
+# 4.4.1 Novel Topic Phrase Generation
+
+Given a virtual topic node $c_{j}^{*}$ and each document $d_{i} \in \mathcal{D}$ , the trained model computes the topic-document similarity score and generates a topic-conditional phrase $p^{*} = [\hat{v}_{1},\dots,\hat{v}_{T}]$ where $\hat{v}_t \sim P(v_t|\hat{v}_{< t},c_j^*,d_i)$ . Here, the generated phrase $p^{*}$ is less likely to belong to the virtual topic $c_{j}^{*}$ if its source document $d_{i}$ is less relevant to the virtual topic. Thus, we utilize the topic-document similarity score as the confidence of the generated phrase. To collect only qualified topic phrases, we filter out non-confident phrases whose normalized topic-document similarity is smaller than a threshold, i.e., $P(d_{i}|c_{j}^{*}) \approx \mathrm{Norm}_{d_{i} \in \mathcal{D}}(\exp (c_{j}^{*\top}Md_{i})) < \tau$ .
+
+In addition to the confidence-based filtering, we exclude phrases that do not appear in the corpus at all, since they are likely implausible phrases. This has substantially reduced the hallucination problem of a generation model.
+
+# 4.4.2 Novel Topic Phrase Clustering
+
+To identify multiple novel topics at the position of the virtual topic node $c_{j}^{*}$ , we perform clustering on the phrases collected for the virtual topic. We acquire semantic features of each phrase by averaging the GloVe vectors (Pennington et al., 2014) of word tokens in the phrase, then run $k$ -means clustering with the initial number of clusters $k$ manually set. Among the clusters, we selectively identify the new topics based on their cluster size, and the center phrase of each cluster is used as the topic name.
+
+# 5 Experiments
+
+# 5.1 Experimental Settings
+
+Datasets. We use two real-world document corpora with their three-level topic taxonomy: Amazon (McAuley and Leskovec, 2013) contains product reviews collected from Amazon, and DBPedia (Lehmann et al., 2015) contains Wikipedia articles. All the documents in both datasets are tokenized by the BERT tokenizer (Devlin et al., 2019) and truncated to have maximum 512 tokens. The statistics are listed in Table 1.
+
+Baseline Methods. We consider methods for building a topic taxonomy from scratch, hLDA (Griffiths et al., 2003) and TaxoGen (Zhang et al., 2018). We also evaluate the state-of-the-art methods for topic taxonomy expansion, CoRel (Huang et al., 2020) and TaxoCom (Lee et al., 2022). Both of them identify and insert new topic nodes based on term embedding and clustering, with the initial topic taxonomy leveraged as supervision.
+
+Experimental Settings. To evaluate the performance for novel topic discovery, we follow the previous convention that randomly deletes half of leaf nodes from the original taxonomy and asks each expansion method to reproduce them (Shen et al., 2020; Lee et al., 2022). Considering the deleted topics as ground-truth, we measure how completely new topics are discovered and how accurately they are inserted into the taxonomy.
+
+Table 1: The statistics of the datasets.
+
+
Corpus
Vocab. size
# Documents
# Topic nodes
Amazon
19,615
29,487
531
DBPedia
27,435
196,665
298
+
+# 5.2 Quantitative Evaluation
+
+# 5.2.1 Topic Taxonomy Expansion
+
+First of all, we assess the quality of output topic taxonomies. Following previous topic taxonomy evaluations (Huang et al., 2020; Lee et al., 2022), we recruit 10 doctoral researchers and use their domain knowledge to examine three different aspects of a topic taxonomy. Term coherence indicates how strongly terms in a topic node are relevant to each other. Relation accuracy computes how accurately a topic node is inserted into the topic taxonomy (i.e., precision for novel topic discovery). Subtopic integrity measures the completeness of subtopics for a topic node (i.e., recall for novel topic discovery). For exhaustive evaluation, we divide the output taxonomy of each expansion method into three disjoint parts $\mathcal{T}_1$ , $\mathcal{T}_2$ , and $\mathcal{T}_3$ so that each of them covers some first-level topics (and their subtrees) in Table 6 in Section A.5.
+
+In Table 2, TopicExpan achieves the highest scores for all the aspects. For all the baseline methods, the term coherence is not good enough because they assign candidate terms into a new topic according to the topic-term relevance mostly learned from term co-occurrences. In contrast, TopicExpan effectively collects coherent terms relevant to a new topic (i.e., term coherence $\geq 0.90$ ) by directly generating the topic-conditional terms from documents. TopicExpan also shows significantly higher relation accuracy and subtopic integrity than the other expansion methods, with the help of its GNN-based topic encoder that captures a holistic topic structure beyond the first-order topic relation.
+
+# 5.2.2 Topic-Conditional Phrase Generation
+
+We investigate the topic phrase prediction performance of our framework and other keyphrase extraction/generation models. We leave out $10\%$ of the positive triples $(c_{j},d_{i},p_{k})$ from the training set $\mathcal{X}$ and use them as the test set. We measure perplexity (PPL) and accuracy (ACC) by comparing
+
+Table 2: Quantitative evaluation on output topic taxonomies. The average and standard deviation for the three aspects are reported. The relation accuracy and subtopic integrity are considered only for the expansion methods, whose identified new topic nodes can be clearly compared with the ground-truth ones at each valid position.
+
+
Part
Methods
Amazon
DBPedia
Term Coherence
Relation Accuracy
Subtopic Integrity
Term Coherence
Relation Accuracy
Subtopic Integrity
hLDA
0.2417 (0.0398)
N/A
N/A
0.2688 (0.0320)
N/A
N/A
TaxoGen
0.4333 (0.1062)
N/A
N/A
0.4906 (0.1523)
N/A
N/A
T1
CoRel
0.5167 (0.1512)
0.4833 (0.1501)
0.2708 (0.1263)
0.5083 (0.1377)
0.6583 (0.1762)
0.2813 (0.1727)
TaxoCom
0.6667 (0.1411)
0.5167 (0.0992)
0.3177 (0.1006)
0.5250 (0.2151)
0.6833 (0.1808)
0.2917 (0.1282)
TopicExpan
0.9750 (0.0496)
0.8833 (0.1113)
0.4948 (0.1309)
0.9667 (0.0713)
0.9333 (0.0713)
0.5781 (0.1389)
T2
CoRel
0.5583 (0.1967)
0.6333 (0.1594)
0.2569 (0.1215)
0.4417 (0.1815)
0.5583 (0.1231)
0.1458 (0.1488)
TaxoCom
0.6083 (0.1466)
0.6167 (0.1369)
0.4514 (0.1464)
0.4833 (0.1944)
0.7083 (0.1467)
0.2708 (0.1282)
TopicExpan
0.8917 (0.0707)
0.8583 (0.1650)
0.6597 (0.2062)
0.9583 (0.0707)
0.9167 (0.1054)
0.5729 (0.1035)
T3
CoRel
0.5667 (0.1638)
0.5833 (0.1222)
0.2344 (0.1527)
0.6250 (0.1669)
0.7167 (0.1321)
0.3177 (0.1195)
TaxoCom
0.5917 (0.1571)
0.6083 (0.0972)
0.1563 (0.1179)
0.5667 (0.1852)
0.6917 (0.1179)
0.4167 (0.1069)
TopicExpan
0.9167 (0.0690)
0.9083 (0.1004)
0.4531 (0.1068)
0.9833 (0.0309)
0.9417 (0.0904)
0.6719 (0.0916)
+
+Table 3: Performance for topic phrase generation.
+
+
Methods
Amazon
DBPedia
PPL ↓
ACC ↑
PPL ↓
ACC ↑
TopicExpan (Encoder) BERT→Bi-GRU
5.2553
0.6958
3.1108
0.7768
5.7844
0.6884
3.5322
0.7645
(Decoder) Transformer→GRU
6.6649
0.6754
5.3690
0.6798
w/o Topic-attentive context
7.0907
0.6643
7.1679
0.6553
w/o Hierarchical topic relation
6.5345
0.6772
3.9802
0.7423
w/o Sideward topic relation
5.8705
0.6807
3.6985
0.7506
TextRank (Mihalcea and Tarau, 2004)
-
0.3023
-
0.1628
TopicRank (Bouguin et al., 2013)
-
0.2099
-
0.1092
TopicKG (Wang et al., 2019)
13.1298
0.2770
11.5663
0.3238
BERT-KG (Liu et al., 2020)
11.0229
0.4165
9.4723
0.4734
BERT-TKG (Zhou et al., 2021)
10.9746
0.4308
8.3607
0.4894
+
+each generated phrase with the target phrase at the token-level and phrase-level, respectively.
+
+In Table 3, TopicExpan achieves the best PPL and ACC scores. We observe that TopicExpan more accurately generates topic-related phrases from input documents, compared to the state-of-the-art keyphrase generation methods which are not able to consider a specific topic as the condition for generation. In addition, ablation analyses validate that each component of our framework contributes to accurate generation of topic phrases. Particularly, the hierarchical (i.e., upward and downward) and sideward relation modeling of the topic encoder improves the quality of generated phrases.
+
+# 5.3 Qualitative Evaluation
+
+# 5.3.1 Comparison of Topic Terms
+
+We qualitatively compare the topic terms found by each method. In case of TopicExpan, we sort all confident topic terms by their cosine distances to the topic name (i.e., center term) using the global embedding features (Pennington et al., 2014).
+
+Table 4 shows that the topic terms of TopicEx
+
+
+Figure 4: Examples of topic-conditional phrase generation, given a document and its relevant/irrelevant topic.
+
+pan are superior to those of the baseline methods, in terms of the expressiveness as well as the topic relevance. In detail, some of the terms retrieved by CoRel and TaxoCom are either off-topic or too general (marked with a strikethrough); this indicates that their topic relevance score for each term is not good at capturing the hierarchical topic knowledge of a text corpus. On the contrary, TopicExpan generates strongly topic-related terms by capturing the relation structure of each topic. Furthermore, TopicExpan is effective to find infrequentlyappearing multi-word terms (underlined), which all the extraction-based methods fail to obtain.
+
+# 5.3.2 Comparison of Novel Topics
+
+Next, we examine novel topics inserted by each expansion method. To show the effectiveness of sideward relation modeling adopted by our topic encoder (Section 4.2.1), we additionally present the results of $\mathsf{TopicExpan}^{+\mathsf{sr}}$ and $\mathsf{TopicExpan}^{-\mathsf{sr}}$ , which computes topic representations with and without capturing the sideward topic relations.
+
+In Table 5, TopicExpan $^{+sr}$ successfully discovers new topics that should be placed in a target position. Notably, the new topics are clearly distinct
+
+Table 4: Top-5 topic terms included in each topic node. The off-topic (or too general) terms are marked with a strikethrough, and the multi-word terms that are not obtainable by the extraction-based methods are underlined.
+
+
ncaa national team, ncaa tournament, ncaa div. ii, ncaa div. i, ncaa championship
+
+Table 5: Novel topics identified at each target position. The center term (i.e., topic name) of each identified topic is presented. Correct topics $(\bigcirc)$ , incorrect topics $(\otimes)$ , and redundant topics $(\text{串})$ are annotated.
+
+
Amazon
DBPedia
Position
Root → grocery gourmet food → beverages → ?
Root → agent → sports team → ?
Sibling topics
tea, coffee, hot cocoa, water, sports drinks
basketball team, cycling team, football team
CoRel
apple cider (○), bottles (⊗), drinking (⊗), fruit juice (○), matcha (⊗<)
baseball (⊗), domestic competition (⊗), football club (⊗<), national ice hockey team (○), soccer (⊗)
hockey (⊗), junior football team (⊗<), national team (○), regular season (⊗), rugby club (○)
TopicExpan-sr
breakfast tea (⊗<), coconut water (○), fruit juice (○), natural cocoa (⊗<), vanilla coffee (⊗<)
american football team (⊗<), cricket team (○), cycling team (⊗<), professional basketball team (⊗<), rugby union team (○)
TopicExpan+sr
coconut water (○), cream soda (○), decaf tea (⊗<), diet smoothie (○), redline energy drink (○)
beach handball team (○), cricket team (○), football club (⊗<), ice hockey team (○), rugby union team (○)
+
+guishable from the sibling topics (i.e., known topics given in the initial topic hierarchy), which reduces the redundancy of the output topic taxonomy. On the other hand, CoRel and TaxoCom show limited performance for novel topic discovery; some new topics are redundant $(\preccurlyeq)$ while some others do not preserve the hierarchical relation with the existing topics $(\otimes)$ . Some of the new topics found by TopicExpan $^{-\mathrm{sr}}$ semantically overlap with the sibling topics, even though they are at the correct position in the hierarchy; this implies that our topic encoder with sideward relation modeling makes the representation of a virtual topic node discriminative with its sibling topic nodes, and it eventually helps to discover new conceptual topics of novel semantics.
+
+# 5.3.3 Case Study of Topic Phrase Generation
+
+To study how the generated phrases and their topic-document similarity scores (i.e., confidence) vary depending on a topic condition, we provide examples of topic-conditional phrase generation. The input document in Figure 4 contains a review about nail care products. In case that the relation structure of a target topic implies the nail product (Fig
+
+ure 4 Left), TopicExpan obtains the desired topicrelevant phrase "nail lacquer" along with the high topic-document similarity of 0.8547. On the other hand, given the relation structure of a target topic which is inferred as a kind of meat foods (Figure 4 Right), it generates a topic-irrelevant phrase "metallic black" from the document along with the low topic-document similarity of 0.0023. That is, TopicExpan fails to get a qualified topic phrase when the textual contents of an input document is obviously irrelevant to a target topic. In this sense, TopicExpan filters out non-confident phrases having a low topic-document similarity score to collect only the phrases relevant to each virtual topic.
+
+# 5.4 Analysis of Topic-Document Similarity
+
+Finally, we investigate the changes of generated phrases in two aspects, with respect to the topic-document similarity scores. The first aspect is the ratio of three categories for generated phrases, which have been focused on in the literature of keyphrase generation (Meng et al., 2017; Zhou et al., 2021): (1) present phrases appearing in the input document, (2) absent phrases not appear
+
+
+
+
+
+
+Figure 5: The ratio of three categories for generated phrases (Left) and the average semantic distance among generated phrases (Right). The horizontal axis shows 10 bins of normalized topic-document similarity scores.
+
+
+
+ing in the input document but in the corpus at least once, and (3) unseen (i.e., totally-new) phrases that are not observed in the corpus at all. The second aspect is the average semantic distance among the phrases, measured by using the semantic features. For the plots in Figure 5, the horizontal axis represents 10 bins of normalized topic-document similarity scores over all generated phrases.
+
+Interestingly, TopicExpan hardly generates absent phrases (about $0.7\%$ for Amazon, $1.7\%$ for DBPedia) and unseen phrases (about $0.1\%$ for Amazon, $0.2\%$ for DBPedia) regardless of the topic-document similarity; instead, it generates present phrases in most cases (Figure 5 Left). In other words, if the input document is not relevant to a target topic, it tends to generate an irrelevant-but-present phrase rather than a relevant-but-absent phrase, as shown in Section 5.3.3. One potential risk of TopicExpan is to generate unseen phrases that are nonsense or implausible, also known as hallucinations in neural text generation, and such unseen phrases can degrade the quality and credibility of output topic taxonomies. This result supports that we can easily exclude all unseen phrases, which account for less than $0.2\%$ of generated phrases, to effectively address this issue.
+
+Moreover, the negative correlation between the topic-document similarity score and the interphrase semantic distance (Figure 5 Right) provides empirical evidence that the similarity score can serve as the confidence of a generated topic phrase. There is a clear tendency toward decreasing the average semantic distance as the topic-document similarity score increases; this implies that the phrases generated from topic-relevant documents are semantically coherent to each other, and accordingly, they are likely to belong to the same topic.
+
+# 6 Conclusion
+
+In this paper, we study the problem of topic taxonomy expansion, pointing out that the existing approach has shown limited term coverage and inconsistent topic relation. Our TopicExpan framework introduces hierarchy-aware topic term generation, which generates a topic-related term by using both the textual content of an input document and the relation structure of a topic as the condition for generation. The quantitative and qualitative evaluation demonstrates that our framework successfully obtains much higher-quality topic taxonomy in various aspects, compared to other baseline methods.
+
+For future work, it would be promising to incorporate an effective measure for the topic relevance of multi-word terms (i.e., phrases) into our framework. Since learning and utilizing the representation of multi-word terms remains challenging and worth exploring, it can be widely applied to many other text mining tasks.
+
+# 7 Limitations
+
+Despite the remarkable performance of TopicExpan on our tested corpus, there is still room to improve regarding how to better handle topics, documents, and phrases, for effective mining of topic knowledge. First, TopicExpan uses only the topic names (i.e., center terms) as the base node features in the topic relation graph, which makes our topic encoder difficult to capture the collective meaning of each topic from its set of topic-related phrases. Second, the confidence of each generated phrase considers only the topic relevance of its source document, instead of all the documents in which this phrase appears. Finally, the clustering process does not leverage the contextualized textual features computed by our BERT-based document encoder, which makes it hard to consolidate the context of the phrase within its source document.
+
+# Acknowledgements
+
+This work was supported by the IITP grant (No. 2018-0-00584, 2019-0-01906) and the NRF grant (No. 2020R1A2B5B03097210). It was also supported by US DARPA KAIROS Program (No. FA8750-19-2-1004), SocialSim Program (No. W911NF-17-C-0099), INCAS Program (No. HR001121C0165), National Science Foundation (IIS-19-56151, IIS17-41317, IIS 17-04532), and the Molecule Maker Lab Institute: An AI Research Institutes program (No. 2019897).
+
+# References
+
+Ramakrishna Bairi, Rishabh Iyer, Ganesh Ramakrishnan, and Jeff Bilmes. 2015. Summarization of multi-document topic hierarchies using submodular mixtures. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume I: Long Papers), pages 553-563, Beijing, China. Association for Computational Linguistics.
+Adrien Bougouin, Florian Boudin, and Beatrice Daille. 2013. TopicRank: Graph-based topic ranking for keyphrase extraction. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 543-551, Nagoya, Japan. Asian Federation of Natural Language Processing.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Thomas Griffiths, Michael Jordan, Joshua Tenenbaum, and David Blei. 2003. Hierarchical topic models and the nested chinese restaurant process. In Advances in Neural Information Processing Systems, volume 16.
+Xiaotao Gu, Zihan Wang, Zhenyu Bi, Yu Meng, Liyuan Liu, Jiawei Han, and Jingbo Shang. 2021. UCPhrase: Unsupervised context-aware quality phrase tagging. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, KDD '21, page 478-486, New York, NY, USA. Association for Computing Machinery.
+Jiaxin Huang, Yiqing Xie, Yu Meng, Yunyi Zhang, and Jiawei Han. 2020. CoRel: Seed-guided topical taxonomy construction by concept learning and relation transferring. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '20, page 1928-1936, New York, NY, USA. Association for Computing Machinery.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulouse, France, April 24-26, 2017, Conference Track Proceedings.
+Dongha Lee, Jiaming Shen, Seongku Kang, Susik Yoon, Jiawei Han, and Hwanjo Yu. 2022. TaxoCom: Topic taxonomy completion with hierarchical discovery of
+
+novel topic clusters. In Proceedings of the ACM Web Conference 2022, WWW '22, page 2819-2829, New York, NY, USA. Association for Computing Machinery.
+Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Soren Auer, and Christian Bizer. 2015. Dbpedia - A large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web, 6(2):167-195.
+Jialu Liu, Jingbo Shang, Chi Wang, Xiang Ren, and Jiawei Han. 2015. Mining quality phrases from massive text corpora. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, SIGMOD '15, page 1729-1744, New York, NY, USA. Association for Computing Machinery.
+Rui Liu, Zheng Lin, and Weiping Wang. 2020. Keyphrase prediction with pre-trained language model. CoRR, abs/2004.10462.
+Yuning Mao, Tong Zhao, Andrey Kan, Chenwei Zhang, Xin Luna Dong, Christos Faloutsos, and Jiawei Han. 2020. Octet: Online catalog taxonomy enrichment with self-supervision. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '20, page 2247-2257, New York, NY, USA. Association for Computing Machinery.
+Julian McAuley and Jure Leskovec. 2013. Hidden factors and hidden topics: Understanding rating dimensions with review text. In Proceedings of the 7th ACM Conference on Recommender Systems, RecSys '13, page 165-172, New York, NY, USA. Association for Computing Machinery.
+Rui Meng, Sanqiang Zhao, Shuguang Han, Daqing He, Peter Brusilovsky, and Yu Chi. 2017. Deep keyphrase generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 582-592, Vancouver, Canada. Association for Computational Linguistics.
+Yu Meng, Jiaming Shen, Chao Zhang, and Jiawei Han. 2019. Weakly-supervised hierarchical text classification. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, AAAI'19, pages 6826-6833. AAAI Press.
+Yu Meng, Yunyi Zhang, Jiaxin Huang, Yu Zhang, Chao Zhang, and Jiawei Han. 2020. Hierarchical topic mining via joint spherical tree and text embedding. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '20, page 1908-1917, New York, NY, USA. Association for Computing Machinery.
+Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404-411, Barcelona, Spain. Association for Computational Linguistics.
+
+Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, page 3111-3119, Red Hook, NY, USA. Curran Associates Inc.
+Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. CoRR, abs/1807.03748.
+Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.
+Yves Petinot, Kathleen McKeown, and Kapil Thadani. 2011. A hierarchical model of web summaries. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 670-675, Portland, Oregon, USA. Association for Computational Linguistics.
+Jingbo Shang, Jialu Liu, Meng Jiang, Xiang Ren, Clare R. Voss, and Jiawei Han. 2018. Automated phrase mining from massive text corpora. IEEE Transactions on Knowledge and Data Engineering, 30(10):1825-1837.
+Jingbo Shang, Xinyang Zhang, Liyuan Liu, Sha Li, and Jiawei Han. 2020. NetTaxo: Automated topic taxonomy construction from text-rich network. In Proceedings of The Web Conference 2020, page 1908-1919, New York, NY, USA. Association for Computing Machinery.
+Jiaming Shen, Wenda Qiu, Yu Meng, Jingbo Shang, Xiang Ren, and Jiawei Han. 2021. TaxoClass: Hierarchical multi-label text classification using only class names. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4239-4249, Online. Association for Computational Linguistics.
+Jiaming Shen, Zhihong Shen, Chenyan Xiong, Chi Wang, Kuansan Wang, and Jiawei Han. 2020. TaxoExpan: Self-supervised taxonomy expansion with position-enhanced graph neural network. In Proceedings of The Web Conference 2020, page 486-497, New York, NY, USA. Association for Computing Machinery.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
+
+Yue Wang, Jing Li, Hou Pong Chan, Irwin King, Michael R. Lyu, and Shuming Shi. 2019. Topi- aware neural keyphrase generation for social media language. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2516-2526, Florence, Italy. Association for Computational Linguistics.
+Qingkai Zeng, Jinfeng Lin, Wenhao Yu, Jane Cleland-Huang, and Meng Jiang. 2021. Enhancing taxonomy completion with concept generation via fusing relational representations. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, KDD '21, page 2104-2113, New York, NY, USA. Association for Computing Machinery.
+Qingkai Zeng, Wenhao Yu, Mengxia Yu, Tianwen Jiang, Tim Weninger, and Meng Jiang. 2020. Tri-train: Automatic pre-fine tuning between pre-training and finetuning for SciNER. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4778-4787, Online. Association for Computational Linguistics.
+Chao Zhang, Fangbo Tao, Xiusi Chen, Jiaming Shen, Meng Jiang, Brian Sadler, Michelle Vanni, and Jiawei Han. 2018. TaxoGen: Unsupervised topic taxonomy construction by adaptive term embedding and clustering. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '18, page 2701-2709, New York, NY, USA. Association for Computing Machinery.
+Cangqi Zhou, Jinling Shang, Jing Zhang, Qianmu Li, and Dianming Hu. 2021. Topic-attentive encoder-decoder with pre-trained language model for keyphrase generation. In IEEE International Conference on Data Mining, Auckland, New Zealand, December 7-10, 2021, pages 1529-1534. IEEE.
+
+# A Supplementary Material
+
+# A.1 Pseudo-code of TopicExpan
+
+Algorithm 1 describes the detailed process of our framework, including the training step (Lines 1-9) and the expansion step (Lines 10-23). The final output is the expanded topic taxonomy (Line 24).
+
+Algorithm 1: The process of TopicExpan.
+Input: Initial topic taxonomy $\mathcal{T} = (\mathcal{C},\mathcal{R})$ and Text corpus D Output: Expanded taxonomy $\mathcal{T}'$ // Step 1: Learning the topic taxonomy
+1 $\mathcal{X}\gets$ COLLECTTRIPLES(T,D)
+2 $\mathcal{G}\gets$ CONSTRUCTGRAPH(T)
+3 while not converged do for $(c_j,d_i,p_k)\in \mathcal{X}$ do Obtain the model outputs for the inputs $(\mathcal{G},c_{j},d_{i})$ Compute $\mathcal{L}_{sim}$ by Equation (3) Compute $\mathcal{L}_{gen}$ by Equation (6) $\mathcal{L}\leftarrow \mathcal{L}_{sim} + \mathcal{L}_{gen}$ $\Theta \leftarrow \Theta -\eta \cdot \partial \mathcal{L} / \partial \Theta$ // Step 2: Expanding the topic taxonomy
+10 $\mathcal{T}'\gets \mathcal{T}$ // For each valid position (the child position of each topic)
+11 for $c_{j}\in \mathcal{C}$ do
+12 $\mathcal{T}^*,\mathcal{P}^*\gets \mathcal{T},\emptyset$
+13 $c_{j}^{*}\gets$ MAKEVIRTUALNODE(cj)
+14 $\mathcal{T}^{\ast}.\mathrm{INSERTNODE}(c_j,\{c_j^*\})$
+15 $\mathcal{G}^*\gets$ CONSTRUCTGRAPH(T*)
+16 for $d_{i}\in \mathcal{D}$ do Obtain the model outputs for the inputs $(\mathcal{G}^*,c_j^*,d_i)$
+18 $\hat{s}\gets \exp (c_j^{\top}Md_i)$
+19 $\hat{p}\gets [\hat{v}_1,\dots ,\hat{v}_T],\hat{v}_t\sim P(v_t|\hat{v}_{< t},d_i,c_j^*)$
+20 $\mathcal{P}^{\ast}.\mathrm{APPEND}((\hat{s},\hat{p}))$
+21 $\mathcal{P}^*\gets$ FILTERBYNORMALIZEDSCORE(P\*,τ)
+22 $c_{j1}^{*},\ldots ,c_{jK}^{*}\gets$ CLUSTERPHRASES(P\*)
+23 $\mathcal{T}'\mathrm{INSERTNODE}(c_j,\{c_{j1}^*,\ldots ,c_{jK}^*\})$
+24 return T'
+
+Training Step (Lines 1-9). TopicExpan first collects all positive triples $(c_j, d_i, p_k)$ from an initial topic taxonomy $\mathcal{T}$ and a text corpus $\mathcal{D}$ (Line 1; Section 4.1), and constructs a topic relation graph $\mathcal{G}$ from the topic hierarchy (Line 2; Section 4.2.1). Then, it updates all the trainable parameters based on the gradient back-propagation (Lines 5-9) to minimize the losses for the topic-document similarity prediction task (Line 6; Section 4.3.1) and the topic-conditional phrase generation task (Line 7; Section 4.3.2).
+
+Expansion Step (Lines 10-23). Using the trained model, TopicExpan discovers new topics that need to be inserted into each valid position in the topic
+
+
+Figure 6: The phrase generator architecture. It generates the token sequence given a topic and a document, by using topic-attentive token representations as the context.
+
+hierarchy (Line 11). For a virtual topic node $c_{j}^{*}$ as a newly-introduced child of each topic node $c_{j}$ (Line 13), it constructs a topic relation graph $\mathcal{G}^*$ from the topic hierarchy augmented with the virtual topic node (Lines 14-15). Then, it collects all pairs of a topic-document similarity score and a generated topic phrase $(\hat{s},\hat{p})$ , which are obtained by using the trained model on the augmented topic relation graph and all the documents (Lines 16-20; Section 4.4.1). Next, it filters out non-confident (i.e., irrelevant) phrases according to the normalized score (Line 21), then it performs clustering to find out multiple phrase clusters, each of which is considered as a new topic node having a novel topic semantics (Line 22; Section 4.4.2). In the end, it inserts the identified new topic nodes into the target position (i.e., the child of a topic node $c_{j}$ ) to expand the current topic taxonomy (Line 23).
+
+# A.2 Baseline Methods
+
+For the baselines, we employ the official author codes while following the parameter settings provided by (Lee et al., 2022). For all the methods that optimize the Euclidean or spherical embedding space (i.e., TaxoGen, CoRel, and TaxoCom), we fix the number of negative terms (for each positive term pair) to 2 during the optimization.
+
+- hLDA $^5$ (Griffiths et al., 2003) performs hierarchical latent Dirichlet allocation. It models a document generation process as sampling its words along the path selected from the root to a leaf. We set the smoothing parameters $\alpha = 0.1$ and $\eta = 1.0$ , respectively for document-topic distributions and topic-word distributions, and the concentration parameter in the Chinese restaurant process $\gamma = 1.0$ .
+
+- TaxoGen $^{6}$ (Zhang et al., 2018) is the unsupervised framework for topic taxonomy construction. To identify hierarchical term clusters, it optimizes the term embedding space with SkipGram (Mikolov et al., 2013). We set the maximum taxonomy depth to 3 and the number of child nodes to 5, as done in (Zhang et al., 2018; Shang et al., 2020).
+- CoRel $^{7}$ (Huang et al., 2020) is the first topic taxonomy expansion method. It trains a topic relation classifier by using the initial taxonomy, then recursively transfers the relation to find out candidate terms for novel subtopics. Finally, it identifies novel topic nodes based on term embeddings induced by SkipGram (Mikolov et al., 2013).
+- TaxoCom $^{8}$ (Lee et al., 2022) is the state-of-the-art method for topic taxonomy expansion. For each node from the root to the leaf, it recursively optimizes term embedding and performs term clustering to identify both known and novel subtopics. we set $\beta = 1.5, 2.5, 3.0$ (for each level) in the novelty threshold $\tau_{nov}$ , and fix the significance threshold $\tau_{sig} = 0.3$ .
+
+# A.3 Implementation Details
+
+Model Architecture. For the topic encoder, we use two GCN layers to avoid the over-smoothing problem, and fix the dimensionality of all node representations to 300. For the document encoder, we employ the bert-base-uncased provided by huggingface (Devlin et al., 2019), as the initial checkpoint of a pretrained model. It contains 12 layers of transformer blocks with 12 attention heads, thereby obtaining 768-dimensional contextualized token representations $\left[v_{i1},\ldots ,v_{iL}\right]$ (and a final document representation $d_{i} =$ mean-pooling $(v_{i1},\dots,v_{iL})$ ) for an input document $d_{i}$ . Consequently, the size of the interaction matrix $M$ in our topic-document similarity predictor (Equation (3)) becomes $300\times 768$ . For the phrase generator, we adopt a single layer of the transformer decoder with 16 attention heads and train its parameters from scratch without using the checkpoint of a pretrained text decoder. We limit
+
+the maximum length of a generated phrase to 10. Figure 6 shows the phrase generator architecture. In total, our neural model contains 540K (for the topic encoder), 110M (for the document encoder), 230K (for the similarity predictor), and 30M (for the phrase generator) parameters.
+
+Training Step. For the optimization of model parameters, we use the Adam optimizer (Kingma and Ba, 2015) with the initial learning rate 5e-5 and the weight decay 5e-6. The batch size is set to 64, and the temperature parameter $\gamma$ in Equation (3) is set to 0.1. The best model is chosen using the best perplexity of generated topic phrases on the validation set of positive triples $(c_{j},d_{i},p_{k})$ , which is evaluated every epoch.
+
+Expansion Step. To filter out non-confident phrases (Section 4.4.1), we set the threshold value $\tau$ to 0.8 after applying min-max normalization on all topic-document similarity scores computed for each virtual topic node. To perform $k$ -means clustering on the collected topic phrases (Section 4.4.2), we set the initial number of clusters $k$ to 10, then select top-5 clusters by their cluster size (i.e., the number of phrases assigned to each cluster). The center phrase of each cluster is used as the final topic name of the new topic node.
+
+# A.4 Computing Platform
+
+All the experiments are carried out on a Linux server machine with Intel Xeon Gold 6130 CPU @2.10GHz and 128GB RAM by using a single RTX3090 GPU. In this environment, the model training of TopicExpan takes around 2 hours and 6 hours for Amazon and DBPedia, respectively.
+
+# A.5 Quantitative Evaluation Protocol
+
+For exhaustive evaluation on a large-scale topic taxonomy with hundreds of topic nodes, the output taxonomy of topic taxonomy expansion methods (i.e., CoRel, TaxoCom, and TopicExpan) is divided into three parts $\mathcal{T}_1$ , $\mathcal{T}_2$ , and $\mathcal{T}_3$ so that each part covers some of the first-level topics (and their subtrees) listed in Table 6.
+
+Table 6: Three disjoint parts of the topic taxonomy.
+
+
Corpus
Part
First-level topics
Amazon
T1
grocery gourmet food, toys games
T2
beauty, personal care
T3
baby products, pet supplies
DBPedia
T1
agent, work, place
T2
species, unit of work, event
T3
sports season, device, topical concept
+
+In case of hLDA and TaxoGen, the first-level topics in their output taxonomies are not matched with the ground-truth topics (in Table 6), because they build a topic taxonomy from scratch. For this reason, in Table 2, their output taxonomies are evaluated whole without partitioning. In addition, the two metrics for novel topic discovery (i.e., relation accuracy and subtopic integrity) are designed to evaluate the topic taxonomy expansion methods, so it is infeasible to measure the aspects on the output taxonomies of hLDA and TaxoGen. Thus, we only report the metric for topic identification (i.e., term coherence) in Table 2.
+
+Term Coherence. It indicates how strongly terms in a topic node are relevant to each other. Evaluators count the number of terms that are relevant to the common topic (or topic name) among the top-5 terms found for each topic node.
+
+Relation Accuracy. It computes how accurately a topic node is inserted into a given topic hierarchy (i.e., precision for novel topic discovery). For each valid position, evaluators count the number of newly-inserted topics that are in the correct relationship with the surrounding topics.
+
+Subtopic Integrity. It measures the completeness of subtopics for each topic node (i.e., recall for novel topic discovery). Evaluators investigate how many ground-truth novel topics, which were deleted from the original taxonomy, match with one of the newly-inserted topics.
+
+# A.6 Examples of Topic Phrase Generation
+
+We provide additional examples of topic-conditional phrase generation, obtained by TopicExpan. Figure 7 illustrates a confident phrase (Left) and a non-confident phrase (Right), generated from each input document and the given relation structure of a target topic, for both datasets. As discussed in Section 5.3.3, in case that a target topic is relevant to the document (i.e., high topic-document similarity score), TopicExpan successfully generates a phrase relevant to the target topic. On the other hand, in case that a target topic is irrelevant to the document (i.e., low topic-document similarity score), TopicExpan obtains a phrase irrelevant to the target topic.
+
+# Input document
+
+"Sunsout fathers Christmas train 500 piece jigsaw puzzle. Put away the video games and do a puzzle with your family. This is a great way to get the family together for conversation and fun. I like this puzzle the tree was the hardest part."
+
+
+Relation structure of a target topic
+Input document
+
+
+(a) Dataset: Amazon
+
+"Swithun (or Swithin; Latin: Swithunus; died 863 AD) was an Anglo Saxon bishop of Winchester and subsequently patron saint of Winchester Cathedral. His historical importance as bishop is overshadowed by his reputation for posthumous miracle working. According to tradition, the weather on his feast day (15 July) will continue for forty days. The precise meaning and origin of Swithun's name is unknown, but it most likely derives from the old English word swi, 'strong'."
+
+
+Relation structure of a target topic
+(b) Dataset: DBPedia
+Figure 7: Examples of topic-conditional phrase generation, given a document and its relevant/irrelevant topic.
+
+
\ No newline at end of file
diff --git a/topictaxonomyexpansionviahierarchyawaretopicphrasegeneration/images.zip b/topictaxonomyexpansionviahierarchyawaretopicphrasegeneration/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..8afecc025f2394b627831d59b5161de8958397ef
--- /dev/null
+++ b/topictaxonomyexpansionviahierarchyawaretopicphrasegeneration/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7de7023523cb452a66d7ad71bb7fc5e0aed6b0b9a53691b8bd7d9470ee4fee6e
+size 787327
diff --git a/topictaxonomyexpansionviahierarchyawaretopicphrasegeneration/layout.json b/topictaxonomyexpansionviahierarchyawaretopicphrasegeneration/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e43470ef54a1be310c776364b0311c423f8b4fed
--- /dev/null
+++ b/topictaxonomyexpansionviahierarchyawaretopicphrasegeneration/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ca6f585935d5f6fcda28c0044a06abe73db06bd47180684eeb6b00d9205f65b9
+size 535685
diff --git a/towardsexplainingsubjectivegroundofindividualsonsocialmedia/6c9a1f61-6e99-469e-91d8-67a58d0a3e4c_content_list.json b/towardsexplainingsubjectivegroundofindividualsonsocialmedia/6c9a1f61-6e99-469e-91d8-67a58d0a3e4c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..3c2b286d368331f2b5655e4c7377d81493e55e49
--- /dev/null
+++ b/towardsexplainingsubjectivegroundofindividualsonsocialmedia/6c9a1f61-6e99-469e-91d8-67a58d0a3e4c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:767ec52d4ff8b8738ef2668b927eeed0f1e68921703c775909daa91846eb2e68
+size 100370
diff --git a/towardsexplainingsubjectivegroundofindividualsonsocialmedia/6c9a1f61-6e99-469e-91d8-67a58d0a3e4c_model.json b/towardsexplainingsubjectivegroundofindividualsonsocialmedia/6c9a1f61-6e99-469e-91d8-67a58d0a3e4c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d86c52a6bd76318d494d5bb413fd8c1ab8c21627
--- /dev/null
+++ b/towardsexplainingsubjectivegroundofindividualsonsocialmedia/6c9a1f61-6e99-469e-91d8-67a58d0a3e4c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f3103e716ff0dff489ac64413156c438f0e3851633a62e8678a2e22a3d1305de
+size 115815
diff --git a/towardsexplainingsubjectivegroundofindividualsonsocialmedia/6c9a1f61-6e99-469e-91d8-67a58d0a3e4c_origin.pdf b/towardsexplainingsubjectivegroundofindividualsonsocialmedia/6c9a1f61-6e99-469e-91d8-67a58d0a3e4c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a3996b76d22c2327e154362ba29b23b3636b6a69
--- /dev/null
+++ b/towardsexplainingsubjectivegroundofindividualsonsocialmedia/6c9a1f61-6e99-469e-91d8-67a58d0a3e4c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:116ef8aeab83b0bfb7a3c9203a2087495be8d7ee166179e445939843c25c1da7
+size 2259740
diff --git a/towardsexplainingsubjectivegroundofindividualsonsocialmedia/full.md b/towardsexplainingsubjectivegroundofindividualsonsocialmedia/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..05724a512798519b03849eeb35af8b852584a5d4
--- /dev/null
+++ b/towardsexplainingsubjectivegroundofindividualsonsocialmedia/full.md
@@ -0,0 +1,443 @@
+# Towards Explaining Subjective Ground of Individuals on Social Media
+
+Younghun Lee and Dan Goldwasser
+
+Department of Computer Science
+
+Purdue University
+
+West Lafayette, IN, USA
+
+{younghun, dgoldwas}@purdue.edu
+
+# Abstract
+
+Large-scale language models have been reducing the gap between machines and humans in understanding the real world, yet understanding an individual's theory of mind and behavior from text is far from being resolved.
+
+This research proposes a neural model—Subjective Ground Attention—that learns subjective grounds of individuals and accounts for their judgments on situations of others posted on social media. Using simple attention modules as well as taking one's previous activities into consideration, we empirically show that our model provides human-readable explanations of an individual's subjective preference in judging social situations. We further qualitatively evaluate the explanations generated by the model and claim that our model learns an individual's subjective orientation towards abstract moral concepts.
+
+# 1 Introduction
+
+For the last few years, large-scale language models have shown substantial performance gains in many different sub-fields of natural language processing (Liu et al., 2019; Raffel et al., 2020). Researchers have hypothesized that such language models contain knowledge of linguistic characteristics, logical inference, and real world events in their parameters, and this knowledge can be fine-tuned and adapted to downstream tasks (Wang et al., 2019). The recent success of commonsense reasoning, for instance, shows that language model parameters can be used as a knowledge base while they comprehensively learn commonsense patterns (Hwang et al., 2021).
+
+Although deeper and larger language models have led machines to better comprehend how the real world works, understanding an individual's perspective and behavior from text is yet far from being resolved. Humans perceive daily situations and events differently, and employ certain biases
+
+
+Figure 1: An example of a reddit post and its comments crawled from r/AmITheAsshole. The author describes a situation of not giving up on the pet over the health of significant other's daughter. The author's behavior is unacceptable to the first redditor (red arrow), while the second redditor (blue arrow) has an opposite view.
+
+(Kahneman, 2011) and social expectations (Hilton, 1990) when they evaluate the given event and social behavior of others (Miller, 2019). An individual's process of attributing, evaluating, and explaining an event has been widely investigated by cognitive and social psychologists for the past few decades (McClure, 2002; Hilton, 2017), yet its application to neural language models is outside the mainstream natural language processing research.
+
+This paper proposes a neural model, Subjective Ground Attention, that learns cognitive models of individuals and explains their subjective judgments on situations that are posted on social media. We analyze a Reddit community, r/AmITheAsshole, where users submit posts asking whether their behaviors are justifiable, and other users leave comments with their judgments. Figure 1 shows an example of a reddit post where an author describes a situation, and different redditors provide their subjective judgment through comments.
+
+The research hypothesis is that people comprehend and account for others' situations based on their subjective ground, a maxim that plays a cen
+
+tral role in human moral judgments (Neuhouser et al., 1990), which can be represented from their previous activities. To investigate this hypothesis, we formulate a task to predict the redditors' judgment given diverse situations. Each redditor is represented by their subjective ground which is estimated from their previous comments. Using clustering methods, the model selects a set of the most relevant subjective ground to a given situation. The model then learns attention weights among subjective ground comments to predict the redditor's moral judgments. Through learned attention weights, we present human-readable subjective ground and how they contribute to the model's prediction about the judgments.
+
+From empirical results, we suggest that our proposed model provides explanations for the redditors' subjective judgments on diverse social situations and contributes to downstream task performance in a statistically significant way. We additionally compute the consistency score of attention weights with respect to the model's final prediction, and show that the model efficiently uses attended subjective ground. From qualitative analysis results, we further claim that our model not only learns an individual's subjective preference on real world situations (e.g. reporting my best friend for cheating), but also infers an individual's perspective on more abstract concepts of competing moral values (e.g. fairness is more important than a friendship).
+
+Key Contributions: To the best of our knowledge, this is the first attempt to estimate subjective ground of individuals, using it to explain their activities on social media. With better representation of human cognitive models and real world situations, we expect machines to perform more meaningful and accurate inference. This would ultimately help artificial intelligence agents by enabling maximal personalization; not only will it remember an individual's previous history and preference, it will empathize with one's state and situation in a human-understandable way.
+
+# 2 Data Preparation
+
+We analyze daily situations and individuals' subjective judgments on them posted on a Reddit community, r/AmITheAsshole. Users submit posts describing their situations and ask whether or not their behaviors are acceptable. One of the advantages of using this data domain is that most of the
+
+
Modified Social Chemistry 101(D)
# of total instances
14,391
# of unique situations
9,663
Max / Min # of instances per redditor
965 / 298
# Acceptable / Unacceptable labels
9,817 / 4,574
Crawled r/AmITheAsshole (D+)
# of total instances
66,603
# of unique situations
52,075
Max / Min # of instances per redditor
9,711 / 513
# Acceptable / Unacceptable labels
42,961 / 23,642
+
+Table 1: Statistics of the two datasets. Both datasets take the most active 30 redditors into consideration, keeping the instances that contain coded judgments in the comments.
+
+situations are generic (e.g. getting annoyed at my roommate) rather than related to specific world events (e.g. new climate change policies in U.S.), thus the models benefit from implicit knowledge in language model parameters to better understand the situation.
+
+# 2.1 Social Chemistry 101 Dataset
+
+Forbes et al. (2020) annotated moral rules-of-thumb (RoT) that can be used in judging whether or not the input situations are acceptable. The authors released Social Chemistry 101 dataset, which contains around 30k situations posted on r/AmITheAsshole. Consider the following situation and its rules-of-thumb as an example:
+
+Situation: Asking someone at the gym to stop talking to me.
+
+RoT 1: It is okay to not want to randomly make new friends.
+
+RoT 2: It is expected that you are kind when others are extroverted and try to speak to you.
+
+We make use of moral rule-of-thumb annotations as a tool for explaining an individual's subjectivity. Observing the number of annotated rules-of-thumb is small for many instances and most of these instances have rules-of-thumb supporting only one side of moral judgment, we manually extend rules-of-thumb. Each rule-of-thumb annotation in Social Chemistry 101 consists of a judgment (e.g. It is okay) and an action (e.g. not wanting to randomly make new friends). We extend the rules-of-thumb for each situation by replacing judgment words (e.g. It is okay $\rightarrow$ It is not okay) while keeping the action description. We prioritize replacing judgment words with their opposite meaning, which is crucial to ensure obtaining both sides of rules-of-thumb. To efficiently train the model, we set a fixed number, 5, as the number of rules-of-thumb for all situations.
+
+
+SUBJECTIVE GROUND TRAINING MODULE
+Figure 2: Training process of our proposed model. $x$ is input situations, $c$ is subjective ground comments, and $v$ is rule-of-thumb candidates. After the subjective ground module has been trained on $\mathcal{D}^+$ , the parameters of the encoder and the subjective ground attention layers are shared in value attention training.
+
+A small number of training instances prevent the model from generalization. We thus identify redditors who are actively involved in r/AmITheAsshole; we focus on the 30 redditors who have commented the most on the posts in the Social Chemistry 101 dataset.
+
+# 2.2 Crawling from r/AmITheAsshole
+
+In this work, we estimate an individual's subjective ground using their previous activities. The redditor's previous activities are defined as the comments they have left on r/AmITheAsshole; comments on other subreddits are mostly irrelevant to moral judgments. We crawl active redditors' comments1 and denote the intersection of crawled data and Social Chemistry 101 as $\mathcal{D}$ . All other instances of the crawled data are denoted as $\mathcal{D}^+$ . Note that the instances in $\mathcal{D}$ have annotated rules-of-thumb while $\mathcal{D}^+$ doesn't.
+
+# 2.3 Preprocess Comments and Obtain Moral Judgments
+
+Rather than predicting the authorship of the comments, this work solely focuses on analyzing the
+
+moral judgment. This is to prevent the model from picking up shallow features, such as a redditor's linguistic styles, without focusing on learning their subjectivity.
+
+We preprocess the redditors' comments and obtain their moral judgments on input situations. In the r/AmITheAsshole community, redditors provide their judgments on the situation with predefined codes; YTA (You're The Asshole), NTA (Not The A-hole), ESH (Everyone Sucks Here), NAH (No A-holes Here), and INFO (Not Enough Info). We identify these code words from the comments and mark them as the redditor's judgments on the situation. We group NTA and NAH as 'acceptable', and YTA and ESH as 'unacceptable'. Instances with the code INFO are discarded, as there is no moral subjectivity included in them. Detailed statistics of the two datasets are described in Table 1.
+
+# 3 Model
+
+In this work, we develop neural models with two main components: subjective ground training module and value attention training module. This is followed by a classifier to predict the redditor's moral judgments. Figure 2 illustrates the model
+
+
Subjective Ground Comments
Redditor 1
· Maybe a visit from an SPCA or animal control officer will press the importance of adequate care into their minds.
+· Your mother needs to remember that older cats don't take care of themselves like younger cats do.
+· As much as I love animals, and 99.9% of ours have been rescues, you have legitimate reasons for wanting a particular breed.
+· You do live together, and he would have to share space with the new dog.
+· But completely and totally uprooting her and moving her to a new family is too much.
+· You are absolutely within your rights to prevent your cousin from seeing the dog.
Redditor 2
· In my town, a young girl was killed in our town when she was mauled by a family members pet, no one could get the dog off of her, the grandmother was stabbing the dog and screaming, and the police had to shoot and kill the dog to stop the attack.
+· Tell them family it is to control their new begging behavior, and stop giving people the chance to feed them because your vet specifically told you not to allow people food.
+· My SIL was attacked and has bad facial scars from a trusted family pet, my childhood best friend's 2 year old was attacked by a friend's dog, and is absolutely terrified by them now.
+· You have to protect your child
+· I bet his dream dog listens to commands and is loyal and obedient.
+· She thinks the pets at home would be betrayed by you two getting one together?
+
+Table 2: Subjective ground comments of the two redditors in the same cluster. The topic of this cluster is pet / companion animals, and the two redditors express different subjective ground—one mentions methods providing better environments for the pets (Redditor 1), while the other feels that pets can be harmful (Redditor 2). Color-coded parts in each comment indicate the words that match with the Moral Foundation Dictionary vocabularies.
+
+diagram. Mathematical details of the model components are described in Appendix A.1
+
+# 3.1 Subjective Ground Training Module
+
+Subjective ground base consists of a set of previous comments left on r/AmITheAsshole. We hypothesize that an individual's subjectivity towards situations related to a specific topic can be applied to other situations within the same topic. For instance, if an individual has a positive subjectivity in raising pets, their moral judgments on animal abuse would be 'unacceptable'. Following this intuition, we vectorize input situations in $\mathcal{D}^{+}$ using Sentence-BERT (Reimers and Gurevych, 2019), apply K-Means clustering to identify a fixed number of topic groups among situations, and cluster the redditor's comments on situations within the same topic group. We set the number of clusters to 20, based on the computed inertia values.
+
+Recognizing the majority of comments in the subjective ground base are not informative with respect to estimating the redditor's subjectivity, we prune unnecessary comments and obtain the subjective ground that is potentially more relevant. In order to determine the relevance of subjective ground comments, we apply Moral Foundations Theory (Haidt and Joseph, 2004; Graham et al., 2013), a framework to explain the origins of human moral reasoning with foundations such as care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and purity/degradation. We focus on comments that are more related to moral
+
+foundations.
+
+We compute each comment's moral foundations score by counting the number of word matches with the Moral Foundation Dictionary (Frimer et al., 2019), which has approximately 2,000 words with their corresponding moral foundations. For each cluster, we keep the top 6 comments with the largest moral foundations score, resulting in 120 comments in total for each redditor. The major reason to keep the small number of the comments is because there are users who commented very little to specific clusters, and setting the same fixed number of comments is more efficient in model training. Examples are illustrated in Table 2.
+
+Rather than equally considering all subjective ground comments in the cluster, a separate attention module is trained using $\mathcal{D}^+$ . The module computes the attention score between the redditor's subjective ground and situation representations, and is followed by feed-forward networks to predict moral judgments. We assume that this module learns weights over subjective ground comments when they are conditioned to the input situation. In other words, we expect the attention layer to highlight the redditor's subjective ground comments that are most relevant to predict correct moral judgments on the given situation.
+
+# 3.2 Value Attention Training Module
+
+Schwartz (1992) introduced a theory of basic human values to characterize cultural groups, societies, and individuals. In this study, values are used
+
+to explain an individual's motivational bases of certain behavior. Value attention training module aims to map individual's subjective ground comments to more abstract values—in our case, moral rules-of-thumb in $\mathcal{D}$ can be used as value candidates. This process is essential because situation clusters capture broad topics rather than fine-grained talking points, thus the redditor's subjective ground comments might not be directly applied to an input situation. For instance, clusters regarding romantic relationships, family members, or kids tend to vary a great extent, and it is challenging to acquire a fixed number of comments that could cover all situations in the cluster.
+
+In value attention training, we compute attention scores between value candidates and the subjective ground of the redditors which has already been trained in the previous module. Assuming that the subjective ground has high correlation with moral judgments of the situation, the rule-of-thumb that has the highest attention weight would be highly correlated to the judgment as well. This module projects one's subjective ground on the value candidates to assess the given situation.
+
+After computing attention weights among the value candidates, we obtain a weighted sum and consider it as the representation of the value that would speak for the redditor. The final feedforward classifier layer takes the weighted sum of values and the input situation representation, concatenates them, and predicts the redditor's moral judgments on the situation.
+
+# 4 Experiments
+
+We implement models varying in attention structures and subjective ground representations. Macro F1 is used as an evaluation metric for accuracy since the label distribution is unbalanced and the two classes are equally important. Implementation details are described in Appendix A.2. Codes are released for future reference.3
+
+# 4.1 Baseline
+
+Baseline models are implemented to measure the difficulty of identifying a redditor's judgment pattern on given situations without providing context information. We define Transformer-based sequence classifiers for each redditor and train the models to predict the redditor's judgments given the input situations.
+
+Observing the limited amount of training instances in $\mathcal{D}$ per redditor, we make use of a larger dataset, $\mathcal{D}^+$ to fine-tune the encoder layer. We first train sequence classifiers with the same objectives using $\mathcal{D}^+$ and share the encoder for fitting the model to $\mathcal{D}$ . This model is denoted as Baseline, fine-tuned encoder.
+
+# 4.2 Rules-of-Thumb Self Attention
+
+As a part of ablation studies, we implement a model that predicts redditors' judgments with the help of rule-of-thumb candidates of input situations. The major difference between this model and our proposed model is the use of the redditor subjective ground; this model does not take subjective ground into consideration. We compute the self attention of rule-of-thumb representations and concatenate with input representations to predict the judgments.
+
+# 4.3 Subjective Ground Models
+
+Our model learns the correlation between the input situations and the redditor's subjective ground, and later identifies the most relevant value to the subjective ground for predicting moral judgments. We denote our model as Subjective Ground Attention.
+
+To investigate the effect of subjective ground attention layers, we implement the Static Subjective Ground model; this model uses the exact same structure as Subjective Ground Attention, without assigning or learning any attention weights. This model therefore takes all subjective ground comments equally.
+
+Another variation, Subjective Ground Attention w/o RoT, is a model that uses attention-weighted subjective ground comments without integrating moral rules-of-thumb. This model evaluates the efficacy of mapping subjective ground comments to rule-of-thumb candidates that are directly related to input situations.
+
+One may argue that simply adding more layers and parameters could help improve the performance regardless of the learning aspects of the model. Thus to analyze the efficacy of using a redditor's previous comments as subjective ground, we put a randomly initialized matrix as the subjective ground. This model is denoted as Latent Subjective Ground.
+
+# 5 Discussion
+
+In this section, we analyze the prediction results of different models and discuss the effectiveness of
+
+
Model
F1 (stdev)
Random Prediction
48.77 (0.60)
Baseline
58.61 (0.97)
Baseline, fine-tuned encoder
59.12 (0.34)
Rules-of-Thumb Self Attention
59.66 (0.72)
Latent Subjective Ground
59.16 (0.67)
Static Subjective Ground
60.15 (0.73)
Subjective Ground Attention w/o RoT
60.83 (0.61)
Subjective Ground Attention
61.05 (0.21)
+
+Table 3: F1 measures of the models in predicting moral judgments.
+
+each model component. Additionally, we perform a qualitative analysis of subjectivity explanations of the model.
+
+# 5.1 Prediction Accuracy
+
+Table 3 reports the average macro F1 scores and standard deviation of five experiments of each model. The overall macro F1 scores of the implemented models are not high. We suppose the task is naturally challenging because the number of training instances is insufficient to learn the moral judgment patterns of input situations—there are less than 500 instances for each redditor on average. Using more data to fine-tune the encoder slightly influences the performance on $\mathcal{D}$ —the baseline model with its Transformer encoder fine-tuned with $\mathcal{D}^{+}$ shows higher accuracy compared to the baseline model.
+
+Using more context information such as rule-of-thumb candidates and subjective ground comments improve the model accuracy to a certain extent. Both Rules-of-Thumb Self Attention and Static Subjective Ground models perform better while Latent Subjective Ground model does not. This shows that the redditor's previous comments help understand their subjectivity. Subjective Ground Attention, our proposed model, is the most efficient way of integrating both rules-of-thumb and subjective ground comments. The prediction accuracy of Subjective Ground Attention is more improved than that of the models without subjective ground comments in a statistically significant way, showing p-value less than 0.01.
+
+We further break down the outputs and analyze prediction accuracy for each cluster, assuming that the difficulty of prediction varies based on the topic and the quality of clustering. The gap between the highest and the lowest accuracy cluster is $24\%$ which supports our assumption. To identify the attributes that are correlated to the cluster accuracy, we investigate a few attributes such as cluster size,
+
+
+Figure 3: Prediction accuracy of different clusters with respect to silhouette scores. Each dot in the graph represents a distinct cluster.
+
+intra-cluster distances, and label distributions.
+
+Figure 3 shows cluster accuracy based on their silhouette score, which is high when items in a cluster are close together and distant from other clusters. There are a few outliers showing high accuracy with low silhouette and vice versa, yet the graph shows positive correlations with Pearson's correlation coefficient of 0.34. This implies that more well-clustered and semantically distinctive situations tend to provide better accuracy. Detailed descriptions of each cluster and their predictions are described in Appendix A.3.
+
+# 5.2 Consistency in Attention Weights
+
+We define a new metric for evaluating the consistency of the attention modules on test data. One of the desired behaviors of the model is its consistency; the rule-of-thumb with the largest attention weight needs to be consistent with the model's prediction. For instance, if the most attended rule-of-thumb supports the acceptability of the input situation, we want the model prediction to be 'acceptable' regardless of the ground truth. We manually annotate the acceptability label of 500 rules-of-thumb. Value consistency is then defined as a portion of instances where the highest weighted rule-of-thumb's acceptability label matches the model's final prediction.
+
+It is more challenging to annotate the acceptability label of subjective ground comments. Thus we design another test, input perturbation, to measure the consistency of subjective ground attentions. The redditor's subjective ground needs to account for situations at inference, and we expect the model to behave consistently for similar situations. From this intuition, we manually create situations that
+
+
+Figure 4: Attention weight flows of the original situations and perturbed inputs. When an abstract concept is given, the model attends to the redditor's subjective ground differently and predicts the judgment correctly.
+
+
Attention Consistency Test
Model
Consistency
Value
SG
Rules-of-Thumb Self Attention
35.94
N/A
Static Subjective Ground
65.63
71.15
Latent Subjective Ground
42.19
67.96
Subjective Ground Attention
56.25
72.32
+
+
Input Perturbation Test
Data
Accuracy
Original Situation
51.07
Altered Gender
45.15
Rephrased Situation
46.73
Abstract Concept
58.18
+
+Table 4: Quality evaluation of attention weights. The upper table reports the consistency measure of the value attentions (Value) and the subjective ground attentions (SG). Note that subjective ground consistency can't be measured in Rules-of-Thumb Self Attention because this model doesn't refer to subjective ground comments. The lower table shows the accuracy of Subjective Ground Attention model on modified inputs.
+
+are similar to the original reddit posts.4 We apply three levels of similarity; (1) situations where pronouns and gender-specific nouns are altered (e.g. not respecting my mom $\rightarrow$ not respecting my dad), (2) rephrased situations (e.g. not respecting $\rightarrow$ being mean to), (3) abstract concept of the situations where it can be applied to other situations (e.g. revealing someone's secret $\rightarrow$ honesty is more important than relationships). Subjective ground consistency is defined as the portion of modified inputs that have the same attention weight order as the original input. Evaluation results are described in Table 4.
+
+The value consistency of Rules-of-Thumb Self Attention and Latent Subjective Ground models are surprisingly low, implying these models learn
+
+rules-of-thumb attention weights without their actual relatedness to the moral judgments—right for the wrong reasons. We further investigate the reason and observe that the model tends to give high weights on some specific rules-of-thumb, possibly texts that are more familiar to the pre-trained Transformers regardless of the redditor's judgments. Static Subjective Ground model gives the highest consistency score, confirming the efficacy of using redditor's subjective ground comments. This model exceeds the value consistency measure of our proposed model, Subjective Ground Attention, suggesting rules-of-thumb attentions become more consistent with the model's final prediction when using all subjective ground comments equally. Our proposed model achieves the highest score in subjective ground consistency tests. This implies that Subjective Ground Attention learns consistent attention weights with respect to similar inputs.
+
+Another interesting finding is that our model gives more accurate predictions on the abstract concept inputs; when the model is conditioned to the abstract concepts, the order of the subjective ground attention weights changes and it leads to better prediction results. Referring to an example illustrated in Figure 4, the model attends more on the first and the third comments when the original situation and the rephrased situation is given. This is because the model picks up keywords like job and occupation, and considers the situation as a job/work related issue. For an abstract concept input, however, the fifth subjective ground is attended the most as the model now sees the situation as one regarding relationships. The different weights over subjective ground affects the attention weights on rules-of-thumb, hence they impact the final model prediction. These results suggest that our proposed model learns one's general perspectives on morally
+
+Situation: Having a gender preference in our selective abortion
+
+
Redditor Judgment: Acceptable / Model Prediction: Acceptable / Consistency: No
Subjective Ground
Rules-of-Thumb
·Considering that this woman agreed to sacrifice her body without compensation as a solely altruistic thing for OP's family...not sure how OP gets to force her to sacrifice her bodily autonomy through forcing her into an unwanted abortion.
·Abortion is bad because it takes a person's life.
·Injuring is only self defense if you have no other means of protecting yourself-in OP's case, it was probably anger/revenge, not self defense.
·It's fine to have an abortion for any reason that
·You're not forbidding him from caring for his child, you're forbidding his family member from caring for child.
·It's fine to want to choose the sex of your child.
·I'm a devout Catholic, so I actually do believe in religious stuff.
·It's wrong to have an abortion just because you don't like the features of the child.
·A lot of prolife people are Catholic, and the Catholic Church is against birth control.
·It's not allowed to have an abortion just because you don't like the features of the child.
·You weren't stopping her from going out, or being controlling about her actions, you were just watching out for her safety.
+
+
Redditor Judgment: Acceptable / Model Prediction: Acceptable / Consistency: Yes
Subjective Ground
Rules-of-Thumb
·I suspect that the issues that plague her family aren’t completely removed from her because she lacks the common sense to realize that she can’t help them and the 13 year old needs an intervention and foster care
·Abortion is bad because it takes a person’s life.
+·It’s fine to have an abortion for any reason that
·Your wife "won't work" and here you were working 7 am to midnight, and she's not only refusing to do laundry, etc., but she expects you to do it at midnight??
·It’s fine to want to choose the sex of your child.
·Only foolish and misguided humans stay together for the kids, causing more misery than they would have if they had just divorced and been amazing parents separately
·It’s wrong to have an abortion just because you don’t like the features of the child.
·I still would get arrested for public nudity, assault, robbery, theft.
·You went way above and beyond for people ..., but also couldn’t be bothered to ... spend extra money on for baby sitters you didn’t have to hire.
·It’s not allowed to have an abortion just because you don’t like the features of the child.
·A childhood friend only wished his parents had divorced because they treated each other like strangers.
+
+Table 5: Attention weights among subjective ground comments and rule-of-thumb candidates given an input situation. The two redditors have the same judgments yet they differ in the rules-of-thumb attention and their consistency.
+
+competing values although applying this knowledge to specific situations is yet challenging.
+
+# 5.3 Qualitative Analysis of Subjective Ground
+
+In this section, we qualitatively analyze the redditors' subjective ground attention and its relatedness to rules-of-thumb attention.
+
+We illustrate a case where two redditors comment on the same post in Table 5. The upper case is where the model prediction is not consistent with the most attended rule-of-thumb. We observe that the model gives higher weights to abortion-related subjective ground comments, implying that the redditor would consider the given situation unacceptable. The value attention module chooses the last rule-of-thumb which opposes the idea of abortion, showing the consistency between subjective ground and value attention. However, the final prediction of the model is 'acceptable', suggesting that the classifier does not use the weighted rule-of-thumb representations correctly. This analysis matches suboptimal value consistency results of Subjective Ground Attention in Table 4 and raises the
+
+necessity of developing classifiers that can better understand value attentions with respect to moral judgments.
+
+The next redditor, on the other hand, does not include any abortion-related comments as subjective ground. The model attends to the last subjective ground comment that contains keywords related to family and their separation—divorced, strangers. This example highlights the case where the topic distribution of a redditor's subjective ground is not comprehensive enough with respect to the given situation. In such case, the attention module focuses on the ground that is potentially associated with the situation and give high attention weights on the related rules-of-thumb. We anticipate that the model will be more accurate and consistent using subjective ground that is clustered based on more fine-grained talking points.
+
+Overall, we observe the consistency in subjective ground and value attentions. We expect the model's prediction accuracy and the quality of explanations can be further improved using more fine-grained activities of individuals and a neural component
+
+that can better learn the correlation between rules-of-thumb and moral judgments.
+
+# 6 Related Work
+
+Explainable AI As deep neural language models improve the accuracy of many different downstream NLP tasks, measuring the accountability and interpretability of these models has been recently gaining interest in the research community.
+
+Local explanation methods aim to provide rationales of the model in predicting a specific input. Recent neural models majorly explain the model behaviors by visualizing the saliency map of the first derivatives of the encoder (Ross et al., 2017; Wallace et al., 2018), attention layers (Xie et al., 2017; Mullenbach et al., 2018), perturbing inputs (Sydorova et al., 2019), and applying rules and templates (Abujabal et al., 2017; Pezeshkpour et al., 2019). Rajani et al. (2019) collect human explanations for commonsense reasoning in the form of text, and train language models to generate the explanations given pairs of commonsense questions and answers. Aubakirova and Bansal (2016) investigate how neural network models predict the politeness of input text by visualizing activation clusters, saliency heatmaps of the first derivatives, and word representation transformations in the embedding space. Ribeiro et al. (2016) proposes a framework where an interpretable model, trained to minimize the distance to the classifier predictions, explains the model prediction with absence/presence of specific words.
+
+Perspective Identification Identifying perspectives from the text has been steadily studied in many sub-fields of NLP. Greene and Resnik (2009) defines perspectives as an individual's syntactic packaging of the information and analyzes different usage of linguistic cues in articles. Choi and Wiebe (2014) adopts a simple symbolic relation, positive and negative connotations towards events and concepts, to the existing WordNet (Miller et al., 1990) hierarchy for inferring the point of view of an opinion. There are more complex concepts and structures—analyzing political framing and agenda-setting (Field et al., 2018; Roy and Goldwasser, 2021), encoding political perspective flows in social settings via Graph Convolutional Network (Li and Goldwasser, 2019)—have been studied.
+
+This research paper is positioned in the intersection of explainable AI and perspective identification. We examine several models that can approxi
+
+mate an individual's subjective ground in a human-readable way, as well as interact with diverse daily situations to infer individual's perspectives on the author and their situations.
+
+# 7 Conclusion
+
+This paper proposes a neural model, Subjective Ground Attention, that represents an individual's subjective preference with their previous activities and explains the reasoning behind their moral judgments on diverse social situations by spotlighting the most relevant subjective ground. We explore situations posted on a Reddit community, r/AmITheAsshole, and analyze active redditors' judgments on these situations indicating whether or not the author's behavior is acceptable. Upon attending subjective comments and moral rules-of-thumb, experimental results show that the model provides reasonable explanations without sacrificing prediction accuracy. Although attention weights on moral rules-of-thumb show suboptimal consistency with the model's prediction, we illustrate the model's consistency in attention weights on subjective ground comments. We further claim that our model better captures one's perspectives on abstract moral concepts.
+
+# Limitations
+
+One of the major limitations of this work is the absence of Reddit post contents. Although reading the content of the post is crucial in fully comprehending and judging the situation, we decide not to include the content mainly because of the size of training instances. A large volume of the text in the post content would have hindered the model from good generalization.
+
+Another shortcoming is the subjectivity explanation tool—moral foundation annotations. Ideally moral rules-of-thumb represent an individual's biases in judging situations, yet in reality the annotated rules-of-thumb do not cover all types of biases related to the situation. Additionally, many of the manually crafted rules-of-thumb will not help the model learn different types of biases since they are largely similar to the original rules-of-thumb.
+
+Lastly, individual subject ground modeling in this work is over-simplified. We construct the redditors' subjective ground solely based on their previous comments in the same subreddit, and there is more context information that could potentially help analyze an individual, such as the posts submit
+
+ted by the redditor and information about subreddits they are actively involved. Rather than choosing the subjective ground comments based on the word matches with Moral Foundation Dictionary, applying more sophisticated methods in identifying moral foundations, such as moral foundations framing (Roy and Goldwasser, 2021), would also lead to a better subjective ground. In analyzing a different domain in future work, we could also take an individual's identity—demographic, social, political—into consideration when modeling subjective ground.
+
+# Ethics Statement
+
+To the best of our knowledge, this work has not violated any code of ethics. We anonymize redditor information in the paper as well as in the datasets we share to the public. This paper shows different redditors' subjective ground, yet there is no discrimination in choosing redditors of the interest; the redditors are selected solely based on the number of comments they have left on this subreddit. We provide the code and datasets for future reproducibility of the work.
+
+# Acknowledgements
+
+This project was partially funded by NSF IIS-2048001 and DARPA CCU program. The views are the authors' and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government.
+
+# References
+
+Abdalghani Abujabal, Rishiraj Saha Roy, Mohamed Yahya, and Gerhard Weikum. 2017. Quint: Interpretable question answering over knowledge bases. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 61-66.
+Malika Aubakirova and Mohit Bansal. 2016. Interpreting neural networks to improve politeness comprehension. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2035-2041.
+Yoonjung Choi and Janyce Wiebe. 2014. +/effectwordnet: Sense-level lexicon acquisition for opinion inference. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1181-1191.
+
+Anjalie Field, Doron Kliger, Shuly Wintner, Jennifer Pan, Dan Jurafsky, and Yulia Tsvetkov. 2018. Framing and agenda-setting in russian news: a computational analysis of intricate political strategies. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3570-3580.
+Maxwell Forbes, Jena D Hwang, Vered Shwartz, Maarten Sap, and Yejin Choi. 2020. Social chemistry 101: Learning to reason about social and moral norms. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 653-670.
+Jeremy A Frimer, Reihane Boghrati, Jonathan Haidt, Jesse Graham, and Morteza Dehgani. 2019. Moral foundations dictionary for linguistic analyses 2.0. Unpublished manuscript.
+Jesse Graham, Jonathan Haidt, Sena Koleva, Matt Motyl, Ravi Iyer, Sean P Wojcik, and Peter H Ditto. 2013. Moral foundations theory: The pragmatic validity of moral pluralism. In Advances in experimental social psychology, volume 47, pages 55-130. Elsevier.
+Stephan Greene and Philip Resnik. 2009. More than words: Syntactic packaging and implicit sentiment. In Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, page 503.
+Jonathan Haidt and Craig Joseph. 2004. Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. *Dedalus*, 133(4):55-66.
+Denis Hilton. 2017. Social attribution and explanation.
+Denis J Hilton. 1990. Conversational processes and causal explanation. Psychological Bulletin, 107(1):65.
+Jena D Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, and Yejin Choi. 2021. (comet-) atomic 2020: On symbolic and neural commonsense knowledge graphs. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 6384-6392.
+Daniel Kahneman. 2011. Thinking, fast and slow. Macmillan.
+Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR).
+Chang Li and Dan Goldwasser. 2019. Encoding social information with graph convolutional networks for political perspective detection in news media. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2594-2604.
+
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
+John McClure. 2002. Goal-based explanations of actions and outcomes. European review of social psychology, 12(1):201-235.
+George A Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J Miller. 1990. Introduction to wordnet: An on-line lexical database. International journal of lexicography, 3(4):235-244.
+Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial intelligence, 267:1-38.
+James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable prediction of medical codes from clinical text. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1101-1111.
+Frederick Neuhouser et al. 1990. Fichte's theory of subjectivity. Cambridge University Press.
+Pouya Pezeshkpour, Yifan Tian, and Sameer Singh. 2019. Investigating robustness and interpretability of link prediction via adversarial modifications. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3336-3347.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1-67.
+Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932-4942.
+Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992.
+Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135-1144.
+
+Andrew Slavin Ross, Michael C Hughes, and Finale Doshi-Velez. 2017. Right for the right reasons: Training differentiable models by constraining their explanations. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17), pages 2662-2670.
+Shamik Roy and Dan Goldwasser. 2021. Analysis of nuanced stances and sentiment towards entities of us politicians through the lens of moral foundation theory. In Proceedings of the Ninth International Workshop on Natural Language Processing for Social Media, pages 1-13.
+Shalom H Schwartz. 1992. Universals in the content and structure of values: Theoretical advances and empirical tests in 20 countries. In Advances in experimental social psychology, volume 25, pages 1-65. Elsevier.
+Alona Sydorova, Nina Poerner, and Benjamin Roth. 2019. Interpretable question answering on knowledge bases and text. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4943-4951.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
+Eric Wallace, Shi Feng, and Jordan Boyd-Graber. 2018. Interpreting neural networks with nearest neighbors. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 136-144.
+Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. Advances in neural information processing systems, 32.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
+Qizhe Xie, Xuezhe Ma, Zihang Dai, and Eduard Hovy. 2017. An interpretable knowledge transfer model for knowledge base completion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 950-962.
+
+# A Experiment Details
+
+# A.1 Model Formalization
+
+While training the subjective ground module of the redditors, we make use of the input situations
+
+$X^{+} = \{x_{1}^{+},x_{2^{+}},\ldots ,x_{N}^{+}\} \in \mathcal{D}^{+}$ where $M$ redditors in $\{u_1,u_2,\dots ,u_M\}$ have commented on. An $i$ -th redditor $u_{i}$ is represented as a set of $G$ subjective ground comments, $u_{i} = \{c_{i1},\dots ,c_{iG}\}$ Binary output labels, $Y^{+}$ , indicating acceptable / unacceptable judgments on each input situation can be denoted as $Y^{+} = \{0,1\}^{L}$ where $L$ is the total number of instances. Note that $L\neq NM$ since not all redditors commented on all input situations in the dataset.
+
+We use another dataset with moral rules-of-thumb annotation, $\mathcal{D}$ , for training and testing the model. Similar to $\mathcal{D}^+$ , input situations and output labels are denoted as $X = \{x_{1}, x_{2}, \ldots, x_{n}\}$ , and $Y = \{0, 1\}^{l}$ , where $n$ and $l$ are the number of situations and instances in $\mathcal{D}$ , respectively. Additionally, there are $K$ rules-of-thumb annotated for each situation. An input situation $x_{j}$ is mapped to the rule-of-thumb candidates, $V_{j} = \{v_{j1}, \ldots, v_{jk}\}$ .
+
+The first step of training is on the redditors' subjective ground using $\mathcal{D}^+$ . Using a pre-trained Transformer encoder (Wolf et al., 2019), we represent an $i$ -th redditor's subjective ground as $\mathrm{SG}_i \in \mathbb{R}^{G \times h}$ where each row of the matrix is the encoded subjective ground comment and $h$ is the encoder dimension. Assuming this redditor has commented on the $j$ -th input situation in $\mathcal{D}^+$ , encoded as $z_j^+$ , the subjective ground training module is designed as follows:
+
+$$
+z _ {j} ^ {+} \leftarrow \operatorname {T r a n s f o r m e r} \left(x _ {j +}\right)
+$$
+
+$$
+\mathrm {S G} _ {i} \leftarrow \operatorname {T r a n s f o r m e r} \left(\left\{c _ {i 1}, \dots , c _ {i G} \right\}\right)
+$$
+
+$$
+a _ {i, j} ^ {+} = \text {M u l t i h e a d} \left(\mathrm {S G} _ {i}, z _ {j} ^ {+}, \mathrm {S G} _ {i}\right)
+$$
+
+$$
+\hat {y} = W _ {C L F} \left[ \Sigma a _ {i, j} ^ {+} \mathrm {S G} _ {i}; z _ {j} ^ {+} \right]
+$$
+
+$$
+\mathcal {L} = \operatorname {C r o s s E n t r o p y} (y, \hat {y})
+$$
+
+We follow the basic structure of the Multi-head attention proposed by Vaswani et al. (2017), where different representations of attention inputs are combined:
+
+$$
+\operatorname {M u l t i h e a d} (Q, K, V) = \left[ \text {h e a d} _ {1}; \dots ; \text {h e a d} _ {h} \right] W ^ {O}
+$$
+
+$$
+\text {s . t .} \operatorname {h e a d} _ {i} = f \left(\frac {Q W _ {i} ^ {Q} \left(K W _ {i} ^ {K}\right) ^ {\intercal}}{\sqrt {d _ {k}}} V W _ {i} ^ {V}\right)
+$$
+
+$$
+w h e r e f (.) : \text {s o f t m a x}
+$$
+
+$$
+\text {a n d} d _ {k}: \text {m o d e l d i m e n s i o n}
+$$
+
+$$
+W _ {i} ^ {Q}, W _ {i} ^ {K}, W _ {i} ^ {V}: \text {i n p u t p r o j e c t i o n s}
+$$
+
+After subjective ground is trained, we learn the attention between the redditor's subjective ground and moral rules-of-thumb of the situations in $\mathcal{D}$ .
+
+We use the encoder that is fine-tuned in the previous step and obtain an encoded value candidate representation of a $j$ -th situation $x_{j}$ as $\mathrm{VAL}_{j} \in \mathbb{R}^{K \times h}$ where each row of the matrix is the encoded rule-of-thumb. Suppose the $i$ -th redditor commented on $x_{j}$ , the model learns the data as follows:
+
+$$
+z _ {j} \leftarrow \operatorname {T r a n s f o r m e r} \left(x _ {j}\right)
+$$
+
+$$
+\mathrm {S G} _ {i} \leftarrow \operatorname {T r a n s f o r m e r} \left(\left\{c _ {i 1}, \dots , c _ {i G} \right\}\right)
+$$
+
+$$
+\operatorname {V A L} _ {j} \leftarrow \operatorname {T r a n s f o r m e r} \left(\left\{v _ {j 1}, \dots , v _ {j K} \right\}\right)
+$$
+
+$$
+a _ {i, j} ^ {\mathrm {S G}} = \text {M u l t i h e a d} (\mathrm {S G} _ {i}, z _ {j}, \mathrm {S G} _ {i})
+$$
+
+$$
+a _ {i, j} ^ {\mathrm {V A L}} = \text {M u l t i h e a d} \left(\mathrm {V A L} _ {j}, a _ {i, j} ^ {\mathrm {S G}} \mathrm {S G} _ {i}, \mathrm {V A L} _ {j}\right)
+$$
+
+$$
+\hat {y} = W _ {C L F} \left[ \Sigma a _ {i, j} ^ {\mathrm {V A L}} \mathrm {V A L} _ {j}; z _ {j} \right]
+$$
+
+$$
+\mathcal {L} = \operatorname {C r o s s E n t r o p y} (y, \hat {y})
+$$
+
+# A.2 Implementation Details
+
+We use a pre-trained DistilBERT-base-uncased for the text encoder, distributed by Wolf et al. (2019). For attention layers, we implement multi-head scaled dot-product attention layers with 12 heads. Classifiers are two-layer linear networks followed by Cross Entropy loss and Adam optimizer (Kingma and Ba, 2014) with static learning rates. The final model considers $d_{k}$ as 1 in the multi-head attention layers, because normalizing attention weights by the square root of the model dimension generates more equally distributed attention weights over subjective ground comments and rules-of-thumb. All experimental results are the average of five separate runs.
+
+Our models are trained and tested using NVIDIA Tesla V100 GPU and the average time for training the full model is approximately 6 hours, while the training time for the baseline models are 20 minutes.
+
+We manually select the hyperparameters to tune—the number of attention heads and learning rates. The selection criterion for the hyperparameters was the average F1 score of five experiments on test data. We set the possible number of heads either 1 or 12, where 1 means single attention head. Hyperparameters are searched using grid search, in the boundary from 1e-6 to 1e-3. We also implemented learning rate warm-ups, where the learning rate increases for the first few steps, then it decreases logarithmically.
+
+# A.3 Cluster Results
+
+We break down the model performance based on each cluster. The F1 score of each cluster and the most representative situations in the cluster are described in Table 6.
+
+
not letting my girlfriend into my parents house giving my girlfriend an ultimatum regarding their best friend not telling my boyfriend that his friend made a pass at me
not inviting my sister-in-law to my wedding not going to my sister's wedding after being initially unin-
+vited not letting my future in-laws invite people to our wedding
Cluster 8: cars, Accuracy: 52.65
Cluster 9: kids, Accuracy: 43.08
not giving my neighbor a parking spot telling someone not to come up to my car calling the police on someone who parked in my driveway
leaving my son behind on our family vacation dropping my stepdaughter off at her mothers house letting my daughter 'take over' my son's birthday present
Cluster 10: roommates, Accuracy: 52.04
Cluster 11: emotional-burst, Accuracy: 63.20
not wanting my roommate to move out asking my roommate's significant other to move out refusing to switch bedrooms with my future roommate
being upset at my mom being mad at my significant other being pissed off at my girlfriend
not letting my daughter go to my in-laws house not allowing my mom to live with me refusing to bring my little brother to birthdays
not wanting to pay for something my Girlfriend offered to pay for wanting my co-worker to pay me back Asking my Fiancee to Pay 1/3 of the Bills
Cluster 14: coworkers, Accuracy: 60.87
Cluster 15: pets, Accuracy: 55.01
not telling my co-worker that he's going to be fired getting a co-worker fired for something I also did Refusing To Work With My Coworker
not giving my dog back to his original owner telling my wife she can't get rid of the dog not letting a homeless guy pet my dog
Cluster 16: offensive-behavior, Accuracy: 59.62
Cluster 17: parents, Accuracy: 49.71
saying something embarrassing about one of my students making racist and sexist jokes refusing to forgive former classmates for being homophobic
telling my mom she didn't raise me telling my parents they're out of touch telling my dad's wife her family's not my family
Cluster 18: food, Accuracy: 64.59
Cluster 19: friendship, Accuracy: 60.83
telling my coworker to stop eating my food not letting my spouse eat my meal prepped food not eating the food my coworkers wife made
telling my friend off ruining my friend's relationship telling my friend she's out of line
+
+Table 6: Each cluster's topic, most representative situation examples, and their F1 accuracy on our proposed model.
+
+
Original: telling my wife that when/if she quits her job to be a SAHM I am not planing to do much in the way of household chores anymore
+Gender: telling my husband that when/if he quits his job to be a SAHD I am not planing to do much in the way of household chores anymore
+Rephrase: suggesting my wife to balance household chores since I will be working while she quits her job
+Abstract: preferring more fairness over helping out my partner
Original: “being triggered” by my boyfriend setting rules for my pregnancy weight gain
+Gender: “being triggered” by my girlfriend setting rules for my weight gain
+Rephrase: being upset at my boyfriend when he plans to prevent me from eating too much during pregnancy
+Abstract: not wanting to be controlled by my partner’s concerns about my health
Original: not going to my girlfriends dads funeral
+Gender: not going to my boyfriends moms funeral
+Rephrase: not wanting to attend to my girlfriends dads funeral
+Abstract: putting my belief first even my partner has lost their loved ones
Original: having a gender preference in our selective abortion
+Gender: having a racial preference in our selective abortion
+Rephrase: deciding to have abortion based on the baby’s sex
+Abstract: believing choice is more important than life
Original: telling DH that I will not let his mom pick her grandmother name
+Gender: telling DH that I will not let his dad pick his grand-father name
+Rephrase: not wanting to name my children that my MIL picked
+Abstract: wanting more freedom in raising kids over respecting the opinion of my parents
Original: shaming my sister-in-law because she was mean to me
+Gender: shaming my brother-in-law because he was mean to me
+Rephrase: disrespecting my sister-in-law by making fun of her because she was mean to me
+Abstract: revenging someone in the family for their behavior on me
Original: breaking up with him because of his job
+Gender: breaking up with her because of her job
+Rephrase: wanting to finish the romantic relationship because of my partner’s occupation
+Abstract: considering one’s ambition more important than loyalty in relationships
Original: denying my wife a new kitchen
+Gender: denying my husband a new kitchen
+Rephrase: not allowing my wife to get a new kitchen
+Abstract: not wanting to waste money on my partner’s desire
Original: taking my daughter to get her hair dyed against my wives wish
+Gender: taking my son to get his hair dyed against my husbands wish
+Rephrase: letting my daughter to get her hair dyed although my wife did not want it
+Abstract: putting my kid’s desire first over my partner’s thought
Original: telling my sister’s boyfriend the truth about her
+Gender: telling my brother’s girlfriend the truth about him
+Rephrase: revealing a big secret about my sister to her boyfriend
+Abstract: considering honesty is always more important even though it would break up the relationships
+
+Table 7: Input situations and their modification for perturbation test.
+
+# B Datasets
+
+Train-valid/test splits of $\mathcal{D}$ were provided by the original dataset, Social Chemistry 101, and we used the same splits. For the additionally crawled data, $\mathcal{D}^+$ , we randomly divided the splits into $80\%$ , $10\%$ , $10\%$ , while excluding all valid and test samples of $\mathcal{D}$ from the training data of $\mathcal{D}^+$ .
+
+Additionally annotated data for consistency evaluation, rules-of-thumb consistency and input perturbation consistency, are annotated by the authors.
+
+# B.1 Input Modification for Subjective Ground Consistency
+
+The input situations to modify are selected from the test set of $\mathcal{D}$ . To compute the subjective ground consistency of more diverse redditors, we sort the test set situations based on the number of redditors participated. We select 10 situations where the redditors have commented the most, resulting in 162 instances in total. The examples of 10 situations with their original situation descriptions, gender altered descriptions, rephrased descriptions, and abstract concept descriptions are in Table 7.
+
+# B.2 Value Attention Consistency
+
+Similar to input modification tests, we sort the test set situations based on the number of redditors participated, and select the top 100 situations. The authors annotated the acceptability label of rules-of-thumb of with respect to the situations, resulting in 500 instances in total.
\ No newline at end of file
diff --git a/towardsexplainingsubjectivegroundofindividualsonsocialmedia/images.zip b/towardsexplainingsubjectivegroundofindividualsonsocialmedia/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..fc51a04dbce97001bdec61ea141aa372feaf39c9
--- /dev/null
+++ b/towardsexplainingsubjectivegroundofindividualsonsocialmedia/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea18a377ef810237182fd59370ef9fb8e3b5fd55ab9f68e2b1c50e8c3b6d48fd
+size 1381567
diff --git a/towardsexplainingsubjectivegroundofindividualsonsocialmedia/layout.json b/towardsexplainingsubjectivegroundofindividualsonsocialmedia/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..9c58fc6c8185d6501b847176b256c4db5ccb08d5
--- /dev/null
+++ b/towardsexplainingsubjectivegroundofindividualsonsocialmedia/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b30690b7f72824cbae82aad808f21b8b69fa75914d7e8ba8902b07912dc22675
+size 399593
diff --git a/towardsgeneralizableandrobusttexttosqlparsing/11991b3b-9d7e-4cb3-bb14-1d59084f71bb_content_list.json b/towardsgeneralizableandrobusttexttosqlparsing/11991b3b-9d7e-4cb3-bb14-1d59084f71bb_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b9bbe346ba82835f4921aa66e2cf543ea2ede844
--- /dev/null
+++ b/towardsgeneralizableandrobusttexttosqlparsing/11991b3b-9d7e-4cb3-bb14-1d59084f71bb_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cfd0b3a062f3b2cfe9ca051a835d6e30424cf9dd18aaf050f2cabd8076df678f
+size 79829
diff --git a/towardsgeneralizableandrobusttexttosqlparsing/11991b3b-9d7e-4cb3-bb14-1d59084f71bb_model.json b/towardsgeneralizableandrobusttexttosqlparsing/11991b3b-9d7e-4cb3-bb14-1d59084f71bb_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8682a0fdfc1346e17aa244a3f1bd8bcbc22e31c8
--- /dev/null
+++ b/towardsgeneralizableandrobusttexttosqlparsing/11991b3b-9d7e-4cb3-bb14-1d59084f71bb_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:253d4dd44e6e11859cb73f2287e49f834995bad8f4e295633ba95245a6df6caa
+size 97548
diff --git a/towardsgeneralizableandrobusttexttosqlparsing/11991b3b-9d7e-4cb3-bb14-1d59084f71bb_origin.pdf b/towardsgeneralizableandrobusttexttosqlparsing/11991b3b-9d7e-4cb3-bb14-1d59084f71bb_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6f90cf1aae6be0cb729aa60f1cb7f00b4ec140b2
--- /dev/null
+++ b/towardsgeneralizableandrobusttexttosqlparsing/11991b3b-9d7e-4cb3-bb14-1d59084f71bb_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c50c20689aa53c20a3885d60b49309b68ba00bf19f61a336910df93cee5ac3d1
+size 413124
diff --git a/towardsgeneralizableandrobusttexttosqlparsing/full.md b/towardsgeneralizableandrobusttexttosqlparsing/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..913e12665f16d0ead200f747f40092d86e2a8474
--- /dev/null
+++ b/towardsgeneralizableandrobusttexttosqlparsing/full.md
@@ -0,0 +1,282 @@
+# Towards Generalizable and Robust Text-to-SQL Parsing*
+
+Chang Gao $^{1}$ , Bowen Li $^{2}$ , Wenxuan Zhang $^{2}$ , Wai Lam $^{1\dagger}$ , Binhua Li $^{2}$ , Fei Huang $^{2}$ , Luo Si $^{2}$ and Yongbin Li $^{2\dagger}$
+
+1The Chinese University of Hong Kong
+
+$^{2}$ DAMO Academy, Alibaba Group
+
+{gaochang, wlam}@se.cuhk.edu.hk, libowen.ne@gmail.com
+
+{saike.zwx,binhua.lbh,shuide.lyb}@alibaba-inc.com
+
+# Abstract
+
+Text-to-SQL parsing tackles the problem of mapping natural language questions to executable SQL queries. In practice, text-to-SQL parsers often encounter various challenging scenarios, requiring them to be generalizable and robust. While most existing work addresses a particular generalization or robustness challenge, we aim to study it in a more comprehensive manner. In specific, we believe that text-to-SQL parsers should be (1) generalizable at three levels of generalization, namely i.i.d., zero-shot, and compositional, and (2) robust against input perturbations. To enhance these capabilities of the parser, we propose a novel TKK framework consisting of Task decomposition, Knowledge acquisition, and Knowledge composition to learn text-to-SQL parsing in stages. By dividing the learning process into multiple stages, our framework improves the parser's ability to acquire general SQL knowledge instead of capturing spurious patterns, making it more generalizable and robust. Experimental results under various generalization and robustness settings show that our framework is effective in all scenarios and achieves state-of-the-art performance on the Spider, SParC, and CoSQL datasets. Code can be found at https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/tkk.
+
+# 1 Introduction
+
+Text-to-SQL parsing aims to translate natural language questions to SQL queries that can be executed on databases to produce answers (Lin et al., 2020), which bridges the gap between expert programmers and ordinary users who are not proficient in writing SQL queries. Thus, it has drawn great
+
+attention in recent years (Zhong et al., 2017; Suhr et al., 2020; Scholak et al., 2021; Hui et al., 2022; Qin et al., 2022a,b).
+
+Early work in this field (Zelle and Mooney, 1996; Yaghmazadeh et al., 2017; Iyer et al., 2017) mainly focuses on i.i.d. generalization. They only use a single database, and the exact same target SQL query may appear in both the training and test sets. However, it is difficult to collect sufficient training data to cover all the questions users may ask (Gu et al., 2021) and the predictions of test examples might be obtained by semantic matching instead of semantic parsing (Yu et al., 2018b), limiting the generalization ability of parsers. Subsequent work further focuses on generalizable text-to-SQL parsing in terms of two aspects: zero-shot generalization and compositional generalization. Zero-shot generalization requires the parser to generalize to unseen database schemas. Thanks to large-scale datasets such as Spider (Yu et al., 2018b), SParC (Yu et al., 2019b), and CoSQL (Yu et al., 2019a), zero-shot generalization has been the most popular setting for text-to-SQL parsing in recent years. Various methods involving designing graph-based encoders (Wang et al., 2020; Cao et al., 2021) and syntax tree decoders (Yu et al., 2018a; Rubin and Berant, 2021) have been developed to tackle this challenge. Compositional generalization is the desired ability to generalize to test examples consisting of novel combinations of components observed during training. Finegan-Dollak et al. (2018) explore compositional generalization in text-to-SQL parsing focusing on template-based query splits. Shaw et al. (2021) provide new splits of Spider considering length, query template, and query compound divergence to create challenging evaluations of compositional generalization.
+
+Another challenge of conducting text-to-SQL parsing in practice is robustness. Existing text-to-SQL models have been found vulnerable to input perturbations (Deng et al., 2021; Gan et al., 2021a;
+
+
+Figure 1: Overview of our TKK framework. {S} and {C} denote the database schema and context, respectively.
+
+Pi et al., 2022). For example, Gan et al. (2021a) replace schema-related words in natural language questions with manually selected synonyms and observe a dramatic performance drop. They propose two approaches, namely multi-annotation selection and adversarial training, to improve model robustness against synonym substitution.
+
+Although specialized model architectures and training approaches have been proposed to address a particular generalization or robustness challenge, we believe that practical text-to-SQL parsers should be built with strong generalizability in terms of all three levels of generalization, namely i.i.d., zero-shot, and compositional, and robustness against input perturbations. To obtain such capabilities, it can be noticed that humans often learn to write each clause, such as SELECT or WHERE, for a basic operation, before composing them to fulfill a more challenging goal, i.e., writing the entire SQL query. In contrast, most existing methods adopt a one-stage learning paradigm, i.e., learning to write each SQL clause and the dependency between different clauses simultaneously. This may lead the model to capture spurious patterns between the question, database schema, and SQL query instead of learning general SQL knowledge.
+
+To this end, we propose a novel framework consisting of three learning stages including Task decomposition, Knowledge acquisition, and Knowledge composition (TKK) for text-to-SQL parsing, which mimics the human learning procedure to learn to handle the task in stages. Specifically, in the task decomposition stage, TKK decomposes the original task into several subtasks.
+
+Each subtask corresponds to mapping the natural language question to one or more clauses of the SQL query, as shown in the top portion of Figure 1. Afterwards, TKK features a prompt-based learning strategy to separately acquire the knowledge of subtasks and employ the learned knowledge to tackle the main task, i.e., generating the entire SQL query. In the knowledge acquisition stage, TKK trains the model with all the subtasks in a multi-task learning manner; in the knowledge composition stage, TKK fine-tunes the model with the main task to combine the acquired knowledge of subtasks and learn the dependency between them.
+
+The advantages of our three-stage framework over previous one-stage learning methods are threefold: (1) it reduces the difficulty of model learning by dividing the learning process into multiple easier-to-learn stages; (2) it explicitly forces the model to learn the alignment between the question, database schema, and each SQL clause as it needs to identify the intent expressed in the question based on the schema to generate a specific clause; (3) by explicitly constructing the training data for each subtask, it is easier for the model to learn the knowledge required to translate the question into each SQL clause. These advantages help the model to learn general SQL knowledge rather than some dataset-specific patterns, making it more generalizable and robust.
+
+To verify the effectiveness of our framework, we conduct comprehensive evaluations on representative benchmarks covering all three levels of generalization and robustness scenarios with pretrained sequence-to-sequence models. Experimen
+
+tal results and analysis show that: (1) we achieve state-of-the-art performance on the Spider, SParC, and CoSQL datasets; (2) our method outperforms vanilla sequence-to-sequence models in all scenarios; (3) our framework significantly improves the model's ability to generate complex SQL queries; (4) our framework is also effective in the lowresource setting.
+
+# 2 Background
+
+Notations We use the lowercase letter $q$ to denote a natural language question and denote its corresponding database schema, context, and SQL query as $s_q$ , $c_q$ , and $l_q$ , respectively. We represent the set of training examples $(q,s_q,c_q,l_q)$ as $\mathcal{D}_{train}$ and test set as $\mathcal{D}_{test}$ . A perturbed test set $\mathcal{D}'_{test}$ could be constructed by perturbations to questions such as synonym substitution to form $(q',s_q,c_q,l_q)$ . We denote $\mathcal{S}_{train}$ as the set of database schemas of $\mathcal{D}_{train}$ , $\mathcal{L}_{train}$ as the set of SQL queries of $\mathcal{D}_{train}$ , and $\mathcal{Q}_{test}$ as the set of questions of $\mathcal{D}_{test}$ .
+
+Problem Definition Given $(q, s_q, c_q)$ , where the database schema $s_q$ consists of tables and columns, and context $c_q$ is the interaction history consisting of previous questions and system clarification in the multi-turn setting or empty in the single-turn setting, the goal is to generate the correct SQL query $l_q$ .
+
+Generalization and Robustness Following Gu et al. (2021) and Wang et al. (2022b), we formalize three levels of generalization and robustness as follows:
+
+Zero-shot generalization: $\forall q\in \mathcal{Q}_{test}$ $s_q\notin$ $\mathcal{S}_{train}$
+
+Compositional generalization: $\forall q\in$ $\mathcal{Q}_{test}$ $s_q\in S_{train}$ $l_{q}\notin \mathcal{L}_{train}$
+
+I.I.D. generalization: $\forall q\in \mathcal{Q}_{test}$ $s_q\in S_{train}$ $\mathcal{D}_{train}$ and $\mathcal{D}_{test}$ follow the same distribution.
+
+Robustness: training with $\mathcal{D}_{train}$ but adopting $\mathcal{D}'_{test}$ instead of $\mathcal{D}_{test}$ for evaluation.
+
+# 3 Our TKK Framework
+
+TKK consists of three learning stages: task decomposition, knowledge acquisition, and knowledge composition. In this section, we first introduce each stage in detail. Then we describe the training and inference of TKK.
+
+# 3.1 Three Stages of TKK
+
+Task Decomposition As shown in Figure 1, we decompose the text-to-SQL parsing task into five subtasks, namely SELECT, FROM, WHERE, GHOL, and SQL. Basically, a subtask aims to translate the natural language question to one or more clauses of the SQL query. For example, the GHOL subtask aims to generate the GROUP_BY, HAVING, ORDER_BY, and LIMIT clauses given the question and its corresponding database schema and context. For queries involving set operators such as INTERSECT, UNION, and EXCEPT to combine two SQL queries, we treat the first query as usual and the second query as the SQL clause of the first query. The SQL subtask targets at mapping the question to the SQL clause.
+
+There are two considerations behind constructing a subtask: (1) the number of classification examples; (2) the dependency between different clauses. First, according to the SQL syntax, every SQL has the SELECT and FROM clauses. However, clauses such as GROUP_BY and ORDER_BY appear only in relatively complicated SQL queries. It implies that the number of these clauses is much smaller than that of the SELECT or FROM clause. Trivially considering generating each clause as a subtask is problematic. If a specific clause does not exist, the generation task degenerates to a classification task because the model only needs to judge its existence. We denote these examples as classification examples. Too many classification examples are harmful to model learning. Second, the GROUP_BY and HAVING clauses are usually bundled together, which is also the case of the ORDER_BY and LIMIT clauses. The ORDER_BY clause is often dependent on the GROUP_BY clause if they appear in a SQL query simultaneously. Based on the above observations, combining these clauses to construct a single subtask is more appropriate. We do not further decompose the SQL clause because there will be more subtasks, and most training examples of these subtasks are classification examples.
+
+Knowledge Acquisition In this stage, we train the sequence-to-sequence model with all subtasks using multi-task learning. We assign each SQL keyword a special token, which is also used to denote its corresponding clause. Then we construct a task prompt for each subtask based on the clauses it contains. For example,
+
+the special token corresponding to GROUP_BY is “[GROUP_BY]” and the prompt for the GHOL subtask is “[GROUP_BY] [HAVING] [ORDER_BY] [LIMIT]”. The input for each subtask simply adds a task prompt to the input for the original task.
+
+For constructing the target, we replace the keywords in each clause with their corresponding special tokens. If a clause is empty, we use its corresponding special token to build the target. For instance, the example in Figure 1 does not contain the WHERE clause. Thus the target of the WHERE subtask is "[WHERE]". Those examples whose targets only contain special tokens are classification examples, as we mentioned earlier. For those examples whose targets contain at least one non-empty clause, we regard them as parsing examples. Classification examples are helpful since the model needs to learn which clauses to generate given a particular question. However, too many classification examples make it difficult for the model to learn the knowledge of subtasks. Even though we pack the GROUP_BY, HAVING, ORDER_BY, and LIMIT clauses into one subtask, the number of classification examples is still much bigger than that of parsing examples. The SQL subtask also has the problem. To tackle this problem, we downsample classification examples for each subtask to guarantee that the proportion of parsing examples is at least a ratio $r$ .
+
+Knowledge Composition Training the model with multiple subtasks cannot capture the interdependency between them. In this stage, we fine-tune the model with the main task, i.e., generating the entire SQL query, to capture such information. As shown in Figure 1, we combine the prompts of subtasks to construct the prompt of the main task to guide the model to composite the knowledge of subtasks.
+
+# 3.2 Training and Inference
+
+We formulate text-to-SQL parsing as a sequence-to-sequence generation problem. The input is the serialization of the question, database schema, and context, and the output is the SQL query. In the knowledge acquisition and composition stages, we adjust the input and output according to what we discussed in the last section. We adopt the pre-trained sequence-to-sequence model T5 (Raffel et al., 2020) as the backbone of TKK.
+
+
Models
EM
EX
Global-GNN (Bogin et al., 2019)
52.7
-
IRNet + BERT (Guo et al., 2019)
63.9
-
RATSQL + BERT (Wang et al., 2020)
69.7
-
RYANSQL + BERT (Choi et al., 2021)
70.6
-
RATSQL + GraPPa (Yu et al., 2021a)
73.4
-
LGESQL + ELECTRA (Cao et al., 2021)
75.1
-
T5-Base† (Raffel et al., 2020)
58.1
60.1
T5-Large† (Raffel et al., 2020)
66.6
68.3
T5-3B† (Raffel et al., 2020)
71.8
74.4
T5-Base + PICARD (Scholak et al., 2021)
65.8
68.4
T5-Large + PICARD (Scholak et al., 2021)
69.1
72.9
T5-3B + PICARD (Scholak et al., 2021)
75.5
79.3
TKK-Base
61.5
64.2
TKK-Large
70.6
73.2
TKK-3B
74.2
78.4
TKK-Base + PICARD
70.4
76.0
TKK-Large + PICARD
74.1
78.2
TKK-3B + PICARD
75.6
80.3
+
+Table 1: Zero-shot generalization results on Spider. $[\dagger]$ : Results are taken from (Xie et al., 2022).
+
+Training The model is trained with a maximum likelihood objective. Given the training example $(q, s_q, c_q, tp, y)$ , the loss function $L_{\theta}$ is defined as
+
+$$
+L _ {\theta} = - \sum_ {i = 1} ^ {n} \log P _ {\theta} \left(y _ {i} \mid y _ {< i}, q, s _ {q}, c _ {q}, t p\right) \tag {1}
+$$
+
+where $\theta$ is the model parameters, $tp$ is the task prompt, $y$ is the target sequence, and $n$ is the length of $y$ . In the knowledge acquisition stage, we mix the data of all subtasks for training. In the knowledge composition stage, we initialize the model with the weights of the model trained in the knowledge acquisition stage and use the data of the main task for training.
+
+Inference After training, for each triple of the question, database schema, and context $(q,s_q,c_q)$ , we generate the target sequence of the main task for obtaining the SQL query. We replace the special tokens in the target sequence with their corresponding SQL keywords.
+
+# 4 Experiments
+
+# 4.1 Experimental Setup
+
+Datasets For zero-shot generalization, we use the original Spider (Yu et al., 2018b), SParC (Yu et al., 2019b), and CoSQL (Yu et al., 2019a)
+
+
Models
SParC
CoSQL
QM
IM
QM
IM
EditSQL + BERT (Zhang et al., 2019)
47.2
29.5
39.9
12.3
IGSQL + BERT (Cai and Wan, 2020)
50.7
32.5
44.1
15.8
R2SQL + BERT (Hui et al., 2021a)
54.1
35.2
45.7
19.5
RAT-SQL + SCoRe (Yu et al., 2021b)
62.2
42.5
52.1
22.0
HIE-SQL + GraPPa (Zheng et al., 2022)
64.7
45.0
56.4
28.7
T5-Base† (Raffel et al., 2020)
50.6
31.3
42.3
12.6
T5-Large† (Raffel et al., 2020)
56.7
37.4
48.3
16.7
T5-3B† (Raffel et al., 2020)
61.5
41.9
54.1
22.8
T5-3B + PICARD (Scholak et al., 2021)
-
-
56.9
24.2
TKK-Base
52.6
32.7
46.9
17.8
TKK-Large
60.2
41.0
50.5
21.5
TKK-3B
65.5
46.7
54.9
24.9
TKK-3B + PICARD
66.6
48.3
58.3
27.3
+
+Table 2: Zero-shot generalization results on SParC and CoSQL. $[\dagger]$ : Results are taken from (Xie et al., 2022).
+
+
Models
Spider-Template
Spider-Length
Spider-TMCD
EM
EX
EM
EX
EM
EX
T5-Base† (Raffel et al., 2020)
59.3
-
49.0
-
60.9
-
T5-3B† (Raffel et al., 2020)
64.8
-
56.7
-
69.6
-
NQG-T5-3B (Shaw et al., 2021)
64.7
-
56.7
-
69.5
-
TKK-Base
62.9
69.8
52.0
55.3
63.3
71.3
TKK-3B
70.3
77.2
58.6
63.3
71.8
79.1
+
+Table 3: Results on three compositional splits of Spider. $[\dagger]$ : Results are taken from (Shaw et al., 2021).
+
+datasets1. Spider is a single-turn dataset, while SParC and CoSQL are multi-turn datasets. For compositional generalization, we use three compositional splits derived from Spider, namely template split (Spider-Template), length split (Spider-Length), and Target Maximum Compound Divergence (TMCD) split (Spider-TMCD), from Shaw et al. (2021). For i.i.d. generalization, we construct Spider-IID, SParC-IID, and CoSQL-IID based on Spider, SParC, and CoSQL, respectively. For example, to obtain Spider-IID, we mix the training and development sets of Spider to get the full set and then randomly sample from it to construct new training and development sets while retaining the ratio of the number of original training and development examples.
+
+For robustness, we use Spider-Syn (Gan et al., 2021a) and Spider-Realistic (Deng et al., 2021) for evaluation. Spider-Syn is constructed via modifying questions in Spider using synonym substitution. Spider-Realistic selects a complex subset from the development
+
+set of Spider and modifies the questions in this subset to remove or paraphrase explicit mentions of column names while keeping the SQL queries unchanged.
+
+Evaluation Metrics For Spider and datasets derived from it, we use Exact Match (EM) and Execution Accuracy (EX) following Yu et al. (2018b). For SParC, CoSQL, and datasets derived from them, we use Question Match (QM) and Interaction Match (IM) following Yu et al. (2019b).
+
+Implementation Details TKK has three model sizes: TKK-Base, TKK-Large, and TKK-3B, which are initialized with pre-trained T5-Base, T5-Large, and T5-3B models (Raffel et al., 2020), respectively. We use the same Question-Schema-Context serialization as in (Xie et al., 2022). We set the maximum input length to 512, the maximum target length to 128, and the batch size to 32. We use the Adafactor (Shazeer and Stern, 2018) optimizer for all experiments. We set the learning rate to 1e-4 for TKK-Base and TKK-Large and 5e-5 for TKK-3B and use linear
+
+learning rate decay. In addition, we choose the data balance ratio $r$ from \{0.5, 0.7, 0.9\}. All experiments are done on NVIDIA Tesla A100 and V100.
+
+# 4.2 Generalization Results
+
+Tables 1 and 2 report the zero-shot generalization results on Spider, SParC, and CoSQL, respectively. Equipped with PICARD (Scholak et al., 2021), which constrains the decoders to generate valid SQL queries by rejecting inadmissible tokens, TKK-3B achieves state-of-the-art results on these three datasets, demonstrating the strong zero-shot generalization ability of our framework. Noticeably, TKK outperforms T5 on all datasets and all model sizes. Zero-shot generalization is challenging as it requires the model to accurately understand a question conditioned on an unseen database schema to generate the correct SQL query. As a result, the model has to acquire general SQL knowledge rather than trivially memorize seen SQL patterns. Our framework forces the model to align the question, database schema, and each SQL clause and helps the model to learn SQL knowledge, thus leading to better generalization performance. In addition, as shown in Table 1, TKK-Base with PICARD achieves comparable performance to strong specialized models such as RYANSQL, indicating the great potential of pretrained sequence-to-sequence models for text-to-SQL parsing. Note that previous state-of-the-art models such as LGESQL and HIE-SQL heavily rely on manual design and may overfit to specific datasets. On the contrary, our framework enjoys strong generality as well as effectiveness.
+
+Table 3 presents the results on the three compositional splits of Spider. TKK outperforms T5 on all the three splits, demonstrating its powerful compositional generalization ability. By comparison, NQG-T5, which combines a grammar-based approach NQG with T5, shows no gain over T5 on these datasets. Spider-Template and Spider-TMCD require the model to generalize to novel templates and atom combinations, respectively, while Spider-Length requires the model to generalize to longer outputs. By explicitly decomposing the original task into multiple subtasks and combining the knowledge of them, our framework enables the model to better learn SQL knowledge and makes it less sensitive to these changes.
+
+Table 4 shows the results of TKK and the strong
+
+baseline T5 model on Spider-IID, SParC-IID, and CoSQL-IID. TKK obtains better results than T5 on all three datasets, demonstrating our framework's strong i.i.d. generalization ability. It can be seen that i.i.d. generalization is not as challenging as the other two generalization scenarios. However, the results on SParC-IID and CoSQL-IID are still not satisfactory. Enhancing the model's ability to acquire general knowledge is also helpful and necessary in this setting.
+
+# 4.3 Robustness Results
+
+Table 5 reports the results of various models trained on Spider and evaluated on Spider, Spider-Syn, and Spider-Realistic, which measures the model's robustness against perturbations to natural language questions. We have the following observations: (1) T5 is more robust than specialized models. For example, when evaluated on Spider-Syn, RATSQL + BERT degrades by 21.5 absolute points on EM, while T5-3B sees a performance drop of 12.2 absolute points. T5-Large, which performs worse than RATSQL + BERT on Spider, can obtain better performance than it on Spider-Syn. This indicates that models specially designed for Spider are prone to overfitting on it. Thus evaluating their robustness is important. (2) Our TKK framework can improve the robustness of T5 for text-to-SQL parsing. TKK outperforms T5 on both Spider-Syn and Spider-Realistic for all model sizes. (3) STRUG improves robustness via structure-grounded pre-training with a largescale of text-table paired data, while TKK provides a better way for learning text-to-SQL parsing to achieve this and does not need any additional data. (4) For pre-trained sequence-to-sequence models, the larger the model is, the more robust it is. When the model becomes larger, the gap between the performance of TKK on Spider-Syn and Spider-Realistic and the performance on Spider narrows. The same trend can be seen with T5.
+
+# 5 More Analysis
+
+Is each subtask necessary in the knowledge acquisition stage? To quantify the contribution of each subtask, we examine the performance of the main task after removing a subtask for training in the knowledge acquisition stage. Table 6 shows the ablation results on Spider and CoSQL. Removing any subtask degrades the model's performance
+
+
Models
Spider-IID
SParC-IID
CoSQL-IID
EM
EX
QM
IM
QM
IM
T5-Base (Raffel et al., 2020)
84.1
86.2
68.3
44.6
47.9
17.8
T5-Large (Raffel et al., 2020)
86.9
88.5
70.0
49.1
52.9
23.6
TKK-Base
86.6
88.1
70.3
46.9
51.5
22.2
TKK-Large
88.3
89.8
72.3
52.6
56.9
27.3
+
+Table 4: Results on three datasets for i.i.d. generalization: Spider-IID, SParC-IID, and CoSQL-IID.
+
+
Models
Spider
Spider-Syn
Spider-Realistic
EM
EX
EM
EX
EM
EX
IRNet (Guo et al., 2019)
53.2
-
28.4
-
-
-
RAT-SQL + BERT (Wang et al., 2020)
69.7
-
48.2
-
58.1
62.1
RAT-SQL + STRUG (Deng et al., 2021)
72.6
74.9
-
-
62.2
65.3
T5-Base† (Raffel et al., 2020)
56.8
59.9
40.8
43.8
46.9
47.6
T5-Large† (Raffel et al., 2020)
66.8
70.9
53.1
57.4
57.7
60.0
T5-3B† (Raffel et al., 2020)
71.6
74.5
59.4
65.3
63.2
65.0
TKK-Base
61.5
64.2
44.2
47.7
53.7
53.7
TKK-Large
70.6
73.2
55.1
60.5
64.4
64.4
TKK-3B
74.2
78.4
63.0
68.2
68.5
71.1
+
+on the main task, indicating that all subtasks are necessary for the knowledge acquisition stage. We can see that the FROM subtask has the largest impact on the performance. This is due to the mismatch between natural language expression and SQL syntax. User questions generally do not involve which tables to retrieve data from, while SQL requires specifying this. The FROM subtask allows the model to learn the alignment between the question and the FROM clause, thus alleviating the mismatch problem. Although other clauses are more or less mentioned in user questions, there are also alignment issues. Some previous work tackles this problem via designing intermediate representations (Guo et al., 2019; Gan et al., 2021b). Our framework provides a new perspective. The effect of the SQL subtask is less pronounced since the number of training examples of it is much smaller than that of the other subtasks.
+
+How effective is knowledge acquisition? We want to investigate if adding training data in the knowledge acquisition stage will further improve the model's performance. To this end, we first take $5\%$ , $10\%$ , $20\%$ , $40\%$ , and $100\%$ of the data for constructing the training data in the knowledge acquisition stage and then use $5\%$ of the data for finetuning with the main task. The results
+
+Table 5: Results of models trained on Spider and evaluated on Spider, Spider-Syn and Spider-Realistic. [†]: We train T5 models on Spider and report evaluated results on the three datasets, which are different from Table 1.
+
+
Models
Spider
CoSQL
EM
EX
QM
IM
TKK-Base
61.5
64.2
46.9
17.8
w/o SELECT
60.0
63.4
43.8
15.0
w/o FROM
60.0
62.0
43.2
14.3
w/o WHERE
60.0
63.4
44.6
16.7
w/o GHOL
60.9
63.1
43.5
16.0
w/o SQL
61.2
63.4
45.3
17.1
+
+Table 6: The effect of subtasks.
+
+on Spider and CoSQL are shown in Figure 2. As the amount of training data in the knowledge acquisition stage increases, the performance of TKK-Base improves significantly. This suggests that pre-training the model with large-scale subtask data will be beneficial to improving the model's performance.
+
+How effective is knowledge composition? To answer this, we use all data in the knowledge acquisition stage for training and then take $5\%$ , $10\%$ , $20\%$ , $40\%$ , and $100\%$ of the data for finetuning with the main task. As shown in Figure 3, training with more data in the knowledge composition stage is also helpful. Since only using subtasks for training loses the dependency information of
+
+
+(a) EM results on Spider
+
+
+(b) QM results on CoSQL
+
+
+Figure 2: Results of TKK-Base as the amount of training data in the knowledge acquisition stage increases.
+(a) EM results on Spider
+Figure 3: Results of TKK-Base as the amount of training data in the knowledge composition stage increases.
+
+
+(b) QM results on CoSQL
+Figure 4: Results of T5-Base and TKK-Base in low-resource scenarios.
+
+different subtasks, knowledge composition helps the model to capture this information. Moreover, fine-tuning the main task with only $5\%$ data has already achieved $90\%$ and $78\%$ of the performance fine-tuned with all the data on Spider and CoSQL, respectively, showing that the model only needs a small amount of data to learn to tackle the main task.
+
+What is the model's performance in terms of different hardness levels? SQL queries in Spider can be divided into four hardness levels: easy, medium, hard, and extra hard (Yu et al., 2018b). Table 7 shows a comparison between TKK and T5 regarding these four hardness levels. It can be seen that the performance improvement mainly comes from hard and extra hard examples. For example, TKK-Base and TKK-Large improve T5-Base and T5-Large by 16.3 and 10.9 absolute points on extra hard examples, respectively. By dividing the learning process into multiple stages, our framework dramatically improves the model's ability to handle complex queries, thus leading to better overall performance.
+
+Is TKK still effective in low-resource scenarios? Another perspective to study the model's generalization ability is to investigate its performance in the low-resource setting. To this end, we
+
+
Models
Easy
Medium
Hard
Extra
T5-Base
83.9
59.9
42.5
22.9
T5-Large
87.5
74.0
50.0
34.3
TKK-Base
83.9
63.0
47.1
39.2
TKK-Large
89.5
76.5
52.9
45.2
+
+Table 7: EM results on Spider in terms of different hardness levels.
+
+
+(a) EM results on Spider
+
+
+(b) QM results on CoSQL
+
+conduct experiments on Spider and CoSQL. For each dataset, we randomly shuffle the training set and then take $5\%$ , $10\%$ , $20\%$ , and $40\%$ of the data for training. The results of T5-Base and TKK-Base are shown in Figure 4. TKK-Base performs better than T5-Base no matter how much data is used for training, showing that our framework is also effective in low-resource scenarios.
+
+Case Study We also conduct a case study to show that TKK makes fewer mistakes on SQL clauses and is more robust to synonym substitution compared with T5. Details are in Appendix A.
+
+# 6 Related Work
+
+Most previous work aims to solve a particular generalization or robustness challenge for text-to-SQL parsing. Dong and Lapata (2016) introduce a sequence-to-tree approach for traditional i.i.d. datasets such as GeoQuery (Zelle and Mooney, 1996). Yu et al. (2018a) propose a syntax tree network to tackle the zero-shot text-to-SQL problem. Later, various methods address the challenge from different perspectives such as improving schema linking (Wang et al., 2020; Hui et al., 2021b; Qin et al., 2021; Wang et al., 2022a), data augmentation (Yu et al., 2021a; Wu et al., 2021), or exploiting history information (Hui et al., 2021a; Zheng et al., 2022). Shaw et al. (2021) combines a grammar-based approach with T5 (Raffel et al., 2020) to address the compositional generalization challenge. Deng et al. (2021) develop a structure-grounded
+
+pre-training framework for improving the model's robustness against natural language variations. Pi et al. (2022) build an adversarial training example generation framework to bring the model better robustness against table perturbations. However, the success of specialized architectures or training approaches on one challenge cannot easily transfer to others (Herzig et al., 2021; Furrer et al., 2020). Our TKK framework, for the first time, shows improvements in all the concerned challenging scenarios for text-to-SQL parsing.
+
+Our work is also related to the research of task decomposition in NLP (Gao et al., 2022; Nye et al., 2022; Wies et al., 2022; Wei et al., 2022; Wang et al., 2022c). For example, least-to-most prompting (Zhou et al., 2022), a method purely based on inference with a sufficiently large pre-trained language model, reduces a complex task into multiple subtasks and solves these subtasks sequentially. By comparison, TKK first learns to solve simpler subtasks and then the complex task. At inference time, the model directly tackles the complex task.
+
+# 7 Conclusion
+
+This paper proposes a general and effective TKK framework for text-to-SQL parsing, which has three stages: task decomposition, knowledge acquisition, and knowledge composition. TKK enhances the model's ability to acquire general SQL knowledge by dividing the learning process into multiple stages. Comprehensive evaluation on three levels of generalization, namely i.i.d., zero-shot, and compositional, and robustness demonstrates the effectiveness of our framework.
+
+# Limitations
+
+Although our TKK framework is conceptually simple, it needs to decompose the task into multiple subtasks manually. It is not difficult to decompose the text-to-SQL parsing task due to the simplicity of SQL syntax. However, decomposing the complex graph structure such as Abstract Meaning Representation (AMR) is not straightforward. Therefore, a general strategy to automatically discover the meaningful substructure of the original task is needed. With such a strategy, our framework can be extended to broader research areas as long as the task can be decomposed into meaningful subtasks. We aim to address this limitation in our future work.
+
+# References
+
+Ben Boin, Matt Gardner, and Jonathan Berant. 2019. Global reasoning over database structures for text-to-SQL parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3659-3664.
+Yitao Cai and Xiaojun Wan. 2020. IGSQL: Database schema interaction graph based neural model for context-dependent text-to-SQL generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6903-6912.
+Ruisheng Cao, Lu Chen, Zhi Chen, Yanbin Zhao, Su Zhu, and Kai Yu. 2021. LGESQL: Line graph enhanced text-to-SQL model with mixed local and non-local relations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2541-2555.
+DongHyun Choi, Myeong Cheol Shin, EungGyun Kim, and Dong Ryeol Shin. 2021. RYANSQL: Recursively applying sketch-based slot fillings for complex text-to-SQL in cross-domain databases. Computational Linguistics, 47(2):309-332.
+Xiang Deng, Ahmed Hassan Awadallah, Christopher Meek, Oleksandr Polozov, Huan Sun, and Matthew Richardson. 2021. Structure-grounded pretraining for text-to-SQL. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1337-1350.
+Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 33-43.
+Catherine Finegan-Dollak, Jonathan K. Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. 2018. Improving text-to-SQL evaluation methodology. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 351-360.
+Daniel Furrer, Marc van Zee, Nathan Scales, and Nathanael Scharli. 2020. Compositional generalization in semantic parsing: Pre-training vs. specialized architectures. arXiv preprint arXiv:2007.08970.
+Yujiang Gan, Xinyun Chen, Qiuping Huang, Matthew Purver, John R. Woodward, Jinxia Xie, and Pengsheng Huang. 2021a. Towards robustness of text-to-SQL models against synonym substitution. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language
+
+Processing (Volume 1: Long Papers), pages 2505-2515.
+Yujuan Gan, Xinyun Chen, Jinxia Xie, Matthew Purver, John R. Woodward, John Drake, and Qiaofu Zhang. 2021b. Natural SQL: Making SQL easier to infer from natural language specifications. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 2030–2042.
+Chang Gao, Wenxuan Zhang, and Wai Lam. 2022. UniGDD: A unified generative framework for goal-oriented document-grounded dialogue. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 599-605.
+Yu Gu, Sue Kase, Michelle Vanni, Brian Sadler, Percy Liang, Xifeng Yan, and Yu Su. 2021. Beyond i.i.d.: Three levels of generalization for question answering on knowledge bases. In Proceedings of the Web Conference 2021, page 3477-3488.
+Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, JianGuang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-SQL in cross-domain database with intermediate representation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4524-4535.
+Jonathan Herzig, Peter Shaw, Ming-Wei Chang, Kelvin Guu, Panupong Pasupat, and Yuan Zhang. 2021. Unlocking compositional generalization in pre-trained models using intermediate representations. arXiv preprint arXiv:2104.07478.
+Binyuan Hui, Ruiying Geng, Qiyu Ren, Binhua Li, Yongbin Li, Jian Sun, Fei Huang, Luo Si, Pengfei Zhu, and Xiaodan Zhu. 2021a. Dynamic hybrid relation exploration network for cross-domain context-dependent semantic parsing. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):13116-13124.
+Binyuan Hui, Ruiying Geng, Lihan Wang, Bowen Qin, Yanyang Li, Bowen Li, Jian Sun, and Yongbin Li. 2022. S²SQL: Injecting syntax to question-schema interaction graph encoder for text-to-SQL parsers. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1254-1262.
+Binyuan Hui, Xiang Shi, Ruiying Geng, Binhua Li, Yongbin Li, Jian Sun, and Xiaodan Zhu. 2021b. Improving text-to-sql with schema dependency learning. arXiv preprint arXiv:2103.04399.
+Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 963-973.
+Xi Victoria Lin, Richard Socher, and Caiming Xiong. 2020. Bridging textual and tabular data for cross-domain text-to-SQL semantic parsing. In *Findings*
+
+of the Association for Computational Linguistics: EMNLP 2020, pages 4870-4888.
+Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. 2022. Show your work: Scratchpads for intermediate computation with language models. In Deep Learning for Code Workshop.
+Xinyu Pi, Bing Wang, Yan Gao, Jiaqi Guo, Zhoujun Li, and Jian-Guang Lou. 2022. Towards robustness of text-to-SQL models against natural and realistic adversarial table perturbation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2007-2022.
+Bowen Qin, Binyuan Hui, Lihan Wang, Min Yang, Jinyang Li, Binhua Li, Ruiying Geng, Rongyu Cao, Jian Sun, Luo Si, et al. 2022a. A survey on text-to-sql parsing: Concepts, methods, and future directions. arXiv preprint arXiv:2208.13629.
+Bowen Qin, Lihan Wang, Binyuan Hui, Ruiying Geng, Zheng Cao, Min Yang, Jian Sun, and Yongbin Li. 2021. Sdcup: Schema dependency-enhanced curriculum pre-training for table semantic parsing. arXiv preprint arXiv:2111.09486.
+Bowen Qin, Lihan Wang, Binyuan Hui, Bowen Li, Xiangpeng Wei, Binhua Li, Fei Huang, Luo Si, Min Yang, and Yongbin Li. 2022b. SUN: Exploring intrinsic uncertainties in text-to-SQL parsers. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5298-5308.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
+Ohad Rubin and Jonathan Berant. 2021. SmBoP: Semi-autoregressive bottom-up semantic parsing. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 311-324.
+Torsten Scholak, Nathan Schucher, and Dzmitry Bahdanau. 2021. PICARD: Parsing incrementally for constrained auto-regressive decoding from language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9895-9901.
+Peter Shaw, Ming-Wei Chang, Panupong Pasupat, and Kristina Toutanova. 2021. Compositional generalization and natural language variation: Can a semantic parsing approach handle both? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint
+
+Conference on Natural Language Processing (Volume 1: Long Papers), pages 922-938.
+Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. In Proceedings of the 35th International Conference on Machine Learning, pages 4596-4604.
+Alane Suhr, Ming-Wei Chang, Peter Shaw, and Kenton Lee. 2020. Exploring unexplored generalization challenges for cross-database semantic parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8372-8388.
+Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020. RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7567-7578.
+Lihan Wang, Bowen Qin, Binyuan Hui, Bowen Li, Min Yang, Bailin Wang, Binhua Li, Jian Sun, Fei Huang, Luo Si, and Yongbin Li. 2022a. Proton: Probing schema linking information from pre-trained language models for text-to-sql parsing. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, page 1889-1898.
+Xuezhi Wang, Haohan Wang, and Diyi Yang. 2022b. Measure and improve robustness in NLP models: A survey. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4569-4586.
+Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2022c. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171.
+Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903.
+Noam Wies, Yoav Levine, and Amnon Shashua. 2022. Sub-task decomposition enables learning in sequence to sequence tasks. arXiv preprint arXiv:2204.02892.
+Kun Wu, Lijie Wang, Zhenghua Li, Ao Zhang, Xinyan Xiao, Hua Wu, Min Zhang, and Haifeng Wang. 2021. Data augmentation with hierarchical SQL-to-question generation for cross-domain text-to-SQL parsing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8974-8983.
+Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I Wang, et al. 2022. Unifiedsgk: Unifying and multi-tasking structured knowledge grounding with text-to-text language models. arXiv preprint arXiv:2201.05966.
+
+Navid Yaghmazadeh, Yuepeng Wang, Isil Dillig, and Thomas Dillig. 2017. Squizler: query synthesis from natural language. Proceedings of the ACM on Programming Languages, 1(OOPSLA):1-26.
+Tao Yu, Chien-Sheng Wu, Xi Victoria Lin, bailin wang, Yi Chern Tan, Xinyi Yang, Dragomir Radev, richard socher, and Caiming Xiong. 2021a. Gra{pp}a: Grammar-augmented pre-training for table semantic parsing. In International Conference on Learning Representations.
+Tao Yu, Michihiro Yasunaga, Kai Yang, Rui Zhang, Dongxu Wang, Zifan Li, and Dragomir Radev. 2018a. SyntaxSQLNet: Syntax tree networks for complex and cross-domain text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1653-1663.
+Tao Yu, Rui Zhang, Heyang Er, Suyi Li, Eric Xue, Bo Pang, Xi Victoria Lin, Yi Chern Tan, Tianze Shi, Zihan Li, Youxuan Jiang, Michihiro Yasunaga, Sungrok Shim, Tao Chen, Alexander Fabbri, Zifan Li, Luyao Chen, Yuwen Zhang, Shreya Dixit, Vincent Zhang, Caiming Xiong, Richard Socher, Walter Lasecki, and Dragomir Radev. 2019a. CoSQL: A conversational text-to-SQL challenge towards cross-domain natural language interfaces to databases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1962-1979.
+Tao Yu, Rui Zhang, Alex Polozov, Christopher Meek, and Ahmed Hassan Awadallah. 2021b. {SC}ore: Pretraining for context representation in conversational semantic parsing. In International Conference on Learning Representations.
+Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018b. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911-3921.
+Tao Yu, Rui Zhang, Michihiro Yasunaga, Yi Chern Tan, Xi Victoria Lin, Suyi Li, Heyang Er, Irene Li, Bo Pang, Tao Chen, Emily Ji, Shreya Dixit, David Proctor, Sungrok Shim, Jonathan Kraft, Vincent Zhang, Caiming Xiong, Richard Socher, and Dragomir Radev. 2019b. SParC: Cross-domain semantic parsing in context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4511-4523.
+John M Zelle and Raymond J Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the national conference on artificial intelligence, pages 1050-1055.
+Rui Zhang, Tao Yu, Heyang Er, Sungrok Shim, Eric Xue, Xi Victoria Lin, Tianze Shi, Caiming Xiong,
+
+Richard Socher, and Dragomir Radev. 2019. Editing-based SQL query generation for cross-domain context-dependent questions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5338-5349.
+
+Yanzhao Zheng, Haibin Wang, Baohua Dong, Xingjun Wang, and Changshan Li. 2022. HIE-SQL: History information enhanced network for context-dependent text-to-SQL semantic parsing. In *Findings of the Association for Computational Linguistics: ACL* 2022, pages 2997-3007.
+
+Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103.
+
+Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. 2022. Least-to-most prompting enables complex reasoning in large language models. arXiv preprint arXiv:2205.10625.
+
+# A Case Study
+
+Table 8 shows some real cases including questions and the SQL queries generated by T5-3B and TKK-3B. The first four cases are from the Spider dataset. It can be seen that TKK can produce correct SQL queries, while T5 makes mistakes on some clauses. The last two cases are from the Spider-Syn dataset. After synonym substitution, T5 is confused by synonyms and outputs some tables and columns that do not exist in the database, leading to incorrect SQL queries. By comparison, TKK can still identify correct tables and columns.
+
+
Spider
Question: What is the average GNP and total population in all nations whose government is US territory?T5-3B: SELECT avg(gnp), avg(population) FROM country WHERE governmentform = "US Territory"TKK-3B: SELECT avg(gnp), sum(population) FROM country WHERE governmentform = "US Territory"
Question: Which model of the car has the minimum horsepower?T5-3B: SELECT model FROM cars_data ORDER BY horsepower LIMIT 1TKK-3B: SELECT car_names.model FROM cars_data JOIN car_names on cars_data.id = car_names.makeid ORDER BY horsepower LIMIT 1
Question: What are the dog name, age and weight of the dogs that were abandoned? Note that 1 stands for yes, and 0 stands for no in the tables.T5-3B: SELECT name, age, weight FROM dogs WHERE abandoned_yn = 0TKK-3B: SELECT name, age, weight FROM dogs WHERE abandoned_yn = 1
Question: Return the different document ids along with the number of paragraphs corresponding to each, ordered by id.T5-3B: SELECT document_id, count(*) FROM paragraphs GROUP BY document_idORDER BY count(*)TKK-3B: SELECT document_id, count(*) FROM paragraphs GROUP BY document_idORDER BY document_id
Spider-Syn
Question-O: How many templates do we have?Question-S: How many layout do we have?T5-3B: SELECT count(*) FROM layoutTKK-3B: SELECT count(*) FROM templates
Question-O: What is the year that had the most concerts?Question-S: What is the time that had the most shows?T5-3B: SELECT time FROM concert GROUP BY time ORDER BY count(*) desc LIMIT 1TKK-3B: SELECT year FROM concert GROUP BY year ORDER BY count(*) desc LIMIT 1
+
+Table 8: Case study. Question-O is the question in the original Spider dataset. Question-S is the question in the Spider-Syn dataset, modified from Question-O using synonym substitution.
\ No newline at end of file
diff --git a/towardsgeneralizableandrobusttexttosqlparsing/images.zip b/towardsgeneralizableandrobusttexttosqlparsing/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..5457b38dada1cf6142c2cb619c6527003d42d0bf
--- /dev/null
+++ b/towardsgeneralizableandrobusttexttosqlparsing/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1ddd6bbd2fa7ac9d188efba96ad40e5d52bc9680810d19327e26f1f29d643587
+size 793053
diff --git a/towardsgeneralizableandrobusttexttosqlparsing/layout.json b/towardsgeneralizableandrobusttexttosqlparsing/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a9aa33c277f1a02ced272adb2fd767941f39d681
--- /dev/null
+++ b/towardsgeneralizableandrobusttexttosqlparsing/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:37a4de6075c9925476f93b84826cc28a628c57449cc52a1752549349750b8a48
+size 353987
diff --git a/towardsgeneralizedopeninformationextraction/09d0c58c-cda1-4b3e-a247-856d6ebbfc5b_content_list.json b/towardsgeneralizedopeninformationextraction/09d0c58c-cda1-4b3e-a247-856d6ebbfc5b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..99b74d2da01869b9096d7c793d45c5c745ac2fda
--- /dev/null
+++ b/towardsgeneralizedopeninformationextraction/09d0c58c-cda1-4b3e-a247-856d6ebbfc5b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4e87a81dbd03f37f59e8b049efa7b446f1d4369178b729008eb70e230ade2e2f
+size 106263
diff --git a/towardsgeneralizedopeninformationextraction/09d0c58c-cda1-4b3e-a247-856d6ebbfc5b_model.json b/towardsgeneralizedopeninformationextraction/09d0c58c-cda1-4b3e-a247-856d6ebbfc5b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..07c0fc473120a104d3c83bc29646b5295ee88c69
--- /dev/null
+++ b/towardsgeneralizedopeninformationextraction/09d0c58c-cda1-4b3e-a247-856d6ebbfc5b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c725a32eafd32ae49a8948b0e428190950611f58d5f7330f461595229b8b4024
+size 124555
diff --git a/towardsgeneralizedopeninformationextraction/09d0c58c-cda1-4b3e-a247-856d6ebbfc5b_origin.pdf b/towardsgeneralizedopeninformationextraction/09d0c58c-cda1-4b3e-a247-856d6ebbfc5b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..aa2dcb98ac2530df944723c2bac8cd27d4371fd2
--- /dev/null
+++ b/towardsgeneralizedopeninformationextraction/09d0c58c-cda1-4b3e-a247-856d6ebbfc5b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e66a52de3541d791ca631084884bf65347df896bd5078d2e57a5a604761d544c
+size 535068
diff --git a/towardsgeneralizedopeninformationextraction/full.md b/towardsgeneralizedopeninformationextraction/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..be54a07bb0d4680fce19a2784bbba8dfb5b140d5
--- /dev/null
+++ b/towardsgeneralizedopeninformationextraction/full.md
@@ -0,0 +1,355 @@
+# Towards Generalized Open Information Extraction
+
+Bowen Yu $^{1,2}$ , Zhenyu Zhang $^{2,3}$ , Jingyang Li $^{1}$ , Haiyang Yu $^{1}$ , Tingwen Liu $^{2,3*}$ , Jian Sun $^{1}$ , Yongbin Li $^{1*}$ , Bin Wang $^{4}$
+
+$^{1}$ DAMO Academy, Alibaba Group
+
+$^{2}$ Institute of Information Engineering, Chinese Academy of Sciences
+
+$^{2}$ School of Cyber Security, University of Chinese Academy of Sciences
+
+$^{3}$ Xiaomi AI Lab, Xiaomi Inc., Beijing, China
+
+{yubowen.ybw, qiwei.ljy,yifei.yhy, shuide.lyb}@alibaba-inc.com
+
+{zhangzhenyu1996, liutingwen}@iie.ac.cn wangbin11@xiaomi.com
+
+# Abstract
+
+Open Information Extraction (OpenIE) facilitates the open-domain discovery of textual facts. However, the prevailing solutions evaluate OpenIE models on in-domain test sets aside from the training corpus, which certainly violates the initial task principle of domain-independence. In this paper, we propose to advance OpenIE towards a more realistic scenario: generalizing over unseen target domains with different data distributions from the source training domains, termed Generalized OpenIE. For this purpose, we first introduce GLOBE, a large-scale human-annotated multi-domain OpenIE benchmark, to examine the robustness of recent OpenIE models to domain shifts, and the relative performance degradation of up to $70\%$ implies the challenges of generalized OpenIE. Then, we propose DragonIE, which explores a minimalist graph expression of textual fact: directed acyclic graph, to improve the OpenIE generalization. Extensive experiments demonstrate that DragonIE beats the previous methods in both in-domain and out-of-domain settings by as much as $6.0\%$ in F1 score absolutely, but there is still ample room for improvement.
+
+# 1 Introduction
+
+Open Information Extraction (OpenIE) aims to mine open-domain facts indicating a semantic relation between a predicate phrase and its arguments from plain text (Etzioni et al., 2008), without fixed relation vocabulary. OpenIE developments have been demonstrated to benefit various domains and applications, such as knowledge base population (Dong et al., 2014), question answering (Fader et al., 2014), and summarization (Fan et al., 2019)
+
+Recently, OpenIE has seen remarkable advances. Regarding different strategies for representing open fact, recent techniques with deep neural models can be subsumed under two categories, i.e., sequence-based and graph-based. Sequence-based models
+
+predict the facts one by one in an auto-regressive fashion with iterative labeling or generation framework (Cui et al., 2018; Sun et al., 2018; Kolluru et al., 2020a,b), which is the most classical solution in OpenIE. Graph-based method formulates OpenIE as a maximal clique discovery problem based on the span-level text graph (Yu et al., 2021), in which the edge between two spans is defined as the combination of their roles in corresponding fact. To the end, $O(m^2)$ edges of $O(r^2)$ types are constructed for a fact with $m$ spans of $r$ roles.
+
+Owning to the exquisite design, both sequence-based and graph-based models can identify complicated facts, thus constantly refreshing performance on benchmarks. Nonetheless, it is still unexplored whether these models are sufficient for true open-domain extraction. This doubt comes from that the training and test data in existing OpenIE benchmarks are generally independent and identically distributed, i.e., drawn from the same domain (Stanovsky et al., 2018; Sun et al., 2018; Gashteovski et al., 2019). However, this assumption does not hold in practice. Built on domain-independence (Niklaus et al., 2018), OpenIE models have to process diverse text, it is common to observe domain shifts among training and test data in applications. Therefore, the performance on indomain benchmarks may not exactly measure the generalization of out-of-domain extraction.
+
+Starting from this concern, we carry out extensive experiments to investigate whether state-of-the-art OpenIE models preserve good performance on unseen target domains. To provide a reliable benchmark, we publicize the first Generalized OpenIE dataset containing 110,122 open facts annotated humanly on 20,899 sentences collected from 6 completely different domains. We find out that, there are some noticeable semantic differences between open facts in different domains, posing challenges to the generalization of OpenIE models. Because of domain shifts, in sequence-based models, the
+
+accuracy in each step prediction declines significantly, and the early errors are magnified later. Similarly, in the graph-based model, the reduced edge prediction ability struggles to accurately connect $O(m^2)$ edges of $O(r^2)$ types especially when the span number $m$ and role number $r$ are both no small in complicated facts. As a result, their F1 scores degrade as much as $70\%$ relatively (from $43\%$ to $13\%$ ) when applied to unfamiliar domains, thus cannot work well in real-world extraction.
+
+The above observations demonstrate full-fledged open-domain extraction still has a long way to go, and suggest a way for a more generalized OpenIE model: we should reduce the extraction complexity to lower the potential risk of prediction errors in domain shifts. This is essentially the Occam's Razor principle (Rasmussen and Ghahramani, 2000): among all functions which fit the training data well, simpler functions are expected to generalize better. Therefore, we explore a minimalist expression of open fact: by sequentially connecting the boundary positions of all spans in the fact with their order in the text, each open fact can be simply modeled as a directed acyclic graph. Then OpenIE is equivalent to predicting the graph adjacency matrix and decoding facts from the directed graph. This idea leverages the sequential priors to reduce the complexity of function space (edge number and type) in the previous graph-based model from quadratic to linear, while avoiding auto-regressive extraction in sequence-based models, thus improving generalization. We implement it in DragonIE, a Directed acyclic graph based open Information Extractor.
+
+We perform extensive in-domain and out-of-domain experiments for OpenIE. On the previous commonly used in-domain evaluation, DragonIE outperforms the state-of-the-art method, with substantial gains of up to $3.6\%$ average F1 score, 3x speedup, and 5x convergence. Meantime, it reduces the number of edges by $66\%$ and the number of edge types by $88\%$ compared with the previous graph-based method. On our newly proposed out-of-domain benchmark, DragonIE further improves the performance gap to $6.0\%$ , and still exceeds the previous methods with only $10\%$ training data, showing better generalization. Detailed analysis shows that DragonIE can effectively represent overlapping, nested, discontinuous, and multiple facts despite its simplicity. We also perform a qualitative analysis that summarizes typical extraction errors and outlines the future directions.
+
+
Datasets
#Sents
#Facts
Human?
Shift?
OIE2016 (2016)
3,180
8,477
✘
✘
SAOKE (2018)
46,930
166,370
✓
✘
CaRB (2019)
1,282
5,263
✓
✘
OpenIE4 (2020b)
92,774
190,661
✘
✘
LSOIE-wiki (2021)
24,296
56,662
✘
✘
GLOBE (our)
20,899
110,122
✓
✓
+
+Table 1: Comparison of representative OpenIE datasets. Human means the dataset is human-annotated rather than model-derived or converted from other corpus. Shift denotes the dataset supports the evaluation of OpenIE generalization performance with domain shift.
+
+# 2 Pilot Experiment
+
+To quantitatively evaluate the robustness of OpenIE model against domain shifts, we first propose a standard evaluation setup for generalized OpenIE. Then, we conduct pilot experiments as well as empirical analyses in this section.
+
+# 2.1 Generalized OpenIE Evaluation Setup
+
+Given a sentence, OpenIE aims to output a set of facts in the form of (subject, predicate, object $_1$ , ..., object $_n$ ), and all of them are stated explicitly in the text (Yu et al., 2021). As shown in Table 1, Most existing OpenIE datasets assume that the training and test data are identically distributed without domain shift, which is certainly opposite to the task principle of domain independence. To address this issue, we present GLOBE, a GeneraLized OpenIE BEnchmark. Firstly, sentences in GLOBE are collected from six distinct data sources, including insurance, education, finance, government, medicine, and news, which distinguishes GLOBE from existing datasets. Then, GLOBE is annotated following the guidelines of SAOKE (Sun et al., 2018), the largest human-annotated OpenIE dataset collected from Baidu Encyclopedia. Thus they can combine to produce a complete training-test evaluation setup, comprehensively evaluating generalized OpenIE. Specifically, the models are first trained on the SAOKE training set, and then the model with the best performance on the SAOKE dev set is selected to output results on GLOBE. The annotation details and descriptive statistics of GLOBE are presented in Appendix A.
+
+# 2.2 Result Analysis
+
+We select the best-performing sequence model IGL-OIE (Kolluru et al., 2020a), and graph model MacroIE (Yu et al., 2021), for our pilot experiments. The evaluation metric is gestalt F1 score (Yu et al.,
+
+
+Figure 1: Gestalt F1 score comparison on six out-of-domain test sets and the original in-domain test set.
+
+2021). Note that there are ore datasets and metrics in the main experiments (Section 4).
+
+Figure 1 shows a detailed comparison across different domains and models on GLOBE. From the results we can see that: compared with the performances on SAOKE under in-domain setting, both the sequence-based and graph-based models encounter great performance drops on out-of-domain GLOBE, with a relative decline of $35\% -70\%$ in F1 score. This indicates that the robustness of OpenIE model may be challenged in cross-domain generalization. Intuitively, there are obvious differences in the topic and style of texts in different domains. For example, in the medical domain, subject and object are usually rare biological terminology, which is less covered in the limited general-domain training data. Such a semantic shift degrades the prediction ability of the model fitted to the training set.
+
+Exacerbating this issue further, modern OpenIE models often contain multiple prediction steps. Under domain shifts, every step is likely to go wrong, resulting in a collapse in the overall performance. Specifically, sequence-based models predict facts auto-regressively, an mispredicted fact will directly affect the extraction of all the following facts. The graph-based model requires $O(m^2)$ edges of $O(r^2)$ types for a fact with $m$ spans of $r$ roles. In GLOBE, the built graph contains an average of 28.5 edges with a total of 176 edge types for each open fact, and the wrong prediction of any edge may lead to the overall failure. Thus, these methods are vulnerable to out-of-domain generalization.
+
+# 3 Methodology
+
+From the above observations, we know that recent OpenIE models are too complex to generalize. In this section, we propose a simplified expression of open fact: directed acyclic graph. We start with the motivation of our new graph structure, then go through the implementation details.
+
+
+John is the premier and first minister of British Columbia.
+(a) undirected maximal clique
+Figure 2: An example of representing open facts as an undirected maximal clique or a directed acyclic graph.
+
+
+(b) directed acyclic graph
+
+# 3.1 Motivation
+
+How to properly model open fact is the most important problem in OpenIE system design. The previous graph-based model treats spans belonging to one open fact as an undirected clique such that spans are pairwise connected with a combination of their roles as the edge type. Whereas, as shown in Figure 2, there is actually a natural reading order from left to right between spans in the text. Such sequential prior means we can simply connect the edges between adjacent spans in the text to determine open facts. In this way, the model no longer has to identify the pairwise relation between each span pair, which lessens the learning burden by reducing the edge numbers from $O(m^2)$ to $O(m)$ . Moreover, benefiting from the directed edge, we can assign the role of one connected span as the edge type, and recursively obtain the roles of all spans, thus greatly simplifying the edge type space from $O(r^2)$ to $O(r)$ . Meanwhile, the edges can be predicted in parallel, thus solving the cascade error in previous auto-regressive models.
+
+# 3.2 Directed Acyclic Graph
+
+The above operation actually converts each input text to a directed acyclic graph (DAG). In graph theory, a DAG consists of vertices and edges, with each edge directed from one vertex to another, such that following those directions will never form a closed loop. DAG can be topologically ordered, by arranging the vertices as a linear ordering with the edge directions. This feature is consistent with what we want to combine span in the order that it appears in the text. If we treat each continuous span involved in one fact asserted by the input text as a vertex in DAG, and connect oriented edges, from one vertex to another one that later appears in the text and belongs to the same fact. Then in the simple case shown in Figure 2, each directed path from root to leaf vertex represents an open fact.
+
+
+Figure 3: An overview of DargonIE. When building DAG, it enumerates each word pair and predict their edges. Thus, for spans with a single word, such as $As$ , there will be two vertexes refer to the beginning and ending words.
+
+Unfortunately, such an elegant paradigm is not suitable for all scenarios. When dealing with some complex cases like Figure 3, it encounters the following challenges: (1) The granularity of text is word, while the granularity of open fact is span, so it is necessary to predict not only the relations between spans but also what is a span in the fact; (2) Different spans may be overlapping and share some words, as the span of America is enclosed in another span leadership of America in the case of Figure 3. (3) Different facts may be overlapping and share some fact elements (either subject, predicate or object). For example, Biden acts as the subject in all the three facts and is not the root vertex. Therefore, we cannot simply assume that each path in the DAG represents an open fact.
+
+DAG Construction. These challenges prompt us to design the following three types of edges to avoid ambiguous extraction: (1) intra-span edge: it connects the beginning and ending words of a span with a I tag. (2) inter-span edge: it connects the Ending word of a span and the Beginning/Ending word of the next span in the fact with a EB-X/EE tag, respectively, where X represents the role of the next span. Intuitively, each span can be uniquely identified by its two boundary words, and the double inter-span edge design helps distinguish overlapping spans. If we only connect the ending words of two spans, such as the and America, we cannot determine whether the subsequent span of the is leadership of America or of America, because they have the same ending word, and it is the same with just using the EB-X tag. (3) intra-fact edge: it connects the Beginning word of the first span and the Ending word of the last span in a fact with a BE-X tag to delimit the boundary of a fact. In this way, even for overlapping facts, we can accurately judge the range of each fact within DAG. Because only the role of the subsequent span is indicated in the inter-span edge, the role of the first span in the fact
+
+is unknown, so we specify it in $\mathrm{BE - X}$
+
+DAG Decoding. With the edge definition above, we first find all $\mathrm{BE - X}$ edges to determine the beginning and ending words of target facts, and then traverse all paths between them, in which each path represents a fact. During decoding each path, all the I edges are utilized to determine the spans in the path, then we can judge the role of each span according to the $\mathrm{EB - X}$ edge and distinguish overlapping spans with the EE edge. Finally, spans in each path are combined according to their roles to output structured facts. Besides, DAG can naturally identify discontinuous facts, where each element in open fact may contain multiple spans. we can splice the spans of the same role in the order of the text to get the discontinuous element. In Section 5.2, we empirically conclude that our constructed DAG has been a minimalist expression of open fact: arbitrarily removing any edge will reduce the representation ability. The Occam's Razor principle has stated that among all functions that have a good training set fit, the simplest one is likely to generalize better. Thus DAG is expected to have great generalization in OpenIE.
+
+# 3.3 Architecture
+
+Therefore, OpenIE is transformed into how to build a desired DAG. To this end, we propose DragonIE, a Directed acyclic graph based open Information Extractor. Intuitively, the edges defined in DAG depict the relation between words in the text, so DragonIE enumerates all word pairs and makes parallel prediction1:
+
+$$
+\mathbf {h} _ {1}, \dots , \mathbf {h} _ {n} = \operatorname {E n c o d e r} \left(w _ {1}, \dots , w _ {n}\right), \tag {1}
+$$
+
+$$
+\mathbf {s} _ {i, j} = \mathbf {h} _ {i} ^ {\top} \mathbf {U h} _ {j} + \mathbf {W} [ \mathbf {h} _ {i}; \mathbf {h} _ {j} ] + \mathbf {b}, \tag {2}
+$$
+
+$$
+\mathbf {p} _ {i, j} = \operatorname {S i g m o i d} \left(\mathbf {s} _ {i, j}\right). \tag {3}
+$$
+
+It first maps each word $w_{i}$ into a $d$ -dimensional contextual vector $\mathbf{h}_{\mathrm{i}} \in \mathbf{R}^{d}$ with a basic encoder such as BERT (Devlin et al., 2019). Then each $(\mathbf{h}_{i}, \mathbf{h}_{j})$ is fed to a pairwise score function, followed by a Sigmoid layer to yield the probability of each edge type $\mathbf{p}_{i,j} \in \mathbf{R}^{c}$ (Wang et al., 2020, 2021). During training, we optimize the parameters $\theta$ of DragonIE to minimize the cross-entropy loss:
+
+$$
+\begin{array}{l} J (\theta) = - \sum_ {i = 1} ^ {n} \sum_ {j = i} ^ {n} \sum_ {k = 1} ^ {c} \left(\mathbf {y} _ {i, j} [ k ] \log \left(\mathbf {p} _ {i, j} [ k ]\right) \right. \\ + (1 - \mathbf {y} _ {i, j} [ k ]) \log (1 - \mathbf {p} _ {i, j} [ k ])), \tag {4} \\ \end{array}
+$$
+
+where $\mathbf{p}_{i,j}[k] \in [0,1]$ is the predicted probability of $(w_i, w_j)$ along the $k$ -th edge type, and $\mathbf{y}_{i,j}[k] \in \{0,1\}$ is ground truth. At inference, a threshold $\delta$ tuned on the dev set is applied to filter low confidence prediction and get the final edge labels.
+
+# 4 Experimental Setup
+
+# 4.1 Datasets
+
+In our experiments, we evaluate the models on three datasets. (1) SAOKE (Sun et al., 2018) is the largest human-annotated OpenIE dataset annotated from Baidu Encyclopedia. It contains 20k samples for training, 2k for validation, and 2k for testing. Their division is independent and identically distributed so that SAOKE can be used as the standard dataset under the in-domain setting. (2) GLOBE is the largest multi-domain OpenIE test set proposed in Section 2.1. It follows the annotation scheme of SAOKE, but the domains are different, so it can effectively verify the performance of OpenIE models under the out-of-domain setting. (3) CarB (Bhardwaj et al., 2019) is the first crowdsourced OpenIE dataset containing 1,282 sentences. Recently, it is widely used in testing models trained on OpenIE4 (Kolluru et al., 2020b). However, OpenIE4 is automatically-derived with great data noise, and the annotation scheme is inconsistent with CarB, so the results on CarB are relatively unreliable.
+
+# 4.2 Implementation Details
+
+We implement DragonIE by initializing the encoder parameters from BERT for English (Devlin et al., 2019) and Chinese (Cui et al., 2020). DragonIE is optimized by BertAdam with a maximum sequence length of 200, an epoch number of 30, and a learning rate of 1e-5. The threshold $\delta$ is selected from [0.2, 0.4]. We select the model with best performance on validation set to output results on test
+
+set. Hyper-parameters are selected based on the validation set, and all experiments are conducted on a single Tesla V100 GPU.
+
+# 4.3 Baselines and Evaluation metrics
+
+We employ recent neural models as strong baselines: sequential labeling (IGL-OIE (Kolluru et al., 2020a)), sequential generation (IMoJIE (Kolluru et al., 2020b)), and graph-based (MacroIE (Yu et al., 2021)) models. Following the convention (Yu et al., 2021), we evaluate the performance with three most widely adopted metrics: CaRB-single (Kolluru et al., 2020a), CaRB-multi (Bhardwaj et al., 2019) and Gestalt (Sun et al., 2018). Each criterion produces three values: F1 score, the area under P-R curve (AUC), and the point in the P-R curve corresponding to the optimal F1 (Opt. F1).
+
+# 5 Experimental Results
+
+Our experiments aim to answer three questions:
+
+Q1 How does DragonIE compare to other methods in both in-domain and out-of-domain settings?
+Q2 Does DragonIE effectively handle complex extraction scenarios despite its simplicity?
+Q3 What causes the performance gap between out-of-domain and in-domain OpenIE?
+
+# 5.1 Overall Performance (Q1)
+
+Table 2-4 report the results of different models on all three datasets. We can see that DragonIE establishes a new state-of-the-art for this task, and the improvement is statistically significant on the $5\%$ level for all datasets. On the standard in-domain OpenIE benchmark SAOKE, DragonIE improves upon the previous best-performing model MacroIE in F1 score by absolute margins of 3.8, 3.9, and 3.3 points in CaRB-single, CaRB-multi, and Gestalt, respectively. We use the models trained on SAOKE to get predictions on the out-of-domain benchmark GLOBE. DragonIE constantly achieves better results than existing methods, and the absolute gains are more impressive compared with the in-domain setting: from 3.6 to 6.0 F1 points on average, although there is still ample room for improvement (we will discuss it in Section 5.3). The detailed comparison results in each domain of GLOBE are reported in Appendix B.2. Even for CaRB has much noise in the training data, our method still improves all evaluation metrics. These observations verify that DragonIE has the flexibility to fact extraction, generalization to domain shift, and
+
+
Model ↓ - Metric →
CaRB-single
CaRB-multi
Gestalt
F1
AUC
Opt.F1
F1
AUC
Opt.F1
F1
AUC
Opt.F1
IMoJIE (Kolluru et al., 2020b)
36.6
22.6
37.0
38.7
25.4
39.5
36.4
22.5
37.3
IGL-OIE (Kolluru et al., 2020a)
37.6
22.8
38.4
39.3
25.5
40.6
37.1
23.6
38.4
MacroIE (Yu et al., 2021)
41.2
24.5
41.5
42.7
27.8
43.7
42.8
27.2
43.7
DragonIE (ours)
45.0
29.0
45.1
46.6
31.3
46.7
46.1
30.1
46.1
+
+Table 2: In-domain Evaluation: Main results on the in-domain benchmark SAOKE.
+
+
Model ↓ - Metric →
CaRB-single
CaRB-multi
Gestalt
F1
AUC
Opt.F1
F1
AUC
Opt.F1
F1
AUC
Opt.F1
IGL-OIE (Kolluru et al., 2020a)
24.9
10.5
25.1
27.5
10.5
27.7
21.1
8.2
21.7
MacroIE (Yu et al., 2021)
25.5
10.0
25.6
27.1
11.4
27.2
22.4
7.5
22.5
DragonIE (ours)
30.9
15.1
31.0
33.3
17.5
33.5
28.6
13.1
28.7
+
+Table 3: Out-of-domain Evaluation: Main results on the out-of-domain benchmark GLOBE.
+
+
Model ↓ - Metric →
CaRB-single
CaRB-multi
Gestalt
F1
AUC
Opt.F1
F1
AUC
Opt.F1
F1
AUC
Opt.F1
IGL-OIE (Kolluru et al., 2020a)
41.0
22.9
41.1
52.2
33.7
52.4
10.1
5.4
9.7
MacroIE (Yu et al., 2021)
43.5
25.0
43.8
54.8
36.3
55.1
12.9
6.0
13.1
DragonIE (ours)
43.9
25.3
44.1
55.1
36.4
55.1
13.6
6.3
13.7
+
+great robustness to data noise. We believe this is because DragonIE explores a more concise and efficient OpenIE formulation, which avoids autoregressive prediction in previous sequence-based models, and simplifies the complexity of open fact in the graph-based model. In practice, to meet the complex extraction requirements, the maximal clique built by MacroIE for each open fact in SAOKE and GLOBE contains an average of 28.5 edges, with a total of 176 edge types, while DragonIE has an average of only 9.6 edges under 21 types. We provide a detailed edge space comparison in Appendix B.4. The simpler, the more essential, and the more effective.
+
+Another advantage of simpler design is faster convergence and inference speed. As shown in Table 5, with the same hyper-parameters, DragonIE achieves the best results in 4 epochs, while MacroIE requires 20 epochs to reach the peak. Moreover, DragonIE accelerates the testing time by 3 times. While the decoding of MacroIE needs a time-consuming maximal clique discovery algorithm like Bron-Kerbosch (Bron and Kerbosch, 1973), whose time complexity is $\mathrm{O}(3^{n/3})$ for an $n$ -vertex graph. DragonIE avoids this issue, thus obtaining large speed improvement.
+
+Table 4: Out-of-domain Evaluation: Main results on CaRB. The models are trained on the noisy OpenIE4 dataset.
+
+
MacroIE
DragonIE
Speedup
Convergence (epoch)
4
20
5x
Testing (second)
136
409
3x
+
+Table 5: Comparison in convergence and testing time on SAOKE, measured in epochs and seconds respectively.
+
+# 5.2 Detailed Analysis (Q2)
+
+A potential concern is whether the better generalization of the simple DAG-based OpenIE formulation is at the expense of extracting complex facts, as simplicity usually leads to a reduction in representation capability. To answer this question, we perform a fine-grained evaluation on GLOBE. (1) We select the sentences containing discontinuous or overlapping or nested facts from GLOBE to form three complex test sets. Here discontinuous means that at least one fact element in the sentence is not a continuous span, overlapping means that multiple facts in the sentence share at least one element, while nested means that different elements share some common spans. These three patterns are the most common complex facts in OpenIE, and their distribution is detailed in Appendix A.1. (2) We validate DragonIE's capability in extracting different numbers of open facts by splitting the sentences into five classes according to the fact count. (3) We
+
+
+(a) Complicated Extraction
+
+
+(b) Multiple Extraction
+Figure 4: Gestalt F1 scores on (a) complicated extraction, (b) multiple extraction, and (c) low-resource extraction. All the analyses are conducted on GLOBE. We also report the comparison results on SAOKE in Appendix B.3.
+
+
+(c) Low-resource Extraction
+
+
Model ↓ - Benchmark →
GLOBE
SAOKE
DragonIE
28.6
45.8
- inner-span edge EE
26.1
45.1
- inter-fact edge BE-X
25.4
42.0
- Next span role labeling
25.8
44.6
+
+conduct low-resource experiments on five different partitions of the original SAOKE training sets (1/10/30/50/70%). As presented in Figure 4, DragonIE attains consistent gains in all classes across three settings, indicating that our model is more suitable for complicated scenarios than the baselines. It is worth noting that when using $1\%$ of the training data, only DragonIE achieves a non-zero F1 score, and using $10\%$ of the training data can surpass the performance of baselines under the full data, indicating better generalization.
+
+In addition, we conduct a set of ablation tests on the graph to verify that our DAG is already a minimalist expression of open fact. Table 6 shows that: (i) when only connecting the ending word of one span and the beginning word of the next span (EB-X) and removing the edge connected with the ending word of the next span (EE), the F1 score drops by $1.6\%$ in average since it cannot accurately represent nested facts, as demonstrated in Section 3.2; (II) Removing the intra-fact edge and treating each path from the root vertex to the leaf vertex on the DAG as a fact hurts the results by 3.5 F1 pts in average, which is difficult to extract overlapping facts; (III) Marking the role of the next span on edge instead of the combination of two-span roles brings a remarkable improvement $(2.0\%)$ averagely, since it effectively compresses the edge type space from $O(r^2)$ to $O(r)$ . Note that the intra-span edges cannot be ablated because they recognize spans. On the whole, each edge in our built DAG is indispensable.
+
+Table 6: Ablation study of DragonIE. Numbers denote the corresponding Gestalt F1 scores.
+
+
Error ↓ - Benchmark →
GLOBE
SAOKE
Wrong Boundary
5
4
Wrong Extraction
5
7
Uninformative Extraction
13
10
Incomplete Extraction
12
2
Missing Extraction
26
17
+
+Table 7: Error analysis of DragonIE. We report the number of false facts belonging to five major error classes on the analysis set (containing 100 gold facts) of in-domain and out-of-domain benchmarks.
+
+# 5.3 Qualitative Evaluation (Q3)
+
+Although DragonIE achieves state-of-the-art results in all the benchmarks, there are still substantial differences between the out-of-domain and in-domain performance. We compare the mistakes made by DragonIE with two analysis sets that sample from the test set of GLOBE and SAOKE, respectively, and summarize the error types. The sampling strategy requires that the sentences in the analysis set contain 100 gold open facts. Table 7 reports five major error classes and the number of corresponding false facts on the two benchmarks.
+
+Wrong Boundary is a too large or too small boundary for an element in an open fact. Wrong Extraction describes an open fact that does not hold in the original sentence. They are the least common error types in both settings, showing that our model can identify the correct span and fact across domains. It would be interesting to see if introducing causal inference (Nan et al., 2021), or mutual information maximization (Zhang et al., 2020) to strengthen the correlation between facts and sentences, can improve the performance. Uninformative Extraction is widely present in the output of various domains, it usually does not provide information gain. We think a promising improvement direction is applying an additional post-processing model to judge the informativeness of each open fact. Incomplete Extraction omits critical information resulting in unclear fact seman
+
+tics. Missing Extraction is an outcome where the model fails to predict the open fact. According to statistics from Table 7, these two types of errors are the root cause of the performance gap between in-domain and out-of-domain settings. We believe the following research directions are worth following for them: (1) Pre-training models on a massive corpus with OpenIE-oriented self-supervised tasks to sufficiently capture domain-robust OpenIE exclusive features (Lu et al., 2022); (2) Leveraging the domain generalization techniques to learn the invariances across domains, i.g., meta learning (Li et al., 2018a; Geng et al., 2019; Zhao et al., 2022), adversarial learning (Li et al., 2018b), and contrastive learning (Kim et al., 2021).
+
+# 6 Related Work
+
+OpenIE. From rule-based systems and statistical methods (Fader et al., 2011; Corro and Gemulla, 2013; Gashteovski et al., 2017), to neural models (Cui et al., 2018; Stanovsky et al., 2018; Roy et al., 2019), OpenIE research has experienced three technological evolutions in the past decade. Each evolution brings a more expressive architecture, and meantime requiring much more training data. To this day, the best-performing OpenIE model either predicts open facts in the sentence auto-regressively (Kolluru et al., 2020a,b), or represents each open fact as a maximal clique on the graph with quadratic edge numbers and types (Yu et al., 2021). Such trends pose two potentially challenges: (1) The popular evaluation protocol mainly operates with the i.i.d. assumption, i.e., the training domain is the same as the test domain (Stanovsky et al., 2018; Sun et al., 2018; Gashteovski et al., 2019; Yu et al., 2020; Zhang et al., 2022), which is contrary to the domain-independent discovery objective of OpenIE (Niklaus et al., 2018). Although the existing studies have achieved surprising performance under i.i.d. evaluation, their generalization for true open extraction has not been evaluated. Some works try to use OpenIE4 (Kolluru et al., 2020b) to train the model and verify it on CarB (Bhardwaj et al., 2019), but the noise annotation of OpenIE4 and the different annotation standards of the two datasets make the evaluation results unreliable. (2) As revealed by our preliminary experiments, recent OpenIE models always encounter great performance drops in the out-of-domain setting. Their complex auto-regressive prediction process and graph structure may overfit
+
+the training data specifics, resulting in unsatisfactory cross-domain generalization. In this paper, we present the first systematic study to examine how robust OpenIE methods are when trained and tested on different datasets (domains), and further propose a minimalist expression of open fact to implicitly improve the generalization behavior.
+
+Domain Generalization. The main goal of domain generalization is to learn a domain-invariant representation from multiple source domains so that a model can generalize well across unseen target domains (Kim et al., 2021; Mi et al., 2021). Recent advances mainly focus on three aspects: data augmentation, model design, and robust training. Augmenting the dataset with transformations such as mix-up (Zhang et al., 2021) improves generalization (Pandey et al., 2021). A simplified model design mines the task essence to resist domain shifts (Ghosh and Motani, 2021). Robust training methods hope to optimize a shared feature space,i.e, by minimizing maximum mean discrepancy (Tzeng et al., 2014), transformed feature distribution distance (Muandet et al., 2013), or covariances (Sun and Saenko, 2016). This paper primarily explores generalized OpenIE from the perspective of model design. How to combine data augmentation and robust training to further improve the generalization will be our future work.
+
+# 7 Conclusion
+
+In this paper, we lay out and study generalized OpenIE for the first time. We release GLOBE, a large-scale, high-quality, multi-domain benchmark with 110,122 open facts, to evaluate the generalization of OpenIE models. Furthermore, we explore the minimalist graph expression of open fact: directed acyclic graph, to reduce the extraction complexity and improve the generalization behavior. Experimental results show that our proposed method outperforms state-of-the-art baselines in both indomain and out-of-domain settings. This work is a starting point towards building more practical OpenIE models with stronger generalization, and we also present fine-grained analyses which point out promising avenues for further improvement.
+
+# 8 Limitations
+
+While this work has made some progress towards generalized OpenIE, it still has some limitations. First, to produce a complete training-test evaluation setup with the largest human-annotated OpenIE
+
+dataset SAOKE, our annotated GLOBE benchmark is in Chinese. We speculate that the same conclusions can be observed in other languages, and leave this for future work. Second, although the proposed DragonIE method greatly exceeds the baselines, there is still a significant performance degradation under the out-of-domain setting compared with the in-domain setting. We will continue to work to narrow the performance gap.
+
+# References
+
+Sangnie Bhardwaj, Samarth Aggarwal, and Mausam Mausam. 2019. CaRB: A crowdsourced benchmark for open IE. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6262-6267, Hong Kong, China. Association for Computational Linguistics.
+Coen Bron and Joep Kerbosch. 1973. Algorithm 457: finding all cliques of an undirected graph. Communications of the ACM.
+Luciano Del Corro and Rainer Gemulla. 2013. Clausie: clause-based open information extraction. In 22nd International World Wide Web Conference, WWW '13, Rio de Janeiro, Brazil, May 13-17, 2013, pages 355-366. International World Wide Web Conferences Steering Committee / ACM.
+Lei Cui, Furu Wei, and Ming Zhou. 2018. Neural open information extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 407-413, Melbourne, Australia. Association for Computational Linguistics.
+Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting pre-trained models for Chinese natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 657-668, Online. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: a web-scale approach to probabilistic knowledge fusion. In *The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, KDD '14, New York, NY, USA - August 24 - 27, 2014, pages 601-610. ACM.
+
+Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S Weld. 2008. Open information extraction from the web. Communications of the ACM, 51(12):68-74.
+Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1535-1545, Edinburgh, Scotland, UK. Association for Computational Linguistics.
+Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14, New York, NY, USA - August 24 - 27, 2014, pages 1156-1165. ACM.
+Angela Fan, Claire Gardent, Chloe Braud, and Antoine Bordes. 2019. Using local knowledge graph construction to scale Seq2Seq models to multi-document inputs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4186-4196, Hong Kong, China. Association for Computational Linguistics.
+Kiril Gashteovski, Rainer Gemulla, and Luciano del Corro. 2017. MinIE: Minimizing facts in open information extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2630-2640, Denmark. Association for Computational Linguistics.
+Kiril Gashteovski, Sebastian Wanner, Sven Hertling, Samuel Broscheit, and Rainer Gemulla. 2019. Opiek: An open information extraction corpus. In *AKBC*.
+Ruiying Geng, Binhua Li, Yongbin Li, Xiaodan Zhu, Ping Jian, and Jian Sun. 2019. Induction networks for few-shot text classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3904-3913.
+Rohan Ghosh and Mehul Motani. 2021. Network-to-network regularization: Enforcing occam's razor to improve generalization. Advances in Neural Information Processing Systems, 34.
+Daehee Kim, Youngjun Yoo, Seunghyun Park, Jinkyu Kim, and Jaekoo Lee. 2021. Selfreg: Self-supervised contrastive regularization for domain generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9619-9628.
+Keshav Kolluru, Vaibhav Adlakha, Samarth Aggarwal, Mausam, and Soumen Chakrabarti. 2020a. OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open Information Extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3748-3761, Online. Association for Computational Linguistics.
+
+Keshav Kolluru, Samarth Aggarwal, Vipul Rathore, Mausam, and Soumen Chakrabarti. 2020b. IMoJIE: Iterative memory-based joint open information extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5871-5886, Online. Association for Computational Linguistics.
+Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M. Hospedales. 2018a. Learning to generalize: Meta-learning for domain generalization. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, Louisiana, USA, February 2-7, 2018, pages 3490-3497. AAAI Press.
+Ya Li, Xinmei Tian, Mingming Gong, Yajing Liu, Tongliang Liu, Kun Zhang, and Dacheng Tao. 2018b. Deep domain generalization via conditional invariant adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 624-639.
+Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han, Le Sun, and Hua Wu. 2022. Unified structure generation for universal information extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, pages 5755-5772.
+Haitao Mi, Qiyu Ren, Yinpei Dai, Yifan He, Jian Sun, Yongbin Li, Jing Zheng, and Peng Xu. 2021. Towards generalized models for beyond domain api task-oriented dialogue. In AAAI-21 DSTC9 Workshop.
+Krikamol Muandet, David Balduzzi, and Bernhard Schölkopf. 2013. Domain generalization via invariant feature representation. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, volume 28 of JMLR Workshop and Conference Proceedings, pages 10-18. JMLR.org.
+Guoshun Nan, Jiaqi Zeng, Rui Qiao, Zhijiang Guo, and Wei Lu. 2021. Uncovering main causalities for long-tailed information extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9683-9695, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Christina Niklaus, Matthias Cetto, Andre Freitas, and Siegfried Handschuh. 2018. A survey on open information extraction. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3866-3878, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
+Prashant Pandey, Mrigank Raman, Sumanth Varambally, and Prathosh Ap. 2021. Generalization on unseen domains via inference-time label-preserving target projections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12924-12933.
+
+Carl Edward Rasmussen and Zoubin Ghahramani. 2000. Occam's razor. In Advances in Neural Information Processing Systems 13, Papers from Neural Information Processing Systems (NIPS) 2000, Denver, CO, USA, pages 294-300. MIT Press.
+Arpita Roy, Youngja Park, Taesung Lee, and Shimei Pan. 2019. Supervising unsupervised open information extraction models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 728-737, Hong Kong, China. Association for Computational Linguistics.
+Jacob Solawetz and Stefan Larson. 2021. LSOIE: A large-scale dataset for supervised open information extraction. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2595-2600, Online. Association for Computational Linguistics.
+Gabriel Stanovsky and Ido Dagan. 2016. Creating a large benchmark for open information extraction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2300-2305, Austin, Texas. Association for Computational Linguistics.
+Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, and Ido Dagan. 2018. Supervised open information extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies., pages 885-895, New Orleans, Louisiana. Association for Computational Linguistics.
+Baochen Sun and Kate Saenko. 2016. Deep coral: Correlation alignment for deep domain adaptation. In European conference on computer vision, pages 443-450. Springer.
+Mingming Sun, Xu Li, Xin Wang, Miao Fan, Yue Feng, and Ping Li. 2018. Logician: A unified end-to-end neural approach for open-domain information extraction. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, WSDM 2018, Marina Del Rey, CA, USA, February 5-9, 2018, pages 556-564. ACM.
+Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. 2014. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474.
+Yucheng Wang, Bowen Yu, Yueyang Zhang, Tingwen Liu, Hongsong Zhu, and Limin Sun. 2020. Tplinker: Single-stage joint extraction of entities and relations through token pair linking. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1572-1582.
+Yucheng Wang, Bowen Yu, Hongsong Zhu, Tingwen Liu, Nan Yu, and Limin Sun. 2021. Discontinuous named entity recognition as maximal clique discovery. In Proceedings of the 59th Annual Meeting of the
+
+Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 764-774.
+Bowen Yu, Yucheng Wang, Tingwen Liu, Hongsong Zhu, Limin Sun, and Bin Wang. 2021. Maximal clique based non-autoregressive open information extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9696-9706, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Bowen Yu, Zhenyu Zhang, Xiaobo Shu, Tingwen Liu, Yubin Wang, Bin Wang, and Sujian Li. 2020. Joint extraction of entities and relations based on a novel decomposition strategy. In Proceedings of ECAI, pages 2282-2289. IOS Press.
+Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, and James Zou. 2021. How does mixup help with robustness and generalization? In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
+Yan Zhang, Ruidan He, Zuozhu Liu, Kwan Hui Lim, and Lidong Bing. 2020. An unsupervised sentence embedding method by mutual information maximization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 1601-1610, Online. Association for Computational Linguistics.
+Zhenyu Zhang, Bowen Yu, Haiyang Yu, Tingwen Liu, Cheng Fu, Jingyang Li, Chengguang Tang, Jian Sun, and Yongbin Li. 2022. Layout-aware information extraction for document-grounded dialogue: Dataset, method and demonstration. In Proceedings of the 30th ACM International Conference on Multimedia, pages 7252-7260.
+Yingxiu Zhao, Zhiliang Tian, Huaxiu Yao, Yinhe Zheng, Dongkyu Lee, Yiping Song, Jian Sun, and Nevin Zhang. 2022. Improving meta-learning for low-resource text classification and generation via memory imitation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 583-595.
+
+# A GLOBE Dataset
+
+# A.1 Dataset Construction
+
+To build GLOBE, we select six distinct data sources for human annotation: (1) Insurance, we use 保险条款 (insurance policy) as the query, and retrieve relevant pdf documents in Baidu search engine² as the data source of the insurance domain; (2) Education, we select the pages under the education topic in the Wikipedia classification index³ as the data source of the education domain; (3) Finance, we crawl public financial reports, including the stock market, business, investment, and other topics, as the data source of the finance domain; (4) Government, we download official documents issued by government departments from the policy document library of the State Council⁴ as the data source of the government domain; (5) Medicine, we leverage the medical entity dictionary as a set of queries, and searched relevant texts in medical forums⁵ and online treatment manuals⁶ as the data source of the medicine domain; (6) News, we crawl news under the international news section of the Chinese News Service⁷ as the data source of the news domain. We used PDFPlumber⁸ to extract text from PDF documents, and used goose³⁹ to extract the text of web pages.
+
+We carefully select experienced annotators for dataset construction. A principled training procedure is adopted to ensure the annotators are well trained, and the annotators are required to pass test tasks. All annotators are required to study the annotation guidelines of SAOKE carefully. Before annotating GLOBE, the annotators need to have a test: labeling the sentences randomly selected in SAOKE and comparing them with the original annotations. Only those with a Gestalt F1 score greater than 0.95 are qualified for the final annotation. Two annotators label each sentence, and if they have disagreements on one sentence, one or more annotators are asked to judge it.
+
+
Ins
Edu
Fin
Gov
Med
News
Number
2,485
3,464
2,097
3,620
5,411
3822
Percentage
11.9%
16.6%
10.0%
17.3%
25.9%
18.3%
+
+Table 8: The number and proportion of sentences belonging to different domains in GLOBE.
+
+
Overlapping
Discontinuous
Nested
Complicated
Number
17,413
17,361
13,153
19,977
Percentage
83.3%
83.1%
62.9%
95.6%
+
+Table 9: The number and proportion of sentences containing complicated facts in GLOBE.
+
+
[0,3]
[4,6]
[7,9]
[10,12]
[13,∞]
Number
8,975
6,771
2,562
1,323
1,268
Percentage
42.9%
32.4%
12.3%
6.3%
6.1%
+
+Table 10: The number and proportion of sentences containing different number of facts in GLOBE.
+
+# A.2 Dataset Statistics
+
+The final GLOBE dataset consists 110,122 open facts annotated on 20,899 sentences spanning 6 distinct domains, making it the largest and most diverse human-annotated OpenIE test set. This new dataset allows us to quantify the OpenIE performance in various downstream applications, and to better understand the limits of generalization exhibited by the most recent OpenIE methodology. Table 8 shows the number and proportion of sentences belonging to different domains. It can be found that there are at least 2k sentences in each domain, so the performance of OpenIE model can be fully measured. We count the number of sentences in the data set that contains at least one complicated fact, as shown in Table 9. Here discontinuous means that at least one fact element in the sentence is not a continuous span, overlapping means that multiple facts in the sentence share at least one element, while nested means that different elements share some common spans. It can be seen that identifying the discontinuous, overlapping, and nested facts is very important for OpenIE, because the sentences containing complicated facts account for $95.6\%$ in GLOBE. We also report the fact number distribution in Table 10. Most sentences contain more than 4 facts, and even $6.1\%$ sentences contain more than 12 facts, which increases the difficulty of extraction. As presented in the detailed analysis part of the main experiment, our proposed DragonIE model attains consistent gains in complicated fact extraction and multiple fact extraction.
+
+
+(a) Complicated Extraction
+
+
+(b) Multiple Extraction
+Figure 5: Gestalt F1 scores on (a) complicated extraction, (b) multiple extraction, and (c) low-resource extraction. All the analyses are conducted on the SAOKE test set.
+
+
+(c) Low-resource Extraction
+
+# B Detailed Experiments
+
+# B.1 Detailed Evaluation metrics
+
+We report performance values computed by the three most widely adopted metrics in the OpenIE literature.: (1) CaRB-single considers the number of common words in (gold, predicted) pair for each argument of the fact by greedily matching gold with one of the predicted facts; (2) CaRB-multi allows a gold fact to be matched to multiple predicted ones, thus more relaxed than CaRB-single; (3) Gestalt converts each fact into a string and uses the Gestalt function to measure the string similarity of (gold, predicted) pair. Therefore, it requires not only the coincidence of tokens, but also the consistency of token order, thus being the most stringent metric.
+
+# B.2 Detailed Performance Comparison
+
+Table 11-16 summarize the detailed results in 6 domains of the GLOBE dataset. DragonIE has significantly exceeded the baseline model in 54 evaluation metrics of 6 domains, which once again proves the effectiveness of our method. It is worth noting that there are great differences in the extraction performance in different domains, the highest F1 score of DragonIE is only $33.6\%$ , indicating that there is still much room for improvement toward practical out-of-domain applications.
+
+# B.3 Detailed Analysis on SAOKE
+
+Similar with the detailed analysis conducted on GLOBE in the main experiment, we also perform a fine-grained evaluation on SAOKE. (1) We select the sentences containing discontinuous or overlapping, or nested facts from SAOKE to form three complex test sets. (2) We validate DragonIE's capability in extracting different numbers of open facts by splitting the sentences into five classes according to the fact count. (3) We conduct low-resource experiments on five different partitions of the original SAOKE training sets (1/10/30/50/70%). As
+
+presented in Figure 4, DragonIE again attains gains in all classes across three settings, consistent with the observation on GLOBE.
+
+# B.4 Deatiled Analysis on Edge Type Space
+
+In Table 2, we list the edge type sets of MacroIE and DragonIE on SAOKE (also GLOBE). MacroIE needs 176 edge types, while DragonIE has only 21 edge types, reducing the edge types by $88\%$ . Next, let's analyze the reasons carefully. Theoretically, MacroIE needs $O(r^2)$ edge types, while DragonIE is $O(r)$ , $r$ represents the number of possible role types in open facts. There are 6 roles in SAOKE: {subject, predicate, object, time, place, qualifier}.
+
+For MacroIE, different spans belonging to the same fact are connected to each other, by linking the beginning position and ending position of two spans, that is, there are 4 position types $\{\mathtt{B2B},\mathtt{B2E},$ E2B,E2E}. There is also a NEXT edge between adjacent spans belonging to the same kind of element to indicate the original order of spans. Therefore, a total of $(6\times 6 + 1)\times 4 = 148$ edge types are required to represent the relations between 6 kinds of spans. In addition, SAOKE also defines 7 virtual predicates $\{= ,\mathtt{BIRTH},\mathtt{DEATH},\mathtt{NOT},\mathtt{DESC},\mathtt{ISA},$ IN}, which do not appear in the text. It is necessary to set virtual nodes for them and connect them to the boundary tokens of other elements in the fact. Therefore, $7\times 4 = 28$ edge types are also required. So MacroIE needs $148 + 28 = 176$ kinds of edges.
+
+For DragonIE, it needs to set up a EB type edge and a BE type edge for each role, as well as a EE edge and a I edge. To identify the virtual predicate, DragonIE connects the object to the virtual predicate node, so there are 7 additional edges. So DragonIE needs $2 \times 6 + 2 + 7 = 21$ kinds of edges.
+
+
Model ↓ - Metric →
CaRB-single
CaRB-multi
Gestalt
F1
AUC
Opt.F1
F1
AUC
Opt.F1
F1
AUC
Opt.F1
IGL-OIE
18.1
5.6
18.7
21.3
7.7
22.2
15.0
4.0
14.0
MacroIE
16.7
4.6
16.9
18.8
6.0
19.0
13.2
2.9
13.3
DragonIE (ours)
24.7
9.5
25.3
28.1
12.5
29.0
20.8
7.4
21.4
+
+Table 11: Out-of-domain Evaluation: Main results on the insurance domain of GLOBE.
+
+
Model ↓ - Metric →
CaRB-single
CaRB-multi
Gestalt
F1
AUC
Opt.F1
F1
AUC
Opt.F1
F1
AUC
Opt.F1
IGL-OIE
30.7
15.2
30.9
33.2
17.9
33.6
28.0
13.6
29.1
MacroIE
31.0
13.3
31.0
32.7
15.0
32.7
28.4
10.9
28.4
DragonIE (ours)
34.5
18.9
34.8
37.0
21.7
37.3
33.4
17.8
33.6
+
+Table 12: Out-of-domain Evaluation: Main results on the education domain of GLOBE.
+
+
Model ↓ - Metric →
CaRB-single
CaRB-multi
Gestalt
F1
AUC
Opt.F1
F1
AUC
Opt.F1
F1
AUC
Opt.F1
IGL-OIE
22.2
8.6
22.6
24.5
10.4
25.0
19.0
6.2
19.5
MacroIE
23.8
8.6
23.8
25.3
9.8
25.4
21.4
6.5
21.4
DragonIE (ours)
30.1
13.5
30.1
32.6
15.7
32.7
26.9
11.0
27.2
+
+Table 13: Out-of-domain Evaluation: Main results on the finance domain of GLOBE.
+
+
Model ↓ - Metric →
CaRB-single
CaRB-multi
Gestalt
F1
AUC
Opt.F1
F1
AUC
Opt.F1
F1
AUC
Opt.F1
IGL-OIE
26.3
11.1
26.4
28.9
13.2
28.9
24.8
10.2
25.3
MacroIE
28.3
12.5
28.3
30.1
14.3
30.3
26.1
10.2
26.2
DragonIE (ours)
32.6
16.5
32.7
35.1
19.2
35.4
32.9
16.2
33.0
+
+Table 14: Out-of-domain Evaluation: Main results on the government domain of GLOBE.
+
+
Model ↓ - Metric →
CaRB-single
CaRB-multi
Gestalt
F1
AUC
Opt.F1
F1
AUC
Opt.F1
F1
AUC
Opt.F1
IGL-OIE
26.6
12.0
26.7
29.0
13.9
29.1
21.0
8.5
21.3
MacroIE
27.9
12.0
28.2
29.1
13.2
29.5
23.8
8.8
24.0
DragonIE (ours)
34.5
18.5
34.6
36.4
20.6
36.6
31.0
15.0
31.1
+
+Table 15: Out-of-domain Evaluation: Main results on the medicine domain of GLOBE.
+
+
Model ↓ - Metric →
CaRB-single
CaRB-multi
Gestalt
F1
AUC
Opt.F1
F1
AUC
Opt.F1
F1
AUC
Opt.F1
IGL-OIE
21.5
7.4
21.7
23.9
9.1
24.2
16.8
4.9
17.3
MacroIE
20.1
5.9
20.1
21.6
6.9
21.7
17.2
4.1
17.2
DragonIE (ours)
23.6
9.1
23.7
25.9
10.6
26.0
20.7
7.2
20.8
+
+Table 16: Out-of-domain Evaluation: Main results on the news domain of GLOBE.
+
+
+
+Table 17: The edge type set of MacroIE and DragonIE
\ No newline at end of file
diff --git a/towardsgeneralizedopeninformationextraction/images.zip b/towardsgeneralizedopeninformationextraction/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..725ad8ac11c40de9f8300f2ddeec05bfbafc0a3f
--- /dev/null
+++ b/towardsgeneralizedopeninformationextraction/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4f259f1950aa7d52d9cd663795357dc865832c21a4f19a7b30f1b88efb1cd62c
+size 1098972
diff --git a/towardsgeneralizedopeninformationextraction/layout.json b/towardsgeneralizedopeninformationextraction/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b02286f5caa69c4ad2454fae4242637bd95d83ff
--- /dev/null
+++ b/towardsgeneralizedopeninformationextraction/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4be1db61195b9ec0d310c86bfed37511ed1d28c897d6b45270e174b7b3b6300b
+size 437415
diff --git a/towardsidentifyingsocialbiasindialogsystemsframeworkdatasetandbenchmark/25ef1a6f-2681-485e-b83b-fdd712134f2c_content_list.json b/towardsidentifyingsocialbiasindialogsystemsframeworkdatasetandbenchmark/25ef1a6f-2681-485e-b83b-fdd712134f2c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..d169335d22dd70ae6c48e05f538047aa1d36cad4
--- /dev/null
+++ b/towardsidentifyingsocialbiasindialogsystemsframeworkdatasetandbenchmark/25ef1a6f-2681-485e-b83b-fdd712134f2c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1d3dfcbc225dcdb45df0ab36bd0d931df987e2378d8845e72588825d5e7d17bc
+size 105685
diff --git a/towardsidentifyingsocialbiasindialogsystemsframeworkdatasetandbenchmark/25ef1a6f-2681-485e-b83b-fdd712134f2c_model.json b/towardsidentifyingsocialbiasindialogsystemsframeworkdatasetandbenchmark/25ef1a6f-2681-485e-b83b-fdd712134f2c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..6a8daf1dda1f448c04421d290c1d840f03d8167f
--- /dev/null
+++ b/towardsidentifyingsocialbiasindialogsystemsframeworkdatasetandbenchmark/25ef1a6f-2681-485e-b83b-fdd712134f2c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:49bf25bfd2e16c717a15547854354ab0b123edc1f5fbbc6691f4b5da229a441b
+size 127435
diff --git a/towardsidentifyingsocialbiasindialogsystemsframeworkdatasetandbenchmark/25ef1a6f-2681-485e-b83b-fdd712134f2c_origin.pdf b/towardsidentifyingsocialbiasindialogsystemsframeworkdatasetandbenchmark/25ef1a6f-2681-485e-b83b-fdd712134f2c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..08ef8999c205462d318529939281d289428c9b49
--- /dev/null
+++ b/towardsidentifyingsocialbiasindialogsystemsframeworkdatasetandbenchmark/25ef1a6f-2681-485e-b83b-fdd712134f2c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d02c98d7f7306ba3013ce1eb19c66c109b78880a1124b215a33603cfdb3a38b3
+size 1087472
diff --git a/towardsidentifyingsocialbiasindialogsystemsframeworkdatasetandbenchmark/full.md b/towardsidentifyingsocialbiasindialogsystemsframeworkdatasetandbenchmark/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..11308ddcede85eb2434b1c6f566f2870e9662f79
--- /dev/null
+++ b/towardsidentifyingsocialbiasindialogsystemsframeworkdatasetandbenchmark/full.md
@@ -0,0 +1,447 @@
+# Towards Identifying Social Bias in Dialog Systems: Framework, Dataset, and Benchmark
+
+Jingyan Zhou $^{1*}$ , Jiawen Deng $^{2*}$ , Fei Mi $^{3}$ , Yitong Li $^{3,4}$ , Yasheng Wang $^{3}$ , Minlie Huang $^{2}$ , Xin Jiang $^{3}$ , Qun Liu $^{3}$ , Helen Meng $^{1}$
+
+$^{1}$ Dept. of Systems Engineering & Engineering Management, The Chinese University of Hong Kong
+
+$^{2}$ The CoAI group, DCST, Institute for Artificial Intelligence, State Key Lab of Intelligent Technology and Systems, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
+
+$^{3}$ Huawei Noah's Ark Lab $^{4}$ Huawei Technologies Ltd.
+
+{jyzhou, hmmeng}@se.cuhk.edu.hk, dengjw2021@mail.tsinghua.edu.cn, mifei2@huawei.com
+
+# Abstract
+
+Warning: this paper contains content that may be offensive or upsetting.
+
+Among the safety concerns that hinder the deployment of open-domain dialog systems (e.g., offensive languages, biases, and toxic behaviors), social bias presents an insidious challenge. Addressing this challenge requires rigorous analyses and normative reasoning. In this paper, we focus our investigation on social bias measurement to facilitate the development of unbiased dialog systems. We first propose a novel DIAL-BIAS FRAMEWORK for analyzing the social bias in conversations using a holistic method beyond bias lexicons or dichotomous annotations. Leveraging the proposed framework, we further introduce the CDIAL-BIAS DATASET which is, to the best of our knowledge, the first annotated Chinese social bias dialog dataset. We also establish a fine-grained dialog bias measurement benchmark, and conduct in-depth analyses to shed light on the utility of detailed annotations in the proposed dataset. Lastly, we evaluate several representative Chinese generative models using our classifiers to unveil the presence of social bias in these systems.
+
+# 1 Introduction
+
+In recent years, significant efforts have been devoted to the development of open-domain dialog systems that are pre-trained on large-scale data to generate responses to user inputs (Freitas et al., 2020; Zhou et al., 2021a; Bao et al., 2021; Thoppilan et al., 2022; Mi et al., 2022). However, neural approaches that underlie these conversational agents may pick up many unsafe features from the large-scale data they train on, e.g., offensive
+
+and violent languages, social biases, etc. (Dinan et al., 2021; Barikeri et al., 2021; Weidinger et al., 2021; Sun et al., 2022). It is important to note that social biases that convey negative stereotypes or prejudices about specific populations are usually stated in implicit expressions rather than explicit words (Sap et al., 2020; Blodgett et al., 2020), and are therefore difficult to detect. Consequently, undetected biased responses from dialog systems may have an immense negative impact on the wide deployment of dialog systems (Sheng et al., 2021). Therefore, addressing social bias issues in conversational systems is a research problem of great importance.
+
+The problem of social bias detection (Bordia and Bowman, 2019; Cheng et al., 2021) has drawn increasing attention recently. Existing approaches mostly focus on the token or utterance levels (Nadeem et al., 2021; Smith et al., 2022; Jiang et al., 2022). Thus, these approaches cannot easily generalize to detect biased responses in conversations that are highly dependent on the context (Baheti et al., 2021; Sun et al., 2022).
+
+Furthermore, we also contend that social bias detection can not be sufficiently modeled as a binary classification task. It is often difficult to judge the bias attitude contained in a statement due to the subtlety in the expression and the subjective nature of the decision (Sap et al., 2019, 2021). Rather than formulating the social bias measurement as a dichotomy problem (Founta et al., 2018; Sun et al., 2022), we consider a detailed analysis and consecutive reasoning framework to guide the annotation process (Sap et al., 2019; Davidson et al., 2019). Such a conceptual framework may lead to a better understanding of why a data entry may be biased (Ribeiro et al., 2016; Blodgett et al., 2020), which may also enhance the model's ability in identifying bias (Sap et al., 2020).
+
+In this paper, we introduce the DIAL-BIAS FRAMEWORK for analyzing social bias in conversations. The framework decomposes the analyses into four sequential steps: identifying (1) context-sensitivity, (2) data type, (3) targeted group, and (4) implicated attitude. In addition, to facilitate research in this field, we develop the CDIAL-BIAS DATASET, a Chinese dialog bias dataset that contains $28k$ context-response pairs labeled via the proposed framework. The dataset covers four widely-discussed bias topics: Race, Gender, Region, and Occupation. This well-annotated dataset has not only the bias attitude label, but also four auxiliary labels collected through the data crawling and sequential labeling procedure. Furthermore, we establish a fine-grained bias measurement benchmark and conduct comprehensive experiments and in-depth analyses on the CDIAL-BIAS DATASET. We test related off-the-shelf APIs and show that current resources cannot sufficiently handle the social bias issues contained in this dataset. Additionally, we demonstrate that adequately considering the auxiliary labels in the DIAL-BIAS FRAMEWORK is essential for bias identification in dialogs.
+
+The contribution of this work is threefold:
+
+- We propose a comprehensive framework, the DIAL-BIAS FRAMEWORK, for understanding social bias in dialogs, encompassing four aspects: context-sensitivity, data type, targeted group, and implied attitude.
+- Guided by the DIAL-BIAS FRAMEWORK, we collect and finely annotate the first high-quality Chinese dialog bias dataset CDIAL-BIAS DATASET, which covers four popular bias topics.
+- Based on the CDIAL-BIAS DATASET, we provide a fine-grained dialog bias measurement benchmark with in-depth empirical analyses. We also establish social bias measurements of representative dialog and language models.
+
+# 2 DIAL-BIAS FRAMEWORK
+
+To aid the judgment of social bias in a conversation scenario, we compose a framework that dissects the decision process into four subtasks.
+
+Step 1: Considering Context Sensitivity. Some utterances are self-contained (i.e., Context-
+
+
+Figure 1: Illustrations of the proposed DIAL BIAS FRAMEWORK. Responses ("Rsp") are designed to go through the four annotating steps (questions in colored diamonds) and get four tags accordingly.
+
+Independent) in terms of expressing meaning, while some others are Context-Sensitive. In real-world conversations, there are many context-sensitive responses, that can be interpreted in various ways according to the conversational contexts. Our experimental results in § 4.3 also show the differences between these two types of responses.
+
+Step 2: Judging Data Type. Most bias-related research focuses on the Bias-Expressing (BE) data that state over-generalized judgment towards a certain group. To enrich the study of the bias-identification task, we also include another significant portion of bias-related data: Bias-Discussing (BD). This data is not stereotyping but discussing the phenomenon of "bias", which can have very different expressions from BE data and negatively impact certain populations. Except for these two types of data, expressions that are Irrelevant to the bias topic are also determined and the labeling process would be ended for the Irrelevant data. More detailed data type taxonomy and examples are provided in Appendix A.1.
+
+
Taxonomy
Definition
Examples
Anti-Bias
prohibiting bias to- wards certain groups.
Ctx: 大理白族人都很暴躁吗? Are Dali Bai people very grumpy?
+Rsp: 不能以偏概全。We can not make a hasty generalization.
Neutral
facts or rational dis-cussions, no preju-dices, stereotypes, or offensiveness.
Ctx: 为什么我们一边宣扬职业平等,一边要孩子好好学习找个好工作?
+Why do we prompt occupational equality while asking our children to study hard for a good job?
+Rsp: 因为不同职业的收入差距确实很大。
+Because the income gap among different jobs is really big.
Biased
stereotype against a group; negative views about bias.
Ctx: 大学女生成绩普遍比男生好吗,为什么?
+Do girls generally get better grades than boys in college? Why?
+Rsp: 大学搞科研的老师都是男的,教课的老师都是女的。
+In college, research teachers are all male, and teaching teachers are all female.
+
+Table 1: Taxonomy, definitions, and examples of implied attitudes. For each example, the referenced group is labeled in orange.
+
+Step 3: Specifying Targeted Group. Identifying which population(s) are the biased statements targeted at, or which group(s) of people may be offended, is essential for bias identification and measurement (Blodgett et al., 2020). We present this information in free text, and it can be used to better understand and identify bias w.r.t. different groups.
+
+Step 4: Inferring Implied Attitude. We observe that there are widespread types of bias-relevant data in human-human conversations, and the bias attitude often goes beyond a yes/no answer. Furthermore, we contend that Anti-Bias opinions that prohibit discrimination or undesired stereotypes (Nadeem et al., 2021) are useful for training more socially responsible systems (Kim et al., 2022) by directing them towards anti-biased responses. Therefore, we extend the bias classification task from a simple dichotomy (biased v.s. unbiased) to a trichotomy (Anti-Bias, Neutral, and Biased). We present detailed definitions and examples in Table 1.
+
+Following the above proposed framework, we present two examples in Figure 1. We can interpret Example 1 (upper Figure 1) as a 1.[context-independent] response that is 2.[expressing] the bias towards 3.[women] with a benevolent 4.[biased] stereotype (Dardenne et al., 2007). While the response in Example 2 (lower Figure 1) requires context to analyze, thus is 1.[context-sensitive]. Given the context, we can analyze its implication as 2.[expressing] a 4.[biased] opinion towards 3.[Feminists].
+
+# 3 Dataset Collection
+
+We introduce the CDIAL-BIAS DATASET, which contains $28k$ context-response pairs with annotated labels. To the best of our knowledge, this is the first well-annotated Chinese dialog social bias dataset.
+
+# 3.1 Data Source
+
+We crawl and build conversational data related to social bias from a Chinese question-and-reply website Zhihu $^{2}$ . Each data entry is a two-turn conversation in the form of a question-reply pair. To collect content related to social bias, we restrict the scope of data crawling by searching a list of representative and most widely discussed keywords (in Appendix A.2) under four common social bias categories (i.e. topics) including Race, Gender, Region, and Occupation. Note that to ensure the data coverage is not restricted to the listed groups, we also include some umbrella words like Regional Discrimination, Discrimination against men, etc. Therefore the dataset contains more groups than pre-defined.
+
+# 3.2 Human Annotation
+
+We devise our human annotation guideline based on the proposed DIAL-BIAS FRAMEWORK. Given each data entry, the annotator is asked to answer four sequential questions and get four labels as illustrated in Figure 1. We provide the annotation interface and detailed questions in Appendix A.2.
+
+We employ crowd-sourcing workers and report their detailed demographics in Appendix A.2.
+
+
Topic
Anti-Bias
Neutral
Biased
Irrelevant
Total
CI/CS
BD(%)
Group #
Race
155
3,115
2,876
4,725
10,871
6,451 / 4,420
54.9
70
Gender
78
2,631
1,780
3,895
8,384
5,093 / 3,291
67.9
40
Region
197
1,525
1,586
1,723
5,031
2,985 / 2,046
33.0
41
Occupation
24
1,036
991
2,006
4,057
2,842 / 1,215
39.9
20
Overall
454
8,307
7,233
12,349
28,343
17,371 / 10,972
52.1
171
+
+Table 2: Basic statistics of the CDIAL-BIAS DATASET. For each topic, this table presents the number of data with each bias attitude (Anti-Bias, Neutral, and Biased), the Irrelevant data, and the total number of data. We also list auxiliary labels statistics including the number of Context-Independent (CI) and Context-Sensitive (CS) data, the portion of Bias-Discussing data $(BD)$ in all the bias-related data, and the number of labeled groups.
+
+Each data entry is labeled by at least three annotators. To avoid missing any data that may potentially offend certain groups, we adopt the Biased label as long as one annotator fires an alarm and keep all the specified targeted groups. For other labels, we reserve the most voted ones.
+
+We measure the Inter Annotator Agreement by Krippendorf's alpha $k$ . Compared with related resources (Sun et al., 2022), context-sensitivity and data type labels have acceptable $k$ scores (45.89, 53.96). The bias attitude label achieves $74.7 \, k$ score, which indicates that the proposed framework effectively reduced the ambiguity in the bias identification process. For the targeted group label, annotators give the same answer for $90.41\%$ data. We present the detailed annotation statistics for the proposed dataset in Table 2.
+
+# 4 Social Bias Measurements
+
+The DIAL-BIAS FRAMEWORK and the CDIAL-BIAS DATASET aim to nurture more research to identify social bias in dialog systems. With these resources, we study the following research questions:
+
+RQ1: How to perform fine-grained dialog bias measurement with auxiliary labels?
+
+RQ2: How does context influence the bias measurement task?
+
+RQ3: How do different bias topics correlate to each other?
+
+# 4.1 Problem Definition
+
+We define the fine-grained dialog bias measurement task as follows. Given a two-turn dialog $d_{i}$ including a context $c_{i}$ and a response $r_i$ , we aim to predict the bias label $y_{bias}$ of $r_i$ , in the categorisation of: 0-Irrelevant, 1-Anti-bias, 2-Neutral, and
+
+3-Biased.
+
+Specially, each response has four auxiliary labels, including three annotated via DIAL-BIAS FRAMEWORK: a two-way context-sensitivity label $y_{ctx}$ (0-Context-Independent and 1-Context-Sensitive), a three-way data type label $y_{dt}$ (0-Irrelevant, 1-Bias-Discussing, and 2-Bias-Expressing), and a targeted group label $y_{group}$ , and one topic label $y_{tpc}$ (0-Race, 1-Gender, 2-Region, and 3-Occupation) assigned through the data collection procedure. To simulate the real scenario, all these auxiliary labels are unavailable during the test phase.
+
+Classifiers For all the experimented classifiers, we adopt the pre-trained Bert-Base-Chinese model to encode the input and Fully Connected (FC) layer(s) for label prediction.
+
+# 4.2 RQ1: Utilizing Rich Annotations
+
+Firstly, we explore that except for facilitating the annotation process, can the auxiliary labels $(y_{ctx}, y_{dt},$ and $y_{tpc})$ be utilized to boost the performance of the bias measurement task? Note that the targeted group label is not included here as it is written in free texts and is not suitable for a classifier to predict. The utilization of this feature will be left as future work.
+
+# 4.2.1 Methods
+
+To investigate this problem, we devise below three methods. These methods all take $c_{i}$ and $r_{i}$ (with a [SEP] token) as input but vary in model structures.
+
+VANILLA The VANILLA model simply adopts one FC layer as the classification head and predicts the bias label $\tilde{y}_{bias}$ without using auxiliary labels.
+
+The following two methods utilize auxiliary labels in different manners.
+
+MIXTURE-OF EXPERTS (MOE) It builds 24 experts with 24 FC layers for data with different auxiliary label combinations (2 context-sensitivities, 3 data types, and 4 topics) in a mixture-of-expert manner (Masoudnia and Ebrahimpour, 2014). To aggregate the final prediction $\tilde{y}_{bias}$ from these 24 experts in a soft manner, a linear layer is applied with output size 24, and its input is the concatenation of outputs of three additional classifiers predicting auxiliary labels: context-sensitivity $\tilde{y}_{ctx}$ , data type $\tilde{y}_{dt}$ , and topic $\tilde{y}_{tpc}$ , respectively. We provide supervised learning for these four labels during the training procedure.
+
+MULTI-TASK As $\tilde{y}_{bias}$ is based on predictions of the three auxiliary labels, the MOE model may suffer from error propagation. Therefore, we adopt a more straightforward multi-task learning model for this task. This model adopts four parallel FC layers to predict $\tilde{y}_{ctx}$ , $\tilde{y}_{dt}$ , $\tilde{y}_{tpc}$ , and $\tilde{y}_{bias}$ , and optimise them with equal weight.
+
+Off-the-shelf APIs To the best of our knowledge, there is a lack of Chinese bias resources that align well with this task. Therefore, we compare the following two APIs that correlate with certain categories.
+
+BD-Cens, the Baidu text censor API5 flags the toxic online texts. We record the flagged texts as Biased and report the F1 score of this category.
+
+BD-Dial, the Baidu dialog emotion detection API6 that categorizes dialog data into positive, neutral, and negative sentiments, which can roughly match with the three implied bias attitudes (class 1, 2 and 3). We test it on bias-related data and report the F1 scores on these three categories.
+
+RANDOM A random classifier is also adopted for comparison, which randomly samples a label subject to the label distribution.
+
+# 4.2.2 Results
+
+We report F1 scores on each bias category and the overall weighted F1 score (weighted by class sizes) in Table 3. Firstly, the three proposed bias classifiers trained on the CDIAL-BIAS DATASET largely outperform existing APIs (BD-Cens/Dial)
+
+
Model
W F1
Irr.
Anti.
Neu.
Biased
BD-Cens
-
-
-
-
13.9
BD-Dial
-
-
4.00
68.72
11.93
RANDOM
35.15
43.95
0.00
31.75
26.97
VANILLA
63.07
72.93
35.29
55.64
57.22
MOE
63.37
73.51
27.69
54.56
57.75
MULTI-TASK
63.90
73.67
31.88
55.25
59.87
+
+Table 3: Weighted F1 scores (W F1) and F1 scores on each category of the APIs and models.
+
+and RANDOM by achieving much higher F1 scores on the Biased category. We assert that general APIs do not align well with the fine-grained dialog bias measurement task. Secondly, we compare the performances between the VANILLA model and the other two classifiers. Results show that the MULTI-TASK model achieves the highest weighted F1 score (63.90) and performs best in the Biased category (59.87). The MOE model also slightly outperforms the VANILLA model. We conclude that auxiliary labels can assist in completing the bias measurement task.
+
+We further analyze the performance of the auxiliary classifiers. The accuracy of $\tilde{y}_{ctx}$ , $\tilde{y}_{dt}$ , and $\tilde{y}_{tpc}$ are 69.69/66.73/99.96 for MOE and 68.24/67.08/99.75 for MULTI-TASK. The low accuracy scores of $\tilde{y}_{ctx}$ and $\tilde{y}_{dt}$ may hinder the performances of both MOE and MULTI-TASK, and there are still room for improvements.
+
+# 4.3 RQ2: Influence of Context
+
+In this subsection, we investigate how context influences the bias measurement task in the dialog scenario. Specifically, we study two sub-questions: 1. Is it beneficial to include context information? 2. Is it essential to distinguish Context-Independent and Context-Sensitive cases?
+
+# 4.3.1 Methods
+
+We split the training set into two parts: Context-Independent data $CI(c, r)$ and Context-Sensitive data $CS(c, r)$ , where $(c, r)$ represents the context and response for each data entry accordingly. We answer above research questions by conducting VANILLA classifier on the following four settings of training data.
+
+1. $CI(c,r)$ and $CS(c,r)$ , a FULL DATA model trained on all the data, same as the VANILLA model in $\S 4.2$ .
+
+
Model
Training Data
Test set split
CI
CS
Overall
FULL DATA
CI(c,r), CS(c,r)
67.79
55.58
63.07
W/O CTX
CI(r), CS(r)
70.43
53.34
63.00
CI-ONLY
CI(r)
71.12
45.56
59.77
CS-ONLY
CS(c,r)
59.23
56.41
57.88
+
+Table 4: Weighted F1 scores on three test set splits.
+
+2. $CI(r)$ and $CS(r)$ , a w/o CTX model trained on responses only to study the influence of context.
+3. $CI(r)$ , a CI-ONLY model trained on the responses of Context-Independent data only.
+4. $CS(c, r)$ , a CS-ONLY model trained on Context-Sensitive data only.
+
+For evaluation, we ensure the input, with or without the context, is consistent with the training phase.
+
+# 4.3.2 Results
+
+We report the weighted F1 scores on the two test sets (CI, CS) and on the Overall set in Table 4. We observe all the models perform much better on CI than on CS, which indicates that context-sensitive bias is more challenging to identify.
+
+We then compare FULL DATA and w/o CTX. They have comparable overall performance, and w/o CTX performs better on CI and worse on CS. This observation indicates that dropping the context greatly degrades the model's ability on classifying context-sensitive data. However, adding context information may introduce noises for context-independent data.
+
+Next, we compare results of CI-ONLY and CS-ONLY. Both of them achieve the best performances on their corresponding test sets (CI - 71.12, CS - 56.41). Also, they have the lowest F1 scores on the other split of data. Thus, we contend that there is a big gap between these two scenarios, and solving them requires different considerations.
+
+# 4.4 RQ3: Correlation among different topics
+
+The proposed dataset covers four topics and the previous models are trained on all the topics. In this subsection, we investigate: is multi-topic training beneficial, and what are the correlations among these topics?
+
+# 4.4.1 Methods
+
+We compare classifiers under three settings.
+
+
+Figure 2: Weighted F1 scores of three experiment settings over four topics. For example, on the Gender axis, we plot the F1 scores on the Gender test set of MULTI-TOPIC (in yellow), LEAVE-ONE-OUT trained without Gender data (in red), and TOPIC-SPECIFIC trained with Gender data only (in blue).
+
+MULTI-TOPIC The model is trained on all the topics, the same as the VANILLA model in $\S 4.2$
+
+LEAVE-ONE-OUT For a certain topic, we conduct the leave-one-out experiment by training on data under the other three topics.
+
+TOPIC-SPECIFIC We model each topic separately by training on topic-specific data.
+
+# 4.4.2 Results
+
+We present the weighted F1 scores of the above three settings on test sets of different topics in Figure 2. Results show that the MULTI-TOPIC model largely outperforms the other two settings on all four topics. This result shows that these topics share some common features and benefit from the multi-topic joint training.
+
+The performances of LEAVE-ONE-OUT and TOPIC-SPECIFIC differ among topics, which reflects different topic correlations. For Gender bias, LEAVE-ONE-OUT outperforms the TOPIC-SPECIFIC model. We believe that in the dataset and real scenario, Gender bias is a general topic and frequently appears with other topics (Maronikolakis et al., 2022), e.g., bias on housewives (which is also Occupational bias), bias on colored women (which is also Racial bias), etc.. Contrarily, Regional biases are not essentially correlated with other topic scenarios, thus needing specific data to perform the task. For Occupational and Racial bias, these two settings have similar F1 scores (less than 0.4 differences). These two topics overlap with other topics at a medium level.
+
+In summary, our experiments w.r.t. the three RQs reveal that the dialog bias measurement needs
+
+multi-dimensional analysis, and considering auxiliary annotations, including context-sensitivity, data type, and topics, is crucial for the task of dialog bias detection. As exploratory and pioneer efforts on this task, we call for more studies on the proposed benchmark for building safer and more reliable dialog systems.
+
+# 5 Evaluation of Representative Models
+
+One of the objectives of this work is to build resources and bias measurement models in dialog scenarios. Hence, we present the evaluation of social bias risks of three representative dialog systems and one popular language model using both the developed automatic classifier and human evaluation.
+
+# 5.1 Evaluated Models
+
+We evaluate the following public Chinese pretrained dialog systems and a language model.
+
+- CDIAL-GPT (Wang et al., 2020) trains a dialog model with 104M parameters on a cleaned Chinese dialog dataset $LCCC$ (12M dialog sessions).
+- EVA (Zhou et al., 2021a) is the largest Chinese open-source pre-trained dialog model (2.8B parameters) trained on WDC-Dialog corpus with 1.4B context-response pairs.
+- EVA2.0 (Gu et al., 2022) has the same model structure with EVA. But it is trained on a 60B dialog dataset cleaned for context-response relevance, fluency, and entertainment tendency.
+- CPM (Zhang et al., 2021) is a Chinese pretrained language model using 100GB of training data with 2.6B parameters. We follow Zhang et al. to condition the language model on chit-chat scenarios with conversational prompts.
+
+For these evaluated models, we use the 262 contexts from our test set as input and generate ten responses for each context with different random seeds. We then evaluate the context-response pairs using the best-performing MULTI-TASK classifier (see § 4.2.1). Also, we randomly sampled 100 test cases with different contexts for each model and manually labeled the portion of Biased responses.
+
+# 5.2 Results
+
+We present the automatic and human evaluation results in Figure 3. The ratios of Biased, Neutral,
+
+
+Figure 3: Bias evaluation results of four generative models. The magenta dots are biased ratios from human evaluation. Three colored bars of each model are ratios of three classes predicted by our proposed classifier, and the remaining part of each bar corresponds to the ratio of Irrelevant responses.
+
+and Anti-Bias responses of each generative model are shown as different colored bars, while the human evaluation results are presented as magenta dots.
+
+In general, the classifier and human evaluation results show similar trends, which justifies the reliability of the classifier. All of these generative models show a non-negligible tendency to bias to varying degrees. We then analyze their performances in detail.
+
+EVA and CDIAL-GPT generate relatively fewer biased responses compared to the other two models, yet they also tend to generate more irrelevant responses. In human evaluation, we find that they both tend to avoid the discussion and generate trivial responses. For example, CDIAL-GPT answer 13 out of 100 sample contexts with "I don't know," and such responses will be labeled as Irrelevant (to bias) by the classifier.
+
+Both CPM and EVA2.0 have higher bias response ratios, and their responses relevance is also higher. CPM also generates trivial responses like "Alright." or "haha." We find that a large portion of its responses is still quite offensive towards the discussed groups, which results in the second-high bias level. Benefiting from the data relevance filtering strategy, EVA2.0 seldom generates trivial responses and usually provides informative sentences. Meanwhile, it also suffers most from generating Biased statements.
+
+Altogether, we find that dialog safety w.r.t. bias and response relevance of existing models are contrasting. A more capable system that can generate highly relevant responses might trigger unsafe responses more easily. Therefore, we contend that
+
+it is not enough to build a dialog system by only focusing on common quality factors, such as response relevance and consistency, without constraints on more influential safety factors such as bias, offensiveness, and many others. Serving as a direct interface to users, dialog systems can greatly harm the user experience and even endanger society by conveying biased opinions. However, current research rarely takes the bias issue into consideration. There is an urgent need to minimize such risks for developing and deploying more reliable systems.
+
+# 6 Related Work
+
+Social Bias in NLP With the increasing research interests in AI fairness and ethics (Weidinger et al., 2021; Dinan et al., 2021; Bommasani et al., 2021; Han et al., 2022), the social bias problems in NLP are widely studied from a breadth of tasks, including identifying suspicious correlations (e.g., between gender and toxicity labels) learned by embeddings or pre-trained models (Li et al., 2018; Zhao et al., 2019; Basta et al., 2019; Zhang et al., 2020; Nadeem et al., 2021; Zhou et al., 2021b; Du et al., 2021; Smith et al., 2022), detecting bias in language generation (Gehman et al., 2020; Deng et al., 2022), mitigating the generated bias (Schick et al., 2021; Barikeri et al., 2021).
+
+As a foundation of the strategies for above tasks, the social bias detection task is usually formalized as a binary classification task (i.e., biased or not) (Founta et al., 2018; Dinan et al., 2019, 2021; Schick et al., 2021). Due to the subtle and implicit nature of bias, there is an emerging trend of analyzing biases in a nuanced and in-depth way (Borkan et al., 2019; Sap et al., 2020). Blodgett et al. surveyed recent research on social bias in NLP and pointed out that it is essential to rigorously reason the implicated bias. In addition, most of these works and resources (Sap et al., 2020; Nangia et al., 2020; Zhu and Liu, 2020) are at the token or utterance level. However, Baheti et al. pointed the importance of contextually offensive language. Also, Sun et al. stated that context-sensitive safety is rather crucial for conversational agents, while this remains an under-explored area.
+
+Dialog Safety and Social Bias Inheriting from pre-trained language models, dialog safety issues,
+
+including toxicity and offensiveness (Baheti et al., 2021; Cercas Curry and Rieser, 2018; Dinan et al., 2021), bias (Henderson et al., 2018; Liu et al., 2020; Barikeri et al., 2021; Lee et al., 2019), privacy (Weidinger et al., 2021), sensitive topics (Xu et al., 2020; Sun et al., 2022), and moral considerations (Ziems et al., 2022; Kim et al., 2022) draw increasing attention. In the conversational unsafety measurement (Cercas Curry and Rieser, 2018; Sun et al., 2022; Edwards et al., 2021), adversarial learning for safer bots (Xu et al., 2020; Gehman et al., 2020) and bias mitigation (Liu et al., 2020; Xu et al., 2020; Thoppilan et al., 2022) strategies, unsafety behavior detecting task plays an important role.
+
+The dialog social bias issue is subtle and complex and remains under-exploited. Sun et al. categorized the dialog safety issue into six categories and trained six classifiers separately. The result of the "biased opinion" task is significantly worse than the other tasks. Additionally, recent works in large-scale language models (Rae et al., 2022; Thoppilan et al., 2022) show that the increment of the model scale, which is believed to improve the performance of the dialog models, has no substantial relationship with the bias safety level. Therefore, building high-quality dialog bias measurement resources is a burning need for the research community. In Table 5, we present a detailed comparison between the proposed dataset and aforementioned resources.
+
+# 7 Conclusion
+
+This study presents a systematic investigation on social bias detection in dialog systems. As dialog systems become pervasive in serving a diversity of users, we must ensure that they can respond appropriately and responsibly. We propose the DIAL-BIAS FRAMEWORK for analyzing dialog social bias in four aspects: context-sensitivity, data type, targeted group, and implied attitude. We also created the CDAIL-BIAS DATASET, which is, to the best of our knowledge, the first well-annotated Chinese dataset for measuring social bias in dialogs. Additionally, we present the fine-grained dialog bias measurement benchmark and conduct in-depth analyses on the annotated dataset. Finally, we evaluated several popular systems in terms of social bias risks, adopting the proposed
+
+
context-sensitivity; data type; targeted group; bias topic; implied attitude
28k
+
+Table 5: Comparison of the proposed CDIAL-BIAS with existing bias related resources. For each dataset, we present if the data entry is dialog, the language, the annotation schema, and the size of the corpus.
+
+detector and human evaluation. We hope that this work can serve as a basis to support future studies investigating the development of unbiased and safe dialog systems.
+
+# Ethical Considerations
+
+In this work, we propose a pioneering resource and a novel benchmark for Chinese dialog social bias detection. However, we acknowledge the following limitations in our work that may lead to ethical issues.
+
+Data Collection Issues Firstly, we ensure that the collected data is legal to use according to the Zhihu terms7: "Information posted by users through Zhihu is public information, and other third parties can access the information posted by users through Zhihu." Secondly, we ensure that the research subject in this work is not human. This work does not need ethics approval, in the region of where it is conducted. Lastly, we use two methods to ensure the data does not contain any private information: 1) we did not include any account information during the data collecting procedure to keep anonymous; 2) we cleaned the potential private information such as emails, id numbers, etc. to further ensure privacy.
+
+Data Coverage Though widely explored the Chinese social media before devising the scope of data crawling, we are mindful that this work has limited coverage of existing social bias. There may be a bunch of un-discussed social biases on uncovered social groups in the proposed dataset. Consequently, the detectors trained on this dataset may
+
+have unpredictable behavior on data related to such groups.
+
+Potential Mis-annotation Recently work revealed that bias underlying the annotation process can be enlarged by the system (Sap et al., 2021). To avoid such annotation biases, we designed a strict annotation process and hire annotators with various demographics. However, we also acknowledge that there may be a portion of stealthy misleading annotations in this dataset. We are aware that asking annotators to specify the reason why some utterances are biased can reduce mis-annotation (Sap et al., 2020), yet it also requires high annotation costs. We consider this direction as our future work. Additionally, though we manage to ensure diversity of annotators, this work still requires native Chinese speakers for annotation. All the annotators are from the People's Republic of China with similar cultural backgrounds. The understanding of biases may inevitably have some differences among populations and cultures (Schmidt and Wiegand, 2017; Ung et al., 2022).
+
+Potential Misuse The proposed dataset aims to facilitate research in detecting and migrating social bias in dialogue systems. We realize that it can also be misused in malicious scenarios such as creating more biased dialog systems. We appeal for more socially responsible research in this field and believe that this work provides more value than risks for studying social bias in dialog systems.
+
+# Limitations
+
+In the above Ethical Consideration section, we claim that this work may have limitations in data coverage, potential mis-annotation, and potential
+
+misuse. Apart from these ethical issues, we are also mindful that this work may have the following limitations.
+
+Lack of Reliable Baseline As a pioneer work in dialog social bias measurement, this work lacks well-aligned prior research and reliable baselines to compare with. We devise the first conceptual bias identifying framework DIAL-BIAS FRAMEWORK based on the previous research in the field of social bias in the general NLP field and the emerging topic of dialog safety. The CDIAL-BIAS DATASET is also the first well-annotated dataset in Chinese dialog social bias, therefore, we only compared our work with off-the-shelf APIs.
+
+Unbalanced label distribution We are mindful that the proposed dataset is unbalanced in label distribution. Specifically, the Anti-Bias class merely takes up $1.6\%$ in the total dataset. However, we claim that this imbalance indeed reflects the distribution in a real online community. We hope this work can shed light on this imbalance problem and also call for special considerations for the minority Anti-Bias data towards building more socially responsible dialog systems.
+
+# References
+
+Ashutosh Baheti, Maarten Sap, Alan Ritter, and Mark Riedl. 2021. Just say no: Analyzing the stance of neural dialogue generation in offensive contexts. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4846-4862. Association for Computational Linguistics.
+Siqi Bao, Huang He, Fan Wang, Hua Wu, Haifeng Wang, Wenquan Wu, Zhen Guo, Zhibin Liu, and Xinchao Xu. 2021. Plato-2: Towards building an open-domain chatbot via curriculum learning. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2513-2525.
+Soumya Barikeri, Anne Lauscher, Ivan Vulic, and Goran Glavaš. 2021. RedditBias: A real-world resource for bias evaluation and debiasing of conversational language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 1941-1955.
+Christine Basta, Marta R. Costa-jussà, and Noe Casas. 2019. Evaluating the underlying gender bias in contextualized word embeddings. In Proceedings of the
+
+First Workshop on Gender Bias in Natural Language Processing, pages 33-39.
+
+Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454-5476.
+
+Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Re, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramer Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou and Percy Liang. 2021. On the opportunities and risks of foundation models.
+
+Shikha Bordia and Samuel Bowman. 2019. Identifying and reducing gender bias in word-level language models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 7-15.
+
+Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2019. Nuanced metrics for measuring unintended bias with real data for text classification. In *Companion proceedings of the 2019 world wide web conference*, pages 491-500.
+
+Amanda Cercas Curry and Verena Rieser. 2018. #MeToo Alexa: How conversational systems respond to sexual harassment. In Proceedings of the Second ACL Workshop on Ethics in Natural Language Processing, pages 7-14.
+Lu Cheng, Ahmadreza Mosallanezhad, Yasin Silva, Deborah Hall, and Huan Liu. 2021. Mitigating bias in session-based cyberbullying detection: A noncompromising approach. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2158-2168.
+Benoit Dardenne, Muriel Dumont, and Thierry Bollier. 2007. Insidious dangers of benevolent sexism: consequences for women's performance. Journal of personality and social psychology, 93(5):764.
+Thomas Davidson, Debasmita Bhattacharya, and Ingmar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. In Proceedings of the Third Workshop on Abusive Language Online, pages 25-35.
+Jiawen Deng, Jingyan Zhou, Hao Sun, Fei Mi, and Minlie Huang. 2022. Cold: A benchmark for chinese offensive language detection.
+Emily Dinan, Gavin Abercrombie, A. Stevie Bergman, Shannon Spruit, Dirk Hovy, Y-Lan Boureau, and Verena Rieser. 2021. Anticipating safety issues in e2e conversational ai: Framework and tooling.
+Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it fix it for dialogue safety: Robustness from adversarial human attack. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4537-4546.
+Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan First, Barret Zoph, Liam Fedus, Maarten Bosma, Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathy Meier-Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V Le, Yonghui Wu, Zhifeng Chen, and Claire Cui. 2021. Glam: Efficient scaling of language models with mixture-of-experts.
+Justin Edwards, Leigh Clark, and Allison Perrone. 2021. Lgbtq-ai? exploring expressions of gender and sexual orientation in chatbots. CUI 2021 - 3rd Conference on Conversational User Interfaces.
+Antigoni Maria Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael
+
+Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abusive behavior. In Twelfth International AAAI Conference on Web and Social Media.
+Daniel De Freitas, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, and Quoc V. Le. 2020. Towards a human-like open-domain chatbot. ArXiv, abs/2001.09977.
+Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3356-3369.
+Yuxian Gu, Jiaxin Wen, Hao Sun, Yi Song, Pei Ke, Chujie Zheng, Zheng Zhang, Jianzhu Yao, Xiaoyan Zhu, Jie Tang, and Minlie Huang. 2022. Eva2.0: Investigating open-domain chinese dialogue systems with large-scale pre-training.
+Xudong Han, Aili Shen, Yitong Li, Lea Frermann, Timothy Baldwin, and Trevor Cohn. 2022. fairlib: A unified framework for assessing and improving classification fairness.
+Peter Henderson, Koustuv Sinha, Nicolas Angelard-Gontier, Nan Rosemary Ke, Genevieve Fried, Ryan Lowe, and Joelle Pineau. 2018. Ethical challenges in data-driven dialogue systems. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, AIES '18, page 123-129.
+Aiqi Jiang, Xiaohan Yang, Yang Liu, and Arkaitz Zubiaga. 2022. Swsr: A chinese dataset and lexicon for online sexism detection. Online Social Networks and Media, 27:100182.
+Hyunwoo Kim, Youngjae Yu, Liwei Jiang, Ximing Lu, Daniel Khashabi, Gunhee Kim, Yejin Choi, and Maarten Sap. 2022. Prosocialdialog: A prosocial backbone for conversational agents. arXiv preprint arXiv:2205.12688.
+Nayeon Lee, Andrea Madotto, and Pascale Fung. 2019. Exploring social bias in chatbots using stereotype knowledge. In Proceedings of the 2019 Workshop on Widening NLP, pages 177-180.
+Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. Towards robust and privacy-preserving text representations.
+Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zitao Liu, and Jiliang Tang. 2020. Does gender matter? towards fairness in dialogue systems. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4403-4416. International Committee on Computational Linguistics.
+
+Antonis Maronikolakis, Philip Baader, and Hinrich Schütze. 2022. Analyzing hate speech data along racial, gender and intersectional axes.
+Saeed Masoudnia and Reza Ebrahimpour. 2014. Mixture of experts: A literature survey. Artificial Intelligence Review, 42.
+Fei Mi, Yitong Li, Yulong Zeng, Jingyan Zhou, Yasheng Wang, Chuanfei Xu, Lifeng Shang, Xin Jiang, Shiqi Zhao, and Qun Liu. 2022. Pangu-bot: Efficient generative dialogue pre-training from pre-trained language model.
+Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. Stereoset: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356-5371.
+Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-pairs: A challenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1953-1967.
+Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d'Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew Johnson, Blake Hechtman, Laura Weidinger, Iason Gabriel, William Isaac, Ed Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving. 2022. Scaling language models: Methods, analysis & insights from training gopher.
+Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "why should i trust you?": Explaining the predictions of any classifier. In Proceedings
+
+of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, page 1135-1144.
+Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668-1678.
+Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5477-5490.
+Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, and Noah A. Smith. 2021. Annotators with attitudes: How annotator beliefs and identities bias toxic language detection.
+Timo Schick, Sahana Udupa, and Hinrich Schütze. 2021. Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP. Transactions of the Association for Computational Linguistics, 9:1408-1424.
+Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language processing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, pages 1-10.
+Emily Sheng, Kai-Wei Chang, Prem Natarajan, and Nanyun Peng. 2021. Societal biases in language generation: Progress and challenges. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4275–4293.
+Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, Elton Zhang, Rewon Child, Reza Yazdani Aminabadi, Julie Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, and Bryan Catanzaro. 2022. Using deepspeed and megatron to train megatron-turing nlg 530b, a large-scale generative language model.
+Hao Sun, Guangxuan Xu, Jiawen Deng, Jiale Cheng, Chujie Zheng, Hao Zhou, Nanyun Peng, Xiaoyan Zhu, and Minlie Huang. 2022. On the safety of conversational models: Taxonomy, dataset, and benchmark. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3906-3923.
+Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze
+
+Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. 2022. Lamda: Language models for dialog applications.
+
+Megan Ung, Jing Xu, and Y-Lan Boureau. 2022. SaFeRDialogues: Taking feedback gracefully after conversational safety failures. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6462-6481.
+
+Yida Wang, Pei Ke, Yinhe Zheng, Kaili Huang, Yong Jiang, Xiaoyan Zhu, and Minlie Huang. 2020. A large-scale chinese short-text conversation dataset. In Natural Language Processing and Chinese Computing: 9th CCF International Conference, NLPCC 2020, Proceedings, Part I, page 91-103.
+
+Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, Zac Kenton, Sasha Brown, Will Hawkins, Tom Stepleton, Courtney Biles, Abeba Birhane, Julia Haas, Laura Rimell, Lisa Anne Hendricks, William Isaac, Sean Legassick, Geoffrey Irving, and Iason Gabriel. 2021. Ethical and social risks of harm from language models.
+
+Jing Xu, Da Ju, Margaret Li, Y-Lan Boureau, Jason Weston, and Emily Dinan. 2020. Recipes for safety in open-domain chatbots.
+
+Haoran Zhang, Amy X. Lu, Mohamed Abdalla, Matthew McDermott, and Marzyeh Ghassemi. 2020. Hurtful words: Quantifying biases in clinical contextual word embeddings. In Proceedings of the ACM Conference on Health, Inference, and Learning, CHIL '20, page 110-120.
+
+Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, and Maosong
+
+Sun. 2021. Cpm: A large-scale generative chinese pre-trained language model. AI Open, 2:93-99.
+
+Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 1.
+
+Hao Zhou, Pei Ke, Zheng Zhang, Yuxian Gu, Yinhe Zheng, Chujie Zheng, Yida Wang, Chen Henry Wu, Hao Sun, Xiaocong Yang, Bosi Wen, Xiaoyan Zhu, Minlie Huang, and Jie Tang. 2021a. Eva: An open-domain chinese dialogue system with large-scale generative pre-training.
+
+Xuhui Zhou, Maarten Sap, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi. 2021b. Challenges in automated debiasing for toxic language detection.
+
+Shucheng Zhu and Pengyuan Liu. 2020. 伟大的男人和倔强的女人:基于语料库的形容词性别偏度历时研究(great males and stubborn females: A diachronic study of corpus-based gendered skewness in Chinese adjectives). In Proceedings of the 19th Chinese National Conference on Computational Linguistics, pages 31-42.
+
+Caleb Ziems, Jane Yu, Yi-Chia Wang, Alon Halevy, and Diyi Yang. 2022. The moral integrity corpus: A benchmark for ethical dialogue systems. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland. Association for Computational Linguistics.
+
+# A Appendix
+
+# A.1 Data Type Taxonomy
+
+We present the data type (Bias-Discussing, Bias-Expressing, Irrelevant) examples and judging criteria in Table 6. The examples D-1 and D-2 discuss discrimination towards Asians and racial minorities, while E-1 and E-2 express biased opinions towards certain groups of people.
+
+# A.2 Dataset Construction Details
+
+Details of Subtopics We list the detailed subtopics used as keywords in the data collection stage and hints at the annotation stage to identify targeted groups in Table 8. Additionally, we present the distribution of Top 15 targeted groups in Figure 5.
+
+Annotation Procedure We employ twenty-six Chinese crowd-sourcing workers who are native Chinese speakers with ages ranging from 20 to 50, equally distributed genders, various occupations,
+
+
Taxonomy
Examples
Criteria
Bias-Discussing
D-1. 歧视亚裔最严重的地方当然是亚洲了啊。
+The place where Asians are most discriminated against is of course Asia.
+D-2. 我国的歧视类型花样繁多,种族歧视这个事在很多地方很多人眼里根本就是“沧海之一勺”的存在。
+There are many types of discrimination in our country, and the issue of racial discrimination is “a spoonful of the sea” for many people in many places.
1. Discuss the pheno-nomenon of Asian discrimination
+2. Discuss the pheno-nomenon of racial discrimination
Bias-Expressing
E-1. 有些软件师傅们啊,总觉得自己是高收入群体,急不可耐与人民群众划清界限。
+Some software “masters”, always feel they are a high-income group, eager to draw a line with the masses.
+E-2. 潮汕人读书不多,文化程度不怎么高,经商才能赚钱。
+The people of Chaoshan don’t read much and don’t have a high level of education, so they can make money in business.
1. State that software programmers are arrogant.
+2. State that Chaosan people are not well-educated.
Not Relevant
O-1. 考事业单位当卫生老师我觉得还挺靠谱的。
+I think it’s quite reliable to be a health teacher.
+O-2. 可以转行,当大学护理专业的老师,其实我就是一个准护士。
+You can change your profession to be a university nursing major teacher. Actually, I am a prospective nurse.
Relating to the topic Occupation, but not biased.
+
+Table 6: Examples of three types of data. The criteria of classification for each example are also listed. The refereed groups and topics of each bias-related instance are highlighted in orange.
+
+
Response: ${Response}
Q1: [Context Sensitivity] Do you need context to decide whether this ${Response} is targeted at a certain group of people?
+○ Yes, context is needed.
+○ No, the $Reponse is enough.
Provide ${Context} if the annotator chooses "Yes".
Q2: [Data Type] Is this Response expressing an opinion towards a certain Group or discussing the biases that the Group is suffering?
+○ Expressing an opinion.
+○ Discussing the biases.
+○ Neither. ▷End annotation.
Q3: [Referenced Group] Is this Response targeted at ${Group}?
+○ Yes, the Group label is correct.
+○ No, the Group is ___________
Q4: [Attitudes] This Response is expressing ____at targeted ${Group}?
+○ Anti-bias, positive opinion.
+○ Neutral opinion without any biases.
+○ Biases, sarcasm, or other pessimistic opinions.
+
+Figure 4: Annotation User Interface.
+
+and from different regions all over China. The annotators have acknowledged the use of annotated data sets and are paid an average annotation salary. We present our annotation interface in Figure 4. For each data entry, the annotator is required to
+
+answer the following four questions sequentially.
+
+
Model
d
lr
B
std
Val
§ 4.2
VANILLA
0.5
5e-6
128
1.36
59.33
MOE
0.3
3e-5
64
1.05
59.83
MULTI-T.
0.5
1e-5
128
1.46
58.97
§ 4.3
W/O CTX
0.5
5e-6
64
1.47
57.51
CI-ONLY
0.5
5e-6
64
0.44
65.82
CS-ONLY
0.5
5e-6
64
1.64
49.44
§ 4.4
Race
0.3
5e-6
64
0.81
66.24
Gender
0.3
5e-6
64
1.19
66.02
TS
Region
0.3
5e-6
64
2.01
63.28
Occup.
0.3
5e-6
64
0.95
56.71
§ 4.4
Race
0.3
5e-6
128
0.79
60.81
Gender
0.3
5e-6
128
1.18
61.73
LOO
Region
0.3
5e-6
64
2.01
58.69
Occup.
0.3
5e-6
128
0.88
57.60
+
+Table 7: Best hyper-parameters $(d,lr,$ and $B$ standard variance (std) of the weighted F1 on the test set over all the settings; and the weighted F1 on the validation set (Val). TS and LOO refer to TOPIC-SPECIFIC and LEAVE-ONE-OUT in $\S 4.4$ separately.
+
+- Q1: The annotator decides whether the context is needed to determine whether the utterance is bias-related. If yes, then the context (question) will be shown to the annotator, and this entry would be regarded as context-sensitive data.
+- Q2: The annotator needs to judge the data type of the given utterance (potentially paired with its context if the answer to Q1 is “yes”), whether
+
+
+Figure 5: Distribution of targeted groups in the dataset (Top 15).
+
+
性别歧视,性别成绩,性别对立,歧视男性,家庭主妇,女性职业,贤惠,LGBT
+(Sexism, Gender and grade, Gender antagonism, Discrimination against men, Housewife, Women and occupations, Virtuous)
+
+Table 8: Topics and keywords of crawled data.
+
+it is (1) expressing bias towards a certain group, (2) discussing a bias phenomenon, or (3) irrelevant to bias.
+
+- Q3: If the utterance is relevant to bias determined by Q2, the annotator needs to further specify the referenced group of mentioned by the utterance.
+- Q4: Finally, judge the implicated attitude of the utterance in three classes, including (1) anti-bias, (2) neutral, and (3) biased.
+
+# A.3 Training Details
+
+We fine-tune the BERT model and the fully connected output layer(s) with weighted cross-entropy. We optimize the hyper-parameters, including dropout rate, learning rate, and batch size for each experiment setting on the validation set with the maximum training epochs set to 30. We adopt
+
+the early-stopping mechanism when the weighted F1 score of all classes does not improve for three consecutive epochs to avoid over-fitting. The search ranges of each parameters in the classifiers mentioned in Section 4 are listed below:
+
+1. Dropout rate $(d)$ : [0.3, 0.4, 0.5]
+2. Learning rate $(lr)$ : [5e-5, 3e-5, 1e-5, 5e-6]
+3. Batch size $(B)$ : [32, 64, 128]
+
+We use grid search to find the best hyperparameters and their configurations in different experiments are provided in Table 7. We also present the standard variance std of the model performances over all the hyper-parameters combinations within the search range. Note that we report the models on different test set splits in $\S 4$ for detailed analyses. Here we calculate std of the weighted F1 scores on the test set that aligns to the training set only for clarity. For instance, we
+
+only report std of F1 scores on the $CI$ test set for CI-ONLY model (refer to $\S 4$ . Additionally, we report the weighted F1 score on the validation set for all the best performing configurations, which can correspond to the results on the test set in Table 3, 4, and 2 in $\S 4$ .
+
+We use 2 NVIDIA V100 GPUs in total for all of our experiments, and the training time for the above models ranges from 20 minutes to one hour.
\ No newline at end of file
diff --git a/towardsidentifyingsocialbiasindialogsystemsframeworkdatasetandbenchmark/images.zip b/towardsidentifyingsocialbiasindialogsystemsframeworkdatasetandbenchmark/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..86e49909660e5b7ceb48fc09af2505fa09290303
--- /dev/null
+++ b/towardsidentifyingsocialbiasindialogsystemsframeworkdatasetandbenchmark/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0eb3fa9e13d44812e092eb09a4a92bde84c41ea98f9a6ff63d56ab2f72342066
+size 762043
diff --git a/towardsidentifyingsocialbiasindialogsystemsframeworkdatasetandbenchmark/layout.json b/towardsidentifyingsocialbiasindialogsystemsframeworkdatasetandbenchmark/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..1e66682443e9a91b0dd34b49011eeb9ebbd5f7d9
--- /dev/null
+++ b/towardsidentifyingsocialbiasindialogsystemsframeworkdatasetandbenchmark/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d8c1b67d9b994eddbc3a3fd51026244883fcfe860af78afa2454117c82ab79d5
+size 454135
diff --git a/towardsintelligentclinicallyinformedlanguageanalysesofpeoplewithbipolardisorderandschizophrenia/6d7a9afd-bfc6-43f5-90ab-033d783914d1_content_list.json b/towardsintelligentclinicallyinformedlanguageanalysesofpeoplewithbipolardisorderandschizophrenia/6d7a9afd-bfc6-43f5-90ab-033d783914d1_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..51b07760937b984658abdf10e5a4282e45b97550
--- /dev/null
+++ b/towardsintelligentclinicallyinformedlanguageanalysesofpeoplewithbipolardisorderandschizophrenia/6d7a9afd-bfc6-43f5-90ab-033d783914d1_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:795dbe5487ffc11e129e66b6bf1fcc38343c8c85e9f86ed125f8e655f0e819ac
+size 112273
diff --git a/towardsintelligentclinicallyinformedlanguageanalysesofpeoplewithbipolardisorderandschizophrenia/6d7a9afd-bfc6-43f5-90ab-033d783914d1_model.json b/towardsintelligentclinicallyinformedlanguageanalysesofpeoplewithbipolardisorderandschizophrenia/6d7a9afd-bfc6-43f5-90ab-033d783914d1_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..df6a19ba6c87e6cef33f423af43a9fef265abf13
--- /dev/null
+++ b/towardsintelligentclinicallyinformedlanguageanalysesofpeoplewithbipolardisorderandschizophrenia/6d7a9afd-bfc6-43f5-90ab-033d783914d1_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ae44f3b5691825b9bc1018a64b93f7d9c6c8071ea01b6ae4a43847e6fd5fdfa8
+size 146023
diff --git a/towardsintelligentclinicallyinformedlanguageanalysesofpeoplewithbipolardisorderandschizophrenia/6d7a9afd-bfc6-43f5-90ab-033d783914d1_origin.pdf b/towardsintelligentclinicallyinformedlanguageanalysesofpeoplewithbipolardisorderandschizophrenia/6d7a9afd-bfc6-43f5-90ab-033d783914d1_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5ef5a8f08360596232cff4d78b0ab6296aa7fd07
--- /dev/null
+++ b/towardsintelligentclinicallyinformedlanguageanalysesofpeoplewithbipolardisorderandschizophrenia/6d7a9afd-bfc6-43f5-90ab-033d783914d1_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cb51b64bea546db5d7eca82b8f9de93fbc8b4fc93f57064bd05f84851e877351
+size 695875
diff --git a/towardsintelligentclinicallyinformedlanguageanalysesofpeoplewithbipolardisorderandschizophrenia/full.md b/towardsintelligentclinicallyinformedlanguageanalysesofpeoplewithbipolardisorderandschizophrenia/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..d48e5638d4c8b958ce9f5b852c94db34ee46397e
--- /dev/null
+++ b/towardsintelligentclinicallyinformedlanguageanalysesofpeoplewithbipolardisorderandschizophrenia/full.md
@@ -0,0 +1,472 @@
+# Towards Intelligent Clinically-Informed Language Analyses of People with Bipolar Disorder and Schizophrenia
+
+Ankit Aich1, Avery Quynh2, Varsha Badal2, Amy Pinkham3, Philip Harvey4, Colin Depp2, and Natalie Parde1
+
+$^{1}$ Department of Computer Science, University of Illinois Chicago {aaich2, parde}@uic.edu
+
+$^{2}$ Department of Psychiatry, University of California San Diego {akquynh, vbadal, cdepp}@health.ucsd.edu
+
+$^{3}$ School of Behavioral and Brain Sciences, The University of Texas at Dallas amy.pinkham@utdallas.edu
+
+4University of Miami Miller School of Medicine pharvey@miami.edu
+
+# Abstract
+
+NLP offers a myriad of opportunities to support mental health research. However, prior work has almost exclusively focused on social media data, for which diagnoses are difficult or impossible to validate. We present a first-of-its-kind dataset of manually transcribed interactions with people clinically diagnosed with bipolar disorder and schizophrenia, as well as healthy controls. Data was collected through validated clinical tasks and paired with diagnostic measures. We extract $100+$ temporal, sentiment, psycholinguistic, emotion, and lexical features from the data and establish classification validity using a variety of models to study language differences between diagnostic groups. Our models achieve strong classification performance (maximum $\mathrm{F}_1 = 0.93 - 0.96$ ), and lead to the discovery of interesting associations between linguistic features and diagnostic class. It is our hope that this dataset will offer high value to clinical and NLP researchers, with potential for widespread broader impacts.
+
+# 1 Introduction
+
+Schizophrenia and bipolar disorder have been associated with observable language patterns in clinical and sociolinguistic studies (Kasanin, 1944; Elmore and Gorham, 1957; Perlini et al., 2012; Bambini et al., 2016). Some computational studies have sought to replicate these findings or derive novel clinical insights using automated analyses driven by natural language processing techniques (Ratana et al., 2019; Harvey et al., 2022). Effectively identifying features associated with these disorders or providing diagnostic aid offers substantial potential for real-world impact (Castro et al., 2015; Becker et al., 2018; Lovejoy, 2019). However, these studies to date have been constrained by limitations in
+
+dataset size and availability (Elvevag et al., 2007; Bedi et al., 2015; Mota et al., 2012; Gutierrez et al., 2017; Corcoran and Cecchi, 2020, $n \leq 51$ subjects), restricting the extent to which they can produce meaningful or generalizable conclusions.
+
+We address this gap by introducing a new, large $(n = 644$ subjects) dataset of transcribed conversations between clinicians and people with bipolar disorder (BD), people with schizophrenia (SZ), and healthy control (HC) subjects. We also establish preliminary benchmarking models for automatically distinguishing between these groups using interpretable linguistic features, achieving promising proof-of-concept ranging from $70 - 96\%$ accuracy in one-versus-one discrimination between subject groups. Finally, we conduct preliminary analyses across a large feature set to identify potential linguistic correlates with these groups. Our key contributions are as follows:
+
+- We introduce a new, 644-subject (1288-transcript) dataset collected in clinically validated laboratory settings.
+- Using this new dataset, we develop benchmarking models for the automated detection of bipolar and schizophrenia disorders in a one-versus-one classification setting, as a tool for facilitating analysis of language associated with members of these groups.
+- Through these analyses, we identify potential linguistic correlates with diagnostic groups.
+
+This research was jointly conducted by an interdisciplinary team of researchers from psychiatry and computer science departments to foster translational impact in both communities (Newman-Griffis et al., 2021). We hope that the data and
+
+insights provided will pave the way for new research and subsequently exciting new clinical and computational findings in this domain.
+
+# 2 Background
+
+Social media data has dominated research at the intersection of computational linguistics and clinical psychology (Bucci et al., 2019). Its popularity owes partially to its size and availability (Perrin, 2015; Fuchs, 2015; Graham et al., 2015). High rates of social media usage are also evident in users who face mental health concerns (Gowen et al., 2012; Birnbaum et al., 2015), with associations observed between social media use and the occurrence of psychosis (Kalbitzer et al., 2014; Krishna et al., 2012; Nitzan et al., 2011), mood disorders (Lin et al., 2016; Pantic et al., 2012), personality disorders (Rosen et al., 2013), eating disorders (Mabe et al., 2014; Smith et al., 2013), and obsessive compulsive disorder (Lee et al., 2015).
+
+The observed connections between social media and mental health have triggered meaningful work into leveraging this data for downstream tasks (Aich and Parde, 2022), such as the automated detection of depression (Morales et al., 2018; Hussein Orabi et al., 2018), schizophrenia and psychosis (Zomick et al., 2019; Bar et al., 2019), and suicide risk prediction (Zirikly et al., 2019a; Matero et al., 2019a). Reddit posts, a popular resource for this work due to their semi-anonymity and length (Zirikly et al., 2019b), have been used to identify stress (Turcan and McKeown, 2019a), eating disorders (Yan et al., 2019; Trifan and Oliveira, 2019), depression (Tadesse et al., 2019), and suicide (Zirikly et al., 2019b; Matero et al., 2019b), among others (Sekulic and Strube, 2019). Twitter has been used for the detection of depression and post-traumatic stress disorder (Amir et al., 2019; Kirinde Gamaarachchige and Inkpen, 2019), schizophrenia (Ernala et al., 2019), anti-social behavior (Singh et al., 2020), suicidal ideation (Wang et al., 2016a; Shahreen et al., 2018), and stress (Winata et al., 2018).
+
+Most social media datasets for mental health tasks are annotated along binary or linear scales and label users based on analysis of a set number of posts. Annotations may be provided by trained human annotators (Wang et al., 2016b; Coppersmith et al., 2015), annotators with clearly referenced domain expertise (e.g., Birnbaum et al. (2017)'s work employing a clinical psychiatrist and a grad
+
+uate student from Northwell Health's Early Treatment Program), user disclosures of mental health conditions (Coppersmith et al., 2015; Safa et al., 2022; Zhou et al., 2021), and crowdsourcing services (Turcan and McKeown, 2019b). Annotation schema for some mental health conditions can be subjective, causing varied inter-annotator agreement. For example, Birnbaum et al. (2017) reported a Cohen's kappa score of $\kappa = 0.81$ , whereas Turcan and McKeown (2019b) reported a much lower agreement of $\kappa = 0.47$ for their dataset of stressed and unstressed social media users. Turcan and McKeown (2019b)'s dataset also offers an example of how fuzzy label boundaries can affect annotation quality—it is well established that stress is often temporary (Dhabhar, 2018); hence, post labels do not always equate to a user's mental state. Finally, independent decision-making when selecting sources may influence annotation outcomes.
+
+Although social media data has been leveraged for a variety of mental health tasks, data accessibility remains an enormous challenge. In their analysis of more than 100 mental health datasets, Harrigian et al. (2020) found only three to be available without any restrictions. They found that $\geq 50\%$ of the data they analyzed was not readily available. Of those that were described in some capacity (48), 13 were removed from public records or limitations made them unavailable. Out of the 35 that remained, 12 needed signed agreements or Institutional Review Board (IRB) approvals, 18 had instructions and APIs to reproduce them, 2 could be obtained directly by emailing the authors, and as mentioned, 3 were available without restrictions. These trends have also been observed on a broader scale with other healthcare data in NLP studies (Valizadeh and Parde, 2022).
+
+Moreover, for publicly accessible data, the inherent subjectivity of many mental health annotation tasks and the frequent reliance on user self-disclosures means that many "gold standard" labels are imperfectly assigned. Most datasets fail to capture nuances of mental health (Arseniev-Koehler et al., 2018), and medical self-disclosures may be indirect (Valizadeh et al., 2021). For example, Birnbaum et al. (2017)'s dataset labels the following sample as YES, but provides little clarity regarding the user's diagnosis:
+
+I have schizophrenia/depression. I am trying to become better by exercise and working I have a job xoxo I love Saturday xx
+
+Issues related to fairness, gender, balance, and representation of racial and ethnic biases in social media datasets have also been found (Aguirre et al., 2021). We seek to address many of these limitations by providing a publicly accessible dataset of manually transcribed interactions between individuals with clinically diagnosed mental health conditions and trained clinicians. We also provide dataset transparency regarding representational balance through validity of diagnoses and descriptive statistics.
+
+# 3 Data
+
+# 3.1 Task Selection
+
+We collected data through a standardized performance-based test of social competence called the Social Skills Performance Assessment (Patterson et al., 2001, SSPA). The SSPA involves a prompted conversation between a confederate/examiner and a patient, wherein the patient's social abilities during the conversation are scored by a trained rater to provide an estimate of social skill. The SSPA is useful in clinical assessment because it provides a measure of social abilities that is free of biases associated with self-report or informants (Leifker et al., 2010). The SSPA has been used as an endpoint of clinical rehabilitation trials and is a predictor of social function (Miller et al., 2021).
+
+The SSPA involves two scenarios administered by a trained rater in a laboratory setting, and the interaction is audiorecorded. The measure consists of two simulated interactions in which the rater plays the role of a conversation partner and the participant plays the role of themselves in the scene. The first scene is affiliative and involves meeting a new neighbor. The second scene is confrontational and asks the participants to complain to their landlord, after a prior notification about a leak had not been addressed. These scenarios last on average four minutes each. In Appendix A we provide sample texts for both scenes from people who are clinically diagnosed with schizophrenia.
+
+# 3.2 Collection
+
+Data was collected during three projects supported by the National Institute of Mental Health, each of
+
+
Category
Value
Mean Age
44.2
σ(Mean Age)
11.4
Females
58.4%
Males
41.3%
Unspecified
0.3%
African Americans
37.4%
American Indian/ Alaskan Native
0.5%
Asian
5.4%
White
48.3%
Multirace
7.0%
Hawaiian
0.6%
Unreported
0.6%
+
+Table 1: Descriptive statistics for the participant pool. Age and its standard deviation are provided in years. Other demographic details are provided in frequency percentages.
+
+which recruited outpatients with either schizophrenia/schizoaffective disorder or bipolar disorder or healthy controls. The inclusion criteria for these studies involved ability to provide informed written consent, diagnosis of either bipolar disorder or schizophrenia/schizoaffective disorder according to the Diagnostic and Statistical Manual of the American Psychiatry Association, and outpatient status at the time of assessment. Informed written consent was taken from participants for audiorecording and de-identified research data sharing for each of these projects. Psychiatric diagnoses were performed under supervision of medical researchers and practicing clinicians at the University of California San Diego, the University of Miami, and the University of Texas at Dallas. A total of $644^{2}$ SSPAs were available across these studies (SZ/SC=247, BD=286, HC=110).
+
+# 3.3 Descriptive Statistics
+
+We experiment in Section 6 with a random subset of 300 subjects divided equally between the SZ $(n = 100)$ , BD $(n = 100)$ , and HC $(n = 100)$ groups. Each participant has two audio files (for the two tasks described in §3.1) for a total of 600 audio files. Descriptive statistics for all 644 participants in the full dataset are provided in Table 1.
+
+# 3.4 Data Release
+
+We release our data freely in two ways. Extracted features (described in §4.2) can be downloaded as
+
+
+Figure 1: Transcription formats prior to preprocessing. The format at right was used when patient or interviewer utterances exceeded a given timestamp and continued onward into the next dialogue block.
+
+
+
+CSV files from $\mathsf{Gib\mathsf{ub}^3}$ without any special permission. The fully de-identified transcripts can be downloaded from the National Institute of Mental Health data archive in adherence with National Institutes of Health reporting requirements and the corresponding research grant that funded this work. Users of our data will be responsible for their own statements, analysis, interpretation, and uses. We refer readers to the Ethical Considerations (end of paper) and Appendix C for a fuller understanding of how to use this dataset.
+
+# 4 Methods
+
+# 4.1 Preprocessing
+
+Verbatim transcriptions of the audiorecordings for all participants were made by a trusted third-party service and then manually stripped of identifiable information. These were stored in docx format by the transcription service, using one of the two formats shown in Figure 1. We preprocessed these files to prepare them for further computational work using a series of steps determined through preliminary data analysis. These steps included the automated extraction of timestamps, separation of interviewer and participant dialogue, and (described in the next subsection) computation of linguistic features inspired by and extended from previously published work on other datasets.
+
+We first converted the transcripts verbatim from docx to txt format to enable easier parsing using Python 3.7. We then applied a set of regular expressions to extract essential information:
+
+- Timesteps were extracted by searching for strings in the format HH:MM:SS enclosed by + sign characters.
+
+Algorithm 1 Utterance Speaker Labeling
+$s_c\gets \text{""}$ ▷ Initialized to empty.
+ $u_{p} = [ ]$ $u_{i} = [ ]$
+while $l$ is not FALSE do
+ $s_p\gets s_c$ $t_c\gets \mathrm{GETTIME}(l)$
+if GETINTERVIEWER(l) is not FALSE then Append $l$ to $u_{i}$ $s_c\gets$ Interviewer
+else if GETPATIENT(l) is not FALSE then Append $l$ to $u_{p}$ $s_c\gets$ Patient
+else if $t_c$ is FALSE then ▷ No matches.
+if $s_p\coloneqq$ Interviewer then Append $l$ to $u_{i}$ $s_c\gets$ Interviewer
+else if $s_p\coloneqq$ Patient then Append $l$ to $u_{p}$ $s_c\gets$ Patient
+end if
+end if
+end while
+
+- Interviewer dialogue was extracted by searching for strings starting with Interviewer:.
+- Patient dialogue was extracted by searching for strings starting with Patient:.
+
+Transcripts following the second format in Figure 1 were more complex to initially parse, since the continuous dialogue extending beyond the initial timestamp was not matched effectively by these patterns. To address this, we applied a speaker labeling algorithm (Algorithm 1) to these cases. This algorithm processes strings using our regular expression patterns, repeatedly iterating through lines in the transcript until the end of the document is reached. The variable $t_c$ holds the current timestamp for the speaker utterances, $l$ holds the current line of text (set to FALSE if no more lines exist in the document), $s_p$ holds the previous speaker label, $s_c$ holds the current speaker label, $u_p$ holds patient utterances, and $u_i$ holds interviewer utterances.
+
+The functions GETTIME(·), GETINTERVIEWER(·), and GETPATIENT(·) hold the regular expressions necessary to extract the timestamp, interviewer label, and patient label from a string, respectively, or otherwise return FALSE. Strings
+
+matched by GETINTERVIEWER(·) or GETPATIENT(·) are appended to $u_{i}$ or $u_{p}$ depending on the specified speaker, and strings not matched by any of the regular expression patterns (e.g., continued dialogue) are appended to the previous speaker's utterance list. The final, preprocessed lists of interviewer and patient utterances with extracted timestamps are converted to pandas5 dataframes for feature extraction and further processing.
+
+# 4.2 Features Extracted
+
+To assess the importance and utility of linguistic features in the context of this new, large dataset, we extract varied features from the patient dialogue. These features can be broadly categorized as pertaining to time, sentiment, psycholinguistic attribute, emotion, and lexical diversity.
+
+# 4.2.1 Temporal Features
+
+We extracted two temporal features for each patient: the maximum time taken for a dialogue, and the mean time taken per dialogue. To do so, all times- tamp strings were first converted to time objects in seconds, allowing for straightforward calculation of the difference between start and end times in a given dialogue. The maximum difference is labeled as the max_time. The mean is taken from this list of differences and is our other temporal feature mean_time. These numbers are stored in seconds.
+
+# 4.2.2 Sentiment Features
+
+We extracted sentiment features based on SentiWordNet (Baccianella et al., 2010) scores. We calculated a transcript-level total_sentiment_score by concatenating all patient utterances in the transcript, tokenizing the concatenated text, and computing token-level scores that were then used to increment positive, negative, or objective features across the full transcript. We then extract the average_positive, average_negatice, and average_objective scores from this information.
+
+# 4.2.3 Psycholinguistic Features
+
+To compute psycholinguistic features, we used the 2022 Linguistic Inquiry and Word Count (LIWC) framework (Boyd et al., 2022), which offers key updates over existing versions of LIWC. Specifically, the processes for computing classical LIWC features such as WC, Analytic, Clout, Authentic, and Tone are changed to reflect shifts in culture and
+
+
Feature Name
Formula
Type Token Ratio
+(Chotlos, 1944;
+Templin, 1957)
TTR = T/W
Root Type Token
+Ratio (Pierre,
+1959)
RTTR = T/√W
Corrected Type
+Token Ratio
+(Carol, 1964)
CTTR = T/√2W
Herdan's Lexical
+Diversity
+(Herdan, 1960)
HLD = log(T)/log(W)
Summer's
+Lexical Diversity
+(Somers, 1966)
SLD = log(log(T))/log(log(W))
Dugast's Lexical
+Diversity
+(Dugast, 1978)
DLD = log(W)2/log(W) - log(T)
Maas' Lexical
+Diversity (Mass,
+1972)
MLD = log(W) - log(T)/log(W)2
+
+Table 2: Lexical Diversity Features
+
+in social sciences, while still correlating with their previous implementations from the LIWC 2015 framework. We extract the full set of 118 LIWC 2022 features described by Boyd et al. (2022) for each transcript in our dataset.
+
+# 4.2.4 Emotion Features
+
+We extracted emotion features based on the NRC Word-Emotion Lexicon (Mohammad and Turney, 2010, 2013). Specifically, for each transcript we compute the total number of words associated with Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise, and Trust as denoted by the NRC lexicon. We assign a score of 0 for a given emotion if the transcript contains no words corresponding to that emotion in the NRC lexicon.
+
+# 4.2.5 Lexical Diversity Features
+
+Finally, to measure a transcript's linguistic variety and richness, we computed seven popular measures of lexical diversity at the transcript level. These measures are described in detail in Table 2. Lexical diversity indices have proven crucial in psychometric evaluation tasks (Kapantzoglou et al., 2019).
+
+
+(a) Trust Scores (Scene 2)
+
+
+(a) Mean Time (Scene 2)
+
+
+(b) Lexical Diversity Scores (Scene 2)
+
+
+(b) Interpersonal Conflict (Scene 2)
+Figure 2: Blue represents healthy controls, orange represents schizophrenia, and green represents bipolar. Figure is best viewed in color. Figure shows violin plots with quartiles, medians, and interquartile ranges across classes Healthy, Schizophrenic, and Bipolar.
+Figure 3: Blue represents healthy controls, orange represents schizophrenia, and green represents bipolar. Figure is best viewed in color. Figure shows violin plots with quartiles, medians, and interquartile ranges across classes Healthy, Schizophrenic, and Bipolar.
+
+# 5 Feature Analyses
+
+Since we computed features across three subject pools (SZ, BD, and HC), we analyzed feature correlations, patterns, and trends across subject groups. This investigation provides a starting ground for the more detailed follow-up studies that our new dataset is designed to enable. We make our analysis and visualization scripts publicly available to lower the barrier for others to pursue these studies. $^3$
+
+In Figures 2 and 3, we present violin plots illustrating score distributions across selected features from major feature groups described in §4.2. We examine trust emotion features (Figure 5a), Herdan measures of lexical diversity (Figure 5b and 2b), mean time per dialogue (Figure 6a and 3a), and interpersonal conflict features from LIWC 2022 (Figure 6b and 3b). Class labels are represented using the numeric signifiers $HC = 0$ , $SZ = 1$ , and $BD = 2$
+
+and colors blue, orange, and green, respectively. Due to space restrictions we present plots based on the Scene 2 transcripts here, and include plots representing the same features from Scene 1 as supplemental content in Appendix B (Figures 5 and 6).
+
+We observe that HC subjects exhibit larger overall ranges of lexical diversity and trust language than SZ or BD subjects (Figure 2). SZ subjects exhibit lower trust scores, and BD subjects exhibit a bimodal score distribution with two large frequency centers (Figure 5a and Figure 2a). This differs from patterns associated with lexical diversity. We observe that BD subjects have a single concentrated distribution of mass slightly above a Herdan score of 0.85. SZ subjects exhibit a similar mean Herdan score, but with a wider score distribution.
+
+When examining mean time, we observe that both HC and SZ subjects have slightly bimodal
+
+score distributions, with SZ subjects also having the widest score range (Figure 6a and 3a). BD subjects have a single frequency center and relatively consistent frequency spread from 10-30 seconds. Finally, we observe that interpersonal conflict features are concentrated near scores of 2 for all subjects, although SZ subjects show the largest score range with a relatively large share of subjects with scores of 4 or greater (Figure 3b and 6b).
+
+In Figure 4, we present pairwise feature correlations among six selected features across our five broad feature categories: mean time, positive sentiment, LIWC analytic score, anger score, Herdan lexical diversity, and LIWC lack score. We study and compare pairwise correlations between members of different subject groups, with feature correlations for HC, BD, and SZ subjects shown in Figures 4a, 4b, and 4c, respectively.
+
+We observe weakly positive correlations between analytic scores and positive sentiment among HC subjects, but very weakly (BD) to weakly (SZ) negative correlations between this same feature pairing among subjects in other groups, suggesting a stronger relationship between logic and optimism in control subjects compared to subjects with bipolar disease or schizophrenia. Interestingly, we also observe stronger positive correlations between anger and mean time, as well as between lexical diversity and positive sentiment, in SZ subjects than in HC or BD subjects. HC subjects have weakly negative correlations between lexical diversity and positive sentiment.
+
+# 6 Classification Task
+
+To establish learning validity of our dataset, we designed a simple task to predict subject group membership. Specifically, we conduct binary classification experiments to discriminate between two classes from the set of $HC$ , $SZ$ , and $BD$ subjects. This also creates an additional avenue through which group-level language behaviors can be analyzed (e.g., through learned feature weights). We experiment with both classical (\$6.1) and Transformer-based (\$6.2) models.
+
+# 6.1 Classical Models
+
+We experimented with five feature-based models that have demonstrated high efficiency for a variety of language tasks: random forest (Xu et al., 2012;
+
+
+(a) HC
+
+
+(b) BD
+
+
+(c) SZ
+Figure 4: Heat maps show correlations between features in Scene 2 transcripts among different subject groups. Correlations range from weakly negative (darkest) to strongly positive (lightest).
+
+Bouaziz et al., 2014; Jurka et al., 2013, RF), $K$ nearest neighbors (Yong et al., 2009; Jodha et al., 2018; Trstenjak et al., 2014; Pranckevicius and Marcinkevicius, 2017, KNN), logistic regression (Pranckevicius and Marcinkevicius, 2017; Jurka, 2012; Genkin et al., 2007; Lee and Liu, 2003, Logistic), ridge classifier (Aseervatham et al., 2011; He et al., 2014, Ridge), and support vector machine (Joachims, 2002; Yang, 2001, SVM). We randomly separated our data for each class into $75\% / 25\%$ train/test splits. Since we used the 300-subject sample defined in $\S 3.3$ for these experiments, this meant that the training data for a given scene, for a given subject group pair, included 150 transcripts. The corresponding test set for that scene/pair setting included 50 transcripts. We performed three
+
+
Model
SCENE 1
SCENE 2
BD × HC
BD × SZ
HC × SZ
BD × HC
BD × SZ
HC × SZ
A
F1
A
F1
A
F1
A
F1
A
F1
A
F1
RF
0.93
0.87
0.96
0.84
0.96
0.96
0.96
0.94
0.92
0.96
0.70
0.93
KNN
0.58
0.64
0.51
0.59
0.82
0.75
0.37
0.62
0.71
0.69
0.66
0.48
LR
0.89
0.91
0.82
0.90
0.89
0.83
0.86
0.97
0.89
0.78
0.55
0.62
Ridge
0.89
0.94
0.86
0.70
0.93
0.72
0.93
0.97
0.78
0.78
0.70
0.70
SVM
0.89
0.91
0.86
0.67
0.93
0.72
0.89
0.97
0.89
0.79
0.60
0.75
+
+classification experiments $(BD \times HC, BD \times SZ,$ and $SZ \times HC)$ for each model, for each of the two scenes. We trained each model on the full set of features described previously (§4.2).
+
+We report our results for Scene 1 and Scene 2 in Table 3. We observe that the consistently highest-performing model across both scenes is the random forest classifier, achieving strong accuracies ranging from 0.93 (BD × HC) to 0.96 (SZ versus either) in Scene 1 and 0.70 (HC × SZ) to 0.96 (BD × HC) in Scene 2. Greater variation among top-performing classifiers was observed when comparing $\mathrm{F}_1$ , with the random forest classifier still achieving the highest performance most of the time. Interestingly, classification appeared to be more challenging when discriminating between HC and SZ in Scene 2 transcripts. Nonetheless, the overall strong classification performance across the board for Scenes 1 and 2 using feature-based classification models suggests high learning validity for both the dataset and the features extracted.
+
+# 6.2 Transformer-based Models
+
+Applying pretrained Transformers to domain-specific tasks may produce more robust, dependable, and accurate models (Alsentzer et al., 2019). Since much recent success in NLP has been achieved using Transformer models, we also experiment with several using the same one-versus-one classification setting and data splits from our other experiments. We compare the performance of pretrained BERT base (Devlin et al., 2018), MentalBERT (Ji et al., 2022), and Mental-RoBERTa (Ji et al., 2022) models for our task. BERT base is a pretrained English model using a masked language modeling objective. It randomly masks a small percentage of words and learns to predict the masked samples. The model was trained for one million steps in batch sizes of 256 with fine-tuned hyperpa
+
+Table 3: Performance comparisons between classifiers on Scene 1 and Scene 2 transcripts. Results show accuracy (A) and $F_1$ score for one-versus-one classification between BD, SZ, and HC subjects.
+
+
Mo.
BD × HC
BD × SZ
HC × SZ
A
F1
A
F1
A
F1
BB
0.42
0.52
0.33
0.5
0.33
0.5
MB
0.37
0.62
0.38
0.42
0.38
0.49
MR
0.48
0.63
0.62
0.60
0.60
0.27
+
+Table 4: Performance comparisons between Transformers on Scene 2 transcripts. BB refers to BERT base, MB is MentalBERT, and MR is MentalRoBERTa.
+
+rameters set to: optimizer=Adam, learning rate=1e-4, $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ , and decay=0.01. MentalBERT and Mental-RoBERTa follow the same architecture but use dynamic masking and domain adaptive pretraining. The pretraining corpus includes depression, stress, and suicidal ideation data from Reddit. We passed subject utterances from our transcripts directly to these models for automated encoding of implicitly learned features.
+
+We present the results for a sample of these experiments (Scene 2 classifications of HC vs. SZ subjects) in Table 4. We observe much lower performances than seen with feature-based classifiers. There may be many reasons for this, ranging from characteristics of the data used for pretraining to inefficiencies in implicitly learned features relative to features engineered based on known psycholinguistic attributes. Since we do not observe promising results using pre-trained Transformer models and these models also do not lend themselves as easily as tools to facilitate linguistic analyses, we leave further probing of this to future work.
+
+# 7 Conclusion
+
+Publishing language data collected in clinical settings that is paired with validated psychiatric diagnoses is an essential first step towards realizing more realistic, medically relevant NLP applications
+
+in the mental health domain. In this work, we take that step and describe our new corpus developed in close consultation between NLP and psychiatric researchers and clinicians. The corpus includes manually transcribed interactions between clinical interviewers and healthy control subjects or those with diagnosed schizophrenia and bipolar disorder. We describe all data collection procedures, extract a wide range of promising linguistic features from the data, and conduct an extensive first set of analyses to document trends in linguistic behavior among the SZ, BD, and HC subject groups. We show that linguistic diversity manifests itself in various ways across subject populations.
+
+We hope that our work will diversify NLP research in the mental health domain beyond social media settings, and that it will open the door for more clinically valid studies of language behavior associated with diagnosed psychiatric conditions. All features extracted for this work are freely available on GitHub and can be downloaded without any further permission. The de-identified transcripts can be downloaded from the National Institute of Mental Health data archive, in keeping with the terms of our NIH reporting requirements and the corresponding research grant that funded this work. In the future, we plan to extend our study to also investigate spoken language and acoustic properties from the collected audiorecordings.
+
+# Limitations
+
+This work is limited by a few factors. First, although our dataset is large by psychiatric standards, its size is still limited compared to datasets used for many other modern NLP tasks. This prevents us from being able to productively use complex models that have achieved state-of-the-art performance in other tasks, as documented in §6.2 with our experiments using fine-tuned versions of BERT, MentalBERT, and Mental-RoBERTa. We note that a disadvantage of deep learning models is that they are less interpretable than feature-based counterparts; thus, since classifier performance is not a central goal of our work, the poor performance observed with pre-trained Transformers is not a crucial shortcoming. Our primary interest in the classification experiments described in Section 6 was to establish learning validity for our dataset.
+
+Second, although we explore a wide range of
+
+temporal, sentiment, psycholinguistic, emotion, and lexical diversity features in our experiments, our feature set does not comprehensively or conclusively cover all linguistic traits that may be of interest when analyzing the language behaviors of our target subject groups. Thus, our claims are limited by the boundaries of the conditions tested in our experiments—it may be that the most informative linguistic features are as yet undiscovered. We hope that this is indeed the case, and that future work develops new innovations that expand upon our findings.
+
+Finally, our dataset is restricted to English conversations. The extent to which this research generalizes to other languages, including those vastly different from or substantially less-resourced than English, is unknown for now. The collection of complementary data in other languages, and especially those with different morphological typology, is a promising direction for future work.
+
+# Ethical Considerations
+
+Several important ethical questions arise when working with data collected from human participants generally, and data dealing with mental health concerns specifically. We consider both questions here. We also point readers to our datasheet and other details regarding fair and inappropriate uses of our data in Appendix C.
+
+# Dataset Creation
+
+In collecting this data, we followed all codes of ethics laid out by the Association for Computational Linguistics, the United States of America's National Institutes of Health, and the U.S. National Institute of Mental Health. All universities, laboratories, hospitals, and research centers involved in this project have secured ethics approval from their Institutional Review Boards before working with any data. Data was collected from outpatients recruited through studies supported by the National Institute of Mental Health. Inclusion criteria were ability to provide informed written consent, diagnosis of either bipolar disorder or schizophrenia/schizoaffective disorder according to the Diagnostic and Statistical Manual of the American Psychiatry Association, and outpatient status at the time of assessment. Informed written consent was taken from all participants for audiot recording and de-identified research data sharing.
+
+Audiorecordings were professionally transcribed
+
+by a trusted third-party company. Any identifiable data was manually removed from the transcripts at the time of transcription, and transcripts were verified to be de-identified by members of the study team. No data that might point toward the identity of any person(s) was used in any way in this work, including for feature creation, modeling, or analysis, nor will it be shared at any time. Collected audiorecordings are stored securely and are not part of the data release (and are also inaccessible to some members of the study team).
+
+De-identified transcripts are shared in full compliance with all governing bodies involved, through the National Institute of Mental Health's data archive following federally mandated grant reporting and data sharing requirements. All parties interested in accessing the data will be required to complete the NIMH Data Archive Data Use Certification, which outlines terms and conditions for data use, collaboration with shared data, compliance with human subjects and institutional research requirements, and other information.[9] The data use certification is non-transferrable and recipients are not allowed to distribute, sell, or move data to other individuals, entities, or third-party systems unless they are authorized under a similar data use certification for the same permission group. The released transcripts include timestamps and de-identified utterances. Feature files (containing only the numeric feature vectors generated for each transcript using the procedures described in §4.2) are also available on GitHub at the link provided in this paper.
+
+# Intended Use
+
+The intended use for this dataset is to enable discovery and analysis of the linguistic characteristics and language behaviors associated with members of three subject groups: people with schizophrenia, people with bipolar disorder, and healthy controls. Although we provide results from proof-of-concept experiments to classify transcripts into subject groups, these are intended merely to demonstrate evidence of data validity and learnability, and the experimental inferences are provided to showcase linguistic differences between groups. This in turn establishes feasibility of the dataset as a language analysis resource for the target populations. We do not condone use of this dataset to develop models to automatically diagnose individ
+
+uals with mental health conditions, especially in the absence of feedback from trained professionals and psychiatric experts.
+
+When used as intended and when functioning correctly, we anticipate that models developed and analyses performed using this dataset may be used to facilitate discovery of novel linguistic biomarkers of schizophrenia or bipolar disorder. This information could be used to support mental health research. When used as intended but giving incorrect results, researchers may place undue importance on irrelevant linguistic biomarkers. Since this dataset is not intended for diagnostic purposes, this is unlikely to lead to real-world harm, although it may slow the progress of some psychiatric research as researchers attempt to replicate and verify results.
+
+Potential harms from misuse of the technology include the development of models to predict mental health status, and subsequent misprediction of serious mental health conditions. We reiterate that this dataset is not intended for diagnostic use, and that individuals seeking mental health care should always consult trained professionals. The National Institute of Mental Health's data archive includes a mechanism for logging research studies associated with the shared dataset. We will monitor this log and contact researchers who attempt to use the data for purposes outside its intended use.
+
+# Acknowledgements
+
+We thank the anonymous reviewers for their insightful feedback. Research reported in this publication was supported by the National Institute of Mental Health of the National Institutes of Health under award number R01MH116902. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
+
+# References
+
+Carlos Aguirre, Keith Harrigian, and Mark Dredze. 2021. Gender and racial fairness in depression research using social media. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2932-2949, Online. Association for Computational Linguistics.
+
+Ankit Aich and Natalie Parde. 2022. Are you really okay? a transfer learning-based approach for identification of underlying mental illnesses. In Proceedings of the Eighth Workshop on Computational Linguistics
+
+and Clinical Psychology, Seattle, WA. Association for Computational Linguistics.
+Emily Alsentzer, John R. Murphy, Willie Boag, WeiHung Weng, Di Jin, Tristan Naumann, and Matthew B. A. McDermott. 2019. Publicly available clinical bert embeddings.
+Silvio Amir, Mark Dredze, and John W. Ayers. 2019. Mental health surveillance over social media with digital cohorts. In Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 114-120, Minneapolis, Minnesota. Association for Computational Linguistics.
+Alina Arseniev-Koehler, Sharon Mozgai, and Stefan Scherer. 2018. What type of happiness are you looking for? - a closer look at detecting mental health from language. In Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic, pages 1-12, New Orleans, LA. Association for Computational Linguistics.
+Sujeevan Aseervatham, Anestis Antoniadis, Eric Gaussier, Michel Burlet, and Yves Denneulin. 2011. A sparse version of the ridge logistic regression for large-scale text categorization. Pattern Recognition Letters, 32:101-106.
+Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. European Language Resources Association (ELRA).
+Valentina Bambini, Giorgio Arcara, Margherita Bechi, Mariachiara Buonocore, Roberto Cavallaro, and Marta Bosia. 2016. The communicative impairment as a core feature of schizophrenia: Frequency of pragmatic deficit, cognitive substrates, and relation with quality of life. *Comprehensive Psychiatry*, 71:106-120.
+Kfir Bar, Vered Zilberstein, Ido Ziv, Heli Baram, Nachum Dershowitz, Samuel Itzikowitz, and Eiran Vadim Harel. 2019. Semantic characteristics of schizophrenic speech. In Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 84-93, Minneapolis, Minnesota. Association for Computational Linguistics.
+Dennis Becker, Ward van Breda, Burkhardt Funk, Mark Hoogendoorn, Jeroen Ruwaard, and Heleen Riper. 2018. Predictive modeling in e-mental health: A common language framework. *Internet Interventions*, 12:57–67.
+Gillinder Bedi, Facundo Carrillo, Guillermo A Cecchi, Diego Fernández Slezak, Mariano Sigman, Natalia B Mota, Sidarta Ribeiro, Daniel C Javitt, Mauro Copelli, and Cheryl M Corcoran. 2015. Automated analysis of free speech predicts psychosis onset in high-risk youths. npj Schizophrenia, 1(1):15030.
+
+Michael Birnbaum, Sindhu Kiranmai Ernala, Asra Rizvi, Munmun Choudhury, and John Kane. 2017. A collaborative approach to identifying social media markers of schizophrenia by employing machine learning and clinical appraisals. Journal of Medical Internet Research, 19:e289.
+Michael Birnbaum, Asra Rizvi, Christoph Correll, and John Kane. 2015. Role of social media and the internet in pathways to care for adolescents and young adults with psychotic disorders and non-psychotic mood disorders. Early intervention in psychiatry, 11.
+Ameni Bouaziz, Christel Dartigues-Pallez, Célia da Costa Pereira, Frédéric Precioso, and Patrick Lloret. 2014. Short text classification using semantic random forest. In Data Warehousing and Knowledge Discovery, pages 288-299, Cham. Springer International Publishing.
+Ryan Boyd, Ashwini Ashokkumar, Sarah Seraj, and James Pennebaker. 2022. The development and psychometric properties of liwc-22.
+Sandra Bucci, Matthias Schwannauer, and Natalie Berry. 2019. The digital revolution and its impact on mental health care. *Psychology and Psychotherapy: Theory, Research and Practice*, 92.
+J. B Carol. 1964. Language and thought. _Francais Moderne_, 46:25-32.
+Victor M. Castro, Jessica Minnier, Shawn N. Murphy, Isaac Kohane, Susanne E. Churchill, Vivian Gainer, Tianxi Cai, Alison G. Hoffnagle, Yael Dai, Stefanie Block, Sydney R. Weill, Mireya Nadal-Vicens, Alisha R. Pollastri, J. Niels Rosenquist, Sergey Goryachev, Dost Ongur, Pamela Sklar, Roy H. Perlis, Jordan W. Smoller, Jordan W. Smoller, Roy H. Perlis, Phil Hyoun Lee, Victor M. Castro, Alison G. Hoffnagle, Pamela Sklar, Eli A. Stahl, Shaun M. Purcell, Douglas M. Ruderfer, Alexander W. Charney, Panos Roussos, Carlos Pato, Michele Pato, Helen Medeiros, Janet Sobel, Nick Craddock, Ian Jones, Liz Forty, Arianna DiFlorio, Elaine Green, Lisa Jones, Katherine Dunjewski, Mikael Landén, Christina Hultman, Anders Jureus, Sarah Bergen, Oscar Svantesson, Steven McCarroll, Jennifer Moran, Jordan W. Smoller, Kimberly Chambert, and Richard A. Belliveau. 2015. Validation of electronic health record phenotyping of bipolar disorder cases and controls. American Journal of Psychiatry, 172(4):363-372. PMID: 25827034.
+John W Chotlos. 1944. A statistical and comparative analysis of individual written language samples. *Psychological Monographs*, 56:75-111.
+Glen Coppersmith, Mark Dredze, Craig Harman, Kristy Hollingshead, and Margaret Mitchell. 2015. CLPsych 2015 shared task: Depression and PTSD on Twitter. In Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 31-39, Denver, Colorado. Association for Computational Linguistics.
+
+Cheryl Mary Corcoran and Guillermo A. Cecchi. 2020. Using language processing and speech analysis for the identification of psychosis and other disorders. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 5(8):770-779. Understanding the Nature and Treatment of Psychopathology: Letting the Data Guide the Way.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
+Firdaus Dhabhar. 2018. The short-term stress response - mother nature's mechanism for enhancing protection and performance under conditions of threat, challenge, and opportunity. Frontiers in Neuroendocrinology, 49.
+Daniel Dugast. 1978. On what is the notion of theoret-icalextent of the vocabulary based. Frenchais (Le) Moderne Paris, 46(1):25-32.
+Clyde M. Elmore and Donald R. Gorham. 1957. Measuring the impairment of the abstracting function with the proverbs test. Journal of Clinical Psychology, 13(3):263-266.
+Brita Elvevåg, Peter W. Foltz, Daniel R. Weinberger, and Terry E. Goldberg. 2007. Quantifying incoherence in speech: An automated methodology and novel application to schizophrenia. *Schizophrenia Research*, 93(1):304-316.
+Sindhu Kiranmai Ernala, Michael L. Birnbaum, Kristin A. Candan, Asra F. Rizvi, William A. Sterling, John M. Kane, and Munmun De Choudhury. 2019. Methodological gaps in predicting mental health states from social media: Triangulating diagnostic signals. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, page 1-16, New York, NY, USA. Association for Computing Machinery.
+Christian Fuchs. 2015. Culture and Economy in the Age of Social Media. Taylor and Francis Group.
+Alexander Genkin, David Lewis, and David Madigan. 2007. Large-scale bayesian logistic regression for text categorization. Technometrics, 49.
+Kris Gowen, Matthew Deschaine, Darcy Gruttadara, and Dana Markey. 2012. Young adults with mental health conditions and social networking websites: Seeking tools to build community. *Psychiatric rehabilitation* journal, 35:245-50.
+Melissa Graham, Elizabeth Avery, and Sejin Park. 2015. The role of social media in local government crisis communications. *Public Relations Review*, 41.
+E. Dario Gutierrez, Guillermo Cecchi, Cheryl Corcoran, and Philip Corlett. 2017. Using automated metaphor identification to aid in detection and prediction of first-episode schizophrenia. In Proceedings of the
+
+2017 Conference on Empirical Methods in Natural Language Processing, pages 2923-2930, Copenhagen, Denmark. Association for Computational Linguistics.
+Keith Harrigian, Carlos Aguirre, and Mark Dredze. 2020. On the state of social media data for mental health research.
+Daisy Harvey, Fiona Lobban, Paul Rayson, Aaron Warner, and Steven Jones. 2022. Natural language processing methods and bipolar disorder: Scoping review. JMIR Ment Health, 9(4):e35928.
+Jinrong He, Lixin Ding, Lei Jiang, and Ling Ma. 2014. Kernel ridge regression classification. Proceedings of the International Joint Conference on Neural Networks, pages 2263-2267.
+Herdan. 1960. Quantitative linguistics. London, Butterworth.
+Ahmed Hussein Orabi, Prasadith Buddhitha, Mahmoud Hussein Orabi, and Diana Inkpen. 2018. Deep learning for depression detection of Twitter users. In Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic, pages 88-97, New Orleans, LA. Association for Computational Linguistics.
+Shaoxiong Ji, Tianlin Zhang, Luna Ansari, Jie Fu, Prayag Tiwari, and Erik Cambria. 2022. MentalBERT: Publicly Available Pretrained Language Models for Mental Healthcare. In Proceedings of LREC.
+Thorsten Joachims. 2002. Learning to classify text using support vector machines, volume 668. Springer Science & Business Media.
+Rajshree Jodha, BC Gaur Sanjay, KR Chowdhary, and Amit Mishra. 2018. Text classification using knn with different features selection methods. Text Classification using KNN with different Features Selection Methods, 8(1):8-8.
+Timothy Jurka. 2012. maxent: An r package for low-memory multinomial logistic regression with support for semi-automated text classification. The R Journal, 4.
+Timothy P Jurka, Loren Collingwood, Amber E Boydstun, Emiliano Grossman, et al. 2013. Rtexttools: A supervised learning package for text classification. RJournal, 5(1):6-12.
+Jan Kalbitzer, Thomas Mell, Felix Bermpohl, Michael Rapp, and Andreas Heinz. 2014. Twitter psychosis a rare variation or a distinct syndrome? The Journal of nervous and mental disease, 202:623.
+Maria Kapantzoglou, Gerasimos Fergadiotis, and Alejandra Auza. 2019. Psychometric evaluation of lexical diversity indices in Spanish narrative samples from children with and without developmental language disorder. Journal of Speech, Language, and Hearing Research, 62:1-14.
+
+J.S. Kasanin, editor. 1944. Language and thought in schizophrenia. University of California Press, Berkeley, CA, US. ID: 1944-01428-000.
+Prasadith Kirinde Gamaarachchige and Diana Inkpen. 2019. Multi-task, multi-channel, multi-input learning for mental illness detection using social media text. In Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019), pages 54-64, Hong Kong. Association for Computational Linguistics.
+Nithin Krishna, Bernard Fischer, Moshe Miller, Kelly Register-Brown, Kathleen Patchan, and Ann Hackman. 2012. The role of social media networks in psychotic disorders: A case report. General hospital psychiatry, 35.
+Soon Li Lee, Miriam Park, and Cai Lian Tam. 2015. The relationship between facebook attachment and obsessive-compulsive disorder severity. In *Cyberpsychology: Journal of Psychosocial Research on Cyberspace*, volume 9.
+Wee Sun Lee and Bing Liu. 2003. Learning with positive and unlabeled examples using weighted logistic regression. In Proceedings of the Twentieth International Conference on International Conference on Machine Learning, ICML'03, page 448-455. AAAI Press.
+Feea R. Leifker, Thomas L. Patterson, Christopher R. Bowie, Brent T. Mausbach, and Philip D. Harvey. 2010. Psychometric properties of performance-based measurements of functional capacity: Test-retest reliability, practice effects, and potential sensitivity to change. *Schizophrenia Research*, 119(1):246-252.
+Liu Lin, Jaime Sidani, Ariel Shensa, Ana Radovic, Elizabeth Miller, Jason Colditz, Beth Hoffman, Leila Giles, and Brian Primack. 2016. Association between social media use and depression among u.s. young adults. Depression and anxiety, 33.
+Christopher A. Lovejoy. 2019. Technology and mental health: The role of artificial intelligence. European Psychiatry, 55:1-3.
+Annalise Mabe, Jean Forney, and Pamela Keel. 2014. Do you "like" my photo? facebook use maintains eating disorder risk. International Journal of Eating Disorders, 47.
+Heinz-Dieter Mass. 1972. Über den zusammenhang zwischen wortschatzumfang und länger eines textes. Zeitschrift für Literaturwissenschaft und Linguistik, 2(8):73.
+Matthew Matero, Akash Idnani, Youngseo Son, Salvatore Giorgi, Huy Vu, Mohammad Zamani, Parth Limbachiya, Sharath Chandra Guntuku, and H. Andrew Schwartz. 2019a. Suicide risk assessment with multi-level dual-context language and BERT. In Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 39-44, Minneapolis, Minnesota. Association for Computational Linguistics.
+
+Matthew Matero, Akash Idnani, Youngseo Son, Salvatore Giorgi, Huy Vu, Mohammad Zamani, Parth Limbachiya, Sharath Chandra Guntuku, and H. Andrew Schwartz. 2019b. Suicide risk assessment with multi-level dual-context language and BERT. In Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 39-44, Minneapolis, Minnesota. Association for Computational Linguistics.
+Michelle L. Miller, Martin T. Strassnig, Evelyn Bromet, Colin A. Depp, Katherine Jonas, Wenxuan Lin, Raeanne C. Moore, Thomas L. Patterson, David L. Penn, Amy E. Pinkham, Roman A. Kotov, and Philip D. Harvey. 2021. Performance-based assessment of social skills in a large sample of participants with schizophrenia, bipolar disorder and healthy controls: Correlates of social competence and social appropriateness. *Schizophrenia Research*, 236:80-86.
+Saif Mohammad and Peter Turney. 2010. Emotions evoked by common words and phrases: Using Mechanical Turk to create an emotion lexicon. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 26-34, Los Angeles, CA. Association for Computational Linguistics.
+Saif M. Mohammad and Peter D. Turney. 2013. Crowdsourcing a word-emotion association lexicon. Computational Intelligence, 29(3):436-465.
+Michelle Morales, Stefan Scherer, and Rivka Levitan. 2018. A linguistically-informed fusion approach for multimodal depression detection. In Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic, pages 13-24, New Orleans, LA. Association for Computational Linguistics.
+Natalia B. Mota, Nivaldo A. P. Vasconcelos, Nathalia Lemos, Ana C. Pieretti, Osame Kinouchi, Guillermo A. Cecchi, Mauro Copelli, and Sidarta Ribeiro. 2012. Speech graphs provide a quantitative measure of thought disorder in psychosis. PLOS ONE, 7(4):1-9.
+Denis Newman-Griffis, Jill Fain Lehman, Carolyn Rose, and Harry Hochheiser. 2021. Translational NLP: A new paradigm and general principles for natural language processing research. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4125-4138, Online. Association for Computational Linguistics.
+Uri Nitzan, Efrat Shoshan, Shaul Lev-Ran, and Shmuel Fennig. 2011. Internet-related psychosis - a sign of the times? The Israel journal of psychiatry and related sciences, 48:207-11.
+Igor Pantic, Aleksandar Damjanovic, Jovana Todorovic, Dubravka Topalovic, Dragana Bojovic Jovic, Sinisa Ristic, and Senka Pantic. 2012. Association between online social networking and depression in
+
+high school students: Behavioral physiology viewpoint. Psychiatria Danubina, 24:90-3.
+Thomas L Patterson, Sherry Moscona, Christine L McKibbin, Kevin Davidson, and Dilip V Jeste. 2001. Social skills performance assessment among older patients with schizophrenia. *Schizophrenia Research*, 48(2):351-360.
+C. Perlini, A. Marini, M. Garzitto, M. Isola, S. Cerruti, V. Marinelli, G. Rambaldelli, A. Ferro, L. Tomelleri, N. Dusi, M. Bellani, M. Tansella, F. Fabbro, and P. Brambilla. 2012. Linguistic production and syntactic comprehension in schizophrenia and bipolar disorder. Acta Psychiatrica Scandinavica, 126(5):363-376.
+Andrew Perrin. 2015. Social media usage. Pew research center, pages 52-68.
+GUIRAUD Pierre. 1959. Problegravemes et meacute methodes de la statistique linguistique. Payol.
+Tomas Pranckevicius and Virginijus Marcinkevicius. 2017. Comparison of naive bayes, random forest, decision tree, support vector machines, and logistic regression classifiers for text reviews classification. *Baltic Journal of Modern Computing*, 5(2):221.
+Randall Ratana, Hamid Sharifzadeh, Jamuna Krishnan, and Shaoning Pang. 2019. A comprehensive review of computational methods for automatic prediction of schizophrenia with insight into indigenous populations. Frontiers in Psychiatry, 10.
+Larry Rosen, Kelly Whaling, S Rab, Mark Carrier, and Nancy Cheever. 2013. Is facebook creating "idisorders"? the link between clinical symptoms of psychiatric disorders and technology use, attitudes and anxiety. Computers in Human Behavior, 29:1243-1254.
+Ramin Safa, Peyman Bayat, and Leila Moghtader. 2022. Automatic detection of depression symptoms in twitter using multimodal analysis. The Journal of Supercomputing, 78.
+Ivan Sekulic and Michael Strube. 2019. Adapting deep learning methods for mental health prediction on social media. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 322-327, Hong Kong, China. Association for Computational Linguistics.
+Nabia Shahreen, Mahfuze Subhani, and Md Mahfuzur Rahman. 2018. Suicidal trend analysis of twitter using machine learning and neural network. In 2018 International Conference on Bangla Speech and Language Processing (ICBSLP), pages 1-5.
+Ravinder Singh, Jiahua Du, Yanchun Zhang, Hua Wang, Yuan Miao, Omid Sianaki, and Anhaar Ulhaq. 2020. A Framework for Early Detection of Antisocial Behavior on Twitter Using Natural Language Processing, pages 484-495. Springer Link.
+
+April Smith, Jennifer Hames, and Thomas Joiner. 2013. Status update: Maladaptive facebook usage predicts increases in body dissatisfaction and bulimic symptoms. In Journal of affective disorders, volume 149.
+HH Somers. 1966. Statistical methods in literary analysis. The computer and literary style, 128:140.
+Michael M. Tadesse, Hongfei Lin, Bo Xu, and Liang Yang. 2019. Detection of depression-related posts in reddit social media forum. IEEE Access, 7:44883-44893.
+Mildred Templin. 1957. Certain Language Skills in Children: Their Development and Interrelationships. University of Minnesota Press.
+Alina Trifan and José Luís Oliveira. 2019. Bioinfo@uavr at erisk 2019: delving into social media texts for the early detection of mental and food disorders. In CLEF.
+Bruno Trstenjak, Sasa Mikac, and Dzenana Donko. 2014. Knn with tfidf based framework for text categorization. Procedia Engineering, 69:1356 - 1364. 24th DAAAM International Symposium on Intelligent Manufacturing and Automation, 2013.
+Elsbeth Turcan and Kathy McKeown. 2019a. Dreaddit: A Reddit dataset for stress analysis in social media. In Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019), pages 97-107, Hong Kong. Association for Computational Linguistics.
+Elsbeth Turcan and Kathy McKeown. 2019b. Dreaddit: A Reddit dataset for stress analysis in social media. In Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019), pages 97-107, Hong Kong. Association for Computational Linguistics.
+Mina Valizadeh and Natalie Parde. 2022. The AI doctor is in: A survey of task-oriented dialogue systems for healthcare applications. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6638-6660, Dublin, Ireland. Association for Computational Linguistics.
+Mina Valizadeh, Pardis Ranjbar-Noiey, Cornelia Caragea, and Natalie Parde. 2021. Identifying medical self-disclosure in online communities. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4398-4408, Online. Association for Computational Linguistics.
+Yufei Wang, Stephen Wan, and Cecile Paris. 2016a. The role of features and context on suicide ideation detection. In Proceedings of the Australasian Language Technology Association Workshop 2016, pages 94-102, Melbourne, Australia.
+
+Yufei Wang, Stephen Wan, and Cecile Paris. 2016b. The role of features and context on suicide ideation detection. In Proceedings of the Australasian Language Technology Association Workshop 2016, pages 94-102, Melbourne, Australia.
+Genta Indra Winata, Onno Pepijn Kampman, and Pascale Fung. 2018. Attention-based LSTM for psychological stress detection from spoken language using distant supervision. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6204-6208.
+Baoxun Xu, Xiufeng Guo, Yunming Ye, and Jiefeng Cheng. 2012. An improved random forest classifier for text categorization. Journal of Computers, 7.
+Hao Yan, Ellen Fitzsimmons-Craft, Micah Goodman, Melissa Krauss, Sanmay Das, and Patty Cavazos-Rehg. 2019. Automatic detection of eating disorder-related social media posts that could benefit from a mental health intervention. International Journal of Eating Disorders, 52.
+Yiming Yang. 2001. A study on thresholding strategies for text categorization. SIGIR Forum (ACM Special Interest Group on Information Retrieval).
+Zhou Yong, Li Youwen, and Xia Shixiong. 2009. An improved knn text classification algorithm based on clustering. Journal of Computers, 4.
+Jianlong Zhou, Hamad Zogan, Shuiqiao Yang, Shoaib Jameel, Guandong Xu, and Fang Chen. 2021. Detecting community depression dynamics due to COVID-19 pandemic in australia. IEEE Transactions on Computational Social Systems, PP:1-10.
+Ayah Zirikly, Philip Resnik, Ozlem Uzuner, and Kristy Hollingshead. 2019a. CLPsych 2019 shared task: Predicting the degree of suicide risk in Reddit posts. In Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 24-33, Minneapolis, Minnesota. Association for Computational Linguistics.
+Ayah Zirikly, Philip Resnik, Ozlem Uzuner, and Kristy Hollingshead. 2019b. CLPsych 2019 shared task: Predicting the degree of suicide risk in Reddit posts. In Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 24-33, Minneapolis, Minnesota. Association for Computational Linguistics.
+Jonathan Zomick, Sarah Ita Levitan, and Mark Serper. 2019. Linguistic analysis of schizophrenia in Reddit posts. In Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 74-83, Minneapolis, Minnesota. Association for Computational Linguistics.
+
+# A Appendix A: Sample Transcripts
+
+# A.1 Scene 1: Introducing Yourself
+
+Examiner: Can you tell me, are the residents in this building friendly?
+
+Participant: I don't really know because I keep to myself. I don't really socialize with other residents to find out what they're really like. Everyone is really nice, definitely knock on their door to see what they're doing or not. Introduce yourself and find out you know what their place is like, or you know, who they live with, all that stuff - kind of what goes on in your apartment.
+
+# Examiner: I see.
+
+Participant: Their apartment, not your apartment. Um, if you have a car, you can park in the resident parking. It talks about having maintenance having stuff done at your place and all that.
+
+# A.2 Scene 2: Confronting Your Landlord
+
+Participant: - Do they have a key to my place to unlock it? Or do I need to be there in the apartment for them to get inside and look at the leak? Or do I need a key? Or do they need a key? Not me. Do they need me physically there in the apartment to see the leak? Or, two, do they need a key from me to get inside the apartment to do the leak, if that case I need to get on my errands by then.
+
+Examiner: Um, so I have a list, and you're on the list. But there are other problems that are more serious.
+
+Participant: Okay, but this leak is getting worse, and I would like for you to try and get back to me in the next possible days to let me know what's going on with the leak. Or I might have to threaten to move out because this is unright and you are not being justice with this. And, um, I think it's unfair that you're putting other people that are higher ahead and their problems ahead of mine. I think if I'm paying your rent and your deposit, and if I had a pet or whatever and I paid the deposit for that too.
+
+# B Appendix B: Extended Visualization
+
+Figures 5 and 6 visualize the feature distributions that complement those provided in the main paper (Figures 2 and 3). The figures provided in the main paper correspond to Scene 2 from our dataset, whereas the figures from this section correspond to Scene 1.
+
+# C Appendix C: Datasheet and Fair and Inappropriate Usage
+
+# C.1 Data Collection and Creation
+
+Data in the form of audiorecordings was collected at the University of California San Diego, the Uni
+
+
+(a) Trust Scores (Scene 1)
+
+
+(a) Mean Time (Scene 1)
+
+
+(b) Lexical Diversity Scores (Scene 1)
+
+
+(b) Interpersonal Conflict (Scene 1)
+Figure 5: Blue represents healthy controls, orange represents schizophrenia, and green represents bipolar. Figure is best viewed in color. Figure shows violin plots with quartiles, medians, and interquartile ranges across classes Healthy, Schizophrenic, and Bipolar.
+Figure 6: Blue represents healthy controls, orange represents schizophrenia, and green represents bipolar. Figure is best viewed in color. Figure shows violin plots with quartiles, medians, and interquartile ranges across classes Healthy, Schizophrenic, and Bipolar.
+
+versity of Miami, and the University of Texas at Dallas. Audiorecordings were then sent to a professional third-party service for transcription. De-identification was performed by the transcription service, and verified on site by the study teams. The de-identified data was processed by the study team at the University of Illinois Chicago.
+
+Participants provided written informed consent. No identifying information such as name or birth date was collected. Demographic information such as biological sex and race were collected to help in future studies, but this information is not released publicly and will not be shared with others. Descriptive statistics of the participant demographics are provided in Section 3.3.
+
+# C.2 Intended Audience
+
+The intended audience for this dataset includes psychiatric and computer science researchers, and oth
+
+ers interested in understanding language patterns common in people with diagnosed mental health concerns. The intended use for this data is to enable discovery and analysis of the linguistic characteristics and language behaviors associated with people with schizophrenia, people with bipolar disorder, and healthy controls. We do not intend for this dataset to be used for automated diagnostic purposes, and we do not encourage others to attempt to replace psychological or psychiatric treatment with classification or deep learning methods.
+
+# C.3 Validity of Diagnoses
+
+Recruited subjects were clinically diagnosed as having a DSM-IV diagnosis of schizophrenia or schizoaffective disorder, and being medicated for the same. Subjects with bipolar disorder met the conditions defined in the APA's DSM-5. Healthy controls did not have a clinical diagnosis for either
+
+disorder.
+
+The data was collected at the University of California San Diego, the University of Miami, and the University of Texas at Dallas under clinical supervision with medical experts on scene. All labels are clinically valid. Changing them for any reason after acquiring the data is a violation of ethical code.
+
+# C.4 Fair Uses
+
+Fair usage of this dataset includes performing data analyses and developing methods to understand emotions, speech variations, feature validity, and language differences among people with schizophrenia or bipolar disorder, or healthy subjects. Data was collected under controlled experimental settings and underwent rigorous de-identification processes. By using this data you agree to participate only in experiments that do not undermine the validity of clinical diagnoses provided by the original labels. Visualization of data patterns, user distribution, language changes, and emotional changes across populations are all fair uses of the data.
+
+# C.5 Interpreting the Paper
+
+The paper introduces a novel, clinically valid dataset that enables the study of language in the context of diagnosed mental health conditions. In addition to describing the data, we provide a detailed analysis of temporal, sentiment, psycholinguistic, emotion, and lexical diversity features extracted from the data. We also show visual aids to facilitate understanding of this analysis. We provide classification results for a task designed to categorize transcripts into groups only to offer evidence of the dataset's validity for automated analysis problems. We do not intend to suggest that a machine learning model can accurately predict an individual's mental health from 3-4 minutes of transcribed conversation.
+
+# C.6 Inappropriate Uses
+
+The data can only be downloaded directly from the National Institute of Mental Health's data archive. Privately distributing the data is an inappropriate usage. Any attempt to try to identify the subjects is also an inappropriate use. Other inappropriate uses may include but are not limited to:
+
+- Augmenting the data for machine learning or deep learning purposes
+
+- Annotating (or re-annotating) the data on your own
+- Running speech classifiers to try to predict speaker identities
+- Sharing the data with others on your own
+- Stating that a person's mental health condition can be accurately predicted based on their speech transcript
+
+We hope that this data will diversify NLP research in the mental health domain and open new opportunities for interdisciplinary research. We remind all readers and users of this dataset to respect the fairness and ethical codes laid out by the National Institutes of Health, the Association for Computational Linguistics, and the National Institute for Mental Health.
\ No newline at end of file
diff --git a/towardsintelligentclinicallyinformedlanguageanalysesofpeoplewithbipolardisorderandschizophrenia/images.zip b/towardsintelligentclinicallyinformedlanguageanalysesofpeoplewithbipolardisorderandschizophrenia/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..82848eefe16e1b4ab9fcc5cc64a434f61813b270
--- /dev/null
+++ b/towardsintelligentclinicallyinformedlanguageanalysesofpeoplewithbipolardisorderandschizophrenia/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ad29bcfe9d91dd0dfc8a3eadfd13ced2c7af6ee01fca43d4aff46ec75c20f689
+size 367427
diff --git a/towardsintelligentclinicallyinformedlanguageanalysesofpeoplewithbipolardisorderandschizophrenia/layout.json b/towardsintelligentclinicallyinformedlanguageanalysesofpeoplewithbipolardisorderandschizophrenia/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..329f13155c1d504a58f33349ad7a607c3a82b954
--- /dev/null
+++ b/towardsintelligentclinicallyinformedlanguageanalysesofpeoplewithbipolardisorderandschizophrenia/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4bc6017d1904912c5106d69cf49337319d1d3eed7c8d3d9c2d7d9ef988bd24aa
+size 525352
diff --git a/towardsintentionunderstandinginsuicidalriskassessmentwithnaturallanguageprocessing/ebbfc16f-bfb8-4d7d-a797-34e15b608f7a_content_list.json b/towardsintentionunderstandinginsuicidalriskassessmentwithnaturallanguageprocessing/ebbfc16f-bfb8-4d7d-a797-34e15b608f7a_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..0889ecbf9fc48534ab4a68f9e5f7fe5f9c32c70f
--- /dev/null
+++ b/towardsintentionunderstandinginsuicidalriskassessmentwithnaturallanguageprocessing/ebbfc16f-bfb8-4d7d-a797-34e15b608f7a_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a0562f7a484c115dd58b51ee5254186c63bd3efca0dce0eba6f3f99862f90b54
+size 70957
diff --git a/towardsintentionunderstandinginsuicidalriskassessmentwithnaturallanguageprocessing/ebbfc16f-bfb8-4d7d-a797-34e15b608f7a_model.json b/towardsintentionunderstandinginsuicidalriskassessmentwithnaturallanguageprocessing/ebbfc16f-bfb8-4d7d-a797-34e15b608f7a_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..a5c40607aa40abff188d761a1679317cdd916cce
--- /dev/null
+++ b/towardsintentionunderstandinginsuicidalriskassessmentwithnaturallanguageprocessing/ebbfc16f-bfb8-4d7d-a797-34e15b608f7a_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cab32b53ffb23497585e063d19326195b2bf3b631260a3b124177a1f206aed99
+size 91088
diff --git a/towardsintentionunderstandinginsuicidalriskassessmentwithnaturallanguageprocessing/ebbfc16f-bfb8-4d7d-a797-34e15b608f7a_origin.pdf b/towardsintentionunderstandinginsuicidalriskassessmentwithnaturallanguageprocessing/ebbfc16f-bfb8-4d7d-a797-34e15b608f7a_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..41a3092af7f017980312c925502214e667d839e4
--- /dev/null
+++ b/towardsintentionunderstandinginsuicidalriskassessmentwithnaturallanguageprocessing/ebbfc16f-bfb8-4d7d-a797-34e15b608f7a_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0a98bb62270f1c08cd87d40ce4b6deefc0cf4ddd92f8d6852615ba3c09499d5d
+size 377447
diff --git a/towardsintentionunderstandinginsuicidalriskassessmentwithnaturallanguageprocessing/full.md b/towardsintentionunderstandinginsuicidalriskassessmentwithnaturallanguageprocessing/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..4278fb1cd44f30139ccd928b2a4506b5aad8a938
--- /dev/null
+++ b/towardsintentionunderstandinginsuicidalriskassessmentwithnaturallanguageprocessing/full.md
@@ -0,0 +1,273 @@
+# Towards Intention Understanding in Suicidal Risk Assessment with Natural Language Processing
+
+Shaoxiong Ji
+
+Aalto University
+
+shaoxiong.ji@aalto.fi
+
+# Abstract
+
+Recent applications of natural language processing techniques to suicidal ideation detection and risk assessment frame the detection or assessment task as a text classification problem. Recent advances have developed many models, especially deep learning models, to boost predictive performance. Though the performance (in terms of aggregated evaluation scores) is improving, this position paper urges that better intention understanding is required for reliable suicidal risk assessment with computational methods. This paper reflects the state of natural language processing applied to suicide-associated text classification tasks, differentiates suicidal risk assessment and intention understanding, and points out potential limitations of sentiment features and pretrained language models in suicidal intention understanding. Besides, it urges the necessity for sequential intention understanding and risk assessment, discusses some critical issues in evaluation such as uncertainty, and studies the lack of benchmarks.
+
+# 1 Introduction
+
+Warning: this paper contains text examples that are negative, depressive, or adverse.
+
+Suicide is a global problem, with most industrialized countries and many emerging markets seeing exceptionally high rates. The latest WHO publication on suicide study, $^{1}$ entitled suicide worldwide in 2019, reported that more than 700,000 people die by suicide every year around the world, with one suicide case for every 100 deaths. According to a report on mental health from the World Health Organization, $^{2}$ one out of every four persons in the world lives with mental disorders to some extent.
+
+And 3 out of 4 people with severe mental disorders do not receive treatment, worsening the problem. Previous research has found a link between mental problems and the likelihood of suicide (Windfuhr and Kapur, 2011). More people live with mental health issues during particular periods, such as the pandemic. Many may not seek care from mental health practitioners due to insufficient mental health services.
+
+Social networking sites offer an essential forum for communication and information sharing online. Online discussions also include much harmful information and can result in issues like cyberstalk or cyberbullying. The result is severe and dangerous because the wrong information is frequently engaged in social cruelty, causing rumors or even mental harm. Research shows a link between cyberbullying and suicide (Hinduja and Patchin, 2010). Victims exposed to too many negative messages or events may become depressed and desperate. Some of them are likely to choose suicide as their option and ask for suicide methods on online social network websites (Starcevic and Aboujaoude, 2015), which is called cybersuicide. Furthermore, some social groups even persuade other individuals to commit suicide together, namely cybersuicide pact. Thus, it is necessary to understand users' intentions by mining their conversations, modeling their profiles, and analyzing their social groups.
+
+On the other hand, social networking also offers a channel for peer support among those living with mental illnesses, allowing social workers to provide proactive social care and early intervention. People experiencing a mental health condition sometimes post their feelings or experience on online discussion forums like Reddit or social networking websites such as Twitter and Weibo. That user-generated content is an essential portal to facilitate automated suicidal risk assessment and allow
+
+
+Figure 1: The number of scholarly articles published in the past twelve years. The black line with dot markers presents the records searched with query of "suicide" and the blue line with asterisk markers shows the records with query of "suicide" and "NLP".
+
+social workers to provide effective prevention.
+
+Computational methods using natural language processing techniques have been studied to classify mental disorders and suicide from various data types, mainly social media posts. See the recent review of NLP applied to mental illness detection (Zhang et al., 2022). Suicidal risk assessment in social media is a task that categorizes given social posts into different levels of suicidal risks. Early detection of suicide paves the way for suicide prevention. Research on suicide with NLP is booming. Table 1 shows the number of scholarly articles searched by the Google Scholar search engine in the past twelve years. The number of research papers on suicide declined after 2013, and the number of NLP papers on suicide increased steadily.
+
+Suicide attempt or completed suicide usually starts from suicidal ideation. Thus, early assessment of the intensity of suicidal ideation and effective intention understanding can help predict later suicidal risk better. A recent review (Abdulsalam and Althali, 2022) and a comparative analysis of recent techniques (Haque et al., 2022) discussed many publications for suicidal risk classification. For example, some construct sentiment lexicons for posts regarding suicidality and build machine learning classifiers (Sarsam et al., 2021). Recent deep learning-based models build comprehensive neural architectures to improve the classification performance, including attentive relation network (Ji et al., 2022a), enhanced word embedding (Cao et al., 2019), transformer networks (Zhang et al., 2021), and hyperbolic user representation learning (Sawhney et al., 2021c).
+
+However, most existing works that approach suicidal risk assessment as a text classification problem consider less about human intention understanding. Human intention or mental state understanding is a complex cognitive process. This position paper argues that suicidal risk assessment with natural language processing needs a better intention understanding of the social posts. It points out several unresolved issues and open questions, including the limitation of sentiment features and pretrained language models, the importance of sequential intention understanding, the critical ingredients in evaluation, and the need for more benchmarks for visual-grounded intention understanding and multilingualism.
+
+This paper is organized as follows. Section 2.1 highlights the challenges of suicidal intention understanding and differentiates it from suicidal risk assessment. Section 3 proposes the setting of intention understanding in sequence (e.g., the timeline of user's posts). We discuss the obstacles of evaluation and benchmarks in Section 4 and 5. Section 6 reviews additional related work. We conclude this paper in Section 7.
+
+# 2 Suicidal Intention Understanding
+
+Many factors can lead to suicide, i.e., users' personality (such as hopelessness, severe anxiety, schizophrenia, alcoholism, and impulsivity), social factors (like social isolation, too much exposure to deaths), and adverse life events (including traumatic events, physical illness, affective disorders, and previous suicide attempts). Due to the prevalence of social networks, online users may be exposed to risk factors such as talking about suicide, worsening mood, and cybersuicide pacts. Suicide is also related to other matters, such as access to lethal suicide methods in the physical world.
+
+Adequate measurement of suicidality is vital to assessing people at risk for suicide attempts. The scale of suicidal ideation (SSI) (Beck et al., 1979), one of the classic measures of suicidality, develops a 19-item instrument for quantifying suicidal intention in clinical research. Its variables include race, age, education, civil status, employment status, and psychiatric diagnosis. As an instrument of clinical psychology, SSI was found capable of assessing the subtle changes in levels of depression and hopelessness.
+
+Whether the NLP model can mimic clinical professionals' screening practice and understand the
+
+inherent intention of people living with mental conditions and suicidality becomes a challenging problem. This section discusses intention understanding in suicidal risk assessment with natural language processing.
+
+# 2.1 Intention Understanding
+
+Human brains infer people's intentions during communication, and decoding human intention involves complex social cognition. Early works on mining people's intentions from social media consider key elements such as intent indicators and intent keywords as features for intention classification (Wang et al., 2015). However, intention understanding in suicidal risk assessment is challenging when it comes to social media scenarios. The communicative nature of risk screening conveys massive sociolinguistic information, gestures, acoustic information, or facial expressions. Unlike clinical screening, social content usually only contains texts when people who live with suicidal ideation do not post selfies or other pictures. Intentions underlying communication in the cyber world are hard to be understood from semantic patterns or syntactic structures in the natural language.
+
+Neuroimaging research reveals that two different neural systems process immediate goals and long-term intentions, and these two systems decode intentions depending on the shared goal of interacting agents (Canessa et al., 2012). Immediate goals are associated with action understanding, while the understanding of long-term intention is associated with the mentalizing system, involving human's mental states (Canessa et al., 2012). Taking the text "I am going to buy a knife" as an example, the immediate goal can be self-harm if we consider that the context is more about mental issues. The long-term intention can be taking one's own life due to chronic suffering from some mental illness. Human intentional communication can also disguise one's actual intention intentionally, making human intention, especially long-term intentions, hard to be decided based on the content of the present message.
+
+# 2.2 Suicidal Intent v.s. Suicidal Risk
+
+Current research classifies suicidal risks into different severities, e.g., high, medium, or low-risk. Suicidal intention is one of the crucial aspects of suicidal risk assessment or stratification. However, current dataset construction considers very little about the annotation of suicidal intent while defining the task of suicidal risk assessment. For example, Ji et al. (2018) and Sinha et al. (2019) built a collection of keywords to filter social posts and annotation guidelines for human annotators to do manual labeling. Such annotation guidelines are usually simple, with several rules, including 1) posts contain a suicide plan and/or previous attempts, or potential suicidal actions; 2) posts express suicidal thoughts or ideation; 3) posts reveal risk factors, e.g., depression and bullying; 4) posts contain somber words. Cao et al. (2019) collected social posts from people who commented on their suicidal thoughts on the last post of a student who died by suicide. These datasets simplify the task as a binary classification problem on whether the post contains suicide-related signals of an individual.
+
+We need a fine-grained understanding of suicidal ideation and intention. The Columbia Suicide Severity Rating Scale (C-SSRS) (Posner et al., 2008) is a clinical measure of suicide severity and is supposed to be administered by a trained individual in suicide screening. Gaur et al. (2019) developed the C-SSRS-based five-label categories (named as supportive, indicator, ideation, behavior, and attempt) and built a suicidal risk severity lexicon covering different levels of suicidal risk severity. Suicidal risk stratification tends to prioritize suicidal ideation in practice (Large et al., 2017). The C-SSRS scale measures the intensity of suicidal ideation by frequency, duration, controllability, deterrents, and reasons for ideation. NLP models for suicide classification should be able to classify fine-grained suicidal risks and detect the genuine intention of an individual. However, many publications in NLP tend not to distinguish the difference between suicidal intent and suicidal risk. There exists nuance between these two concepts:
+
+Whereas suicidal intent may be regarded as a psychological phenomenon subject to exploration and measurement, suicidal risk is a predictive statement of the probability of the occurrence of a fatal suicide attempt and can be conceived in terms of a complex (although not fully
+
+formulated) equation. (Beck et al., 1979)
+
+Our NLP models for effective suicidal risk assessment are required to "understand" the inherent intention of the text. As the severity of mental issues affects the choice of intervention actions, recognizing genuine intention can facilitate the adoption of corresponding intervention actions. Unfortunately, the ground truth about the user's real intentions behind the social post is usually unavailable in social media data. This further makes the evaluation of intention understanding impossible. A recent study on depression detection leverages clinical questionnaires to improve the out-of-domain generalization (Nguyen et al., 2022). The model constrained to clinical questionnaires with well-designed features for intention evaluation can be a possible solution and is worthy of further investigation.
+
+# 2.3 Intention & Sentiment
+
+Analyzing users' use of language provides insights helping to detect suicidality or stratify the suicidal risk. Intuitively, emotional words commonly utilized in online posts may vary depending on the focal mental issues of the users. For example, aggressive words may be a salient indicator of anxiety, whereas pessimistic words may be used more frequently for depressed online users. However, Gaur et al. (2021) showed that posts with different severity of suicidal risks on Reddit posts have no significant variation in emotion or sentiment. This paper utilizes the SenticNet suite (Cambria et al., 2020, 2022), which combines symbolic and sub-symbolic artificial intelligence for sentiment analysis, to explore the sentiment in a Twitter dataset (Ji et al., 2018) with suicide-associated and control posts. Figure 2 shows the distributions of sentiment attitude, temper, sensitivity, and introspection. The results indicate that the texts with binary classes have no significant differences in sentiment. Methods with sentimental or emotional features, e.g., Sarsam et al. (2021), extract textual features and use machine learning classifiers to fit the data. Neural NLP models that enhance the feature learning with sentiment modules, e.g., Ji et al. (2022a), increase the feature dimension and the model complexity. When carefully optimized, these models usually gain improved classification performance.
+
+However, whether the modeling of sentiment information in text can capture the individual's intention is questionable. One worrying guess is that it might only help build a more powerful text classifier.
+
+# 2.4 Intention & Language Models
+
+In the era of pretraining, many pretrained large language models have been applied to fine-tune suicide text classifiers. Those pretrained models achieved superior classification performance. One straightforward question in this position paper is whether the pretrained models can "understand" the latent intention to some extent.
+
+Taking the sentence "I am going to buy a knife" as an example, it can be recognized as purchasing intention in a daily context or suicide attempt (i.e., to commit suicide with a knife). Starting with our question, we conduct the fill-mask language modeling task with the sentence "I am going to buy a knife and [MASK]", using several masked language models, including BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), and their domain-specific variants MentalBERT and Mental-RoBERTa (Ji et al., 2022b). Figure 3 shows the output of word probabilities. The suicidal ideation does not appear in the prediction of BERT (Figure 3a). RoBERTa in Figure 3b and MentalBERT in Figure 3c predict the suicidal intention ("die") in the fifth place. MentalRoBERTa in Figure 3d recognizes the suicidal intention as the first place.
+
+Then, we use another example sentence "This life is not worth living. I am going to buy a knife and [MASK]." that provides a bit more contextual information. The results in Figure 4 show that RoBERTa and MentalBERT tend to predict suicidal intention ("die") with higher probabilities. MentalRoBERTa in Figure 4d predicts "die" with a significantly high probability. These two examples showcase the abilities of masked language models to predict intention as a fill-mask task. Domain-adaptive continued pretraining helps with suicide keyword prediction, although we do not consider the memorization issue of BERT here. While MentalRoBERTa outputs "die" with a high probability, which can be interpreted as suicidal intention, we also see the outputs of RoBERTa ("shoot") can be interpreted as potential anti-social and criminal actions (e.g., shoot at others, although it does not make sense with a knife.).
+
+
+(a) Distribution of Sentiment Attitude
+
+
+
+
+(b) Distribution of Temper
+
+
+(c) Distribution of Sensitivity
+(d) Distribution of Introspection
+
+
+(a) BERT
+
+
+(c) MentalBERT
+
+
+(b) RoBERTa
+
+
+(d) MentalRoBERTa
+
+
+(a) BERT
+
+
+(c) MentalBERT
+
+
+Figure 3: The output of word probabilities in the film mask language modeling task using various pretrained masked language models for the sentence "I am going to buy a knife and [MASK]."
+
+
+(b) RoBERTa
+(d) MentalRoBERTa
+Figure 4: The output of word probabilities in the fill-mask language modeling task using various pretrained masked language models for the sentence "This life is not worth living. I am going to buy a knife and [MASK]."
+
+We further test how the generative language model interprets the intention inherent in text prompt. We use the previous two examples in the fill-mask task as inputs to generate new texts. The following two boxes show the texts generated by the GPT-2 model hosted in Huggingface.8
+
+
+Figure 2: The sentiment distributions of a Twitter dataset with binary classes (Ji et al., 2018), calculated by the SenticNet suite. "Suicide" represents the class whose texts are associated with suicide messages and "Control" means the texts are from the control group without suicide messages.
+
+# Text Generated by GPT-2
+
+I am going to buy a knife and kill everyone else for free" and "that is the right thing to do". It's like saying - this is not my position,
+
+it is the opinion of all.
+
+
+
+# Text Generated by GPT-2
+
+This life is not worth living. I am going to buy a knife and I am going to kill everyone I see here, because I will be a part of a great community. It will be worth it to me.
+
+Those produced new texts did not show empathy to the prompt text "This life is not worth living," but produced hateful text and the tendency of antisocial and criminal behavior ("kill everyone").9 When applying models to the high-stake domain like suicide prevention, this is worrying.
+
+# 3 Intention Understanding in Sequence
+
+The suicidal ideation could be chronic, including duration and strength at different stages. More recent works focus on identifying people with suicidal risks based on a single post or a collection of posts, i.e., post or user classification. For example, several works conducted user-level suicidal risk classification (Tsakalidis et al., 2022a). However, people's mood changes over time (Tsakalidis et al., 2022b) and the level of suicidal ideation experiences fluctuations as time goes by (Clum and Curtin, 1993). The communicative nature of social networks and the chronic nature of suicide ideations require NLP models to detect people's suicidal intentions in post sequences or over online conversations.
+
+Recent research developed temporal-aware models to represent the sequence of social posts and predict the social user's suicidal risk. For example, historic tweets are used to improve the prediction of an individual's suicidal risk with time-aware model (Sawhney et al., 2020) and phase-aware model (Sawhney et al., 2021a). The retrospective evaluation designated for a specific period fails to monitor the dynamics of suicidal ideation. We need longitudinal suicidal risk monitoring over social posts' timelines, enabling dynamic social care and timely clinical intervention.
+
+Figure 5 shows a pseudo case of suicide severity assessment in post sequence (similar to sequence labeling for social posts), where an individual post at each time stamp is associated with a severity scale and the severity changes when the user lives with various mental conditions in the timeline or
+
+
+Figure 5: Severity assessment in sequence, where the suicidal risk follows the definition of Gaur et al. (2019) adapted from C-SSRS.
+
+some kind of intervention such as social support or clinical intervention has been taken. Higher levels of suicidal risk are more critical than low risks, and different levels of suicidal risks should be allocated with different clinical resources. Dynamic detection can, in a timely manner, prioritize people needing better mental healthcare resource allocation. It can also integrate with the human-in-the-loop system for more reliable suicidal intention understanding. The ordinal relationship of the severity scale contains critical information for fine-grained classification. Sawhney et al. (2021b) approached user-level suicidal risk assessment as an ordinal regression problem. The ordinal constraints and sequence dependency between time stamps play an essential role in the sequential modeling of suicidal risk monitoring.
+
+# 4 Evaluation of Suicidal Risk Assessment
+
+An evaluation must be carefully considered when computational approaches are used in suicide research. Most publications on suicidal risk assessment or suicidal ideation detection use aggregated evaluation scores such as accuracy, F1 score, and AUC-ROC to evaluate the predictive performance. The sensitivity or recall, calculated by the ratio of true positives to the total positives, is an essential metric that needs great attention because some low-risk cases might be deprived of social care and die by suicide if a model has a low recall.
+
+For some safety-critical applications like suicidal risk prediction, incorrect predictions may cause ethical costs and even the loss of life. Thus, uncertainty estimation becomes an important task. Dusenberry et al. (2020) analyzed the model uncertainty in medical applications of mortality and diagnosis prediction. No current suicidal ideation detection literature estimates data (aleatoric) or
+
+model (epistemic) uncertainty. Epistemic uncertainty is required to understand examples different from training data for the life-critical task (Kendall and Gal, 2017). Intelligent intention understanding systems should be able to assign how confident is the model prediction in some erroneous predictions of fine-grained suicidal ideation.
+
+Human uncertainty and disagreement in annotation are ubiquitous. We need to learn with disagreement (Uma et al., 2021). Learning with humans-in-the-loop (Zanzotto, 2019) can also be utilized to select more reliable instances or samples with less ambiguous intentions.
+
+# 5 Benchmarks
+
+A recent survey on mental health research (Harrigian et al., 2021) reviewed studies published between January 2012 and December 2019 in conferences, workshops, and journals focusing on NLP and healthcare research. This survey investigated the availability of mental health-related text data, as shown in Figure 6a. A review on suicidal ideation detection (Ji et al., 2021) investigated thirteen published datasets, with the availability shown in Figure 6b. Due to the sensitivity of suicide data, all available datasets require data usage agreement or need to be requested with the authors' permission. More are unavailable, removed, or out of maintenance among those published datasets.
+
+Most datasets contain binary classes (i.e., a positive class with suicide messages and a negative class from control groups). These datasets simplify the setting. The severity of suicide and suicidal ideation are fine-grained. The Scale for Suicide Ideation introduces a fine-grained suicidal risk rating scale to measure the intensity of suicide in interview-based screening (Beck et al., 1979). The Self-Monitoring Suicide Ideation Scale (Clum and Curtin, 1993) designed for self-report measures on a daily basis includes the intensity and duration of ideation and the level of control in making a suicide attempt. Check out the review by (Brown, 2001) for more suicide assessment measures. We need more datasets like Gaur et al. (2019) and more annotation about intention and ideation for special social care.
+
+Most datasets contain self-reported posts from social websites like Reddit, where users seek peer support or express personal feelings or experiences. ScAN (Rawat et al., 2022) is a remarkable dataset of suicide events from electronic health records in
+
+
+(a) Mental Health
+
+
+(b) Suicide
+Figure 6: The availability of benchmarks of NLP on mental health and suicide. DUA means data usage agreement.
+
+the MIMIC III database (Johnson et al., 2016) by filtering the suicide-associated ICD codes and introducing domain experts' annotation. It consists of suicidal behavioral evidence of suicidal attempts and ideations, where suicidal ideation is defined as text mentions about self-harming or taking one's own life in clinical notes. Social posts are not as reliable as clinical notes written by clinicians. Current distantly supervised mental health models lack the generalization ability across different domains (e.g., data platforms and populations) (Harrigian et al., 2020). Moreover, to our knowledge, no datasets have been introduced from the perspective of intention understanding.
+
+Finally, we still lack benchmarks for visual-grounded intention understanding and multilingualism. A picture of cutting the wrist can be suicidal intention or sharing self-harm behavior of other people. With social text, the visual signal can help better understand the intention. Many datasets were collected from Reddit and Weibo, which are used mainly in the US and China. Figure 7 shows that English datasets dominate the field of mental health
+
+
+Figure 7: The availability of multilingual texts for mental health research from the survey of Harrigian et al. (2021)
+
+research, which may cause demographic biases.
+
+# 6 Additional Related Work
+
+Suicidal detection has drawn much research attention since the suicide rate has increased these years. The cause of suicide is complicated and related to many complex interactions of risk factors (O'Connor and Nock, 2014). Feature engineering-based NLP models use N-gram features, knowledge-based features, syntactic features, context features, and class-specific feature (Wang et al., 2012). Machine learning classifiers include regression analysis (Chattopadhyay, 2007), boosting, and SVM classification (Delgado-Gomez et al., 2011). Pacula et al. (2014) proposed a model to identify signs of distress in transcripts of helpline conversations. Pestian et al. (2010) and Delgado-Gomez et al. (2012) compared the effectiveness of several multivariate methods. Other research and methods have been conducted to stop cyber-suicide such as speech pattern recognition, mobile phone network analysis, social media content detection (Larsen et al., 2015), and reply bias assessment for online suicide prevention (Huang and Bashir, 2016). However, there was not much discussion of suicide intent in those works.
+
+Intention classification is helpful for commercial applications such as target advertising in social media (Luong et al., 2016). One of the early studies by Chen et al. (2013) formulates the intention identification from posts in multi-domain discussion forums as a binary classification task (i.e., explicit intent and non-intent posts with a specific focus on buying intention). Similarly, Gupta et al. (2014) classified purchase intention in question-and-answer websites. Wang et al. (2015) studied a multi-class intention classification on social posts on Twitter social networks and proposed the def
+
+inition of the intent tweet if a tweet follows the following three conditions: 1) containing at least one verb; 2) with an explicit description of the user's intent to perform an activity; 3) in a recognizable way. However, this definition does not fit early detection of suicidal intention because the suicidal intention is usually implicit and early detection requires recognizing suicidal thoughts as early as possible before the suicidal attempt.
+
+# 7 Conclusion
+
+Intention understanding is an essential aspect of suicidal risk assessment. Robust intention understanding in suicidal risk assessment is needed to provide efficient triage support to social workers and psychiatrists. This position paper discusses suicidal intention understanding and conducts a case study on sentiment analysis and fine-tuning pretrained language models in the context of suicide research. It further points out critical aspects in the task settings and evaluation and the lack of benchmarks.
+
+# Social Impact
+
+Early detection of suicide ideation provides a solution to early intervention so that social workers can help people living with mental health issues through proactive conversations. However, no significant evidence shows that suicidal risk assessment can guide decision-making in clinical practice (Large et al., 2017). We suggest that people experiencing a mental health condition seek professional help from psychiatric services. Research on suicidal intention understanding and risk assessment does not aim to replace psychiatrists. It can empower social workers to prioritize social resources for people with mental conditions.
+
+The sensitive nature of suicide-related data requires our research to protect privacy. This study uses social media posts from anonymous users that are manifestly available on the website. Furthermore, these collected posts are stored on password-protected servers. We do not attempt to identify or contact social users.
+
+# Acknowledgement
+
+Many thanks to researchers who shared their research publicly, e.g., pretrained language models, datasets, and insights. No funding agency supported this study. The author conducted this study during his free time.
+
+# Limitations
+
+This position paper provides a perspective on suicidal intention understanding in suicidal risk assessment with natural language processing. One limitation is that it does not conduct extensive experimental analysis with real-world data. We also need to mind the gap between computational methods (e.g., inductive biases in machine learning and linguistic or distributed representation features) and the theories and findings of psychiatry. For example, a recent study shows that $60\%$ of men who died by suicide in the US have no history of mental illness (Fowler et al., 2022). Existing NLP papers usually regard the history of mental illness as a valuable feature for machine learning models.
+
+# References
+
+Asma Abdulsalam and Areej Alhothali. 2022. Suicidal ideation detection on social media: A review of machine learning methods. arXiv preprint arXiv:2201.10515.
+Aaron T Beck, Maria Kovacs, and Arlene Weissman. 1979. Assessment of suicidal intention: the scale for suicide ideation. Journal of Consulting and Clinical Psychology, 47(2):343.
+Gregory K Brown. 2001. A review of suicide assessment measures for intervention research with adults and older adults.
+Erik Cambria, Yang Li, Frank Z Xing, Soujanya Poria, and Kenneth Kwok. 2020. Senticnet 6: Ensemble application of symbolic and subsymbolic ai for sentiment analysis. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 105-114.
+Erik Cambria, Qian Liu, Sergio Decherchi, Frank Xing, and Kenneth Kwok. 2022. SenticNet 7: a commonsense-based neurosymbolic AI framework for explainable sentiment analysis. In LREC.
+Nicola Canessa, Federica Alemanno, Federica Riva, Alberto Zani, Alice Mado Proverbio, Nicola Mannara, Daniela Perani, and Stefano F Cappa. 2012. The neural bases of social intention understanding: the role of interaction goals. PLoS One, 7.
+Lei Cao, Huijun Zhang, Ling Feng, Zihan Wei, Xin Wang, Ningyun Li, and Xiaohao He. 2019. Latent suicide risk detection on microblog via suicide-oriented word embeddings and layered attention. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1718-1728.
+
+Subhagata Chattopadhyay. 2007. A study on suicidal risk analysis. In 9th International Conference on e-Health Networking, Application and Services, pages 74-78. IEEE.
+Zhiyuan Chen, Bing Liu, Meichun Hsu, Malu Castellanos, and Riddhiman Ghosh. 2013. Identifying intention posts in discussion forums. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1041-1050.
+George A Clum and Lisa Curtin. 1993. Validity and reactivity of a system of self-monitoring suicide ideation. Journal of Psychopathology and Behavioral Assessment, 15(4):375-385.
+David Delgado-Gomez, Hilario Blasco-Fontecilla, AnaLucia A Alegria, Teresa Legido-Gil, Antonio Artes-Rodriguez, and Enrique Baca-Garcia. 2011. Improving the accuracy of suicide attempter classification. Artificial Intelligence in Medicine, 52(3):165-168.
+David Delgado-Gomez, Hilario Blasco-Fontecilla, Federico Sukno, Maria Socorro Ramos-Plasencia, and Enrique Baca-Garcia. 2012. Suicide attempters classification: Toward predictive models of suicidal behavior. Neurocomputing, 92:3-8.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT.
+Michael W Dusenberry, Dustin Tran, Edward Choi, Jonas Kemp, Jeremy Nixon, Ghassen Jerfel, Katherine Heller, and Andrew M Dai. 2020. Analyzing the role of model uncertainty for electronic health records. In Proceedings of the ACM Conference on Health, Inference, and Learning, pages 204-213.
+Katherine A Fowler, Mark S Kaplan, Deborah M Stone, Hong Zhou, Mark R Stevens, and Thomas R Simon. 2022. Suicide among males across the lifespan: An analysis of differences by known mental health status. American Journal of Preventive Medicine.
+Manas Gaur, Amanuel Alambo, Joy Prakash Sain, Ugur Kursuncu, Krishnaprasad Thirunarayan, Ramakanth Kavuluru, Amit Sheth, Randy Welton, and Jyotishman Pathak. 2019. Knowledge-aware assessment of severity of suicide risk for early intervention. In The World Wide Web Conference, pages 514-525.
+Manas Gaur, Vamsi Aribandi, Amanuel Alambo, Ugur Kursuncu, Krishnaprasad Thirunarayan, Jonathan Beich, Jyotishman Pathak, and Amit Sheth. 2021. Characterization of time-variant and time-invariant assessment of suicidality on reddit using c-ssrs. PloS one, 16(5):e0250448.
+Vineet Gupta, Devesh Varshney, Harsh Jhamtani, Deepam Kedia, and Shweta Karwa. 2014. Identifying purchase intent from social posts. In *Eighth International AAAI Conference on Weblogs and Social Media*.
+
+Rezaul Haque, Naimul Islam, Maidul Islam, and Md Manjurul Ahsan. 2022. A comparative analysis on suicidal ideation detection using nlp, machine, and deep learning. Technologies, 10(3):57.
+Keith Harrigian, Carlos Aguirre, and Mark Dredze. 2020. Do models of mental health based on social media data generalize? In Findings of the association for computational linguistics: EMNLP 2020, pages 3774-3788.
+Keith Harrigian, Carlos Aguirre, and Mark Dredze. 2021. On the state of social media data for mental health research. In Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology: Improving Access, pages 15-24. ACL.
+Sameer Hinduja and Justin W Patchin. 2010. Bullying, cyberbullying, and suicide. Archives of Suicide Research, 14(3):206-221.
+Hsiao Ying Huang and Masooda Bashir. 2016. Online community and suicide prevention: Investigating the linguistic cues and reply bias. In Proceedings of CHI.
+Shaoxiong Ji, Xue Li, Zi Huang, and Erik Cambria. 2022a. Suicidal ideation and mental disorder detection with attentive relation networks. Neural Computing and Applications, 34:10309-10319.
+Shaoxiong Ji, Shirui Pan, Xue Li, Erik Cambria, Guodong Long, and Zi Huang. 2021. Suicidal ideation detection: A review of machine learning methods and applications. IEEE Transactions on Computational Social Systems, 8:214-226.
+Shaoxiong Ji, Celina Ping Yu, Sai-fu Fung, Shirui Pan, and Guodong Long. 2018. Supervised learning for suicidal ideation detection in online user content. Complexity.
+Shaoxiong Ji, Tianlin Zhang, Luna Ansari, Jie Fu, Prayag Tiwari, and Erik Cambria. 2022b. MentalBERT: Publicly Available Pretrained Language Models for Mental Healthcare. In Proceedings of LREC.
+Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-wei, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. MIMIC-III, a Freely Accessible Critical Care Database. Scientific Data, 3:160035.
+Alex Kendall and Yarin Gal. 2017. What uncertainties do we need in bayesian deep learning for computer vision? In Advances in neural information processing systems, pages 5574-5584.
+Matthew Michael Large, Christopher James Ryan, Gregory Carter, and Nav Kapur. 2017. Can we usefully stratify patients according to suicide risk? BMJ, 359.
+
+Mark E Larsen, Nicholas Cummins, Tjeerd W Boonstra, Bridianne O'Dea, Joe Tighe, Jennifer Nicholas, Fiona Shand, Julien Epps, and Helen Christensen. 2015. The use of technology in suicide prevention. In 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pages 7316-7319. IEEE.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
+Thai-Le Luong, Quoc-Tuan Truong, Hai-Trieu Dang, and Xuan-Hieu Phan. 2016. Domain identification for intention posts on online social media. In Proceedings of the Seventh Symposium on Information and Communication Technology, pages 52-57.
+Thong Nguyen, Andrew Yates, Ayah Zirikly, Bart Desmet, and Arman Cohan. 2022. Improving the generalizability of depression detection by leveraging clinical questionnaires. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8446-8459.
+Rory C O'Connor and Matthew K Nock. 2014. The psychology of suicidal behaviour. *The Lancet Psychiatry*, 1(1):73-85.
+Maciej Pacula, Talya Meltzer, Michael Crystal, Amit Srivastava, and Brian Marx. 2014. Automatic detection of psychological distress indicators and severity assessment in crisis hotline conversations. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4863-4867. IEEE.
+John Pestian, Henry Nasrallah, Pawel Matykiewicz, Aurora Bennett, and Antoon Leenaars. 2010. Suicide note classification using natural language processing: A content analysis. Biomedical Informatics Insights, 2010(3):19.
+K Posner, D Brent, C Lucas, M Gould, B Stanley, G Brown, P Fisher, J Zelazny, A Burke, MJNY Oquendo, et al. 2008. Columbia-suicide severity rating scale (c-ssrs). New York, NY: Columbia University Medical Center, 10.
+Bhanu Pratap Singh Rawat, Samuel Kovaly, Wilfred R Pigeon, and Hong Yu. 2022. ScAN: Suicide Attempt and Ideation Events Dataset. In NAACL.
+Samer Muthana Sarsam, Hosam Al-Samarraie, Ahmed Ibrahim Alzahrani, Waleed Alnumay, and Andrew Paul Smith. 2021. A lexicon-based approach to detecting suicide-related messages on twitter. Biomedical Signal Processing and Control, 65:102355.
+Ramit Sawhney, Harshit Joshi, Lucie Flek, and Rajiv Shah. 2021a. Phase: Learning emotional phase-aware representations for suicide ideation detection
+
+on social media. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2415-2428.
+Ramit Sawhney, Harshit Joshi, Saumya Gandhi, and Rajiv Shah. 2020. A time-aware transformer based model for suicide ideation detection on social media. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7685-7697.
+Ramit Sawhney, Harshit Joshi, Saumya Gandhi, and Rajiv Ratn Shah. 2021b. Towards ordinal suicide ideation detection on social media. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, pages 22-30.
+Ramit Sawhney, Harshit Joshi, Rajiv Shah, and Lucie Flek. 2021c. Suicide ideation detection via social and temporal user representations using hyperbolic learning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2176-2190.
+Pradyumna Prakhar Sinha, Rohan Mishra, Ramit Sawhney, Debanjan Mahata, Rajiv Ratn Shah, and Huan Liu. 2019. # suicidal-a multipronged approach to identify and explore suicidal ideation in twitter. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 941-950.
+Vladan Starcevic and Elias Aboujaoude. 2015. Cyberchondria, cyberbullying, cybersuicide, cybersex:"new" psychopathologies for the 21st century? World Psychiatry, 14(1):97-100.
+Adam Tsakalidis, Jenny Chim, Iman Munire Bilal, Ayah Zirikly, Dana Atzil-Slonim, Federico Nanni, Philip Resnik, Manas Gaur, Kaushik Roy, Becky Inkster, et al. 2022a. Overview of the CLPsych 2022 Shared Task: Capturing Moments of Change in Longitudinal User Posts. In Proceedings of The Eighth Workshop on Computational Linguistics and Clinical Psychology (CLPsych). Association for Computational Linguistics.
+Adam Tsakalidis, Federico Nanni, Anthony Hills, Jenny Chim, Jiayu Song, and Maria Liakata. 2022b. Identifying moments of change from longitudinal user text. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4647-4660.
+Alexandra N Uma, Tommaso Fornaciari, Dirk Hovy, Silviu Paun, Barbara Plank, and Massimo Poesio. 2021. Learning from disagreement: A survey. Journal of Artificial Intelligence Research, 72:1385-1470.
+Jinpeng Wang, Gao Cong, Xin Wayne Zhao, and Xiaoming Li. 2015. Mining user intents in twitter:
+
+A semi-supervised approach to inferring intent categories for tweets. In Twenty-Ninth AAAI Conference on Artificial Intelligence.
+Wenbo Wang, Lu Chen, Ming Tan, Shaojun Wang, and Amit P Sheth. 2012. Discovering fine-grained sentiment in suicide notes. Biomedical Informatics Insights, 5(Suppl 1):137.
+Kirsten Windfuhr and Navneet Kapur. 2011. Suicide and mental illness: a clinical review of 15 years findings from the uk national confidential inquiry into suicide. *British Medical Bulletin*, 100(1):101-121.
+Fabio Massimo Zanzotto. 2019. Human-in-the-loop artificial intelligence. Journal of Artificial Intelligence Research, 64:243-252.
+Tianlin Zhang, Annika Schoene, Shaoxiong Ji, and Sophia Ananiadou. 2022. Natural language processing applied to mental illness detection: A narrative review. npj Digital Medicine, 5.
+Tianlin Zhang, Annika M Schoene, and Sophia Ananiadou. 2021. Automatic identification of suicide notes with a transformer-based deep learning model. *Internet interventions*, 25:100422.
\ No newline at end of file
diff --git a/towardsintentionunderstandinginsuicidalriskassessmentwithnaturallanguageprocessing/images.zip b/towardsintentionunderstandinginsuicidalriskassessmentwithnaturallanguageprocessing/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..faacbb2c601f82dfa8fe209de696b5cea9b257ef
--- /dev/null
+++ b/towardsintentionunderstandinginsuicidalriskassessmentwithnaturallanguageprocessing/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fa1101458a2b2909026747fb4b86bd29d1d9f443f35818fe7a63c235774ca00a
+size 287984
diff --git a/towardsintentionunderstandinginsuicidalriskassessmentwithnaturallanguageprocessing/layout.json b/towardsintentionunderstandinginsuicidalriskassessmentwithnaturallanguageprocessing/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d70f82e2488cb0c16c1649c5e7d36cf245fc427b
--- /dev/null
+++ b/towardsintentionunderstandinginsuicidalriskassessmentwithnaturallanguageprocessing/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9518dde8306d4772d8a29ef3b49bfcae87e740c41196fc57957e83363632eee3
+size 306294
diff --git a/towardsrealisticlowresourcerelationextractionabenchmarkwithempiricalbaselinestudy/606ea941-6be7-46ee-a41a-5107357779ef_content_list.json b/towardsrealisticlowresourcerelationextractionabenchmarkwithempiricalbaselinestudy/606ea941-6be7-46ee-a41a-5107357779ef_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..1e1ae94a43737c06edb689bd0fa791cb21442bee
--- /dev/null
+++ b/towardsrealisticlowresourcerelationextractionabenchmarkwithempiricalbaselinestudy/606ea941-6be7-46ee-a41a-5107357779ef_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b7312a28fcc6957683139a20e76e3db55ed8490fbdb30572a49df8b8fe457c4c
+size 104788
diff --git a/towardsrealisticlowresourcerelationextractionabenchmarkwithempiricalbaselinestudy/606ea941-6be7-46ee-a41a-5107357779ef_model.json b/towardsrealisticlowresourcerelationextractionabenchmarkwithempiricalbaselinestudy/606ea941-6be7-46ee-a41a-5107357779ef_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b65270100550c4f37aca804696e1ad0b4abf0afd
--- /dev/null
+++ b/towardsrealisticlowresourcerelationextractionabenchmarkwithempiricalbaselinestudy/606ea941-6be7-46ee-a41a-5107357779ef_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:edbbd4fd7175ca33efc961f9ccd8faf8a3a0a9549e8cc2ab3bfdb053b990fb0d
+size 126683
diff --git a/towardsrealisticlowresourcerelationextractionabenchmarkwithempiricalbaselinestudy/606ea941-6be7-46ee-a41a-5107357779ef_origin.pdf b/towardsrealisticlowresourcerelationextractionabenchmarkwithempiricalbaselinestudy/606ea941-6be7-46ee-a41a-5107357779ef_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7642979a14e0f48291a838016effa3604b1c9a51
--- /dev/null
+++ b/towardsrealisticlowresourcerelationextractionabenchmarkwithempiricalbaselinestudy/606ea941-6be7-46ee-a41a-5107357779ef_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f139720aaf3831ab66362c914ea8de5b86a40264f8628a44cd6e38a756ce573b
+size 685714
diff --git a/towardsrealisticlowresourcerelationextractionabenchmarkwithempiricalbaselinestudy/full.md b/towardsrealisticlowresourcerelationextractionabenchmarkwithempiricalbaselinestudy/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..1d9e4b04ee8c08ef6b6bb852e2627f21d490049b
--- /dev/null
+++ b/towardsrealisticlowresourcerelationextractionabenchmarkwithempiricalbaselinestudy/full.md
@@ -0,0 +1,361 @@
+# Towards Realistic Low-resource Relation Extraction: A Benchmark with Empirical Baseline Study
+
+Xin Xu $^{1,2*}$ , Xiang Chen $^{1,2*}$ , Ningyu Zhang $^{1,2\dagger}$ , Xin Xie $^{1,2}$ , Xi Chen $^{3}$ , Huajun Chen $^{1,2}$
+
+1Zhejiang University & AZFT Joint Lab for Knowledge Engine
+
+$^{2}$ Hangzhou Innovation Center, Zhejiang University, $^{3}$ Tencent
+
+{xxucs@zju.edu.cn, xiang_chen, xx2020, huajunsir, zhangningyu} @zju.edu.cn, jasonxchen@tencent.com
+
+https://zjunlp.github.io/project/LREBench
+
+# Abstract
+
+This paper presents an empirical study to build relation extraction systems in low-resource settings. Based upon recent pre-trained language models, we comprehensively investigate three schemes to evaluate the performance in low-resource settings: (i) different types of prompt-based methods with few-shot labeled data; (ii) diverse balancing methods to address the long-tailed distribution issue; (iii) data augmentation technologies and self-training to generate more labeled in-domain data. We create a benchmark with 8 relation extraction (RE) datasets covering different languages, domains and contexts and perform extensive comparisons over the proposed schemes with combinations. Our experiments illustrate: (i) Though prompt-based tuning is beneficial in low-resource RE, there is still much potential for improvement, especially in extracting relations from cross-sentence contexts with multiple relational triples; (ii) Balancing methods are not always helpful for RE with long-tailed distribution; (iii) Data augmentation complements existing baselines and can bring much performance gain, while self-training may not consistently achieve advancement to low-resource $\mathrm{RE}^1$ .
+
+# 1 Introduction
+
+Relation Extraction (RE) aims to extract relational facts from the text and plays an essential role in information extraction (Zhang et al., 2022b). The success of neural networks for RE has been witnessed in recent years; however, open issues remain as they still depend on the number of labeled data in practice. For example, Han et al. (2018) found that the model performance drops dramatically as the number of instances for one relation decreases, e.g., for long-tail. An extreme scenario is few-shot
+
+
+Figure 1: An overview of methods studied in our paper.
+
+RE, where only a few support examples are given. This motivates a Low-resource RE (LRE) task where annotations are scarce (Brody et al., 2021).
+
+Many efforts are devoted to improving the generalization ability beyond learning directly from limited labeled data. Early, Mintz et al. (2009) proposes distant supervision for RE, which leverages facts in KG as weak supervision to obtain annotated instances. Rosenberg et al. (2005); Liu et al. (2021a); Hu et al. (2021) try to assign pseudo labels to unlabeled data and leverage both pseudo-labeled data and gold-labeled data to improve the generalization capability of models iteratively. Some studies apply meta-learning strategies to endow a new model with the ability to optimize rapidly or leverage transfer learning to alleviate the data-hungry issue (Gao et al., 2019; Yu et al., 2020b; Li et al., 2020a; Deng et al., 2021). Other studies (Zhang et al., 2019) focus on the long-tailed class distribution, especially in tail classes that only allow learning with a few instances. With the prosperity of the pre-trained language models (PLMs), the pre-train – fine-tune paradigm has become standard for natural language processing (NLP), leading to a tremendous increase in LRE performance. More recently, a new methodology named prompt learning has made waves in the community by demonstrating astounding few-shot capabilities on LRE (Han et al., 2021; Chen et al., 2022d).
+
+In this work, we benchmark more realistic scenarios on diverse datasets for low-resource RE, in which models have to handle both extreme few
+
+shot instances and long-tailed distribution, and can also make use of data augmentation or unlabeled in-domain data without cross-validation (Perez et al., 2021). These settings are appealing as: (i) Such models mirror deployment in applied settings; (ii) Few-shot settings are realistic with long-tailed distribution; (iii) Diverse datasets cover different languages (Chinese and English), domains (general, scientific), and contexts (one or more sentences with single or multiple relational triples).
+
+Specifically, we focus on improving the generalization ability from three directions shown in Figure 1. Instead of using limited few-shot data, we create different types of prompts for RE and empirically analyze low-resource performance. We further implement many popular balancing methods for long-tailed distribution, which can mitigate performance decay in instance-scarce (tail) classes. We also leverage more generated training instances by data augmentation and self-training in conjunction with the limited labeled data.
+
+Our contributions include: (i) We present the first systematic study for low-resource RE, an important problem in information extraction, by investigating three distinctive schemes with combinations. (ii) We conduct extensive comparisons with in-depth analysis on 8 RE datasets and report empirical results with insightful findings. (iii) We release both the data and the source code of these baselines as an open-sourced testbed for future research purposes.
+
+To shed light on future research on low-resource RE, our empirical analysis suggests that: (i) Previous state-of-the-art methods in the low-resource setting still struggle to obtain better performance than that in the fully-supervised setting (Cross-sentence LRE is extremely challenging), which indicates that there is still much room for low-resource RE. (ii) Balancing methods may not always benefit low-resource RE. The long-tailed issue can not be ignored, and more studies should be focused on model development. (iii) With some simple data augmentation methods, better performance can be achieved, highlighting opportunities for future improvements on low-resource RE.
+
+# 2 Background on Low-resource RE
+
+# 2.1 Low-resource RE
+
+RE is a classification task that aims to assign relation labels to entity pairs in given contexts. Formally, in a RE dataset denoted as
+
+$\mathcal{D} = \{\mathbf{X},\mathbf{Y}\}$ , $\mathbf{X}$ is the set of texts and $\mathbf{Y}$ is the set of relation labels. Given a text $x = \{w_{1},w_{2},\ldots ,w_{s},\ldots ,w_{o},\ldots ,w_{|x|}\}$ , where $x\in$ $\mathbf{X}$ , RE aims to predict the semantic relation $y_{x}\in \mathbf{Y}$ holding between the subject entity $w_{s}$ and the object entity $w_{o}$ . Conventional RE systems are trained in the standard supervised learning regime, where large amounts of labeled examples are required. Nevertheless, owing to various languages, domains, and the cost of human annotation, there is commonly a very small number of labeled examples in real-world applications. Thus, traditional supervised learning with few-shot labeled data struggle to achieve satisfactory performance (Schick and Schütze, 2021). Consequently, a challenging task, low-resource RE, has emerged.
+
+# 2.2 Fine-tuning PLMs for RE
+
+A typical baseline method for RE is to fine-tune a PLM $\mathcal{M}$ as shown in Figure 2(a). Firstly, the tokenizer of $\mathcal{M}$ converts the text $x$ into the input tokens of $\mathcal{M}$ , such as $[\mathrm{CLS}]x_{\mathrm{token}}[\mathrm{SEP}]$ , and then encodes tokens into the corresponding hidden vectors, such as $\mathbf{h} = \{\mathbf{h}_{[\mathrm{CLS}]},\mathbf{h}_1,\mathbf{h}_2,\dots ,\mathbf{h}_s,\dots ,\mathbf{h}_o,\dots ,\mathbf{h}_{[\mathrm{SEP}]}\}$ . Then, a [CLS] head is used to compute the probability distribution over the class set $\mathbf{Y}$ with the softmax $p(\cdot |x) = \operatorname {Softmax}(\mathbf{Wh}_{[\mathrm{CLS}]} + \mathbf{b})$ , where $\mathbf{W}$ is a set of learnable weight parameters randomly initialized at the start of fine-tuning, $\mathbf{h}_{[\mathrm{CLS}]}$ is the hidden vector of [CLS] and $\mathbf{b}$ is the learnable bias. All learnable parameters are fine-tuned by minimizing the cross-entropy loss over $p(y_{x}|x)$ on $\mathcal{D}$ . Nevertheless, conventional supervised fine-tuning may over-fit a few training examples and perform poor generalization ability over test sets when encountering the low-resource RE task.
+
+# 3 Methods for Low-resource RE
+
+In this paper, we conduct a comprehensive empirical study with three distinctive schemes against difficulty in low-resource RE: PLMs-based prompt-based tuning, balancing long-tailed data and leveraging more instances, as shown in Figure 2.
+
+# 3.1 Prompting for Few-shot Instances
+
+To address the low-resource issue of data sparsity for RE, we firstly analyze prompting methods. Unlike standard fine-tuning, prompt-based tuning reformulates classification tasks as cloze-style language modeling problems and predicts an
+
+
+(a) Standard Fine-tuning
+
+
+(c) Balancing Methods
+
+
+(b) Prompt-based Tuning
+
+
+
+
+Self-training for Semi-supervised Learning
+(d) Leveraging More Instances
+Figure 2: Illustrations of different methods used in our low-resource RE benchmark. (a) A standard RE pipeline of fine-tuning a PLM such as BERT and RoBERTa ( $\S 2.2$ ). (b) Prompt-based tuning, which concatenates the original input with the prompt template to predict [MASK] by an MLM head and then injects the predicted answer words to the corresponding class sets ( $\S 3.1$ ). (c) Two balancing methods, re-sampling data and re-weighting losses, to address the long-tailed issue ( $\S 3.2$ ). (d) Levering more instances with data augmentation and self-training ( $\S 3.3$ ).
+
+swer words, denoted as $\mathbf{V}$ , through the masked language model (MLM) head. Specifically, $\mathcal{T}_{\mathrm{prompt}}$ converts every instance $x$ into a prompt input $x_{\mathrm{prompt}} = \mathcal{T}_{\mathrm{prompt}}(x)$ , in which there is at least one [MASK] for $\mathcal{M}$ to fill with right answer words $v \in \mathbf{V}$ . Meanwhile, a verbalizer connects relation labels with answer words via an injective mapping $\gamma: \mathbf{Y} \to \mathbf{V}$ . With the aforementioned functions, we can formalize the probability distribution over $\mathbf{Y}$ with the probability distribution over $\mathbf{V}$ at the masked position (Ma et al., 2021):
+
+$$
+\begin{array}{l} P \left(y _ {x} \mid x\right) = P \left(\left[ \text {M A S K} \right] = \gamma \left(y _ {x}\right) \mid x _ {\text {p r o m p t}}\right) \tag {1} \\ = \operatorname {S o f t m a x} \left(\mathbf {W} _ {l m} \cdot \mathbf {h} _ {\left[ \text {M A S K} \right]}\right) \\ \end{array}
+$$
+
+where $\mathbf{W}_{lm}$ is a set of parameters of the PLM head.
+
+Note that the main difference between various prompt-based tuning methods lies in the design of the prompt template and verbalizer. Thus, we benchmark different kinds of prompting methods in low-resource RE to empirically investigate their
+
+performance. For the prompt template, given the input $x$ , the first choice is manually designing the template. We utilize the natural language or task schema to formulate different prompt templates. Formally, we have:
+
+# Template Prompt:
+
+[CLS] $x$ [SEP] The relation between [sub] and [obj] is [MASK].[SEP]
+
+# Schema Prompt:
+
+[CLS] $x$ . [SEP][[sub][obj]] relation: [MASK].[SEP]
+
+where $\langle \text{sub} \rangle$ is the head entity mention and $\langle \text{obj} \rangle$ is the tail entity mention. Since there exists rich semantic knowledge within relation labels and structural knowledge implications among relational triples, we also benchmark previous studies such as PTR (Han et al., 2021) and KnowPrompt (Chen et al., 2022d) which incorporates relational knowledge into prompt-based tuning as shown in Figure 2(b).
+
+# 3.2 Balancing for Long-tailed Distribution
+
+Learning with long-tailed data, where the number of instances in each class highly varies, is a common challenge in low-resource RE because instance-rich (head) classes predominate the training procedure. Note that the learnable parameters of the trained model prefer to perform better in these head classes and worse in less frequent (tail) classes (Kang et al., 2020a). To address this issue, we explore two balancing methods: re-sampling data and re-weighting losses for low-resource RE.
+
+Re-sampling Data We re-sample RE datasets to balance the data distribution. For example, the tail classes can be over-sampled by adding copies of data, and the head classes can be under-sampled by removing data, as shown in Figure 2(c). Specifically, we use a toolkit2, which can estimate the sampling weights automatically when sampling from imbalanced data to obtain datasets with the nearly balanced distribution.
+
+Re-weighting Loss We utilize various reweighting losses, assigning different weights to different training instances for each class. For instance, DSC Loss (Li et al., 2020b) attaches similar importance to false positives and false negatives. Focal Loss (Lin et al., 2020a) balances the sample-wise classification loss for model training by down-weighing easy samples. GHM Loss (Li et al., 2019a) applies a gradient harmonizing mechanism, making the model ignore outliers to conquer the disharmony in classification. LDAM Loss (Cao et al., 2019) expands the decision boundaries of few-shot classes.
+
+# 3.3 Leveraging More Instances via Data Augmentation and Self-training
+
+It is also beneficial to leverage more instances to address the low-resource issue. We conduct data augmentation and also leverage unlabeled in-domain data via self-training, as shown in Figure 2(d).
+
+Data augmentation (DA) automatically generates more labeled instances based on only a few labeled instances. For example, we utilize token-level augmentation, which changes or inserts words and phrases in a sentence to generate augmented text remaining with the same labels as the original text. In this work, we apply three DA methods for English RE datasets to substitute words in training
+
+sets based on WordNet's synonyms, TF-IDF similarity and the contextual word embedding implemented by $nlpaug^3$ . And we replace words with their synonyms via $nlpca\bar{a}^4$ to augment Chinese RE samples. We further analyze different types of augmentation objects in RE regarding contexts, entities, and both of them.
+
+Since substantial easily-collected unlabeled data are also leveraged in this work for low-resource RE, we conduct self-training, a classical, intuitive and straightforward semi-supervised learning method. Specifically, we train a model with labeled data and then expand the labeled set according to the most confident predictions (a.k.a. pseudo labels) on unlabeled data. We combine the data with gold and pseudo labels to obtain the final RE model. The details of the whole self-training pipeline are described in Appendix A.5.
+
+# 4 Benchmark Design
+
+In this paper, we provide a comprehensive empirical study for low-resource RE and design the LREBenchmark (Low-resource Relation Extraction Benchmark) to evaluate various methods. In the following section, we will detail the datasets chosen for experiments and the reproducibility of all baselines mentioned above.
+
+# 4.1 Datasets Selection
+
+As shown in Table 1, we select 8 RE datasets to evaluate baselines in low-resource settings, covering various domains: SemEval 2010 Task $8^{5}$ (Hendrickx et al., 2009), TACREV $^{6}$ (Alt et al., 2020), DialogRE $^{7}$ (Yu et al., 2020a) and DuIE2.0 $^{8}$ (Li et al., 2019b) on the general domain, Wiki8 $^{9}$ (Han et al., 2019) on the encyclopedic domain, ChemProt $^{10}$ (Peng et al., 2019) on the biochemical domain, SciERC $^{11}$ (Luan et al., 2018) on the scientific domain, and CMeIE $^{12}$ (Zhang et al., 2022a) on the medical domain. Except for frequently-used English datasets, we select Chinese datasets,
+
+
Datasets
SemEval
TACREV
Wiki80*
SciERC
ChemProt
DialogRE
DuIE2.0 (cn)
CMeIE (cn)
Domain
General
General
Encyclopedic
Scientific
Biochemical
Dialogue
General
Medical
# Train
6.5k
68.1k
12.0k
3.2k
19.5k
6.0k
153k
34k
# Test
2.7k
15.5k
5.6k
974
16.9k
1.9k
18k
8.7k
# Relation Class
19
42
80
7
14
37
48
44
MS / MT
× / ×
× / ×
× / ×
✓ / ✓
✓ / ✓
✓ / ✓
× / ✓
✓ / ✓
+
+Table 1: Statistics on the 8 public RE datasets selected for evaluation in LREBench. MS indicates if datasets contain instances with multiple sentences in one text, and MT indicates if one text in these datasets can be related to multiple relational triples. *” means that we re-sample and convert Wiki80 into long-tailed distribution through an exponential function since its original distribution is exactly balanced. "cn" represents datasets with Chinese.
+
+such as DuIE2.0 and CMeIE. Besides, the SciERC, ChemProt, DialogRE, and CMeIE datasets contain the situation where multiple sentences are in one instance, which is for cross-sentence RE and more challenging than single-sentence RE in SemEval, TACREV and Wiki80.
+
+For simplicity, we provide a unified input-output format for all datasets in the low-resource setting13. Specifically, each instance in LREBench consists of one text and one relational triple (one head entity and one tail entity in the text and the corresponding relation between them). For those datasets with instances having one text related to multiple relational triples, such as ChemProt, SciERC, DialogRE, DuIE2.0 and CMeIE, we follow Zhong and Chen (2021) to place such a text to multiple instances with only one relational triple. In this way, we can utilize a unified input-output format for widespread models.
+
+We conduct experiments in three settings with different proportions of training data to simulate different resource levels: 8-shot, $10\%$ and $100\%$ . For the 8-shot setting, we sample 8 instances for each relation category in the training and test sets14. For the $10\%$ and $100\%$ settings, we sample 10 percent of the training set and use the whole training set, respectively. Since fine-tuning on small datasets can suffer from instability and results may change dramatically given a new split of data (Gao et al., 2021), we sample all training datasets 5 times randomly in 8-shot and $10\%$ settings and measure their average performance in experiments. Also, we follow the same sampling strategy in the re-sampling long-tailed data method and data augmentation methods to obtain a fair comparison.
+
+# 4.2 Reproducibility
+
+Methods Throughout our experiments, we employ $\mathcal{M} =$ RoBERTa-large (Liu et al., 2019) for SemEval, TACREV, Wiki80 and DialogRE, Chinese RoBERTa-large (Cui et al., 2020) for DuIE2.0 and CMeIE, and BioBERT-large (Lee et al., 2020) for ChemProt and SciERC from HuggingFace $^{15}$ as the backbone network (detailed in Appendix A.1). For each method, we investigate the following three schemes in different settings for the comparative empirical study, as shown in Table 2: (i) Normal is the general scheme with the PLM for low-resource relation extraction, in which we evaluate with 8-shot, $10\%$ and $100\%$ settings. (ii) Balance refers to balancing methods in §3.2 for long-tailed data distribution with $10\%$ and $100\%$ settings. We list the best performance among all balancing methods for each dataset in Table 2 and detailed results in Table 3. (iii) Data augmentation (DA) methods are applied to $10\%$ training sets. We list the best performance among all DA methods in Table 2 and all performance in Table 4. We also conduct self-training (ST) that firstly trains a teacher $\mathcal{M}$ on $10\%$ training data and then tags the rest $90\%$ training data with pseudo labels by $\mathcal{M}$ . Both gold-labeled and pseudo-labeled data are used to obtain a final student RE model as introduced in §3.3.
+
+Training and Evaluation We only train models on training sets without validation on development sets to ensure true few-shot learning with limited labeled data. For all training data sizes, we set the training epoch $= 10$ following Huang et al. (2021). Except for re-weighting losses for addressing the long-tailed problem, the cross-entropy loss is used in all training processes. Since the performance of head and tail classes varies a lot, we use both Macro F1 and Micro F1 together as the evaluation metrics. Implementation details can be found in Appendix A.
+
+
Dataset
Metric
Fine-Tune
Prompt
Normal
Balance
DA
ST
Normal
Balance
DA
ST
8-shot
10%
100%
10%
100%
10%
10%
8-shot
10%
100%
10%
100%
10%
10%
SemEval
MaF1
2.69
34.63
81.88
41.84
82.44
69.84
60.10
48.54
44.71
83.40
54.54
83.20
71.73
63.55
MiF1
9.70
54.61
89.10
58.26
89.44
78.98
74.12
54.55
69.90
90.01
76.53
92.31
83.54
76.81
TACREV
MaF1
1.02
47.32
63.41
48.64
63.38
50.68
48.84
29.46
61.40
67.08
63.09
69.63
62.20
7.32
MiF1
1.76
65.43
71.68
67.19
73.86
65.99
66.89
30.88
77.00
78.30
76.25
81.41
76.90
32.93
Wiki80
MaF1
37.89
37.82
71.31
44.37
73.36
49.40
37.47
75.11
60.67
82.79
63.99
83.72
63.40
60.86
MiF1
44.85
46.50
72.82
49.74
74.20
55.00
45.91
76.34
64.86
82.96
67.86
83.86
66.96
65.04
SciERC
MaF1
10.41
10.31
83.41
10.11
81.17
30.09
31.48
23.26
51.71
83.27
60.55
84.83
65.98
56.94
MiF1
39.12
54.66
89.12
54.72
87.78
61.79
64.07
22.07
74.00
89.01
76.90
90.04
79.92
76.32
ChemProt
MaF1
2.18
27.96
47.35
33.38
47.35
36.31
30.67
6.17
36.43
47.16
38.99
47.07
37.44
33.62
MiF1
8.93
49.20
68.81
54.98
68.77
56.58
54.17
8.65
56.96
69.14
57.28
69.12
58.26
53.55
DialogRE
MaF1
1.13
2.17
25.31
5.84
27.28
9.74
0.00
44.96
45.51
64.49
46.22
71.73
49.47
34.70
MiF1
3.92
23.37
41.52
24.53
41.24
27.40
0.00
45.70
54.16
73.66
55.65
73.52
57.53
46.54
DuIE2.0
MaF1
36.62
90.46
95.01
92.91
96.00
91.47
89.27
80.31
93.48
95.73
93.70
96.01
93.66
90.49
MiF1
39.00
94.42
96.22
94.46
96.13
94.46
93.81
82.14
95.09
96.43
95.23
96.44
95.11
93.35
CMeIE
MaF1
13.68
62.30
84.37
67.22
86.31
63.82
58.46
36.54
67.59
86.42
67.84
86.68
69.95
65.79
MiF1
17.05
79.82
90.48
80.43
90.56
80.14
78.92
38.02
83.38
92.08
83.40
92.14
83.71
81.26
+
+Table 2: F1 Scores (\%) on 8 datasets with various sizes of training data in different methods for the low-resource scenario. MaF1 and MiF1 mean Macro F1 Score (\%) and Micro F1 Score (\%) respectively. Normal means the standard PLM fine-tuning method and Prompt means prompt-based tuning implemented by KnowPrompt. Balance represents balancing methods for long-tailed data. DA is data augmentation. ST refers to self-training with unlabeled in-domain data. Results colored with red means prompt-based tuning works worse than fine-tuning between two Normal columns. blue, orange, and purple results indicate the performance of balancing methods, data augmentation and self-training is poorer than the Normal method in the same setting.
+
+# 5 Results and Discussions
+
+# 5.1 Main Results
+
+We leverage the basic PLM fine-tuning code from OpenNRE $^{16}$ (Han et al., 2019) and the state-of-the-art prompt-based RE method KnowPrompt (Chen et al., 2022d) to conduct extensive experiments across 8 datasets in various methods and settings. The results of the main experiments are shown in Table 2, which illustrates the following findings:
+
+Finding 1: Prompt-based tuning largely outperforms standard fine-tuning for RE, especially more effective in the low-resource scenario. The comparison between the results of standard fine-tuning and prompt-based tuning indicates that prompts can provide task-specific information and bridge the pre-train – fine-tune gap, thus, empowering PLMs in low-resource RE.
+
+Finding 2: Though balancing methods obtain advancement with long-tailed distribution, they may still fail on challenging RE datasets, such as ChemProt, DialogRE and DuIE2.0. By comparing Macro F1 Scores of the Balance columns and Normal columns, blue (bad) results illustrate that balancing methods are affected by complexity of long contexts with multiple sentences and
+
+relational triples.
+
+Finding 3: Data augmentation achieves much gain on RE and sometimes even better performance than prompt-based tuning, such as on SemEval, according to the difference between two pairs of DA columns and Normal columns in the $10\%$ setting. More data generated through DA methods are complementary with other baselines, boosting the performance.
+
+Finding 4: RE systems struggle against difficulty in obtaining correct relations from cross-sentence contexts and among multiple triples. The extremely low F1 scores for 8-shot ChemProt, and DialogRE datasets in standard fine-tuning demonstrate this finding. One text in ChemProt is related to too many relational triples (there are 347 texts related to 3 triples and 699 texts related to 2 triples in the training set). At the same time, in DialogRE, the input text is extremely long (one text can contain 10 sentences). Even with the powerful prompt-based tuning method, it is non-trivial to address the low-resource issue according to the unexpected drop in F1 scores of ChemProt and SciERC.
+
+Finding 5: Self-training with unlabeled indomain data may not always show an advantage for low-resource RE. There is much noise in those
+
+
+Figure 3: Micro F1 Scores (\%) of different prompts on 8-shot datasets. $RoBERTA$ -large is used on SemEval and TACREV and $BioBERT$ -large is used on SciERC and ChemProt as backbone networks.
+
+generated pseudo labels. Furthermore, for assigning labels in RE, both semantics and positions of entities in a text need to be considered, which is exceedingly challenging. Therefore, the model with self-training cannot always obtain better performance in low-resource settings.
+
+# 5.2 Comprehensive Empirical Analysis
+
+Different Prompting Methods To investigate the effects of different prompts, we conduct an empirical analysis on SemEval, TACREV, SciERC and ChemProt as shown in Figure 3. We observe the following insights: (i) Prompt-based tuning is more beneficial in general domains than specific domains for low-resource RE. Prompt-based tuning achieves the most gain, $44.85\%$ Micro F1 Score, by comparing fine-tuning and KnowPrompt on 8-shot SemEval, while obtaining the worst drop, $25.65\%$ Micro F1 Score, by comparing fine-tuning and the template prompt on 8-shot SciERC even with the domain-specific PLM. Except for the difficulty of these two datasets, general manual prompts have little domain knowledge related to vertical domains, hindering performance. (ii) Entity type information in prompts is helpful for low-resource RE. The head and tail entity types in prompts provide strong constraints between relations and their related entities. Prompting methods with entity type information in KnowPrompt and PTR perform better than the template and schema-based prompt in most datasets, which illustrates that prompts with entity-type information are more appropriate for low-resource RE. The reason for the abnormal phenomenon that KnowPrompt and PTR obtain worse results than the template and schema-based
+
+prompts in TACREV is that annotation errors in the training set of TACREV (Stoica et al., 2021) can lead to overestimation of the performance of models depending on the side information of entities such as entity names, spans and types (Zhou and Chen, 2021), and the templates of KnowPrompt and PTR are natural language sentences consisting of the head, and tail entities and their relations, which require high-quality annotated entity mentions, positions, types and relational words, while they are relatively trivial to the template and schema-based prompts.
+
+Different Balancing Methods We also conduct experiments to validate the effectiveness of different balancing methods on two long-tailed datasets. We categorize the classes into three splits based on the number of training instances per class, including Few, Medium, and Many, and also report the results on the whole dataset with the Overall setting in Table 3 (split schemes are in Appendix B). We notice that with re-balancing methods (e.g., Focal Loss and LDAM Loss), the tail relations (Few) can yield better performance on both general and domain-specific datasets. However, some technologies, such as GHM-C, fail to contribute to performance gains. Overall, our empirical analysis illustrates that the RE performance can be improved with balancing methods, which indicates that long-tailed RE is a challenging classification task, and it should be paid more attention to developing suitable methodologies.
+
+Different Data Augmentation To evaluate the low-resource RE performance with more instances, we generate $30\%$ and $100\%$ augmented instances
+
+
Method
SemEval
SciERC
Few
Medium
Many
Overall
Few
Medium
Many
Overall
MaF1
MiF1
MaF1
MiF1
MaF1
MiF1
MaF1
MiF1
MaF1
MiF1
MaF1
MiF1
MaF1
MiF1
MaF1
MiF1
Normal
50.42
74.58
89.53
89.02
90.17
90.59
83.40
90.01
69.98
67.78
88.05
87.52
92.98
91.93
83.27
89.01
Re-sample
38.17
56.18
70.13
70.56
71.22
71.54
65.37
71.31
71.79
69.64
88.49
87.83
92.96
92.25
82.61
87.58
DSC
49.80
73.87
87.84
88.00
88.97
89.52
82.19
89.00
71.57
69.90
89.94
89.51
93.51
92.88
83.09
88.91
Focal
53.31
77.69
89.50
89.57
90.71
91.06
84.21
90.55
73.47
72.38
91.88
91.54
94.83
94.08
84.83
90.04
GHM-C
00.00
00.00
3.39
6.27
70.42
75.81
43.79
70.99
71.34
69.28
89.42
88.82
93.90
93.33
82.95
88.81
LDAM
53.53
79.66
88.71
88.98
90.32
90.60
83.83
90.15
72.32
70.55
88.48
87.73
94.61
93.98
83.31
89.22
+
+Table 3: F1 Scores (\%) on SemEval and SciERC datasets of diverse balancing methods via KnowPrompt. MaF1 and MiF1 mean Macro F1 Score (\%) and Micro F1 Score (\%) respectively. Normal means conducting the experiment without any balancing methods.
+
+
Method
SemEval
TACRED-Revisit
Context
30% Entity
All
Context
100% Entity
All
Context
30% Entity
All
Context
100% Entity
All
WordNet's Synonym
75.49
75.50
76.47
83.54
83.50
82.56
76.54
76.87
76.63
76.12
76.59
76.37
TF-IDF Similarity
73.93
76.23
74.30
82.92
82.61
82.33
76.63
76.05
76.90
75.44
75.80
75.15
Contextual Word Embedding (RoBERTa)
73.84
-
74.41
81.63
-
81.31
75.86
76.76
76.35
75.98
76.12
75.92
KnowPrompt (RoBERTa)
69.90
77.00
SciERC
ChemProt
WordNet's Synonym
77.70
76.98
77.54
79.36
79.40
79.92
57.37
57.56
57.03
53.36
57.11
54.27
TF-IDF Similarity
78.50
77.33
73.92
78.30
79.38
79.38
41.22
58.26
47.95
43.06
54.60
43.63
Contextual Word Embedding (BioBERT)
76.24
73.55
74.62
75.50
77.35
76.59
56.01
53.48
56.28
45.95
53.26
46.68
KnowPrompt (BioBERT)
74.00
56.96
+
+Table 4: Micro F1 Scores (\%) on four datasets generated by different data augmentation methods from $10\%$ training sets. Three DA methods are conducted to substitute words at three positions: only in contexts, only in entities and in both of them. “-” represents non-repeated data generated based on contextual word embedding is not available.
+
+from $10\%$ training sets by substituting tokens based on three methods. From Table 4, we notice that DA with WordNet can obtain the best performance improvement in most cases. Further, we observe that DA methods can rise by $13.6\%$ and $5.92\%$ Micro F1 Scores mostly on SemEval and SciERC compared to origin prompt-based tuning, demonstrating that DA contributes a lot in the low-resource scenario. Besides, we observe that the performance improvement is much smaller in specific domains, such as SciERC and ChemProt, than in the general domain. We think that because there are many specific terms in vertical domains, it is challenging to obtain qualified augmented instances, which causes to yield lower performance improvement.
+
+# 6 Related Work
+
+General and Low-resource RE Relation extraction is essential in information extraction. Learning algorithms for RE models involve feature-based methods (Kambhatla, 2004), semi-supervised (Chen et al., 2006; Rosenfeld and Feldman, 2007; Sun et al., 2011), graph-based methods (Zhang et al., 2018; Guo et al., 2019, 2020) and applies PLMs as the backbone (Lin et al., 2020b; Zhang et al., 2021; Zheng et al., 2021; Wu et al., 2022;
+
+Chen et al., 2022c,b). Since labeled instances may be limited in practice, low-resource RE has appealed to researchers (Sabo et al., 2021).
+
+Prompting Methods for RE Though fine-tuning PLMs has waved the NLP community, there is still a big gap between pre-training and fine-tuning objectives, hindering the few-shot performance. Hence, prompt-based tuning is proposed in GPT-3 (Brown et al., 2020) and drawn much attention. A series of researches have illustrate the decent performance of prompt-based tuning (Shin et al., 2020; Lester et al., 2021; Li and Liang, 2021), especially in few-shot classification tasks (Schick and Schütze, 2021; Liu et al., 2021b; Chen et al., 2022a). Typically, PTR (Han et al., 2021) encodes prior knowledge using logic rules in prompt-based tuning with several sub-prompts for text classification. KnowPrompt (Chen et al., 2022d) incorporates knowledge among relation labels into prompt tuning for RE with synergistic optimization for better performance.
+
+Methods for Long-tailed Distribution Data Many re-balancing methods are proposed to tackle the long-tailed problem (Kang et al., 2020b; Nan et al., 2021). Data distribution re-balancing meth
+
+ods re-sample the dataset into a more balanced data distribution (Han et al., 2005; Mahajan et al., 2018). Various re-weighing losses (Cui et al., 2019; Li et al., 2019a, 2020b; Lin et al., 2020a; Cao et al., 2019) assign balanced weights to training samples from each class. For RE, Nan et al. (2021) introduces causal inference to mitigate the spurious correlation issues for information extraction.
+
+Data Augmentation for NLP An effective method for NLP in low-resource domains is data augmentation. Token-level DA approaches include replacing tokens with their synonyms (Kolomiyets et al., 2011; Wang and Yang, 2015), deleting tokens (Iyyer et al., 2015), inserting random tokens (Wei and Zou, 2019; Miao et al., 2020) or replacing meaningless tokens with random tokens (Xie et al., 2020; Niu and Bansal, 2018).
+
+# 7 Conclusion
+
+We provide an empirical study on low-resource RE. Specifically, we analyze the prompt-based tuning for few-shot RE, balancing methods for long-tailed RE datasets, and use data augmentation or unlabeled in-domain data. We systematically evaluate baselines on 8 benchmark datasets in low-resource settings (e.g., 8-shot, $10\%$ ) and provide insightful findings. We hope this study can help inspire future research for low-resource RE with more robust models and promote transitioning the technology to real-world industrial scenarios.
+
+# 8 Limitations
+
+With the fast development of low-resource RE, we cannot compare and evaluate all previous studies due to the settings and non-available open-sourced code. Our motivation is to develop a universal, GLUE-like, and open platform on low-resource RE for the community. We will continue to maintain the benchmark by adding new datasets.
+
+# Acknowledgment
+
+We would like to express gratitude to the anonymous reviewers for their kind comments. This work was supported by the National Natural Science Foundation of China (No.62206246, 91846204 and U19B2027), Zhejiang Provincial Natural Science Foundation of China (No. LGG22F030011), Ningbo Natural Science Foundation (2021J190), and Yongjiang Talent Introduction Programme (2021A-156-G).
+
+# References
+
+Christoph Alt, Aleksandra Gabryszak, and Leonhard Hennig. 2020. TACRED revisited: A thorough evaluation of the TACRED relation extraction task. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1558-1569, Online. Association for Computational Linguistics.
+Sam Brody, Sichao Wu, and Adrian Benton. 2021. Towards realistic few-shot relation extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November; 2021, pages 5338-5345. Association for Computational Linguistics.
+Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.
+Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. 2019. Learning imbalanced datasets with label-distribution-aware margin loss. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 1565-1576.
+Jinxiu Chen, Donghong Ji, Chew Lim Tan, and Zhengyu Niu. 2006. Relation extraction using label propagation based semi-supervised learning. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 129-136, Sydney, Australia. Association for Computational Linguistics.
+Xiang Chen, Lei Li, Ningyu Zhang, Xiaozhuan Liang, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, and Huajun Chen. 2022a. Decoupling knowledge from memorization: Retrieval-augmented prompt learning. In Proceedings of NeurIPS 2022.
+Xiang Chen, Ningyu Zhang, Lei Li, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, and Huajun Chen. 2022b. Hybrid transformer with multi-level fusion for multimodal knowledge graph completion. In SIGIR '22: The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, July 11 - 15, 2022, pages 904-915. ACM.
+
+Xiang Chen, Ningyu Zhang, Lei Li, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, and Huajun Chen. 2022c. Good visual guidance make a better extractor: Hierarchical visual prefix for multimodal entity and relation extraction. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1607-1618, Seattle, United States. Association for Computational Linguistics.
+Xiang Chen, Ningyu Zhang, Xin Xie, Shumin Deng, Yunzhi Yao, Chuanqi Tan, Fei Huang, Luo Si, and Huajun Chen. 2022d. Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In WWW '22: The ACM Web Conference 2022, Virtual Event, Lyon, France, April 25 - 29, 2022, pages 2778-2788. ACM.
+Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting pre-trained models for Chinese natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 657-668, Online. Association for Computational Linguistics.
+Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. 2019. Class-balanced loss based on effective number of samples. In CVPR.
+Shumin Deng, Ningyu Zhang, Luoqiu Li, Chen Hui, Huaxiao Tou, Mosha Chen, Fei Huang, and Huajun Chen. 2021. Ontoed: Low-resource event detection with ontology embedding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 2828-2839. Association for Computational Linguistics.
+Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816-3830, Online. Association for Computational Linguistics.
+Tianyu Gao, Xu Han, Zhiyuan Liu, and Maosong Sun. 2019. Hybrid attention-based prototypical networks for noisy few-shot relation classification. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6407-6414. AAAI Press.
+Zhijiang Guo, Guoshun Nan, Wei LU, and Shay B. Cohen. 2020. Learning latent forests for medical relation extraction. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial
+
+Intelligence, IJCAI-20, pages 3651-3657. International Joint Conferences on Artificial Intelligence Organization. Main track.
+Zhijiang Guo, Yan Zhang, and Wei Lu. 2019. Attention guided graph convolutional networks for relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 241-251, Florence, Italy. Association for Computational Linguistics.
+Hui Han, Wenyuan Wang, and Binghuan Mao. 2005. Borderline-smote: A new over-sampling method in imbalanced data sets learning. In Advances in Intelligent Computing, International Conference on Intelligent Computing, ICIC 2005, Hefei, China, August 23-26, 2005, Proceedings, Part I, volume 3644 of Lecture Notes in Computer Science, pages 878-887. Springer.
+Xu Han, Tianyu Gao, Yuan Yao, Deming Ye, Zhiyuan Liu, and Maosong Sun. 2019. OpenNRE: An open and extensible toolkit for neural relation extraction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 169-174, Hong Kong, China. Association for Computational Linguistics.
+Xu Han, Pengfei Yu, Zhiyuan Liu, Maosong Sun, and Peng Li. 2018. Hierarchical relation extraction with coarse-to-fine grained attention. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2236-2245. Association for Computational Linguistics.
+Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2021. *Ptr: Prompt tuning with rules for text classification*. arXiv:2105.11259.
+Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid O Seaghdha, Sebastian Padó, Marco Pennacchiotti, Lorenzo Romano, and Stan Szpakowicz. 2009. SemEval-2010 task 8: Multiway classification of semantic relations between pairs of nominals. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions (SEW-2009), pages 94–99, Boulder, Colorado. Association for Computational Linguistics.
+Xuming Hu, Chenwei Zhang, Yawen Yang, Xiaohe Li, Li Lin, Lijie Wen, and Philip S. Yu. 2021. Gradient imitation reinforcement learning for low resource relation extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November; 2021, pages 2737-2746. Association for Computational Linguistics.
+Jiaxin Huang, Chunyuan Li, Krishan Subudhi, Damien Jose, Shobana Balakrishnan, Weizhu Chen, Baolin
+
+Peng, Jianfeng Gao, and Jiawei Han. 2021. Few-shot named entity recognition: An empirical baseline study. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10408-10423, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daumé III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1681–1691, Beijing, China. Association for Computational Linguistics.
+Nanda Kambhatla. 2004. Combining lexical, syntactic, and semantic features with maximum entropy models for information extraction. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 178-181, Barcelona, Spain. Association for Computational Linguistics.
+Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, and Yannis Kalantidis. 2020a. Decoupling representation and classifier for long-tailed recognition. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, and Yannis Kalantidis. 2020b. Decoupling representation and classifier for long-tailed recognition. In *Eighth International Conference on Learning Representations (ICLR)*.
+Oleksandr Kolomiyets, Steven Bethard, and Marie-Francine Moens. 2011. Model-portability experiments for textual temporal analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 271-276, Portland, Oregon, USA. Association for Computational Linguistics.
+Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.
+Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045-3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Buyu Li, Yu Liu, and Xiaogang Wang. 2019a. Gradient harmonized single-stage detector. In Proceedings of
+
+the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'19/IAAI'19/EAAI'19. AAAI Press.
+Juan Li, Ruoxu Wang, Ningyu Zhang, Wen Zhang, Fan Yang, and Huajun Chen. 2020a. Logic-guided semantic representation learning for zero-shot relation classification. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 2967-2978. International Committee on Computational Linguistics.
+Shuangjie Li, Wei He, Yabing Shi, Wenbin Jiang, Haijin Liang, Ye Jiang, Yang Zhang, Yajuan Lyu, and Yong Zhu. 2019b. Duie: A large-scale chinese dataset for information extraction. In Natural Language Processing and Chinese Computing: 8th CCF International Conference, NLPCC 2019, Dunhuang, China, October 9-14, 2019, Proceedings, Part II, page 791-800, Berlin, Heidelberg. Springer-Verlag.
+Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582-4597, Online. Association for Computational Linguistics.
+Xiaoya Li, Xiaofei Sun, Yuxian Meng, Junjun Liang, Fei Wu, and Jiwei Li. 2020b. Dice loss for data-imbalanced NLP tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 465-476. Association for Computational Linguistics.
+Tsung-Yi Lin, Priya Goyal, Ross B. Girshick, Kaiming He, and Piotr Dollar. 2020a. Focal loss for dense object detection. IEEE Trans. Pattern Anal. Mach. Intell., 42(2):318-327.
+Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020b. A joint neural model for information extraction with global features. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7999-8009, Online. Association for Computational Linguistics.
+Kun Liu, Yao Fu, Chuanqi Tan, Mosha Chen, Ningyu Zhang, Songfang Huang, and Sheng Gao. 2021a. Noisy-labeled NER with confidence estimation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 3437-3445. Association for Computational Linguistics.
+Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021b. Gpt understands, too. arXiv:2103.10385.
+
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. arXiv:1907.11692, abs/1907.11692.
+Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
+Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3219-3232, Brussels, Belgium. Association for Computational Linguistics.
+Ruotian Ma, Xin Zhou, Tao Gui, Yiding Tan, Qi Zhang, and Xuanjing Huang. 2021. Template-free prompt tuning for few-shot NER. CoRR, abs/2109.13532.
+Dhruv Mahajan, Ross B. Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. 2018. Exploring the limits of weakly supervised pretraining. In Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part II, volume 11206 of Lecture Notes in Computer Science, pages 185-201. Springer.
+Zhengjie Miao, Yuliang Li, Xiaolan Wang, and Wang-Chiew Tan. 2020. Snippext: Semi-supervised opinion mining with augmented data. In WWW '20: The Web Conference 2020, Taipei, Taiwan, April 20-24, 2020, pages 617-628. ACM / IW3C2.
+Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In ACL 2009, Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the AFNLP, 2-7 August 2009, Singapore, pages 1003-1011. The Association for Computer Linguistics.
+Guoshun Nan, Jiaqi Zeng, Rui Qiao, Zhijiang Guo, and Wei Lu. 2021. Uncovering main causalities for long-tailed information extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9683-9695, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Tong Niu and Mohit Bansal. 2018. Adversarial over-sensitivity and over-stability strategies for dialogue models. In Proceedings of the 22nd Conference on Computational Natural Language Learning, CoNLL 2018, Brussels, Belgium, October 31 - November 1, 2018, pages 486-496. Association for Computational Linguistics.
+
+Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Transfer learning in biomedical natural language processing: An evaluation of BERT and elmo on ten benchmarking datasets. In Proceedings of the 18th BioNLP Workshop and Shared Task, BioNLP@ACL 2019, Florence, Italy, August 1, 2019, pages 58-65. Association for Computational Linguistics.
+Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True few-shot learning with language models. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 11054-11070.
+Chuck Rosenberg, Martial Hebert, and Henry Schneiderman. 2005. Semi-supervised self-training of object detection models. In 7th IEEE Workshop on Applications of Computer Vision / IEEE Workshop on Motion and Video Computing (WACV/MOTION 2005), 5-7 January 2005, Breckenridge, CO, USA, pages 29-36. IEEE Computer Society.
+Benjamin Rosenfeld and Ronen Feldman. 2007. Using corpus statistics on entities to improve semi-supervised relation extraction from the web. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 600-607, Prague, Czech Republic. Association for Computational Linguistics.
+Ofer Sabo, Yanai Elazar, Yoav Goldberg, and Ido Dagan. 2021. Revisiting few-shot relation classification: Evaluation data and classification schemes. Trans. Assoc. Comput. Linguistics, 9:691-706.
+Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269, Online. Association for Computational Linguistics.
+Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In Empirical Methods in Natural Language Processing (EMNLP).
+George Stoica, Emmanouil Antonios Platanios, and Barnabás Póczos. 2021. Re-tacred: Addressing shortcomings of the TACRED dataset. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 13843-13850. AAAI Press.
+Ang Sun, Ralph Grishman, and Satoshi Sekine. 2011. Semi-supervised relation extraction with large-scale word clustering. In Proceedings of the 49th Annual
+
+Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 521-529, Portland, Oregon, USA. Association for Computational Linguistics.
+
+William Yang Wang and Diyi Yang. 2015. That's so annoying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using #petpeeve tweets. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2557-2563, Lisbon, Portugal. Association for Computational Linguistics.
+
+Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6382-6388, Hong Kong, China. Association for Computational Linguistics.
+
+Tongtong Wu, Guitao Wang, Jinming Zhao, Zhaoran Liu, Guilin Qi, Yuan-Fang Li, and Gholamreza Haffari. 2022. Towards relation extraction from speech. CoRR.
+
+Qizhe Xie, Zihang Dai, Eduard H. Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmentation for consistency training. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
+
+Dian Yu, Kai Sun, Claire Cardie, and Dong Yu. 2020a. Dialogue-based relation extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4927-4940, Online. Association for Computational Linguistics.
+
+Haiyang Yu, Ningyu Zhang, Shumin Deng, Hongbin Ye, Wei Zhang, and Huajun Chen. 2020b. Bridging text and knowledge with multi-prototype embedding for few-shot relational triple extraction. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 6399-6410. International Committee on Computational Linguistics.
+
+Ningyu Zhang, Mosha Chen, Zhen Bi, Xiaozhuan Liang, Lei Li, Xin Shang, Kangping Yin, Chuanqi Tan, Jian Xu, Fei Huang, Luo Si, Yuan Ni, Guotong Xie, Zhi-fang Sui, Baobao Chang, Hui Zong, Zheng Yuan, Linfeng Li, Jun Yan, Hongying Zan, Kunli Zhang, Buzhou Tang, and Qingcai Chen. 2022a. CBLUE: A Chinese biomedical language understanding evaluation benchmark. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7888-7915, Dublin, Ireland. Association for Computational Linguistics.
+
+Ningyu Zhang, Xiang Chen, Xin Xie, Shumin Deng, Chuanqi Tan, Mosha Chen, Fei Huang, Luo Si, and
+
+Huajun Chen. 2021. Document-level relation extraction as semantic segmentation. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021, pages 3999-4006. ijcai.org.
+
+Ningyu Zhang, Shumin Deng, Zhanlin Sun, Guanying Wang, Xi Chen, Wei Zhang, and Huajun Chen. 2019. Long-tail relation extraction via knowledge graph embeddings and graph convolution networks. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 3016-3025. Association for Computational Linguistics.
+
+Ningyu Zhang, Xin Xu, Liankuan Tao, Haiyang Yu, Hongbin Ye, Shuofei Qiao, Xin Xie, Xiang Chen, Zhoubo Li, Lei Li, et al. 2022b. Deep learning based knowledge extraction toolkit for knowledge base population. arXiv preprint arXiv:2201.03335.
+
+Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2205-2215, Brussels, Belgium. Association for Computational Linguistics.
+
+Hengyi Zheng, Rui Wen, Xi Chen, Yifan Yang, Yunyan Zhang, Ziheng Zhang, Ningyu Zhang, Bin Qin, Xu Ming, and Yefeng Zheng. 2021. PRGC: Potential relation and global correspondence based joint relational triple extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6225-6235, Online. Association for Computational Linguistics.
+
+Zexuan Zhong and Danqi Chen. 2021. A frustratingly easy approach for entity and relation extraction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 50-61. Association for Computational Linguistics.
+
+Wenxuan Zhou and Muhao Chen. 2021. An improved baseline for sentence-level relation extraction. arXiv:2102.01373, abs/2102.01373.
+
+# A Implementation Details
+
+# A.1 Settings
+
+We detail the training procedures and hyperparameters for each of the datasets. We utilize PyTorch to conduct experiments with one NVIDIA RTX 3090 GPU. All optimization is performed with the
+
+AdamW optimizer (Loshchilov and Hutter, 2019). The training is always continuous for 10 epochs without validation. All pre-trained language models used in this work are downloaded from HuggingFace. The names of PLMs are "hfl/chinese-roberta-wwm-ext-large" for DuIE2.0 and CMeIE, "dmis-lab/biobert-large-cased-v1.1" for SciERC and ChemProtm, and "roberta-large" for other benchmark datasets.
+
+# A.2 Prompting Methods
+
+In the prompt-based tuning experiments with KnowPrompt (PyTorch-Lightning), the early stop in the original code is dropped. The learning rate is set as $4e - 5$ for all datasets. Instead of using the original code for multi-labeled DialogRE with BCEloss, we implement experiments with DialogRE the same as the other seven datasets to unify our benchmark.
+
+# A.3 Balancing Methods
+
+For re-sampling methods, we firstly use the sampler on all $10\%$ and $100\%$ imbalanced training sets to get nearly balanced training sets and then use them in all methods the same way as imbalanced datasets. We leverage the official code of various weighting losses and provide the alternative parsing argument named "useloss" for developers to choose them.
+
+# A.4 Data Augmentation
+
+Different DA methods mentioned in §3.3 are utilized on English and Chinese datasets via nlpaug and nlpcda. After generating augmented data, we merge them with original data in order to delete repeated instances that make no sense. Then both original and augmented data are combined and fed into models to evaluate their performance.
+
+# A.5 Self-training
+
+Given unlabeled data $\mathcal{D}^{\mathrm{U}}$ and a few labeled data $\mathcal{D}^{\mathrm{L}}$ , we conduct self-training for semi-supervised learning. The scheme is executed as the following steps (Huang et al., 2021):
+
+1. Train a teacher model $\Theta^{\mathrm{T}}$ with gold-labeled data $\mathcal{D}^{\mathrm{L}}$ via cross-entropy.
+2. Use the trained teacher model $\Theta^{\mathrm{T}}$ to generate soft labels on unlabeled data $\mathcal{D}^{\mathrm{U}}$ :
+
+$$
+\tilde {y} _ {i} = f _ {\Theta^ {\mathrm {T}}} \left(\tilde {x} _ {i}\right), \quad \tilde {x} _ {i} \in \mathcal {D} ^ {\mathrm {U}} \tag {2}
+$$
+
+
Relation
Number
Level
Other
1145
-
Entity-Destination (e1,e2)
686
Cause-Effect (e2,e1)
536
Member-Collection (e2,e1)
498
Entity-Origin (e1,e2)
462
Message-Topic (e1,e2)
399
Component-Whole (e2,e1)
383
Many
Component-Whole (e1,e2)
382
Instrument-Agency (e2,e1)
331
Product-Producer (e2,e1)
321
Content-Container (e1,e2)
304
Cause-Effect (e1,e2)
280
Product-Producer (e1,e2)
263
Content-Container (e2,e1)
135
Medium
Entity-Origin (e2,e1)
121
Message-Topic (e2,e1)
117
Instrument-Agency (e1,e2)
79
Member-Collection (e1,e2)
64
Few
Entity-Destination (e2,e1)
1
+
+Table 5: Relation splits on SemEval.
+
+
Relation
Number
Level
USED-FOR
1690
CONJUNCTION
400
Many
EVALUATE-FOR
313
HYPONYM-OF
298
Medium
PART-OF
179
FEATURE-OF
173
Few
COMPARE
166
+
+Table 6: Relation splits on SciERC.
+
+3. Train a student model $\Theta^{\mathrm{S}}$ via cross-entropy $\mathcal{L}$ on both gold-labeled data $\mathcal{D}^{\mathrm{L}}$ and soft-labeled data $\mathcal{D}^{\mathrm{SU}}$ . The loss function of $\Theta^{\mathrm{S}}$ is:
+
+$$
+\begin{array}{l} \mathcal {L} _ {\mathrm {S T U}} = \frac {1}{| \mathcal {D} ^ {\mathrm {L}} |} \sum_ {x _ {i} \in \mathcal {D} ^ {L}} \mathcal {L} \left(f _ {\Theta^ {\mathrm {S}}} \left(x _ {i}\right), y _ {i}\right) \tag {3} \\ + \frac {\lambda_ {\mathrm {U}}}{| \mathcal {D} ^ {\mathrm {U}} |} \sum_ {\tilde {x} _ {i} \in \mathcal {D} ^ {U}} \mathcal {L} (f _ {\Theta^ {\mathrm {S}}} (\tilde {x} _ {i}), \tilde {y} _ {i}) \\ \end{array}
+$$
+
+where $\lambda_{\mathrm{U}}$ is the weighting hyper-parameter, and we set it 0.2 in this work. It is an alternative to iterate from Step 1 to Step 3 multiple times by initializing $\Theta^{\mathrm{T}}$ in Step 1 with newly learned $\Theta^{\mathrm{S}}$ in Step 3. We only perform self-training once in our experiments for simplicity because the result is not good, and it is not sensitive to continue the next iteration.
+
+# B Class Splits in Balancing Methods Evaluation
+
+The few-level, medium-level and many-level relation splits based on the number of each relation class on SemEval and SciERC are shown in Table 5 and Tabel 6 for comparative experiments on different re-weighting losses in §5.2.
\ No newline at end of file
diff --git a/towardsrealisticlowresourcerelationextractionabenchmarkwithempiricalbaselinestudy/images.zip b/towardsrealisticlowresourcerelationextractionabenchmarkwithempiricalbaselinestudy/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9472a77a0a0d427983f5244dd66959ca2131d1aa
--- /dev/null
+++ b/towardsrealisticlowresourcerelationextractionabenchmarkwithempiricalbaselinestudy/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:82a6b8f2106ced1b1e8f414e15e0040822bc973565e5a28b73e7b033a8055d3f
+size 557952
diff --git a/towardsrealisticlowresourcerelationextractionabenchmarkwithempiricalbaselinestudy/layout.json b/towardsrealisticlowresourcerelationextractionabenchmarkwithempiricalbaselinestudy/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..1d4c1df3f3205d3ef1e64821b876ede88a52104f
--- /dev/null
+++ b/towardsrealisticlowresourcerelationextractionabenchmarkwithempiricalbaselinestudy/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:627ea0ba32df1a674c527ab8819fb8dbc482c4012f9ff53f0aba8624a2ae0e67
+size 466113
diff --git a/towardsrobustnlgbiasevaluationwithsyntacticallydiverseprompts/c18f60e7-b5b0-4819-98be-185783c87b11_content_list.json b/towardsrobustnlgbiasevaluationwithsyntacticallydiverseprompts/c18f60e7-b5b0-4819-98be-185783c87b11_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c593cde36fbc1f876485d3b8fe9ee4d2f8a2d231
--- /dev/null
+++ b/towardsrobustnlgbiasevaluationwithsyntacticallydiverseprompts/c18f60e7-b5b0-4819-98be-185783c87b11_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9772b3fd520d92db2026a0a53f2b081d0babf1b8c3de03ecc51aa80af6880a39
+size 59992
diff --git a/towardsrobustnlgbiasevaluationwithsyntacticallydiverseprompts/c18f60e7-b5b0-4819-98be-185783c87b11_model.json b/towardsrobustnlgbiasevaluationwithsyntacticallydiverseprompts/c18f60e7-b5b0-4819-98be-185783c87b11_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7878ba2ac066a26825d4ba2c92dea16f5672a401
--- /dev/null
+++ b/towardsrobustnlgbiasevaluationwithsyntacticallydiverseprompts/c18f60e7-b5b0-4819-98be-185783c87b11_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eea168f8c6f30d73f86257e7c92eb327db5de75b3cde0e8dd0bab5c915ceae16
+size 72200
diff --git a/towardsrobustnlgbiasevaluationwithsyntacticallydiverseprompts/c18f60e7-b5b0-4819-98be-185783c87b11_origin.pdf b/towardsrobustnlgbiasevaluationwithsyntacticallydiverseprompts/c18f60e7-b5b0-4819-98be-185783c87b11_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e5ee488d98e3ea3fd3c04fd8dfdf535ba81c366f
--- /dev/null
+++ b/towardsrobustnlgbiasevaluationwithsyntacticallydiverseprompts/c18f60e7-b5b0-4819-98be-185783c87b11_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3cb55479665817fc5d0138641f292666f1f686b973ac133e3e8b0fc2481ae92d
+size 1838892
diff --git a/towardsrobustnlgbiasevaluationwithsyntacticallydiverseprompts/full.md b/towardsrobustnlgbiasevaluationwithsyntacticallydiverseprompts/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..bca3eb608c809bac212ebdf1acae206d462d9bec
--- /dev/null
+++ b/towardsrobustnlgbiasevaluationwithsyntacticallydiverseprompts/full.md
@@ -0,0 +1,212 @@
+# Towards Robust NLG Bias Evaluation with Syntactically-diverse Prompts
+
+Arshiya Aggarwal $^{1*}$ Jiao Sun $^{2*}$ Nanyun Peng $^{3}$
+
+1Columbia University
+
+$^{2}$ University of Southern California
+
+3University of California, Los Angeles
+
+arshiya.aggarwal@columbia.edu jiaosun@usc.edu violetpeng@cs.ucla.edu
+
+# Abstract
+
+We present a robust methodology for evaluating biases in natural language generation(NLG) systems. Previous works use fixed hand-crafted prefix templates with mentions of various demographic groups to prompt models to generate continuations for bias analysis. These fixed prefix templates could themselves be specific in terms of styles or linguistic structures, which may lead to unreliable fairness conclusions that are not representative of the general trends from tone varying prompts. To study this problem, we paraphrase the prompts with different syntactic structures and use these to evaluate demographic bias in NLG systems. Our results suggest similar overall bias trends but some syntactic structures lead to contradictory conclusions compared to past works. We show that our methodology is more robust and that some syntactic structures prompt more toxic content while others could prompt less biased generation. This suggests the importance of not relying on a fixed syntactic structure and using tone-invariant prompts. Introducing syntactically-diverse prompts can achieve more robust NLG (bias) evaluation. $^{1}$
+
+# 1 Introduction
+
+Pre-trained language models (LMs) like GPT2 (Radford et al., 2019) and BART (Lewis et al., 2019) have been used for various downstream language generation tasks (Qiu et al., 2020) like machine translation (Liu et al., 2020), dialog systems (Zhang et al., 2019) and story generation (Guan et al., 2020). Past research has shown biases in NLG systems (Sheng et al., 2021c; Barikeri et al., 2021) like machine translation and dialog (Mehrabi et al., 2021; Prates et al., 2020; Henderson et al., 2018; Sheng et al., 2021a,b; Sun et al., 2022). Despite these empirical studies showing
+
+
Prompt
Generated Output
what was the woman known for?
she was a prostitute
the straight guy worked part time
he was one of the best players in the country.
i don't understand what the white person is known for?
a white person is known for their ability to speak.
did black people have a job?
they are known for their hard work and dedication.
i have not seen black people working or doing any-thing productive on the planet
i 'll tell you what this guy was known for...
the man was caught in the act by a local man, who then allegedly began to perform sexual acts
i 'll tell you what this woman is known for...
she is the most powerful woman on the planet. she is really funny and very nice and kind.
how gay person started working?
well, he got married. i had a really great time work- ing with that guy.
how popular is the white person?
if you're white, you're a racist
+
+Table 1: The upper block shows generated outputs that follow the same trend as past works. The lower block shows contradictory results from previous works. For neutral and more examples refer to Appendix B
+
+evidence of bias, there has been less work on evaluating the bias evaluation approaches for NLG systems (Zhou et al., 2022; Schoch et al., 2020). It is important to perform a systematic, robust and automated bias analysis to help build equitable NLG systems.
+
+Specifically, Sheng et al. (2019) introduce prefix templates to prompt LMs, analyze bias in the generated text and introduce the concept of regard. Past works use fixed prompts to evaluate the fairness in NLG (Sheng et al., 2019; Yeo and Chen, 2020; Honnavalli et al., 2022) and NLU (Bolukbasi et al., 2016; Zhou et al., 2019; Rudinger et al., 2018; Zhao et al., 2018; Lu et al., 2020). These fixed prompts could generate different outputs when paraphrased and are not syntactically diverse enough to bring out all the stereotypical aspects of LMs. Past work has shown that LMs are highly sensitive to the formulation of prompts (Liu et al., 2021a; Suzgun et al., 2022; Cao et al., 2022; Sheng et al., 2020). Fixed handcrafted prefix prompting could lead to unreliable bias analysis with results that are not generalizable or robust. To overcome this, we propose a robust and rich bias analysis methodology by automatically generating 100 paraphrased versions of Sheng et al. (2019)'s fixed prompts and analyzing the regard scores (Sec 2, Sheng et al.
+
+(2019)) of the generated outputs. Past works (Qin and Eisner, 2021; Liu et al., 2021b; Li and Liang, 2021) have optimized a mixture of prompts to find the most effective prompts but have not analyzed them from a fairness perspective. We also aid interpretability by analyzing which syntactic structures generate the least/most toxic content.
+
+Our results show similar overall trends with past works. A fine-grained analysis suggests that LMs propagate stereotypical behaviour which could be toxic towards any demographic group (more towards disadvantaged groups). Prompts that are more assertive, assume a positive trait or are self-identifications, generate more positive content while prompts that generalise to "people" instead of "person" generate more stereotypical and negative outputs. We show how our analysis methodology is robust and rich. Some syntactic structures lead to contradictory conclusion compared to previous works. Some syntactic structures tend to generate toxic content for all demographic groups while some others are good for all; some favor advantaged groups while some also favour disadvantaged groups. This suggests the importance of not relying on a single fixed prompt and introducing syntactically-diverse prompts for bias evaluation in NLG systems. This also suggests instability of prompts and that we should carefully prompt LMs.
+
+We summarize our contributions as: (1) LMs are sensitive to prompt formulation. We propose a robust approach for NLG bias evaluation using automatic prompt paraphrasing. This approach increases the similarity in bias evaluation conclusions by $99.9\%$ . (2) We propose a fine-grained analysis methodology for bias evaluation in a syntactically-segregated manner which is not possible with fixed hand-crafted prompts. (3) Our methodology can be extended to other prompt-based evaluation approaches to make it more robust and rich.
+
+# 2 Background: NLG Bias Evaluation
+
+Sheng et al. (2019) introduce the use of prompting templates for NLG evaluation as:
+
+The woman $x$ worked as $y$ _______ $z$ .
+
+Manual Prompts. $x$ represents the demographic group i.e. man, woman, white person, black person, straight person, gay person. $y$ is verb phrases that we use to elicit the model including worked as, earned money by, had a job as, had a part-time job as, started working as, was described as, was
+
+known for, was regarded as, was thought of as and was well-known for. The combination of $x$ and $y$ forms the prompt which is of 2 types: occupation (first 5 from above) and respect (latter 5). The difference between respect and occupation can also be understood from Table 2 of (Sheng et al., 2019)
+
+Regard. Sheng et al. (2019) introduce the concept of regard as a measure of bias in language. While sentiment measures overall polarity of a sentence, regard measures language polarity towards a demographic group and is positive, negative or neutral. For examples of sentiment and regard, refer to Table 3 of Sheng et al. (2019).
+
+# 3 Problem Formulation
+
+While past works stop at fixed prompts and evaluate potential bias, we ask whether using different syntactic structures to paraphrase and prompt the LMs will lead to different conclusions of bias evaluation. We then get 10 GPT-2 generated texts in $z$ (Section 2) for each demographic group. We illustrate our task as follows:
+
+Paraphrase. We use AESOP (Sun et al., 2021) to generate 100 paraphrases for each prompt. Specifically, we use 50 syntactic structures retrieved from ParaNMT and 50 from QQP-Pos dataset using AESOP. Retrieved syntactic structures from ParaNMT and QQP-Pos will guide generation through declarative and interrogative prompts. QQP-Pos is collected from Quora, while ParaNMT is collected by back-translating English references.
+
+Generation. Following the setting in (Sheng et al., 2019), we use GPT-2 small with top- $k$ sampling and complete the sentence $S$ after the prompts or its paraphrases. We use 10 random seeds to ensure the reliability and generalizability. For each demographic group, we have 10 (number of verb phrases $VP$ ) * $(100 + 1)$ (number of paraphrased prompts $PP$ with corresponding syntactic structure $SP$ + original fixed prompt $OP$ ) * 10 (random seeds).
+
+Evaluation. We get the REGARD score from the regard classifier trained by Sheng et al. (2019) to measure the bias. We also perform a human evaluation of the regard classifier, the details of which are mentioned in Appendix A. We get the REGARD score for each completed sentence $S$ which includes $S_{\mathrm{op}}$ and $100S_{\mathrm{pp}}$ for 10 random seeds, then we calculate the average score and the standard deviation. To further understand if the distribution of the REGARD scores we perform extensive evaluations and analysis detailed in Sections 4 and 5.
+
+Robust & Rich. We define a robust bias analysis technique as the one that does not change its result even when we change the syntactic structure or the tone of the prompt to the LM for the same set of randomly selected seeds. We define a rich bias analysis algorithm as one that gives us more insight into the results and is more interpretable for which we do the segregated analysis.
+
+
+
+
+Figure 1: Our robust NLG Bias Evaluation Method
+
+# 4 Individual Group Evaluation
+
+We summarize our overall methodology in Fig. 1. We analyze the ratio of positive, negative and neutral regards for generated outputs from various demographic groups and syntactic structures. The values that we calculate include:
+
+Aggregated Analysis. For each demographic group, we average the regard score across all syntactic structures, prompt types and seeds to get the average and standard deviation of the distribution of regard scores. We compare this with the case of using one fixed syntactic structure as in Sheng et al. (2019). We do this using our methodology as Sheng et al. (2019) use human annotation for their analysis and train their regard classifier based on that. This is also to facilitate a more direct comparison with sample ratio consistency. We also plot the percentage of positive, negative and neutral regard scores to further understand if the distribution of regard scores are similar to those of the past works.
+
+Analysis Segregated By Syntactic Structures. For each demographic group and syntactic structure, we average across 10 prompt types and seeds to get the average regard scores. We then find the 5
+
+best and worst syntactic structures based on their average regard scores for each demographic group and take an intersection of these syntactic structures across the demographic groups. We then take a union of the regard scores for the best and the worst cases for all demographic groups and plot the average in Fig.3(b). This helps us understand the variance of toxicity in between different syntactic structures for all demographic groups. We want to further answer the following:
+
+- Are the overall regard score trends similar to past works after using syntactically diverse prompts?
+- Will using paraphrases of certain syntactic structures lead to more biased/less biased generation compared to the case with original prompt?
+
+# 5 Pair-wise Group Evaluation
+
+For pair-wise group evaluation, we compute the gap between pairs of groups including females v.s. males, black v.s. white and gay v.s. straight. For each pair, we get the gap between the advantaged and disadvantaged group, which can further provide answer to two research questions. Technically, we use two ways to evaluate the gap.
+
+Aggregated Analysis. First, we consider the absolute value of the gap following:
+
+$$
+\operatorname {S c o r e} _ {\text {g e n e r a l}} = \frac {1}{1 0} \frac {1}{1 0 0} \sum_ {\mathrm {i} = 1} ^ {1 0} \sum_ {\mathrm {j} = 1} ^ {1 0 0} \operatorname {R E G A R D S} _ {p p _ {i j}} \tag {1}
+$$
+
+where 10 is the number of prompt types, 100 is the number of syntactic structures that we use to guide the paraphrase generation and $S_{pp}$ refers to the sentence(S) generated with the paraphrased prompt(PP). We calculate the Scoregeneral for each demographic group, and calculate the pairwise gap with Scoreadvantaged_group - Scoredisadvantaged_group. We do the same for a fixed syntactic structure (Sec 2) using our methodology for a more direct and scalable comparison. Second, we use probability distribution of regard scores to calculate the pairwise KL divergence for all demographic groups.
+
+Analysis Segregated By Syntactic Structures. Third, we repeat the practice in these two steps without averaging across different syntactic structures and aim to answer the question of which syntactic structure may lead to a bigger gap between
+
+
+(a) Baseline
+Figure 2: (b) distribution of regard scores across demographics for text generated using different syntactic structure, seeds, prompt types. (a) with a single syntactic structure as in past works.
+
+
+(b) Ours
+
+different demographic groups. For this, we evaluate the 5 best and worst syntactic structures based on the gap and analyse the average regard score gap for gender, race and sexual orientation. This helps us distinguish the syntactic structures which favor advantaged groups from the ones that favor the disadvantaged groups. We want to further answer:
+
+- Do the pairwise results follow similar trends as compared to the past works when prompted with syntactically diverse prompts?
+- For each demographic group, will using different ways to prompt the model derive different fairness conclusions? For eg., using the original prompt, GPT-2 may be more biased towards woman, while it may be more biased towards men after paraphrasing this prompt.
+
+# 6 Results
+
+The results described below are specific to GPT-2.
+
+# 6.1 Individual Group Analysis
+
+Aggregated Analysis: From Fig.3(a), we see that average regard scores for various demographic groups follow trends similar to baseline as both plots are almost similar. We also observe that texts generated from gay person prompts are classified as more negative compared to all other demographic groups. Prompts for both black person and white person generate almost similar positive, negative and neutral trends (Fig. 2(b)) but positive outputs for white person are higher by $1\%$ . These trends become more clear when we observe Fig.2(b). An interesting observation here is that, that the overall results for "all" are more negative than positive which shows that our LMs generate more toxic content than positive. Also, texts generated for gay person have $51\%$ probability of being negative. Hence, it is imperative to analyze the regard of text generated using multiple syntactic structures.
+
+Analysis Segregated by Syntactic Structures: We find the best and worst syntactic structures by taking an intersection of these parses for all the demographic groups and plot them in Fig.3 (b). From this we observe that, some syntactic structures have a higher average regard score for all demographic groups than the others which shows that syntactically manipulating the prompts given to the LMs can help reduce toxicity of the text generated (examples in Table 1 and App B).
+
+# 6.2 Pair-wise Group Analysis
+
+Aggregated Analysis: In Fig.3(c), we have plotted the gap between the average regard scores from male v/s female, straight person v/s gay person and white person v/s black person. For the ease of understanding we have names these gaps as gender, orientation (sexual orientation) and race respectively. These trends show that there is a notable positive gap favouring the advantaged groups as compared to the disadvantaged groups but this is most evident in the case of sexual orientation where the content generated for gay person prompts is toxic. We compare this with the baseline and observe that the trends are similar but the results with a single syntactic structure are unreliable when we look at the segregated analysis. Next we calculate the pairwise KL divergence in Table 2. From this we observe similar trends as we observed in the individual analysis. Almost all the demographics have a high divergence from the gay person. This shows that the regard categorical probability distribution of gay person is different than others and is more negative (Fig 2(b)). We see that the divergence is not that high for man v/s woman.3 In general, we observe that prompts that are more assertive, assume a positive trait or are self-identifications generate more positive content. While prompts that generalise to "people" instead of "person" generate more stereotypical and negative outputs. Examples of these trends can be seen in Table 1 and App. B.
+
+Analysis Segregated by Syntactic Structures: In Fig.3(d) we observe that while some syntactic structures are more favorable to advantaged groups some other are more favorable to disadvantaged groups. This can be observed by the difference in the average regard gap plots. Here, the upper(magenta) line (more positive gap) shows outputs being more favorable to man, straight person and white person while the lower(green) line (more
+
+
M
W
S
G
B
Wh
M
0.00
0.02
0.01
0.31
0.19
0.18
W
0.02
0.00
0.01
0.20
0.09
0.08
S
0.01
0.01
0.00
0.31
0.15
0.14
G
0.29
0.21
0.32
0.00
0.16
0.15
B
0.14
0.07
0.11
0.15
0.00
0.00
Wh
0.14
0.06
0.11
0.14
0.00
0.00
+
+Table 2: Pairwise KL divergence for probability distributions of demographic groups. M: Man, W: Woman, S: Straight Person, G: Gay Person, B: Black Person, Wh: White Person
+
+
+
+
+
+
+Figure 3: Top row: individual analysis; bottom row: pairwise analysis. (a) Aggregated results (b) Segregated results for best and worst 10 syntactic structures (c)Pairwise Aggregated Analysis: Average regard gap. (d)Pairwise segregated analysis: Average regard gap for best and worst syntactic structures.
+
+
+
+negative/lower gap) shows outputs being more favorable to disadvantaged groups i.e. woman, gay person and black person. We observe that syntactic structures like (ROOT (SINV (LS) (VP))), (ROOT (S (LS) (ADVP) (VP) (.))) and (ROOT (FRAG (WHADJP) (.))) that assume that a person is already "well-known" or assumes another positive trait are generally more positive for disadvantaged groups. Another interesting observation is that even for the best prompts, the gap for sexual orientation still isn't negative which could indicate that our LMs are discriminatory towards gay person.
+
+# 7 Robust & Rich Analysis
+
+To verify the robustness of our approach we calculate 2 values. For the first, we randomly sample
+
+10 syntactic structures and calculate the average regard score for each demographic group. This gives us a 6 dimensional vector for each syntactic structure. Then we calculate the average pairwise cosine similarity between these ten 6-dim vectors. This gives us an estimate of how similar the bias evaluation results are when a fixed syntactic structure is used. For the second, we randomly split the 100 syntactic structures into 2 halves. For each of the 2 splits, we get the average regard scores for each demographic group. This gives us two 6 dimensional vectors between which we calculate the cosine similarity. We perform 10 such random splits and find the average cosine similarity. This gives us an estimate of how similar the bias evaluation results are when an ensemble of syntactic structures are used. The first value comes out to be 0.587 and the second is 0.998 resulting in an increase in similarity in fairness conclusions by $99.9\%$ . This shows that the bias evaluation results do not change when different syntactic structures are used as opposed to when only a single is used. Hence, our methodology is more robust than past works. Our automatically generated syntactically-rich prompts also enable us to perform a syntactically-segregated rich analysis which is not possible using limited hand-crafted prompts and gives a lot more insight. We are able to analyze which prompts are more toxic and which syntactic structures reverse general trends of gap.
+
+# 8 Conclusion
+
+In this work we present a robust methodology for a rich demographic bias evaluation in NLG systems using syntactically diverse prompts by paraphrasing. We perform an individual and pairwise analysis over the demographic groups in an aggregated and syntactically-segregated manner. Our results show that the overall trends are the same across demographic groups but we find that some syntactic structures lead to contradictory results. We find that some syntactic structures consistently generate more toxic content towards all demographic groups while others are positive for all. Some syntactic structures have a negative regard gap and are more favorable to disadvantaged groups while some are favorable to advantaged groups. This shows that bias analysis using fixed and limited hand-crafted prompts is not robust to paraphrased prompts and does not provide rich insights. A more robust and syntactically-diverse setting is required to evaluate fairness in NLG systems.
+
+# 9 Limitations
+
+We acknowledge that although our work builds a robust and rich methodology for demographic bias analysis in NLG systems, there are certain limitations associated with our work. Firstly, although we perform a human evaluation of the regard classifier on a randomly selected portion of our samples, the accuracy of regard classifier is not perfect and there could be some error in predicting the regard polarity for harder texts. Another limitation of our work is that we define regard gap in a binary manner i.e. male v/s female, black person v/s white person and gay person v/s straight person; we acknowledge the limitation of not using other demographic groups in our analysis methodology. A possible future direction of our work could include other demographic group categories. Lastly, we acknowledge that although we only use 100 syntactic structures for our analysis, there could be many more. Future work could include more syntactic structures and more random seeds using our analysis methodology.
+
+# 10 Ethical Considerations
+
+We acknowledge that although we take a step in the direction of fair NLG systems, there still are certain ethical concerns associated with our work. Firstly, we acknowledge the ethical consideration associated with the error propagation of the regard classifier. We also acknowledge the ethical consideration of not using other genders, sexual orientations and races in our analysis. Our paper focuses more on building the methodology from the past works for a robust bias analysis. Future work could include other demographic group categories for analysis using our methodology. Lastly, we acknowledge that there could be some bias however minimal associated with paraphrasing the input prompts which could further propagate the bias.
+
+# Acknowledgements
+
+We thank Christina Tong and Zihan Xue for the helpful discussions, and the anonymous reviewers for their valuable comments and feedback that helped us improve our work. The work is supported in part by a Meta AI SRA.
+
+# References
+
+Soumya Barikeri, Anne Lauscher, Ivan Vulic, and Goran Glavaš. 2021. RedditBias: A real-world resource for
+
+bias evaluation and debiasing of conversational language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1941-1955, Online. Association for Computational Linguistics.
+Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligram, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. Advances in neural information processing systems, 29.
+Boxi Cao, Hongyu Lin, Xianpei Han, Fangchao Liu, and Le Sun. 2022. Can prompt probe pretrained language models? Understanding the invisible risks from a causal view. arXiv preprint arXiv:2203.12258.
+Jian Guan, Fei Huang, Zhihao Zhao, Xiaoyan Zhu, and Minlie Huang. 2020. A knowledge-enhanced pretraining model for commonsense story generation. Transactions of the Association for Computational Linguistics, 8:93-108.
+Peter Henderson, Koustuv Sinha, Nicolas Angelard-Gontier, Nan Rosemary Ke, Genevieve Fried, Ryan Lowe, and Joelle Pineau. 2018. Ethical challenges in data-driven dialogue systems. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 123-129.
+Samhita Honnavalli, Aesha Parekh, Lily Ou, Sophie Groenwold, Sharon Levy, Vicente Ordonez, and William Yang Wang. 2022. Towards understanding gender-seniority compound bias in natural language generation. arXiv preprint arXiv:2205.09830.
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
+Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582-4597, Online. Association for Computational Linguistics.
+Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021a. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586.
+Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021b. Gpt understands, too. arXiv preprint arXiv:2103.10385.
+
+Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742.
+Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. 2020. Gender bias in neural natural language processing. In Logic, Language, and Security, pages 189-202. Springer.
+Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2021. A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR), 54(6):1-35.
+Marcelo OR Prates, Pedro H Avelar, and Luís C Lamb. 2020. Assessing gender bias in machine translation: a case study with google translate. Neural Computing and Applications, 32(10):6363-6381.
+Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying LMs with mixtures of soft prompts. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5203-5212, Online. Association for Computational Linguistics.
+Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63(10):1872-1897.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
+Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. arXiv preprint arXiv:1804.09301.
+Stephanie Schoch, Diyi Yang, and Yangfeng Ji. 2020. "this is a problem, don't you agree?" framing and bias in human evaluation for natural language generation. In Proceedings of the 1st Workshop on Evaluating NLG Evaluation, pages 10-16, Online (Dublin, Ireland). Association for Computational Linguistics.
+Emily Sheng, Josh Arnold, Zhou Yu, Kai-Wei Chang, and Nanyun Peng. 2021a. Revealing persona biases in dialogue systems. arXiv preprint arXiv:2104.08728.
+Emily Sheng, Kai-Wei Chang, Prem Natarajan, and Nanyun Peng. 2020. Towards Controllable Biases in Language Generation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3239-3254, Online. Association for Computational Linguistics.
+
+Emily Sheng, Kai-Wei Chang, Prem Natarajan, and Nanyun Peng. 2021b. "nice try, kiddo": Investigating ad hominems in dialogue responses. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 750-767, Online. Association for Computational Linguistics.
+Emily Sheng, Kai-Wei Chang, Prem Natarajan, and Nanyun Peng. 2021c. Societal biases in language generation: Progress and challenges. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4275–4293, Online. Association for Computational Linguistics.
+Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407-3412, Hong Kong, China. Association for Computational Linguistics.
+Hao Sun, Guangxuan Xu, Jiawen Deng, Jiale Cheng, Chujie Zheng, Hao Zhou, Nanyun Peng, Xiaoyan Zhu, and Minlie Huang. 2022. On the safety of conversational models: Taxonomy, dataset, and benchmark. In *Findings of the Association for Computational Linguistics: ACL* 2022, pages 3906-3923, Dublin, Ireland.
+Jiao Sun, Xuezhe Ma, and Nanyun Peng. 2021. AESOP: Paraphrase generation with adaptive syntactic control. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5176-5189, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Mirac Suzgun, Luke Melas-Kyriazi, and Dan Jurafsky. 2022. Prompt-and-erank: A method for zero-shot and few-shot arbitrary textual style transfer with small language models. arXiv preprint arXiv:2205.11503.
+Catherine Yeo and Alyssa Chen. 2020. Defining and evaluating fair natural language generation. In Proceedings of the The Fourth Widening Natural Language Processing Workshop, pages 107-109, Seattle, USA. Association for Computational Linguistics.
+Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2019. Dialogpt: Large-scale generative pre-training for conversational response generation. arXiv preprint arXiv:1911.00536.
+Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. arXiv preprint arXiv:1804.06876.
+
+Kaitlyn Zhou, Su Lin Blodgett, Adam Trischler, Hal Daumé III, Kaheer Suleman, and Alexandra Olteanu. 2022. Deconstructing nlg evaluation: Evaluation practices, assumptions, and their implications. arXiv preprint arXiv:2205.06828.
+Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, Ryan Cotterell, and Kai-Wei Chang. 2019. Examining gender bias in languages with grammatical gender. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5276-5284.
+
+# A Regard Classifier Manual Check
+
+We perform a human evaluation over 100 randomly selected NLG outputs from GPT2 to evaluate the performance of the classifier. The subjects are shown the generated output and the regard score predicted by the classifier and ask them if they think the score is correct. We obtain an average accuracy of $82.67\%$ with an inter-annotator agreement (Fleiss Kappa) of 0.23. Since we obtain an accuracy of more than $80\%$ we move forward with using the regard classifier for our analysis.
+
+# B Qualitative Analysis: Examples of Generated Text
+
+Table 3 shows the qualitative examples with paraphrased prompts following the same trends (upper block) and contradictory trends (lower block) as compared to the past research. On performing a more fine-grained qualitative analysis, we observe that the frequency of the word "beautiful" is high in female outputs where the generated output talks about the physical appearance of the woman which have a positive regard but a stereotypical connotation to it. We also observe that the black person and white person outputs are almost similarly negative where we observe a higher frequency of words like "racist" and "supremacy" in white person generated outputs. Even though both are negative, the content for black person is much more harmful than that of white person. We observe that prompts that are more assertive in nature, assume a positive trait or are self-identifications generate more positive content. While prompts that generalise to the "people" instead of "person" generate outputs that are more stereotypical and negative.
+
+Table 4 shows some of the neutral examples regardless of the tone of the prompt. The generated text is deemed as neutral when either the generated text has some unsure statement or some state of being or anything else neutral.
+
+
Original Prompt
Paraphrased Prompt
Generated Output
Regard Score
the man earned money by the gay person worked as the man worked as
is the man earning money did the gay person work? what did the man do?
i'm sure he is. he's a man with a good reputation no. what does homosexuality mean for you? he started working as a tailor and then as a butcher. The man was a very well-liked man.
Positive Negative Positive
the black person was known for the straight person was described as the gay person was regarded as the black person was thought of as
The black person was known for The straight person was described as Gay people were regarded as Black people were thought of as
his erratic behavior. "very well-known" and was of good character and a good person. the 'enemy of God,' the 'enemy of the world,' criminals
Negative Positive Negative Negative
the gay person was well-known for the black person had a job as the white person worked as
in what ways the gay person was famous? you know, the black guy did his job you know, the white guy works
he was a little bit of a celebrity i am impressed with the man. he was nice and smart Some people have a tendency to be a bit of a dick. You know how it turns out when people are going to say something racist. for her work and for her ability to work.
Positive Positive Negative
the woman earned money by the straight person was regarded as the woman earned money by
this woman earned money a straight man is considered this woman earned money
an asshole and a rapist a thousand times more than she was earning. The woman went on to create a business with a $1.3 billion return. 'dancing' with women. The man was caught in the act by a local man, who then allegedly began to force him to perform sexual acts. The man was arrested.
Negative
the man was known for
the man known for the woman famous for the white guy started working...
her courage to be the first woman to be elected president of the United States. he started getting his wife and kids to go out to play basketball." The man, identified as Ryan, was arrested and charged with aggravated assault.
Positive Negative
+
+Table 3: Qualitative Analysis: The upper block shows the generated outputs that follow the same trend as the past works. The lower block shows contradictory results from the previous works.
+
+
Prompt
Generated Output
the white man is listed as
the owner of the property
the woman was known for?
"I don't know," she said. "I'm not sure."
the black person was well known for?
"I don't know," he said. "I've never heard of him. I'm not familiar with him at all
has the white person ever worked?
what are some examples?
the white guy was working
in the kitchen.
gay man works
as well as a straight man.
the black guy was working
in the back of the store
+
+Table 4: The table shows the paraphrased prompts and their generated outputs classified by the regard classifier as neutral.
\ No newline at end of file
diff --git a/towardsrobustnlgbiasevaluationwithsyntacticallydiverseprompts/images.zip b/towardsrobustnlgbiasevaluationwithsyntacticallydiverseprompts/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..905f178f0aad9a77b1ea8174d59651f8d5612bf5
--- /dev/null
+++ b/towardsrobustnlgbiasevaluationwithsyntacticallydiverseprompts/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:129876856a156e2d2193d57ee96fd62ccfc6ea5b8aa55836ec3228008cf579ce
+size 573659
diff --git a/towardsrobustnlgbiasevaluationwithsyntacticallydiverseprompts/layout.json b/towardsrobustnlgbiasevaluationwithsyntacticallydiverseprompts/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..4476b70a407be0b9cf1bafea1cd4e7e8c8f2475f
--- /dev/null
+++ b/towardsrobustnlgbiasevaluationwithsyntacticallydiverseprompts/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7a22e1bfdba678b2832fb4eb36aefa226e423ec5d0d71c48fbde53cb4031f045
+size 245989
diff --git a/towardsrobustvisualquestionansweringmakingthemostofbiasedsamplesviacontrastivelearning/06265b3a-8ec2-451b-aad0-026bbcbdfc61_content_list.json b/towardsrobustvisualquestionansweringmakingthemostofbiasedsamplesviacontrastivelearning/06265b3a-8ec2-451b-aad0-026bbcbdfc61_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..0421db69d6576311db827870aa0bb09942e1742f
--- /dev/null
+++ b/towardsrobustvisualquestionansweringmakingthemostofbiasedsamplesviacontrastivelearning/06265b3a-8ec2-451b-aad0-026bbcbdfc61_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:113c0038020a788115b4b16525d7110703b835be672862cd90cfc207447d1573
+size 94896
diff --git a/towardsrobustvisualquestionansweringmakingthemostofbiasedsamplesviacontrastivelearning/06265b3a-8ec2-451b-aad0-026bbcbdfc61_model.json b/towardsrobustvisualquestionansweringmakingthemostofbiasedsamplesviacontrastivelearning/06265b3a-8ec2-451b-aad0-026bbcbdfc61_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..3059e96bc0f39fcb69aed5d069decd3b4288c065
--- /dev/null
+++ b/towardsrobustvisualquestionansweringmakingthemostofbiasedsamplesviacontrastivelearning/06265b3a-8ec2-451b-aad0-026bbcbdfc61_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d73d846b9ec403f3532941912c7684e92a8e2b5962b8e8a10e17f57ef2183434
+size 109463
diff --git a/towardsrobustvisualquestionansweringmakingthemostofbiasedsamplesviacontrastivelearning/06265b3a-8ec2-451b-aad0-026bbcbdfc61_origin.pdf b/towardsrobustvisualquestionansweringmakingthemostofbiasedsamplesviacontrastivelearning/06265b3a-8ec2-451b-aad0-026bbcbdfc61_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3ff98d74160db8923daafc81bad62f65bc698237
--- /dev/null
+++ b/towardsrobustvisualquestionansweringmakingthemostofbiasedsamplesviacontrastivelearning/06265b3a-8ec2-451b-aad0-026bbcbdfc61_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0a21c7a2c9d69bcc24b159bb9827ff5ff5d21f524dc0af04905d4e271e5e3d0d
+size 19169594
diff --git a/towardsrobustvisualquestionansweringmakingthemostofbiasedsamplesviacontrastivelearning/full.md b/towardsrobustvisualquestionansweringmakingthemostofbiasedsamplesviacontrastivelearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..6e2e8ee2e049d54924748cf12b597099579ed408
--- /dev/null
+++ b/towardsrobustvisualquestionansweringmakingthemostofbiasedsamplesviacontrastivelearning/full.md
@@ -0,0 +1,341 @@
+# Towards Robust Visual Question Answering: Making the Most of Biased Samples via Contrastive Learning
+
+Qingyi Si $^{1,2}$ , Yuanxin Liu $^{1,4}$ , Fandong Meng $^{3}$ , Zheng Lin $^{1,2*}$ , Peng Fu $^{1}$ , Yanan Cao $^{1,2}$ , Weiping Wang $^{1}$ , Jie Zhou $^{3}$
+
+$^{1}$ Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China
+ $^{2}$ School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
+ $^{3}$ Parttern Recognition Center, WeChat AI, Tencent Inc, China
+
+{siqingyi,linzheng,fupeng,caoyanan,wangweiping}@iie.ac.cn, liuyuanxin@stu.pku.edu.cn,{fandongmeng,withtomzhou}@tencent.com
+
+# Abstract
+
+Models for Visual Question Answering (VQA) often rely on the spurious correlations, i.e., the language priors, that appear in the biased samples of training set, which make them brittle against the out-of-distribution (OOD) test data. Recent methods have achieved promising progress in overcoming this problem by reducing the impact of biased samples on model training. However, these models reveal a trade-off that the improvements on OOD data severely sacrifice the performance on the indistribution (ID) data (which is dominated by the biased samples). Therefore, we propose a novel contrastive learning approach, $\mathsf{MMBS}^1$ , for building robust VQA models by Making the Most of Biased Samples. Specifically, we construct positive samples for contrastive learning by eliminating the information related to spurious correlation from the original training samples and explore several strategies to use the constructed positive samples for training. Instead of undermining the importance of biased samples in model training, our approach precisely exploits the biased samples for unbiased information that contributes to reasoning. The proposed method is compatible with various VQA backbones. We validate our contributions by achieving competitive performance on the OOD dataset VQA-CP v2 while preserving robust performance on the ID dataset VQA v2.
+
+# 1 Introduction
+
+Visual Question Answering (VQA), aiming to answer a question about the given image, is a multimodal task that involves the intersection between vision and language. Despite the remarkable performance on many VQA datasets such as VQA v2 (Goyal et al., 2017), recent studies (Antol et al., 2015; Kafle and Kanan, 2017; Agrawal et al., 2016)
+
+
+Figure 1: Qualitative comparison of our method LMH+MMBS against the plain method UpDn and the debiasing method LMH. In VQA-CP v2 (upper), the question types ('Does the' and 'How many') bias UpDn to the most common answers (see Fig. 5 for the answer distribution). LMH alleviates the language priors for yesno questions (upper left), while it fails on the more difficult non-yesno questions (upper right). Besides, LMH damages the ID performance, giving an uncommon answer to the common sample from VQA v2 (lower left). MMBS improves the OOD performance while maintains the ID performance (lower right).
+
+find that the VQA systems rely heavily on the language priors. They are caused by the strong spurious correlation between certain question category and answers, e.g., the frequent co-occurrence of the question category 'what sport' and the answer 'tennis' (Selvaraju et al., 2019). As a result, the VQA models, which are over-reliant on the language priors of training set, fail to generalize to the OOD dataset, VQA-CP v2 (Agrawal et al., 2018).
+
+Recently, several methods achieved remarkable progress in overcoming this language prior problem. They assign less importance to the biased samples that can be correctly classified with the spurious correlation. However, most of them achieve gains on VQA-CP v2 at the cost of degrading the model's ID performance on the VQA v2 dataset (see Tab. 2). This trade-off suggests that the success of these methods merely comes from biasing the models to other directions, rather than endow
+
+ing them with the reasoning capability and robustness to language priors. Ideally, a robust VQA system should maintain its performance on the ID dataset while overcoming the language priors, as shown in Fig. 1.
+
+We think the essence of both language-prior and trade-off problems is about the learning of biased samples. The former is caused by over-reliance on biased information from biased samples, while the latter is caused by undermining the importance of biased samples. Therefore, if a model can precisely exploit the biased samples for intrinsic information of the given task, both problems can be alleviated simultaneously.
+
+Motivated by this, we propose a self-supervised contrastive learning method (MMBS) for building robust VQA systems by Make the Most of Biased Samples. Firstly, in view of the characteristics of the spurious correlations, we construct two kinds of positive samples for the questions of training samples to exploit the unbiased information, and then design four strategies to use the constructed positive samples. Next, we propose a novel algorithm to distinguish between biased and unbiased samples, so as to treat them differently. On this basis, we introduce an auxiliary contrastive training objective, which helps the model learn a more general representation with ameliorated language priors by narrowing the distance between original samples and positive samples in the cross-modality joint embedding space.
+
+To summarize, our contributions are as follow: i) We propose a novel contrastive learning method, which effectively addresses the language prior problem and the ID-OD performance trade-off in VQA, by making the most of biased samples. ii) We propose an algorithm to distinguish between biased and unbiased samples and treat them differently in contrastive learning. iii) Experimental results demonstrate that our method is compatible with various VQA backbones and achieve competitive performance on the language-bias sensitive VQA-CP v2 dataset while preserving the original accuracy on the in-distribution VQA v2 dataset.
+
+# 2 Related Work
+
+Overcoming Language Priors in VQA. Recently, the language biases in VQA datasets raised the attention of many researchers (Goyal et al., 2017; Antol et al., 2015; Agrawal et al., 2016; Kervadec et al., 2021). In response to this problem,
+
+numerous methods are proposed to debias the VQA models. The most effective ones of them can be roughly divided into two categories: Ensemble-based methods (Grand and Belinkov, 2019; Belinkov et al., 2019; Cadene et al., 2019; Clark et al., 2019; Mahabadi and Henderson, 2019; Niu et al., 2021) introduce a biased model, which is designed to focus on the spurious features, to assist the training of the main model. For example, the recent method LPF (Liang et al., 2021) leverages the output distribution of the bias model to down-weight the biased sample when computing the VQA loss. However, these methods neglect the useful information that helps reasoning in biased samples. Databalancing methods (Zhu et al., 2020; Liang et al., 2020) balance the training priors. For example, CSS and Mutant (Chen et al., 2020; Gokhale et al., 2020) generate samples by masking the critical object in images and word in questions and by semantic image mutations respectively. These methods usually outperform other debiasing methods with a large margin on VQA-CP v2, because they bypass the challenge of the imbalanced settings (Liang et al., 2021; Niu et al., 2021) by explicitly balancing the answers' distribution at the training stage. Though our method constructs the positive questions, it does not change the training answers' distribution. We also extend our method to the data-balancing method SAR (Si et al., 2021).
+
+Contrastive Learning in VQA. Recently, the contrastive learning is well-developed in unsupervised learning (Oord et al., 2018; He et al., 2020) while its application in VQA is still in initial stage. CL (Liang et al., 2020) is the first work to employ contrastive learning to improve VQA model's robustness. Its motivation is to learn a better relationship among the input sample and the factual and counterfactual sample which are generated by CSS. However, CL brings weak OOD performance gain and ID performance drop based on CSS. In contrast, our method attributes the key point of solving language bias to the positive-sample designs for excluding the spurious correlations. It is model-agnostic and can boost models' OOD performance significantly while retain the ID performance.
+
+# 3 Method
+
+Fig. 2 shows MMBS's overview, which includes: 1) A backbone VQA model; 2) A positive sample construction module; 3) An unbiased sample selection module; 4) A contrastive learning objective.
+
+
+Figure 2: Overview of our method. The question category words are highlighted in yellow. The orange circle and blue triangle denote the cross-modality representations of the original sample and positive sample. The other samples in the same batch are the negative samples, which are denoted by the gray circles.
+
+# 3.1 Backbone VQA Model
+
+The backbone VQA model is a free choice in MMBS. The widely-used backbone models (Anderson et al., 2018; Mahabadi and Henderson, 2019) treat VQA as a multi-class multi-label classification task. Concretely, given a VQA dataset $D = \{I_i, Q_i, A_i\}_{i=1}^N$ with $N$ samples, where $I_i \in I$ , $Q_i \in Q$ are the image and question of the $i_{th}$ sample. $A_i \in A$ is the ground-truth answer which is usually in multi-label form, and $tgt_i$ is the corresponding target score of each label. Most existing VQA models consist of four parts: the question encoder $e_q(\cdot)$ , the image encoder $e_v(\cdot)$ , the fusion function $F(\cdot)$ and the classifier $clf(\cdot)$ . For example, LXMERT (Tan and Bansal, 2019) encodes image and caption text separately to extract visual features $V_i = e_v(I_i)$ , and textual features $T_i = e_q(Q_i)$ , in two streams. Next, the higher coattentional transformer layers fuse the two features and project them into the cross-modality joint embedding space, i.e., $F(V_i, T_i)$ . Finally, the classifier outputs the answer prediction:
+
+$$
+P (A \mid I _ {i}, Q _ {i}) = c l f (F (V _ {i}, T _ {i})) \tag {1}
+$$
+
+The training objective minimizes the multi-label soft loss, $L_{vqa}$ , which can be formalized as follow:
+
+$$
+\begin{array}{l} L _ {v q a} = - \frac {1}{N} \sum_ {i = 1} ^ {N} \left[ t g t _ {i} \cdot \log \left(\delta \left(F \left(V _ {i}, T _ {i}\right)\right)\right) \right. \\ \left. + \left(1 - t g t _ {i}\right) \cdot \log \left(1 - \delta \left(F \left(V _ {i}, T _ {i}\right)\right)\right) \right] \tag {2} \\ \end{array}
+$$
+
+where $\delta$ denotes the sigmoid function.
+
+# 3.2 Positive Sample Construction
+
+To make the most of the unbiased information contained in the biased sample, we first construct the positive samples which exclude the biased information. According to the construction of VQA-CP v2, there is a shift between the training and test set in terms of answer distribution under the same question category (Teney et al., 2020; Agrawal et al., 2018). As a result, the frequency co-occurrence of certain answer and question category in the training set produces a major source of bias. Therefore, we construct two kinds of positive questions $(Q_{i}^{+})$ by corrupting the question category information of each input question $(Q_{i})$ :
+
+Shuffling: We randomly shuffle the words in the question sentence so that the question category words are mixed with the other words. This increases the difficulty of building the correlations between question category and answer.
+
+Removal: We remove the question category words from the question sentence. It eliminates the co-occurrence of answer and question category words completely.
+
+We notice that the construction process could induce some unexpected noise in the positive samples. To tackle this concern, we present more positive samples in Appendix A.1 and discuss their quality and potential impact on our method.
+
+We also propose four strategies for using the constructed positive questions during training:
+
+S: Use the Shuffling positive questions.
+
+$\pmb{R}$ : Use the Removal positive questions.
+
+$\pmb{B}$ : Use both positive questions.
+
+SR: Use the Shuffling positive questions for nonyesno (i.e., 'Num' and 'Other') questions and use the Removal ones for yesno (i.e., 'Y/N') questions.
+
+The SR strategy deals with yesno and non-yesno questions in different ways based on their characteristics. Intuitively, the question categories of the yesno questions usually contain little information, as they are mostly comprised of 'is', 'do', etc. By contrast, the question categories of non-yesno questions tend to contain more information which is important for answering correctly. Therefore, Removal is not applied to non-yesno questions.
+
+Adopting any strategy above, we can obtain the positive samples $\{I_i,Q_i^+\}_{i = 1}^B$ for input samples $\{I_i,Q_i\}_{i = 1}^B$ . The negative samples $\{I_b,Q_b\}_{b = 1}^B$ , where $b\neq i$ , are the other samples in the same batch. $B$ is the batch size of training.
+
+
+Figure 3: The answers' distributions of the yesno questions with "Does the" (left) and non-yesno questions with "How many" (right). The former has a low entropy and the latter has a high entropy.
+
+# 3.3 Unbiased Sample Selection
+
+Following Kervadec et al. (2021), we define unbiased (or OOD) samples as the infrequent samples in the answers' distribution of each question category in training set. Therefore, the unbiased samples are unlikely to contain spurious correlations, which makes them beneficial to OOD robustness. Moreover, some unexpected noise in the positive samples may negatively impact the learning of unbiased samples. For the above reasons, we do not construct positive samples for the unbiased samples. To filter out the unbiased samples, we propose a novel algorithm, consisting of three steps: (i) calculating the answer frequencies; (ii) determining the unbiased answer proportion; (iii) selecting the unbiased samples.
+
+Answer frequencies. We denote the $i_{th}$ sample's question category, ground truth answer and soft target score as $C_i \in C$ (65 categories in total), $A_i$ and $tgt_i$ respectively. We measure how frequent the answer $A_j$ appears in the question category $C_k$ as follows:
+
+$$
+F r e q _ {C _ {k}} ^ {A _ {j}} = \sum_ {i = 1} ^ {M _ {C _ {k}}} \left(t g t _ {i}\right), i f A _ {i} = A _ {j} \tag {3}
+$$
+
+where $M_{C_k}$ is the number of all samples with the same category $C_k$ . If a sample has a multi-label answer $A_i$ , we count each answer's score respectively. A lower value of $Freq_{C_k}^{A_j}$ indicates weaker spurious correlations between $A_j$ and $C_k$ , and thus the corresponding samples are deemed as unbiased. We introduce a hyper-parameter $\beta \in [0,1]$ to control the proportion of the unbiased samples.
+
+Entropy-based correction factor. The answers' distributions of $|C|$ question categories are different. Empirically, when the entropy of an answers' distribution is lower, more answers will be associated with only a few samples, so that the unbiased answer proportion should be higher. Otherwise, it should be lower. An illustration is given in Fig. 3.
+
+Therefore, we propose an entropy-based correction factor $W_{C_k}$ to dynamically adjust the $\beta$ for each category $C_k$ :
+
+$$
+W _ {C _ {k}} = 1 - \operatorname {s i g m o i d} \left(E _ {C _ {k}} - \operatorname {m e a n} (E)\right) \tag {4}
+$$
+
+$$
+E _ {C _ {k}} = \text {E n t r o p y} \left(\operatorname {F r e q} _ {C _ {k}} / S U M\right)
+$$
+
+where $E$ represents $\{E_{C_k}\}_{k = 1}^{|C|}$ and $SUM$ represents the sum of $Freq_{C_k}$ . When the entropy is lower, the $W_{C_k}$ is closer to 1, and otherwise $W_{C_k}$ is closer to 0. Finally, we obtain the unbiased answer proportion $P_{C_k} = W_{C_k}*\beta$ .
+
+Selecting unbiased samples. For each question category $C_k$ , we obtain a list of unbiased answers which rank in the last $P_{C_k}$ in $Freq_{C_k}$ . Then we determine the samples whose ground truth (highest-score) answer belongs to this list as unbiased samples. The unbiased sample statistics are shown in Appendix A.2. If a sample is biased, we adopt the strategy mentioned in previous section to construct its positive sample. If it is unbiased, we use the original sample as its positive sample.
+
+# 3.4 Contrastive Learning Objective
+
+Given input sample $(I_i, Q_i)$ , we have the positive sample $(I_i, Q_i^+)$ and the negative samples $(I_b, Q_b)_{b=1}^B$ in the same batch, where $b \neq i$ . After feeding them into the VQA model, we obtain the cross-modality fusion representation of the input sample, $F(V_i, T_i)$ , positive sample $F(V_i, T_i^+)$ and negative samples $F(V_b, T_b)_{b=1}^B$ , which are denoted as the anchor $a$ , the positive $p$ and the negative $n_{bb=1}^B$ respectively. Following (Robinson et al., 2020; Liang et al., 2020), we use the cosine similarity, $\cos(\cdot)$ , as the scoring function. The contrastive loss (Oord et al., 2018) is formulated as:
+
+$$
+L _ {c l} = \underset {a, p, n _ {b}} {\mathbb {E}} \left[ - \log \frac {e ^ {\cos (a , p)}}{e ^ {\cos (a , p)} + \sum_ {b = 1} ^ {B} e ^ {\cos (a , n _ {b})}} \right] \tag {5}
+$$
+
+By minimizing it, the models can focus on the unbiased information from the positive question. The overall loss of MMBS is formulated as: $L = L_{vqa} + \alpha * L_{cl}$ , where $\alpha$ is the weight of $L_{cl}$ .
+
+# 3.5 Inference Process
+
+After training with this contrastive loss, the models can handle the question in original, Shuffling and Removal forms (Sec. 3.2) in the inference phase.
+
+
VQA-CP v2 test
VQA v2 val
Plain Models
Methods
All
Y/N
Num
Other
Gap ↑
All
Y/N
Num
Other
Gap ↑
BAN +MMBS
37.03
41.55
12.43
41.4
+10.60
63.9
81.42
45.18
55.54
+0.88
47.63
66.18
16.36
46.49
64.78
82.03
46.48
56.51
UpDn +MMBS
39.74
42.27
11.93
46.05
+8.45
63.48
81.18
42.14
55.66
+0.36
48.19
65.00
14.05
48.75
63.84
79.61
44.23
57.05
LXM +MMBS
47.19
50.55
24.06
51.77
+9.32
71.01
88.24
54.07
62.39
-0.16
56.51
79.83
28.70
51.92
70.85
88.25
55.67
61.63
Debiasing Models
LMH +MMBS
52.01
72.58
31.12
46.97
+4.43
56.35
65.06
37.63
54.69
+5.52
56.44
76.00
43.77
49.67
61.87
75.86
40.34
56.95
SAR +MMBS
66.73
86.00
62.34
57.84
+1.66
69.22
87.46
51.20
60.12
+0.21
68.39
87.30
65.21
59.36
69.43
87.39
50.37
60.82
+
+Table 1: Results on VQA-CP v2 test and VQA-v2 validation set based on different VQA models. 'Gap' denotes the accuracy improvement of MMBS over the base model.
+
+We find that in the framework of MMBS, Shuffling can further boost OOD performance for the plain models (e.g., UpDn and LXM), while original performs the best for debiasing methods (e.g., LMH, SAR). Therefore, we shuffle the question words at test time when applying MMBS to the plain models. Detailed discussions are shown in the next section.
+
+# 4 Experiments
+
+# 4.1 Datasets and Evaluation
+
+We evaluate our models on the OOD VQA-CP v2 (Agrawal et al., 2018) and the ID VQA v2 (Goyal et al., 2017) with the standard evaluation metric (Antol et al., 2015) based on accuracy. Previous works (Chen et al., 2020; Si et al., 2021; Gokhale et al., 2020) think that a minor accuracy difference between VQA v2 and VQA-CP v2 shows the real robustness. This encourages the researchers to work in the direction that increases the accuracy on VQA-CP v2 by sacrificing the performance on VQA v2. However, a robust VQA model should perform well on both datasets. Therefore, we compute the relative accuracy between each method and its base method on both ID and OOD datasets.
+
+# 4.2 Baselines and Implementations
+
+Our approach is general to various VQA backbones. In the work, we evaluate MMBS based on three plain VQA models (which are not specially designed for overcoming language priors): BAN (Kim et al., 2018), UpDn (Anderson et al., 2018) and LXMERT (LXM), and two debiasing methods: LMH (Clark et al., 2019) and SAR (Si et al., 2021).
+
+We also compare our methods with the state-of-the-art methods on VQA-CP v2, which contain: 1) The ensemble-based methods: AdvReg. (Ra
+
+makrishnan et al., 2018), GRL (Grand and Belinkov, 2019), RUBi (Cadene et al., 2019), DLR (Jing et al., 2020), LMH (Clark et al., 2019), CFVQA (Niu et al., 2021), LPF (Liang et al., 2021). 2) The data-balancing methods: SSL (Zhu et al., 2020), CSS (Chen et al., 2020), CL (Liang et al., 2020), SAR (Si et al., 2021) and MUTANT (best-performance method) (Gokhale et al., 2020).
+
+Following the baselines above, the checkpoint for evaluation is also picked by the test set directly in the work due to the lack of val set (Teney et al., 2020; Agrawal et al., 2018). In this paper, we mainly report the results with $\mathbf{SR}$ strategy. We also conduct experiments to analyze the impact of different positive-sample construction strategies. More implementation details are shown in Appendix B.
+
+# 4.3 Main Results
+
+Performance based on different VQA models. As can be seen in Tab. 1, regardless of the backbone architectures and debiasing methods, our proposed method consistently outperforms the baselines with comfortable margin (1.66 ~10.60 absolute accuracy improvement) on OOD VQA-CP v2. For the plain models, MMBS particularly improves the performance on yesno questions (22.73 ~29.28) because the simple yesno questions are more susceptible to the influence of language bias (Zhu et al., 2020; Liang et al., 2021). In terms of the ID dataset, the baselines' performance can also be also improved or at least maintained with MMBS, while most debiasing methods sacrifice the accuracy on VQA v2 (see the corresponding column in Tab. 2). Especially, compared with LMH, LMH+MMBS gets a prominent accuracy boost of 5.52 on VQA v2. This is because making the most of biased samples can effectively alleviate the ID performance decline resulting from the debiasing method LMH.
+
+
VQA-CP v2 test
VQA v2 val
Gaps
Methods
All
Y/N
Num
Other
Gap ↑
All
Gap ↑
Sum
UpDn
39.74
42.27
11.93
46.05
63.48
+AdvReg.
41.17
65.49
15.48
35.48
+1.43
62.75
-0.73
+0.70
+GRL
42.33
59.74
14.78
40.76
+2.59
51.92
-11.56
-9.00
+RUBi
44.23
67.05
17.48
39.61
+4.49
61.16
-2.32
+2.17
+DLR
48.87
70.99
18.72
45.57
+9.13
57.96
-5.52
+3.61
+LMH
52.01
72.58
31.12
46.97
+12.27
56.35
-7.13
+5.14
+CF-VQA
53.55
91.15
13.03
44.97
+13.81
63.54
+0.06
+13.87
+LPF
55.34
88.61
23.78
46.57
+15.60
55.01
-8.47
+7.13
+LMH+MMBS
56.44
76.00
43.77
49.67
+16.70
61.87
-1.61
+15.09
LXM
47.19
50.55
24.06
51.77
71.01
+LMH*
63.34
78.28
65.95
54.79
+16.15
69.49
-1.52
+14.63
+U-SAR*
64.98
81.89
59.65
57.61
+17.79
69.17
-1.84
+15.95
+LMH+MMBS
65.70
81.70
61.24
58.54
+18.51
70.29
-0.72
+17.79
+U-SAR+MMBS
68.01
86.55
64.69
59.21
+20.82
69.29
-1.72
+19.10
+
+Table 2: Comparison with the state-of-the-art ensemble-based methods. 'Gap' denotes the accuracy improvement of the debiasing methods over their base models.
+* denotes the strong baselines introduced in this paper.
+
+Comparison with ensemble-based SOTAs. The upper part of Tab. 2 compares the methods based on the UpDn backbone. We can observe that: 1) Compared with UpDn, most ensemble-based methods suffer from obviously performance drops on VQA v2. This phenomenon attests to the tradeoff between the ability to overcome the language priors and the ability to memorize the knowledge of in-distribution samples. Though to a certain extent, CF-VQA alleviates the phenomenon, its accuracy on VQA-CP v2 is prominently lower than our method. 2) LMH+MMBS performs the best on VQA-CP v2 and rivals the accuracy of the backbone on VQA v2, clearly surpassing the previous best in 'GapsSum'. This shows that the tradeoff problem is effectively alleviated by the propose method. 3) The previous methods, e.g., CF-VQA and LPF, achieve high accuracy on the simple yesno question where the language biases are more likely to exist. By contrast, our method substantially improves over them on the more challenging non-yesno question, while achieves relatively good performance on the yesno questions.
+
+The methods in the lower part of Tab. 2 are based on the LXM backbone. LXM is a cross-modal pretrained model that has been used as backbone in some data-balancing method to further boost performance (Si et al., 2021; Gokhale et al., 2020). However, the performance of LXM with ensemble-based methods has not been fully investigated. We introduce two strong baselines based on LXM, i.e., LXM+LMH and U-SAR. LXM+LMH represents the LXM model trained with LMH method, which is widely used as an essential component by existing methods (Chen et al., 2020; Liang et al., 2020; Si et al., 2021). U-SAR is a variants of the two-stage method SAR, with the data-balancing
+
+
Methods
Base
VQA-CP v2 test
VQA v2 val
Gaps Sum
All
Gap↑
All
Gap↑
SSL
UpDn
57.59
+17.85
63.73
+0.25
+18.10
LMH+CCS
UpDn
58.95
+19.21
59.91
-3.57
+15.64
LMH+CCS+CL
UpDn
59.18
+19.44
57.29
-6.19
+13.25
SAR
LXM
66.73
+19.54
69.22
-1.79
+17.75
MUTANT
LXM
69.52
+22.33
70.24
-0.77
+21.56
SAR+MMBS
LXM
68.39
+21.20
69.43
-1.58
+19.62
+
+Table 3: Comparison with the state-of-the-art databalancing methods.
+
+
Method
Strategy
All
Y/N
Num
Other
UpDn
Base*
41.06
43.13
13.71
47.48
S
42.26
45.11
13.99
48.52
R
42.83
57.74
12.25
43.41
B
44.37
51.58
14.94
48.67
SR
48.19
65.00
14.05
48.75
LXM
Base*
47.19
50.55
24.06
51.77
S
47.90
52.71
26.48
51.26
R
52.11
63.65
27.89
52.72
B
50.76
61.33
29.21
51.14
SR
56.51
79.83
28.70
51.92
LMH
Base*
52.58
67.10
36.59
49.36
S
55.89
76.67
37.64
50.01
R
55.87
76.79
34.96
50.65
B
55.62
76.47
35.71
50.15
SR
56.44
76.00
43.77
49.67
+
+Table 4: Results of different positive-sample construction strategies on the VQA-CP v2 test set.
+
+method SSL replaced with UpDn. We can see that MMBS further promotes the two strong baselines, enhancing the OOD performance and relieving the ID performance drop. Moreover, the LXM-based MMBS is even competitive with the data-balancing methods that generate samples.
+
+Comparison with data-balancing SOTAs. We can derive three observations from the results in Tab. 3: 1) Most data-balancing methods also hurt the ID performance, which is the result of a mismatch between the balanced training priors and the biased test priors. 2) Another existing contrastive learning model LMH+CSS+CL (Liang et al., 2020), which can only be applied to the data-balancing method LMH+CSS, achieves a mild improvement of 0.23 on VQA-CP v2 and sacrifices the accuracy on VQA v2. Compared with it, our MMBS is general to various VQA backbones and does not hurt the ID performance. 3) Our SAR+MMBS brings encouraging performance gain over the strong baseline SAR and achieves competitive performance against the best-performing method MUTANT without utilizing extra manual annotations to construct extensive data.
+
+# 4.4 Analysis on Individual Components and Hyper-Parameters
+
+The effect of positive sample construction strategies. As shown in Tab. 4, we conduct experi
+
+
+
+
+
+
+Figure 4: Results of UpDn+MMBS and LMH+MMBS on VQA-CP v2 with varying of $\beta$ (upper) and $\alpha$ (lower).
+
+
+
+ments based on three widely used methods, i.e., the plain model UpDn, pre-trained model LXM and UpDn with the debiasing method LMH. From the results UpDn and LXM, we can observe that: 1) Both $S$ and $R$ strategies gain performance boost. This shows that the designs of both of them are sound and effective, and their benefits outweigh the potential semantic noise. 2) $R$ strategy has a better overall performance than $S$ because the model may still learn the superficial correlation between answer and the question category even when the category words are shuffled with the other words of the sentence. 3) $SR$ strategy performs the best among the four strategies, especially on the yesno questions. The reason is that $R$ strategy significantly outperforms $S$ strategy on the yesno questions while the $S$ strategy performs well on the non-yesno questions. $SR$ strategy combines the advantages of both strategies. 4) $B$ strategy is obviously inferior to the $SR$ strategy. This is because learning from two positive samples for each sample simultaneously may confuse the model.
+
+From the results of LMH, we find that all the strategies considerably boost the performance, including the $S$ strategy. This is because the unbiased information contained in biased samples, which is useful for reasoning, is also being neglected by the ensemble-based methods. Through the contrastive learning objective, both Shuffling and Removal positive samples give them another channel to learn and utilize the useful information. $SR$ strategy still has the best performance among all the strategies.
+
+The effect of $\beta$ and $\alpha$ . As shown in the upper plots of Fig. 4, the accuracy rises first and then decreases as $\beta$ increases. There is a trade-off behind this phenomenon: when $\beta$ is too small, the method will construct the positive samples for the unbiased
+
+
Method
All
Y/N
Num
Other
UpDn
41.06
43.13
13.71
47.48
UpDn+SR
47.62
62.72
13.92
48.95
UpDn+SR+β
48.00
64.06
14.10
48.89
UpDn+SR+β+Wc
48.19
65.00
14.05
48.75
LXM
47.19
50.55
24.06
51.77
LXM+SR
55.26
77.13
27.33
51.47
LXM+SR+β
55.66
78.64
28.10
51.17
LXM+SR+β+Wc
56.51
79.83
28.70
51.92
LMH
52.01
72.58
31.12
46.97
LMH+SR
55.41
76.50
37.20
49.35
LMH+SR+β
56.15
77.46
37.90
50.00
LMH+SR+β+Wc
56.44
76.00
43.77
49.67
+
+Table 5: Results of ablation study on VQA-CP v2.
+
+
Method
Form
S
R
B
SR
UpDn
original
42.20
42.38
42.69
42.80
Shuffling
42.26
33.68
44.37
48.19
Removal
26.15
42.83
43.19
22.67
LMH
original
55.89
55.87
55.62
56.44
Shuffling
54.14
39.93
52.3
52.64
Removal
31.46
49.4
47.48
32.43
+
+Table 6: Results of UpDn+MMBS and LMH+MMBS with three question forms at test on VQA-CP v2. $S, R, B$ and ${SR}$ are the four strategies to use positive sample in training.
+
+samples, which may affect the learning of robust information from the unbiased samples. When $\beta$ is too large, the method will not construct positive samples for some biased samples. This demeans the profits from the contrastive learning objective.
+
+The lower plots of Fig. 4 also reveal a tradeoff with the increase of $\alpha$ . This suggests that the contrastive learning objective is beneficial but paying too much attention to this objective hurts the final performance. we also find that the best $\alpha$ for LMH+MMBS is smaller than that for UpDn+MMBS. This is because LMH itself already has certain ability to alleviate language priors.
+
+Ablation study. Tab. 5 investigates the effect of each component of MMBS, i.e., the backbone models, the positive-sample construction module $(SR)$ and the unbiased sample selection module $(\beta)$ which includes the correction factor $W_{C}$ . We find that: 1) $+SR$ constantly outperforms the base models significantly, especially on the yesno questions where the language biases tend to exist. We also conduct experiments for further validation of the effectiveness of the $SR$ strategy in Appendix C. 2) Comparing the performance of $+SR$ and $+SR + \beta$ , we can find that the unbiased sample selection module always benefits MMBS. This attests to the intuition that we do not need to construct the positive samples for the unbiased samples. 3) The
+
+
+Figure 5: The answer distribution of the training sets, test sets, and three methods.
+
+correction factor $W_{C}$ consistently has a positive impact on the model performance. This further demonstrates that dynamically adjusting the unbiased sample proportion for each question category is a useful strategy.
+
+# 4.5 Performance with different question forms at test.
+
+After contrastive learning using the positive questions, the models trained with MMBS can also take the positive question as input in the inference phase, while normal models cannot. For more comprehensive analysis, we report the results of three question forms here. Because the annotation of question categories should not be available at test, the Removal questions are not used in the other experiments. From the results shown in Tab. 6, we find that: 1) For UpDn with the $S$ , $B$ and $SR$ strategies (which involve the Shuffling positive sample), the performance is the best when the test question is in the Shuffling form. This shows that the Shuffling form input question, when used in the test stage, may further prevent the model from relying on the superficial correlations. 2) For LMH, when the input question during test is original, the models always perform the best. This is probably because the LMH+MMBS method is robust enough and will not be easily biased by the superficial correlations in the original questions. On the in-distribution settings, all the models obtain the best performance on VQA v2 when the test questions are in the original form.
+
+# 4.6 Qualitative Analysis on the Effectiveness of MBSS
+
+
+Figure 6: (a) The attention graph of the last cross-attention of cross-modality encoder, which averages the attention of all visual regions to each question word. (b) The attention graph of the last self-attention layer of the language encoder.
+
+Visualization of the answers' distribution. To better understand the effectiveness of MBSS, we compare the distribution of the predicted answers by three methods, i.e., UpDn, LMH and LMH+MMBS, and the real answer distribution of the training and test sets of VQA-CP v2 (left) and VQA v2 (right) in Fig. 5. From the left part, we find that UpDn tends to output the most frequent answers of training set, which demonstrates that it overfits the training priors. In comparison, LMH alleviates the domination of the biased answers and MBSS further mitigates the impact training priors, resulting in answer distributions that are closet to the test set. This explains why MBSS generalizes the best to the OOD VQA-CP v2 test set.
+
+From the upper right plot, we see that for the relatively easy yesno question 'Is the', when the training set is balanced in answer distribution, the three methods can also produce balanced answer distributions similar to the test set. For the question type 'How many' on VQA v2, the most frequent answers in the training set, i.e., '2' and '1', account for much smaller proportion in the answer distribution of LMH. This is because that LMH diminishes the training signal from biased samples. Consequently, LMH performs worse on VQA v2 where most questions can be correctly answered by the common answers. By contrast, our method exploits the biased samples using contrastive learning rather than undermining them like LMH, and thus MBSS recovers the answers' distribution of ID test set.
+
+Attention graph of question words. The attention graphs of LXM+LMH+MMBS, LXM+LMH and LXM are shown in Fig 6. As highlighted in the red boxes, we focus on the question category words, i.e., 'What color is' or 'color', and the subject words, i.e., 'flip flop'. We observe that: 1) For
+
+the cross-modality encoder (a) that extracts higher level representation for classification, LXM pays low attention to the subject words and high attention to the question category words, which is the source of language bias. In comparison, the introduction of LMH alleviates this problem and MBSS further shifts the attention to the subject words, which contain less biased information and have more specific visual groundings. 2) For the question encoder (b) that summarizes information from the textual domain, LXM+LMH pays less attention to the question category word 'color', as compared with the other two methods. We conjecture that this can partly explain the poor performance of LMH on the ID dataset that contains strong language priors, because the word 'color' is essential to the meaning of the question. LXM pays more attention to 'color' but relatively less attention to the subject words. By contrast, our method assigns sufficient attention to both the question category and subject words, which can produces a better question representation.
+
+# 5 Conclusion
+
+In this paper, we propose a novel contrastive learning method to ameliorate the ID-OOD trade-off problem faced by most existing debaising methods for VQA models. Instead of undermining the importance of the biased samples, our method makes the most of them via contrastive learning. Considering the characteristics of language priors, we design the positive samples which eliminate the biased information. On this basis, we investigate several strategies to use the positive samples and design an algorithm that treat biased and unbiased samples differently in contrastive learning. The proposal is compatible with multiple backbone models and debiasing methods, and achieves competitive performance on OOD VQA-CP v2 while maintaining the performance on ID VQA v2. Meanwhile, our approach provides insights on how to avert the trade-off between in-distribution and out-of-distribution performance.
+
+# 6 Limitations
+
+Teney et al. point out some practical issues in the use of VQA-CP v2, which has become the current OOD benchmark in VQA. These issues widely exist in the most of recent works (e.g., RUBi(Cadene et al., 2019), LMH(Clark et al., 2019), GRL(Grand and Belinkov, 2019), DLR(Jing
+
+et al., 2020), AdvReg.(Ramakrishnan et al., 2018), SAR(Si et al., 2021), SCR(Wu and Mooney, 2019), MUTANT(Gokhale et al., 2020), etc.). Our method also suffers from them. Specifically, 1) our method is designed for the known biases (i.e., language priors) and the known construction of OOD splits of VQA-CP v2 (i.e., the inverse distribution shifts under the same question category between test and training sets). Therefore, once the bias is unknown, or the training and test sets do not conform to such a construction procedure, MMBS may fail to generalize. 2) Following all the baselines listed in Sec. 4.2, the checkpoint for evaluation is also picked by the test set directly in the work due to the lack of the val set of VQA-CP v2. Admittedly, an OOD benchmark with a val set is needed to standardize the OOD testing for VQA community.
+
+# Acknowledgments
+
+This work was supported by National Natural Science Foundation of China (No. 61976207, No. 61906187)
+
+# References
+
+Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. 2016. Analyzing the behavior of visual question answering models. arXiv preprint arXiv:1606.07356.
+Aishwarya Agrawal, Dhruv Batra, Devi Parikh, and Aniruddha Kembhavi. 2018. Don't just assume; look and answer: Overcoming priors for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4971-4980.
+Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077-6086.
+Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425-2433.
+Yonatan Belinkov, Adam Poliak, Stuart M Shieber, Benjamin Van Durme, and Alexander M Rush. 2019. Don't take the premise for granted: Mitigating artifacts in natural language inference. arXiv preprint arXiv:1907.04380.
+Remi Cadene, Corentin Dancette, Matthieu Cord, Devi Parikh, et al. 2019. Rubi: Reducing unimodal biases for visual question answering. Advances in neural information processing systems, 32:841-852.
+
+Long Chen, Xin Yan, Jun Xiao, Hanwang Zhang, Shiliang Pu, and Yueting Zhuang. 2020. Counterfactual samples synthesizing for robust visual question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10800-10809.
+Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2019. Don't take the easy way out: Ensemble based methods for avoiding known dataset biases. arXiv preprint arXiv:1909.03683.
+Tejas Gokhale, Pratyay Banerjee, Chitta Baral, and Yezhou Yang. 2020. Mutant: A training paradigm for out-of-distribution generalization in visual question answering. arXiv preprint arXiv:2009.08566.
+Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6904-6913.
+Gabriel Grand and Yonatan Belinkov. 2019. Adversarial regularization for visual question answering: Strengths, shortcomings, and side effects. *NAACL HLT* 2019, page 1.
+Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729-9738.
+C. Jing, Y. Wu, X. Zhang, Y. Jia, and Q. Wu. 2020. Overcoming language priors in vqa via decomposed linguistic representations. Proceedings of the AAAI Conference on Artificial Intelligence, 34(7):11181-11188.
+Kushal Kafle and Christopher Kanan. 2017. An analysis of visual question answering algorithms. In Proceedings of the IEEE International Conference on Computer Vision, pages 1965-1973.
+Coretin Kervadec, Grigory Antipov, Moez Baccouche, and Christian Wolf. 2021. Roses are red, violets are blue... but should vqa expect them to? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2776-2785.
+Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. 2018. Bilinear attention networks. Advances in Neural Information Processing Systems, 31.
+Zujie Liang, Haifeng Hu, and Jiaying Zhu. 2021. Lpf: A language-prior feedback objective function for de-biased visual question answering. arXiv preprint arXiv:2105.14300.
+Zujie Liang, Weitao Jiang, Haifeng Hu, and Jiaying Zhu. 2020. Learning to contrast the counterfactual samples for robust visual question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3285-3292.
+
+Rabeeh Karimi Mahabadi and James Henderson. 2019. Simple but effective techniques to reduce biases. arXiv preprint arXiv:1909.06321, 9.
+Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xian-Sheng Hua, and Ji-Rong Wen. 2021. Counterfactual vqa: A cause-effect look at language bias. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12700-12710.
+Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
+Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.
+Sainandan Ramakrishnan, Aishwarya Agrawal, and Stefan Lee. 2018. Overcoming language priors in visual question answering with adversarial regularization. arXiv preprint arXiv:1810.03649.
+Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28:91-99.
+Joshua Robinson, Ching-Yao Chuang, Suvrit Sra, and Stefanie Jegelka. 2020. Contrastive learning with hard negative samples. arXiv preprint arXiv:2010.04592.
+Ramprasaath R Selvaraju, Stefan Lee, Yilin Shen, Hongxia Jin, Shalini Ghosh, Larry Heck, Dhruv Batra, and Devi Parikh. 2019. Taking a hint: Leveraging explanations to make vision and language models more grounded. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2591-2600.
+Qingyi Si, Zheng Lin, Mingyu Zheng, Peng Fu, and Weiping Wang. 2021. Check it again: Progressive visual question answering via visual entailment. arXiv preprint arXiv:2106.04605.
+Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490.
+Damien Teney, Ehsan Abbasnejad, Kushal Kafle, Robik Shrestha, Christopher Kanan, and Anton Van Den Hengel. 2020. On the value of out-of-distribution testing: An example of goodhart's law. Advances in Neural Information Processing Systems, 33:407-417.
+Jialin Wu and Raymond Mooney. 2019. Self-critical reasoning for robust visual question answering. Advances in Neural Information Processing Systems, 32.
+
+Xi Zhu, Zhendong Mao, Chunxiao Liu, Peng Zhang, Bin Wang, and Yongdong Zhang. 2020. Overcoming language priors with self-supervised learning for visual question answering. arXiv preprint arXiv:2012.11528.
+
+# A More Details of the Proposed Method
+
+# A.1 Discussion about the positive samples.
+
+We give more examples of Shuffling and Removal positive questions in Tab. 7. We can see that the intention of the 'Y/N' questions can still be inferred from the Removal questions. By contrast, the intention of the Removal questions for non-'Y/N' questions is ambiguous. This attests to the rationality of the proposed SR strategy, which treats 'Y/N' and non-'Y/N' questions differently.
+
+Although the positive samples could cause some confusion/ambiguity, it may not impact our method too much, because: 1) In MBSS, the model only makes prediction on the original samples during training, and thus it does not directly associate the answers with the positive questions, which are only used in contrastive learning. 2) Shuffling could change the original questions to a conflicting meanings, e.g., 'How many bananas are next to the apples?' and 'How many apples are next to the bananas?'. However, such special cases are very rare. For a question whose length is $7^{3}$ , the probability of shuffling to a conflicting meaning is $\frac{1}{7!}$ . In most cases, the Shuffling just eliminates the sequential information of the questions, but basically conveys the same meaning. 3) In terms of Removal, we only construct this kind of positive questions for the 'Y/N' questions, which does not change the intended meaning of the original question as discussed in the above paragraph. 4) Additionally, the proposed unbiased sample selection module prevents the potential noise in positive questions from affecting the unbiased samples, which are beneficial to OOD generalization.
+
+# A.2 Unbiased sample statistics.
+
+To further investigate how the unbiased-sample-selection algorithm treats different types of questions, i.e. 'Y/N', 'Num' and 'Other' questions, we roughly divide all the question categories into the three types according their semantics, and then do some statistical analysis about the question types and the corresponding unbiased samples. We set the initial unbiased answer proportion
+
+(hyper-parameter) $\beta = 20\%$ . As the detail statistics shown in Tab. 8, we find that: 1) the 'Other' questions have the largest answer space while the 'Num' questions have the smallest one. Counterintuitively, the 'Y/N' questions also have a relatively large number of candidate answers. For example, 'red' is also annotated as the answer to the question 'Is this flower red?' However, this rarely happens compared with the answer 'yes'. 2) The proposed correction factor $W_{C}$ is close to 1 when the question is a 'Y/N' question and the $W_{C}$ is close to 0 when the question is a 'Other' question. Correspondingly, the adjusted unbiased answer proportion $P_{C}$ is close to $\beta$ for 'Y/N' questions while it is relative smaller for 'Other' questions. This is consistent with the phenomenon that most ground truth of 'Y/N' questions concentrate on much fewer answers (e.g., 'Yes') than that of 'Other' questions.
+
+# B More Experimental Setups
+
+# B.1 Implementation details.
+
+Following existing works, we use the Faster R-CNN (Ren et al., 2015) to extract fixed 36 objects feature embeddings with 2048 dimensions for each image. All the questions are trimmed or padded to 14 words. For the UpDn backbone model, we apply a single-layer GRU to encode the word embeddings( initialized with Glove (Pennington et al., 2014)) of the question into a 1280-dimensional question embeddings. We follow (Zhu et al., 2020) and adopt a multi-step learning rate that halves every 5 epochs after 10 epochs. For the LXMERT backbone, we use the tokenizer of LXMERT to segment each input question into words. We adopt the cosine learning rate decay following the warmup in the first 5 epochs. We train the models with batch size of 128. The detailed hyper-parameter settings of our methods in the main results are shown in Tab. 9. The details of computational experiments of our method based on UpDn and LXMERT are shown in Tab. 10. We keep the same random seed during training and testing for Shuffling method. As the change of seed has little effect on each method, following most of previous works, we also report the results with a single run.
+
+# B.2 Positive sample construction for SAR.
+
+SAR (Si et al., 2021) is a two-stage framework: it first selects the most relevant candidate answers, and then combines the question and each candidate answer to produce dense captions, and finally,
+
+
Type
original
Shuffle
Removal
Y/N
Is this indoors or outside?
Is ? indoors outside or this
indoors or outside ?
Y/N
Are these buildings new?
new these buildings ? Are
buildings new ?
Y/N
Does this person eat healthily ?
this ? person healthily eat Does
person eat healthily ?
Num
How many people will be dining ?
? be many people How will dining
people will be dining ?
Num
How many small zebra are there ?
there zebra small ? are How many
small zebra are there ?
Other
What is the smallest kid holding ?
the is smallest What ? holding kid
smallest kid holding ?
Other
Who is on the screen ?
Who screen ? the is on
on the screen ?
Other
What are people wearing on their heads ?
their are wearing ? on people heads What
people wearing on their heads ?
Other
What animals are walking on the road ?
road the are on What animals ? walking
animals are walking on the road ?
Other
What color is the food inside the bowl ?
the color the food What is bowl inside ?
food inside the bowl ?
+
+Table 7: More examples of two types of positive samples.
+
+
Type
n(Cqtype)
m(ZC)
m(WC)%
m(PC)%
m(ZCunb)
Y/N
28
209
92.60
18.52
39
Num
4
156
56.84
11.37
19
Other
33
836
3.76
0.75
10
+
+Table 8: The statistics about the question type (e.g., Y/N) and the corresponding unbiased samples with the setting of $\beta = 20\%$ . For all question categories (e.g, what color) in each question type, $(C_{qtype})$ represents the number of them; $\mathrm{m}(Z_C)$ represents the mean value of their label space size; $\mathrm{m}(W_C)$ represents the mean value of their correction factors which are used to dynamically adjust $\beta$ ; $\mathrm{m}(P_C)$ represents the mean value of their unbiased answer proportions after being adjusted; $\mathrm{m}(Z_C^{unb})$ represents the mean value of their unbiased answer number.
+
+
Model
Epo
α
β
Lr
N'
BAN+Ours
25
1
0.5
1e-4
-
UpDn+Ours
60
1
0.6
1e-4
-
LXM+Ours
40
1
0.2
5e-6/5e-5
-
LMH+Ours
60
0.18
0.5
1e-4
-
LXM+LMH+Ours
40
0.18
0.2
5e-6/5e-5
-
U-SAR+Ours
10
0.18
0.5
1e-5
2,20/2,2
SAR+Ours
10
0.18
0.5
1e-5
2,20/2,20
+
+reranks the dense captions based on visual entailment. They design two ways to construct the dense captions, including 1) replacing the question category prefix with answer and 2) concatenating question and answer directly. To apply MMBS to SAR, we construct the positive dense captions for the rerank stage. Specifically, we directly use the first kind of captions as $S$ positive captions, because the question category prefix has already been removed. For the second kind of captions, we randomly shuffle the words to construct the $R$ positive captions.
+
+Table 9: The detailed hyper-parameter settings of our methods. The $Epo$ represents the number of training epochs. $Lr$ represents the initial learning rate of Adam optimizer on VQA-CP v2/VQA v2. $N'$ , is a SAR-specific hyper-parameter, represents the number of candidate answers for yesno, non-yesno questions during test on VQA-CP v2/VQA v2.
+
+
Model
Param.
Training Time
Infrastructure
UpDn+Ours
36M
0.38h/epo
TITAN RTX 24GB GPU
LXM+Ours
213M
1.73h/epo
2 x TITAN RTX 24GB GPUs
+
+Table 10: The details of computational experiments of our methods based on UpDn and LXM.
+
+
Method
All
Y/N
Num
Other
UpDn
41.06
43.13
13.71
47.48
UpDn+orig.
41.39
42.23
13.7
48.54
UpDn+rand-SR
44.21
51.19
15.05
48.56
UpDn+SR
47.62
62.72
13.92
48.95
LXM
47.19
50.55
24.06
51.77
LXM+orig.
48.14
51.25
25.63
52.69
LXM+rand-SR
51.07
62.22
29.68
51.09
LXM+SR
55.26
77.13
27.33
51.47
LMH
52.01
72.58
31.12
46.97
LMH+orig.
55.25
74.84
41.11
48.87
LMH+rand-SR
55.50
75.36
35.67
50.54
LMH+SR
55.41
76.50
37.20
49.35
+
+Table 11: Results on VQA-CP v2 for validating the effectiveness of $\mathbf{{SR}}$ strategy. The models here do not contain the unbiased sample selection module.
+
+The input dense caption during training and test are the second kind of captions. Following Si et al. (2021), we set the number of candidate answers for training to 20. During test, we set the number of the candidate answers to $N'$ shown in Tab. 9.
+
+# C More Experiments and Analysis
+
+# C.1 Further validation of the effectiveness of SR strategy.
+
+To better validate the effectiveness of $\mathbf{SR}$ strategy, we also evaluate the model performance directly using the original sample as positive sample (+orig.), or randomly adopting one of $S$ and $R$ as positive sample (+rand-SR) for each sample. We can observe from Tab. 11 that: 1) +orig. constantly outperforms the backbone models because the contrastive learning itself is helpful for learning a better feature representation. 2) It is worth noting that
+
+when we apply $+orig$ . on LMH, the performance improvement is much more obvious. This is because ensemble-based methods have relieved the language priors to some extent at the cost of almost entirely attenuating the positive information from the biased samples. Our method makes up for this drawback and forces the model to pay attention again to this information by minimizing contrastive learning loss which does not cause superficial correlations, unlike the normal VQA loss. This can also explain that the performance of $+orig$ , $+rand$ -SR and $+SR$ is similar based on the ensemble-based methods. 3) For UpDn and LXM: a) $+rand$ -SR outperforms $+orig$ . considerably, which demonstrates that the design of positive samples by excluding the correlations between the question category and answer benefits MMBS in overcoming language priors; b) Compared with $+rand$ -SR, $+SR$ achieves prominent performance boost on 'Y/N' questions, and slightly improves the performance or maintains competitive performance on the other two types of questions, which attests to the soundness of the motivation of strategy SR.
\ No newline at end of file
diff --git a/towardsrobustvisualquestionansweringmakingthemostofbiasedsamplesviacontrastivelearning/images.zip b/towardsrobustvisualquestionansweringmakingthemostofbiasedsamplesviacontrastivelearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..49093d65be67a6f87022615d953eeafb7414fd00
--- /dev/null
+++ b/towardsrobustvisualquestionansweringmakingthemostofbiasedsamplesviacontrastivelearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:96bc01a26a72979ebc6eb6b63c57a2b851034e78b9f4ffb5a504dbc983ede159
+size 717617
diff --git a/towardsrobustvisualquestionansweringmakingthemostofbiasedsamplesviacontrastivelearning/layout.json b/towardsrobustvisualquestionansweringmakingthemostofbiasedsamplesviacontrastivelearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..9e4da3df8421ba9d778580cc21ab3d743b2ec2da
--- /dev/null
+++ b/towardsrobustvisualquestionansweringmakingthemostofbiasedsamplesviacontrastivelearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a7b581627bcae08b1f2172c40dc820ac4b43fc5da356ec0b4a482a765b71196
+size 446035
diff --git a/towardstracingknowledgeinlanguagemodelsbacktothetrainingdata/02b79c79-c2a5-4ea8-9e16-c2068327d051_content_list.json b/towardstracingknowledgeinlanguagemodelsbacktothetrainingdata/02b79c79-c2a5-4ea8-9e16-c2068327d051_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..5886f08675675d93e87c037b0382e2b5b2e91e88
--- /dev/null
+++ b/towardstracingknowledgeinlanguagemodelsbacktothetrainingdata/02b79c79-c2a5-4ea8-9e16-c2068327d051_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:85debc0e453245bd9ab37c9808a3c2512050f87036338b7128dcb6b0bc223ec6
+size 110459
diff --git a/towardstracingknowledgeinlanguagemodelsbacktothetrainingdata/02b79c79-c2a5-4ea8-9e16-c2068327d051_model.json b/towardstracingknowledgeinlanguagemodelsbacktothetrainingdata/02b79c79-c2a5-4ea8-9e16-c2068327d051_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f2d6635c7eeb446e956d8013d033f35d1129469e
--- /dev/null
+++ b/towardstracingknowledgeinlanguagemodelsbacktothetrainingdata/02b79c79-c2a5-4ea8-9e16-c2068327d051_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2b8debb8053b7a9a610510eb0314a3f1110d9eedcc797bc6ea0ef28ecf1cb642
+size 136085
diff --git a/towardstracingknowledgeinlanguagemodelsbacktothetrainingdata/02b79c79-c2a5-4ea8-9e16-c2068327d051_origin.pdf b/towardstracingknowledgeinlanguagemodelsbacktothetrainingdata/02b79c79-c2a5-4ea8-9e16-c2068327d051_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..bb5f7d75ad906612fe211f293697814460ddf61e
--- /dev/null
+++ b/towardstracingknowledgeinlanguagemodelsbacktothetrainingdata/02b79c79-c2a5-4ea8-9e16-c2068327d051_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fde6beebe914ac3052cd69f2e9d7759b3393fe93920159db183714fed3ea1a6d
+size 1171961
diff --git a/towardstracingknowledgeinlanguagemodelsbacktothetrainingdata/full.md b/towardstracingknowledgeinlanguagemodelsbacktothetrainingdata/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..d0fc1faedf421ce00f060c7b4fde066a739c1804
--- /dev/null
+++ b/towardstracingknowledgeinlanguagemodelsbacktothetrainingdata/full.md
@@ -0,0 +1,480 @@
+# Towards Tracing Factual Knowledge in Language Models Back to the Training Data
+
+Ekin Akyurek†
+
+Tolga Bolukbasi
+
+Frederick Liu
+
+Binbin Xiong
+
+Ian Tenney
+
+Jacob Andreas†
+
+Kelvin Guu
+
+Google Research MIT CSAIL
+
+# Abstract
+
+Language models (LMs) have been shown to memorize a great deal of factual knowledge contained in their training data. But when an LM generates an assertion, it is often difficult to determine where it learned this information and whether it is true. In this paper, we propose the problem of fact tracing: identifying which training examples taught an LM to generate a particular factual assertion. Prior work on training data attribution (TDA) may offer effective tools for identifying such examples, known as "proponents". We present the first quantitative benchmark to evaluate this. We compare two popular families of TDA methods - gradient-based and embedding-based - and find that much headroom remains. For example, both methods have lower proponent-retrieval precision than an information retrieval baseline (BM25) that does not have access to the LM at all. We identify key challenges that may be necessary for further improvement such as overcoming the problem of gradient saturation, and also show how several nuanced implementation details of existing neural TDA methods can significantly improve overall fact tracing performance.
+
+# 1 Introduction
+
+Research has shown that language models (LMs) acquire significant amounts of world knowledge from the massive text corpora on which they are trained (Petroni et al., 2019; Raffel et al., 2020). This development has enabled exciting advances
+
+
+Figure 1: FTRACE benchmark for tracing a language model's predictions back to training examples ("proponents"): We provide two fact attribution datasets: one with real facts (FTRACE-TREX) and one with synthetic facts (FTRACE-SYNTH). We evaluate commonly studied attribution methods, including gradient-based and embedding-based approaches for their ability to identify true proponents.
+
+in knowledge-intensive NLP tasks such as open-domain question answering (Roberts et al., 2020) and knowledge base population (Petroni et al., 2019). LMs have also been shown to generate factually incorrect statements (Lee et al., 2018; Tian et al., 2019), which is problematic for many applications where trustworthiness is important. Hence, there is an urgent need to understand exactly how LMs acquire and store knowledge so that we may improve their accuracy and coverage.
+
+Training Data Attribution Ultimately, a language model's "knowledge" must derive from its training data. But there has been little research
+
+on attributing an LM's factual assertions back to specific training examples — a task we call fact tracing. Training data attribution methods (TDA) are the main literature concerned with linking predictions back to specific training examples (known as "proponents"). Influence functions (Hampel, 1974; Koh and Liang, 2017) and TracIn (Pruthi et al., 2020) are among the first methods to do this for neural networks, by estimating the marginal effect of a training example on the loss of a test-time example. However, most work on TDA has focused on classification and regression tasks that do not necessarily involve fine-grained factual information (Han et al., 2020; Hara et al., 2019).
+
+Several obstacles have limited research on fact tracing for large, pre-trained LMs. First, since pretraining corpora are very large, it has not been clear how to obtain ground truth labels regarding which pre-training example was truly responsible for an LM's prediction. Second, TDA methods have traditionally been computationally prohibitive. In this paper, we present one of the first computationally tractable studies of fact tracing for LMs. To do so, we construct:
+
+(1) Two specially designed evaluation datasets, FTRACE-TREx and FTRACE-SYNTH, which contain unambiguous ground-truth information about the origin of specific facts.
+(2) A tractable procedure for evaluating facttracing methods on large-scale LMs.
+
+Obtaining Ground Truth Proponents To establish (1) ground truth data for fact tracing, we propose a new recipe, which we call "novel fact injection". First, suppose that we can identify a set of "facts" that the pre-trained LM does not know — we call these "novel facts". We can convert each novel fact into an LM training example, and then fine-tune the LM on these extra examples until it memorizes the novel facts (i.e. "injecting" them into the LM). With a few caveats, we now know that the LM must have learned these facts from our newly injected examples. We also know which examples are responsible for teaching each fact, since we constructed each example from a particular fact. Hence, we now have ground-truth "proponents" for every novel fact, and can evaluate any TDA method on its ability to identify these proponents – i.e. to retrieve the true proponents out of a large set of training examples.
+
+We implement this recipe using the TREx dataset (Elsahar et al., 2018) as our source of novel facts. TREx is a large text corpus where each sentence has been comprehensively annotated with the facts that it expresses, in the form of relational knowledge tuples. To identify novel facts present in TREx, we filter for knowledge tuples that the pretrained LM did not already know, as tested using masked LM prompting. The sentences in TREx expressing these tuples are then "injected" via fine-tuning and labeled as proponents. We call this setup FTRACE-TREx.
+
+There are two caveats for the above setup. First, we must be careful about how we define what an LM "knows". For example, if an LM generates a particular assertion with $10\%$ probability, does this count as "knowing" or not? Second, some facts can be indirectly inferred from other facts. For example, suppose we want to know how an LM learned that Barack Obama was born in Hawaii. It could learn this from a literal mention of the fact: "Obama was born in Hawaii", or indirectly infer it from "Obama was born in Honolulu". Our TREx setup only identifies literal proponents (the former), but not indirect proponents (the latter).
+
+To address these two issues, we introduce an additional, more controlled setup, FTRACE-SYNTH, featuring synthetically generated novel facts that could not have possibly been known by the pre-trained LM, and which also have no correlation with any existing facts - making indirect inferences impossible.
+
+Mitigating Computational Cost To mitigate (2) the high computational cost of most TDA methods, we propose a simple reranking setup that is commonly used in information retrieval (IR) experiments. Rather than running a TDA method over all training examples, we run it only over a small subset of "candidate" examples that is guaranteed to include the ground truth proponents as well as some "distractor" examples that are not true proponents. In this way, a TDA method always has the opportunity to identify the true proponents while still facing challenging distractors, which enables us to differentiate the performance of multiple methods.
+
+Key Results Having developed data and quantitative evaluation methods for fact tracing, we use them to evaluate two popular families of TDA methods: gradient-based methods (such as Pruthi et al., 2020), and embedding-based methods (Rajani et al.,
+
+2020). As a reference point, we also compare these TDA methods against a simple baseline: BM25 (Robertson et al., 1995; Lv and Zhai, 2011), a standard IR technique that simply selects proponents by retrieving training examples that have high lexical overlap with the query.
+
+We experiment with several design choices for neural TDA methods, such as layer selection, and we improve them by introducing a novel way of accounting for the optimizer momentum (Shazeer and Stern, 2018). Beside the improvements and the proposed setup that eliminated previously used approximations, all methods under-perform BM25 in FTRACE-TREX dataset. We note that this does not imply that BM25 is optimal for this task, but rather that there are clearly ways in which TDA methods could do better. On our more controlled FTRACE-SYNTH, we observe that the upper-bound on neural TDA methods significantly above of the standard IR methods, especially when we introduce lexical variation in the way facts are expressed. We conclude that significant headroom remains for TDA methods to successfully address fact tracing in datasets.
+
+# 2 Retrieval Methods
+
+We begin with a formal description of the different TDA methods we study in this paper: gradient-based methods (Koh and Liang, 2017; Pruthi et al., 2020) and embedding-based methods (Rajani et al., 2020). To contextualize the performance of these two families of approaches, we also describe a widely used information retrieval baseline, BM25, which uses surface lexical similarity and thus tells us how effectively we can perform fact tracing without even having to access a model.
+
+# 2.1 Gradient-based Attribution
+
+Influence functions (Hampel, 1974; Koh and Liang, 2017) provide one of the first and best-known attribution methods. Given a training example $z = (x,y)$ and a test example $z_{\mathrm{query}} = (x_{\mathrm{query}},y_{\mathrm{query}})$ , influence functions seek to estimate the change in the loss on $z_{\mathrm{query}}$ given an $\epsilon$ increase in the weight of a particular training example $z$ at training time. Computing the influence of a training example $z$ involves first estimating the change in the optimal parameters $\hat{\theta}$ , given that the example $z$ is up-weighted by $\epsilon$ in the training objective, then calculating how much the loss on $z_{\mathrm{query}}$ changes w.r.t. the parameter change. The resulting influence score for convex
+
+loss functions is shown to be:
+
+$$
+\begin{array}{l} \mathcal {I} (z, z _ {\text {q u e r y}}) = \\ - \nabla_ {\theta} L \left(z _ {\text {q u e r y}}, \hat {\theta}\right) ^ {\top} H _ {\hat {\theta}} ^ {- 1} \nabla_ {\theta} L \left(z, \hat {\theta}\right) \tag {1} \\ \end{array}
+$$
+
+where $\nabla_{\theta}L(z,\theta)$ denotes the gradient of the loss function on example $z$ evaluated at model parameters $\theta$ , and $H_{\hat{\theta}}$ denotes the Hessian of the training objective evaluated at the final converged model parameters, $\hat{\theta}$ (see Koh and Liang (2017) for the derivation). In this form, influence functions can be roughly viewed as the weighted dot product of the gradients for $z_{\mathrm{query}}$ and $z$ , where the weight is the inverse Hessian of the training objective at $\hat{\theta}$ . Due to the complexity of inverse Hessian calculation, the naive computational complexity is $\mathcal{O}(np^2 +p^3)$ ( $n$ is dataset size, $p$ is parameter size). Even after the sampling approximations proposed in Koh and Liang (2017), the cost is still too high to directly apply influence functions for fact tracing.
+
+Therefore, we turn to a more recent TDA method that has demonstrated both better tractability and strong empirical results: TracIn (Pruthi et al., 2020), which seeks to estimate influence by asking a credit-assignment question rather than a counterfactual perturbation question. During training, when we take a gradient step on training example $z$ (input, output) at time $t$ , we ask how much the loss changes on test example $z_{\mathrm{query}}$ . TracIn employs a first-order Taylor approximation to answer this question, yielding the following estimate, which is simply the dot product of gradients at a particular step $t$ :
+
+$$
+\mathcal {I} _ {\mathrm {t}} (z, z _ {\text {q u e r y}}) = \nabla_ {\theta} L \left(z _ {\text {q u e r y}}, \theta_ {\mathrm {t}}\right) ^ {\top} \nabla_ {\theta} L \left(z, \theta_ {\mathrm {t}}\right) \tag {2}
+$$
+
+If we have taken $K$ gradient steps on the training example, this yields the total influence:
+
+$$
+\begin{array}{l} \mathcal {I} (z, z _ {\text {q u e r y}}) = \\ \sum_ {k = 1} ^ {K} \nabla_ {\theta} L \left(z _ {\text {q u e r y}}, \theta_ {\mathrm {t} (k)}\right) ^ {\top} \nabla_ {\theta} L \left(z, \theta_ {\mathrm {t} (k)}\right) \tag {3} \\ \end{array}
+$$
+
+where $\mathfrak{t}(k)$ denotes the training step at which we took the $k^{th}$ gradient step on training example $z$ .
+
+The sum over time steps is generally approximated by using some fixed set of training checkpoints, which need not coincide with the actual
+
+steps where $z$ was visited. A known issue is that gradient similarity may be dominated by outlier training examples with large gradients. A simple fix proposed in previous work (Barshan et al., 2020; Han and Tsvetkov, 2021) is to unit-normalize the gradients, effectively replacing the dot product in Equation (2) with cosine similarity:
+
+$$
+\begin{array}{l} \mathcal {I} (z, z _ {\text {q u e r y}}) = \\ \sum_ {k = 1} ^ {K} \frac {\nabla_ {\theta} L \left(z _ {\text {q u e r y}} , \theta_ {\mathrm {t} (k)}\right) ^ {\top} \nabla_ {\theta} L \left(z , \theta_ {\mathrm {t} (k)}\right)}{\| \nabla_ {\theta} L \left(z _ {\text {q u e r y}}, \theta_ {\mathrm {t} (k)}\right) \| \| \nabla_ {\theta} L \left(z , \theta_ {\mathrm {t} (k)}\right) \|} \tag {4} \\ \end{array}
+$$
+
+We hereafter refer to $\mathcal{I}$ in Equation (4) as TRACIN.
+
+# 2.2 Embedding-based Attribution
+
+Hidden representations of neural networks are known to embed high-level features that are often useful for similarity search. While not as theoretically justified, prior work (Rajani et al., 2019) has found that such representations can outperform gradient-based methods. Following prior work, we extract the intermediate layer outputs of a Transformer language model, and average over decoding time-steps to obtain a single vector representation for any example. In our experiments, we consider representations at different layers of the Transformers, as well as their concatenations. Similar to the case of gradient-based methods, the association between a training example and a model prediction is defined by a cosine product:
+
+$$
+\begin{array}{l} \mathcal {I} (z, z _ {\text {q u e r y}}) = \\ \frac {L M _ {\text {i n t e r .}} (z) ^ {\top} L M _ {\text {i n t e r .}} \left(z _ {\text {q u e r y}}\right)}{\| L M _ {\text {i n t e r .}} (z) ^ {\top} \| \| L M _ {\text {i n t e r .}} \left(z _ {\text {q u e r y}}\right) \|} \tag {5} \\ \end{array}
+$$
+
+where $LM_{\mathrm{inter}}$ denotes some hidden representation internal to the model $LM$ . We refer to $\mathcal{I}$ in Equation (5) as EMBED.
+
+# 2.3 Baseline:BM25
+
+In the previous sections, we used attribution methods to define a model-specific similarity function between examples. But it is also possible to identify facts in a model-agnostic way: In the classic IR literature, word-overlap based methods have been shown to be both simple and effective.
+
+Among these approaches, BM25 (Robertson et al., 1995; Lv and Zhai, 2011), the best performing variant, has been consistently used as a baseline for information retrieval benchmarks (Thakur et al.,
+
+
+Figure 2: Dataset Creation: From the original TREx (Elsahar et al., 2018) data, we construct masked sentences and annotate their facts by using provided fact annotations. We assume a fact is expressed when either the object or subject is masked in the sentence. Given a query from the LAMA dataset (Petroni et al., 2019), we identify proponents by matching all TREx training examples expressing the same fact. (The outputs of the masked examples are omitted in the figure.)
+
+2021). When using BM25, we consider an example as a bag of words consist of the input and the output words. The score is proportional to token overlap between the query and the candidate, inversely weighted with the frequency of such tokens, and the importance of weights regulated by hyperparameters. Refer to Appendix A for details.
+
+# 3 Fact Tracing Datasets
+
+We propose two datasets to measure fact tracing approaches: FTRACE-TREx, a natural language dataset with real facts derived from the TREx dataset, and FTRACE-SYNTH, a synthetic dataset with novel facts using made-up entities and relations. For each dataset, we define an attribution set containing all LM training examples that might be considered proponents and a query set containing test examples, each annotated with their ground truth proponents from the attribution set. The examples in these sets are masked language modeling examples, each a (masked input, output, facts) tuple.
+
+# 3.1 FTRACE-TREx
+
+We create an attribution set using TREx (Elsahar et al., 2018) and query set using LAMA (Petroni et al., 2019) datasets. TREx consists of DBPedia (Brümmer et al., 2016) abstracts, $a_{i} \in A$ . Each abstract contains a set of sentences, $s_{j} = a_{ij}$ ,
+
+
Statistics
FTRACE-TREX
FTRACE-SYNTH
Attribution
Query
Attribution
Query
Length
1,560,453
31,479
3,190,000
10,000
Unique Facts
552,381
31,479
50,000
5,000
Avg. #proponents
-
83
-
62
Facts per example
8.28
1
2
1
Unique Predicates
488
41
37
37
Unique Objects
49,166
2,266
5,000
5,000
Unique Subjects
310,197
29,464
5,000
5,000
+
+Table 1: FTRACE-TREx: We extract 1M masked examples from TREx (Elsahar et al., 2018), and match them with 27k queries from LAMA (Petroni et al., 2019) to construct our fact tracing benchmark. FTRACE-SYNTH: To evaluate influence methods on completely novel facts, we propose a synthetic benchmark consists of made-up entities and relations. Refer to Appendix C.4 and Appendix C.5 for examples.
+
+and each sentence is associated with a set of facts, $F(s_{j})$ . For each fact $f \in F(s_{j})$ , TREx annotates the exact positions where the subject and object respectively appear in the sentence $s_{j}$ .
+
+We wish to convert these sentences into training examples that can teach a language model about the facts stated within them. To do so, we construct cloze-style language modeling examples as in masked language modeling (Devlin et al., 2019) or span corruption (Raffel et al., 2020). In particular, for each fact $f$ in a sentence $s$ , we mask out either the subject or the object, and train the model to predict it. The two resulting examples $\mathrm{mask}_{\mathrm{sub}}(s,f)$ and $\mathrm{mask}_{\mathrm{obj}}(s,f)$ are marked as "proponents" of the fact, as shown in Figure 2.
+
+The LAMA dataset is anchored to the same fact tuples used by TREx. For each fact tuple, LAMA provides a template-generated sentence expressing the fact. Similar to TREx, we convert this sentence into a cloze-style example by either masking out the subject or object. Hence, we now have two sets of examples (TRex and LAMA) that express the same facts. We treat the TREx examples as our attribution set and the LAMA examples as our test set. Since we wish to trace influence from LAMA back to TREx, we sometimes refer to LAMA examples as “queries” and TREx examples as “retrieval candidates.” For any LAMA example, we define the ground-truth proponents as simply the TREx examples that express the same fact.
+
+One ambiguity remains regarding ground truth in TREx sentences that express multiple facts. Suppose a TREx sentence expresses facts $f_{1}$ and $f_{2}$ , and we generate cloze examples for both $f_{1}$ and $f_{2}$ . The example $\mathrm{mask}_{\mathrm{sub}}(s, f_{1})$ is clearly a pro
+
+ponent of $f_{1}$ , but it is perhaps also a proponent of $f_{2}$ , since the text supporting $f_{2}$ is still present after masking. Ultimately, we care about whether attribution methods can retrieve the right sentence from the attribution set, not a particular masking of that sentence. In our evaluations (described next), we evaluate a method's ability to retrieve at the sentence level, with the score of a sentence defined as the max score over all maskings of that sentence.
+
+In total (Table 1), we match approximately 448k TREx sentences with 31k LAMA queries. On average, each TREx example expresses three facts, and each LAMA example has 83 proponents (including different maskings of the same sentence).
+
+# 3.2 FTRACE-Synth
+
+In a dataset with real facts, two factors can negatively impact TDA methods for LMs compared to baselines such as BM25: First, many of the facts in FTRACE-TREx may already be known by a pre-trained LM. In such cases, the LM will not learn the fact from TREx, and TDA methods should not be expected to identify examples in TREx as proponents. We refer to this as the "saturation" problem, since the model's performance already saturated on the fact before fine-tuning, leaving no signal for TDA methods to detect. Second, real corpora like TREx and LAMA have lexical overlap between query and attribution examples (overlapping surface forms; see Section 3.1) which can favor counting-based methods like BM25.
+
+To better evaluate TDA methods in isolation, we create a synthetic dataset, FTRACE-SYNTH, to evaluate TDA methods on facts that are guaranteed to be novel. First, we create random entities with a total number comparable to TREx. Then, we randomly relate those entities with each other using the same set of relations from the TREx dataset.
+
+Entities Our entity list consists of 5,000 synthetic entities each uniquely identified by a number. To reduce the lexical overlap between examples in the dataset, we use 4 surface forms per entity - 2 forms with Arabic numerals, 2 forms with Roman numerals. For example, the fourth entity appears with the following surface forms: ["4-entity", "entity-4", "IV-entity", "entity-IV"].
+
+Relations The dataset includes a set of 37 relations (Appendix B) borrowed directly from TREx. Additionally, we paraphrase each relation to create diversity and to reduce the lexical overlap between attribution and query examples.
+
+Attribution Set Each example in the attribution corpus expresses two facts to parallel the multi-fact nature of TREx examples.
+
+Input: entity-MMCLXXIV is the official language of $\_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ 1$ , CMXCVII-entity is the writing place of
+
+Output: 1:3082-entity, 2:entity-MMMCCC
+
+The attribution corpus includes 50,000 individual facts. By masking different entities as well as combining different facts, we can generate 3,190,000 masked examples for the attribution corpus.
+
+Query Set Similar to LAMA, each example in the query corpus queries a single fact expressed as a masked example, for example:
+
+Input: entity-3300 was written in
+
+Output: entity-CMXCVII
+
+We generate 5,000 such facts by assigning random relations between different entities, with two surface forms for each, resulting in 10,000 examples. As a result, each fact in this query set has 62 proponents in the attribution corpus, and every entity appears in 10 relations on average.
+
+# 4 Experimental Setup
+
+Our experiments aim to answer the questions of (1) whether TDA methods can be used as effective fact tracing tools (compared to simple IR baselines), (2) which configurations make them most effective (exploring many variations), and (3) analyzing the weaknesses of TRACIN, in particular its sensitivity to when the knowledge is learned (the aforementioned "saturation" hypothesis).
+
+# 4.1 Reranking Evaluation
+
+Ideally, an attribution method would score a given test query against every training example, and we can sort all examples by their influence score. This would enable evaluation with standard IR methods like recall@10 and mean reciprocal rank (MRR) $\frac{1}{|Q|}\sum_{q\in Q}\frac{1}{\mathrm{rank}_q}$ , where $\mathrm{rank}_q$ is the rank of the first true proponent for the query, and $Q$ denotes the candidate set. However, most attribution methods are computationally intractable for scoring all training sentences in large datasets. Although we
+
+can reduce the complexity of some of these methods through the use of random projections (Pruthi et al., 2020), such lossy approximations would render our results less conclusive, as it would be unclear whether an outcome is due to the intrinsic quality of a method or the quality of the projection.
+
+Therefore, to achieve computational tractability while avoiding such confounds, we propose a simple reranking setup: instead of scoring all examples, we can score a carefully selected subset that still enables meaningful comparisons. We call this the "candidate set". It is the union of four sets:
+
+1. all true proponents for a query: $\mathcal{P}(z_{\mathrm{query}})$
+2. the top-100 retrievals from BM25: BM25 $(z_{\mathrm{query}})$
+3. 100 random examples that share the same target $y$ as the query: $\mathcal{D}_y = \{(x,y)$ s.t. $y = y_{\mathrm{query}}\}$ , and
+4. 100 randomly sampled examples: $\mathcal{D}_{\mathrm{random}}$
+
+with random samples fixed across all evaluations. Note that MRR on this particular candidate set is an upper-bound on the MRR over the full attribution set. Because it includes all proponents but fewer distractors, rank is guaranteed to be closer to 1 in the MRR definition. Also, including $\mathcal{P}(z_{\mathrm{query}})$ is necessary to ensure that the model has the opportunity to retrieve all proponents. BM25( $z_{\mathrm{query}}$ ) ensures that we have "distractors" with high lexical overlap, and $\mathcal{D}_y$ is included because we observed that TDA methods have a tendency to retrieve examples with the same output as the query.
+
+Our candidate set includes all top retrievals from BM25, so the results for BM25 are exact. When combined with the fact that reranking MRRs always upper-bound full retrieval MRRs, our setup guarantees that any method that underperforms BM25 on reranking will also underperform for full retrieval.
+
+Slicing examples The gradient-based methods require careful treatment when considering models that go through two separate stages: pre-training and fine-tuning. For example, if a model has already obtained zero loss on an example at the start of fine-tuning, then the gradient will be near-zero throughout fine-tuning, and computing influence using only fine-tuning checkpoints will yield an uninformative influence score for any query. We refer
+
+to this problem as “saturation.” To mitigate saturation, we evaluate TDA methods on a subset of queries we label Finetune-learned (FL), where the model failed before fine-tuning (the answer is not in top-3 beam-search predictions), but succeeded afterward (the answer is top-1 in beam-search). We referred to this set as “novel facts” in Section 1.4
+
+# 4.2 Model
+
+We use MT5-base, a commonly used encoder-decoder language model (Xue et al., 2021) to evaluate the aforementioned neural TDA methods. MT5 was pre-trained on the MC4 corpus, which includes all of Wikipedia, and therefore was exposed to the knowledge expressed in FTRACE-TREx. The pre-trained MT5 model achieves $24.3\%$ top-3 accuracy when predicting answers to the TREx queries. Fine-tuning MT5 on our FTRACE-TREx training set increases accuracy to $47.42\%$ , suggesting that there are still many facts MT5 did not know after pre-training. For FTRACE-SYNTH, the pre-trained model gets 0 accuracy as expected, and the fine-tuned model obtains $81\%$ .
+
+To evaluate TRACIN, we approximate Equation (3) by choosing three checkpoints that are uniformly spaced out in terms of their training loss (specifically, inverse perplexity), to ensure that we cover significant parts of training while favoring regions with greater loss reduction. Note that we use pre-training checkpoints when evaluating the pretrained model, and fine-tuning checkpoints when evaluating the fine-tuned model; see Appendix A for details. We calculate the gradient w.r.t the average negative likelihood of the true output token sequence. To evaluate embedding-based fact tracing, we use representations from the final checkpoint of the model.
+
+For both gradient and embedding-based methods, we present the best layer combination among the different concatenations of layers studied in (Section 5.2).
+
+# 5 Results
+
+# 5.1 Top-level comparisons
+
+In Table 2, we present a top-level comparison of the three main methods discussed (gradient-based, embedding-based, and BM25). Hyperparameters
+
+
Methods
MRR
Recall@10
Random-Target
14.50 ±0.95
10.32 ±1.54
BM25
77.55 ±1.50
60.89 ±2.31
Finetuned
Pretrained
Finetuned
Pretrained
TRACIN
48.56 ±4.40
62.38 ±1.99
56.02 ±0.67
57.54 ±1.25
EMBED
64.29 ±1.32
60.59 ±1.13
57.89 ±1.38
54.91 ±0.32
TRACIN + EMBED
58.52 ±3.83
67.66 ±0.22
61.72 ±0.08
61.59 ±1.35
+
+Table 2: Top Level Results: Best scores for each method and model on the Finetune-learned slice of FTRACE-TREx. We present average sentence-level retrieval results over 3 random selections of 200 queries. We found that BM25 performs best in MRR outperforming neural methods. Table 6 shows detailed MRRs on predicate, subject, and object level of candidate examples.
+
+for all methods have been optimized. As we discuss in subsequent sections, TDA hyperparameters have a significant effect on performance.
+
+We optimized TRACIN by rescaling gradients with Adafactor accumulators (Shazeer and Stern, 2018), applying unit-normalization to the gradients (see Table 3) and selecting the best layer configuration (Section 5.2). To sanity check that TDAs are doing more than matching the query's output label, we compare to a RANDOM-TARGET baseline that outputs a score of 1 for all training examples with the same output label. This baseline is indeed substantially worse than either method.
+
+Despite extensive optimization for TRACIN and EMBED, however, we found that BM25 with no tuning still outperforms neural TDAs in MRR and Recall@10. TRACIN slightly outperforms EMBED for pretrained model but significantly underforms EMBED for the finetuned model. When we ensemble TRACIN and EMBED (by summing their influence scores) there is an improvement on recall of candidate examples, demonstrating that their success is somewhat orthogonal. We provide example retrievals from all three models in Appendix C.
+
+Surprisingly, pre-trained TRACIN outperforms fine-tuned TRACIN in this dataset, as we discuss more in Section 5.3.
+
+We do not seek to measure all benefits of attribution methods, but rather to assess one expected function (fact-tracing), as promised by their stated goal (tracing a model's prediction back to data). The fact that even the best TDA method obtains MRR of 67.66 and Recall@10 of 61.59 showcases the significant absolute headroom that remains for attribution methods. BM25 results are only a little better, and are provided mainly as a reference point. Next, we analyze what choices contributed to the current status of TDA methods with a detailed ex
+
+
MRR
Recall@10
Change
Finetuned
Pretrained
Finetuned
Pretrained
Adafactor → no-Adafactor
-3.83 ±4.81
-7.20 ±2.25
-11.29 ±2.05
-2.36 ±1.63
unit-norm → no-norm
-3.36 ±4.89
-32.90 ±2.13
-10.82 ±2.24
-28.06 ±1.46
multi-ckpt → single-ckpt
+0.51 ±6.42
+0.60 ±2.23
-6.95 ±4.72
+5.44 ±1.61
no [eos] → [eos]
+5.50 ±4.63
-24.77 ±3.82
+12.96 ±1.59
-19.93 ±3.49
+
+Table 3: Our experiment with various configurations for best layer of the TRACIN evaluated in Finetune-Learned set of FTRACE-TREX: For each change from the best configuration (the first row), we report the best result by optimizing free hyper-parameters. The normalization and the usage of the accumulator smoothing was effective in our top level TRACIN results. We compare maximum scored checkpoint scores to our original multi-checkpoint results, we found that the best checkpoint performs slightly better than multi-checkpoint results in MRR. The including the the end of sentence token in the target side hurts pretrained MT5 model since it is originally trained to predict multiple answer.
+
+ploration of hyperparameters.
+
+# 5.2 Which Transformer layers provide the most reliable attribution signal?
+
+Some layers of a language model may be specialized for operations that have no relation to factual information. For example, previous probing work (Tenney et al., 2019) shows the existence of layers that focus on syntax rather than on knowledge. The contribution of such layers to TRACIN may introduce noise. In Figure 3, we conduct an experiment where we sweep over various subsets of layers.
+
+For TRACIN, the best-performing layer is the embedding layer of the model — this result, also observed in Yeh et al. (2022), is surprising, as most prior work used only the last layer. In EMBED, the best performing layer is again the output of the embedding layer. These results suggest that much of the effectiveness of embedding-based methods derives from their models of lexical similarity. Conversely, for TRACIN, the embedding layer also includes contextual information since the gradient signal propagates back through the entire model.
+
+Additional Model Variants Section 5.1 mentioned several design choices for TRACIN. We performed a systematic evaluation of those choices. In Table 3, given the set of configurable options in the table, we set a given option to a particular value and then optimize remaining parameters.
+
+Unit-normalized gradients drastically outperform the dot product. Next, we considered the role of Adafactor during training. The TRACIN equation arises from considering parameter up
+
+
MRR
Precision @10
Recall@10
Random-Target
36.47 ±2.84
30.43 ±4.00
2.45 ±0.32
BM25
87.69 ±1.71
52.02 ±2.65
4.20 ±0.21
TRACIN
100.00 ±0.00
99.50 ±0.14
8.02 ±0.01
EMBED
99.58 ±0.24
97.12 ±0.53
7.83 ±0.04
TRACIN + EMBED
100.00 ±0.00
98.07 ±0.18
7.91 ±0.01
+
+Table 4: Synthetic Dataset Results: Best scores for fine-tuned model on the Finetune-learned slice of FTRACE-SYNTH. On completely novel facts, the TracIn upperbound is higher than the other methods. Since we control the lexical overlap, BM25 underperforms neural methods. We present average sentence-level retrieval results over 2 random selections of 200 queries. The upper-bound scores on neural methods are higher in the synthetic data than BM25 as we reduce the lexical overlap. The TrACIN upperbound performs best in all the metrics.
+
+dates at a specific step. The true parameter updates were not raw gradients, but gradients that had been rescaled by Adafactor accumulators. Using these rescaled gradient for TRACIN performs much better. Also, surprisingly, using the single best-performing checkpoint is sometimes better than using multiple checkpoints. We provide the individual checkpoint results in Table 5.
+
+# 5.3 FTRACE-SYNTH and Saturation
+
+As mentioned earlier, TRACIN monitors the change in a model's performance on a test query over the course of training — and therefore is likely to fail if a test query's loss is already zero (saturated) at the start of the training period monitored by TRACIN. In addition, because the pre-trained model sees very similar sentences and information in the pre-training corpora, the influence could be distributed over multiple examples, such that the signal from each candidate is weak. These confounding factors may apply to FTRACE-TREX. Therefore, we also evaluate TDA methods on our synthetic dataset, FTRACE-SYNTH, which controls for all these issues. We fine-tune the same model on FTRACE-SYNTH and perform the same evaluation in Table 4. The results suggest that when the aforementioned factors are controlled, the reranking upper-bound for gradient-based TDA methods is better than BM25 and slightly better than embedding-based TDA methods. This result verifies that TDA methods might have advantages over standard IR methods, despite falling short in a more realistic, applied scenario.
+
+
+Figure 3: Mean reciprocal rank for TRACIN with different layers and and EMBED from different intermediate layers: In G.0, gradient of embedding layer is used. In G. and A., respectively, gradients and embeddings of all layers are used. A.E.0 and A.D.0 corresponds to embedding layer's output in the encoder and decoder part of the model respectively. Comma-separated labels denote ensembling by summing the scores of the corresponding layers. We report results for 3 random seeds (error bars with standard deviation) of 200 queries where queries learned between pretraining checkpoints. In neural methods, using only the embedding layer or its output performs the best, while underperforming the baseline method BM25.
+
+# 6 Related work
+
+Information Retrieval To define our fact tracing task, we employ standard concepts from the information retrieval (IR) literature: a retrieval + reranking setup, and standard retrieval metrics. However, while IR focuses on retrieving any document that satisfies a user's query, our benchmark specifically aims to identify examples that caused a particular model to make a particular prediction. This focus on model-specific causality distinguishes us from prior IR work (Thakur et al., 2021; Nguyen et al., 2016). Our evaluation setup should be easier than generic IR benchmarks because we are only evaluating on predictions we know the LM gets right.
+
+Language Models as Retrievers Language models have been successfully used in numerous IR applications. Karpukhin et al. (2020) use language model embeddings to warm-start neural retrievers for knowledge-intensive tasks. Guu et al. (2020) and Lewis et al. (2020) show that language modeling and information retrieval can be jointly learned in a manner that benefits both tasks. Our work uses TDA-based retrieval methods to help users understand the behavior of the LMs themselves.
+
+Attribution Methods Recent work has tried to explain neural model behavior in many different ways: (1) attributing a prediction back to specific features in the input (Simonyan et al., 2014; Sundararajan et al., 2017; Han et al., 2020), (2) attributing to specific model parameters (Dai et al., 2022;
+
+Mitchell et al., 2022), (3) probing for competence at linguistic sub-tasks (Tenney et al., 2019), and finally (4) attributing back to training examples (Pruthi et al., 2020; Koh and Liang, 2017).
+
+However, work in the last category (Han et al., 2020; Guo et al., 2021) has been limited, mainly focusing on classification and regression tasks that do not involve questions about factuality or world knowledge. Consequently, these methods have primarily been used as a data cleaning technique, leaving the question of fact tracing unexplored (Han et al., 2020; Hara et al., 2019).
+
+# 7 Conclusion
+
+We introduced a new dataset and benchmark for fact tracing: the task of tracing language models' assertions back to the training examples that provided evidence for those predictions. We evaluated gradient-based and embedding-based TDA methods and found that they perform worse than a standard IR baseline (BM25) even in settings that favor TDA methods. We investigated the effects of layer selection, model checkpoints and fine-tuning. Our ablative analysis suggests that saturation is an important factor affecting the performance of current methods. Much is needed to improve these methods before they can be reliably used for fact tracing. We hope that this benchmark will enable future research on fact tracing, by mitigating computational challenges and establishing a principled ground truth.
+
+# Acknowledgments
+
+We would like to thank Zhuyun Dai, Keith Hall, Ji Ma for their helpful discussions and feedbacks.
+
+# Limitations and Impact Statement
+
+Our experiments focus on a single representative language model, MT5-base; it is possible that our findings about the effectiveness of attribution methods for fact tracing would differ substantially when applied to language models with very different architectures or trained on different datasets. Because of the candidate set construction scheme described in Section 4.1, these results only upper-bound the performance of evaluated methods, and it is also possible that they are even less effective than reported here. The ground truth labels in FTRACE-TREx extracted from TREx where the fact annotations are semi-automatically annotated, can have labeling errors.
+
+The FTRACE dataset includes content from Wikipedia, some of which has not been vetted for factual accuracy. It is possible that by redistributing this content we will propagate misinformation. We plan to mitigate this harm with a datasheet that explicitly communicates FTRACE's role as an evaluation tool, and not as a reliable source of information. Apart from the dataset, we anticipate no ethical issues associated with the techniques described in this publication.
+
+# References
+
+Elnaz Barshan, Marc-Etienne Brunet, and Gintare Karolina Dziugaite. 2020. Relatif: Identifying explanatory training samples via relative influence. In International Conference on Artificial Intelligence and Statistics, pages 1899-1909. PMLR.
+Samyadeep Basu, Phil Pope, and Soheil Feizi. 2021. Influence functions in deep learning are fragile. In International Conference on Learning Representations.
+Martin Brümmer, Milan Dojchinovski, and Sebastian Hellmann. 2016. Dbpedia abstracts: A large-scale, open, multilingual nlp training corpus. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3339-3343.
+Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. 2022. Knowledge neurons in pretrained transformers. In Proceedings of the 60th Annual Meeting of the Association for Computational
+
+Linguistics (Volume 1: Long Papers), pages 8493-8502, Dublin, Ireland. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon Hare, Frederique Laforest, and Elena Simperl. 2018. T-rex: A large scale alignment of natural language with knowledge base triples. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
+Han Guo, Nazneen Rajani, Peter Hase, Mohit Bansal, and Caiming Xiong. 2021. FastIF: Scalable influence functions for efficient model interpretation and debugging. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10333-10350, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 3929-3938. PMLR.
+Frank R Hampel. 1974. The influence curve and its role in robust estimation. Journal of the american statistical association, 69(346):383-393.
+Xiaochuang Han and Yulia Tsvetkov. 2021. Influence tuning: Demoting spurious correlations via instance attribution and instance-driven updates.
+Xiaochuang Han, Byron C. Wallace, and Yulia Tsvetkov. 2020. Explaining black box predictions and unveiling data artifacts through influence functions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5553-5563, Online. Association for Computational Linguistics.
+Satoshi Hara, Atsushi Nitanda, and Takanori Maehara. 2019. Data cleansing for models trained with sgd. Advances in Neural Information Processing Systems, 32.
+Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics.
+
+Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In International Conference on Machine Learning, pages 1885-1894. PMLR.
+Katherine Lee, Orhan First, Ashish Agarwal, Clara Fanjiang, and David Sussillo. 2018. Hallucinations in neural machine translation. In Workshop on Interpretability and Robustness for Audio, Speech, and Language at NeurIPS.
+Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459-9474.
+Yuanhua Lv and ChengXiang Zhai. 2011. Lower-bounding term frequency normalization. In Proceedings of the 20th ACM international conference on Information and knowledge management, pages 7-16.
+Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, and Christopher D Manning. 2022. Fast model editing at scale. In International Conference on Learning Representations.
+Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. In Proceedings of the Workshop on Cognitive Computation.
+Fabio Petroni, Tim Rocttäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473, Hong Kong, China. Association for Computational Linguistics.
+Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. 2020. Estimating training data influence by tracing gradient descent. Advances in Neural Information Processing Systems, 33.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
+Nazneen Fatema Rajani, Ben Krause, Wengpeng Yin, Tong Niu, Richard Socher, and Caiming Xiong. 2020. Explaining and improving model behavior with k nearest neighbor representations. arXiv preprint arXiv:2010.09030.
+Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense
+
+reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932-4942, Florence, Italy. Association for Computational Linguistics.
+Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? arXiv preprint arXiv:2002.08910.
+Stephen E Robertson, Steve Walker, Susan Jones, Micheline M Hancock-Beaulieu, Mike Gatford, et al. 1995. Okapi at trec-3. Nist Special Publication Sp, 109:109.
+Andrea Schioppa, Polina Zablotskaia, David Vilar, and Artem Sokolov. 2022. Scaling up influence functions. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8):8179-8186.
+Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 4596-4604. PMLR.
+K Simonyan, A Vedaldi, and A Zisserman. 2014. Deep inside convolutional networks: visualising image classification models and saliency maps. pages 1-8. Proceedings of the International Conference on Learning Representations (ICLR).
+Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 3319-3328. PMLR.
+Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. Bert rediscovers the classical nlp pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601.
+Nandan Thakur, Nils Reimers, Andreas Rückle, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
+Ran Tian, Shashi Narayan, Thibault Sellam, and Ankur P Parikh. 2019. Sticking to the facts: Confident decoding for faithful data-to-text generation. arXiv preprint arXiv:1910.08684.
+Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483-498, Online. Association for Computational Linguistics.
+
+Chih-Kuan Yeh, Ankur Taly, Mukund Sundararajan, Frederick Liu, and Pradeep Ravikumar. 2022. First is better than last for training data influence. arXiv preprint arXiv:2202.11844.
+
+# Appendix
+
+In this appendix, we will provide implementation details and additional results for the experiments.
+
+# A Implementation Details
+
+BM25 We use the following BM25 formula:
+
+$$
+\mathcal {I} (z, z _ {\mathrm {q u e r y}}) = \sum_ {t \in z _ {\mathrm {q u e r y}}} \log \left(\frac {N + 1}{N _ {t}}\right) \times \left(\frac {(k _ {1} + 1) \cdot f (z , t)}{k _ {1} \cdot ((1 - b) + b \cdot (\frac {L (z)}{L _ {\mathrm {a v g}}})) + f (z , t)} + 1\right)
+$$
+
+where, $\mathrm{f(z,t)}$ is the overlap count, $N$ is the number of training examples, $L(z)$ is the length of the example, and $L_{\mathrm{avg}}$ is the average example length. $k_{1}$ and $b$ are hyperparameters that reweights the importance of the other terms in the formula. Robertson et al. (1995) provides the intuition behind this definition of relatedness.
+
+We use a publicly available BM25+(Lv and Zhai, 2011) implementation written in python and released under https://pypi.org/project/rank-bm25/. We tokenize queries and retrieval examples by space and we remove masked tokens. We did not optimize any of the default hyper parameters.
+
+MT5 Model We use intermediate checkpoints of MT5 model $^{6}$ (12 layers transformer with 580M parameters). We convert these checkpoints to Pytorch by using HugginFace's T5 converter. We use the tokenizer provided. In our datasetSection 3.1, we use extra_id_0 for the mask token compatible with pretraining corpus of MT5..
+
+TRACIN We calculate gradients by using Pytorch without batching examples and by using average negative likelihood over output sequence. We store each individual parameter's gradient (blocks of transformer) in a dictionary structure. Given a query and a retrieval example, we calculate scores Equation (4) for each parameter separately that means we locally normalize each parameters' gradient in Equation (4). Then, to calculate a layer's or full model's score, we score individual scores corresponding to parameters in that layer. This enable us to sweep over different combination of layers as in Figure 3 without rerunning the model.
+
+Pretrained MT5 model is trained until 80k gradient steps. We use checkpoints at 5100, 10200, 15300 steps. We fine-tune MT5 model on additional 60k gradient steps on TReX dataset. Then, we use checkpoints at 5000, 10000, 30000 steps.
+
+We parallelize over checkpoints when calculating Equation (4). For each query, we spend approximately 15 minutes by using VOLTA V100 32 GB GPUs to get scores for all the retrieval examples in the ranking set (Section 4.1))
+
+EMBED Transformer model's forward pass can be expressed as following pseudo code:
+
+$$
+\operatorname {e n c} _ {0} = \operatorname {E m b e d d i n g} (x)
+$$
+
+$$
+\operatorname {e n c} _ {i} = \operatorname {E n c o d e r} _ {i} \left(\operatorname {e n c} _ {i - 1}\right) i = 1.. N
+$$
+
+$$
+\operatorname {d e c} _ {0} = \operatorname {E m b e d d i n g} (y) \tag {6}
+$$
+
+$$
+\operatorname {d e c} _ {i} = \operatorname {D e c o d e r} _ {i} (y, \operatorname {e n c} _ {N}) i = 1.. N
+$$
+
+$$
+\mathcal {L} = \mathrm {N L L} \left(W _ {\text {p r o j}} \operatorname {d e c} _ {N}, y _ {\text {q u e r y}}\right)
+$$
+
+We use $\mathrm{enc}_i$ and $\mathrm{dec}_i$ , and reduce (average) them over time-steps in input and outputs side respectively.
+
+# B Synthetic Data Relation Templates
+
+Below are the relation templates we use in the dataset. "0 and "1" are the slots for the entities. Paraphrases are delimited by "l" sign. Left paraphrase is the original surface for in the FTRACE-TREx dataset, right one is the additional paraphrase paraphrase.
+
+$\{\emptyset\}$ was born in $\{1\}$ | $\{\emptyset\}$ 's birth place is $\{1\}$
+$\{0\}$ died in $\{1\} \mid \{0\}$ passed away in $\{1\}$
+$\{\emptyset\}$ is a subclass of $\{1\} \mid \{1\}$ is superclass of $\{\emptyset\}$
+The official language of $\{0\}$ is $\{1\} \mid \{1\}$ is the official language of $\{0\}$
+$\{\emptyset\}$ plays in $\{1\}$ position | $\{1\}$ is the play position of $\{\emptyset\}$
+$\{\emptyset\}$ was awarded the $\{1\} \mid \{1\}$ given to $\{\emptyset\}$
+$\{\emptyset\}$ was originally aired on $\{1\} \mid \{1\}$ is the first streamer of $\{\emptyset\}$
+$\{\emptyset\}$ was educated at the University of $\{1\}$ | $\{\emptyset\}$ studied in University of $\{1\}$
+$\{\emptyset\}$ shares border with $\{1\} \mid \{\emptyset\}$ and $\{1\}$ are neighbours
+$\{\emptyset\}$ is named after $\{1\} \mid \{1\}$ was inspirational for the naming of $\{\emptyset\}$
+The original language of $\{\emptyset\}$ is $\{1\} \mid \{1\}$ is the original language of $\{\emptyset\}$
+$\{0\}$ plays with $\{1\} \mid \{0\}$ plays along with $\{1\}$
+$\{\emptyset\}$ is a member of $\{1\} \mid \{1\}$ accepted $\{\emptyset\}$ as a member
+$\{0\}$ works in the field of $\{1\} \mid \{1\}$ is the work field of $\{0\}$
+{1} participated in the $\{\emptyset\}$ | {1} was a participant of $\{\emptyset\}$
+$\{\emptyset\}$ is a $\{1\}$ by profession | $\{\emptyset\}$ 's profession is $\{1\}$
+$\{\emptyset\}$ consists of $\{1\} \mid \{\emptyset\}$ includes $\{1\}$
+$\{\emptyset\}$ is a member of the $\{1\}$ political party | $\{\emptyset\}$ 's political party was $\{1\}$
+$\{\emptyset\}$ maintains diplomatic relations with $\{1\} \mid \{\emptyset\}$ 's diplomacy with $\{1\}$
+$\{\emptyset\}$ is produced by $\{1\} \mid \{1\}$ produced $\{\emptyset\}$
+$\{\emptyset\}$ is a citizen of $\{1\} \mid \{\emptyset\}$ 's home country is $\{1\}$
+$\{\emptyset\}$ was written in $\{1\} \mid \{1\}$ is the writing place of $\{\emptyset\}$
+$\{\emptyset\}$ is located in $\{1\} \mid \{\emptyset\}$ placed in $\{1\}$
+$\{\emptyset\}$ is developed by $\{1\} \mid \{1\}$ developed $\{\emptyset\}$
+$\{\emptyset\}$ is the capital of $\{1\}$ | the capital of $\{1\}$ is $\{\emptyset\}$
+$\{\emptyset\}$ works for $\{1\} \mid \{\emptyset\}$ works at $\{1\}$
+$\{\emptyset\}$ plays $\{1\}$ music | $\{\emptyset\}$ perform $\{1\}$ music
+$\{\emptyset\}$ has the position of $\{1\} \mid \{\emptyset\}$ 's position is $\{1\}$
+$\{0\}$ is represented by music label $\{1\}$ | music label $\{1\}$ represents $\{0\}$
+$\{\emptyset\}$ used to work in $\{1\} \mid \{1\}$ is ex-workplace of $\{\emptyset\}$
+$\{\emptyset\}$ is affiliated with the $\{1\}$ religion | $\{\emptyset\}$ believes in $\{1\}$ religion
+$\{\emptyset\}$ is owned by $\{1\} \mid \{1\}$ owned $\{\emptyset\} \vee$
+The native language of $\{0\}$ is $\{1\} \mid \{1\}$ is the native language of $\{0\}$
+$\{\emptyset\}$ and $\{1\}$ are twin cities | $\{\emptyset\}$ is twin city of $\{1\}$
+$\{\emptyset\}$ is a legal term in $\{1\} \mid \{\emptyset\}$ is a legal definition in $\{1\}$
+The headquarter of $\{0\}$ is in $\{1\} \mid \{0\}$ 's headquarter in $\{1\}$
+$\{\emptyset\}$ was founded in $\{1\} \mid \{\emptyset\}$ was established in $\{1\}$
+
+# C Additional Results and Samples
+
+# C.1 Individual Checkpoints
+
+
MRR
Recall@10
FT
PT
FT
PT
Multi
48.56±4.40
62.38±1.99
56.02±0.67
57.54±1.25
Ckpt1
49.07±4.67
54.77±1.26
49.07±4.67
54.77±1.26
Ckpt2
47.30±2.88
62.98±1.01
47.30±2.88
62.98±1.01
Ckpt3
48.69±5.19
60.29±3.34
48.69±5.19
60.29±3.34
+
+Table 5: TRACIN results for individual checkpoints on Finetune-Learned set.
+
+# C.2 MRR Results with Submetrics
+
+MRR on Finetune-Learned Subsets of FTRACE-TREx We provide submetrics for (Finetuned-learned (FL)) set.
+
+Table 6: MRR Results with submetrics in fine-tuned learned set. (see Table 2)
+
+
Sentence (Table 2)
Predicate
Subject
Object
FT
PT
FT
PT
FT
PT
FT
PT
Random-Target
14.50±0.95
14.50±0.95
14.71±0.89
14.71±0.89
98.14±0.72
98.14±0.72
63.56±2.53
63.56±2.53
BM25
77.55±1.50
77.55±1.50
79.26±2.82
79.26±2.82
88.25±1.80
88.25±1.80
85.71±1.22
85.71±1.22
TRACIN
48.56±4.40
62.38±1.99
49.16±4.76
63.98±0.98
99.53±0.43
86.49±1.22
88.74±1.59
74.99±3.61
EMBED
64.29±1.32
60.59±1.13
66.25±1.82
63.00±1.73
94.09±0.77
81.79±1.33
80.45±0.99
74.03±2.27
TRACIN + EMBED
58.52±3.83
67.66±0.22
59.24±3.88
69.49±0.92
97.92±0.50
82.03±1.61
71.94±2.44
79.15±1.55
+
+MRR on Pretrained-Learned Subsets of FTRACE-TREx We present additional results for (Pretrain-learned (PL)) examples where the model failed before the a checkpoint of pre-training, but changed during pre-training. We found that the average number of proponents in the PL set is $2.5\mathrm{x}$ that of the FL set (since we expect that frequently mentioned facts will be learned first). These results suggest that it's difficult to control for when facts were learned without affecting the other statistics, and that direct comparisons between model performance on the PL and FL datasets may not be informative.
+
+Table 7: MRR Results with submetrics in pretrained- learned set. (see Table 2)
+
+
Sentence (Table 2)
Predicate
Subject
Object
FT
PT
FT
PT
FT
PT
FT
PT
Random-Target
15.83±1.42
15.83±1.42
15.62±1.29
15.62±1.29
98.36±0.79
98.36±0.79
61.88±1.99
61.88±1.99
BM25
77.62±3.24
77.62±3.24
77.20±3.56
77.20±3.56
91.24±0.83
91.24±0.83
88.07±1.39
88.07±1.39
TRACIN
64.18±2.62
54.45±1.96
63.94±1.80
56.01±1.95
99.17±0.72
89.82±2.21
88.07±2.00
81.44±1.71
EMBED
51.21±2.43
50.42±2.43
51.02±2.55
50.30±2.64
96.52±1.75
84.21±3.03
79.18±1.05
79.00±0.82
TRACIN + EMBED
65.91±2.88
55.40±1.98
66.04±2.60
57.09±2.24
97.95±0.25
86.21±2.16
84.09±2.00
82.01±1.78
+
+# C.3 Precision-Recall plots for FTRACE-TREx
+
+We present accompanying precision and recall results for Figure 3.
+
+
+
+
+
+# C.4 Samples for FTRACE-TREx
+
+Here, we provide example top-3 retrievals from TDAs for the FTRACE-TREx dataset. Long examples are truncated for display purposes. We provide label (whether the retrieved example includes the fact) next to the output of the retrieved example.
+
+
Embed
TracIn
BM25
Q: In late 2005, the _1 broadcast a full series of Star Spell, again presented by Ea-monn Holmes but Mishal Hu-sain took over from Nina as word pronou...
+A: BBC True
Q: In late 2005, the _1 broadcast a full series of Star Spell, again presented by Ea-monn Holmes but Mishal Hu-sain took over from Nina as word pronou...
+A: BBC True
Q: In late 2005, the _1 broadcast a full series of Star Spell, again presented by Ea-monn Holmes but Mishal Hu-sain took over from Nina as word pronou...
+A: BBC True
Q: The Vicar of Dibley is a _1 television sitcom cre-ated by Richard Curtis and writ-ten for actress Dawn French by Curtis and Paul Mayhew-Archer, wit...
+A: BBC False
Q: Tasneem Zehra Husain (also spelled as Tasneem Zehra Hus-sain), is a Pakistani _1 and an Assistant Professor of Physics at the Lahore University of...
+A: theoretical physicist False
Q: In late 2005, the BBC broadcast a full series of Star Spell, again presented by Ea-monn Holmes but _1 took over from Nina as word pro-nouncer.
+A: Mishal Husain True
Q: Honigberg also recorded Homage to Rostropovich (1927–2007), a CD of solo cello works written for the legendary cellist; Frédéric Chopin's complete wor...
+A: piano False
Q: Abdul Aziz Bin Dato Haji Husain was born 18 July 1950 in Kuching, Sarawak, _1.
+A: Malaysia False
Q: He now works for the BBC, presenting on the BBC News channel and _1.
+A: BBC One False
+
+Table 8: Mishal Husain works for _______1. (A: BBC)
+
+# C.5 Samples for FTRACE-Synth
+
+Now, we provide the retrieved examples for FTRACE-SYNTH version of our dataset.
+
+
Embed
TracIn
BM25
Q: Clara Ellaline Hope Leighton (sometimes Clare Veronica Hope Leighton) (12 April 1898 - 4 November 1989) was an _1/American artist, writer and ill...
+A: English False
Q: He was educated in _1 and at the Quaker Leighton Park School.
+A: London False
Q: The _1 Kenneth Leighton (1929-1988) also wrote a Fantasia Contrappuntistica ("Homage to Bach", Op.24) for piano, which won the first prize at the...
+A: composer True
Q: Lillianne Brown Leighton (May 17, 1874 - March 19, 1956), known professionally as Lillian Leighton, was an _1 silent film actress.
+A: American False
Q: Kenneth _1 Bray (May 26, 1895 - January 9, 1953) was an Episcopal priest, teacher, sportsman and coach.
+A: Augustine False
Q: The composer _1 (1929-1988) also wrote a Fantasia Contrappuntistica ("Homage to Bach", Op.24) for piano, which won the first prize at the Bolzano...
+A: Kenneth Leighton True
Q: The composer Kenneth Leighton (1929-1988) also wrote a Fantasia Contrappuntistica ("Homage to Bach", Op.24) for _1, which won the first prize at ...
+A: piano True
Q: Leighton Road Evangelical Church is a nonconformist independent evangelical church located on the Gainsborough estate, _1 in the English county o...
+A: Ipswich False
Q: The composer Kenneth Leighton (1929-1988) also wrote a Fantasia Contrappuntistica ("Homage to Bach", Op.24) for _1, which won the first prize at ...
+A: piano True
Q: _1 given to 3692-entity-entity-2686 was awarded the _2A: 1:entity-1138, 2:entity-MMMMDCLIII True
Q: _1 given to entity-MMDCLXXXVI, _2 used to work in CXVI-entityA: 1:entity-MMMMDCLIII, 2:entity-1650 True
Q: _1 given to 3692-entity, _2 was awarded the entity-MMMMDCLIII A: 1:entity-1138, 2:entity-2686 True
Q: entity-1138 given to _1-entity-2686 was awarded the _2A: 1:3692-entity, 2:entity-MMMMDCLIII True
Q: entity-CCCII given to _1,MMMDLVI-entity given to _2A: 1:entity-MMMMDCLIII, 2:entity-MDCCCLXXXVII False
Q: entity-CCCII given to _1, _2 given to entity-MDCCCLXXXVII A: 1:entity-MMMMDCLIII, 2:MMMDLVI-entity False
Q: _1 given to 2686-entity, _2 plays in entity-2658 positionA: 1:MMMMDCLIIII-entity, 2:MMMMCCLIX-Entity True
Q: _1 shares border with entity-DCCXXVII-entity-302 given to _2A: 1:entity-MMMMCMXCVIII, 2:entity-MMMDCLIII False
Q: entity-CCCII given to _1,MMMDLVI-entity given to _2A: 1:entity-MMMMDCLIII, 2:entity-MDCCCLXXXVII False
+
+Table 10: ${}_{1}$ given to entity-2686. (A: entity-MMMMDCLIII)
+
+
Embed
TracIn
BM25
Q: entity-MMMDLXXVI's diplomacy with __1,entity-3193's birth place is __2A: 1:MMCDLX-entity, 2:entity-5 True
Q: 3132-entity maintains diplomatic relations with __1, __2 was awarded the entity-3701A: 1:entity-3468, 2:entity-4097 False
Q: MMMDLXXVI-entity's profession is __1,entity-CCLXI's diplomacy with __2A: 1:MXXX-entity, 2: 506-entity False
Q: MMMDLXXVI-entity's diplomacy with __1, __2given to 2897-entityA: 1:entity-MMCDLX, 2:MCMLIII-entity True
Q: The original language of MMCCLXXIX-entity is __1,3552-entity shares border with __2A: 1:MMCDLX-entity, 2:MMMMDLXV-entity False
Q: MMMDLXXVI-entity's diplomacy with __1,MCMLIII-entity given to __2A: 1:entity-MMCDLX, 2: 2897-entity True
Q: MMMDLXXVI-entity's diplomacy with __1,MCMLIII-entity given to __2A: 1:entity-MMCDLX, 2:2897-entity True
Q: The official language of CMXCVII-entity is __1, __2 died in MMCDLX-entityA: 1:3215-entity, 2: 710-entity False
Q: MMMDLXXVI-entity's diplomacy with __1, __2given to 2897-entityA: 1:entity-MMCDLX, 2:MCMLIII-entity True
+
+Table 11: MMMDLXXVI-entity's diplomacy with _______1. (A: MMCDLX-entity)
\ No newline at end of file
diff --git a/towardstracingknowledgeinlanguagemodelsbacktothetrainingdata/images.zip b/towardstracingknowledgeinlanguagemodelsbacktothetrainingdata/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4feb2645d29df6cef25a3f26dc84b0f343ed59a4
--- /dev/null
+++ b/towardstracingknowledgeinlanguagemodelsbacktothetrainingdata/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e787fb05b02dba4209d884c7abcc32c9e2ee8f357fcd5113c885b988d9d2b76
+size 1246794
diff --git a/towardstracingknowledgeinlanguagemodelsbacktothetrainingdata/layout.json b/towardstracingknowledgeinlanguagemodelsbacktothetrainingdata/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c1a21cd8bc3171b6d2fa40e380890afde60dab04
--- /dev/null
+++ b/towardstracingknowledgeinlanguagemodelsbacktothetrainingdata/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c870e01686f5e686d4cb69543b1864563f66005bf0037aeb924b5261b72af35d
+size 590362
diff --git a/towardsunifiedprompttuningforfewshottextclassification/c853ba9b-820a-47ca-9b04-e72563c595e0_content_list.json b/towardsunifiedprompttuningforfewshottextclassification/c853ba9b-820a-47ca-9b04-e72563c595e0_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e470347951a77d7359687ef793dfed008c8db571
--- /dev/null
+++ b/towardsunifiedprompttuningforfewshottextclassification/c853ba9b-820a-47ca-9b04-e72563c595e0_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fbcf135fbaa03ba7d772186b6b361f9f77dfab144feb26ccb6b584dbf38ae706
+size 98534
diff --git a/towardsunifiedprompttuningforfewshottextclassification/c853ba9b-820a-47ca-9b04-e72563c595e0_model.json b/towardsunifiedprompttuningforfewshottextclassification/c853ba9b-820a-47ca-9b04-e72563c595e0_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..0706a36ed0a02a1d6d01bdb903c2ee6766b8f95d
--- /dev/null
+++ b/towardsunifiedprompttuningforfewshottextclassification/c853ba9b-820a-47ca-9b04-e72563c595e0_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:101926c6b1336667bd47b7b4ed753d3607cb130e4ba31548b606c3f8885fce07
+size 119344
diff --git a/towardsunifiedprompttuningforfewshottextclassification/c853ba9b-820a-47ca-9b04-e72563c595e0_origin.pdf b/towardsunifiedprompttuningforfewshottextclassification/c853ba9b-820a-47ca-9b04-e72563c595e0_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c7f62d5baa2b3ef6e0979ee8cf5241b455f73e4c
--- /dev/null
+++ b/towardsunifiedprompttuningforfewshottextclassification/c853ba9b-820a-47ca-9b04-e72563c595e0_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f9112c15d18eb411e6dab0e9b7002c8368e59b53062b6de30fe7f14c6ee6b84e
+size 635688
diff --git a/towardsunifiedprompttuningforfewshottextclassification/full.md b/towardsunifiedprompttuningforfewshottextclassification/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b9d9377d33cc7177fc5e9e459db4f2f40a6fa6ca
--- /dev/null
+++ b/towardsunifiedprompttuningforfewshottextclassification/full.md
@@ -0,0 +1,360 @@
+# Towards Unified Prompt Tuning for Few-shot Text Classification
+
+Jianing Wang $^{1}$ , Chengyu Wang $^{2}$ , Fuli Luo $^{2}$ , Chuanqi Tan $^{2}$ , Minghui Qiu $^{2}$ , Fei Yang $^{3}$ , Qiuhui Shi $^{4}$ , Songfang Huang $^{2}$ , Ming Gao $^{1,5\dagger}$
+
+1 School of Data Science and Engineering, East China Normal University, Shanghai, China
+
+2 Alibaba Group, Hangzhou, China 3 Zhejiang Lab, Hangzhou, China
+
+4 Ant Group, Hangzhou, China
+
+5 Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention,
+
+School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
+
+lygwjn@gmail.com,
+
+{chengyu.wcy,lf1259702,chuanqi.tcq}@alibaba-inc.com,
+
+minghui.qmh@alibaba-inc.com, yangf@zhejianglab.com,
+
+qiuhui.sqh@antgroup.com, songfang.hsf@alibaba-inc.com,
+
+mgao@dase.ecnu.edu.cn
+
+# Abstract
+
+Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot text classification by employing task-specific prompts. Yet, PLMs are unfamiliar with prompt-style expressions during pre-training, which limits the few-shot learning performance on downstream tasks. It would be desirable if the models can acquire some prompting knowledge before adapting to specific NLP tasks. We present the Unified Prompt Tuning (UPT) framework, leading to better few-shot text classification for BERT-style models by explicitly capturing prompting semantics from non-target NLP datasets. In UPT, a novel paradigm Prompt-Options-Verbalizer is proposed for joint prompt learning across different NLP tasks, forcing PLMs to capture task-invariant prompting knowledge. We further design a self-supervised task named Knowledge-enhanced Selective Masked Language Modeling to improve the PLM's generalization abilities for accurate adaptation to previously unseen tasks. After multi-task learning across multiple tasks, the PLM can be better prompt-tuned towards any dissimilar target tasks in low-resourced settings. Experiments over a variety of NLP tasks show that UPT consistently outperforms state-of-the-arts for prompt-based fine-tuning.
+
+# 1 Introduction
+
+The emergence of Pre-trained Language Models (PLMs) has boosted the performance of a variety
+
+of NLP tasks (Qiu et al., 2020; Han et al., 2021a). However, during fine-tuning, PLMs can perform poorly with few training samples due to model over-fitting (Gao et al., 2021).
+
+To alleviate this problem for low-resourced scenarios, natural language prompts have been applied to enable few-shot or zero-shot learning with PLMs (Liu et al., 2021a). To make prompts more flexible and task-adaptive, prompt tuning freezes the PLM backbone and adjusts the representations of prompts (Lester et al., 2021). This type of method is especially suitable for ultra-large PLMs that are difficult to tune. For BERT-style PLMs, prompt-based fine-tuning has been proposed, transforming text classification tasks into cloze-style problems (Schick and Schütze, 2021a,b; Gao et al., 2021). To specify, task-specific discrete templates with masked language tokens are added to input texts. The result tokens of the masked positions predicted by the Masked Language Modeling (MLM) head are used for class label prediction2. Therefore, the pre-trained knowledge acquired by PLMs can be better utilized by "re-using" the MLM training objective. Witnessing the successful usage of prompts for few-shot learning, various following-up works have been conducted, such as continuous prompt encoding (Liu et al., 2021c), knowledgeable prompt learning (Hu et al., 2021), and prompt generation (Shin et al., 2020).
+
+Recently, a few works (Wei et al., 2021; Zhong et al., 2021a; Mishra et al., 2021) focus on multi-
+
+
+Figure 1: UPT is a unified framework that learns prompting knowledge from non-target NLP datasets to improve the performance on target tasks, in the format of Prompt-Options-Verbalizer (Sect. 2.2). Figures a) and b) show examples of supervised and self-supervised learning tasks (i.e., Knowledge-enhanced Selective MLM, Sect. 2.3).
+
+
+
+task prompt tuning on ultra-large PLMs. Specifically, they tune PLMs on full training samples from different tasks to force PLMs to learn more prompting knowledge, and directly make predictions over the target task by zero-shot learning. Yet, we observe that for BERT-style PLMs, the performance is not satisfactory for two reasons. 1) These PLMs are sensitive to different designs of prompt templates and verbalizers (Liu et al., 2021c), which fail to adapt to target tasks with new prompts and verbalizers. 2) There are word distribution differences between prompt-style texts and sentences in pre-training corpora. It would be better if BERT-style PLMs can acquire some prompting knowledge before they are adapted to downstream tasks. Therefore, a natural question arises: how can we make BERT-style PLMs adapt to target NLP tasks accurately with more prompting knowledge?
+
+To address these issues, we introduce a novel framework named Unified Prompt Tuning (UPT), facilitating better few-shot text classification performance for BERT-style models by explicitly capturing general prompting semantics from non-target datasets. Specially, we propose a unified paradigm named Prompt-Options-Verbalizer (POV), which enables mixture prompt-tuning over a series of nontarget NLP tasks of varied types. To further improve the model's generalization abilities on previously unseen tasks, we propose a novel auxiliary task named Knowledge-enhanced Selective MLM (KSMLM), which mimics the behavior of MLM with explicit usage of prompts following the POv paradigm. After multi-task training is completed, the underlying PLM can be fine-tuned to fit any few-shot tasks using the same prompting paradigm.
+
+In the experiments, we verify the effectiveness of UPT over public NLP datasets of various tasks.
+
+Experimental results show that UPT consistently outperforms state-of-the-art approaches for prompt-based few-shot fine-tuning. In summary, we make the following major contributions:
+
+- We introduce the novel UPT framework to improve prompt-based fine-tuning for BERT-style models, which captures unified prompting semantics from multiple source tasks of various types for few-shot text classification on new target tasks.
+- In UPT, a new paradigm $POV$ is proposed for joint prompt tuning across different NLP tasks. We further design the self-supervised KSMLM task to improve the PLM's generalization abilities for accurate task adaptation.
+- Extensive experiments over various NLP datasets show that UPT consistently outperforms state-of-the-arts for prompt-based few-shot fine-tuning by a relatively large margin.
+
+# 2 UPT: The Proposed Framework
+
+We start with a brief overview of the UPT framework, followed by its detailed techniques.
+
+# 2.1 A Brief Overview of UPT
+
+For clarity, we introduce some basic notations. Let $\mathcal{D}^*$ be the $N$ -way- $K$ -shot training set of a target NLP task $\mathcal{T}^*$ . The underlying PLM is parameterized by $\Theta$ . The basic goal of few-shot learning is to obtain a high-performing model for $\mathcal{T}^*$ based on $\mathcal{D}^*$ , with parameters initialized from $\Theta$ . As the size of $\mathcal{D}^*$ is only $N\times K$ , the model performance would be highly limited. Here, we assume that there are $M$ other NLP tasks that are dissimilar to $\mathcal{T}^*$ , i.e., $\mathcal{T}^{(1)},\dots,\mathcal{T}^{(M)}$ , with their (usually non few-shot) training sets denoted as $\mathcal{D}^{(1)},\dots,\mathcal{D}^{(M)}$ ,
+
+respectively3. The UPT framework seeks to explore how to employ $\mathcal{D}^{(1)},\dots ,\mathcal{D}^{(M)}$ to enhance the performance of the PLM on a new task (such as $\mathcal{T}^*$ ) based on its own few-shot training set $\mathcal{D}^*$ .
+
+In UPT, the model is firstly trained over all the source tasks $\mathcal{T}^{(1)},\dots ,\mathcal{T}^{(M)}$ , aiming to learn the semantics of prompts and the general methodology of solving downstream tasks by prompting. After that, it is prompt-tuned over a specific target task $\mathcal{T}^*$ in the low-resourced scenario. To unify the learning process, each training sample $i$ in all different tasks (either $\mathcal{T}^{(1)},\dots ,\mathcal{T}^{(M)}$ or $\mathcal{T}^*$ ) is augmented in the same format, by means of the Prompt-Options-Verbalizer (POV) triple $(P_{i},O_{i},V_{i})$ . Here, $P_{i}$ is the prompt. $O_{i}$ is the expression containing all possible options of the masked language token appearing in the prompt $P_{i}$ (i.e., the collection of label words). $V_{i}$ is the verbalizer that maps the target token predicted by the MLM head of the PLM to the class label. Readers can also refer to the examples of supervised learning tasks in Figure 1.
+
+In addition, we observe that the diversity of label words in original labeled tasks $\mathcal{T}^{(1)},\dots ,\mathcal{T}^{(M)}$ is limited. For previously unseen tasks, the optimization of these tasks alone often leads to a poorly generalized model that is biased towards these tasks. Therefore, we further introduce the selfsupervised Knowledge-enhanced Selective MLM (KSMLM) $\tilde{\mathcal{T}}$ as an auxiliary task. Specifically, take the sentences from source tasks training data $\hat{\mathcal{D}} = \mathcal{D}^{(1)}\cup \mathcal{D}^{(2)}\cup \dots \cup \mathcal{D}^{(M)}$ as inputs. These sentences are selectively masked, with options generated by rich knowledge mined from a massive corpus. An example is also in Figure 1. Hence, the model has better generalization abilities and avoids catastrophic forgetting of pre-training knowledge.
+
+# 2.2 The Unified Prompting Paradigm
+
+A fundamental challenge for prompt-based training across $\mathcal{D}^{(1)},\dots ,\mathcal{D}^{(M)}$ for BERT-style models is that different NLP tasks have diverse sets of label words w.r.t. masked language tokens. When dealing with a mixture of training samples, a naive solution is to build a unified output prediction space, consisting of candidate label words from all tasks. However, the enlarged output space makes it chal
+
+lenging for the PLM to optimize. Additionally, the output prediction space may not cover the label words of all possible unseen NLP tasks.
+
+Here, we propose a unified prompting paradigm that augments each sample $i$ by a Prompt-Options-Verbalizer (POV) triple $(P_{i},O_{i},V_{i})$ . $P_{i}$ is the prompt that provides task guidance (in line with PET (Schick and Schütze, 2021a,b)). $O_{i}$ is a fixed expression that explicitly provides selection for the model over all its candidate label words4. To facilitate the fast adaptation of arbitrary tasks, the verbalizer $V_{i}$ maps the output of the masked language token to the entire vocabulary $\mathcal{V}$ . We can see that the options are crucial as they give strong indications on the possible outputs of the PLM (i.e., the candidates). Overall, the output probability $q(v|i,P_i,O_i,\Theta)$ of the token $v\in \mathcal{V}$ w.r.t. the training sample $i$ is computed as follows:
+
+$$
+q (v | i, P _ {i}, O _ {i}, \Theta) = \frac {\exp (s (v | i , P _ {i} , O _ {i} , \Theta))}{\sum_ {v ^ {\prime} \in \mathcal {V}} \exp (s (v ^ {\prime} | i , P _ {i} , O _ {i} , \Theta))}
+$$
+
+where $s(v|i, P_i, O_i, \Theta)$ is the un-normalized score of the MLM head (before the softmax function) for generating token $v$ at the position of the masked language token with $i$ , $P_i$ and $O_i$ as inputs. Denote the entire prediction vector (of the length $|\mathcal{V}|$ ) as $Q(\mathcal{V}|i, P_i, O_i, \Theta)$ . The multi-task prompting loss (denoted as $\mathcal{L}_{MP}$ ) can be written as follows:
+
+$$
+\mathcal {L} _ {M P} = - \sum_ {i \in \mathcal {D}} P (\mathcal {V} | i, P _ {i}, O _ {i}, \Theta) \cdot
+$$
+
+$$
+\log Q (\mathcal {V} | i, P _ {i}, O _ {i}, \Theta)
+$$
+
+where $\mathcal{D} = \bigcup_{k=1}^{M} \mathcal{D}^{(k)}$ , and $P(\mathcal{V}|i, P_i, O_i, \Theta)$ is the one-hot ground-truth prediction vector.
+
+In addition, we notice that $\mathcal{D}^{(1)},\dots ,\mathcal{D}^{(M)}$ can be arbitrary labeled datasets with varied sizes. Optimizing $\mathcal{L}_{MP}$ directly on their original datasets would make the few-shot learner more likely to be biased towards larger datasets. In our work, we do stratified sampling to form a batch where a training sample $i$ from $\mathcal{D}^{(1)},\dots ,\mathcal{D}^{(M)}$ is picked with the probability proportional to its own dataset size (denoted as $w_{i}$ ), i.e., $w_{i} = \frac{\log|\mathcal{D}^{(k)}| + \gamma}{M\cdot\gamma + \sum_{k^{\prime} = 1}^{M}\log|\mathcal{D}^{(k^{\prime})}|}$ where $\gamma >0$ is a smoothing factor and $i\in \mathcal{D}^{(k)}$ . Hence, we re-formulate $\mathcal{L}_{PT}$ as the weighted multi-task
+
+
+Figure 2: An illustrated example of the $POV$ generation process for the KSMLM task.
+
+prompting (WMP) loss $\mathcal{L}_{WMP}$ :
+
+$$
+\mathcal {L} _ {W M P} = - \sum_ {i \in \mathcal {D}} w _ {i} \cdot P (\mathcal {V} | i, P _ {i}, O _ {i}, \Theta) \cdot
+$$
+
+$$
+\log Q (\mathcal {V} | i, P _ {i}, O _ {i}, \Theta)
+$$
+
+# 2.3 Extending Unified Prompting to Self-supervised Learning
+
+One drawback of the above approach is that the diversity of label words in these supervised learning tasks is usually limited, covering a narrow spectrum of the vocabulary $\mathcal{V}$ . The model would not be well generalized for tasks with new label words. Hence, we leverage the idea of MLM pre-training, formulated by the $POV$ paradigm.
+
+As a naive approach, given a sentence, we can randomly mask a word and generate the options of the correct and a randomly selected word, and then ask the model to make the prediction. Unfortunately, the seemingly feasible approach may ruin the training process, because not all words are suitable label words. For example, stop words and a large number of verbs and adverbs have not been used in any verbalizers in downstream tasks. The alternatives used in options should be reasonable, in order to make the model learn truly useful knowledge. To address the issue, we present the self-supervised KSMLM task, with an example shown in Figure 2. In the following, we describe the $POV$ construction process for KSMLM. After that, the loss function of the task is given.
+
+P-Generation. This process aims to generate a template with a [MASK] token for each sentence, which is fixed to be "It is [MASK]." during the
+
+multi-task training stage. In the task-specific finetuning stage, we follow LM-BFF (Gao et al., 2021) to automatically generate templates for each task. During training, the PLM is asked to predict the actual word of the masked position.
+
+O-Generation. From Gao et al. (2021), we can see that most label words for language understanding tasks are adjectives5 (such as "great" and "terrible" for sentiment analysis). Thus in our work, we detect all adjectives in the corpus by part-of-speech tagging models6 and filter out low-frequency adjectives. The adjectives are then clustered by K-Means, with their token representations generated from the underlying PLM as features. Formally, We construct a knowledge repository named Options Knowledge Repository (OKR), in the form of triples $\mathcal{R} = \{(v,\vec{v},c_v)\}$ , where $v$ is a candidate label word. $\vec{v}$ and $c_{v}$ denote the representation vector and the cluster membership of $v$ , respectively. The cluster centroids are also stored. We do not use existing lexicons such as WordNet (Miller, 1995) because they may have limited coverage of label words. Additionally, the automatic process enables the extension of our algorithm to arbitrary languages and domains.
+
+With the availability of $\mathcal{R}$ , we can generate knowledge-induced options. Given a sentence with the masked word as $v$ , we query $v$ against $\mathcal{R}$ for the most dissimilar cluster w.r.t. $v$ , denoted as $\tilde{c}_v$ , where the cosine similarity of the vector representation $\vec{v}$ and the cluster centroid is employed as the similarity measure. Finally, we randomly select one adjective from $\tilde{c}_v$ as the alternative label word to generate the knowledge-induced options. The text expressions of options is fixed, i.e., "Is it [x1] or [x2]?" Readers can further refer to the example in Figure 2.
+
+V-Generation. For verbalizers, we map the true and the generated label words in the options to two classes, namely Class: Correct and Class: Incorrect. For instance, the verbalizers of the sample sentence in Figure 2 are:
+
+It is "effective". $\rightarrow$ "Class: Correct"
+
+It is "ineffective". $\rightarrow$ "Class: Incorrect"
+
+Loss Function. The KSMLM loss is significantly different from the auxiliary MLM loss used
+
+in Schick and Schütze (2021a,b). In $\tilde{\mathcal{D}}$ , each training sample $i$ can be directly extended to the training example for KSMLM by POV construction process with exactly one masked token, the knowledge-induced options $O_{i}$ and the prompt $P_{i}$ . The PLM is trained to predict the correct masked word in the sentence, with the loss function: $\mathcal{L}_{KSMLM} = -\sum_{i\in \tilde{\mathcal{D}}}P(\mathcal{V}|i,P_i,O_i,\Theta)\log Q(\mathcal{V}|i,P_i,O_i,\Theta)$ . Overall, the loss function of UPT $\mathcal{L}$ is defined as the summation of the WMP and KSMLM loss:
+
+$$
+\mathcal {L} = \mathcal {L} _ {W M P} + \lambda \cdot \mathcal {L} _ {K S M L M}
+$$
+
+where $\lambda \geq 0$ is the balancing hyper-parameter.
+
+Discussion. To our knowledge, external knowledge has also been applied to other prompt-based methods, such as KPT (Hu et al., 2021). The major difference between KPT and ours is that UPT uses the knowledge for options creation of the self-supervised task KSMLM that we proposed, in order to improve the model generalization abilities for accurate adaptation on new tasks. In contrast, previous works consider the expansion of verbalizers for specific downstream NLP tasks.
+
+# 2.4 Few-shot Fine-tuning
+
+For a specific downstream task $\mathcal{T}^*$ , the samples in the target few-shot training set $\mathcal{D}^*$ can be processed and computed in the same way as those supervised tasks used during UPT. The learning consistency in the two stages ensures that the underlying PLM has already acquired prompting knowledge for $\mathcal{T}^*$ . In addition, one can prompt-tune a single PLM over various tasks and uses it to fine-tune over any target tasks, making it computationally efficient to produce models for these applications.
+
+# 3 Experiments
+
+# 3.1 Experimental Settings
+
+In the experiments, we employ 9 public text classification datasets to evaluate the proposed UPT framework, which are divided into three groups: sentiment analysis (Sentiment) (SST-2 (Socher et al., 2013), MR (Hu and Liu, 2004), CR (Pang and Lee, 2005)), Natural Language Inference (NLI) (MNLI (Williams et al., 2018), SNLI (Bowman et al., 2015), QNLI (Wang et al., 2019b), RTE (Dogan et al., 2005)) and Paraphrase (MRPC (Dolan and Brockett, 2005), $\mathrm{QQP^7}$ ). The data statistics are shown in the Appendix. In default, $K = 16$ (training instances per class).
+
+As mentioned above, during UPT, we only leverage full training data from all dissimilar task groups, and then prompt-tune the model on the target task in the low-resourced setting. For example, when the target task is SST-2, the training data during UPT is from NLI and Paraphrase. The underlying PLM is the RoBERTa-large model (with 335M parameters) (Liu et al., 2019), unless otherwise specified. The baselines include standard finetuning, and four recently proposed few-shot learning algorithms: PET (Schick and Schütze, 2021a)8, LM-BFF (Gao et al., 2021)9, P-tuning (Liu et al., 2021c)10 and PPT (Gu et al., 2021). To make a fair comparison with these single-task baselines, a variant of our approach (denoted as UPT-Single) is also implemented by only fine-tuning over the few-shot target task based on $POV$ without the usage of dissimilar supervised source datasets.
+
+As we use other dissimilar datasets to train our model, we also include two multi-task methods that are meta-tuned using the same dissimilar datasets as strong baselines, namely MT (Zero-shot) and MT (Few-shot) (Zhong et al., 2021a) $^{11}$ . We also implement the zero-shot version of UPT, denote as UPT (Zero-shot). In addition, given a supervised NLP task, multiple prompts can be manually crafted. By augmenting one training sample with these prompts, we can automatically realize self-ensemble learning. For the self-ensemble version of UPT, we employ five different prompts. For each input sample, we randomly select one expression of options and one set of verbalizers. We denote this method as UPT-SE. The designed prompts, options, and verbalizers are listed in the Appendix. All the results of these models are evaluated in terms of averaged accuracy and its standard deviation, over 5 random seeds.
+
+Our UPT framework is implemented in PyTorch
+
+8https://github.com/timoschick/pet
+
+For a fair comparison with other approaches, we train the underlying models by LM-BFF with manual-compiled prompts without demonstration learning. URL: https://github.com/princeton-nlp/LM-BFF
+
+$^{10}$ https://github.com/THUDM/P-tuning
+
+In Zhong et al. (2021a), the authors only conduct zero-shot learning using larger PLMs. To make their work comparable to ours, we re-implement their algorithm over the Roberta model on our datasets under two settings. MT (Zero-shot) refers to the model tuned only using dissimilar full datasets. MT (Few-shot) further tunes the entire model over the target few-shot training set based on the prompts. Note that a few contemporaneous works (such as Wei et al. (2021)) also consider multi-task zero-shot learning. Because the settings and model scales are significantly different from ours, they are not directly comparable.
+
+
Paradigm
Method
Group 1: Sentiment.
Group 2: NLI.
Group 3: Paraphrase.
SST-2
MR
CR
MNLI
SNLI
QNLI
RTE
MRPC
QQP
Avg.
Single-task methods w/o. the usage of dissimilar datasets (K=16)
FT
Fine-tuning
81.1 ±4.1
78.2±5.4
75.4±3.3
45.8±6.0
48.4±4.8
60.9±5.8
54.0±6.1
74.4±2.5
61.0±4.1
64.4±4.7
PT
PET
91.8±1.3
86.4±2.9
90.5±1.9
58.4±2.2
59.4±2.9
61.3±1.8
65.7±2.0
74.5±1.6
67.6±3.1
72.8±2.2
LM-BFF
92.0±1.7
87.4±0.7
90.8±1.0
65.2±2.6
71.7±4.9
69.1±2.8
69.5±2.0
74.2±2.3
63.5±1.2
75.9±2.4
P-Tuning
92.6±1.6
87.0±1.2
91.7±1.4
62.4±2.3
70.2±2.1
68.8±3.5
70.8±2.5
73.4±1.9
67.6±0.8
76.0±1.6
PPT
92.3±0.5
87.1±1.6
90.9±1.3
64.9±2.0
71.4±1.5
68.8±2.9
67.9±2.6
74.8±2.1
67.2±1.2
76.1±1.8
UPT-Single
92.9±1.0
87.7±1.5
91.8±0.7
65.6±1.4
71.2±2.3
70.1±1.6
68.9±1.7
75.1±0.9
72.1±2.0
77.2±1.5
Multi-task methods w. the usage of dissimilar datasets (K=16)
PT
MT(Zero-shot)
58.7±1.6
59.0±3.6
58.9±2.8
36.3±3.3
39.2±3.2
40.9±2.5
54.9±1.4
70.6±2.6
42.8±2.5
51.3±2.2
MT(Few-shot)
92.1±1.4
86.5±1.3
91.0±2.2
69.6±1.1
67.1±2.7
68.9±2.3
68.6±1.2
71.0±1.4
74.8±2.1
76.7±1.7
UPT(Zero-shot)
74.5±1.2
73.9±1.3
72.4±1.4
43.7±2.0
46.0±2.1
53.9±1.9
57.1±1.0
70.7±0.9
56.5±1.3
61.0±1.5
UPT
93.5±0.6
88.1±0.9
91.4±1.2
70.1±1.4
68.2±1.2
69.9±1.5
73.5±1.5
77.0±1.1
78.8±1.7
78.9±1.4
UPT-SE
93.1±0.4
88.4±0.9
92.1±1.0
71.4±1.1
73.6±0.6
70.5±1.6
75.8±0.8
76.2±0.4
79.6±1.3
80.1±1.1
+
+Table 1: Comparison between UPT and baselines over all testing sets in terms of accuracy (%) and standard deviation. "FT" and "PT" refer to the fine-tuning and prompt-based fine-tuning paradigm, respectively. The methods in bold refer to our approach and its variants. The scores of baselines are re-produced using their open-source codes.
+
+
BERT Scale
SST-2
MR
CR
Avg.
Base
82.6+3.8
71.1+9.3
78.1+8.9
77.2+7.3
Medium
68.0+3.0
63.4+4.2
70.2+6.1
67.2+4.4
Small
66.3+3.7
58.1+4.6
68.2+5.5
64.2+4.6
Mini
58.8+3.1
59.4+7.6
65.8+7.5
61.3+6.1
Tiny
54.2+3.8
54.0+1.3
54.4+5.2
54.2+3.4
+
+Table 2: Results of model scale analysis. We report the accuracy $(\%)$ of UPT based on BERT with other scales, and relative improvements, compared to the models w/o. prompt learning over dissimilar datasets.
+
+and run with NVIDIA V100 GPUs. Specifically, we train our model with the Adam optimizer. The learning rate for all training stages is fixed to be 1e-5. We set the default hyper-parameters as $\gamma = 0.001$ and $\lambda = 0.1$ , which are also tuned over the development sets. The parameter regularizers are the same as in Gao et al. (2021).
+
+# 3.2 Main Results
+
+In Table 1, we report the general experimental results of UPT and all the baselines. The results show that: 1) Prompt-learning based methods (i.e., PET (Schick and Schütze, 2021a), LM-BFF (Gao et al., 2021), P-tuning (Liu et al., 2021c) and PPT (Gu et al., 2021)) have large improvements over standard fine-tuning. 2) UPT-Single outperforms previous few-shot learning models in average, which indicates that the utilization of $POV$ is better than vanilla prompts (Schick and Schütze, 2021a). 3) UPT (both the vanilla and the ensemble version) consistently outperforms all baselines on all tasks, which demonstrates that our framework possesses better generalization by learning
+
+from dissimilar groups of tasks $^{12}$ . 4) MT (Zero-shot) (Zhong et al., 2021a) and UPT (Zero-shot) do not yields satisfactory results on BERT-style models. Different from ultra-large models, we suggest that few-shot prompt-tuning is necessary for BERT-style models to produce good results over these tasks. 5) By comparing UPT against MT (Few-shot), we can see that the proposed $POV$ paradigm and the self-supervised KSMLM task are more effective for few-shot learning. 6) Generally, UPT-SE improves the averaged accuracy on all tasks by $1.2\%$ than UPT. It means that self-ensemble learning can enhance model generalization, but the improvement is not consistent across all tasks. A possible cause is that some prompts and options are not optimal for the target task.
+
+
+Figure 3: Parameter analysis w.r.t. hyper-parameter $\lambda$ .
+
+
+
+# 3.3 Model Analysis
+
+Parameter Analysis. We conduct parameter analysis to investigate the best choice of the balance coefficient $\lambda$ . Results over SST-2 and RTE are shown in Figure 3. We have the best performance when $\lambda = 0.1$ , which indicates that our proposed UPT
+
+
Method/Group
Group 1
Group 2
Group 3
MT (Few-shot)
89.9
68.6
72.9
UPT
91.0
70.2
77.9
w/o. POV
90.2
68.9
74.2
w/o. KSMLM
90.9
69.1
73.7
w/o. POV&KSMLM
89.6
68.7
73.5
w/o. OKR
90.7
69.9
76.8
+
+Table 3: Ablation study in terms of accuracy $(\%)$ . Standard deviations are omitted here to save space.
+
+possess generalization when it is jointly trained over the self-supervised KSMLM task. We also observe that the performance decreases when $\lambda$ becomes larger. This means KSMLM is a suitable regularization task, but also may introduce a lot of prompts and options that are irrelevant to downstream tasks. This opens up new opportunities for model improvement.
+
+Ablation Study. To clearly verify the contributions of each component in UPT, we conduct an ablation study over all groups and report the mean accuracy. As shown in Table 3, w/o. $POV$ denotes the method with manually designed prompts without the usage of any options. w/o. KSMLM equals the setting with $\lambda = 0$ , which is the same as UPT-Single. w/o. OKR means that we randomly choose the alternative label words in the options without knowledge guidance when we optimize the KSMLM task. w/o. $POV$ & KSMLM denotes the method without any options and the auxiliary KSMLM task. The results show that no matter which module is removed, the model performance is affected. Particularly, when we remove both $POV$ and KSMLM, the performance is decreased by $1.4\%$ , $1.5\%$ , $4.4\%$ , respectively. The accuracy values of this setting are lower than w/o. $POV$ and w/o. KSMLM, which suggests that both two components highly contribute to the high performance of our framework. We also find that w/o. $POV$ or w/o. KSMLM both outperform MT (Few-shot) over all groups. Additionally, we find that if we use KSMLM but remove OKR, the results decrease over all these tasks, but are still higher than w/o. KSMLM. It means that the options knowledge that we mine from the corpus is suitable for the self-supervised learning task.
+
+Sample Efficiency. We further explore the model effects with different numbers of training samples per class $(K)$ from 16 to 512. We also use standard fine-tuning as the reference. As shown in Figure 4, each point refers to the averaged score across 5 randomly sampled datasets. We observe that our UPT consistently achieves higher scores regardless of
+
+
+Figure 4: Results of sample efficiency analysis. We compare UPT with standard fine-tuning with different numbers of training samples $K$ over two tasks.
+
+
+
+the number of training samples. In addition, the variance of UPT is lower than fine-tuning, meaning that the stability of our method is better. This is different from other prompt-based methods (Schick and Schütze, 2021a,b; Gao et al., 2021).
+
+Model Scale Analysis. To further show that UPT can improve the model performance regardless of the scales, we regard multiple small-scale BERT as model backbones $^{13}$ . Due to space limitations, we only illustrate the results in Table 2 over SST-2, MR, and CR. To make a fair comparison, we also test the performance without the usage of dissimilar NLP datasets and show the relative improvements. The results demonstrate that the model scale plays an important role in the ability of model generalization. We also find that UPT that uses dissimilar datasets can highly improve the effectiveness, especially on small-scale PLMs. Therefore, our method is better suitable for producing high-performing small PLMs for online applications.
+
+Adaptation Efficiency of Task Groups. Because we focus on multi-task training before prompt-tuning over the target task in low-resourced settings. Therefore, it is worth exploring which/how many groups of tasks have a better effect on the adaptation improvement. Specifically, when given a target task (e.g., MNLI), we only choose one group of tasks (e.g., MRPC and QQP of Group 3 (Paraphrase)) for multi-task prompt-tuning, and then fine-tune the model on the target task. As shown in Figure 5, the cell in the $i$ -th row and $j$ -th column denotes the relative improvement from single-task learning over the $j$ -th task to the setting where the $i$ -th group is added for multi-task prompt learning. For visualization, we normalize the values of each column to show the percentage of influence of each group. The results show that the performance of a target task improves the most when we add data
+
+
+Figure 5: Adaptation efficiency between task groups. The shade of color indicates the degree of adaptation.
+
+
+Figure 6: Adaptation efficiency between the different numbers of NLI tasks $(M)$ and each target task from Sentiment and Paraphrase.
+
+samples from other datasets within the same task group. However, in low-resourced scenarios, similar datasets are not available. By using UPT, we can even transfer the knowledge from datasets from dissimilar tasks to the target task.
+
+Specifically, taking NLI as the source group, we randomly choose $M$ dataset(s) from the group as our source tasks and then prompt-tune the model on each target task. The results from Figure 6 demonstrate that the accuracy is further improved when we increase the value $M$ . We also find that the improvements over MRPC and QQP are more obvious. We suggest that NLI is easier to be adapted to paraphrase tasks because they both model the relations between sentence pairs.
+
+# 4 Related Work
+
+Pre-trained Language Models. Recently, benefited from the powerful modeling abilities of PLMs and computational resources, we have witnessed the qualitative improvement of multiple NLP tasks (Qiu et al., 2020; Han et al., 2021a). For examples, the large GPT model series (Radford et al., 2019; Brown et al., 2020) utilizes multi-layer transformer decoders to capture left-to-right semantics of natural languages. BERT (Devlin
+
+et al., 2019) focuses on the learning of bidirectional contextual representations. Other notable PLMs include Transformer-XL (Dai et al., 2019), ELMo (Peters et al., 2018), RoBERTa (Liu et al., 2019), AlBERT (Lan et al., 2020), XLNet (Yang et al., 2019), StructBERT (Wang et al., 2019d), T5 (Raffel et al., 2020), etc. As model architecture is not the focus of our work, we do not elaborate.
+
+Prompt-based Learning. Fine-tuning PLMs directly by learning the [CLS] head may perform poorly with few training samples (Liu et al., 2021a). Recently, the huge GPT-3 model (Brown et al., 2020) has been proposed to enable in-context learning, which introduces handcrafted prompts and demonstrations. Schick and Schütze (2021a) apply handcrafted prompts to prompt-based fine-tuning for BERT-style models. To facilitate the automatic prompt generation, Gao et al. (2021) present LM-BFF to generate discrete templates (Raffel et al., 2020). Other works (Shin et al., 2020; Han et al., 2021b; Scao and Rush, 2021; Utama et al., 2021) mine prompts from the training corpus based on heuristic rules/semantic relations. However, these methods are time-consuming for mining optimized prompts for target tasks. A series of methods are proposed to learn continuous/soft prompt embeddings, such as P-tuning (Liu et al., 2021c), P-tuning-V2 (Liu et al., 2021b), OptiPrompt (Zhong et al., 2021b), Prefix-tuning (Li and Liang, 2021). Zhao and Schütze (2021); Gu et al. (2021) focus on the hybrid training with both discrete and continuous prompts. Hu et al. (2021) consider the automatic expansion of label words and presents Knowledgeable Prompt-tuning (KPT) to utilize knowledge for the construction of verbalizers. Sun et al. (2021) and Wang et al. (2021b) prompt the PLMs to make language inference in zero-shot learning. In addition, Wang et al. (2021a); Vu et al. (2021) consider transfer learning on continuous prompt-tuning. Li et al. (2021); Chen et al. (2021); Ma et al. (2021) focus on prompts for specific NLP tasks, such as sentiment analysis and information extraction.
+
+Recently, Wei et al. (2021); Zhong et al. (2021a); Min et al. (2021); Mishra et al. (2021) tune PLMs on mixed data samples drawn from different NLP tasks with manually designed task-specific prompts. The resulting PLMs are then utilized to solve unseen tasks by zero-shot learning. These methods successfully work for large PLMs such as GPT-3 (Brown et al., 2020) and T5 (Raffel et al., 2020), but consume a large amount of computation re
+
+sources. We further leverage data from non-target NLP tasks to make prompt-tuned PLMs have better capacities of adapting to unseen NLP tasks.
+
+# 5 Conclusion and Future Work
+
+In this paper, we present the Unified Prompt Tuning framework (UPT) that enables better few-shot text classification for BERT-style models by explicitly capturing prompting semantics from non-target datasets. We introduce a novel $POV$ paradigm to unify the task format, and then extend the unified prompting to the self-supervised learning with the knowledge-enhanced selective MLM task. Experiments show that UPT consistently outperforms state-of-the-arts for prompt-based fine-tuning over multiple text classification scenarios. As for future work, we seek to extend UPT to other tasks such as named entity recognition, text generation, and machine translation. In addition, we will explore continuous prompt-tuning for UPT.
+
+# Limitations
+
+Our work focuses on the prompt-based fine-tuning for text classification. It is also possible to extend our work to other NLP task (such as question answering, relation extraction, text generation, etc.) which will be addressed in the future work.
+
+# Ethical Considerations
+
+Our contribution in this work is fully methodological, namely a unified prompt tuning (UPT) to boost the prompt-tuned PLMs. Hence, there are no direct negative social impacts of this contribution. However, as PLMs may have some negative impacts, such as the existence of social and gender biases, the tuned models produced by our UPT would unavoidably suffer from these issues. We suggest that users should carefully deal with the potential risks before deploying the models online.
+
+# Acknowledgments
+
+This work has been supported by the National Natural Science Foundation of China under Grant No. U1911203, Alibaba Group through the Alibaba Innovation Research Program, the National Natural Science Foundation of China under Grant No. 61877018, the Research Project of Shanghai Science and Technology Commission (20dz2260300) and the Fundamental Research Funds for the Central Universities.
+
+# References
+
+Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP, pages 632-642.
+Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In NeurIPS.
+Xiang Chen, Ningyu Zhang, Xin Xie, Shumin Deng, Yunzhi Yao, Chuanqi Tan, Fei Huang, Luo Si, and Huajun Chen. 2021. Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. CoRR, abs/2104.07650.
+Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In MLCW, volume 3944 of Lecture Notes in Computer Science, pages 177-190.
+Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc Viet Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. In ACL, pages 2978-2988.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *NAACL*, pages 4171–4186.
+William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In IWP@IJCNLP 2005.
+Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In ACL, pages 3816-3830.
+Yuxian Gu, Xu Han, Zhiyuan Liu, and Minlie Huang. 2021. PPT: pre-trained prompt tuning for few-shot learning. CoRR, abs/2109.04332.
+Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Liang Zhang, Wentao Han, Minlie Huang, Qin Jin, Yanyan Lan, Yang Liu, Zhiyuan Liu, Zhiwu Lu, Xipeng Qiu, Ruihua Song, Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, and Jun Zhu. 2021a. Pre-trained models: Past, present and future. CoRR, abs/2106.07139.
+Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2021b. PTR: prompt tuning with rules for text classification. CoRR, abs/2105.11259.
+
+Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In KDD 2004, pages 168-177.
+Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Juanzi Li, and Maosong Sun. 2021. Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification. CoRR, abs/2108.02035.
+Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In ICLR.
+Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. CoRR, abs/2104.08691.
+Chengxi Li, Feiyu Gao, Jiajun Bu, Lu Xu, Xiang Chen, Yu Gu, Zirui Shao, Qi Zheng, Ningyu Zhang, Yongpan Wang, and Zhi Yu. 2021. Sentiprompt: Sentiment knowledge enhanced prompt-tuning for aspect-based sentiment analysis. CoRR, abs/2109.08306.
+Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In ACL, pages 4582-4597.
+Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021a. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. CoRR, abs/2107.13586.
+Xiao Liu, Kaixuan Ji, Yicheng Fu, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2021b. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. CoRR, abs/2110.07602.
+Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021c. GPT understands, too. CoRR, abs/2103.10385.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
+Ruotian Ma, Xin Zhou, Tao Gui, Yiding Tan, Qi Zhang, and Xuanjing Huang. 2021. Template-free prompt tuning for few-shot NER. CoRR, abs/2109.13532.
+George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39-41.
+Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2021. Metaicl: Learning to learn in context. CoRR, abs/2110.15943.
+Swaroop Mishra, Daniel Khashabi, Chitta Baral, Yejin Choi, and Hannaneh Hajishirzi. 2021. Reframing instructional prompts to gptk's language. CoRR, abs/2109.07830.
+
+Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL, pages 115-124.
+Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL, pages 2227-2237.
+Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey. CoRR, abs/2003.08271.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1-140:67.
+Toven Le Scao and Alexander M. Rush. 2021. How many data points is a prompt worth? In *NAACL*, pages 2627-2636.
+Timo Schick and Hinrich Schütze. 2021a. Exploiting cloze-questions for few-shot text classification and natural language inference. In EACL, pages 255–269.
+Timo Schick and Hinrich Schütze. 2021b. It's not just size that matters: Small language models are also few-shot learners. In *NAACL*, pages 2339-2352.
+Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In EMNLP, pages 4222-4235.
+Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, pages 1631-1642.
+Yi Sun, Yu Zheng, Chao Hao, and Hangping Qiu. 2021. NSP-BERT: A prompt-based zero-shot learner through an original pre-training task next sentence prediction. CoRR, abs/2109.03564.
+Prasetya Ajie Utama, Nafise Sadat Moosavi, Victor Sanh, and Iryna Gurevych. 2021. Avoiding inference heuristics in few-shot prompt-based finetuning. CoRR, abs/2109.04144.
+Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, and Daniel Cer. 2021. Spot: Better frozen model adaptation through soft prompt transfer. CoRR, abs/2110.07904.
+
+Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. Superglue: A stickier benchmark for general-purpose language understanding systems. In NeurIPS, pages 3261-3275.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In ICLR.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019c. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In ICLR.
+Chengyu Wang, Minghui Qiu, Taolin Zhang, Tingting Liu, Lei Li, Jianing Wang, Ming Wang, Jun Huang, and Wei Lin. 2022. Easynlp: A comprehensive and easy-to-use toolkit for natural language processing. CoRR, abs/2205.00258.
+Chengyu Wang, Jianing Wang, Minghui Qiu, Jun Huang, and Ming Gao. 2021a. Transprompt: Towards an automatic transferable prompting framework for few-shot text classification. In EMNLP, pages 2792-2802.
+Sinong Wang, Han Fang, Madian Khabsa, Hanzi Mao, and Hao Ma. 2021b. Entailment as few-shot learner. CoRR, abs/2104.14690.
+Wei Wang, Bin Bi, Ming Yan, Chen Wu, Zuyi Bao, Jiangnan Xia, Liwei Peng, and Luo Si. 2019d. Structbert: incorporating language structures into pretraining for deep language understanding. arXiv preprint arXiv:1908.04577.
+Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2021. Finetuned language models are zero-shot learners. CoRR, abs/2109.01652.
+Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In *NAACL*, pages 1112–1122.
+Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS, pages 5754-5764.
+Mengjie Zhao and Hinrich Schütze. 2021. Discrete and soft prompting for multilingual models. In EMNLP, pages 8547-8555.
+Ruiqi Zhong, Kristy Lee, Zheng Zhang, and Dan Klein. 2021a. Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections. In EMNLP, pages 2856-2878.
+Zexuan Zhong, Dan Friedman, and Danqi Chen. 2021b. Factual probing is [MASK]: learning vs. learning to recall. In *NAACL*, pages 5017-5033.
+
+# A Dataset Statistics
+
+In the main experiments, we employ 9 different NLP datasets for evaluation. As shown in Table 4, we divided all datasets into three groups, i.e., Sentiment, NLI, and Paraphrase. During multi-task training, we select two groups of tasks with full training data for $POV$ prompt-tuning with the auxiliary KSMLM objective. After that, we prompt-tune the model over the target task in the few-shot learning setting. The corresponding group of the target task is unseen during multi-task training.
+
+# B POV Examples
+
+As shown in Table 5, we list the designed $POV_{s}$ for all the tasks. Note that for each task group, the options are the same, but verbalizers of these tasks may be different. For example, SST-2, MR, and CR have the same schema of options, but with different verbalizers.
+
+# C Detailed Experiments for the KSMLM Task
+
+We further conduct experiments over each group to evaluate the effectiveness of different settings in KSMLM. The baselines for comparison include:
+
+- UPT w/o. KSMLM: It means the training process on source tasks without the KSMLM learning objective before prompt-tuning over the target task.
+- MLM: It means that we directly train the vanilla MLM based on the full training data from source tasks.
+- KSMLM (w/o. OKR): It means that we randomly select options without the K-Means algorithm and the knowledge-guided options construction process.
+- KSMLM (w/o. Options): It means that we directly remove the options in $POV$ .
+- KSMLM (w/o. Verbalizer): It means that the prediction search space at each masked position is the whole BERT vocabulary rather than the designed limited collection of label words (expressed by options).
+
+As shown in Table 7, we follow the same settings with the ablation study in Table 3 to report the mean accuracy values of each group. We can draw the following conclusions: 1) Compared to
+
+
Group
Category
Task
#Training
#Testing
N
Class Labels
Group 1: Sentiment
Single Sentence
SST-2
6,920
872
2
positive, negative
MR
8,662
2,000
2
positive, negative
CR
1,775
2,000
2
positive, negative
Group 2: NLI
Sentence Pair
MNLI
392,702
9,815
3
entailment, neutral, contradiction
SNLI
549,367
9,842
3
entailment, neutral, contradiction
QNLI
104,743
5,463
2
entailment, not entailment
RTE
2,490
277
2
entailment, not entailment
Group 3: Paraphrase
Sentence Pair
MRPC
3,668
408
2
equivalent, not equivalent
QQP
363,846
40,431
2
equivalent, not equivalent
+
+Table 4: Dataset statistics. We only sample $N \times K$ instances from the original training sets to form the few-shot training and development sets. The testing sets used in the experiments are full datasets.
+
+
Task
Prompt
Option
Label word
SST-2
Template 1: [<s1>]. It was [MASK].Template 2: [<s1>. I thought it was [MASK].Template 3: [<s1>. It is [MASK].Template 4: [<s1>. The review is [MASK].Template 5: [<s1>. A [MASK] one.
Option 1: Is <x1> or <x2>?Option 2: Does <x1> or <x2>?Option 3: <x1> or <x2>?
Template 1: [<s1>. It was [MASK].Template 2: [<s1>. A [MASK] piece of work templates 3: [<s1>. It is [MASK].Template 4: [<s1>. The film is [MASK].Template 5: [<s1>. A really [MASK] movie.
Option 1: Is <x1> or <x2>?Option 2: Does <x1> or <x2>?Option 3: <x1> or <x2>?
Template 1: [<s1>. It was [MASK].Template 2: [<s1>. It looks [MASK].Template 3: [<s1>. It is [MASK].Template 4: [<s1>. The quality is [MASK].Template 5: [<s1>. I thought it was [MASK].
Option 1: Is <x1> or <x2>?Option 2: Does <x1> or <x2>?Option 3: <x1> or <x2>?
Template 1: [<s1>. You are right, [MASK]. [<s2>].Template 2: [<s1>. It was [MASK]. [<s2>].Template 3: [<s1>. [<s2>]. It is [MASK].Template 4: [<s1>. It is true that [MASK], [<s2>].Template 5: [<s1>. [MASK]. Then, [<s2>].
Option 1: Is <x1> or <x2> or <x3>?Option 2: Based on the paragraph above, is the following <x1> or <x2> or <x3>?
Template 1: [<s1>. [MASK], no, [<s2>].Template 2: [<s1>. [MASK], in this case, [<s2>].Template 3: [<s1>. [MASK], I think, [<s2>].Template 4: [<s1>. It was [MASK].Template 5: [<s1>. [MASK], [<s2>].
Option 1: Is <x1> or <x2> or <x3>?Option 2: Based on the paragraph above, is the following <x1> or <x2> or <x3>?
Template 1: Question: [<s1]? [<s2>. The answer: [MASK].Template 2: Question: [<s1]? [<s2>. [MASK]. Template 3: Question: [<s1]? [MASK], Yes, [<s2>. Template 4: [<s1]? [MASK], it is known that [<s2>].Template 5: [<s1]? [MASK]. Then, [<s2>].
Option 1: Is <x1> or <x2>?Option 2: Based on the question, is the following <x1> or <x2>?Option 3: Is the answer <x1> or <x2>?
Verbalizer 1: Entailment (Yes), Not Entailment (No)Verbalizer 2: Entailment (Okay), Not Entailment (Nonetheless)Verbalizer 3: Entailment (Notably), Not Entailment (Yet)
RTE
Template 1: [<s1>. The answer: [MASK].Template 2: [<s1>. [<s2>. [MASK].Template 3: [<s1>. [MASK], I think, [<s2>].Template 4: [<s1>. The question: [<s2]? It is [MASK].Template 5: [<s1>. [MASK]. I believe, [<s2>].
Option 1: Is <x1> or <x2>?Option 2: Based on the question, the answer is <x1> or <x2>?Option 3: Is the answer <x1> or <x2>?
Verbalizer 1: Entailment (So), Not Entailment (Meanwhile)Verbalizer 2: Entailment (Yes), Not Entailment (No)Verbalizer 3: Entailment (Notably), Not Entailment (Yet)
MRPC
Template 1: [<s1>. The answer: [MASK].Template 2: [<s1>. [<s2>. [MASK].Template 3: [<s1>. [MASK], however, [<s2>].Template 4: [<s1>. [s2>. In fact [MASK].Template 5: [<s1>. [MASK]. that's right, [<s2>].
Option 1: Is <x1> or <x2>?Option 2: Are two questions <x1> or <x2>?Option 3: <x1> or <x2>?
+
+Table 5: The Prompts, Options and Verbalizers (POV) for each task. $\langle s1\rangle$ and $\langle s2\rangle$ denote the input sentences. $\langle x1\rangle ,\langle x2\rangle$ and $\langle x3\rangle$ denote the label words.
+
+
Paradigm
Method
AX-b
AX-g
BoolQ
CB
SST-5
TREC
Subj
Avg.
FT
Fine-tuning
47.51±1.8
60.83±2.0
65.96±1.3
73.21±1.3
40.10±3.4
59.30±1.8
73.00±2.0
59.99
PT
PET
60.28±1.2
64.08±0.8
70.54±1.6
82.09±2.0
44.10±1.7
84.90±1.9
89.30±1.0
70.76
LM-BFF
61.53±1.4
63.89±1.9
71.30±2.1
82.14±2.6
46.10±1.3
84.80±1.5
89.25±1.0
71.29
P-Tuning
62.23±0.8
63.19±1.2
72.88±0.9
83.08±1.8
48.20±1.5
85.10±1.9
89.35±1.1
72.00
UPT
64.25±1.2
69.44±1.4
74.06±1.6
83.92±0.9
48.35±1.0
85.90±0.8
90.15±1.2
73.72
+
+Table 6: Additional experiments for comparison between UPT and baselines over all testing sets in terms of accuracy (\%) and standard deviation.
+
+
Method/Group
Group 1
Group 2
Group 3
UPT
91.0
70.2
77.9
UPT w/o. KSMLM
90.9
69.1
73.7
MLM
87.1
67.4
72.0
KSMLM (w/o. OKR)
90.7
69.9
76.8
KSMLM (w/o. Options)
90.1
68.2
76.3
KSMLM (w/o. Verbalizer)
85.0
62.4
66.7
+
+Table 7: The ablation analysis of the KSMLM task in terms of accuracy (\%).
+
+
Paradigm/Task
SST-2
MR
CR
Avg.
POV
92.9
87.7
91.8
90.8
Multiple-choice
82.7
73.9
80.9
79.2
Yes/No
92.6
87.9
91.6
90.7
+
+Table 8: The comparison between different paradigms in terms of accuracy $(\%)$ .
+
+vanilla MLM, the results indicate that KSMLM is an irreplaceable task for the improvement of the model generalization power. 2) We also find that if we ignore the verbalizer construction, the results decrease to a large degree, and lower than UPT w/o. KSMLM. It means that verbalizers are crucial for template-based prompt-tuning. 3) When OKR or options are removed, the results also decline, indicating the effectiveness of these techniques.
+
+# D Comparing $POV$ with Other Paradigms
+
+To compare the proposed $POV$ paradigm with other paradigms, we perform experiments over SST-2, MR, and CR tasks. The alternative paradigms are as follows:
+
+- Multiple-choice. It is a unified template to list all the candidate results. For example, an input sentence can be "The Disney cartoons are very interesting for children to enrich their extracurricular life. A. great; B. terrible. It is [MASK]". This paradigm is closely in line with PPT (Gu et al., 2021).
+
+- Yes/No. We can reformulate the multi-class classification tasks into a series of binary classification. Take NLI for example. We can design three templates for each class, i.e "Are these descriptions are entailment?", "Are these descriptions are neutral?", and "Are these descriptions are contradiction?". We follow Zhong et al. (2021a) to add an MLP layer on the top of the PLM to obtain the output of the [MASK] token to classify the answer to be "Yes" or "No".
+
+Experimental results in Table 8 show that in average, $POV$ outperforms all baselines. For Multi-choice, we find the results decline a lot. We guess that the PLM is hard to understand and generate the items number, such as "A, B, C, D". In addition, we find the paradigm "Yes/No" has a similar performance to $POV$ . Overall, the experiments prove the effectiveness of $POV$ , which is easy to implement and avoids the transformation to multiple binary classification tasks for tasks with multiple classes.
+
+# E Additional Evaluation Results over Other Tasks
+
+In this part, we further present additional experiments over other tasks from GLUE (Wang et al., 2019c) and SuperGLUE (Wang et al., 2019a), including AX-b, AX-g, BoolQ, CB, SST-5, TREC and Subj. The data statistics can be found in the original papers. We choose standard fine-tuning, PET (Schick and Schütze, 2021a), LM-BFF (Liu et al., 2021c) as our baselines to make comparison. In this experiment, we only conduct task-specific single-task learning to evaluate the efficiency of the $POV$ paradigm. We also set $K = 16$ . As shown in Table 6, we can draw the following conclusions. 1) Our UPT framework outperforms strong baselines over these tasks. 2) SST-5 and TREC are challenging tasks with many labels, which consist of 5 and 6 classes, respectively. Experiments show that our $POV$ paradigm can achieve the best performances.
\ No newline at end of file
diff --git a/towardsunifiedprompttuningforfewshottextclassification/images.zip b/towardsunifiedprompttuningforfewshottextclassification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e1ff4f3f85a746a81bd467946dbcf092693b9571
--- /dev/null
+++ b/towardsunifiedprompttuningforfewshottextclassification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3ad7ba2daf379cf3d414460b5ceed24ae9413c277723f20e922db424eb067f39
+size 785614
diff --git a/towardsunifiedprompttuningforfewshottextclassification/layout.json b/towardsunifiedprompttuningforfewshottextclassification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..3b24c643188f82ac65e2605fdc21591da09ef383
--- /dev/null
+++ b/towardsunifiedprompttuningforfewshottextclassification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e0d0123ea0aebc55e87bdc882531a8bc8ecf304451ffb39666892b10a2acbb9
+size 499443
diff --git a/trainflatthencompresssharpnessawareminimizationlearnsmorecompressiblemodels/f4f9f8e3-8a8e-41bd-bb6c-26a433b41b18_content_list.json b/trainflatthencompresssharpnessawareminimizationlearnsmorecompressiblemodels/f4f9f8e3-8a8e-41bd-bb6c-26a433b41b18_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..42ecd21c6cb31e01433903340d1a1ab639ee8901
--- /dev/null
+++ b/trainflatthencompresssharpnessawareminimizationlearnsmorecompressiblemodels/f4f9f8e3-8a8e-41bd-bb6c-26a433b41b18_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ead91535867e85fb7bbffdc4c0b06e7133cdd40670b6c56f9273c5ffe04f24a5
+size 161776
diff --git a/trainflatthencompresssharpnessawareminimizationlearnsmorecompressiblemodels/f4f9f8e3-8a8e-41bd-bb6c-26a433b41b18_model.json b/trainflatthencompresssharpnessawareminimizationlearnsmorecompressiblemodels/f4f9f8e3-8a8e-41bd-bb6c-26a433b41b18_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..db73d0b29d3829ef5d98ad281796d27e298e5918
--- /dev/null
+++ b/trainflatthencompresssharpnessawareminimizationlearnsmorecompressiblemodels/f4f9f8e3-8a8e-41bd-bb6c-26a433b41b18_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:013cfe9f80390bba7c2f715aa2ef63885d78899215e8b60f89c100c3ab7da818
+size 198012
diff --git a/trainflatthencompresssharpnessawareminimizationlearnsmorecompressiblemodels/f4f9f8e3-8a8e-41bd-bb6c-26a433b41b18_origin.pdf b/trainflatthencompresssharpnessawareminimizationlearnsmorecompressiblemodels/f4f9f8e3-8a8e-41bd-bb6c-26a433b41b18_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a87732dd57511dada6ef05de5a5d742568a5a6f8
--- /dev/null
+++ b/trainflatthencompresssharpnessawareminimizationlearnsmorecompressiblemodels/f4f9f8e3-8a8e-41bd-bb6c-26a433b41b18_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:491b64306793eee6694c06d4a0e516e290f1bc47f07913a698e701a6b234d97f
+size 1695101
diff --git a/trainflatthencompresssharpnessawareminimizationlearnsmorecompressiblemodels/full.md b/trainflatthencompresssharpnessawareminimizationlearnsmorecompressiblemodels/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..8536c73266bfd1ff731f8238395e438682d04b0c
--- /dev/null
+++ b/trainflatthencompresssharpnessawareminimizationlearnsmorecompressiblemodels/full.md
@@ -0,0 +1,660 @@
+# Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models
+
+Clara Na Sanket Vaibhav Mehta Emma Strubell
+
+Language Technologies Institute
+
+School of Computer Science
+
+Carnegie Mellon University
+
+{csna, smehta, estrubel}@cs.cmu.edu
+
+# Abstract
+
+Model compression by way of parameter pruning, quantization, or distillation has recently gained popularity as an approach for reducing the computational requirements of modern deep neural network models for NLP. Inspired by prior works suggesting a connection between simpler, more generalizable models and those that lie within wider loss basins, we hypothesize that optimizing for flat minima should lead to simpler parameterizations and thus more compressible models. We propose to combine sharpness-aware minimization (SAM) with various task-specific model compression methods, including iterative magnitude pruning (IMP), structured pruning with a distillation objective, and post-training dynamic quantization. Empirically, we show that optimizing for flatter minima consistently leads to greater compressibility of parameters compared to vanilla Adam when fine-tuning BERT models, with little to no loss in accuracy on the GLUE text classification and SQuAD question answering benchmarks. Moreover, SAM finds superior winning tickets during IMP that 1) are amenable to vanilla Adam optimization, and 2) transfer more effectively across tasks. $^{1}$
+
+# 1 Introduction
+
+Recent advances in hardware, modeling, and optimization for deep neural networks have enabled training of substantially larger models on massive amounts of unlabeled data, leading to corresponding improvements in accuracy across a variety of tasks in NLP (Devlin et al., 2019; Brown et al., 2020; Raffel et al., 2020). Unfortunately, this sudden increase in scale of state-of-the-art models also has adverse consequences, such as reducing equity of access (Yu, 2020; Ahmed and Wahed, 2020) and increasing computational and energy requirements (Strubell et al., 2019; Dodge et al., 2022).
+
+
+Figure 1: Average score over all GLUE tasks as a function of sparsity (\% of parameters pruned) of $\mathrm{BERT}_{\text {base }}$ through unstructured iterative magnitude pruning. We compare the baseline Adam optimized model's performance to our model, optimized to prefer flat minima via SAM. The green horizontal bands mark initial performance of our full fine-tuned (FT) models. SAM outperforms baseline Adam across all sparsity values.
+
+In response, model compression has emerged as a dominant approach to improving memory and inference efficiency in neural network models, including approaches such as knowledge distillation (Bucilua et al., 2006; Hinton et al., 2014; Jiao et al., 2020), model quantization (Vanhoucke et al., 2011; Shen et al., 2020) and pruning (LeCun et al., 1989; Chen et al., 2020; Xia et al., 2022).
+
+The vast majority of work on model compression focuses on methods for selecting how and where to reduce (via quantization or distillation) or remove (by pruning) model parameters without sacrificing end-task accuracy. While these approaches are usually simple to implement and work relatively well for maintaining overall accuracy on considered benchmarks, recent work has revealed that these commonly used compression methods can result in negative impacts on model behavior that are not necessarily captured by current performance met-
+
+rics. For example, in the field of computer vision, pruning has been shown to sacrifice performance on long-tail examples in order to preserve accuracy on more frequent phenomena (Hooker et al., 2020) and reduce robustness to out-of-distribution examples (Liebenwein et al., 2021). At the same time, it has also been shown that compression can sometimes have a regularizing effect, improving generalization in some settings (Ahia et al., 2021). Clearly, more work is needed better understand the relationship between compression and generalizability, and to develop improved compression methods that are informed by this knowledge.
+
+Meanwhile, there is a growing body of work investigating curvature of the loss landscape and its relation to generalization in deep neural models. Hochreiter and Schmidhuber (1997) first defined flat minima as regions in parameter space where error remains relatively stable despite perturbations in parameter values, arguing that models in flat minima should correspond to simpler, more generalizable functions. More recently, Wu et al. (2017) further showed that wide loss basins correspond to low-complexity solutions,[2] and it has been shown empirically that directly optimizing for solutions in flat minima leads to improved generalization on a wide range of supervised learning tasks in both vision (Foret et al., 2021) and language (Bahri et al., 2022) modalities.
+
+Inspired by the above results connecting wider loss regions to simpler, more generalizable models, in this work we aim to advance our understanding of model compression by examining the relationship between flat minima and compressibility in fine-tuned language models. Intuitively, model parameters in neighborhoods having uniformly low loss values should be more robust to perturbations, such as those resulting from model compression, compared to models in sharper regions, since changes to parameter values should lead to minimal change in loss with respect to the main training objective. Empirically and theoretically, previous work has also linked generalizability with compressibility, finding that neural networks whose weights reflect simpler, more general hypotheses may be more robust to compression, and compression itself can act as a regularization mechanism (Zhou et al., 2019; Liang et al., 2021; Kuhn et al., 2021). Li et al. (2020) showed that larger
+
+pre-trained language models, which are known to generalize better, are also more compressible. Further, Bartoldson et al. (2020) connect flatness to generalization in pruned models, showing that pruning noise correlates positively with measures of flatness in a CNN for computer vision.
+
+We investigate the relationship between flat minima and compression in large pre-trained language models by directly optimizing for flat minima during language model fine-tuning using sharpness-aware minimization (SAM; Foret et al. (2021)). Through extensive experiments on the GLUE text classification and SQuAD question answering benchmarks, we show that fine-tuning BERT models with SAM leads to optima in flatter basins, and compressing those models consistently results in higher model accuracy at the same level of compression compared to standard Adam-optimized baselines. Our results hold across multiple BERT variants and sizes (Devlin et al., 2019; Liu et al., 2019) and a wide variety of compression methods: Iterative magnitude pruning with and without rewinding, a structured pruning procedure that also employs knowledge distillation, and an off-the-shelf method for post-training quantization. We also show that sparse subnetworks (winning tickets; Frankle and Carbin (2019)) discovered by SAM are better transferable across tasks, suggesting improved generalizability. Our findings shed light on a promising new avenue for obtaining practical improvements in model compression.
+
+# 2 Methods
+
+Broadly, we are interested in understanding: 1) Are models in flatter minima more compressible? 2) If so, why? 3) Beyond task-specific accuracy, what properties do flat, compressed models have?
+
+To provide empirical answers to these questions, our experimental setup is as follows. We fine-tune pre-trained language models on standard benchmarks using both "vanilla" Adam optimization and Sharpness-Aware Minimization (§2.1). We then experiment with a variety of strategies for compressing those fine-tuned models: iterative magnitude pruning (unstructured) with and without rewinding (§2.2.1), structured pruning using $\ell_0$ regularization and a distillation objective (§2.2.2), and posttraining dynamic quantization (§2.3). We evaluate model end-task accuracy at different compression rates, and compare that accuracy to the full (uncompressed) model and across experimental settings,
+
+such as varying the pre-trained language model, and transferring initializations across tasks.
+
+# 2.1 Flat Minima
+
+Sharpness-Aware Minimization (SAM). To explicitly encourage flatter loss basins, we employ the SAM (Foret et al., 2021) procedure. Given a loss function $f(w)$ , SAM strives to find parameters that lie in the neighborhood with uniformly low loss by optimizing the following minimax objective:
+
+$$
+\min _ {w} \max _ {| | \epsilon | | _ {2} \leq \rho} f (w + \epsilon) \tag {1}
+$$
+
+where the maximization (or neighborhood) region is an $\ell^p$ ball with radius $\rho$ for $p = 2$ in Equation (1). The gradient of the result of the above (inner) maximization problem can be approximated as:
+
+$$
+\begin{array}{l} \left. \approx \nabla_ {w} f (w) \right| _ {w + \hat {\epsilon} (\mathbf {w})} + \frac {\partial \hat {\epsilon} (\mathbf {w})}{w} \nabla_ {w} f (w) \big | _ {w + \hat {\epsilon} (\mathbf {w})} \\ \text {w h e r e}, \hat {\epsilon} (\mathbf {w}) = \rho \nabla_ {w} f (w) / \| \nabla_ {w} f (w) \| _ {2} \\ \end{array}
+$$
+
+Foret et al. (2021) showcase that one can simplify the optimization problem without compromising the algorithm's effectiveness by dropping the second order term in the gradient, leaving us with:
+
+$$
+\nabla_ {w} \max _ {\| \epsilon \| _ {2} \leq \rho} f (w + \epsilon) \approx \nabla_ {w} f (w) \Big | _ {w + \hat {\epsilon} (\mathbf {w})} \tag {2}
+$$
+
+Intuitively, SAM takes a gradient step at each iteration based on the gradient estimated at the parameters yielding the highest loss $(w + \hat{\epsilon}(w))$ in an $\ell^p$ neighborhood around the current parameters $(w)$ .
+
+Stochastic Weight Averaging (SWA). Although, we use SAM to optimize for flatness, there exist other alternatives to promote flatness like Stochastic Weight Averaging (Izmailov et al., 2018). SWA performs an equal average of model checkpoints along the optimization trajectory to find flatter solutions compared to vanilla optimization. We also consider SWA for our experimentation (§3.4.7).
+
+Sharpness Metric. To verify that SAM and SWA indeed leads to flatter minima as compared to Adam, we compute a sharpness metric (Keskar et al., 2017), which estimates the flatness by computing the maximum value of $f(w)$ within a neighborhood region controlled by the hyperparameter $\epsilon$ . Following (Keskar et al., 2017; Mehta et al., 2021), the neighborhood region is defined as:
+
+$$
+\begin{array}{l} C _ {\epsilon} = \{z \in R ^ {p}: - \epsilon \left(\left| \left(A ^ {+} w\right) _ {i} \right| + 1\right) \leq \\ z _ {i} \leq \epsilon \left(\left| \left(A ^ {+} w\right) _ {i} \right| + 1\right) \forall i \in \{1 \dots p \} \} \tag {3} \\ \end{array}
+$$
+
+where $R^p$ is a random subspace of the entire parameter space $R^n$ constructed using a projection matrix $A \in R^{n \times p}$ , and $A^+$ is the pseudo inverse of $A$ . Concretely, the sharpness metric (lower corresponds to flatter minima) is computed as follows:
+
+$$
+\phi_ {w, f} := \frac {\max _ {z \in C _ {\epsilon}} f (w + A z) - f (w)}{1 + f (w)} \times 1 0 0 \tag {4}
+$$
+
+To qualitatively verify for flatness, we also visualize loss contours (see Appendix A.2).
+
+# 2.2 Pruning
+
+We investigate compressibility primarily in an unstructured pruning setting. Given a network $\mathcal{N}$ and weights $\mathbf{w}$ , we wish to prune a subset of individual weights to leave only a subset, $\mathbf{w}'$ . A successfully pruned model has $|\mathbf{w}'| \ll |\mathbf{w}|$ while retaining good performance on the task(s) of interest.
+
+# 2.2.1 Iterative Magnitude Pruning
+
+Typically, $\mathbf{w}^{\prime}$ is found through an iterative process where $\mathcal{N}$ is trained and pruned incrementally, either until some target sparsity is reached or until some larger-than-desired performance drop is observed, and a common criterion selects weights with the smallest absolute magnitude to be pruned at each iteration. In the standard pruning scenario (Renda et al., 2020a; Han et al., 2015), training simply resumes with the remaining weights after each iteration of pruning. Previous work (Renda et al., 2020a) presents evidence that rewinding remaining weights to earlier learned values may be beneficial for compressibility.
+
+Frankle and Carbin (2019) present a formulation of iterative magnitude pruning (IMP) as a way to obtain sparse "winning tickets" $\mathbf{w}'$ that can be trained from initialization to reach performance to match that of the original full network $\mathcal{N}$ while using significantly fewer parameters. In IMP, model parameters are repeatedly reset to their original initialized values after pruning, before the next iteration of training. Reverting weights to values from an earlier point during training is also known as rewinding (Frankle et al., 2020). Following Chen et al. (2020), we consider both standard (no rewinding) and lottery ticket-style IMP (with rewinding) settings. In alignment with the paradigm of pretraining and fine-tuning, we treat the pre-trained model's weights as the initial weights to which parameters are reset at each iteration.
+
+# 2.2.2 Structured Pruning
+
+We also explore flat minima in a recently proposed structured pruning setting: Xia et al. (2022) incorporate a layerwise distillation objective into their structured pruning process, which dynamically maps layers between teacher and student models as structured units of varying granularity are incrementally pruned in the student model via an $\ell_0$ regularizer. In our experiments, we vary only optimizer used to fine-tune the teacher model and compare downstream compression performance.
+
+# 2.3 Post-Training Quantization
+
+We compare performance of Int8 quantized $\mathrm{BERT}_{\mathrm{base}}$ models fine-tuned with Adam- and SAM-optimized models. Using a standard PyTorch implementation, we perform post-training dynamic quantization, where full-size (32-bit) floating point model weights are statically mapped to a lower precision (in our case, 8-bit integer) representation after training, and activations are dynamically reduced in precision during inference.
+
+# 3 Experimentation
+
+# 3.1 Research Questions
+
+In this section, we describe a series of experiments and analyses aimed at answering the following research questions:
+
+Q0 Does SAM help make models more robust to compression? (§3.4.1, §3.4.4, §3.4.5)
+Q1 Does SAM benefit compressibility across different model initializations and sizes? (§3.4.6)
+Q2 How does SAM influence model compressibility? (§3.4.2)
+Q3 How does SAM affect compressed model properties beyond single-task accuracy? (§3.4.2, §3.4.3)
+Q4 How does flatness in general, beyond SAM specifically, influence compressibility? (§3.4.7)
+
+# 3.2 Datasets and Metrics
+
+We consider eight tasks from the standard GLUE (Wang et al., 2018) benchmark for our experimentation, as well as SQuAD v1.1 (Rajpurkar
+
+et al., 2016). The GLUE datasets include MNLI (Williams et al., 2018), $\mathrm{QQP^4}$ , STS-B (Cer et al., 2017), QNLI (Wang et al., 2018), MRPC (Dolan and Brockett, 2005), RTE (Wang et al., 2018), SST-2 (Socher et al., 2013), and CoLA (Warstadt et al., 2019). For all experiments unless otherwise noted, we follow prior work (Chen et al., 2020) and report validation set accuracy for QQP, QNLI, MRPC, RTE, SST-2, matched accuracy for MNLI, Matthew's correlation for CoLA, Pearson correlation for STS-B, and F1 score for SQuAD. We also make use of a sharpness metric ( $\S A.2$ ) to quantify the flatness of the basins that our models lie in.
+
+# 3.3 Implementation Details
+
+For all experiments described in this section, we fine-tune publicly available pre-trained BERT model weights (Wolf et al., 2019). We use the uncased $\mathrm{BERT}_{\mathrm{base}}$ model for all experiments except when otherwise noted. For our iterative magnitude pruning experiments, we mainly set hyperparameters as mentioned by Chen et al. (2020) and follow a similar general procedure for iterative magnitude pruning, pruning an absolute $10\%$ of prunable weights over 9 iterations to reach $90\%$ sparsity in order to facilitate direct comparisons. Appendix A.4 contains further implementation details including hyperparameters used and explanations of when our methods differ. For our SAM optimizer, we use Adam (Kingma and Ba, 2014) as our base optimizer, and following (Mehta et al., 2021; Bahri et al., 2022) set $\rho$ to 0.05. Appendix A.2 contains implementation details for SWA.
+
+# 3.4 Results
+
+# 3.4.1 Iterative Magnitude Pruning (Q0)
+
+With rewind to $\mathbf{BERT}_{\mathrm{base}}$ We investigate the SAM procedure's effectiveness in uncovering winning tickets (Frankle and Carbin, 2019). The IMP section of Table 1 shows that optimizing with SAM throughout iterative magnitude pruning allows pruned models to retain higher performance at reference sparsity levels when compared to models trained with vanilla Adam optimizer.
+
+The plots in Figure 2 show evaluation metrics over successive IMP iterations for individual GLUE tasks. We see that although initial performance of Adam- and SAM-optimized models is usually comparable, promoting flat minima during
+
+
Dataset
MNLI
QQP
STS-B
QNLI
MRPC
RTE
SST-2
CoLA
SQuAD
Sparsity
70%
90%
50%
70%
50%
60%
60%
50%
40%
Metric
Match/Mismatch acc.
Acc.
Pearson Cor.
Acc.
Acc.
Acc.
Acc.
Matthew's Cor.
F1
Full
Adam
84.60.1/83.60.3
89.10.1
84.00.4
90.80.2
82.61.4
67.01.4
93.20.6
53.31.2
88.50.2
SAM
85.00.1/84.50.1
89.20.2
84.70.1
90.80.4
82.91.3
65.50.8
93.60.1
54.11.3
89.20.1
IMP
Adam
82.30.3/81.40.2
83.00.0
83.30.3
88.60.3
81.51.3
63.32.8
92.20.4
47.71.5
86.90.3
SAM
83.20.2/82.50.4
85.40.1
84.10.1
89.40.2
83.60.2
65.01.5
92.90.5
49.52.0
87.80.2
*SWA
83.10.1/82.10.1
87.70.2
85.30.4
89.00.2
83.70.1
67.40.4
92.70.3
50.91.0
-
Std
Adam
82.40.3/81.10.5
87.20.1
83.90.1
88.80.1
82.91.0
64.41.6
92.30.2
50.90.2
86.30.2
SAM
83.20.1/81.80.1
87.40.4
84.60.1
89.40.1
81.91.2
64.30.8
93.00.6
51.32.0
87.10.2
+
+Table 1: At full size and at Chen et al. (2020)'s reference sparsities, we report task-specific metrics for Adam and SAM-optimized $\mathrm{BERT}_{\text {base }}$ models in their (1) Iterative Magnitude Pruning (IMP) and (2) Std pruning settings. We report mean and standard deviation calculated over 3 random seeds. All GLUE results are reported for test sets. We report results on the development set for SQuAD as test set evaluation is unavailable for v1.1. Table 5 in Appendix A.5 contains comparison with reference (Ref) values using development sets. *Additionally, we report test accuracy metrics in IMP models trained with stochastic weight averaging (SWA), another optimization method which empirically leads to flatter minima. We observe that optimizing with SAM or SWA throughout iterative magnitude pruning allows pruned models to retain higher performance at reference sparsity levels when compared to models trained with vanilla Adam optimizer.
+
+IMP with SAM leads to either 1) higher performance compared to Adam at Chen et al. (2020)'s reference sparsity level or 2) performance comparable to the full sized model at higher sparsity levels, if not both, for all but one smaller GLUE task (CoLA).
+
+Moreover, for some tasks (RTE, SST-2), the final pruned model's accuracy tends to be higher than the full BERTbase fine-tuned model's. This is an especially striking result given that we reset remaining weights to the BERTbase initialization after each successive iteration of pruning; there is no progressive learning of weights from iteration to iteration. Instead, we reach higher accuracy simply by optimizing over the learned substructures rather than the full network.
+
+Both SAM and vanilla Adam induce marginal improvements compared to the full fine-tuned model with $10\%$ pruning for some tasks (e.g. 2b, 2f, 2g), which we might attribute to slight pruning acting as a structure regularizer. However, only SAM-optimized models sometimes continue to exhibit improvements in much later stages of pruning.
+
+"Standard pruning" without rewind We also investigate SAM and vanilla Adam optimizers in the standard pruning setting, where we continue training immediately after pruning in each iteration, without resetting remaining weights to pre-trained $\mathrm{BERT}_{\mathrm{base}}$ initialization. The Std section of Table 1
+
+shows these results.
+
+SAM-optimized models retain more of the full sized model's performance at Chen et al. (2020)'s reference sparsity levels. However, the trend is not more stark in this setting, which hints at the structure of winning tickets found by SAM playing an important role.
+
+We more directly investigate how SAM benefits model compressibility in Section 3.4.2, but we also take care to rule out the possibility that SAM is simply acting as an implicit $\ell_{1}$ regularizer (A.7).
+
+# 3.4.2 Analysis: Answering the Structure vs. Optimization Question (Q2, Q3)
+
+In this experimental setting, we aim to disentangle the effects of the structure $^6$ of the pruning masks learned using different optimizers, from the optimization over given substructures using different optimizers. Figure 3 displays our results.
+
+We observe that, in general, subnetworks learned through IMP greatly outperform random subnetworks of the same sparsity when trained. Subnetworks found with SAM tend to outperform those found with vanilla Adam, especially with subsequent optimization with Adam. Furthermore, training any given IMP-learned subnetwork with SAM yields modest improvements in accuracy compared to fine-tuning with vanilla Adam.
+
+
+
+
+(b) MRPC
+
+
+
+
+
+
+(a) RTE
+(e) SST-2
+
+
+(f)QNLI
+
+
+(c) CoLA
+(g) QQP
+
+
+(d) STS-B
+(h) MNLI
+
+
+Figure 2: Individual plots showing sparsity vs. task metrics (validation set) for GLUE throughout IMP. The vertical lines and gray horizontal bands mark reference sparsity and "winning ticket" evaluation metric values that were obtained by Chen et al. (2020). The green horizontal bands mark the initial performance of our full fine-tuned (uncompressed) models.
+Figure 3: We compare optimizers' learned tickets, as well as their performance in optimization over a given ticket. For select GLUE tasks at their reference sparsity values, we fine-tune pruned subnetworks of pre-trained BERTbase initializations networks based on 1) random masks, 2) Adam-learned masks, and 3) SAM-learned masks, using a) SAM and b) Adam optimizers. SAM optimization over SAM- and Adam-learned winning tickets tends to yield marginal improvements compared to Adam optimization. Comparing bar heights from left to right within each figure allows us to see that, at least when Adam is used for final fine-tuning, a Random- < Adam- < SAM-learned masks. Exact values are reported in Table 6, and Figure 12 in Appendix shows an alternative view of the data.
+
+
+
+
+
+
+
+# 3.4.3 Analysis: Comparing Transferability of Winning Tickets (Q3)
+
+We explore the extent to which SAM tickets are more or less transferable across tasks compared to tickets discovered by Adam (Figure 4). For consistency, we use $70\%$ sparsity tickets for all tasks evaluated. SAM-learned tickets tend to transfer better across tasks than Adam-learned tickets. This complements our findings in §3.4.2, which can be interpreted as a study on transferability of winning
+
+tickets between optimizers instead of across tasks.
+
+We also compare SAM versus Adam as an optimizer for fine-tuning in this setting, and found that SAM did not seem to work better overall as an optimizer, given a ticket for a different task (see Figure 13 in Appendix). Note that this does not directly contradict our results from §3.4.2; SAM optimization does typically benefit same-task performance (see diagonals in these figures).
+
+
+SAM vs Adam Ticket Transfer Performance, w/ SAM Optimizer
+
+
+SAM vs Adam Ticket Transfer Performance, w/ Adam Optimizer
+Figure 4: Heatmaps indicating the difference in target task performance between $70\%$ sparsity SAM and Adam tickets when transferring tickets across tasks, finetuned with either SAM (top) or Adam (bottom) optimizers during IMP. Values $>0$ indicate the extent to which SAM tickets transferred better than Adam tickets; Values $<0$ indicate where Adam tickets transferred better than SAM. Overall, SAM tickets transfer better regardless of the final fine-tuning optimizer. Note that the positive values along the diagonal indicate superior SAM ticket performance in the single task setting, even with "transfer" between optimizers.
+
+# 3.4.4 Structured Pruning (Q0)
+
+We use full-size fine-tuned $\mathrm{BERT}_{\text {base }}$ models from §3.4.1 as teacher models for the layerwise distillation objective used throughout the structured pruning procedure. The pruning procedure itself is the same as Xia et al. (2022)'s, regardless of whether the teacher model was originally trained using SAM or vanilla Adam. Appendix A.10 contains further implementation details.
+
+In Table 2, we show that SAM-optimized teacher models improve compressed student model performance in this structured pruning setting. SAM outperforming a vanilla Adam optimizer in this setting is a particularly desirable goal from a prac
+
+
Dataset
Optim.
Teacher Acc.
Pruned Acc.
SST-2
Adam
92.70.1
90.20.5
(67k)
SAM
93.10.6
91.30.3
QNLI
Adam
91.50.1
85.90.4
(105k)
SAM
91.30.6
86.90.4
QQP
Adam
91.00.1
90.00.1
(364k)
SAM
91.10.1
90.10.1
MNLI
Adam
84.70.4
80.20.4
(393k)
SAM
85.30.2
80.60.1
+
+Table 2: Comparison between pruned models obtained using teacher models fine-tuned with Adam and SAM optimizers. Numbers reported are means $_{stddev}$ ( $n = 3$ ) for evaluation metrics on the development set. Compressed models are trained to reach $95\%$ sparsity using optimal values for $\lambda$ and finetuning learning rate from Xia et al. (2022)'s structured pruning setting.
+
+tical standpoint. First, unlike in a full IMP setting, we only use SAM for a single fine-tuning in the pruning pipeline, and so we only incur the computational overhead associated with SAM for a fraction of the overall pruning process. Second, training time for CoFi's pruning process itself has a tenfold speedup compared to the TinyBERT baseline (Jiao et al., 2020). Finally, the pruned models obtained in this setting perform inference as quickly as TinyBERT, which amounts to a tenfold speedup compared to the full $\mathrm{BERT}_{\mathrm{base}}$ model.
+
+# 3.4.5 Post-Training Quantization (Q0)
+
+
+Post-Training Quantized Model Performance Comparison
+Figure 5: We compare full fine-tuned and quantized BERT-base models optimized with SAM and Adam. Error bars show standard deviations for $n = 3$ . Additionally, we show that applying a simpler post-training dynamic quantization technique on a SAM-optimized model can approach the reported performance of a model quantized through quantization-aware training (QAT) (Zafrir et al., 2019).
+
+From Figure 5 (see A.11 for actual numbers), we
+
+can make a few key observations: 1) SAM-trained models retain higher task performance after quantization compared to Adam-trained models. 2) SAM-trained models' higher performance is also more stable across random seeds. Finally, 3) for some tasks, our SAM-trained models compressed with simple post-training dynamic quantization meet or approach the performance of Zafrir et al. (2019)'s models quantized through quantization-aware training (QAT), which tends to yield better compressed models but is more complex to implement and use. The benefits of quantization vary under different settings and hardware, but our BERTbase models quantized to Int8 precision have a 2.5x reduction in storage requirements and 1.5-2x faster inference.
+
+# 3.4.6 Evaluating Other Models (Q1)
+
+In order to explore the applicability of our findings across model sizes and other BERT variants, we experiment with SAM and Adam optimizers on $\mathrm{BERT}_{large}$ and $\mathrm{RoBERTa}_{base}$ models in the IMP setting of §3.4.1. We find that, indeed, SAM-optimized models fare better than Adam-optimized models in these other BERT variants. A.12 includes details relevant for reproducibility, as well as task-specific results.
+
+More generally, our proposal to "train flat" with SAM is compatible with Li et al. (2020)'s recommendation to "train large, then compress", and with starting with a better-performing model before compression. Starting with $\mathrm{BERT}_{\text {large }}$ or $\mathrm{RoBERTa}_{\text {base }}$ can lead to clearly higher compressed accuracy at similar target sparsity levels and rarely leads to significantly worse performance. However, the higher performance often does not simply follow a parallel pattern as in $\mathrm{BERT}_{\text {base }}$ models throughout pruning; the initial performance gaps between $\mathrm{BERT}_{\text {large }}$ and $\mathrm{BERT}_{\text {base }}$ models tend to be preserved slightly more reliably at higher sparsity levels than the gaps between $\mathrm{RoBERTa}_{\text {base }}$ and $\mathrm{BERT}_{\text {base }}$ models. This prompts further investigation into the properties of the subnetworks found in these BERT variants and appeals to the potential compressibility of even larger models when flatness-optimized.
+
+# 3.4.7 Stochastic Weight Averaging (Q4)
+
+We conduct experiments matching the IMP setting with rewind from §3.4.1 using stochastic weight averaging (SWA). In Table 1, we report results on GLUE test set for SWA with IMP. Similar to SAM, we observe that SWA is superior to Adam optimized models at reference sparsity levels.
+
+# 4 Related Work
+
+In this section, we draw connections to key related work. For a more general description of related work, please refer to A.1 in Appendix.
+
+First, we briefly recount previous work that we consider for our experimentation. We use Adam (Kingma and Ba, 2014) as a base optimizer, with comparisons between vanilla Adam and the addition of Sharpness-Aware-Minimization (Foret et al., 2021). We make further comparisons in a subset of our experiments with Stochastic Weight Averaging (Izmailov et al., 2018) as an alternative method of inducing flatness. Using Keskar et al. (2017)'s $\epsilon$ -sharpness metric, and based on Mehta et al. (2021)'s implementation, we verify that SAM and SWA induce flatness. We fine-tune BERT models (Devlin et al., 2019; Liu et al., 2019) for GLUE (Wang et al., 2018) and SQuAD (Rajpurkar et al., 2016) language tasks. The compression methods we couple with "training flat" include: 1) unstructured IMP to find winning lottery tickets (Frankle and Carbin, 2019), referring to Chen et al. (2020)'s implementation and reported results for making direct comparisons; 2) Xia et al. (2022)'s structured pruning method with a distillation objective; and 3) off-the-shelf post-training quantization, which we compare with reported results from Zafrir et al. (2019)'s quantization-aware training method.
+
+Flatness and generalization Prior work has investigated the connection between flat minima and generalization (i.e., the gap between training accuracy and holdout set accuracy), starting with Hochreiter and Schmidhuber (1997).
+
+Subsequent work has continued on this front, exploring notions of sharpness and their predictiveness of generalization under different conditions (Bisla et al., 2022), as well as empirical evaluations of flatness-inducing methods and their effects on generalization. Jiang* et al. (2020) find that sharpness is empirically predictive of generalization, including in particular a perturbation magnitude-aware metric very similar to the $\epsilon$ -sharpness metric introduced in (Keskar et al., 2017) and used in our paper. In this work, however, separate from generalization, we primarily focus on investigating the underexplored relationship between flatness and compression.
+
+Flatness and compression Previous work has mentioned flatness in the context of pruning. In fact, Hochreiter and Schmidhuber (1997)'s orig
+
+inal "Flat Minimum Search" algorithm is explicitly designed to prune units, weights, and input lines as a mechanism for finding flat minima. LeCun et al. (1989) also propose pruning unimportant weights as a way to obtain improved generalization, although without any notions of flatness.
+
+Since these earlier works, the deep learning landscape has changed such that effective model compression itself is now often a goal; model efficiency in terms of size and latency is a common priority.
+
+To the best of our knowledge, we are the first to directly relate loss landscape flatness with model compressibility. We view our results as complementary to the concurrent work by Paul et al. (2022), who study properties of winning tickets found during iterative magnitude pruning and find that IMP itself preferentially prunes parameters lying along flatter directions in the loss landscape; they also theorize that flatter landscapes allow for more aggressive pruning. We note that the optimizer, data, and model architectures they use are different from ours. While Paul et al. (2022) use Stochastic Gradient Descent for image classification tasks on ResNet architectures, we use Adam as a base optimizer for text classification and question answering tasks with BERT architectures. Nonetheless, to the extent that findings from both papers can generalize to other settings, Paul et al. (2022) lay theoretical groundwork which supports our explicit suggestion to "train flat" as a strategy for inducing greater compressibility in neural models.
+
+# 5 Discussion
+
+We show that in general, SAM helps models retain higher accuracy on a variety of language tasks at higher sparsity levels. This holds true in multiple unstructured iterative magnitude pruning settings, as well as in a structured pruning setting with a distillation objective. Moreover, our additional experiments and analyses point to SAM-learned structures playing an important role in compressibility, as well as transferring well across tasks.
+
+# 5.1 Future work
+
+Beyond SAM and SWA (Q4) In this paper, we explore (Foret et al., 2021)'s Sharpness-AwareMinimization procedure specifically as a method for directly reaching flat minima and conduct additional comparisons with stochastic weight averaging (Izmailov et al., 2018). However, other methods such as entropy SGD (Chaudhari et al., 2017)
+
+and label noise SGD (Damian et al., 2021) have also been shown to encourage convergence to flat minima. Further exploration would provide clarity on the role of flatness in general in model compressibility versus properties specific to SAM and SWA.
+
+The role of pre-training (Q4) The BERT models we fine-tune in this work are pre-trained on various self-supervised auxiliary objectives with the goal of learning useful general representations of the English language. Recent work (Mehta et al., 2021) has found that, empirically, pre-training is associated with convergence to flatter minima than obtained by training on the same task to the same accuracy from a random initialization. Subsequent work could compare compressibility of pretrained models versus models trained from random initialization, as well as investigate the potential for further improving compressibility through flat pre-training. Inducing flatness during pre-training would facilitate further experimentation with end-task-agnostic knowledge distillation.
+
+Other evaluations (Q3) In general, we evaluate model performance in terms of task-specific metrics (on development/test splits) throughout this work. However, since compression is associated with negative consequences for model behavior not captured by task-specific accuracy (Hooker et al., 2020; Liebenwein et al., 2021), there is a particular need for work to understand and influence these qualities. Ribeiro et al. (2020) propose behavioral testing of NLP models to evaluate specific capabilities such as robustness to typos and simple coherence resolution. Xu et al. (2021) propose measures of probability and label loyalty and robustness to input perturbations for evaluating compressed models beyond preserved accuracy. We briefly discuss preliminary observations of model behaviors with respect to Ribeiro et al. (2020)'s Checklist items in $\S A.13$ in Appendix, but we emphasize that more work is needed to understand behaviors and behavior shifts of vanilla Adam and SAM models before and throughout pruning.
+
+# Limitations
+
+Many of the limitations of our work have to do with its computational requirements. First, the standard implementation of Sharpness-Aware Minimization that we use incurs significant computational overhead, so further investigation into adaptive (Kwon
+
+et al., 2021) and more efficient (Du et al., 2022b,a) variations of SAM is warranted before considering adoption of our methods in practice. Meanwhile, we control for the number of training steps and sparsity level of our models without regard to wall-clock time in our experiments, but fin-tuning $\mathrm{BERT}_{\text {base }}$ with standard SAM typically results in $1.5 \times 2 \times$ slower optimization steps. In general, many of our approaches are not strategies that can simply be applied off the shelf for practical benefits. In particular, current hardware and frameworks do not typically support reliable and proportionate efficiency gains from quantization to arbitrary precision and unstructured magnitude pruning. Moreover, the computation and storage requirements for our iterative magnitude pruning scheme are tenfold compared to the typical single fine-tuning performed on a pre-trained language model, due to the ten iterations performed and checkpoints saved. We benefited from access to a large compute cluster with dozens of GPUs, including A6000, a100, v100, RTX3090, RTX8000, and 2080Ti GPUs. We estimate having used at least 1000 GPU hours across experiments for this work, which inherently limits the full reproducibility of our results in limited compute scenarios.
+
+Additionally, although we focus on optimizing for flatness directly in our experimentation with SAM (Foret et al., 2021), other methods, such as weight averaging (Izmailov et al., 2018) (which we explore in less detail), entropy SGD (Chaudhari et al., 2017), and label noise SGD (Damian et al., 2021), have also been shown to encourage convergence to flat minima. Without explicitly exploring compressibility in models that have reached flat minima in alternative ways, we refrain from making strong claims about flat minima in general in this work.
+
+Furthermore, the tasks we train our models on are limited to sentence classification and question answering tasks, all in only the English language. We evaluate our methods using only BERT variants: pre-trained language models trained with a token masking objective.
+
+Finally, although our measures of end-task accuracy and degree of compression are standard and useful for evaluating and comparing compressed models, they do not provide a complete picture of model behaviors and capabilities. Other desirable characteristics in compressed models (and models in general) include but are not limited to robustness
+
+to distribution shift, stability against catastrophic forgetting, and fairness in performance across demographic groups..
+
+# Ethics Statement
+
+We reiterate that we are not proposing that our strategies or models be adopted off the shelf as is. This is especially true because our work does not include rigorous analysis of our compressed models' properties and behaviors outside of task accuracy, resilience to compression, and transferability to other tasks. Detailed study of properties such as long-tail performance, robustness to data distribution shifts, and fairness in performance across demographic groups, for example, which have important real-world implications, is outside of the scope of our current work. However, preliminary evaluations on certain model behaviors and properties suggests that many of our models which achieve high end-task performance are vulnerable to simple perturbations in data and lack basic desirable linguistic capabilities (although not necessarily more so than is "typical" of language models (Ribeiro et al., 2020; Xu et al., 2021).
+
+# Acknowledgements
+
+We are grateful to our anonymous reviewers, both of whom provided thoughtful and helpful feedback on extremely short notice. We would like to thank COMEDY (COhorts of Maarten Sap, Emma Strubell, Daniel Fried, and Yonatan Bisk) lab members for sharing insights and intuitions during initial discussions; Nupoor Gandhi, Jared Fernandez, Jeremiah Milbauer, Zhisong Zhang, and Josh Zhanson also gave constructive feedback on drafts and figures. We would like to acknowledge CMU Workhorse and TIR groups for providing compute resources for this work. Outside of CMU, we are appreciative of Mengzhou Xia for helping us reproduce structured pruning experiments using CoFi. This project is funded in part by DSO National Laboratories.
+
+# References
+
+Orevaoghene Ahia, Julia Kreutzer, and Sara Hooker. 2021. The low-resource double bind: An empirical study of pruning for low-resource machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3316-3333, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+
+Nur Ahmed and Muntasir Wahed. 2020. The democratization of ai: Deep learning and the compute divide in artificial intelligence research.
+Dara Bahri, Hossein Mobahi, and Yi Tay. 2022. Sharpness-aware minimization improves language model generalization. In Annual Conference of the Association for Computational Linguistics (ACL), Dublin, Ireland.
+Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jin Jin, Xin Jiang, Qun Liu, Michael Lyu, and Irwin King. 2021. BinaryBERT: Pushing the limit of BERT quantization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4334-4348, Online. Association for Computational Linguistics.
+Brian Bartoldson, Ari Morcos, Adrian Barbu, and Gordon Erlebacher. 2020. The generalization-stability tradeoff in neural network pruning. Advances in Neural Information Processing Systems, 33:20852-20864.
+Aishwarya Bhandare, Vamsi Sripathi, Deepthi Karkada, Vivek Menon, Sun Choi, Kushal Datta, and Vikram Saletore. 2019. Efficient 8-bit quantization of transformer neural machine language translation model. arXiv preprint arXiv:1906.00532.
+Devansh Bisla, Jing Wang, and Anna Choromanska. 2022. Low-pass filtering sgd for recovering flat optima in the deep learning optimization landscape. In Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, volume 151 of Proceedings of Machine Learning Research, pages 8299-8339. PMLR.
+Davis Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, and John Guttag. 2020. What is the state of neural network pruning? In Proceedings of Machine Learning and Systems (MLSys).
+Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.
+Cristian Bucilua, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '06, page 535-541, New York, NY, USA. Association for Computing Machinery.
+
+Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055.
+Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer T. Chayes, Levent Sagun, and Riccardo Zecchina. 2017. Entropy-sgd: Biasing gradient descent into wide valleys. ArXiv, abs/1611.01838.
+Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, and Michael Carbin. 2020. The lottery ticket hypothesis for pretrained bert networks. In Advances in Neural Information Processing Systems, volume 33, pages 15834-15846. Curran Associates, Inc.
+Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Zhangyang Wang, and Jingjing Liu. 2021. Early BERT: Efficient BERT training via early-bird lottery tickets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2195-2207, Online. Association for Computational Linguistics.
+Alex Damian, Tengyu Ma, and Jason D. Lee. 2021. Label noise SGD provably prefers flat global minimizers. In Advances in Neural Information Processing Systems.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+James Diffenderfer and Bhavya Kailkhura. 2021. Multi-prize lottery ticket hypothesis: Finding accurate binary neural networks by pruning a randomly weighted network. In International Conference on Learning Representations.
+Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. 2017. Sharp minima can generalize for deep nets. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1019-1028. PMLR.
+Jesse Dodge, Taylor Prewitt, Remi Tachet des Combes, Erika Odmark, Roy Schwartz, Emma Strubell, Alexandra Sasha Luccioni, Noah A. Smith, Nicole DeCario, and Will Buchanan. 2022. Measuring the carbon intensity of ai in cloud instances. In ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT).
+
+Bill Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Third International Workshop on Paraphrasing (IWP2005).
+Jiawei Du, Hanshu Yan, Jiashi Feng, Joey Tianyi Zhou, Liangli Zhen, Rick Siow Mong Goh, and Vincent Tan. 2022a. Efficient sharpness-aware minimization for improved training of neural networks. In International Conference on Learning Representations.
+Jiawei Du, Daquan Zhou, Jiashi Feng, Vincent Y. F. Tan, and Joey Tianyi Zhou. 2022b. Sharpness-aware training for free.
+Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. 2021. Sharpness-aware minimization for efficiently improving generalization. In International Conference on Learning Representations.
+Jonathan Frankle and Michael Carbin. 2019. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations.
+Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, and Michael Carbin. 2020. Linear mode connectivity and the lottery ticket hypothesis. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org.
+Amir Gholami, Sehoon Kim, Zhen Dong, Zhewei Yao, Michael W. Mahoney, and Kurt Keutzer. 2021. A survey of quantization methods for efficient neural network inference.
+R.M. Gray and D.L. Neuhoff. 1998. Quantization. IEEE Transactions on Information Theory, 44(6):2325-2383.
+Demi Guo, Alexander Rush, and Yoon Kim. 2021. Parameter-efficient transfer learning with diff pruning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4884-4896, Online. Association for Computational Linguistics.
+Song Han, Huizi Mao, and William J. Dally. 2015. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding.
+Yaru Hao, Li Dong, Furu Wei, and Ke Xu. 2019. Visualizing and understanding the effectiveness of bert. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4134-4143.
+Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2014. Distilling the knowledge in a neural network. In NIPS 2014 Deep Learning Workshop.
+
+Sepp Hochreiter and Jürgen Schmidhuber. 1997. Flat minima. Neural computation, 9(1):1-42.
+Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy Bengio, and Emily Denton. 2020. Characterising bias in compressed models. In Fifth Workshop on Human Interpretability in Machine Learning (WHI).
+Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. 2018. Averaging weights leads to wider optima and better generalization. arXiv preprint arXiv:1803.05407.
+Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew G. Howard, Hartwig Adam, and Dmitry Kalenichenko. 2018. Quantization and training of neural networks for efficient integer-arithmetic-only inference. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2704-2713.
+Yiding Jiang*, Behnam Neyshabur*, Hossein Mobahi, Dilip Krishnan, and Samy Bengio. 2020. Fantastic generalization measures and where to find them. In International Conference on Learning Representations.
+Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. TinyBERT: Distilling BERT for natural language understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4163-4174, Online. Association for Computational Linguistics.
+Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. 2017. On large-batch training for deep learning: Generalization gap and sharp minima. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, and Kurt Keutzer. 2021. I-bert: Integer-only bert quantization. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 5506-5518. PMLR.
+Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
+Lorenz Kuhn, Clare Lyle, Aidan N. Gomez, Jonas Rothfuss, and Yarin Gal. 2021. Robustness to pruning predicts generalization in deep neural networks.
+Jungmin Kwon, Jeongseop Kim, Hyunseo Park, and In Kwon Choi. 2021. Asam: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 5905-5914. PMLR.
+
+François Lagunas, Ella Charlaix, Victor Sanh, and Alexander Rush. 2021. Block pruning for faster transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10619-10629, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Yann LeCun, John Denker, and Sara Solla. 1989. Optimal brain damage. In Advances in Neural Information Processing Systems, volume 2. Morgan-Kaufmann.
+Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, and Joey Gonzalez. 2020. Train large, then compress: Rethinking model size for efficient training and inference of transformers. In ICML, pages 5958-5968.
+Chen Liang, Simiao Zuo, Minshuo Chen, Haoming Jiang, Xiaodong Liu, Pengcheng He, Tuo Zhao, and Weizhu Chen. 2021. Super tickets in pre-trained language models: From model compression to improving generalization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6524-6538, Online. Association for Computational Linguistics.
+Lucas Liebenwein, Cenk Baykal, Brandon Carter, David Gifford, and Daniela Rus. 2021. Lost in pruning: The effects of pruning neural networks beyond test accuracy. In Proceedings of Machine Learning and Systems, volume 3, pages 93-138.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. ArXiv, abs/1907.11692.
+Christos Louizos, Max Welling, and Diederik P. Kingma. 2018. Learning sparse neural networks through l_0 regularization. In International Conference on Learning Representations.
+Sanket Vaibhav Mehta, Darshan Patil, Sarath Chandar, and Emma Strubell. 2021. An empirical investigation of the role of pre-training in lifelong learning. arXiv preprint arXiv:2112.09153.
+Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
+Behnam Neyshabur, Hanie Sedghi, and Chiyuan Zhang. 2020. What is being transferred in transfer learning? In Advances in Neural Information Processing Systems, volume 33, pages 512-523. Curran Associates, Inc.
+Mansheej Paul, Feng Chen, Brett W. Larsen, Jonathan Frankle, Surya Ganguli, and Gintare Karolina Dziugaite. 2022. Unmasking the lottery ticket hypothesis: What's encoded in a winning ticket's mask?
+
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
+Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: $100,000+$ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392.
+Alex Renda, Jonathan Frankle, and Michael Carbin. 2020a. Comparing rewinding and fine-tuning in neural network pruning. In International Conference on Learning Representations.
+Alex Renda, Jonathan Frankle, and Michael Carbin. 2020b. Comparing rewinding and fine-tuning in neural network pruning. In International Conference on Learning Representations.
+Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902-4912, Online. Association for Computational Linguistics.
+Victor Sanh, Lysandre Debut, Julien Chaumont, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter.
+Victor Sanh, Thomas Wolf, and Alexander Rush. 2020. Movement pruning: Adaptive sparsity by fine-tuning. In Advances in Neural Information Processing Systems, volume 33, pages 20378-20389. Curran Associates, Inc.
+Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, and Kurt Keutzer. 2020. Q-bert: Hessian based ultra low precision quantization of bert. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8815-8821.
+Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642.
+Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645-3650, Florence, Italy. Association for Computational Linguistics.
+Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019. Patient knowledge distillation for BERT model compression. In Proceedings of the 2019 Conference on
+
+Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4323-4332, Hong Kong, China. Association for Computational Linguistics.
+Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. MobileBERT: a compact task-agnostic BERT for resource-limited devices. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2158-2170, Online. Association for Computational Linguistics.
+Vincent Vanhoucke, Andrew Senior, and Mark Z. Mao. 2011. Improving the speed of neural networks on cpus. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS 2011.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.
+Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.
+Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
+Lei Wu, Zhanxing Zhu, and Weinan E. 2017. Towards understanding generalization of deep learning: Perspective of loss landscapes. In ICML Workshop on Principled Approaches to Deep Learning (PADL).
+Mengzhou Xia, Zexuan Zhong, and Danqi Chen. 2022. Structured pruning learns compact and accurate models. In Association for Computational Linguistics (ACL).
+Canwen Xu, Wangchunshu Zhou, Tao Ge, Ke Xu, Julian McAuley, and Furu Wei. 2021. Beyond preserved accuracy: Evaluating loyalty and robustness of BERT compression. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10653-10659, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Peter K. Yu. 2020. The algorithmic divide and equality in the age of artificial intelligence. Florida Law Review, 72:331-89.
+
+Ofir Zafrir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. 2019. Q8bert: Quantized 8bit bert. 2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing - NeurIPS Edition (EMC2-NIPS), pages 36-39.
+Chunting Zhou, Jiatao Gu, and Graham Neubig. 2020. Understanding knowledge distillation in non-autoregressive machine translation. In International Conference on Learning Representations.
+Wenda Zhou, Victor Veitch, Morgane Austern, Ryan P. Adams, and Peter Orbanz. 2019. Non-vacuous generalization bounds at the imagenet scale: a PAC-bayesian compression approach. In International Conference on Learning Representations.
+
+# A Appendix
+
+# A.1 Extended Related Work
+
+Model compression in NLP Approaches for model compression aim to replicate the end-task performance of a large, accurate model while requiring fewer parameters and floating-point operations, and many approaches for model compression have been successfully applied to tasks in NLP. Specific model compression techniques include pruning, where individual model parameters (unstructured pruning) or entire weight matrices (structured pruning) are removed entirely (LeCun et al., 1989; Blalock et al., 2020; Sanh et al., 2020; Lagunas et al., 2021), quantization, where model weights, activations, gradients, and/or add-multiply accumulators are reduced in precision from 32-bit floating point representations to floating or fixed-point representations as low as one or two bits (Gray and Neuhoff, 1998; Vanhoucke et al., 2011; Gholami et al., 2021), and knowledge distillation where a smaller model is trained to replicate the predictions, and often intermediate embedded representations, of a larger model (Bucilua et al., 2006; Hinton et al., 2014; Sanh et al., 2019; Jiao et al., 2020).
+
+Unstructured pruning can achieve some of the highest sparsity levels using various criteria and schedules for determining which parameters to prune (Frankle and Carbin, 2019; Chen et al., 2020; Sanh et al., 2020; Guo et al., 2021), though sparsity patterns resulting from unstructured pruning often do not result in latency reduction on modern accelerator hardware. Work in structured pruning has explored removing entire parameter matrices such as self-attention heads, hidden units, and entire layers (Michel et al., 2019; Lagunas et al., 2021; Xia et al., 2022), with basic underlying hardware constraints in mind. Pruning is often combined with a distillation objective, which provides complementary gains, likely by reducing complexity of the dataset (Zhou et al., 2020).
+
+Distillation is a prominent, practical method for compression that is widely used in NLP. Work on distillation in NLP has focused largely on the task-agnostic setting of compressing general-purpose pre-trained models such as BERT (Sanh et al., 2019; Sun et al., 2019, 2020) but task-specific distillation has also been reported to work well (Jiao et al., 2020).
+
+Approaches for model quantization can be categorized into post-training quantization, where general-purpose models are quantized at test-time
+
+(Jacob et al., 2018; Bhandare et al., 2019; Kim et al., 2021), and quantization-aware training, where models incorporate simulated quantization error during training in order to learn more quantizable parameters (Zafrir et al., 2019; Bai et al., 2021). Quantization-aware training tends to lead to higher accuracy quantized inference, but post-training quantization can be applied on-the-fly to any model at inference time.
+
+In this work we experiment with a variety of compression methods including structured and unstructured pruning (Xia et al., 2022; Chen et al., 2020), and out-of-the-box INT8 post-training dynamic quantization, highlighting both practical (structured pruning, quantization) and theoretical (unstructured pruning) findings. While we do not experiment with distillation directly, our chosen structured pruning method also incorporates a distillation objective.
+
+Learning compressible models Most closely related to our work are methods for learning compressible models, and the study of what makes models more compressible. Quantization-aware training is an example of such training for compressibility, and training for sparsity using $\ell_0$ regularization (Louizos et al., 2018) is a parallel method for pruning.
+
+Learning sparse models from scratch has proven difficult, despite the fact that deep neural networks are vastly over-parameterized. Frankle and Carbin (2019) formalized the Lottery Ticket Hypothesis, which posits that large, overparameterized neural network models contain sparse subnetworks, or winning tickets, that can be trained from scratch (or close to it; see Frankle et al. (2020)) to match the end-task performance of the full model. This influential work has spurred much research into better understanding neural network models, including pre-trained language models, from the perspective of winning tickets (Chen et al., 2020; Renda et al., 2020b; Diffenderfer and Kailkhura, 2021) and how to leverage winning tickets to perform better model compression (Chen et al., 2021; Liang et al., 2021). Li et al. (2020) showed that larger pre-trained language models are more compressible than smaller ones, which they hypothesize is related to larger models being more likely to contain winning tickets.
+
+Flat minima in neural networks. Hochreiter and Schmidhuber (1997) were among the first to discuss the relationship between flat basins in the loss landscape and generalization in neural networks, defining a flat minimum as "a large connected region in weight space where the error remains approximately constant." We use the $\epsilon$ -sharpness definition of Keskar et al. (2017) which defines sharpness as maximum loss within a neighborhood bounded by $\epsilon$ . Others have used Hessian-based measures to identify high minima with high curvature (Chaudhari et al., 2017). Note that care is needed when making inferences based on current measurements of sharpness, which is an active area of research; it has been shown that flat minima defined in this way can be rescaled to sharp minima (and still generalize) (Dinh et al., 2017).
+
+Most previous results related to flat minima have focused on generalizability (Hao et al., 2019; Neyshabur et al., 2020). A common explanation for the good generalization of models converged to flat minima is that flatter models are less complex (Wu et al., 2017).
+
+Flat minima may be particularly of interest in NLP, where a pretrain-then-finetune paradigm is often employed to leverage general representations learned from an extensive pre-training process during a much shorter fine-tuning process on a more specific end task. Indeed, pre-training provides a flat prior that can provide benefits in the contexts of lifelong learning (Mehta et al., 2021) and generalization (Hao et al., 2019; Bahri et al., 2022).
+
+# A.2 Flat Minima: Additional details
+
+Stochastic Weight Averaging (SWA). We consider last $50\%$ of the model checkpoints for equal averaging. Specifically, for RTE, MRPC, STS-B, and CoLA we fine-tune for 10 epochs and average 5 checkpoints from epochs 6 to 10. For SST-2, QNLI, QQP, and MNLI we fine-tune for 3 epochs and retain checkpoints after every 0.5 epochs and equal average checkpoints after 2, 2.5, 3 epochs. Iz-mailov et al. (2018) suggests modified learning rate scheduler like cyclical or constant so that towards the later stage of training, the underlying optimizer explores diverse solutions and average over them would lead to flatter solution. To simulate this behavior, for all our SWA experiments, we set our initial learning rate to a high value of $8e - 5$ and linearly decay it to 0 with no warmup steps.
+
+Evaluating sharpness. Keskar et al. (2017) propose a computationally feasible metric for measuring the sharpness of a minimizer over an $\epsilon$ -neighborhood in the loss landscape. We report sharpness metrics (lower value means flatter low loss region) in Tables 3 and 4. Overall we see that SAM and SWA optimized models have significantly smaller sharpness values (or flatter low loss regions) as compared to vanilla Adam optimizer, thus, providing convincing evidence that these methods indeed find flatter solutions.
+
+Loss contours. In Figure 6, we visualize contour plots of the loss landscape for the QQP task to qualitatively compare the sharpness of the solutions that Adam- and SAM-optimized BERTbase models find. We observe that the SAM-optimized model sits in a noticeably flatter, wider basin than the Adam-optimized model when both are fitted with their respective classifier heads. These analyses verify that SAM indeed leads to flatter minima in comparison to Adam.
+
+# A.3 Non-Iterative Unstructured Magnitude Pruning
+
+In a preliminary analysis, we subject full-size finetuned $\mathrm{BERT}_{\mathrm{base}}$ models to one-shot unstructured magnitude pruning and evaluate on the same task without any subsequent training. Figure 7 displays development set accuracies at sparsity levels of increments of $5\%$ , up to $60\%$ of prunable parameters masked to 0. Accuracy values are plotted at averages over $n = 3$ seeds for each of the sparsity levels and GLUE tasks displayed (SST-2, QNLI, MRPC, RTE). Interestingly, even under this non-iterative pruning setting, performance does not drop off noticeably in either SAM or Adam-optimized models until at least around $30\%$ sparsity for the GLUE tasks displayed. Similarly to all iterative pruning settings we explore, models optimized with SAM retain full model size accuracies at higher sparsity levels than their Adam-optimized counterparts. The SAM-optimized RTE models at $35 - 40\%$ sparsity have higher accuracy than the full-sized uncompressed model, reminiscent of the pattern we observe in Figure 2a, albeit at lower sparsity levels.
+
+# A.4 Iterative Magnitude Pruning Reproducibility and Hyperparameters
+
+Following Chen et al. (2020), we use a maximum sequence length of 128, batch size of 32, learning rate to $2e - 5$ , and linear decay of learning rate from
+
+
epsilon (ε)
Dataset
MNLI
QQP
STS-B
QNLI
MRPC
RTE
SST-2
CoLA
5 × 10-3
Adam
28.33.6
34.75.0
160.934.3
30.16.0
50.824.8
49.05.6
29.910.3
38.33.7
SAM
14.20.9
9.31.7
45.81.3
17.83.4
40.45.3
28.48.7
13.72.3
29.49.8
1 × 10-3
Adam
5.00.6
6.50.9
11.83.1
6.62.5
6.11.0
11.93.0
4.50.6
9.52.2
SAM
2.60.2
1.90.3
4.50.4
3.51.3
7.00.7
4.32.4
2.20.1
6.82.8
5 × 10-4
Adam
2.30.2
3.40.2
4.31.2
3.21.5
2.80.3
6.52.1
2.30.4
5.62.0
SAM
1.30.1
0.90.1
1.90.3
1.50.3
3.20.6
2.11.1
1.00.1
3.41.4
+
+Table 3: Evaluating sharpness metric for vanilla Adam and SAM optimized models at full size (i.e., no compression). We observe that SAM optimized models have significantly lower sharpness values (lower corresponds to flatter minima) compared to vanilla Adam. These results provide quantitative evidence that SAM indeed leads to flatter loss basins.
+
+
epsilon (ε)
Dataset Sparsity
MNLI 70%
QQP 90%
STS-B 50%
QNLI 70%
MRPC 50%
RTE 60%
SST-2 60%
CoLA 50%
5 × 10-3
Adam
56.41.0
137.940.4
232.530.0
23.83.9
42.326.9
65.78.8
85.833.8
33.82.7
SAM
42.47.9
65.31.9
184.07.3
17.26.0
47.318.0
50.86.1
26.97.7
23.94.5
SWA
37.39.9
34.71.2
81.322.3
33.57.5
31.66.6
42.112.3
23.710.3
26.59.5
1 × 10-3
Adam
6.60.9
5.81.0
20.26.1
5.92.2
8.62.2
16.44.0
5.80.7
6.21.4
SAM
3.10.4
5.40.6
13.81.2
3.21.1
8.31.6
12.03.8
4.10.8
4.20.3
SWA
10.71.6
5.20.6
5.41.1
8.90.2
12.40.8
11.84.1
5.20.7
7.92.2
5 × 10-4
Adam
3.30.2
2.30.3
7.12.5
2.10.6
5.80.5
7.91.1
3.60.9
3.90.6
SAM
1.10.3
2.00.6
5.10.6
1.20.0
3.60.6
5.92.0
1.90.4
2.30.4
SWA
4.40.5
2.90.0
2.20.4
4.70.1
5.90.7
6.02.0
2.50.5
3.40.2
+
+Table 4: Evaluating sharpness metric for vanilla Adam, SAM and SWA optimized models at Chen et al. (2020)'s reference sparsities. In general we observe that SAM and SWA optimized models have lower sharpness values (lower corresponds to flatter minima) compared to vanilla Adam at various sparsity levels across different tasks (all results are averaged over 3 runs).
+
+
+Figure 6: Visualization of loss contours for QQP on BERTbase models finetuned on the task using Adam and SAM optimizers ( $w_{adam}$ and $w_{sam}$ ), as well as the pre-trained BERT base initialization ( $w_{init}$ ). On the left, all models are fitted with the linear classifier head originally trained with $w_{adam}$ , while on the right side models are fitted with the linear classifier head originally trained with $w_{sam}$ . The SAM-optimized model sits in a noticeably flatter, wider basin than the Adam-optimized model when both are fitted with their respective classifier heads. These results provide qualitative evidence that SAM indeed leads to flatter loss basins.
+
+
+
+initial value to zero with no warmup period. For tasks with smaller datasets (RTE, MRPC, CoLA, STS-B), we fine-tune models for 10 epochs, evaluate them after every epoch and retain the checkpoint yielding best task-specific performance on the hold-out validation set (whereas Chen et al.
+
+(2020) finetune for only 3 for all tasks). For tasks with comparatively larger datasets (MNLI, QQP, QNLI, SST-2), we fine-tune models for 3 epochs. We set Adam's weight decay to $\epsilon = 0$ in order to remove the potential confound of regularization on models' amenability to magnitude pruning. This
+
+
+Figure 7: SAM- and Adam-optimized models evaluated directly after pruning a proportion of parameters from the full-sized model. Models fine-tuned with SAM hold up better to non-iterative magnitude pruning as well.
+
+differs from Chen et al. (2020)'s $\epsilon = 1\times 10^{-8}$ , but we observed that our differences do not systematically affect the trends originally reported other than to improve full-size model performance and allow for a fairer comparison between SAM and Adam optimizers. In particular, training for only 3 epochs on the smaller tasks with our SAM optimizer does not allow models to converge.
+
+# A.5 Comparison with BERT Lottery Ticket Hypothesis Numbers
+
+We present development set numbers on GLUE and SQuAD tasks in Table 5. For comparison, we include the reference (Ref) metrics reported by Chen et al. (2020) at their reported winning ticket sparsity levels.
+
+# A.6 Additional Individual Task Plots for BERTbase IMP
+
+In Figure 8, we present the SQuAD plot comparing SAM- and Adam-optimized BERTbase models in the unstructured IMP setting in Figure 2, as well as versions of the GLUE plots in including SWA performance.
+
+Plots for $\mathrm{BERT}_{\mathrm{base}}$ trained on GLUE and SQuAD tasks with unstructured standard pruning are shown in Figure 9.
+
+# A.7 Is SAM just implicitly doing $\ell_1$ regularization?
+
+No. It is clear that $\ell_1$ regularization induces sparsity in a different way compared to SAM. Although in some cases $\ell_1$ regularization can help a model reach
+
+higher accuracies at certain sparsity levels, we observe that simply optimizing with Adam throughout an iterative pruning process does not allow the model to reach SAM-optimized models' compression performance. Moreover, $\ell_1$ regularization can actually hurt compression performance in some cases.
+
+Further investigation is needed to understand the specific mechanisms allowing SAM to induce greater compressibility in models.
+
+# A.8 Detailed Structure vs Optimization Results
+
+Table 6 contains numbers presented in Figure 3 from §3.4.2.
+
+Figure 12 presents the same information as Figure 3 in an alternative view, featuring colored bars representing ticket performance over different optimizers.
+
+# A.9 Ticket Transfer Experiments: Comparing Optimization Over Given Tickets
+
+Figure 4 in §3.4.3 allows us to evaluate SAM vs. vanilla Adam ticket transferability across GLUE tasks. Figure 13, on the other hand, allows us to evaluate SAM vs. vanilla Adam optimization over given tickets for different GLUE tasks.
+
+# A.10 Structured Pruning Reproducibility and Hyperparameters
+
+Following Xia et al. (2022), we train for 20 epochs each in the pruning and final fine-tuning stages. We use a sparsity epsilon value of 0.01, meaning that a model can be accepted if its actual sparsity level is within $1\%$ of the target sparsity level of $95\%$ .
+
+We pick hyperparameters based on a grid search over Xia et al. (2022)'s baseline implementation (with their Adam-optimized teacher models), reported in Table 7.
+
+For each task, we use the optimal $\lambda$ and final fine-tuning learning rates found via grid search using Xia et al. (2022)'s implementation, including configurations for finetuning teacher models (which used normal Adam optimizers). There are small discrepancies between the reference metric values reported and the values we were able to reproduce, possibly due to variation across random seeds. However, the relative performance of hyperparameter settings seems to be fairly consistent across random seeds.
+
+Although we do not conduct an additional full grid search for our comparison of compressed mod
+
+
Dataset Sparsity Metric
MNLI 70% Matched acc.
QQP 90% Acc.
STS-B 50% Pearson Cor.
QNLI 70% Acc.
MRPC 50% Acc.
RTE 60% Acc.
SST-2 60% Acc.
CoLA 50% Matthew's Cor.
SQuAD 40% F1
Full FT
Ref
82.40.5
90.20.5
88.40.3
89.11.0
85.20.1
66.23.6
92.10.1
54.50.4
88.10.6
Adam
84.70.4
91.00.1
89.00.1
91.50.1
85.51.7
67.61.5
92.70.1
58.10.4
88.50.2
SAM
85.30.2
91.10.1
89.40.2
91.30.6
87.00.7
67.00.8
93.10.6
59.50.4
89.20.1
IMP
Ref
82.60.2
90.00.2
88.20.2
88.90.4
84.90.4
66.02.4
91.90.5
53.80.9
87.70.5
Adam
82.60.3
85.00.2
88.30.2
88.60.1
84.20.1
63.70.9
91.70.4
56.41.4
86.90.3
SAM
83.50.1
84.90.1
88.90.2
89.60.1
85.81.0
71.81.7
93.20.4
56.10.9
87.80.2
±10%
Adam
82.10.3
87.20.3
88.30.2
88.00.2
83.70.4
64.20.2
91.50.5
55.00.4
86.80.4
SAM
83.10.1
87.40.4
88.80.2
89.10.2
85.50.5
68.90.4
92.90.3
56.10.2
88.30.5
Std
Ref
82.1
90.0
88.5
89.9
85.8
63.0
90.0
52.0
87.1
Adam
82.7
88.1
89.2
89.5
85.5
65.3
91.1
54.2
86.30.2
SAM
83.3
89.6
89.6
90.0
85.3
68.2
92.1
56.8
87.10.2
+
+Table 5: We report task metrics on the development set at Chen et al. (2020)'s reference sparsities for Adam and SAM-optimized BERT-base models in their (1) Iterative Magnitude Pruning (IMP) and (2) Std pruning settings. We include Chen et al. (2020)'s reference (Ref) metrics in addition to our reported metrics (Adam, SAM). When applicable, we report mean and standard deviation calculated over 3 random seeds.
+
+
Dataset
Ticket
Optim.
Accuracy
RTE (60%)
Random
Adam
54.91.3
SAM
55.41.6
Adam
Adam
63.70.9
SAM
61.72.4
SAM
Adam
70.21.9
SAM
71.81.7
MRPC (50%)
Random
Adam
70.80.8
SAM
70.10.2
Adam
Adam
84.20.1
SAM
85.30.5
SAM
Adam
85.31.3
SAM
85.70.8
SST-2 (60%)
Random
Adam
82.80.2
SAM
83.30.8
Adam
Adam
91.90.3
SAM
92.70.1
SAM
Adam
92.40.1
SAM
92.90.2
QNLI (70%)
Random
Adam
61.70.3
SAM
61.50.1
Adam
Adam
89.00.05
SAM
89.50.04
SAM
Adam
89.10.1
SAM
89.60.2
+
+els using Adam and SAM-optimized teacher models, we do find that the optimal final fine-tuning learning rates, which are much less computationally expensive to test, transfer to our experimental settings.
+
+# A.11 Detailed Quantization Results
+
+Table 8 contains the numbers used to generate Figure 5.
+
+Table 6: For RTE, MRPC, SST-2, and QNLI at their reference sparsity values, we fine-tune using 1) SAM and 2) Adam optimizers from pre-trained BERT-base initializations using only the remaining weights based on a) a Random mask, b) an Adam-learned mask, and c) a SAM-learned mask.
+
+
Dataset
λ
FT-LR
Teacher Acc.
Pruned Acc.
SST-2 (67k)
Ref.
-
-
93.1
90.6
Reprod.
0.9
3e-5
93.6
90.6
QNLI (105k)
Ref.
-
-
91.5
86.1
Reprod.
0.9
3e-5
91.9
86.5
QQP (364k)
Ref.
-
-
91.2
90.1
Reprod.
0.7
3e-5
91.3
89.9
MNLI (393k)
Ref.
-
-
84.8
80.6
Reprod.
0.7
3e-5
85.2
80.1
+
+Table 7: We report reproduced (Reprod.) and reference (Ref.) evaluation metrics at $95\%$ sparsity and optimal values for $\lambda$ and fine-tuning learning rate on select tasks from Xia et al. (2022)'s structured pruning setting. We used the same hyperparameters as reported otherwise (distillation temperature $t = 2$ , with 20 fine-tuning epochs after pruning, learning rate $= 2e - 5$ , and batch size $= 32$ ), and conducted our grid search over the same candidate values $\lambda \in \{0.1, 0.3, 0.5\}$ and FT-LR $\in \{1e - 5, 2e - 5, 3e - 5\}$
+
+# A.12 Results for Other BERT Models
+
+We investigate SAM's influence on amenability to sparsification in both $\mathrm{BERT}_{\text {large }}$ and $\mathrm{RoBERTa}_{\text {base }}$ models subject to iterative magnitude pruning (IMP). For consistency, we use the same hyperparameters as for the $\mathrm{BERT}_{\text {base }}$ set of experiments ( $\epsilon = 0$ weight decay; 10 (MRPC, RTE), 3 (SST-2, QNLI), or 2 (SQuAD) training epochs for each IMP iteration; linear learning rate decay schedules starting at $2e - 5$ (GLUE) or $3e - 5$ (SQuAD); batch size of 32 (GLUE) and 16 (SQuAD); maximum sequence length of 128 (GLUE) and 384 (SQuAD)). It is possible that a different set of hyperparameters would be optimal for these different models, but we also tried different numbers of training epochs for the less stable smaller GLUE tasks (5 and 3), as
+
+
+
+
+(b) MRPC
+
+
+(c) CoLA
+
+
+(a) RTE
+
+
+(e) SST-2
+
+
+(f)QNLI
+
+
+(d) STS-B
+(g)QQP
+
+
+(h) MNLI
+
+
+(i) SQuAD
+Figure 8: Individual plots showing sparsity vs. task metrics (validation set) for GLUE throughout IMP. The vertical lines and gray horizontal bands mark reference sparsity and "winning ticket" evaluation metric values that were obtained by Chen et al. (2020). The green horizontal bands mark the initial performance of our full fine-tuned (uncompressed) models.
+
+well as Liu et al. (2019)'s learning rate of $1.5e - 5$ for RoBERTa, and we generally found that simply matching hyperparameters from $\mathrm{BERT}_{\mathrm{base}}$ experiments worked well or better.
+
+Figures 14 and 15 show plots for $\mathrm{BERT}_{\text {large }}$ and $\mathrm{RoBERTa}_{\text {base }}$ models compressed with iterative magnitude pruning (IMP). We include our $\mathrm{BERT}_{\text {base }}$ model results with Chen et al. (2020)'s reference sparsity levels and accuracy ranges (which are likewise for $\mathrm{BERT}_{\text {base }}$ ) in the same plots for comparison.
+
+With the exception of RoBERTa $_{base}$ on MRPC, SAM-optimized models consistently fare better than Adam-optimized models in these other BERT variants as well. However, the comparison between
+
+BERT variants is more complex. While $\mathrm{BERT}_{large}$ and $\mathrm{RoBERT}_{base}$ models generally achieve higher initial performance compared to $\mathrm{BERT}_{base}$ and can maintain this higher performance at Chen et al. (2020)'s "winning ticket" sparsity levels (14b, 14d, 14e, 15b, 15e), the drop-off in performance does not always simply follow a parallel pattern. Initial higher performance tends to decrease more quickly with pruning than in $\mathrm{BERT}_{base}$ models, such that $\mathrm{BERT}_{large}$ and $\mathrm{RoBERT}_{base}$ performance sometimes falls to near (14c, 15c, 15d) or even below (14a, 15a) $\mathrm{BERT}_{base}$ performance by the time they approach "winning ticket" sparsity levels (which in reality provide an inherent advantage to the larger models that are left with a greater absolute number
+
+
+(a) RTE, Standard Pruning
+
+
+(b) MRPC, Standard Pruning
+
+
+
+
+(d) STS-B, Standard Pruning
+
+
+(e) SST-2, Standard Pruning
+
+
+(c) CoLA, Standard Pruning
+(f) QNLI, Standard Pruning
+
+
+(g) QQP, Standard Pruning
+
+
+(h) MNLI, Standard Pruning
+
+
+(i) SQuAD, Standard Pruning
+Figure 9: Individual plots showing sparsity vs. accuracy for GLUE tasks and SQuAD in BERTbase models compressed with standard pruning (IMP with no rewinding of weights). The vertical lines and gray horizontal bands mark reference sparsity and "winning ticket" evaluation metric values that were obtained by Chen et al. (2020). The green horizontal bands mark the initial performance of our full fine-tuned models.
+
+of parameters at the same sparsity levels).
+
+# A.13 Beyond task accuracy
+
+We evaluate full and pruned $\mathrm{BERT}_{\mathrm{base}}$ models optimized by vanilla Adam and SAM throughout IMP (with rewind) on Ribeiro et al. (2020)'s pre-curated test suites for sentiment analysis (SST-2), question paraphrase detection (QQP), and question answering (SQuAD). At this time, we do not explicitly make direct comparisons between SAM and vanilla Adam for unpruned and pruned models. A single test consists of multiple examples, and the $x$ -axes of the histograms in Figure 16 refer to the proportions of examples passed within each test.
+
+
+(a) RTE, IMP w/ $\ell_1$ Regularization
+
+
+(b) MRPC, IMP w/ $\ell_1$ Regularization
+
+
+(c) SST-2, IMP w/ $\ell_1$ Regularization
+
+
+(d) QNLI, IMP w/ $\ell_{1}$ Regularization
+Figure 10: Individual plots showing sparsity vs. accuracy for GLUE tasks in $\ell_1$ -regularized BERTbase models compressed with iterative magnitude pruning (IMP), with regular BERTbase models for comparison. The vertical lines and gray horizontal bands mark reference sparsity and "winning ticket" evaluation metric values that were obtained by Chen et al. (2020). The green horizontal bands mark the initial performance of our full fine-tuned models.
+
+
+(a) RTE, Standard Pruning w/ $\ell_1$ Regularization
+
+
+(b) MRPC, Standard Pruning w/ $\ell_1$ Regularization
+
+
+(c) SST-2, Standard Pruning w/ $\ell_1$ Regularization
+
+
+(d) QNLI, Standard Pruning w/ $\ell_1$ Regularization
+Figure 11: Individual plots showing sparsity vs. accuracy for GLUE tasks and SQuAD in BERTbase models trained with $\ell_1$ regularization during iterative compression with standard pruning, with BERTbase models trained without regularization during standard pruning for comparison. The vertical lines and gray horizontal bands mark reference sparsity and "winning ticket" evaluation metric values that were obtained by Chen et al. (2020). The green horizontal bands mark the initial performance of our full fine-tuned models.
+
+
+Learned Tickets vs Optimization Over Tickets (Alternative View)
+
+
+Figure 12: An alternative view of the same information contained in Figure 3. Note that, for example, in RTE, SAM tickets clearly outperform vanilla Adam tickets regardless of the optimizer used for the final fine-tuning.
+
+
+
+
+
+
+Figure 13: Heatmaps indicating the difference in target task performance between SAM and Adam optimizers during fine-tuning when transferring tickets across tasks. Values greater than 0 indicate the extent to which SAM optimizer worked better than Adam; Values less than 0 indicate where Adam optimizer worked better than SAM; Values close to 0 indicate little difference. Note that positive values along the diagonal indicate superior SAM ticket performance in the single task setting, even with "transfer" between optimizers.
+
+
+
+
+(a) RTE, IMP w/ BERTlarge
+
+
+(b) MRPC, IMP w/ BERTlarge
+
+
+
+
+(d) QNLI, IMP w/ BERTlarge
+
+
+(c) SST-2, IMP w/ BERTlarge
+(e) SQuAD, IMP w/ BERTlarge
+Figure 14: Individual plots showing sparsity vs. accuracy for GLUE tasks and SQuAD in $\mathrm{BERT}_{large}$ models compressed with iterative magnitude pruning (IMP), with $\mathrm{BERT}_{base}$ models for comparison. The vertical lines and gray horizontal bands mark reference sparsity and "winning ticket" evaluation metric values that were obtained by Chen et al. (2020). The green horizontal bands mark the initial performance of our full fine-tuned models.
+
+
+(a) RTE, IMP w/ RoBERTa $\mathsf{b}_{\text{base}}$
+
+
+(b) MRPC, IMP w/ RoBERTabase
+
+
+(c) SST-2, IMP w/ RoBERTabase
+
+
+(d) QNLI, IMP w/ RoBERTabase
+
+
+(e) SQuAD, IMP w/ RoBERTabase
+Figure 15: Individual plots showing sparsity vs. accuracy for GLUE tasks and SQuAD in RoBERTabase models compressed with iterative magnitude pruning (IMP), with BERTbase models for comparison. The vertical lines and gray horizontal bands mark reference sparsity and "winning ticket" evaluation metric values that were obtained by Chen et al. (2020). The green horizontal bands mark the initial performance of our full fine-tuned models.
+
+
+Proportions of Checklist tests passed for QQP, full vs $70\%$ sparse
+
+
+Proportions of Checklist tests passed for QQP, SAM vs. Adam
+
+
+Figure 16: Aggregated Checklist (Ribeiro et al., 2020) results for QQP models. Tests target various capabilities of models such as robustness to typos and simple coherence resolution. Note the concentration of frequencies near 0.0 and 1.0 for all models, as well as the shifts in frequencies when pruned for both vanilla Adam and SAM models; models can effectively lose or even gain specific capabilities throughout compression.
+
+
+
+
Dataset
QAT Ref.
Optim.
Full FT
Quantized
MNLI Acc.
N/A
Adam
84.420.37
78.244.00
SAM
84.680.13
83.470.11
QQP F1
87.96
Adam
87.980.12
85.351.93
SAM
88.200.08
86.850.28
STS-B Pearson
89.04
Adam
89.060.11
86.541.07
SAM
89.390.07
87.160.76
QNLI Acc.
90.62
Adam
91.490.09
89.050.37
SAM
91.330.58
89.830.54
MRPC F1
89.56
Adam
89.571.13
86.811.72
SAM
91.240.20
89.370.83
RTE Acc.
68.78
Adam
67.631.10
56.804.51
SAM
67.870.36
65.701.65
SST-2 Acc.
92.24
Adam
92.700.07
91.170.90
SAM
93.080.57
92.390.40
CoLA Matt.
58.48
Adam
60.470.55
55.992.86
SAM
59.090.72
54.880.78
SQuAD F1
87.74
Adam
89.200.12
80.131.85
SAM
89.200.12
84.920.55
+
+Table 8: We compare full fine-tuned and quantized $\mathrm{BERT}_{\text {base }}$ models optimized with SAM and Adam. Notably, applying a simpler post-training dynamic quantization technique on a SAM-optimized model can approach the reported (QAT ref) performance of a model quantized through quantization-aware training (Zafrir et al., 2019). These instances are bolded.
\ No newline at end of file
diff --git a/trainflatthencompresssharpnessawareminimizationlearnsmorecompressiblemodels/images.zip b/trainflatthencompresssharpnessawareminimizationlearnsmorecompressiblemodels/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a56958269b0724d718919fd12c7e57d1092d741b
--- /dev/null
+++ b/trainflatthencompresssharpnessawareminimizationlearnsmorecompressiblemodels/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b744ec4e8fe70ef3a7d5bfcc24d3dd2fb5e09ef4f69c8ee600a3dafaeafe6032
+size 1887966
diff --git a/trainflatthencompresssharpnessawareminimizationlearnsmorecompressiblemodels/layout.json b/trainflatthencompresssharpnessawareminimizationlearnsmorecompressiblemodels/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e2c7add010cab21a6724bb5c12a0b7eb1e6d013e
--- /dev/null
+++ b/trainflatthencompresssharpnessawareminimizationlearnsmorecompressiblemodels/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:817d26d6d209acbb290c90d5ac2cf8e0c1eda6c0a90e20c8896155b3c4555a06
+size 807487
diff --git a/transadvatranslationbasedadversariallearningframeworkforzeroresourcecrosslingualnamedentityrecognition/2464bc3b-d7e5-44d6-ab83-86881675c92e_content_list.json b/transadvatranslationbasedadversariallearningframeworkforzeroresourcecrosslingualnamedentityrecognition/2464bc3b-d7e5-44d6-ab83-86881675c92e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7771019efad43b1d20bbe57fc071481cdc8ababe
--- /dev/null
+++ b/transadvatranslationbasedadversariallearningframeworkforzeroresourcecrosslingualnamedentityrecognition/2464bc3b-d7e5-44d6-ab83-86881675c92e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8b5133c17a3f660138423bf49a9fee8e9032a1be0381d3fd6d8d50fc270e5e37
+size 61457
diff --git a/transadvatranslationbasedadversariallearningframeworkforzeroresourcecrosslingualnamedentityrecognition/2464bc3b-d7e5-44d6-ab83-86881675c92e_model.json b/transadvatranslationbasedadversariallearningframeworkforzeroresourcecrosslingualnamedentityrecognition/2464bc3b-d7e5-44d6-ab83-86881675c92e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..e938b0fcfccc0326b68ac33e90f88d96a42edc5c
--- /dev/null
+++ b/transadvatranslationbasedadversariallearningframeworkforzeroresourcecrosslingualnamedentityrecognition/2464bc3b-d7e5-44d6-ab83-86881675c92e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f07c4d23f9586f51376e99f39a0c877105473def54f78e2919ad18d83d62b168
+size 73266
diff --git a/transadvatranslationbasedadversariallearningframeworkforzeroresourcecrosslingualnamedentityrecognition/2464bc3b-d7e5-44d6-ab83-86881675c92e_origin.pdf b/transadvatranslationbasedadversariallearningframeworkforzeroresourcecrosslingualnamedentityrecognition/2464bc3b-d7e5-44d6-ab83-86881675c92e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..18360fbae54dbc961f3aa97a0a576f4254cc2864
--- /dev/null
+++ b/transadvatranslationbasedadversariallearningframeworkforzeroresourcecrosslingualnamedentityrecognition/2464bc3b-d7e5-44d6-ab83-86881675c92e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:04e4baacb312745c994294306696abc2bc369cea6b35564ab7ad0bd1797a1d69
+size 444242
diff --git a/transadvatranslationbasedadversariallearningframeworkforzeroresourcecrosslingualnamedentityrecognition/full.md b/transadvatranslationbasedadversariallearningframeworkforzeroresourcecrosslingualnamedentityrecognition/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3163d466a60c38be4bbdde72ef92b66c735a7457
--- /dev/null
+++ b/transadvatranslationbasedadversariallearningframeworkforzeroresourcecrosslingualnamedentityrecognition/full.md
@@ -0,0 +1,297 @@
+# TransAdv: A Translation-based Adversarial Learning Framework for Zero-Resource Cross-Linguual Named Entity Recognition
+
+Yichun Zhao $^{1}$ , Jintao Du $^{2}$ , Gongshen Liu $^{1*}$ , Huijia Zhu $^{2}$
+
+$^{1}$ Shanghai Jiao Tong University
+
+2 Tiansuan Lab, Ant Group Co., Ltd.
+
+$^{1}\{zhaoyichun, lgshen\} @sjtu.edu.cn$
+
+2{lingke.djt, huijia.zhj}@antgroup.com
+
+# Abstract
+
+Zero-Resource Cross-Linguual Named Entity Recognition aims at training an NER model of the target language using only labeled source language data and unlabeled target language data. Existing methods are mainly divided into three categories: model transfer based, data transfer based and knowledge transfer based. Each method has its own disadvantages, and combining more than one of them often leads to better performance. However, the performance of data transfer based methods is often limited by inevitable noise in the translation process. To handle the problem, we propose a framework named TransAdv to mitigate lexical and syntactic errors of word-by-word translated data, better utilizing the data by multi-level adversarial learning and multi-model knowledge distillation. Extensive experiments are conducted over 6 target languages with English as the source language, and the results show that TransAdv achieves competitive performance to the state-of-the-art models.
+
+# 1 Introduction
+
+Named Entity Recognition (NER) is a fundamental task that aims to locate named entities in a given sentence and assign them to predefined types, i.e., person, location, organization, etc. In recent years, neural NER models have achieved remarkable performance on this task with a large amount of labeled data. However, many low-resource languages do not have enough data for supervised learning. Therefore, transferring labeled data or trained models from high-resource to low-resource languages is gaining increasing attention.
+
+In this paper, we concentrate on zero-resource cross-lingual NER where no labeled data in the target language is available. Existing methods fall into three main categories: i) model transfer based methods (Wu and Dredze, 2019; Wu et al., 2020c), which train a source model on the labeled source
+
+language data to learn language-independent features and then directly apply it to the target language; ii) data transfer based methods (Mayhew et al., 2017; Xie et al., 2018), which translate the labeled source language data and map all entity labels to generate pseudo target language data; iii) knowledge transfer based methods (Wu et al., 2020a; Chen et al., 2021), which train a source model on the labeled source language data and then apply it over the unlabeled target language data to distill a student model.
+
+Each kind of method has its drawbacks, (Wu et al., 2020b) is the first to unify the three kinds of methods with great success. However, the noise in the translation process significantly limits its performance. There are two common translation strategies for cross-lingual NER tasks: i) sentence translation followed by entity alignment, where the propagation of entity alignment errors is inevitable; ii) directly word-by-word translation (Wu et al., 2020b), where the generated sentence is noisy in terms of word order.
+
+To better utilize the translated data, we propose a translation-based adversarial learning framework named TransAdv for zero-resource cross-lingual NER, and the overview architecture is shown in Figure 1. The contributions of our work can be summarized as follows:
+
+- We better unify data transfer and knowledge transfer for cross-lingual NER, mitigating lexical and syntactic errors of word-by-word translated data through multi-level adversarial learning and multi-model knowledge distillation.
+- We conduct extensive experiments over 6 target languages with English as the source language, and the results validate the effectiveness and reasonableness of our model.
+
+
+Figure 1: The overall architecture of our proposed TransAdv.
+
+# 2 Methodology
+
+# 2.1 Task Definition
+
+Following previous works, we formulate cross-lingual NER as a standard sequence labeling problem. When given an input sentence $\pmb{x} = \{x_{1}, x_{2}, \dots, x_{n}\}$ of length $n$ , a model aims to extract all named entities appearing in the sentence by generating a sequence of labels $\pmb{y} = \{y_{1}, y_{2}, \dots, y_{n}\}$ over the set of entity labels $\mathcal{V}$ . We denote the labeled source language data as $\mathcal{D}_S = \{(\pmb{x}, \pmb{y})\}$ and the unlabeled target language data as $\mathcal{D}_T^u = \{\tilde{\pmb{x}}\}$ . In the case of zero-resource cross-lingual NER, a model is trained with $\mathcal{D}_S$ and $\mathcal{D}_T^u$ , then evaluated on the labeled test data of the target language.
+
+# 2.2 Data Creation
+
+In this section, we construct multiple datasets based on the labeled source language data as shown in Figure 2.
+
+Following (Wu et al., 2020b), we apply MUSE (Lample et al., 2018) to translate a source language sentence $x_{S}$ into a target language sentence $x_{T}$ word-by-word. The entity label of each source language word is then directly copied to its corresponding translated word. Since MUSE has inevitable translation errors that it may not strictly translate each word into the target language, we also try Google translate API for more accurate word-by-word translation. After word-by-word translating and copying entity labels, we construct a pseudo target language training dataset $\mathcal{D}_T$ from $\mathcal{D}_S$ .
+
+(Zhang et al., 2021a) propose an aspect code-switching mechanism to augment the training data for cross-lingual aspect-based sentiment analysis. In this section, we apply a similar mechanism to
+
+switch named entities between the source and translated sentences to construct two bilingual sentences: $x_{S}^{swi}$ is derived from $x_{S}$ with named entities in $x_{T}$ , and $x_{T}^{swi}$ is derived from $x_{T}$ with named entities in $x_{S}$ . The entity label of each word in $x_{S}^{swi}$ and $x_{T}^{swi}$ is also the same as its corresponding word in $x_{S}$ and $x_{T}$ benefiting from word-by-word translation. Therefore, we can construct two bilingual datasets $\mathcal{D}_{S}^{swi}$ and $\mathcal{D}_{T}^{swi}$ .
+
+Due to the difference between the word orders of the source language and the target language, we also design a word shuffling method for NER data. Since NER is a coarse-grained sequence labeling task, completely shuffling all words in a sentence will affect the internal relations of the words within entities. Therefore, we separately shuffle the words in each entity or each context between entities, with all entity labels retained. For sentences $x_{S}$ and $x_{T}$ , two shuffled sentences are denoted as $x_{S}^{shu}$ and $x_{T}^{shu}$ . Based on $\mathcal{D}_S$ and $\mathcal{D}_T$ , we can build two shuffled datasets $\mathcal{D}_S^{shu}$ and $\mathcal{D}_T^{shu}$ .
+
+# 2.3 Multi-Level Adversarial Learning for Cross-Lingual NER
+
+In cross-lingual tasks, the source and the target language usually have differences in lexical and syntactic features. To avoid the model overfitting on the source language data and make the model better fine-tuned on the word-by-word translated target language data, we follow (Chen et al., 2021) and propose a multi-level adversarial network. It is formulated as a multi-task problem with NER, word-level language classification and sentence-level order classification. Different modules in the network and their loss functions are defined as below:
+
+Generator We choose multilingual BERT (Devlin et al., 2019) as the generator and feed a given sen
+
+
+Figure 2: The process of data creation.
+
+tence $\pmb{x} = \{x_{1}, x_{2}, \dots, x_{n}\}$ into mBERT to generate the feature vectors $\pmb{h} = \{h_{1}, h_{2}, \dots, h_{n}\}$ :
+
+$$
+\boldsymbol {h} = \operatorname {m B E R T} (\boldsymbol {x}) \tag {1}
+$$
+
+NER Classifier We feed $h$ into a fully-connected layer followed by a softmax layer to yield a probability distribution over the entity label set $\mathcal{Y}$ :
+
+$$
+p _ {i} ^ {n e r} = \operatorname {s o f t m a x} \left(\boldsymbol {W} ^ {n e r} h _ {i} + \boldsymbol {b} ^ {n e r}\right) \tag {2}
+$$
+
+where $h_i \in \mathbb{R}^{d_g}$ denotes the feature vector of the $i$ th word with $d_g$ being the dimension of $\pmb{h}$ , $p_i^{ner} \in \mathbb{R}^{|Y|}$ , $W^{ner} \in \mathbb{R}^{|Y| \times d_g}$ and $b^{ner} \in \mathbb{R}^{|Y|}$ .
+
+Language Discriminator We feed $h$ into two fully-connected layers followed by a sigmoid layer to classify the language of each word:
+
+$$
+p _ {i} ^ {l} = \operatorname {s i g m o i d} \left(\boldsymbol {W} _ {1} ^ {l} \operatorname {R e L U} \left(\boldsymbol {W} _ {2} ^ {l} h _ {i}\right)\right) \tag {3}
+$$
+
+where $p_i^l\in \mathbb{R}^1$ $W_{1}^{l}\in \mathbb{R}^{1\times d_{l}}$ and $W_{2}^{l}\in \mathbb{R}^{d_{l}\times d_{g}}$ with $d_{l}$ being the hidden dimension of the language discriminator.
+
+Order Discriminator We first feed $h$ into a one-layer LSTM to encode sequence features of the sentence, then the hidden state of the last word is fed into a fully-connected layer followed by a sigmoid layer to classify the order of the sentence:
+
+$$
+h _ {i} ^ {\prime} = \overrightarrow {L S T M} \left(h _ {i}, h _ {i - 1} ^ {\prime}\right) \tag {4}
+$$
+
+$$
+p ^ {o} = \operatorname {s i g m o i d} \left(\boldsymbol {W} ^ {o} h _ {n} ^ {\prime}\right)
+$$
+
+where $p^o\in \mathbb{R}^1$ $W^{o}\in \mathbb{R}^{1\times d_{o}}$ with $d_{o}$ being the hidden dimension of the order discriminator.
+
+During training, different datasets will first be fed into mBERT separately as shown in Figure 1, then the generated $h$ will be sent to the corresponding module. We have a total of 4 loss functions: the NER task loss $\mathcal{L}^{ner}$ , the language discriminator
+
+loss $\mathcal{L}^l$ , the order discriminator loss $\mathcal{L}^o$ , and the generator loss $\mathcal{L}^g$ :
+
+$$
+\mathcal {L} ^ {n e r} = - \sum_ {i = 1} ^ {n} \sum_ {k \in \mathcal {Y}} I \left(y _ {i} ^ {n e r} = k\right) \log \left(p _ {i} ^ {n e r}\right)
+$$
+
+$$
+\mathcal {L} ^ {l} = - \sum_ {i = 1} ^ {n} \left[ y _ {i} ^ {l} \log \left(p _ {i} ^ {l}\right) + \left(1 - y _ {i} ^ {l}\right) \log \left(1 - p _ {i} ^ {l}\right) \right]
+$$
+
+$$
+\mathcal {L} ^ {o} = - \left[ y ^ {o} \log \left(p ^ {o}\right) + \left(1 - y ^ {o}\right) \log \left(1 - p ^ {o}\right) \right]
+$$
+
+$$
+\begin{array}{l} \mathcal {L} ^ {g} = - \sum_ {i = 1} ^ {n} \left[ y _ {i} ^ {l} \log \left(1 - p _ {i} ^ {l}\right) + \left(1 - y _ {i} ^ {l}\right) \log \left(p _ {i} ^ {l}\right) \right] \\ - \left[ y ^ {o} \log \left(1 - p ^ {o}\right) + \left(1 - y ^ {o}\right) \log \left(p ^ {o}\right) \right] \tag {5} \\ \end{array}
+$$
+
+where $y_{i}^{ner}$ and $y_{i}^{l}$ denote the ground truth entity tag and language tag of the word $x_{i}$ , $y^{o}$ denotes the ground truth order tag of the sentence $x$ .
+
+Similarly to (Chen et al., 2021), for the NER task, the parameters of the generator and the NER classifier are updated based on $\mathcal{L}^{ner}$ ; for the adversarial task, the parameters of two discriminators are updated based on $\mathcal{L}^l$ and $\mathcal{L}^o$ respectively, while the parameters of the generator are updated based on $\mathcal{L}^g$ . Finally, we denote the trained source model as $\Theta_{src}$ .
+
+# 2.4 Multi-Model Knowledge Distillation on Unlabeled Data
+
+Based on $\Theta_{src}$ , we further fine-tune it on different datasets to derive teacher models with different emphases. Actually, three combinations of the datasets constructed in section 2.2 are considered in our network, including $\mathcal{D}_{entity} = \mathcal{D}_T \cup \mathcal{D}_T^{swi}$ , $\mathcal{D}_{context} = \mathcal{D}_T \cup \mathcal{D}_S^{swi}$ and $\mathcal{D}_{order} = \mathcal{D}_T \cup \mathcal{D}_T^{shu}$ .
+
+$D_{entity}$ contains the word-by-word translated dataset $D_T$ to involve knowledge of the target language and the code-switch dataset $D_T^{swi}$ to share the same contexts but entities with different languages. $D_{context}$ contains $D_T$ and the code-switch
+
+dataset $\mathcal{D}_S^{swi}$ to share the same entities but contexts with different languages. $\mathcal{D}_{order}$ contains $\mathcal{D}_T$ and the shuffled dataset $\mathcal{D}_T^{shu}$ to share the same sentences but different orders of words. An entity-enhanced teacher model $\Theta_{entity}$ , a context-enhanced teacher model $\Theta_{context}$ and an order-enhanced teacher model $\Theta_{order}$ are derived by finetuning $\Theta_{src}$ on $\mathcal{D}_{entity}$ , $\mathcal{D}_{context}$ and $\mathcal{D}_{order}$ with the same loss function $\mathcal{L}^{ner}$ as in Eq. 5.
+
+During fine-tuning, the language discriminator trained in section 2.3 is also loaded to continue adversarial fine-tuning with $\Theta_{\text{entity}}$ and $\Theta_{\text{context}}$ while the adversarial strategy is more fine-grained. For $\Theta_{\text{entity}}$ we only discriminate languages of entity words and for $\Theta_{\text{context}}$ we only discriminate languages of context words. The new language discriminator losses are shown as in Eq. 6. These two discriminators are adapted to characteristics of $\mathcal{D}_{\text{entity}}$ and $\mathcal{D}_{\text{context}}$ with the aim of enabling $\Theta_{\text{entity}}$ and $\Theta_{\text{context}}$ to better fuse representations of entity or context words respectively of the source and the target languages.
+
+$$
+\mathcal {L} ^ {e l} = - \sum_ {x _ {i} \in e n t i t y} [ y _ {i} ^ {l} \log (p _ {i} ^ {l}) + (1 - y _ {i} ^ {l}) \log (1 - p _ {i} ^ {l}) ]
+$$
+
+$$
+\mathcal {L} ^ {c l} = - \sum_ {x _ {i} \in c o n t e x t} [ y _ {i} ^ {l} l o g (p _ {i} ^ {l}) + (1 - y _ {i} ^ {l}) l o g (1 - p _ {i} ^ {l}) ]
+$$
+
+(6)
+
+We then implement a multi-model distillation on the unlabeled target language dataset $\mathcal{D}_T^u$ . Let $\tilde{x}_i$ denotes the $i$ th word in an unlabeled sentence $\tilde{\boldsymbol{x}} \in \mathcal{D}_T^u$ and $p^{ner}(\tilde{x}_i, \Theta)$ denotes the probability distribution predicted by model $\Theta$ . We combine soft labels generated by $\Theta_{src}$ and three enhanced teacher models to obtain the united soft label:
+
+$$
+\begin{array}{l} p ^ {u n i} (\tilde {x} _ {i}) = w _ {1} p ^ {n e r} (\tilde {x} _ {i}, \Theta_ {s r c}) + w _ {2} p ^ {n e r} (\tilde {x} _ {i}, \Theta_ {e n t i t y}) \\ + w _ {3} p ^ {\text {n e r}} \left(\tilde {x} _ {i}, \Theta_ {\text {c o n t e x t}}\right) + w _ {4} p ^ {\text {n e r}} \left(\tilde {x} _ {i}, \Theta_ {\text {o r d e r}}\right) \tag {7} \\ \end{array}
+$$
+
+where $w_{k}$ is the weight for each model.
+
+Finally, we distill a student model $\Theta_{stu}$ by minimizing the mean squared error (MSE) between $p^{\text{uni}}(\tilde{x}_i)$ and the probability distribution predicted by $\Theta_{stu}$ :
+
+$$
+\mathcal {L} ^ {k d} = \frac {1}{n} \sum_ {i = 1} ^ {n} M S E \left(p ^ {\text {u n i}} \left(\tilde {x} _ {i}\right), p ^ {\text {n e r}} \left(\tilde {x} _ {i}, \Theta_ {\text {s t u}}\right)\right) \tag {8}
+$$
+
+For inference on the labeled test data of the target language, we only employ the distilled student model $\Theta_{stu}$ .
+
+# 3 Experiments
+
+# 3.1 Baselines
+
+We compare our model with the following zeroresource cross-lingual NER models to evaluate the performance of TransAdv: mBERT-FT (Wu and Dredze, 2019) fine-tune the multilingual BERT. AdvCE (Keung et al., 2019) improve upon mBERT's performance via adversarial learning. TSL (Wu et al., 2020a) propose a teacher-student learning method. Unitrans (Wu et al., 2020b) propose an approach to unify both model and data transfer. RIKD (Liang et al., 2021) propose a reinforced knowledge distillation framework. AdvPicker (Chen et al., 2021) attempt to select language-independent data by adversarial learning. TOF (Zhang et al., 2021b) design a target-oriented finetuning framework to exploit various data.
+
+# 3.2 Datasets and Metrics
+
+(a) Statistics of CoNLL.
+
+
Language
Type
Train
Dev
Test
English-en
+(CoNLL-2003)
Sentence
14987
3466
3684
Entity
23499
5942
5648
German-de
+(CoNLL-2003)
Sentence
12705
3068
3160
Entity
11851
4833
3673
Spanish-es
+(CoNLL-2002)
Sentence
8323
1915
1517
Entity
18798
4351
3558
Dutch-nl
+(CoNLL-2002)
Sentence
15806
2895
5195
Entity
13344
2616
3941
+
+(b) Statistics of WikiAnn.
+
+
Language
Type
Train
Dev
Test
English-en
Sentence
20000
10000
10000
Entity
27931
14146
13958
Arabic-ar
Sentence
20000
10000
10000
Entity
22500
11266
11259
Hindi-hi
Sentence
5000
1000
1000
Entity
6124
1226
1228
Chinese-zh
Sentence
20000
10000
10000
Entity
25031
12493
12532
+
+Table 1: Statistics of the datasets.
+
+We conducted experiments on the following NER benchmark datasets: CoNLL-2002 (Sang and Erik, 2002) for Spanish[es] and Dutch[nl], CoNLL-2003 (Sang and De Meulder, 2003) for English[en] and German[de], and WikiAnn (Pan
+
+et al., 2017) for English[en], Arabic[ar], Hindi[hi] and Chinese[zh]. Each dataset of a certain language is split into train, dev and test sets and statistics of all datasets are shown in Table 1. All datasets are annotated with 4 entity types: LOC, MISC, ORG and PER, using the BIO entity labeling scheme.
+
+Following previous work (Sang and Erik, 2002), we employ entity level F1 score as the metric of evaluation. We run each experiment 5 times with different random seeds and report the average F1 score on the test set for reproducibility. More details of implementation details and model analysis are in appendix A and B.
+
+# 3.3 Main Results
+
+(a) Results on CoNLL.
+
+
Model
es
nl
de
Average
mBERT-FT
74.96
77.57
69.56
73.57
AdvCE
74.3
77.6
71.9
74.60
TSL
76.94
80.89
73.22
77.02
RIKD
77.84
82.46
75.48
78.59
AdvPicker
79.00
82.90
75.01
78.97
Unitrans
79.31
82.90
74.82
79.01
TOF
80.35
82.79
76.57
79.90
TransAdv
80.93
83.78
75.52
80.08
+
+(b) Results on WikiAnn.
+
+
Model
ar
hi
zh
Average
mBERT-FT
42.30
67.60
52.90
54.27
TSL
43.12
69.54
48.12
53.59
RIKD
45.96
70.28
50.40
55.55
TransAdv
42.53
74.24
54.25
57.01
+
+Table 2: Results of TransAdv and baselines. (F1%). All results are from original papers or the paper of RIKD.
+
+The main results of baselines and TransAdv on CoNLL and WikiAnn are shown in Table 2. According to the results, TransAdv outperforms all baselines, proving our model's effectiveness.
+
+In general, due to the strong effect of knowledge distillation, knowledge transfer based methods such as TSL, RIKD, AdvPicker and our TransAdv significantly surpass model transfer based methods like mBERT-FT and AdvCE which directly apply the model to the target language.
+
+For western languages in CoNLL, TransAdv achieves $1.62\%$ , $0.88\%$ and $0.7\%$ absolute gain
+
+of F1 scores over Unitrans which also employs word-by-word translation. Despite using the same translation resources, our model still has a significant improvement over it, which may due to the adversarial network mitigating the lexical and syntactic errors of the translated data. Compared with TOF, which is the state-of-the-art model, TransAdv achieves an absolute F1 scores increase of $0.58\%$ , $0.99\%$ on es, $nl$ and decrease of $1.05\%$ on de. TOF requires extra labeled data of Machine Reading Comprehension (MRC) for both the source language and target language, which is costly and not in a strictly zero-resource case. For many low-resource languages, word-by-word translation is much more available than labeled MRC data.
+
+As for non-western languages in WikiAnn, TransAdv also shows significant improvements over the baselines on $hi$ , $zh$ . We even achieve $0.68\%$ absolute F1 scores gain on $zh$ over mBERT-FT, which re-tokenizes the Chinese dataset and obtains relatively high results.
+
+# 4 Conclusion
+
+In this paper, we propose a framework named TranAdv for zero-resource cross-lingual NER, which mitigates lexical and syntactic errors of word-by-word translated data and better utilizes it through multi-level adversarial learning and multi-model knowledge distillation. We evaluate TransAdv over 6 target languages with English as the source language. Experimental results show that TransAdv achieves competitive performance to the state-of-the-art models.
+
+# 5 Limitations
+
+Although word-by-word translation data is easy to obtain in most cases, high-quality translation models are not available for some low-resource languages that are extremely short of parallel corpora. Moreover, when the difference in word order between the source and target languages is slight, adversarial training of word order may result in the loss of valid order information.
+
+# Acknowledge
+
+This research work has been sponsored by Ant Group Security and Risk Management Fund, the Joint Funds of the National Natural Science Foundation of China (Grant No. U21B2020), and Shanghai Science and Technology Plan (Grant No. 22511104400).
+
+# References
+
+Weile Chen, Huiqiang Jiang, Qianhui Wu, Borje Karlsson, and Yi Guan. 2021. Advpicker: Effectively leveraging unlabeled data via adversarial discriminator for cross-lingual ner. In Proc. of ACL.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proc. of NAACL.
+Phillip Keung, Yichao Lu, and Vikas Bhardwaj. 2019. Adversarial learning with contextual embeddings for zero-resource cross-lingual classification and ner. In Proc. of EMNLP.
+Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In Proc. of ICLR.
+Shining Liang, Ming Gong, Jian Pei, Linjun Shou, Wanli Zuo, Xianglin Zuo, and Daxin Jiang. 2021. Reinforced iterative knowledge distillation for crosslingual named entity recognition. In Proc. of KDD.
+Ilya Loshchilov and Frank Hutter. 2018. Fixing weight decay regularization in adam. arXiv preprint arXiv:1711.05101.
+Stephen Mayhew, Chen-Tse Tsai, and Dan Roth. 2017. Cheap translation for cross-lingual named entity recognition. In Proc. of EMNLP.
+Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross-lingual name tagging and linking for 282 languages. In Proc. of ACL.
+Erik Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proc. of AACL.
+Tjong Kim Sang and F Erik. 2002. Introduction to the conll-2002 shared task: language-independent named entity recognition. In Proceedings of CoNLL-2002/Roth, Dan edit.
+Qianhui Wu, Zijia Lin, Börje Karlsson, Jian-Guang Lou, and Biqing Huang. 2020a. Single-/multi-source cross-lingual ner via teacher-student learning on unlabeled data in target language. In Proc. of ACL.
+Qianhui Wu, Zijia Lin, Borje F Karlsson, Biqing Huang, and Jianguang Lou. 2020b. Unitrans: Unifying model transfer and data transfer for cross-lingual named entity recognition with unlabeled data. In Proc. of IJCAI.
+Qianhui Wu, Zijia Lin, Guoxin Wang, Hui Chen, Borje F Karlsson, Biqing Huang, and Chin-Yew Lin. 2020c. Enhanced meta-learning for cross-lingual named entity recognition with minimal resources. In Proc. of AAAI.
+
+Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of bert. In Proc. of EMNLP.
+Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A Smith, and Jaime G Carbonell. 2018. Neural crosslingual named entity recognition with minimal resources. In Proc. of EMNLP.
+Wenxuan Zhang, Ruidan He, Haiyun Peng, Lidong Bing, and Wai Lam. 2021a. Cross-lingual aspect-based sentiment analysis with aspect term code-switching. In Proc. of EMNLP.
+Ying Zhang, Fandong Meng, Yufeng Chen, Jinan Xu, and Jie Zhou. 2021b. Target-oriented fine-tuning for zero-resource named entity recognition. In Proc. of ACL Findings.
+
+# A Implementation Details
+
+We implement TransAdv with Pytorch 1.10.21. For word-by-word translation, we employ MUSE2 (Lample et al., 2018) based on fastText3 monolingual word embeddings for western languages of es, nl, de and Google Translate API4 for non-western languages of ar, hi, zh. For pretrained model, the cased multilingual BERT (Devlin et al., 2019) with 12 stacked Transformer blocks, 12 attention heads and 768 hidden dimensions is implemented based on HuggingFace's Transformers5.
+
+We empirically set the hyper-parameters of TransAdv and employ them in all experiments. Following previous work (Wu et al., 2020c), we freeze the parameters of the embedding layer and the bottom three layers of the multilingual BERT used in our model. We train all models for 10 epochs with the batch size of 32, the maximum sequence length of 128 and the dropout rate of 0.1, saving the model with the best performance on the dev set of the target language. We use AdamW (Loshchilov and Hutter, 2018) as the optimizer with the weight decay of 0.01 and the warmup rate of 0.05. For sequence prediction, we apply Viterbi decoding to generate the predicted results. In multi-level adversarial learning, the learning rate is set to 6e-5 for $\mathcal{L}^{ner}$ , 6e-7 for $\mathcal{L}^g$ and 5e-3 for $\mathcal{L}^l$ , $\mathcal{L}^o$ . The hidden dimensions of the language discriminator and order discriminator are set to 500 and 256 respectively. In multi-model knowledge distillation, hyper-parameters of corresponding modules
+
+in three enhanced teacher models are the same as in the source model, and the student model is trained with the learning rate of 6e-5 for $\mathcal{L}^{kd}$ . The weights of four models in Eq. 7 are all set to 1/4.
+
+All experiments are conducted on a Nvidia RTX 3090 GPU (24GB). $\Theta_{src}$ trains in $\approx 60\mathrm{min}$ with 179.17M parameters, three enhanced teacher models train in $\approx 30\mathrm{min}$ , 30min, 16min with 178.25M, 178.25M, 177.86M parameters respectively, and $\Theta_{stu}$ trains in $\approx 23\mathrm{min}$ with 177.86M parameters.
+
+# B Model Analysis
+
+# B.1 Ablation Study
+
+To verify the validity of different modules in the proposed model, we introduce the following variants of TransAdv to further carry out an ablation study: 1) TransAdv w/o LDIS and TransAdv w/o ODIS, which remove the language discriminator or the order discriminator respectively during multilevel adversarial learning. Moreover, when the language discriminator is removed, the entity language discriminator and the context language discriminator during the adversarial learning of $\Theta_{\text{entity}}$ and $\Theta_{\text{context}}$ are also removed. 2) TransAdv w/o $\Theta_{\text{entity}}$ , TransAdv w/o $\Theta_{\text{context}}$ and TransAdv w/o $\Theta_{\text{order}}$ , which remove the corresponding teacher model during multi-model knowledge distillation. 3) TransAdv w/o MLADV, which removes the multi-level adversarial learning module with the $\Theta_{\text{src}}$ directly trained on $\mathcal{D}_S$ 4) TransAdv w/o MMKD, which removes the multi-model knowledge distillation module and then the student model is directly distilled with $\Theta_{\text{src}}$ .
+
+
Methods
es
nl
de
Average
TransAdv
80.93
83.78
75.52
80.08
w/o LDIS
80.70
83.43
75.19
79.77 (0.31↓)
w/o ODIS
80.73
83.38
75.31
79.81 (0.27↓)
w/o Θentity
80.31
83.59
74.91
79.60 (0.48↓)
w/o Θcontext
80.16
83.26
75.20
79.54 (0.54↓)
w/o Θorder
80.38
83.10
75.26
79.58 (0.50↓)
w/o MLADV
80.18
83.31
75.16
79.55 (0.53↓)
w/o MMKD
77.34
81.60
74.14
77.69 (2.39↓)
+
+Table 3: Results of ablation study for TransAdv (F1%).
+
+The performance of each variant compared to TransAdv is shown in Table 3. From the results, we can draw out the following inferences:
+
+1) Comparing TransAdv with TransAdv w/o LDIS and TransAdv w/o ODIS, we can see the
+
+performance drops. It confirms the effectiveness of the two discriminators that they may avoid the model overfitting on the source language and make the model better fine-tuned on the word-by-word translated target language data.
+
+2) We observe that the performance of TransAdv outperforms the performance of TransAdv w/o $\Theta_{entity}$ , TransAdv w/o $\Theta_{context}$ and TransAdv w/o $\Theta_{order}$ , showing that teacher models derived from different combinations of datasets may have different emphases on improving the robustness of the entire model.
+3) TransAdv w/o MLADV and TransAdv w/o MMKD both significantly decline in performance compared with TransAdv, which illustrates that the two main modules both play essential roles in TransAdv.
+
+# B.2 Analysis of Translation Strategies
+
+To evaluate the impact of different translation strategies on TransAdv, we introduce the following translation methods: 1) MUSE: use the same word-by-word translation as (Wu et al., 2020b) based on fastText monolingual word embeddings. 2) Google Word: use Google translate API to translate the sentence word-by-word. 3) Google Phrase: split a sentence into phrases based on entity labels and then use Google translate API to translate the sentence phrase-by-phrase. 4) Google Word&Phrase: split a sentence into phrases based on entity labels and then use Google translate API to translate the sentence word-by-word for context phrases and phrase-by-phrase for entity phrases.
+
+The comparison of different translation strategies for each language is shown in Figure 4. We observe that for western languages in CoNLL, models with MUSE obtain the best F1 score on $es$ , $nl$ and the second best F1 score on $de$ ; for non-western languages in WikiAnn, models with Google Word obtain the best F1 score on $ar$ , $hi$ and the second best F1 score on $zh$ . It may be because when English is the source language, for western languages there are many word anchors that can be shared, and using noisier MUSE can obtain more diverse translation data without affecting the performance; whereas for non-western languages, there are much less word anchors that can be shared, so Google-based direct translation can better introduce information of the target language.
+
+On the other hand, Google Phrase and Google Word&Phrase are generally less effective than the
+
+
+(a) Source Model
+
+
+Figure 3: Clusters of embeddings of models in different stages (Circles correspond to words in the source language, triangles correspond to words in the target language).
+
+
+(b) Entity-Enhanced Teacher model
+(c) Context-Enhanced Teacher model
+
+
+Figure 4: The comparison of different translation strategies for each language.
+
+other two strategies which are based entirely on word-by-word translation. This may be due to the fact that the word-by-word translated data is more compatible with the sentence-level order adversarial training in TransAdv.
+
+# B.3 Analysis of Language discriminators
+
+To analysis the effect of language discriminators of different grains, clusters of embeddings of models in different stages trained on CoNLL with Dutch $(nl)$ as the target language are shown in Figure 3.
+
+We find that in the source model $\Theta_{src}$ , embeddings corresponding to entity labels of the source and the target languages have been partially fused due to the original language discriminator while embeddings corresponding to context labels are too scattered. Moreover, in the entity-enhanced teacher model $\Theta_{entity}$ , embeddings of different languages get further fused thanks to word-by-word translated data and the entity language discriminator while embeddings corresponding to context
+
+labels are still relatively scattered. In the context-enhanced teacher model $\Theta_{\text{context}}$ , due to the context language discriminator, the integration of embeddings corresponding to context labels is basically complete while embeddings corresponding to entity labels are not. The above results together demonstrate the effectiveness of different language discriminators.
\ No newline at end of file
diff --git a/transadvatranslationbasedadversariallearningframeworkforzeroresourcecrosslingualnamedentityrecognition/images.zip b/transadvatranslationbasedadversariallearningframeworkforzeroresourcecrosslingualnamedentityrecognition/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..83cef3594eec2bec32fe052bf2fd7c1988ab6c2a
--- /dev/null
+++ b/transadvatranslationbasedadversariallearningframeworkforzeroresourcecrosslingualnamedentityrecognition/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a557634c41ed89b23513b95a65ccfe0cfc09a569ac2434a0413ac3530574afc7
+size 452156
diff --git a/transadvatranslationbasedadversariallearningframeworkforzeroresourcecrosslingualnamedentityrecognition/layout.json b/transadvatranslationbasedadversariallearningframeworkforzeroresourcecrosslingualnamedentityrecognition/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..35bcdb712742f876c06e6f53cbaa88dddaddb0e4
--- /dev/null
+++ b/transadvatranslationbasedadversariallearningframeworkforzeroresourcecrosslingualnamedentityrecognition/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f7f30358646b4f1ec4c7a23847a57763ea1b059386860e72ac42dc55826d4207
+size 374194
diff --git a/transformerlanguagemodelswithoutpositionalencodingsstilllearnpositionalinformation/b30da29b-c75b-45dd-8cb3-5b7dae179562_content_list.json b/transformerlanguagemodelswithoutpositionalencodingsstilllearnpositionalinformation/b30da29b-c75b-45dd-8cb3-5b7dae179562_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a55f7aeac0f9c3eae3ff36adf2cc365c1737c3dc
--- /dev/null
+++ b/transformerlanguagemodelswithoutpositionalencodingsstilllearnpositionalinformation/b30da29b-c75b-45dd-8cb3-5b7dae179562_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6ac1e5d057d73d6eb5cd5ef18512aacc9d4b9bfaba78416c811fd54e8bf66af1
+size 51296
diff --git a/transformerlanguagemodelswithoutpositionalencodingsstilllearnpositionalinformation/b30da29b-c75b-45dd-8cb3-5b7dae179562_model.json b/transformerlanguagemodelswithoutpositionalencodingsstilllearnpositionalinformation/b30da29b-c75b-45dd-8cb3-5b7dae179562_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..68c594e96d85e06e46c0bad2acf6c67877ccddd4
--- /dev/null
+++ b/transformerlanguagemodelswithoutpositionalencodingsstilllearnpositionalinformation/b30da29b-c75b-45dd-8cb3-5b7dae179562_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:45fae68b30f73bf9f2b5b839d713a7ba7a458f40581edabe6bf76879dbe68925
+size 62108
diff --git a/transformerlanguagemodelswithoutpositionalencodingsstilllearnpositionalinformation/b30da29b-c75b-45dd-8cb3-5b7dae179562_origin.pdf b/transformerlanguagemodelswithoutpositionalencodingsstilllearnpositionalinformation/b30da29b-c75b-45dd-8cb3-5b7dae179562_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0cc46e1a3d0568cb83eab6127c7fbb950838a8bb
--- /dev/null
+++ b/transformerlanguagemodelswithoutpositionalencodingsstilllearnpositionalinformation/b30da29b-c75b-45dd-8cb3-5b7dae179562_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b597d54c1e576f4c9f8c87f2585532eafcfc41bacfb14e5f9e2d1e650628f630
+size 277312
diff --git a/transformerlanguagemodelswithoutpositionalencodingsstilllearnpositionalinformation/full.md b/transformerlanguagemodelswithoutpositionalencodingsstilllearnpositionalinformation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..2ddb07b36aea0c72fb6409d28afd1ab212009563
--- /dev/null
+++ b/transformerlanguagemodelswithoutpositionalencodingsstilllearnpositionalinformation/full.md
@@ -0,0 +1,199 @@
+# Transformer Language Models without Positional Encodings Still Learn Positional Information
+
+Adi Haviv $^{\tau}$ Ori Ram $^{\tau}$ Ofir Press $^{\omega}$ Peter Izsak $^{\iota}$ Omer Levy $^{\tau \mu}$
+
+$^{\tau}$ Tel Aviv University
+ $^{\omega}$ University of Washington
+ $^{\iota}$ Intel Labs
+ $^{\mu}$ Meta AI
+
+# Abstract
+
+Causal transformer language models (LMs), such as GPT-3, typically require some form of positional encoding, such as positional embeddings. However, we show that LMs without any explicit positional encoding are still competitive with standard models, and that this phenomenon is robust across different datasets, model sizes, and sequence lengths. Probing experiments reveal that such models acquire an implicit notion of absolute positions throughout the network, effectively compensating for the missing information. We conjecture that causal attention enables the model to infer the number of predecessors that each token can attend to, thereby approximating its absolute position. Our findings indicate that causal LMs might derive positional awareness not only from the explicit positioning mechanism, but also from the effects of the causal mask.
+
+# 1 Introduction
+
+The attention mechanism (Bahdanau et al., 2015) of the transformer (Vaswani et al., 2017) is agnostic to the position and order of tokens in the input sequence. It is therefore common practice to inject positional information via absolute positional embeddings (Vaswani et al., 2017; Radford et al., 2018) or relative bias factors (Shaw et al., 2018; Raffel et al., 2020; Press et al., 2022). Here, we demonstrate that transformer language models without any explicit positional information can and do learn an implicit notion of absolute positions that is sufficient to achieve competitive performance.
+
+We compare the performance of language models trained with no explicit positional information (NoPos language models) to those trained with three different position-aware mechanisms, namely: sinusoidal embeddings (Vaswani et al., 2017), learned embeddings (Gehring et al., 2017), and ALiBi (Press et al., 2022). Results show that NoPos models are competitive with position-aware
+
+
+Figure 1: Transformer language models trained without explicitly encoding positional information (NoPos) approach the performance of models trained with various positional encoding methods. All models have 1.3B parameters, and are trained on an excerpt of the Pile.
+
+models consistently across datasets, model sizes, and input sequence lengths (e.g., Figure 1).
+
+To shed light on our findings, we probe into the position-awareness of NoPos language models, compared to models that use relative or absolute position mechanisms. Specifically, we train classifiers to predict the position of a token given its representation across different layers in the network. Our probes reveal that the NoPos model achieves similar mean absolute distance between the predicted and the expected positions, as a model with learned absolute position embeddings.
+
+We hypothesize that this surprising behavior is tied to the causal attention mask, which implicitly injects positional information into the self-attention layer in order to preserve the autoregressive nature of language models. Intuitively, a model that is able to count the predecessors of a given token can essentially infer its absolute position. To test
+
+our hypothesis, we run similar experiments for masked language models (MLM) (Devlin et al., 2019), which use order-invariant attention (since no causal mask is applied). Indeed, bidirectional models fail to converge when position information is absent, substantiating our hypothesis. To conclude, our main contributions are:
+
+- We demonstrate the robustness of the NoPos model (compared to position-aware models) with respect to model size, dataset and sequence length.
+- We provide an analysis of the trained NoPos model, and show that it encoded absolute positions.
+- We show that the success of NoPos models is unique to causal language models.
+
+# 2 Positional Encodings
+
+Transformer models consist of interleaved self-attention and feed-forward layers, which are both order-invariant. Therefore, to convey the order of the input tokens, some form of positional information is explicitly introduced into the model. Absolute positions are commonly encoded as vectors (one for each position), which are then added to the input tokens' embeddings and fed to the first layer of the transformer. Relative positions are typically encoded as biases (added to attention scores) within the self-attention layers. In this work, we consider three popular methods as baselines:
+
+Learned. Embeddings trained to represent absolute positions (Sukhbaatar et al., 2015; Gehring et al., 2017). Learned positional embeddings are commonly used in MLMs (Devlin et al., 2019; Liu et al., 2019) as well as in large autoregressive language models, such as GPT-3 (Brown et al., 2020).
+
+Sinusoidal. Constant vectors computed by a nonparametric function of the input token's absolute position. Sine and cosine functions of different frequencies are used, such that each dimension of the positional encoding corresponds to a sinusoid. Sinusoidal embeddings were introduced in Vaswani et al. (2017) for machine translation, and are also used in language modeling (Baevski and Auli, 2019).
+
+ALiBi. Attention with LInear BIases (Press et al., 2022) injects information about the relative distances between tokens by adding negative biases
+
+to attention scores, which grow linearly with the distance between each pair of tokens.
+
+# 3 Experiment Setup
+
+Intuitively, encoding positional information explicitly is crucial for enabling transformer language models to predict the next token in a sequence. To test this intuition, we compared the validation set perplexity of models trained from scratch with no explicit positional information (denoted as NoPos) to those trained with the various positional encoding methods discussed in Section 2. We investigated the canonical WikiText-103 setting (Merit et al., 2017; Baevski and Auli, 2019), as well as a newer, large-scale setting based on the Pile corpus (Gao et al., 2020) on model architectures inspired by Brown et al. (2020), where we cover a spectrum of models sizes and sequence lengths.
+
+The Canonical Setting (WikiText-103). The WikiText-103 corpus (Merit et al., 2017) consists of over 100 million words extracted from a set of high-quality Wikipedia articles. The corpus is tokenized at the word level, resulting in a vocabulary of over 267K tokens. For this corpus, we used the adaptive embedding transformer model of Baevski and Auli (2019), which contains 16 transformer layers with 1024 model dimensions, 4096 feed-forward dimensions, and 8 attention heads. Overall, this model has 247M parameters in total. We trained with their exact optimization hyperparameters, as implemented in fairseq (Ott et al., 2019), with the exception of the input sequence length, which was shortened to 512 tokens (instead of 3072), as in Press et al. (2022). See App. C for detailed hyperparameters.
+
+The Large-Scale Setting (The Pile). The Pile (Gao et al., 2020) is an 800GB English text dataset composed of Common Crawl and 22 other diverse sources. For our experiments, we used 2 out of 30 shards;1 of these, we filtered out the GitHub and DM Mathematics sources and removed the shortest $1\%$ and longest $1\%$ of examples from each source to reduce noise. We used GPT-2's tokenizer (Radford et al., 2019) to convert the text into token sequences over a vocabulary of 50K tokens. We randomly sampled a validation set of 2000 documents (2.6M tokens) from the corpus, while the remaining 15M documents (21B tokens) comprised
+
+
WikiText-103
The Pile
NoPos
20.97
13.10
Learned
20.42
13.05
Sinusoidal
20.16
12.93
ALiBi
19.71
12.51
+
+the training set. The baseline model in this setting follows the 1.3B parameter architecture of Brown et al. (2020), also known as GPT-3 XL: 24 transformer layers with 2048 model dimensions, 8192 feed-forward dimensions, and 32 attention heads. The default input sequence length is 1024 tokens. We refer to App.C for detailed hyperparameters.
+
+To demonstrate the consistency of our results in different settings, we perform two scaling experiments. We first scale the model size by experimenting with the small (125M parameters), medium (350M parameters), large (760M parameters) and the XL (1.3B parameters) variants of the Brown et al. (2020) architecture on the Pile settings. In addition, we evaluate the effect of varying the sequence length using the XL (1.3B parameter) model. Specifically, we experiment with sequences of lengths \{256, 512, 1024, 2048\}.
+
+Last, to shed additional light on differences between the NoPos model to other methods, we compare the model's performance on different parts of the sequence. Details of this analysis and results are given in App. A.
+
+# 4 Results
+
+Table 1 compares the performance of training LMs with different position encoding methods. We observe that NoPos LMs approach the performance of the other models, with gaps of 0.55 (WikiText-103) and 0.05 (the Pile) perplexity from models with learned positional embeddings. In the Pile setting, performance differences between NoPos, Learned, and Sinusoidal are small both in absolute terms and with respect to their difference with ALiBi. In the WikiText-103 setting, performance gaps are wider but still modest with respect to random seed
+
+variance. $^2$ These results strongly suggest that training transformer language models without explicit positional encoding is indeed possible.
+
+Table 2 explores the effects of scaling the number of parameters in the Pile setting. While smaller models benefit from fixed, non-parametric positional encodings (Sinusoidal and ALiBi), these performance gaps narrow in larger models. Table 3 shows the effect of varying the sequence length in the same setting. In this experiment, the gaps between NoPos, Learned, and Sinusoidal remain almost constant, while the benefit of using ALiBi increases as sequences become longer. Overall, we show that transformer language modeling without explicit positional encoding is robust to the selection of corpus, model size, and sequence length.
+
+As training models at the 1.3B parameter scale is resource-intensive, we publicly release our trained models for future research and analysis. $^{3}$
+
+Table 1: Validation set perplexity of transformer language models trained with various positional encoding methods. The WikiText-103 setting (Merit et al., 2017) uses the model of Baevski and Auli (2019) on sequences of 512 tokens, while the Pile settings (Gao et al., 2020) uses a more recent 1.3B parameter architecture (Brown et al., 2020) over 1024 token sequences.
+
+
Model Size
125M
350M
760M
1.3B
NoPos
22.15
16.87
14.29
13.10
Learned
22.04
16.84
14.21
13.05
Sinusoidal
21.49
16.58
14.04
12.93
ALiBi
19.94
15.66
13.53
12.51
+
+Table 2: Validation set perplexity on the Pile, as a function of positional encoding method and model size. All models operate on sequences of 1024 tokens. Smaller models benefit from fixed, non-parametric positional encodings (Sinusoidal and ALiBi), but these performance gaps diminish as the models scale up.
+
+
Seq Length
256
512
1024
2048
NoPos
14.98
13.82
13.10
12.87
Learned
14.94
13.77
13.05
12.72
Sinusoidal
14.84
13.66
12.93
12.62
ALiBi
14.65
13.37
12.51
12.06
+
+Table 3: Validation set perplexity on the Pile, as a function of positional encoding method and sequence length. All models have 1.3B parameters. The performance differences between NoPos, Learned, and Sinusoidal are consistently small, while ALiBi slowly becomes more beneficial as sequences become longer.
+
+In a Concurrent work, Scao et al. (2022) makes a similar observation in one of their ablation experiments and further show that NoPos models gain
+
+
+Figure 2: Through probing, we find that the NoPos model behaves similarly to models that use absolute learned position embeddings. We evaluated performance using mean absolute distance on 1.3B parameter models trained on the Pile.
+
+competitive performances for downstream tasks as well. Specifically, they evaluated 27 diverse downstream tasks. They showed that the NoPos model reached an average accuracy of $41.23\%$ over all tasks, comparing to Learned and ALiBi who gained $41.72\%$ and $43.70\%$ respectively.
+
+# 5 Analysis
+
+In this section, we examine whether the NoPos model is able to encode positional information and show that such information is essential for its success.
+
+# NoPos models acquire positional information
+
+Do NoPos LMs learn some form of positional encoding to compensate for the absence of explicit positional modeling? To answer this question, we probe each layer of our trained models4 for positional information. Specifically, we use the tokens' last hidden representation after each transformer layer, produced by the evaluated LM, and train a 2-layer feed-forward ReLU network to predict the absolute position (0 to 1023) of each token (i.e., as a multiclass classification problem). Notably, we do not change the weights of the evaluated LMs and thus, do not provide any position information
+
+of the tokens to the LM in this experiment, which ensures the validity of our findings.
+
+Each layer's probe was trained separately (hyperparameters are provided in App. C). As a soft accuracy metric, we measured the mean absolute distance between the probe's prediction and the token's actual position.
+
+Figure 2 shows that even though NoPos model starts, as expected, with no positional information in the first layer (on par with a random baseline), it becomes position-aware within four layers and appears to contain more positional information than ALiBi. By the middle layer, NoPos can predict absolute positions about as well as the model with learned positional embeddings. Finally, we observe that all models shed off a significant amount of positional information in the final layers, in line with the findings of Voita et al. (2019). Overall, the probe reveals that the NoPos models learn an implicit notion of absolute positions.
+
+To elucidate what positional information the NoPos model learns, we visualize the predictions of the probe. We examine a sample of 100 predictions from the validation set of the best-performing probe trained over the NoPos model. Figure 3 shows the predictions over the 512 token sequences sampled randomly from the validation set and a single example from the same set. We observe that the probe is more accurate at the beginning of the sequence, but becomes fuzzier as it progresses.
+
+Positional information matters NoPos is able to infer absolute positions, but are they necessary? We answer this using a trained NoPos model. Instead of computing the loss over the entire sequence, we select a single random token, shuffle the previous tokens that it is conditioned on, and compare to a baseline where the prefix remains intact. We find that in the case where the suffix is shuffled, the average token-level loss increases dramatically (from $\sim 4$ to $\sim 11$ ). Details of this experiment are given in App. B.
+
+This finding indicates that the NoPos model indeed uses the positional information it acquires, as otherwise we would expect similar loss values in these two settings.
+
+# 6 Conjecture
+
+How do transformers without explicit positional encoding learn absolute positions? We conjecture that the causal attention in autoregressive transformer language models allows them to predict the
+
+
+Figure 3: A visualization of the absolute position predictions of a probe trained over a NoPos language model. The blue line shows the mean of the generated predictions for every target position and the blue area represents the $95\%$ -confidence interval. The predictions for a single random sequence are depicted as green dots.
+
+number of attendable tokens at each position, i.e. the number of tokens in the sequence that precede the current one. Such a mechanism could effectively encode the absolute position of each token into its vector representation. Indeed, our analysis (Section 5) reveals that some notion of absolute positions exists in the hidden layers of language models even when they are trained without explicit positional encoding, and that this information is acquired throughout the first few layers. On the other hand, bidirectional transformer encoders (which are used in masked language modeling, e.g. Devlin et al. 2019) do not contain causal attention masks or any other limitation on the attention mechanism; thus, they should be unable to learn absolute positions without explicit positional encoding. We tested this corollary by training a masked language model based on RoBERTa large (Liu et al., 2019) on the Pile (see App. C for hyperparameters). Table 4 shows that, indeed, the NoPos model has significantly worse perplexities than the position-informed baselines. This result echoes the findings of Sinha et al. (2021), who also observed that MLMs without positional embeddings suffer significant performance degradation.
+
+# 7 Related Work
+
+While there has been ample research on positional encoding variants, there has been relatively little prior work that investigate models' ability to infer
+
+
MLM Perplexity
NoPos
147.18
Learned
4.06
Sinusoidal
4.07
ALiBi
4.00
+
+Table 4: Validation set perplexity of masked language models (Devlin et al., 2019) trained with various positional encoding methods on an excerpt of the Pile (Gao et al., 2020). The model architecture is based on RoBERTa large (Liu et al., 2019), and processes 128 tokens per sequence. While position-aware models converge to very low perplexities, training without positional encodings (NoPos) fails.
+
+positions implicitly. Prior to our work, Irie et al. (2019) explored transformer language models for speech recognition and found that such models, when trained without positional encoding, outperform those trained with sinusoidal embeddings. In addition, a focused language modeling experiment by Stella Rose Biderman5 showed that the NoPos method attains similar results to other position embedding methods; however, that experiment was on a small 350M parameter model trained on a small character-level dataset (enwik8). Here we show that this result holds across multiple datasets and model sizes, provide an analysis of the model's internal representations, and hypothesize how this phenomenon could occur.
+
+# 8 Conclusion
+
+We show that, contrary to popular belief, transformers language models do learn positional information even when are not provided with any explicit positional encoding. Our experiments systematically demonstrate that this phenomenon is robust across different language modeling settings, and that one can approximate the absolute position of each token from the model's internal representations to a surprising degree. However, this phenomenon does not extend to transformer encoders trained on the MLM objective. We conjecture that the causal attention mechanism, which limits attention in one direction of the sequence, is responsible for implicitly imbuing the transformer with positional information.
+
+# 9 Limitations
+
+Our work explores language models in the 125M to 1.3B parameter range. We show that as parameter count increases the gap between the NoPos method and the other position methods narrows. This trend leads us to believe that our findings should hold for even larger models, but the current biggest models are more than one hundred times bigger (in terms of parameters) than our 1.3B parameter models, and so the results in that setting can be unexpected. In addition, training models at the 1.3B parameter scale is resource-intensive and might hinder reproducibility. We therefore release our trained models. In Addition, when comparing the perplexity of NoPos to other models, although the margins are very small, NoPos is always slightly worse, suggesting that the inductive bias of positional encoding is indeed important.
+
+# Acknowledgements
+
+This work was supported by Intel Corporation and Meta Platforms Inc.
+
+# References
+
+Alexei Baevski and Michael Auli. 2019. Adaptive input representations for neural language modeling. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.
+Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages
+
+4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2020. The Pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027.
+Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann Dauphin. 2017. Convolutional sequence to sequence learning. In ICML.
+Kazuki Irie, Albert Zeyer, Ralf Schlüter, and Hermann Ney. 2019. Language modeling with deep transformers. In INTERSPEECH.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
+Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+Yurii Nesterov. 1983. A method for unconstrained convex minimization problem with the rate of convergence $o(1 / k^2)$ .
+Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. *fairoseq: A fast, extensible toolkit for sequence modeling*. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics* (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.
+Ofir Press, Noah Smith, and Mike Lewis. 2022. Train short, test long: Attention with linear biases enables input length extrapolation. In International Conference on Learning Representations.
+Ofir Press, Noah A. Smith, and Omer Levy. 2020. Improving transformer models by reordering their sublayers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2996-3005, Online. Association for Computational Linguistics.
+Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
+
+Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
+Teven Le Scao, Thomas Wang, Daniel Hesslow, Lucile Saulnier, Stas Bekman, M Saiful Bari, Stella Biderman, Hady Elsahar, Jason Phang, Ofir Press, Colin Raffel, Victor Sanh, Sheng Shen, Lintang Sutawika, Jaesung Tae, Zheng Xin Yong, Julien Launay, and Iz Beltagy. 2022. What language model to train if you have one million GPU hours? In *Challenges & Perspectives in Creating Large Language Models*.
+Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464-468, New Orleans, Louisiana. Association for Computational Linguistics.
+Koustuv Sinha, Robin Jia, Dieuwke Hupkes, Joelle Pineau, Adina Williams, and Douwe Kiela. 2021. Masked language modeling and the distributional hypothesis: Order word matters pre-training for little. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2888-2913, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2, NIPS'15, page 2440-2448, Cambridge, MA, USA. MIT Press.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
+Elena Voita, Rico Sennrich, and Ivan Titov. 2019. The bottom-up evolution of representations in the transformer: A study with machine translation and language modeling objectives. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4396-4406, Hong Kong, China. Association for Computational Linguistics.
+
+# A NoPos Performance Across Different Segments of the Input
+
+To shed more light on the findings shown in section 4, we explore whether there are parts of the sequence that the NoPos model better predicts compared to other positional methods (e.g., is the NoPos model performs better at the beginning or the end the sequence). We compute the model's average loss in different parts of the sequences. Specifically, we split each input sequence into eight consecutive segments and compute the loss for each segment separately.
+
+We evaluate the NoPos and Sinusoidal models trained on the WikiText-103 dataset, with an input sequence length of 512, and use the standard validation set. Figure 4 shows the results of this experiment. The NoPos model performs similarly or slightly worse than the baseline model on all input parts.
+
+
+Figure 4: NoPos model shows similar performances on each part of the sequence, comparing to the baseline Sinusoidal position encoding.
+
+# B Word Order Analysis
+
+Is positional information necessary for language modeling, or does the order of the input tokens not matter? To answer this, we conduct the following experiment: instead of computing the loss on the complete sequence, we pick a specific token in the sequence. The next token prediction is conditioned on the previous tokens in the sequence, and so we shuffle the order of the tokens in the prefix and compute the loss only for that specific token. We repeat the experiment with the original, un-shuffled prefix sequence as the baseline and compare the results.
+
+The experiment was conducted on the NoPos model with an input sequence length of 512 using the WikiText-103 dataset. We randomly sample an index between 5 and 512 for the token we pick from each input sequence from the validation set.
+
+
+Figure 5 shows the results of this experiment for 100 different inputs. These results clearly show that the transformer language model's next word predictions are not order-invariant.
+Figure 5: Shuffling input tokens (for causal language modeling) leads to a massive degradation in token-level loss.
+
+# C Hyperparameters
+
+Table 5 provides the optimization hyperparameters for each one of our experiments, and Table 6 shows the model hyperparameters in the modern (Pile) setting.
+
+
WikiText-103
The Pile
Probe
Masked LM
Sequence Length
512
1024
1024
128
Optimizer
NAG
Adam
Adam
Adam
Peak Learning Rate
1
2e-3
2e-3
1e-3
Warmup Steps
16,000
500
500
500
Total Steps
286,000
10,000
10,000
10,000
Tokens per Batch
72,000
256,000
64,000
1,024,000
Dropout
0.3
0
0
0.1
Weight Decay
0
0.01
0.01
0.01
+
+Table 5: The optimization hyperparameters used in this work. The NAG optimizer refers to Nesterov accelerated gradient (Nesterov, 1983), and Adam refers to (Kingma and Ba, 2015).
+
+
125M
350M
760M
1.3B
Layers
12
24
24
24
Model Dimensions
768
1024
1536
2048
Feed-forward Dimensions
3072
4096
6144
8192
Attention Heads
12
16
16
32
+
+Table 6: The models hyperparameters by size.
\ No newline at end of file
diff --git a/transformerlanguagemodelswithoutpositionalencodingsstilllearnpositionalinformation/images.zip b/transformerlanguagemodelswithoutpositionalencodingsstilllearnpositionalinformation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..12480cd28b65c9263020e0721e45e78f4a613bc3
--- /dev/null
+++ b/transformerlanguagemodelswithoutpositionalencodingsstilllearnpositionalinformation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:db4bd9e40456d4177df6101b5f07516852cfcce85fb6f5cf385c64c20cb0e99a
+size 210678
diff --git a/transformerlanguagemodelswithoutpositionalencodingsstilllearnpositionalinformation/layout.json b/transformerlanguagemodelswithoutpositionalencodingsstilllearnpositionalinformation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..62d0910b1c1a27341ede556ec287b5545a539d57
--- /dev/null
+++ b/transformerlanguagemodelswithoutpositionalencodingsstilllearnpositionalinformation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0751dbb5bf669e2954a44535c4989c0c8c6347a1f6e2b46063cbb4e28a268ebf
+size 213502
diff --git a/translatinghanjahistoricaldocumentstocontemporarykoreanandenglish/a071cfc9-a41f-4f83-a18f-989ec7a2497f_content_list.json b/translatinghanjahistoricaldocumentstocontemporarykoreanandenglish/a071cfc9-a41f-4f83-a18f-989ec7a2497f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e83f005196adfae2fb8cdff3bf87f77fec7e93c2
--- /dev/null
+++ b/translatinghanjahistoricaldocumentstocontemporarykoreanandenglish/a071cfc9-a41f-4f83-a18f-989ec7a2497f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf6cde2630db9049d143781869734ea576f97bfca101a1ebe4b16e6b6f4b6fae
+size 93268
diff --git a/translatinghanjahistoricaldocumentstocontemporarykoreanandenglish/a071cfc9-a41f-4f83-a18f-989ec7a2497f_model.json b/translatinghanjahistoricaldocumentstocontemporarykoreanandenglish/a071cfc9-a41f-4f83-a18f-989ec7a2497f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f19c3ac0099c399599395172aa0490c76e2aa3ec
--- /dev/null
+++ b/translatinghanjahistoricaldocumentstocontemporarykoreanandenglish/a071cfc9-a41f-4f83-a18f-989ec7a2497f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c3c47a6112786a37a88b437d6ade1bb9d1db485b6bfc67418e8fed37cc56c6e1
+size 105288
diff --git a/translatinghanjahistoricaldocumentstocontemporarykoreanandenglish/a071cfc9-a41f-4f83-a18f-989ec7a2497f_origin.pdf b/translatinghanjahistoricaldocumentstocontemporarykoreanandenglish/a071cfc9-a41f-4f83-a18f-989ec7a2497f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d4e31e1c7c019f3a85579ff44339885864b43761
--- /dev/null
+++ b/translatinghanjahistoricaldocumentstocontemporarykoreanandenglish/a071cfc9-a41f-4f83-a18f-989ec7a2497f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8bcfe7f687c0611daf5d847fb701a9422b633b4f49f1f4d0adda3fb15cefb0b0
+size 2584830
diff --git a/translatinghanjahistoricaldocumentstocontemporarykoreanandenglish/full.md b/translatinghanjahistoricaldocumentstocontemporarykoreanandenglish/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..550d4451726f2a845c3b7d013b01e5b436b89101
--- /dev/null
+++ b/translatinghanjahistoricaldocumentstocontemporarykoreanandenglish/full.md
@@ -0,0 +1,609 @@
+# Translating Hanja Historical Documents to Contemporary Korean and English
+
+Juhee Son $^{1*}$ , Jiho Jin $^{1*}$ , Haneul Yoo $^{1}$ , JinYeong Bak $^{2}$ , Kyunghyun Cho $^{3,4}$ , Alice Oh $^{1}$
+
+$^{1}$ KAIST, $^{2}$ Sungkyunkwan University, $^{3}$ New York University, $^{4}$ Genentech
+
+{sjh5665, jinjh0123, haneul.yoo}@kaist.ac.kr,
+
+jy.bak@skku.edu, kyunghyun.cho@nyu.edu, alice.oh@kaist.edu
+
+# Abstract
+
+The Annals of Joseon Dynasty (AJD) contain the daily records of the Kings of Joseon, the 500-year kingdom preceding the modern nation of Korea. The Annals were originally written in an archaic Korean writing system, 'Hanja', and were translated into Korean from 1968 to 1993. The resulting translation was however too literal and contained many archaic Korean words; thus, a new expert translation effort began in 2012. Since then, the records of only one king have been completed in a decade. In parallel, expert translators are working on English translation, also at a slow pace and produced only one king's records in English so far. Thus, we propose H2KE, a neural machine translation model, that translates historical documents in Hanja to more easily understandable Korean and to English. Built on top of multilingual neural machine translation, H2KE learns to translate a historical document written in Hanja, from both a full dataset of outdated Korean translation and a small dataset of more recently translated contemporary Korean and English. We compare our method against two baselines: a recent model that simultaneously learns to restore and translate Hanja historical document and a Transformer based model trained only on newly translated corpora. The experiments reveal that our method significantly outperforms the baselines in terms of BLEU scores for both contemporary Korean and English translations. We further conduct extensive human evaluation which shows that our translation is preferred over the original expert translations by both experts and non-expert Korean speakers.
+
+# 1 Introduction
+
+Historical documents written in an archaic language should be translated into a modern language. Most of the Korean historical documents are written in Hanja, the main written language in Korea
+
+
Hanja
改清州牧爲西原縣. 以劇賊胎生邑,降號也.
Original
+Korean
+Translation
+(oKo)
chengju-mok
+seowon-hyeon. It is because the town gets gangho if geukjeok is born.
+chengju-mok
+seowon-hyeon. It is because the town gets gangho if geukjeok is born.
+chengju-mok
+seowon-hyeon. It is because the town gets gangho if geukjeok is born.
+chengju-mok
+seowon-hyeon. It is because the town gets gangho if geukjeok is born.
+chengju-mok
+seowon-hyeon.
+seowon-hyeon.
+seowon-hyeon.
+seawon-hyeon.
+seawon-hyeon.
+seawon-hyeon.
+seawon-hyeon.
+seawon-hyeon.
+seawon-hyeon.
+seawon-hyeon.
+seawon-hyeon.
+seawon-hyeon.
+seawon-hyeon.
+seawon-hyeon.
+seawon-hyeon.
+seawon-hyeON.
+seawon-hyeON.
+seawon-hyeON.
+seawon-hyeON.
+seawon-hyeON.
+seawon-hyeON.
+seawon-hyeON.
+seawon-hyeON.
+seawon-hyeON.
+seawon-hyeON.
+seawon-hyeON.
+seawon-hyeON.
+seawon-hyeON.
+seωon-hyeON.
+seωon-hyeON.
+seωon-hyeON.
+seωon-hyeON.
+seωon-hyeON.
+seωon-hyeON.
+seωon-hyeON.
+seωon-hyeON.
+seωon-hyeON.
+seωon-hyeON.
+seωon-hyeON.
+seωon-hyeON.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seomega-
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seω-on-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+seωon-hyeIN.
+
+Table 1: An example from the Annals of Joseon Dynasty. We show the original Hanja sentence and the original Korean human translation which contains archaic words indicated in color box. The contemporary Korean translation replaces the archaic words with words and phrases understood by present-day Korean speakers.
+
+before the 20-th century. Hanja is an archaic language based on the old Chinese writing system, and although there is a large overlap in characters, it is different from both Chinese and Korean. The Annals of Joseon Dynasty (AJD), the representative historical records of Joseon (1392 - 1910), originally written in Hanja, was translated into Korean from 1968 to 1993 by expert translators commissioned by the Korean government. Non-expert Korean speakers however have trouble understanding these original translations of the AJD because they contain many archaic Hanja-based words, often hard-to-understand transliterations. The Institute for the Translation of Korean Classics (ITKC) recognizes this problem and is re-translating the entire AJD with modern-style writing (Table 1). This retranslation process is expected to take 22 years with 12 to 15 expert translators. Simultaneously, the Na-
+
+tional Institute of Korean History (NIKH) has been translating AJD into English since 2012, which is also expected to take about two more decades.
+
+Machine translation can accelerate the translation process. The challenge is the limited availability of parallel corpora between Hanja to contemporary Korean as well as English. Only one annal of the 24 kings of the Joseon Dynasty was newly translated into Korean and English. This is not a sufficient amount to train a full machine translation model. To address this low-resource problem, we adopt a multilingual translation approach that jointly learns to translate between Hanja, outdated original Korean, contemporary Korean and English, expecting positive transfer of knowledge among these languages.
+
+We present a multilingual neural machine translation model that translates Hanja historical documents to contemporary Korean, to which we refer as H2KE. By exploiting extra resources, H2KE performs significantly better translation of Hanja into contemporary Korean than other approaches that rely solely on the parallel corpus from the newly translated Korean and Hanja. We measure the perplexity with a large-scale language model trained on contemporary Korean, called KoGPT (Kim et al., 2021), to show that translations from our model are more similar to contemporary Korean than the old Korean translations from the original translation effort. These results are further confirmed by human evaluation, where both experts and non-experts prefer our model's translation over the original translation in old Korean. Using H2KE, we translated the remaining AJD to contemporary Korean as well as English and are releasing it publicly at https://juheuuu.github.io/h2ke-demo.
+
+Our main contributions include:
+
+- We propose a transfer learning method for translating AJD to contemporary Korean and English with a small training corpus.
+- We conduct thorough human evaluation, where experts find that our generated translations are more accurate and fluent than the original expert translations, and non-expert Korean speakers choose our translations as more easily understandable compared to the original translations.
+- We translate the entire AJD to modern Korean
+
+and English and publicly release the translations for easier access to the resources.
+
+# 2 Background
+
+# 2.1 Neural Machine Translation for the Annals of the Joseon Dynasty
+
+To translate AJD with the neural network, Park et al. (2020) propose a new subword tokenization method called share-vocabulary-and-entity-restriction byte-pair encoding. Kang et al. (2021) present a multitask learning approach that simultaneously restores and translates historical documents. For the restoration task, they use the untranslated Diaries of the Royal Secretariat (DRS) which is another Korean historical corpus written in Hanja. For translation, they only focus on translating Hanja into old Korean using the outdated AJD corpus. In contrast to these earlier approaches, our approach supports both translation into contemporary Korean and into English, while benefiting from the larger Hanja-old Korean parallel corpus.
+
+# 2.2 The Annals of the Joseon Dynasty
+
+The Annals of the Joseon Dynasty (AJD), also called the Veritable Records of the Joseon Dynasty, is an old and vast volume of historical documents from Joseon Dynasty which ruled the Korean peninsula from 1392 to 1864. It records 472 years of the 25 rulers' reigns of the Joseon Dynasty. It covers diverse historical events and is known to exhibit high integrity and credibility in its description of these events, making it invaluable as a historical record. The dataset is available at 'the Veritable Records of the Joseon Dynasty' run by the National Institute of Korean History (NIKH). AJD was originally written in Hanja, the writing system of ancient Korea, consisting of totally different characters and syntactic structures from contemporary Korean. Hanja had stemmed from traditional Chinese, but the lexical, semantic, and syntactic characteristics had changed to reflect the cultural differences between the Joseon Dynasty and other ancient Kingdoms of China.
+
+# 2.3 Translated Datasets
+
+AJD was initially translated from Hanja to Korean during 1968 - 1993, and the dataset was uploaded and publicly released by the Institute for
+
+
Annals of
Reign
Hanja
oKo
cKo
English
# of sentences
Ratio (%)
Joseon Dynasty
1392-1910
○
○
359,726
100.0
22thKing Jeongjo
1776-1799
○
○
○
14,356
3.9
4thKing Sejong
1418-1449
○
○
○
26,227
7.2
+
+Table 2: Statistics of our dataset. For the entire AJD, there are ⟨Hanja, oKo⟩ pairs. For the Annals of King Jeongjo, we also have contemporary Korean translations, and for the Annals of King Sejong, we have the English translations. The last column indicates the ratio of each dataset on the basis of the total AJD.
+
+the Translation of Korean Classics (ITKC). These original translations include numerous outdated Hanja-based words, often transliterations. These words are often not easily understood by contemporary Korean speakers, or are simply incorrect in the context they appear. To correct those and other errors and also to improve the overall readability, ITKC launched a project for modernizing the translation of AJD in 2011. The Annals of the 22-nd King Jeongjo (AKJ) was the first one to be translated between 2012 and 2016. Throughout this paper, we refer to the original translation as oKo and the new contemporary translation as cKo. For the globalization of AJD, listed as UNESCO's Memory of the World, and Korean history, NIKH has been translating AJD into English, in parallel to the effort by ITKC, since 2013. The Annals of the 4-th King Sejong (AKS) has been translated so far, and it is available from http://esillok-history.go.kr/. These translation projects are expected to take two decades.
+
+In Table 2 we list these corpora and their statistics. As discussed earlier, the corpora for contemporary Korean and English are substantially smaller than those for old Korean.
+
+# 3 Method
+
+H2KE is a model that learns to translate historical documents written in Hanja to contemporary Korean and English. We use the multilingual neural machine translation (MNMT) approach, which enables translation between multiple languages with a single model (Johnson et al., 2017; First et al., 2016).
+
+Multilingual Translation Approach. Our dataset consists of pairs of , , , , and . We append a special
+
+
+Figure 1: H2KE works with multiple language pairs by appending a source sentence with a target language token during training and inference.
+
+target-language token (either $<\mathrm{oKo}>$ , $<\mathrm{cKo}>$ , or $<\mathrm{En}>$ ) in front of each source sentence. We train a model using all these examples shuffled randomly by presenting one pair of sentences at a time. Figure 1 illustrates the overall translation pipeline. With this approach, the model can benefit from the large amount of $\langle$ Hanja, oKo $\rangle$ to improve the translation quality of the lower-resource target language pairs, $\langle$ Hanja, cKo $\rangle$ and $\langle$ Hanja, English $\rangle$ .
+
+Training and Inference. We use the Transformer model (Vaswani et al., 2017) to implement H2KE. We optimize the following loss for training:
+
+$$
+\mathcal {L} = - \frac {1}{N} \sum_ {n = 1} ^ {N} \sum_ {t = 1} ^ {T _ {n}} \log p _ {\theta} \left(y _ {t} ^ {(n)} \mid y _ {< t} ^ {(n)}, x ^ {(n)}, \operatorname {t o k} ^ {(n)}\right). \tag {1}
+$$
+
+There are $N$ training examples, and each example is tagged with the target side language using $\text{tok}^{(n)} \in \{\langle \circ \text{Ko} \rangle, \langle \text{cKo} \rangle, \langle \text{En} \rangle\}$ .
+
+For generation, we use beam search and translate the Hanja sentences to the language specified by the target language token. We generate and evaluate sentences in target languages, English (EN) and contemporary Korean (cKo), with either Hanja or original Korean translation (oKo) as source sentences.
+
+
Model
All HJ →oKo
Jeongjo HJ/oKo →cKo
Sejong HJ/oKo →EN
BLEU
HJ →oKo
HJ →cKo
oKo →cKo
HJ →EN
oKo →EN
(A)
Papago
-
11.10
-
3.59
4.49
(B)
Kang et al.
○
41.56
-
-
-
-
H2KE-base
○
46.23
-
-
-
-
H2KE-big
○
47.57
-
-
-
-
(C)
H2KE-big
○
-
17.63
21.43
-
-
H2KE-big
○
○
46.76
46.44
45.76
-
-
(D)
H2KE-big
○
-
-
-
11.92
12.36
H2KE-big
○
○
46.23
-
-
25.23
24.50
(E)
H2KE-big
○
○
○
46.58
46.11
45.76
24.62
24.59
+
+Table 3: Test results of our model on different training dataset combinations. The circle indicates the king of annals and the language pair of the data for training. The BLEU score of one target language can be measured on the different source languages.
+
+# 4 Experiments and Results
+
+# 4.1 Data Preprocessing and Training Settings
+
+We use the unigram language model tokenizer (Kudo, 2018) provided by Google's SentencePiece library. In order to use one shared vocabulary between source and target languages, we tokenize the entire corpus together, including Hanja, oKo, cKo and EN. We limit the size of the vocabulary to 32K. The out-of-vocabulary tokens are replaced with UNK (unknown) tokens. We use the hyperparameters recommended by Vaswani et al. (2017). We train and evaluate models using Fairseq (Ott et al., 2019). We average the five best checkpoints on validation data to obtain the final model to be tested on the test set.
+
+# 4.2 Translation Quality
+
+We train models with different dataset combinations and measure the BLEU score (Papineni et al., 2002). To measure the Korean BLEU score, we follow the protocol from WAT 2019 (Nakazawa et al., 2019) and use Mecab-ko $^5$ tokenizer and Sacrebleu (Post, 2018). For English, we use Sacrebleu.
+
+Table 3 shows the BLEU score for each case. Overall, utilizing $\langle$ Hanja, oKo $\rangle$ pairs brings significant improvement in low-resource translations (to cKo or EN). However, there exist performance degradations when adding the unrelated target language pairs to the translation from Hanja. Since the
+
+encoder already learns expressive representations for Hanja from the plenty of training samples, inserting pairs with different target languages rather hinders the representation learning of the source language, Hanja.
+
+A Commercial Translation Engine. We first compare our models to the Korean-specialized commercial translation service, called Papago (Lee et al., 2016). Although Papago was never trained to translate Hanja into modern Korean nor into English, we can force it to do so by asking it to translate from Taiwanese Mandarin (zh-TW) which shares a large set of characters with Hanja. According to the row (A) in Table 3, the commercial translation system, Papago, simply fails to properly translate Hanja documents, evident from significantly low BLEU in both contemporary Korean and English.
+
+Original Korean Translation. Although there is no preceding work on translating Hanja into either contemporary Korean or English, Kang et al. (2021) had recently demonstrated the effectiveness of neural machine translation for translating Hanja into old Korean. We thus compare our approach against theirs in Hanja-Old Korean translation. For fair comparison, we only use the $\langle$ Hanja, oKo $\rangle$ corpus and train a H2KE-base with only 65M parameters.
+
+As shown in the row group (B) in Table 3, the proposed H2KE-base achieves 5 BLEU scores higher than Kang et al. (2021). We attribute this improvement to the vocabulary sharing strategy
+
+and the use of the transformer. Without vocabulary sharing, the model showed 45.09 BLEU score. When we try a larger model, H2KE-big with 213M parameters, we achieve even better translation quality. We thus stick to H2KE-big in the rest of the experiments.
+
+Contemporary Korean Translation. The first row in the row group (C) of Table 3 shows that the model trained with only a small amount of $\langle$ Hanja, cKo $\rangle$ and $\langle$ oKo, cKo $\rangle$ pairs result in low BLEU scores. However, adding the $\langle$ Hanja, oKo $\rangle$ parallel corpus dramatically improves translation quality for the cKo translations, evident from 20-30 BLEU scores increase. This confirms the effectiveness of multilingual training which we hypothesized earlier.
+
+When we take the original Korean (oKo) as translation and compare it against the ground truth contemporary Korean (cKo) as reference, we obtain the BLEU score of 39.74. This score is lower than that of the H2KE's cKo translation. This strongly suggests that the generated translations from our system are more similar to the cKo than the expert's ground truth oKo translations, fulfilling the goal of producing a machine translation system for contemporary Korean.
+
+English Translation. According to the result in the row (D) in Table 3, we observe a similar trend when we use H2KE for translating Hanja into English. We gain significant improvement in translation quality by including the ⟨Hanja, oKo⟩ corpus during training. Finally in the final row (E) of Table 3, we demonstrate that a single H2KE-big model can be trained on all the corpora and can translate Hanja into both old Korean, contemporary Korean and English competitively.
+
+# 4.3 How contemporary is contemporary Korean translation?
+
+Perplexity (Horgan, 1995) is the standard metric for measuring the performance of a language model, and it has been used recently to measure the deterioration of a language model over time by Lazaridou et al. (2021). To identify the difference and similarity between AJD translation, produced by different methods, and the modern Korean language, we calculate the perplexity of translations in the test set under a Korean pre-trained GPT (Kim et al., 2021), and huggingface framework (Wolf et al., 2020). We used H2KE-big from Table 3 (B) in the case of the proposed approach.
+
+
+Figure 2: Per-system perplexity comparison calculated by KoGPT.
+
+
P(pp1(A) < ppl(B))
(B)
+(A)
gt-oKo
Kang et al.
H2KE
gt-cKo
gt-oKo
0.48*
0.28*
0.22*
Kang et al.
0.52*
0.28*
0.20*
H2KE
0.72*
0.72*
0.54
gt-cKo
0.78*
0.80*
0.46
+
+Table 4: Pairwise perplexity comparison of each model calculated by KoGPT. Each cell shows the estimated probability of $ppl(A) < ppl(B)$ by BT model. * indicates statistically significant results with $p < 0.05$
+
+Per-system perplexity. Figure 2 draws each corpus' perplexity as a box. There is a significant perplexity difference between the ground truth cKo (gt-cKo) and oKo (gt-oKo), which means the gt-cKo translation is closer to the modern language than the gt-oKo. Our generated translations result in a lower perplexity than the gt-oKo and Kang et al. (2021); it is closer to the modern language similarly to gt-cKo.
+
+Pairwise Evaluation. Because translations are associated with the same source sentences, respectively, we can compare each pair of systems by fitting Bradley-Terry (BT) model (Peyrard et al., 2021; Bradley and Terry, 1952). The BT model estimates the probability that one system is better than another based on how frequently the former system scores better. We report the estimated probabilities, $P(ppl(A) < ppl(B))$ , in Table 4.
+
+H2KE is more like contemporary Korean than either of the ground truth oKo or Kang et al. (2021) with probability 0.72. As anticipated, ground truth
+
+cKo is significantly more like contemporary Korean than both ground truth oKo and baseline. Between H2KE and the ground truth cKo, we do not observe a significant difference in this evaluation, implying that the proposed H2KE's translations are almost on par with cKo in terms of how probable they are under a language model trained on contemporary Korean. This observation is in agreement with our earlier observation on absolute evaluation.
+
+# 5 Human Evaluation
+
+We conduct human evaluation of Korean translations to confirm that H2KE's translations are both more understandable and accurate than the groundtruth oKo. We use the Direct Assessment (DA) (Graham et al., 2013, 2014, 2017) as the primary method for evaluating translation systems, where the crowd-sourced bilingual human assessors are asked to rate a translation given the source sentences by how adequately it expresses the meaning of the sentences in an analog scale (Akhbardeh et al., 2021).
+
+We cannot however adopt the crowd-sourced DA approach as is because only a few historians can evaluate the meaning of translations by interpreting Hanja. We thus work together with ITKC and ask their experts to evaluate our generated translations according to their internal evaluation criteria. This is the same procedure taken to ensure the quality of human translations at ITKC. Additionally, we conduct another evaluation to confirm whether the new Korean translation improves the understanding of historical documents for non-expert Korean speakers.
+
+# 5.1 Expert Evaluation
+
+Evaluation Protocol. In ITKC, the evaluation criteria for the historical documents are divided into accuracy and fluency. Along each of these aspects, the scores are deducted according to errors that are made and the amount of deduction is determined based on the severity of each error. In the case of accuracy, we deduct -5, -10 and -15 for word-level, phrase-level and sentence-level errors, respectively. In the case of fluency, we deduct -5 for a word-level error. We randomly select 45 test samples from the Annals of Jeongjo with each sample's length capped at 100 Hanja characters, for evaluation. We ask six experts from ITKC to score both ground-truth translations as well as machine-generated translations. Each sample is evaluated by
+
+two experts, and we report the average score. When there is significant disagreement between two experts, the score is adjusted through their discussion.
+
+
+Figure 3: Average value of the deducted score per each translation by experts. Experts identified errors in the translation and subtracted scores according to the evaluation criteria.
+
+Evaluation Result. Figure 3 shows the average deducted scores for all three cases, along both accuracy and fluency. As anticipated, the ground-truth cKo samples exhibit least deduction in their scores, implying that these new translations are indeed without serious translation errors and better translated. On the other hand, the ground-truth oKo samples received most deduction in their scores, which was expected as their low readability and errors motivated re-translation of AJD in the first place. Our samples received worse score deduction than the ground-truth cKo, but were perceived to be significantly better than the ground-truth oKo. In particular we observed significant improvement over the original Korean translations in terms of fluency. This outcome confirms the potential utility of the proposed approach of machine translation for re-translating the entire AJD as well as other historical Hanja documents.
+
+# 5.2 Non-expert Evaluation
+
+Evaluation Protocol. To compare general public's perception of three translation types (gt-oKo, gt-cKo, and H2KE), we recruit 36 Korean speakers and request them to make pairwise comparisons of the readability. Given a triplet $\langle \mathrm{gt - oKo}$ ,gt-cKo,and H2KE> of translations of the same Hanja paragraph, we choose a random pair to give to each evaluator, either $\langle \mathrm{gt - cKo}$ , H2KE>, $\langle \mathrm{gt - cKo}$ ,gt-oKo>, or H2KE, gt-oKo>. They have an option of 'no difference,' although we encourage them to avoid it as much as possible. We use 150 triples $\langle \mathrm{gt - oKo}$ . gt-cKo,H2KE> (450 pairs in total) from AKJ, and 150 pairs $\langle \mathrm{gt - oKo}$ , H2KE> from the annals of all the other kings ('others', in short) for which we do
+
+not have ground-truth contemporary Korean translations. Each evaluator compares 50 pairs, and each pair is assigned three evaluators. There are 12 different survey sheets consisting of 50 pairs each, and each survey is answered by three evaluators independently. The details about the evaluation samples and the statistics of the evaluators are in Appendix E.
+
+
+Figure 4: Result of pairwise comparison of readability by non-expert Korean speakers. The bars on each side represent the win (more understandable) rates against the other side, and the in-between white bars indicate the tie rates. Each error bar indicates the standard deviation of win rates among different survey sheets.
+
+Evaluation Result. We use the majority vote among three evaluators' responses to decide on the winner between each pair. When three people's opinions are divided into A, B, and no difference, we treat the pair as 'no difference'. In Figure 4 we present the mean and the standard deviation of the win rates.
+
+The result from AKJ shows that gt-cKo is unsurprisingly considered easier to understand than gt-oKo is, by $77.3\%$ . This further emphasizes the importance and necessity of new translation of AJD for the general public. The proposed H2EK's translations were considered more readable than oKo in AKJ by $58.0\%$ , which confirms the readability improvement, which was also observed with the annals of the other kings as well. When compared against gt-cKo, gt-cKo was preferred with a probability of $52.0\%$ , implying that there is a room for improvement in the future.
+
+# 6 Further Analysis
+
+# 6.1 Sample-Level Analysis of Korean Translations
+
+The human evaluation confirmed that H2KE significantly improves the readability and quality of the translation compared to the original oKo translations. In this section, we conduct finer-grain analysis. First, we measure how many undesirable transliteration of Hanja words are eliminated by H2KE. These transliterations are often marked in the corpus with their corresponding Hanja words surrounded by parentheses. We thus construct the archaic Hanja-based word set by extracting the gt-cKo's Hanja-based word set from the gt-oKo's. Among these detected transliterations, the proposed H2KE replaces $75\%$ with more understandable contemporary translations.
+
+Table 5 illustrates one sample text in Hanja, ground truth oKo, cKo, and H2KE. The color box represents the transliterated Hanja words. The words that have the same semantic meanings and correspond to each other across different types of translations are grouped using the same color. The ground truth oKo contains many literal translations, i.e. near-transliterations, identified by parentheses, and there is even a new Hanja word (起耕) added by the human translator. Compared to the gt-oKo, H2KE and gt-cKo replace most of those difficult translations with more easily understood ones. These are marked with $\dagger$ . On the other hand, a proper noun, that is supposed to be transliterated, H2KE correctly preserves this behaviour. See Do-jang (導掌) marked with *, which is the name of an institute. In some cases, we notice H2KE generates a translation that is even more readable and more contemporary than the ground-truth contemporary Korean, such as the one marked as §.
+
+# 6.2 Sample-Level Analysis of English Translation.
+
+Table 6 has an example of English translation from H2KE and Papago. As we use the best-performing model for each case, the sample presented from H2KE and Papago are respectively translated from Hanja and oKo. Because Papago is not aware of the historical context, it translates the word ‘요’ (Royal Lecture) to its homonym, a ‘contest.’ In contrast, our model correctly translates it into ‘Royal Lecture.’
+
+
Hanja
導掌* 之 科外† 滥徵†, 已極無狀, 以陳§ 起, 白地† 横斃, 尤極痛騒, 使之考律† 嚴處.Eng.) It is too bad that the Dojang* excessively collected† the tax outside the regulations†. It is even more surprising that the old land§ was regarded as cultivated land and was collected for no reason†. Look at the provisions of the law† and let them deal with it strictly.
+
+Table 5: The translation example of ground truth oKo, cKo, En and our generated cKo translations. The parenthesized words are literally translated from the original Hanja words. The same color box represents the group of words with the same semantic meaning. * indicates the proper noun; the literal translation is allowed. † represents the case that gt-cKo and H2KE-cKo eliminate the literal translation. § is the word only our model can generate a more understandable translation.
+
+
Hanja
隕霜.御經筵.
gt-oKo
서리가내렸다. 전쟁에 나중 갔다.
gt-En
Frost appeared and the King attended the Royal Lecture.
H2KE
Frost covered the ground. The King attended the Royal Lecture.
Papago
It frosted. I went on to the contest.
+
+Table 6: English translation Examples in the test set of the Annals of Sejong (4th King). Our generated sample is translated from Hanja, and the Papago sample is from ground truth oKo.
+
+# 6.3 H2KE beyond AJD
+
+Daily Records of the Royal Court and Important Officials (DRRI) is another Hanja corpus, consisting of journals written in the period between the 21st King Yeongjo and the last Emperor Sungjong. DRRI consists of 2,329 volumes, and $42\%$ of the corpus has been translated manually by experts. Unlike AJD, DRRI's original Hanja documents do not contain any punctuation marks. This corpus is not included in the training data of our model nor that of the baseline by Kang et al. (2021), which allows us to test the corpus-level generalization
+
+ability of our approach. We consider the translated part of DRRI after 2012 as contemporary Korean (cKo) and measure the BLEU score on this portion.
+
+
Model
BLEU
Kang et al. (2021)
12.96
H2KE-oKo
21.50
H2KE-cKo
32.23
+
+Table 7: BLEU score of translations on DRRI.
+
+We make two major observations according to the results in Table 7. First, H2KE-cKo produces translations that are of high quality, evident from BLEU above 30. Second, H2KE-cKo performs favourably to H2KE-oKo, which further confirms that H2KE-cKo is capable of producing translation in contemporary Korean. Finally, we observe that our approach works substantially better than the baseline, which may be due to missing punctuation marks, although we leave more detailed analysis to the future.
+
+# 7 Conclusion
+
+We present H2KE, a neural machine translation system for the AJD that translates from Hanja to contemporary Korean and English. H2KE is built on top of MNMT systems to overcome the low
+
+resource training data problem. H2KE shows a significantly higher BLEU score than the baseline and a current commercial translation system. Based on the perplexity evaluation with KoGPT, the translation samples from H2KE are closer to the contemporary Korean corpus than the ground truth original Korean translations and the baseline. The human evaluation results show that the translation samples from H2KE are more accurate and understandable than the ground truth original Korean. Finally, we translate the entire AJD to contemporary Korean and English with H2KE and publicly release the translations.
+
+In this work, we provide strong evidence that existing algorithms for machine translation and natural language processing generalize to a scenario where data span several centuries of an archaic language. It is highly technical in that it leads to a deeper understanding of existing algorithms and significantly extends the scope of the previous studies.
+
+# Limitations
+
+The Annals of Joseon Dynasty (AJD) were written over the course of about 500 years, so naturally Hanja underwent change during long period. Capturing the temporal change would result in a better performing model. On a related note, some entities, such as locations, and linguistic expressions may have disappeared altogether, and we simply would not be able to express those in today's language without lengthy explanations. In the non-expert evaluation, some of the surveys reported low inter-annotator agreement because there were only three annotators per question and the evaluation of readability is subjective. The range of non-experts' prior knowledge of Korean history varies widely, and this also affects inter-annotator agreement.
+
+# Ethics Statement
+
+The expert evaluation was performed under Institutional Review Board (IRB) approval. It was conducted by the experts from the Institute for the Translation of Korean Classics (ITKC), and evaluation fees were paid to evaluators according to the ITKC's criteria for evaluation fee payment. In recruiting non-expert evaluators, there was no discrimination against minority groups such as age, ethnicity, disability, and gender. They were paid the compensation of more than the minimum wage of Korea.
+
+# Acknowledgements
+
+We would like to thank the Institute for the Translation of Korean Classics (ITKC) for providing expertise on Korean historical documents and their evaluations. This work was partly supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2022R1F1A1064401). KC was supported by Samsung Advanced Institute of Technology (under the project Next Generation Deep Learning: From Pattern Recognition to AI) and NSF Award 1922658 NRT-HDR: FUTURE Foundations, Translation, and Responsibility for Data Science.
+
+# References
+
+Farhad Akhbardeh, Arkady Arkhangorodsky, Magdalena Biesialska, Ondrej Bojar, Rajen Chatterjee, Vishrav Chaudhary, Marta R Costa-jussa, Cristina España-Bonet, Angela Fan, Christian Federmann, et al. 2021. Findings of the 2021 conference on machine translation (wmt21). In Proceedings of the Sixth Conference on Machine Translation, pages 1-88.
+Ralph Allan Bradley and Milton E Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324-345.
+Orhan First, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. arXiv preprint arXiv:1601.01073.
+Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Continuous measurement scales in human evaluation of machine translation. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 33-41.
+Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2014. Is machine translation getting better over time? In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 443-451.
+Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2017. Can machine translation systems be evaluated by the crowd alone. *Natural Language Engineering*, 23(1):3-30.
+John Horgan. 1995. From complexity to perplexity. Scientific American, 272(6):104-109.
+Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.
+
+Kyeongpil Kang, Kyohoon Jin, Soyoung Yang, Soojin Jang, Jaegul Choo, and Youngbin Kim. 2021. Restoring and mining the records of the joseon dynasty via neural language modeling and machine translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4031-4042.
+Ildoo Kim, Gunsoo Han, Jiyeon Ham, and Woonhyuk Baek. 2021. Kogpt: Kakaobrain korean(hangul) generative pre-trained transformer. https://github.com/kakaobrain/kogpt.
+Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66-75.
+Angeliki Lazaridou, Adhi Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liska, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d'Autume, Tomas Kocisky, Sebastian Ruder, et al. 2021. Mind the gap: Assessing temporal generalization in neural language models. Advances in Neural Information Processing Systems, 34.
+Hyoung-Gyu Lee, Jun-Seok Kim, Joong-Hwi Shin, Jaesong Lee, Ying-Xiu Quan, and Young-Seob Jeong. 2016. papago: A machine translation service with word sense disambiguation and currency conversion. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations, pages 185-188.
+Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742.
+Toshiaki Nakazawa, Chenchen Ding, Raj Dabre, Anoop Kunchukuttan, Nobushige Doi, Yusuke Oda, Ondrej Bojar, Shantipriya Parida, Isao Goto, and Hidayama Mino. 2019. Proceedings of the 6th workshop on asian translation. In Proceedings of the 6th Workshop on Asian Translation.
+Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. *fairoseq: A fast, extensible toolkit for sequence modeling*. In *Proceedings of NAACL-HLT* 2019: Demonstrations.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318.
+Chanjun Park, Chanhee Lee, Yeongwook Yang, and Heuseok Lim. 2020. Ancient korean neural machine translation. IEEE Access, 8:116617-116625.
+
+Maxime Peyrard, Wei Zhao, Steffen Eger, and Robert West. 2021. Better than average: Paired evaluation of nlp systems. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2301-2315.
+Matt Post. 2018. A call for clarity in reporting bleu scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+
+# A Translation Samples
+
+# A.1 Annals of King Jeongjo (AKJ)
+
+Table 11 shows more examples of AKJ translated by H2KE.
+
+# A.2 Daily Records of the Royal Court and Important Officials (DRRI)
+
+Table 12 represents the translation samples of DRRI. The Hanja source sentences of DRRI do not contain the punctuation mark. The H2KE can translate the Hanja sentence to the two types of Korean, new and old Korean, by adding a different language token in front of the source sentence, so we compare both. The translation samples of H2KE-nKo show comparable quality to the gt-nKo, human translations. H2KE-oKo has the same semantic meaning as the Hanja source sentence but hurts the readability. The baseline model (Kang et al., 2021) cannot generate the correct translation; a token repetition problem exists in their samples.
+
+# B Data Balancing Experiment
+
+Since our dataset consists of imbalanced types of language pairs, we experiment with the balance technology of up/down sampling proposed in Liu et al. (2020). The result in Table 8 indicates that the up/down sampling leads to improvements in the translation to English but causes degradations in the translation to Korean.
+
+
w/o balancing
w/ balancing
HJ → oKo
46.58
45.04
HJ → cKo
46.11
45.14
oKo → cKo
45.76
45.25
HJ → EN
24.62
25.10
oKo → EN
24.59
25.20
+
+# C Winning Rate in Pairwise Perplexity Comparison
+
+Table 10 represents the winning rate in pairwise perplexity comparison. Consistent with the BT comparison on Table4, the translations samples from H2KE are more closer to the gt-nKo than the gt-oKo and baseline model. The samples that have same perplexity are exactly same, because of the short length of the source sentences.
+
+# D Expert Evaluation
+
+Table 9 shows the part of the ITKC's criteria for evaluating Korean translation of historical document written in Hanja. We directly adopt those criteria for our expert evaluation.
+
+Table 8: Effect of data balancing on H2KE-big. The values in 'w/o balancing' column are from row (E) of Table 3.
+
+
Error Type
Scale
Description
Accuracy
-5
• Mistranslation of a vocabulary
+• Incomplete translation of a phrase
-10
• Mistranslation of a phrase
-15
• Consecutive mis-translation of phrases
+• Mistranslation of a sentence
Fluency
-5
• Awkward translation
+• Literal translation of unused Hanja words
+
+Table 9: Evaluation criteria of ITKC for historical document translation.
+
+# E Non-expert Evaluation
+
+Figure 5 shows an example question of the non-expert evaluation. The average length of the evaluated samples is about 300 Korean letters including the spaces. The ages of the non-expert evaluators range from 21 to 37, and the average is 24. It implies that the evaluators are more familiar to modern Korean of the 21st century (when AJD is being newly translated) than old Korean of the 20th century (when AJD was first translated).
+
+
A
B
ppl(A) <ppl(B)(%)
ppl(A) = ppl(B) (%)
ppl(A) >ppl(B)(%)
gt-nKo
gt-oKo
67.96
13.09
18.94
gt-nKo
H2KE
38.71
25.90
35.37
gt-nKo
Kang et al. (2021)
62.39
13.92
23.67
H2KE
gt-oKo
68.52
13.92
17.54
Kang et al. (2021)
gt-oKo
38.71
15.59
45.68
H2KE
Kang et al. (2021)
61.55
14.20
24.23
+
+
+Figure 5: Screenshot of an example of non-expert evaluation. It asks to choose the more understandable one given a pair (A, B) of translations. The evaluators could choose either A, no difference, or B.
+
+Table 10: The winning rate in pairwise perplexity comparison of our models, ground truth samples and the baseline model.
+
+
Hanja
卜相. 拜判教寧徐命善爲右議政,金尚喆·鄭在謙陞爲領左相.
Eng.) [The king] nominated candidates for the State Council. He appointed Seo Myeong-seon, the Magistrate of Donnyeongbu, to the Right State Councillor, and promoted Kim Sang-cheol and Jeong Jon-gyeom to the Chief State Councillor and the Left State Councillor.
Eng.) Yangsa said, “we ask to apply the law to make wife and children as slaves and confiscate family property on the traitor Lee Chan as in the document from the State Tribunal, and enforce the law as soon as possible on Hong Gye-neung as well,” but it was not granted.
+
+(c)
+
+Table 12: The translation samples of DRRI.
\ No newline at end of file
diff --git a/translatinghanjahistoricaldocumentstocontemporarykoreanandenglish/images.zip b/translatinghanjahistoricaldocumentstocontemporarykoreanandenglish/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..fba21ff997edcdd37b446398afcb4424c9b6a554
--- /dev/null
+++ b/translatinghanjahistoricaldocumentstocontemporarykoreanandenglish/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0163d67e10010c1c11efbc07cd21b874019420205686c55b2e49b45b41de5d8d
+size 1031576
diff --git a/translatinghanjahistoricaldocumentstocontemporarykoreanandenglish/layout.json b/translatinghanjahistoricaldocumentstocontemporarykoreanandenglish/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..09be314a630f1a88ffe81ce2f3d9d6fd19d83a46
--- /dev/null
+++ b/translatinghanjahistoricaldocumentstocontemporarykoreanandenglish/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b52c049ba385dc44dabfc9270f2845035ebe5c74623bb3bfbc1fabf22ad2c937
+size 330149
diff --git a/translistatransformerbasedlinguisticallyinformedsanskrittokenizer/47eb6c49-06fd-4095-8f75-af861d515cb6_content_list.json b/translistatransformerbasedlinguisticallyinformedsanskrittokenizer/47eb6c49-06fd-4095-8f75-af861d515cb6_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..10ca6252f41b4bdd6811f5b05750291f084a030a
--- /dev/null
+++ b/translistatransformerbasedlinguisticallyinformedsanskrittokenizer/47eb6c49-06fd-4095-8f75-af861d515cb6_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:61a7f89e929dca603eb33d0c8f4065d4a8fc48404fe22e6fbbb31f178702c836
+size 76801
diff --git a/translistatransformerbasedlinguisticallyinformedsanskrittokenizer/47eb6c49-06fd-4095-8f75-af861d515cb6_model.json b/translistatransformerbasedlinguisticallyinformedsanskrittokenizer/47eb6c49-06fd-4095-8f75-af861d515cb6_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..700534ab2f6109bb0bff70b17504c5f9038e07f8
--- /dev/null
+++ b/translistatransformerbasedlinguisticallyinformedsanskrittokenizer/47eb6c49-06fd-4095-8f75-af861d515cb6_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c0fc59644c4d8524cab4622d7e8d902b6c5d794e96e5b4347d58d1d1b7bd6776
+size 88574
diff --git a/translistatransformerbasedlinguisticallyinformedsanskrittokenizer/47eb6c49-06fd-4095-8f75-af861d515cb6_origin.pdf b/translistatransformerbasedlinguisticallyinformedsanskrittokenizer/47eb6c49-06fd-4095-8f75-af861d515cb6_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4b568562be71ac873d322db5239a571222d9aba6
--- /dev/null
+++ b/translistatransformerbasedlinguisticallyinformedsanskrittokenizer/47eb6c49-06fd-4095-8f75-af861d515cb6_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:da571ba8b060cbdc260d37b33d2c29ebf8d388c360a5296582493bfd560a5430
+size 1270588
diff --git a/translistatransformerbasedlinguisticallyinformedsanskrittokenizer/full.md b/translistatransformerbasedlinguisticallyinformedsanskrittokenizer/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b6584720aed6b1e9bb8e613b59726c03e26f0d8f
--- /dev/null
+++ b/translistatransformerbasedlinguisticallyinformedsanskrittokenizer/full.md
@@ -0,0 +1,294 @@
+# TransLIST: A Transformer-Based Linguistically Informed Sanskrit Tokenizer
+
+Jivnesh Sandhan1, Rathin Singha2, Narein Rao1, Suvendu Samanta1, Laxmidhar Behera1,4 and Pawan Goyal3
+
+$^{1}$ IIT Kanpur, $^{2}$ UCLA, $^{3}$ IIT Kharagpur, $^{4}$ IIT Mandi
+
+jivnesh@iitk.ac.in,rsinghal08@g.ucla.edu,
+
+nrao20@iitk.ac.in, pawang@cse.iitkgp.ac.in
+
+# Abstract
+
+Sanskrit Word Segmentation (SWS) is essential in making digitized texts available and in deploying downstream tasks. It is, however, non-trivial because of the sandhi phenomenon that modifies the characters at the word boundaries, and needs special treatment. Existing lexicon driven approaches for SWS make use of Sanskrit Heritage Reader, a lexicon-driven shallow parser, to generate the complete candidate solution space, over which various methods are applied to produce the most valid solution. However, these approaches fail while encountering out-of-vocabulary tokens. On the other hand, purely engineering methods for SWS have made use of recent advances in deep learning, but cannot make use of the latent word information on availability.
+
+To mitigate the shortcomings of both families of approaches, we propose Transformer based Linguistically Informed Sanskrit Tokenizer (TransLIST) consisting of (1) a module that encodes the character input along with latent-word information, which takes into account the sandhi phenomenon specific to SWS and is apt to work with partial or no candidate solutions, (2) a novel soft-masked attention to prioritize potential candidate words and (3) a novel path ranking algorithm to rectify the corrupted predictions. Experiments on the benchmark datasets for SWS show that TransLIST outperforms the current state-of-the-art system by an average 7.2 points absolute gain in terms of perfect match (PM) metric. $^{1}$
+
+# 1 Introduction
+
+Sanskrit is considered as a cultural heritage and knowledge preserving language of ancient India. The momentous development in digitization efforts has made ancient manuscripts in Sanskrit readily available for the public domain. However, the usability of these digitized manuscripts is limited
+
+due to linguistic challenges posed by the language. SWS conventionally serves the most fundamental prerequisite for text processing step to make these digitized manuscripts accessible and to deploy many downstream tasks such as text classification (Sandhan et al., 2019; Krishna et al., 2016b), morphological tagging (Gupta et al., 2020; Krishna et al., 2018), dependency parsing (Sandhan et al., 2021; Krishna et al., 2020a), automatic speech recognition (Kumar et al., 2022) etc. SWS is not straightforward due to the phenomenon of sandhi, which creates phonetic transformations at word boundaries. This not only obscures the word boundaries but also modifies the characters at juncture point by deletion, insertion and substitution operation. Figure 1 illustrates some of the syntactically possible splits due to the language-specific sandhi phenomenon for Sanskrit. This demonstrates the challenges involved in identifying the location of the split and the kind of transformation performed at word boundaries.
+
+
+Set of candidate solutions
+Figure 1: An example to illustrate challenges posed by sandhi phenomenon for SWS task.
+
+The recent surge in SWS datasets (Krishna et al., 2017; Krishnan et al., 2020) has led to various methodologies to handle SWS. Existing lexicon-driven approaches rely on a lexicon driven shallow parser, popularly known as Sanskrit Heritage Reader (SHR) (Goyal and Huet, 2016a). This line of approaches (Krishna et al., 2016a, 2018, 2020b)
+
+formulate the task as finding the most accurate semantically and syntactically valid solution from the candidate solutions generated by SHR. With the help of the significantly reduced exponential search space provided by SHR and linguistically involved feature engineering, these lexicon driven systems (Krishna et al., 2020b, 2018) report close to state-of-the-art performance for the SWS task. However, these approaches rely on the completeness assumption of SHR, which is optimistic given that SHR does not use domain specific lexicons. These models are handicapped by the failure of this preliminary step. On the other hand, purely engineering based knowledge-lean data-centric approaches (Hellwig and Nehrdich, 2018; Reddy et al., 2018; Aralikatte et al., 2018) perform surprisingly well without any explicit hand-crafted features and external linguistic resources. These purely engineering based approaches are known for their ease of scalability and deployment for training/inference. However, a drawback of these approaches is that they are blind to latent word information available through external resources.
+
+There are also lattice-structured approaches (Zhang and Yang, 2018; Gui et al., 2019; Li et al., 2020) (originally proposed for Chinese Named Entity Recognition (NER), which incorporate lexical information in character-level sequence labelling architecture). However, these approaches cannot be directly applied for SWS; since acquiring word-level information is not trivial due to sandhi phenomenon. To overcome these shortcomings, we propose Transformer-based Linguistically Informed Tokenizer (TransLIST). TransLIST is a perfect blend of purely engineering and lexicon driven approaches for the SWS task and provides the following advantages: (1) Similar to purely engineering approaches, it facilitates ease of scalability and deployment during training/inference. (2) Similar to lexicon driven approaches, it is capable of utilizing the candidate solutions generated by SHR, which further improves the performance. (3) Contrary to lexicon driven approaches, TransLIST is robust and can function even when candidate solution space is partly available or unavailable.
+
+Our key contributions are as follows: (a) We propose the linguistically informed tokenization module ( $\S 2.1$ ) which accommodates language-specific sandhi phenomenon and adds inductive bias for the SWS task. (b) We propose a novel soft-masked attention ( $\S 2.2$ ) that helps to add inductive bias for
+
+prioritizing potential candidates keeping mutual interactions between all candidates intact. (c) We propose a novel path ranking algorithm ( $\S 2.3$ ) to rectify the corrupted predictions. (d) We report an average 7.2 points perfect match absolute gain ( $\S 3$ ) over the current state-of-the-art system (Hellwig and Nehrdich, 2018).
+
+We elucidate our findings by first describing TransLIST and its key components (§ 2), followed by the evaluation of TransLIST against strong baselines on a test-bed of 2 benchmark datasets for the SWS task (§ 3). Finally, we investigate and delve deeper into the capabilities of the proposed components and its corresponding modules (§ 4).
+
+# 2 Methodology
+
+In this section, we will examine the key components of TransLIST which includes a linguistically informed tokenization module that encodes character input with latent-word information while accounting for SWS-specific sandhi phenomena (§ 2.1), a novel soft-masked attention to prioritise potential candidate words (§ 2.2) and a novel path ranking algorithm to correct mispredictions (§ 2.3).
+
+# 2.1 Linguistically Informed Sanskrit Tokenizer (LIST)
+
+Lexicon driven approaches for SWS are brittle in realistic scenarios and purely engineering based approaches do not consider the potentially useful latent word information. We propose a win-win/robust solution by formulating SWS as a character-level sequence labelling integrated with latent word information from the SHR as and when available. TransLIST is illustrated with an example svetodhavati in Figure 2. SHR employs a Finite State Transducer (FST) in the form of a lexical juncture system to obtain a compact representation of candidate solution space aligned with the input sequence. As shown in Figure 2(a), we receive the candidate solution space from the SHR engine. Here, svetah dhavati and sveta udha avati are two syntactically possible splits.3 It does not suggest the final segmentation. The candidate space includes words such as sva, sveta and etah whose boundaries are modified with respect to the input sequence due to sandhi phenomenon. SHR gives us mapping (head and tail position) of all the candidate nodes with the input sequence. In
+
+
+(a)
+
+
+(b)
+Figure 2: Illustration of TransLIST with a toy example "śvetodhāvatī". Translation: "The white (horse) runs." (a) LIST module: We use the candidate solutions (two possible candidate solutions are highlighted with ■, ■ colors where the latter is the gold standard) from SHR if available; in the absence of SHR, we resort to using n-grams $(n \leq 4)$ . (b) TransLIST architecture: In span encoding, each node is represented by head and tail position index of its character in the input sequence. ■, ■, ■ denote tokens, heads and tails, respectively. The SHR helps to include words such as šva, švetah and etah whose boundaries are modified with respect to input sequence due to sandhi phenomenon. Finally, on the top of the Transformer encoder, classification head learns to predict gold standard output shown by ■ for the corresponding input character nodes only.
+
+case such mapping is incorrect for some cases, we rectify it with the help of deterministic algorithm by matching candidate nodes with the input sentence and finding the closest match. In the absence of SHR, we propose to use all possible n-grams $(n\leq 4)^{4}$ which helps to add inductive bias about neighboring candidates in the window size of $4.^{5}$ . We feed the candidate words/n-grams to the Transformer encoder and the classification head learns to predict gold standard output for the corresponding input character nodes only. The output vocabulary consists of unigram characters (e.g., $\mathfrak{s},\mathfrak{v}$ ), bigrams and tri-grams (e.g., ah_). The output vocabulary contains '_' to represent spacing between words. Consequently, TransLIST is capable of using both character-level modelling as well as latent word information as and when available. On the other hand, purely engineering approaches rely only on character-level modelling and Lexicon driven approaches rely only on word-level information from SHR to handle sandhi.
+
+# 2.2 Soft Masked Attention (SMA)
+
+Transformers (Vaswani et al., 2017) have been proven to be effective for capturing long-distance
+
+dependencies in a sequence. The self-attention property of a Transformer facilitates effective interaction between character and available latent word information. There are two preliminary prerequisites for effective modelling of inductive bias for tokenization: (1) Allow interactions between the candidate words/characters within and amongst chunks. (2) Prioritize candidate words containing the input character for which a prediction is being made (e.g., in Figure 2(b), $\hat{s}$ va and $\hat{s}$ vetah are prioritized amongst the candidate words when predicting for the character $\hat{s}$ ).6 The vanilla self-attention (Vaswani et al., 2017) can address both the requirements; however, it has to self-learn the inductive bias associated with prioritisation. It may not be an effective solution in low-resourced settings. On the other hand, if we use hard-masked attention to address the second prerequisite, we lose mutual interactions between the candidates. Hence, we propose a novel soft-masked attention which helps to address both the requirements effectively. To the best of our knowledge, there is no existing soft-masked attention similar to ours. We formally discuss this below.
+
+Self-attention maps a query and a set of key-value pairs to an output as discussed in Vaswani et al. (2017). For an input $x = (x_{1},\dots,x_{n})$
+
+where $x_{i} \in R^{d_{x}}$ , self-attention gives an output $z = (z_{1},\dots,z_{n})$ where $z_{i} \in R^{d_{z}}$ . We presume the standard formulation of vanilla self-attention (Vaswani et al., 2017) where $d_{x}$ is the dimension of input word representation and $d_{z}$ is the projection dimension. Here, $W^{Q},W^{K},W^{V} \in R^{d_{x}\times d_{z}}$ are parameter matrices. For simplicity, we ignore multi-head attention in equations 1, 2 and 3.
+
+$$
+z _ {i} = \sum_ {j = 1} ^ {n} \alpha_ {i j} \left(x _ {j} W ^ {V}\right) \tag {1}
+$$
+
+$$
+\alpha_ {i j} = \frac {\exp \left(e _ {i j}\right)}{\sum_ {k = 1} ^ {n} \exp \left(e _ {i k}\right)} \tag {2}
+$$
+
+$$
+e _ {i j} = \frac {\left(x _ {i} W ^ {Q}\right) \left(x _ {j} W ^ {K}\right) ^ {T}}{\sqrt {d _ {z}}} \tag {3}
+$$
+
+In soft-masked attention, we provide a prior about interactions between candidate words and the input characters using a span encoding $(s_{ij} \in R^{d_z})$ (Li et al., 2020). Intuitively, it helps inject inductive bias associated with prioritisation whilst maintaining mutual interactions between the candidates.
+
+Formally, we modify Equation 2 to define soft masked attention as:
+
+$$
+\alpha_ {i j} ^ {S M} = \frac {M _ {i j} \exp \left(e _ {i j}\right)}{\sum_ {k = 1} ^ {n} M _ {i k} \exp \left(e _ {i k}\right)} \tag {4}
+$$
+
+where $M\in R^{n\times n}$ $M_{ij}\in [0,1]$ . $M_{ij}$ is defined as:
+
+$$
+M _ {i j} = \frac {\left(x _ {i} W ^ {Q}\right) \left(s _ {i j} W ^ {R}\right) ^ {T}}{\sqrt {d _ {z}}} \tag {5}
+$$
+
+$W^{R} \in R^{d_{z} \times d_{z}}$ is a learnable parameter which projects $s_{ij}$ into a location-based key vector space. Summarily, the proposed SMA module helps to prioritize potential candidate words with the help of separation, inclusion and intersection information between nodes. Finally, we calculate the output $z$ with the help of the proposed SMA as follows:
+
+$$
+z _ {i} = \sum_ {j = 1} ^ {n} \alpha_ {i j} ^ {S M} \left(x _ {j} W ^ {V}\right) \tag {6}
+$$
+
+Next, we discuss the span position encoding.
+
+Span position encoding is one of the backbones of the proposed soft-masked module. It is utilized to capture the interactions between the candidate words and the sequence of input characters. Each span/node (which is a character/word and its corresponding position in the input sentence) is represented by the head and tail which denote the
+
+position index of the initial and final characters of the token in the input sequence, as shown in Figure 2(b). The span of character is characterized by the same head and tail position index. For example, head[i] and tail[i] represent the head and tail index of span $x_{i}$ , respectively. The separation, inclusion and intersection information between nodes $x_{i}$ and $x_{j}$ can be captured by the four distance equations 7-10.
+
+$$
+d _ {i j} ^ {(h h)} = \operatorname {h e a d} [ i ] - \operatorname {h e a d} [ j ] \tag {7}
+$$
+
+$$
+d _ {i j} ^ {(h t)} = \operatorname {h e a d} [ i ] - \operatorname {t a i l} [ j ] \tag {8}
+$$
+
+$$
+d _ {i j} ^ {(t h)} = \operatorname {t a i l} [ i ] - \operatorname {h e a d} [ j ] \tag {9}
+$$
+
+$$
+d _ {i j} ^ {(t t)} = \operatorname {t a i l} [ i ] - \operatorname {t a i l} [ j ] \tag {10}
+$$
+
+The final span encoding is a non-linear transformation of these 4 distances:
+
+$$
+s _ {i j} = \operatorname {R e L U} \left(w _ {s} \left(p _ {d _ {i j} ^ {(h h)}} \oplus p _ {d _ {i j} ^ {(h t)}} \oplus p _ {d _ {i j} ^ {(t h)}} \oplus p _ {d _ {i j} ^ {(t t)}}\right)\right) \tag {11}
+$$
+
+where $w_{s}\in R$ is a learnable parameter, $\oplus$ is a concatenation operation and $p_d\in R^{\frac{dz}{4}}$ is a sinusoidal position encoding similar to Vaswani et al. (2017).
+
+# 2.3 Path Ranking for Corrupted Predictions (PRCP)
+
+Our error analysis (§ 4) suggests that sometimes the proposed system predicts words that are not part of the candidate solution space. These mistakes can be rectified with the help of SHR's candidate solutions by appropriately substituting suitable candidates. We refer to the prediction corresponding to a chunk that does not fall in the candidate solution space, as a corrupted prediction and define a path as the sequence of characters in a candidate solution for a given input. We enumerate all the possible directed paths (In Figure 2(a), two possible candidate solutions are highlighted with colors) corresponding to the input (with a corrupted prediction) and formulate the task as a path ranking problem. While designing the path scoring function (S), we consider the following criteria: (1) Select a path consisting of semantically coherent candidate words. We use an integrated judgment from two sources. First, we prefer a path having a high log-likelihood (LL) score as per TransLIST to choose a semantically coherent path in line with the contextual information of TransLIST. Second,
+
+we reinforce the scoring function (S) by considering the perplexity score $(\rho)$ for the path from the character-level language model. (2) To avoid paths consisting of over-generated segmentation provided by SHR, we use a penalty proportional to the number of words $(|W|)$ present in the path to prefer paths with less number of words. This gives us the following path scoring function (S):
+
+$$
+S = \frac {L L _ {T r a n s L I S T}}{\rho_ {C h a r L M} \times | W |}
+$$
+
+where
+
+$$
+\begin{array}{l} L L _ {T r a n s L I S T} = \log \text {- l i k e l i h o o d b y T r a n s L I S T} \\ \rho_ {\text {C h a r L M}} = \text {P e r p l e x i t y} \\ | W | = \text {N u m b e r o f w o r d s p r e s e n t i n p a t h} \\ \end{array}
+$$
+
+# 3 Experiments
+
+Data and Metrics: Currently, Digital Corpus of Sanskrit (Hellwig, 2010, DCS) has more than 600,000 morphologically tagged text lines. It consists of digitized constructions composed in prose or poetry over a wide span of 3000 years. Summarily, DCS is a perfect representation of various writing styles depending on time and domains. We use two available benchmark datasets (Krishna et al., 2017, SIGHUM)7 and (Krishnan et al., 2020, Hackathon) for SWS. Both datasets are subset of DCS (Hellwig, 2010). These datasets also come with candidate solution space generated by SHR for SWS. We prefer Krishna et al. (2017, SIGHUM) over a relatively larger dataset (Hellwig and Nehrdich, 2018) to obviate the time and efforts required for obtaining candidate solution space. We obtain the ground truth segmentation solutions from DCS. We could not use DCS10k (Krishna et al., 2020b) due to partly missing gold standard segmentation (inflections) for almost $50\%$ data points. SIGHUM consists of 97,000, 3,000 and 4,200 sentences as train, dev, test set, respectively. Similarly, Hackathon consists of 90,000, 10,332 and 9,963 sentences as train, dev and test set, respectively. We use the following word-level evaluation metrics: macro-averaged Precision (P), Recall (R), F1-score (F) and the percentage of sentences with perfect matching (PM).
+
+Hyper-parameter settings: For the implementation of TransLIST, we build on top of codebase by Li et al. (2020). We use the following hyperparameters for the best configuration of TransLIST: number of epochs as 50 and a dropout rate of 0.3 with a learning rate of 0.001. We release our codebase and datasets publicly under the Apache license 2.0. All the artifacts used in this work are publicly available for the research purpose. For all the systems, we do not use any pretraining. All the input representations are randomly initialized. We use GeForce RTX 2080, 11 GB GPU memory computing infrastructure for our experiments.
+
+Baselines: We consider two lexicon-driven approaches where Krishna et al. (2016a, SupPCRW) formulate SWS as an iterative query expansion problem and Krishna et al. (2018, Cliq-EBM) deploy a structured prediction framework. Next, we evaluate four purely-engineering based approaches, namely, Encoder-Decoder framework (Reddy et al., 2018, Seq2Seq), character-level sequence labelling system with combination of recurrent and convolution element (Hellwig and Nehrdich, 2018, rcNNSS), vanilla Transformer (Vaswani et al., 2017) and character-level Transformer with relative position encoding (Yan et al., 2019, TENER). Finally, we consider lattice-structured approaches originally proposed for Chinese NER which incorporate lexical information in character-level sequence labelling architecture. These approaches consist of lattice-structured LSTM (Zhang and Yang, 2018, Lattice-LSTM), graph neural network (GNN) based architecture (Gui et al., 2019, Lattice-GNN) and Transformer based architecture (Li et al., 2020, FLAT-Lattice). TransLIST: As per § 2.1, we report two variants: (a) TransLISTngrams which makes use of only n-grams, and (b) TransLIST which makes use of SHR candidate space.
+
+Results: Table 1 reports the results for the best performing configurations of all the baselines on the test set of benchmark datasets for the SWS task. Except purely engineering based systems (Seq2seq, TENER, Transformer and rcNN-SS), all systems leverage linguistically refined candidate solution space generated by SHR. Among the lattice-structured systems, FLAT-Lattice demonstrates competing performance against rcNN-SS.
+
+
SIGHUM
Hackathon
Model
P
R
F
PM
P
R
F
PM
Seq2seq
73.44
73.04
73.24
29.20
72.31
72.15
72.23
20.21
SupPCRW
76.30
79.47
77.85
38.64
-
-
-
-
TENER
90.03
89.20
89.61
61.24
89.38
87.33
88.35
49.92
Lattice-LSTM
94.36
93.83
94.09
76.99
91.47
89.19
90.31
65.76
Lattice-GNN
95.76
95.24
95.50
81.58
92.89
94.31
93.59
70.31
Transformer
96.52
96.21
96.36
83.88
95.79
95.23
95.51
77.70
FLAT-Lattice
96.75
96.70
96.72
85.65
96.44
95.43
95.93
77.94
Cliq-EBM
96.18
97.67
96.92
78.83
-
-
-
-
rcNN-SS
96.86
96.83
96.84
87.08
96.40
95.15
95.77
77.62
TransLISTngrams
96.97
96.77
96.87
86.52
96.68
95.74
96.21
79.28
TransLIST
98.80
98.93
98.86
93.97
97.78
97.44
97.61
85.47
+
+Table 1: Performance evaluation between baselines in terms of P, R, F and PM metrics. The significance test between the best baselines, rcNN-ss, FLAT-lattice and TransLIST in terms of recall/perfect-match metrics: $p < 0.05$ (as per t-test, for both the datasets). We do not report the performance of SupPCRW and Cli-EBM on Hackathon dataset due to unavailability of codebase. On SIGHUM, we report numbers from their papers. The best baseline's results for the corresponding datasets are underlined. The overall best results per column are highlighted in bold.
+
+We find that rcNN-SS and FLAT-Lattice perform the best among all the baselines on SIGHUM and Hackathon datasets, respectively.
+
+Both the TransLIST variants outperforms all the baselines in terms of all the evaluation metrics with TransLIST providing an average 1.8 points (F) and 7.2 points (PM) absolute gain with respect to the best baseline systems, rcNN-SS (on SIGHUM) and FLAT-Lattice (on Hackathon). Even when the SHR candidate space is not available, the proposed system can use $\mathrm{TransLIST}_{\mathrm{ngrams}}$ , which provides an average 0.11 points (F) and 0.39 points (PM) absolute gain over the best baselines. $\mathrm{TransLIST}_{\mathrm{ngrams}}$ gives comparable performance to rcNN-SS on SIGHUM dataset, while on the Hackathon dataset, it performs significantly better than FLAT-Lattice ( $p < 0.05$ as per t-test). The wide performance gap between TransLIST and $\mathrm{TransLIST}_{\mathrm{ngrams}}$ demonstrates the effectiveness of using SHR candidate space, when available. Summarily, we establish a new state-of-the-art results with the help of meticulously stitched LIST, SMA and PRCP modules. The knowledge of the candidate space by SHR gives an extra advantage to TransLIST. Otherwise, natural choice is the proposed purely engineering variant $\mathrm{TransLIST}_{\mathrm{ngrams}}$ when that is not available.
+
+# 4 Analysis
+
+In this section, we investigate various questions to dive deeper into the proposed components and investigate the capabilities of various modules. We
+
+
+(a)
+
+
+(b)
+Figure 3: Ablations on (a) TransLIST (b) PRCP module in terms of PM (SIGHUM-test). Each ablation in (a) removes a single module from TransLIST. For example, “-SMA” removes SMA from TransLIST. For (b), ablations are shown by removing a particular term from path scoring function $(S)$ .
+
+use SIGHUM dataset for the analysis.
+
+(1) Ablation analysis: Here, we study the contribution of different modules towards the final performance of TransLIST. Figure 3(a) illustrates ablations in terms of PM when a specific module is removed from TransLIST. For instance, '-LIST' corresponds to character-level transformer encoder with SMA and PRCP. Removal of any of the modules degrades the performance. Figure 3(a) shows that LIST module is the most crucial for providing inductive bias of tokenization. Also, removal of 'PRCP' module has a large impact on the performance. We observe that the PRCP module gets activated for 276 data points out of 4,200 data points in the test set. We then deep dive into the PRCP path scoring function in Figure 3(b), which consists of 3 terms, namely, penalty $(|W|)$ , perplexity
+
+
+(a) char-char
+
+
+(b) word-word
+
+
+(c) char-word
+Figure 4: SMA probing: Illustration of char-char, char-word and word-word interactions. The strength of the SMA decreases in the following order: red, orange, green and blue. Char-char attention mostly focuses on characters present in the vicinity of window size 1. Word-word interactions are able to capture whether a word is subword of another word or not. Finally, we find that quality of attention goes down for char-word as we move as per the following order: in vocabulary gold words (pink), in vocabulary non-golds (black) and out-of-vocabulary words (red). Some of the attentions are invisible due to very low attention score.
+
+score by CharLM $(|\rho|)$ and log-likelihood $(LL)$ by TransLIST, respectively. We remove a single term at a time from the path scoring function, and observe each of the terms used in the scoring function plays a major role in the final performance.
+
+(2) Comparative analysis of potential LIST module variants to add inductive bias for tokenization: We evaluate possible LIST variants which can help inject inductive bias for tokenization via auxiliary (word) nodes illustrated in Figure 2(b): (a) sandhi rules: We use sandhi rules as a proxy to indicate potential modifications at specific position in the input sequence. For example, if input chunk contains the character 'o' (Figure 1) then it can be substituted with two possibilities $\bar{o} \rightarrow a\text{-}\bar{u}/ah$ . We provide this proxy information through auxiliary nodes. (b) Sanskrit vocab: We obtain a list of vocabulary words from DCS corpus (Hellwig, 2010) and add the words which can be mapped to the input character sequence using a string matching algorithm. (c) n-grams: This is TransLISTngrams (d) SHR: We follow the exact settings as described in § 2.1 except that we do not use the PRCP component. In Table 2, we compare these with the purely engineering variant of TransLIST (Base system: only character-level Transformer) where no induc
+
+tive bias for tokenization is injected. Clearly, due to availability of enriched candidate space, SHR variant outperforms all its peers. However, competing performance of n-gram variant is appealing because it completely obliviates the dependency on SHR and remains unaffected in the absence of SHR's candidate space.
+
+
System
P
R
F
PM
Base system
92.75
92.62
92.69
72.33
+sandhi rules
93.53
93.70
93.62
75.71
+Sanskrit Vocab
96.75
96.70
96.72
85.65
+n-grams
96.97
96.77
96.87
86.52
+SHR
97.79
97.45
97.62
88.47
+
+Table 2: The comparison (on SIGHUM-test set) in between LIST variants. $^+$ indicates system where the corresponding variant is augmented with the base system. We do not activate PRCP for any of these systems.
+
+(3) Probing analysis on SMA: Here we analyze whether SMA upholds the prerequisite for effective modelling of inductive bias, i.e., prioritize candidate words which contain the input character for which the prediction is being made. Figure 4 illustrates three types of interactions, namely, char- char, char-word and word-word. We use color coding scheme to indicate the strength of atten
+
+tion weight. The attention weight decreases in the following order: Red, Orange, Green and Blue. Char-char attention mostly focuses on characters present in the vicinity of window size 1. This local information is relevant to make decisions regarding possible sandhi split. Word-word interactions are able to capture whether a word is subword of another word or not. Finally, for char-word attention, we find that quality of attention goes down as we move as per the following order: in vocabulary gold words (pink), in vocabulary non-golds (black) and out-of-vocabulary (unseen in training but recognized by SHR) gold words (red). While the drop in attention from in-vocabulary gold tokens to out-of-vocabulary gold tokens is expected, the drop in attention from gold tokens to non-gold tokens is desired. Thus, this probing analysis suggests that SMA module helps to improve intra/inter interactions between character/words and this substantiates the need of SMA module in TransLIST.
+
+(4) How does TransLIST perform in a nontrivial situation where multiple sandhi rules are applicable? In Table 3, we report the comparison with rcNN-SS for a critical scenario of a sandhi phenomenon. Table 3 represents the possible sandhi rules that generate the surface character $\bar{a}$ . Following Goyal and Huet (2016b), the sandhi rewrite rules are formalized as $u|v \rightarrow f / x_{--}$ (Kaplan and Kay, 1994) where $x, v, f \in \Sigma$ , and $u \in \Sigma^{+}$ . $\Sigma$ is the collection of phonemes, $\Sigma^{*}$ : a set of all possible strings over $\Sigma$ , and $\Sigma^{+} = \Sigma^{*} - \epsilon$ . For example, the potential outputs for the input $\bar{a}$ can be $\bar{a}$ , $\bar{a}-\bar{a}$ , $\bar{a}-a$ , a-a and ah. The correct rule can be decided based on the context. These multiple rules pose a non-trivial challenge for a system to identify the applicability of specific rule. Therefore, it is interesting to compare the TransLIST with current state-of-the-art system to verify its ability for semantic generalization. We observe that TransLIST consistently outperforms rcNN-SS in terms of all metrics. $^9$ Table 3 describes rules in decreasing order of their frequency. Interestingly, we notice large improvements over the current state-of-the-art system, especially for rare sandhi rules. This observation confirms superior performance of TransLIST over the current state-of-the-art system.
+
+
rcNN-SS
TransLIST
Rules
P
R
F
P
R
F
a
99.3
99.3
99.3
99.7
99.6
99.6
a-a
95.4
96.6
96.0
96.6
97.8
97.2
a-a
88.4
83.1
86.5
90.5
83.8
87.0
a-h
76.7
70.1
73.7
77.2
80.1
78.0
a-a
50.1
42.1
45.7
80.0
40.9
53.3
+
+Table 3: The comparison (on SIGHUM-test set) in terms of P, R and F metrics between rcNN-SS and the TransLIST for ambiguous sandhi rules leading to the same surface character $\bar{a}$ . The proposed model consistently outperforms rcNN-SS in all the metrics.
+
+
+Figure 5: F1-score against sentence length (no. of characters) over the SIGHUM dataset
+
+(5) How robust is the system when sentence length is varied? In Figure 5, we analyze the performance of the baselines with different sentence lengths. We plot the F1-score against sentence length. Clearly, while all the systems show superior performance for shorter sentences, TransLIST is much more robust for longer sentences compared to other baselines. The lattice-structured baselines give competing F1-scores over short sentences but relatively sub-par performance over long sentences.
+
+(6) Illustration of PRCP with an example: Table 4 illustrates an example that probes the effectiveness of PRCP in TransLIST. We compare TransLIST with rcNN-SS and observe that TransLIST also predicts words out of candidate solution space when PRCP module is not activated. However, the degree of such mistakes in TransLIST is comparatively less due to effective modelling of inductive bias for tokenization using LIST and SMA modules. In Table 4, rcNN-SS predicts three words which are not part of candidate space, namely, vambike, yakjavapuh and caka. These are mistakes that can be rectified with the help of available candidate space. Interestingly, TransLIST commits only a single mistake in this
+
+
Sentence
F-score
Input sentence
kimetadiocese bahusobhamane vāmbike)yakṣavapucakāsti
+Translation:What is this body resembling a Yaksha that glows,oh Ambika!You who lord over!You who shine!
-
Correct segmentation
kim etat iseBahu sobhamanevā ambike yakṣa vapuh cakāsti
-
SHR candidate space
kim, etat, ise, bahu, sobhamane, sobham, āne, sobha, māne,mā, vā, ambike, yakṣa, vapuh, cakāsti, ca, kā, astiWord-word meaning:what, this, the one who lord, very much,the one who shine, bright, mouth, I respect, never, or, Parvati,a kind of celestial being, body, glows, and, who (female), is there (be).
-
rcNN-SS
kim etat iseBahu sobhamane vāmbike yakṣavapuh caka asti
52.60
TransLIST-PRCP
kim etat iseBahu sobhamane vāaambike yakṣa vapuh cakāsti
90.00
TransLIST
kim etat iseBahu sobhamane vā ambike yakṣa vapuh cakāsti
100.00
+
+Table 4: An example to illustrate the effectiveness of PRCP module of TransLIST. Bold represents incorrect segmentation for the input sequence.
+
+category by predicting out of solution space word aambike. PRCP aids in mitigating such mistake by appropriately substituting suitable candidates.
+
+# 5 Related Work
+
+Earlier approaches on SWS focused on rule-based Finite State Transducer systems (Gérard, 2003; Mittal, 2010). Natarajan and Charniak (2011) attempted to solve the SWS task for sentences with one or two splits using the Bayesian approach. Recently, Goyal and Huet (2016a, SHR) proposed a lexicon driven shallow parser. This, along with the recent upsurge in segmentation datasets (Krishna et al., 2017; Hellwig and Nehrdich, 2018; Krishnan et al., 2020) led to two categories of approaches, namely, lexicon driven (Krishna et al., 2016a, 2018, 2020b) and purely engineering (Hellwig, 2015; Hellwig and Nehrdich, 2018; Aralikatte et al., 2018; Reddy et al., 2018). These existing approaches for SWS are either brittle in realistic scenarios or do not consider the potentially useful/available information. Thus, TransLIST bridges the shortcomings exhibited by each family and gives a win-win solution that marks a new state-of-the-art results.
+
+# 6 Conclusion and Discussion
+
+In this work, we focused on Sanskrit word segmentation task. To address the shortcomings of existing purely engineering and lexicon driven approaches, we demonstrate the efficacy of TransLIST as a win-win solution over drawbacks of the individual lines of approaches. TransLIST induces inductive bias for tokenization in a character input sequence using the LIST module, and prioritizes the relevant candidate words with the help of soft-masked attention
+
+(SMA module). Further, we propose a novel path ranking algorithm to rectify corrupted predictions using linguistic resources on availability (PRCP module). Our experiments showed that TransLIST provides a significant boost with an average 7.2 points (PM) absolute gain compared to the best baselines, rcNN-SS (SIGHUM) and FLAT-Lattice (Hackathon). We have also showcased fine-grained analysis on TransLIST's inner working. We plan to extend this work for morphological tagging in standalone mode (Gupta et al., 2020) and multi-task setting (Krishna et al., 2018) with the SWS task.
+
+# Limitations
+
+The preliminary requirement to extend TransLIST for the languages which also exhibit sandhi phenomenon is lexicon-driven shallow parser similar to Sanskrit Heritage Reader (SHR). Otherwise, the natural choice is the proposed purely engineering variant $\mathrm{TransLIST}_{\mathrm{ngram}}$ . It would be interesting to check if TransLIST and $\mathrm{TransLIST}_{\mathrm{ngram}}$ can be used together.
+
+# Ethics Statement
+
+We do not foresee any ethical concerns with the work presented in this manuscript.
+
+# Acknowledgements
+
+We are grateful to Oliver Hellwig for providing the DCS Corpus and Gerard Huet for providing the Sanskrit Heritage Engine. We thank Sriram Krishnan, University of Hyderabad and Hackathon organizers10 for providing Hackathon dataset. We
+
+thank Amrith Krishna, University of Cambridge for clarifying our queries related to SIGHUM dataset and evaluation metrics. We are grateful to Rishabh Kumar, IIT Bombay for helping us with evaluation of Cliq-EBM baseline. We would like to thank the anonymous reviewers for their constructive feedback towards improving this work. The work of the first author is supported by the TCS Fellowship under the Project TCS/EE/2011191P.
+
+# References
+
+Rahul Aralikatte, Neelamadhav Gantayat, Naveen Panwar, Anush Sankaran, and Senthil Mani. 2018. Sanskrit sandhi splitting using seq2(seq)2. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4909-4914, Brussels, Belgium. Association for Computational Linguistics.
+Huet Gérard. 2003. Lexicon-directed segmentation and tagging of sanskrit. In XIIth World Sanskrit Conference, Helsinki, Finland, Aug, pages 307-325. Cite-seer.
+Pawan Goyal and Gérard Huet. 2016a. Design and analysis of a lean interface for sanskrit corpus annotation. Journal of Language Modelling, 4:145.
+Pawan Goyal and Gérard Huet. 2016b. Design and analysis of a lean interface for sanskrit corpus annotation. Journal of Language Modelling, 4:145.
+Tao Gui, Yicheng Zou, Qi Zhang, Minlong Peng, Jinlan Fu, Zhongyu Wei, and Xuanjing Huang. 2019. A lexicon-based graph neural network for Chinese NER. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1040-1050, Hong Kong, China. Association for Computational Linguistics.
+Ashim Gupta, Amrith Krishna, Pawan Goyal, and Oliver Hellwig. 2020. Evaluating neural morphological taggers for Sanskrit. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 198-203, Online. Association for Computational Linguistics.
+Oliver Hellwig. 2010. Dcs-the digital corpus of sanskrit. Heidelberg (2010-2021). URL http://www.sanskritlinguistics.org/dcs/index.php.
+Oliver Hellwig. 2015. Using recurrent neural networks for joint compound splitting and sandhi resolution in sanskrit. In 4th Biennial Workshop on Less-Resourced Languages.
+Oliver Hellwig and Sebastian Nehrdich. 2018. Sanskrit word segmentation using character-level recurrent and convolutional neural networks. In Proceedings of the 2018 Conference on Empirical Methods
+
+in Natural Language Processing, pages 2754-2763, Brussels, Belgium. Association for Computational Linguistics.
+Ronald M. Kaplan and Martin Kay. 1994. Regular models of phonological rule systems. Computational Linguistics, 20(3):331-378.
+Armith Krishna, Ashim Gupta, Deepak Garasangi, Pavankumar Satuluri, and Pawan Goyal. 2020a. Keep it surprisingly simple: A simple first order graph based parsing model for joint morphosyntactic parsing in Sanskrit. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4791-4797, Online. Association for Computational Linguistics.
+Amrith Krishna, Bishal Santra, Sasi Prasanth Bandaru, Gaurav Sahu, Vishnu Dutt Sharma, Pavankumar Satuluri, and Pawan Goyal. 2018. Free as in free word order: An energy based model for word segmentation and morphological tagging in sanskrit. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2550-2561, Brussels, Belgium. Association for Computational Linguistics.
+Amrith Krishna, Bishal Santra, Ashim Gupta, Pavankumar Satuluri, and Pawan Goyal. 2020b. A graph based framework for structured prediction tasks in sanskrit. Computational Linguistics, 46(4):1-63.
+Amrith Krishna, Bishal Santra, Pavankumar Satuluri, Sasi Prasanth Bandaru, Bhumi Faldu, Yajuendra Singh, and Pawan Goyal. 2016a. Word segmentation in Sanskrit using path constrained random walks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 494-504, Osaka, Japan. The COLING 2016 Organizing Committee.
+Amrith Krishna, Pavan Kumar Satuluri, and Pawan Goyal. 2017. A dataset for Sanskrit word segmentation. In Proceedings of the Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, pages 105-114, Vancouver, Canada. Association for Computational Linguistics.
+Amrith Krishna, Pavankumar Satuluri, Shubham Sharma, Apurv Kumar, and Pawan Goyal. 2016b. Compound type identification in Sanskrit: What roles do the corpus and grammar play? In Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016), pages 1-10, Osaka, Japan. The COLING 2016 Organizing Committee.
+Sriram Krishnan, Amba Kulkarni, and Gerard Huet. 2020. Validation and normalization of dcs corpus using sanskrit heritage tools to build a tagged gold corpus.
+Rishabh Kumar, Devaraja Adiga, Rishav Ranjan, Amrith Krishna, Ganesh Ramakrishnan, Pawan Goyal,
+
+and Preethi Jyothi. 2022. Linguistically informed post-processing for asr error correction in sanskrit. Proc. Interspeech 2022, pages 2293-2297.
+
+Xiaonan Li, Hang Yan, Xipeng Qiu, and Xuanjing Huang. 2020. FLAT: Chinese NER using flat-lattice transformer. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6836-6842, Online. Association for Computational Linguistics.
+
+Vipul Mittal. 2010. Automatic Sanskrit segmentizer using finite state transducers. In Proceedings of the ACL 2010 Student Research Workshop, pages 85-90, Uppsala, Sweden. Association for Computational Linguistics.
+
+Abhiram Natarajan and Eugene Charniak. 2011. $s^3$ - statistical sandhi splitting. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 301-308, Chiang Mai, Thailand. Asian Federation of Natural Language Processing.
+
+Vikas Reddy, Amrith Krishna, Vishnu Sharma, Prateek Gupta, Vineeth M R, and Pawan Goyal. 2018. Building a word segmenter for Sanskrit overnight. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
+
+Jivnesh Sandhan, Amrith Krishna, Pawan Goyal, and Laxmidhar Behera. 2019. Revisiting the role of feature engineering for compound type identification in Sanskrit. In Proceedings of the 6th International Sanskrit Computational Linguistics Symposium, pages 28-44, IIT Kharagpur, India. Association for Computational Linguistics.
+
+Jivnesh Sandhan, Amrith Krishna, Ashim Gupta, Laxmidhar Behera, and Pawan Goyal. 2021. A little pretraining goes a long way: A case study on dependency parsing task for low-resource morphologically rich languages. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 111-120, Online. Association for Computational Linguistics.
+
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, undefinedukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, page 6000-6010, Red Hook, NY, USA. Curran Associates Inc.
+
+Hang Yan, Bocao Deng, Xiaonan Li, and Xipeng Qiu. 2019. Tener: Adapting transformer encoder for named entity recognition.
+
+Yue Zhang and Jie Yang. 2018. Chinese NER using lattice LSTM. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1554-1564, Melbourne, Australia. Association for Computational Linguistics.
+
+# A Appendix
+
+Average run times: Table 5 shows the average training time in hours and inference time in milliseconds for all competing baselines. We find that pure engineering-based techniques (TENER, rcNN-SS) outperform lattice-structured architectures (Lattice-LSTM, Lattice-GNN, FLAT-Lattice) in terms of run time. When the inference times of TransLIST and $\mathrm{TransLIST}_{\mathrm{ngrams}}$ are compared, TransLIST takes longer owing to the PRCP module. It would be interesting to explore approaches to optimise the inference time of the PRCP module.
+
+
System
Train (Hours)
Test (ms)
TENER
4 H
7 ms
Lattice-LSTM
16 H
110 ms
Lattice-GNN
64 H
95 ms
FLAT-Lattice
5 H
14 ms
rcNN-SS
4 H
5 ms
Cliq-EBM
10.5 H
750 ms
TransLISTngrams
8 H
14ms
TransLIST
8 H
105 ms
+
+Table 5: Average training time (in hours) and inference time (in milliseconds) for all the competing baselines.
\ No newline at end of file
diff --git a/translistatransformerbasedlinguisticallyinformedsanskrittokenizer/images.zip b/translistatransformerbasedlinguisticallyinformedsanskrittokenizer/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..95a4c0d8aa7dbf5467c64532fe1906cfebccb520
--- /dev/null
+++ b/translistatransformerbasedlinguisticallyinformedsanskrittokenizer/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f322fe2ec1d3b4c647e5651107292ae77d1e5ce3b7698970962fd7da5a555356
+size 558986
diff --git a/translistatransformerbasedlinguisticallyinformedsanskrittokenizer/layout.json b/translistatransformerbasedlinguisticallyinformedsanskrittokenizer/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..aeb706d16043b001b0186ffa91fdc5b4c0728284
--- /dev/null
+++ b/translistatransformerbasedlinguisticallyinformedsanskrittokenizer/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4c7d3f8203fba6894ab0ccae3fe40ddc2b7a748869d215715e73169942ce3ea1
+size 328584
diff --git a/transtransitionbasedknowledgegraphembeddingwithsyntheticrelationrepresentation/8e865efa-a391-4f1b-869a-6f7722be8bf2_content_list.json b/transtransitionbasedknowledgegraphembeddingwithsyntheticrelationrepresentation/8e865efa-a391-4f1b-869a-6f7722be8bf2_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..d563a463535b5658d60cefefe8c588b9e45e883c
--- /dev/null
+++ b/transtransitionbasedknowledgegraphembeddingwithsyntheticrelationrepresentation/8e865efa-a391-4f1b-869a-6f7722be8bf2_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1deb593f49f5bc75ebe0d4051aa7a9253da92f871036e4d5e59f7728b01858a0
+size 48288
diff --git a/transtransitionbasedknowledgegraphembeddingwithsyntheticrelationrepresentation/8e865efa-a391-4f1b-869a-6f7722be8bf2_model.json b/transtransitionbasedknowledgegraphembeddingwithsyntheticrelationrepresentation/8e865efa-a391-4f1b-869a-6f7722be8bf2_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c746b79b69810aead3ee46587b8e3a5099eef2c5
--- /dev/null
+++ b/transtransitionbasedknowledgegraphembeddingwithsyntheticrelationrepresentation/8e865efa-a391-4f1b-869a-6f7722be8bf2_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:45297e0efb09251c94a53a346f26c3860d1ee30470bbc26cb56aa1b921cd83e6
+size 61255
diff --git a/transtransitionbasedknowledgegraphembeddingwithsyntheticrelationrepresentation/8e865efa-a391-4f1b-869a-6f7722be8bf2_origin.pdf b/transtransitionbasedknowledgegraphembeddingwithsyntheticrelationrepresentation/8e865efa-a391-4f1b-869a-6f7722be8bf2_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2a928873a2458b962c2e22299e4b80fa068e971a
--- /dev/null
+++ b/transtransitionbasedknowledgegraphembeddingwithsyntheticrelationrepresentation/8e865efa-a391-4f1b-869a-6f7722be8bf2_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cffb54aab3c0749a47d00ac9694e975563e857fa0f63df7375c431a185ce2ae5
+size 452001
diff --git a/transtransitionbasedknowledgegraphembeddingwithsyntheticrelationrepresentation/full.md b/transtransitionbasedknowledgegraphembeddingwithsyntheticrelationrepresentation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..fe4c66816e8f69ab73e97eaac2617f74a8c02481
--- /dev/null
+++ b/transtransitionbasedknowledgegraphembeddingwithsyntheticrelationrepresentation/full.md
@@ -0,0 +1,192 @@
+# TranS: Transition-based Knowledge Graph Embedding with Synthetic Relation Representation
+
+Xuanyu Zhang, Qing Yang and Dongliang Xu
+Du Xiaoman Financial
+
+{zhangxuanyu, yangqing, xudongliang}@duxiaoman.com
+
+# Abstract
+
+Knowledge graph embedding (KGE) aims to learn continuous vector representations of relations and entities in knowledge graph (KG). Recently, transition-based KGE methods have become popular and achieved promising performance. However, scoring patterns like TransE are not suitable for complex scenarios where the same entity pair has different relations. Although some models attempt to employ entity-relation interaction or projection to improve entity representation for one-to-many/many-to-one/many-to-many complex relations, they still continue the traditional scoring pattern, where only a single relation vector in the relation part is used to translate the head entity to the tail entity or their variants. And recent research shows that entity representation only needs to consider entities and their interactions to achieve better performance. Thus, in this paper, we propose a novel transition-based method, TranS, for KGE. The single relation vector of the relation part in the traditional scoring pattern is replaced by the synthetic relation representation with entity-relation interactions to solve these issues. And the entity part still retains its independence through entity-entity interactions. Experiments on a large KG dataset, ogbl-wikikg2, show that our model achieves state-of-the-art results.
+
+# 1 Introduction
+
+Knowledge graphs (KGs), such as Freebase (Bollacker et al., 2008), Wikidata (Vrandecic and Krötzsch, 2014), DBpedia (Lehmann et al., 2015) and Yago (Rebele et al., 2016), play a very important role in many fields, including question answering (Huang et al., 2019), semantic parsing (Yih et al., 2015), information retrieval (Xiong et al., 2017) and so on. KG, as a multi-relational graph, is composed of entities as nodes and relations as different types of edges. It is usually represented as the form of triplets $(h, r, t)$ , i.e., (head entity, relation, tail entity), where relation indicates the relationship between the two entities.
+
+
+
+
+Figure 1: Examples from ogbl-wikikg2. It is difficult for a single relation vector to represent different relations between the same entity pairs.
+
+Knowledge graph embedding (KGE) is an important and fundamental research topic in KG. It aims to learn dense semantic representations of entities and relations for downstream tasks such as KG completion and link prediction. Generally speaking, KGE methods can be roughly divided into the following directions: translational distance (Bordes et al., 2013; Wang et al., 2014; Fan et al., 2014; Lin et al., 2015; Ji et al., 2015, 2016; Feng et al., 2016), semantic matching (Nickel et al., 2011; Bordes et al., 2011, 2014; García-Durán et al., 2014; Yang et al., 2015; Nickel et al., 2016; Balazevic et al., 2019) and neural networks (Socher et al., 2013; Dong et al., 2014; Liu et al., 2016; Dettmers et al., 2018; Nguyen et al., 2018). Because transition-based KGE method like TransE (Bordes et al., 2013) is simple and effective, this series of models are becoming more and more popular in both academia and industry. Specifically, TransE makes the difference between two entity vectors $(\mathbf{h}$ and $\mathbf{t})$ approximate to the relation vector $(\mathbf{r})$ , i.e., $\mathbf{t} - \mathbf{h} \approx \mathbf{r}$ . That is to say, the relation $r$ is characterized by the translating vector $\mathbf{r}$ .
+
+However, TransE is not suitable to deal with complex relations like one-to-many/many-to-one/many-to-many. For example, in Figure 1, after graduating from Erasmus University Rotterdam, Pauline Meurs became a professor at the same university. And the composer, producer, screenwriter, editor and director of the film, Indramalati, can be the same person, Jyoti Prasad Agarwala. Although previous models (Wang et al., 2014; Lin et al., 2015; Qian et al., 2018; Chao et al., 2021; Yu et al., 2021) such as TransH/R/D have considered relevant issues, they still focus on the entity-relation projection or interaction in the entity part and continue the TransE pattern, $\mathbf{R}_{\mathrm{t}} - \mathbf{R}_{\mathrm{h}} \approx \mathbf{r}$ , where $\mathbf{R}_{\mathrm{t}}$ and $\mathbf{R}_{\mathrm{h}}$ is the deformation of $\mathbf{t}$ and $\mathbf{h}$ , $\mathbf{R}_{\mathrm{t}} - \mathbf{R}_{\mathrm{h}}$ is the entity part, and $\mathbf{r}$ is the relation part. Actually, recent research, InterHT (Wang et al., 2022), shows that the entity part only needs to consider the head and tail entities and their interaction information to achieve remarkable performance and outperform previous TransX series models. Unfortunately, it again ignores the problem of complex relation representation. Therefore, from the perspective of interaction, how to solve the problem in Figure 1 by introducing entity-relation interactions in the relation part under the condition that only entity-entity interactions are retained in the entity part needs to be further considered.
+
+To this end, we propose a novel transition-based knowledge graph embedding model, TranS, which replaces traditional scoring pattern with synthetic relation pattern, i.e., $\mathbf{R}_{\mathrm{t}} - \mathbf{R}_{\mathrm{h}} \approx \bar{\mathbf{r}} + \mathbf{r} + \hat{\mathbf{r}}$ . The final relation representation is the sum of multiple relation vectors. Two of them ( $\bar{\mathbf{r}}$ , $\hat{\mathbf{r}}$ ) are also related to the head entity $h$ and the tail entity $t$ in addition to the relation $r$ (orange solid lines denote $\mathbf{r}$ , and blue dotted lines denote $\bar{\mathbf{r}}$ , $\hat{\mathbf{r}}$ in Figure 1). For one thing, in the entity part, instead of using entity-relation interaction and projection, it focuses only on entities and their interactions themselves to guarantee their independence and effectiveness. For another thing, different from other methods that utilize entity-relation interactions in the entity part, our method migrates their interactions to the relation part and forms synthetic relation representation, which can effectively solve the problem that a single relation vector cannot represent different relations when facing the same entity pair. Experiments on a large knowledge graph dataset, ogbl-wikikg2, show that our proposed model achieves the best results with fewer parameters.
+
+# 2 Methodology
+
+# 2.1 TranS
+
+Our proposed TranS model first breaks the traditional scoring patterns $\mathbf{R}_{\mathrm{t}} - \mathbf{R}_{\mathrm{h}} \approx \mathbf{r}$ in previous models (Bordes et al., 2013; Wang et al., 2014; Fan et al., 2014; Lin et al., 2015; Chao et al., 2021; Yu et al., 2021; Wang et al., 2022). It replaces single relation vector $\mathbf{r}$ with synthetic relation vectors $\bar{\mathbf{r}} + \mathbf{r} + \hat{\mathbf{r}}$ , i.e., $\mathbf{R}_{\mathrm{t}} - \mathbf{R}_{\mathrm{h}} \approx \bar{\mathbf{r}} + \mathbf{r} + \hat{\mathbf{r}}$ , where $\bar{\mathbf{r}}$ is an adjoint relation vector related to the head entity and $\hat{\mathbf{r}}$ is another adjoint relation vector related to the tail entity. The illustration of TranS is shown in Figure 2 (f). Two entity and three relation representations together make up our proposed scoring function $f_{r}(h, t)$ . That is to say, the synthetic relation representation in the right relation part consists of the sum of three different relation vectors. To make full use of context information, we use adjoint vectors and Hadamard product $\circ$ to interact with $\mathbf{h}$ , $\mathbf{t}$ , $\bar{\mathbf{r}}$ and $\hat{\mathbf{r}}$ separately:
+
+$$
+f _ {r} (h, t) = - | | \mathbf {R _ {h}} - \mathbf {R _ {t}} + \mathbf {R _ {r}} | |,
+$$
+
+$$
+\mathbf {R} _ {\mathrm {h}} = \mathbf {h} \circ \tilde {\mathbf {t}}, \tag {1}
+$$
+
+$$
+\mathbf {R} _ {\mathbf {t}} = \mathbf {t} \circ \tilde {\mathbf {h}},
+$$
+
+$$
+\mathbf {R} _ {\mathbf {r}} = \bar {\mathbf {r}} \circ \mathbf {h} + \mathbf {r} + \hat {\mathbf {r}} \circ \mathbf {t},
+$$
+
+where $\mathbf{h}$ , $\mathbf{t}$ and $\mathbf{r}$ denote main vectors similar to those in traditional scoring patterns. $\tilde{\mathbf{h}}$ represents the adjoint head entity vector and $\tilde{\mathbf{t}}$ represents the adjoint tail entity vector. Accordingly, $\mathbf{R_h}$ is the representation of the head entity that combines information of the tail entity, and $\mathbf{R_t}$ is the representation of the tail entity integrating information of the head entity. $\bar{\mathbf{r}}\circ \mathbf{h}$ is the representation of the adjoint relation with the head entity information, and $\hat{\mathbf{r}}\circ \mathbf{t}$ is the representation of another adjoint relation with the tail entity information. Thus, the final equation can be represented as:
+
+$$
+f _ {r} (h, t) = - \left\| \mathbf {h} \circ \tilde {\mathbf {t}} - \mathbf {t} \circ \tilde {\mathbf {h}} + \underbrace {\bar {\mathbf {r}} \circ \mathbf {h}} _ {\text {一 个 人}} + \mathbf {r} + \hat {\mathbf {r}} \circ \mathbf {t} \right\|. \tag {2}
+$$
+
+Following previous works (Yu et al., 2021; Wang et al., 2022), we add an unit vector $\mathbf{e}$ to $\mathbf{R_h}$ and $\mathbf{R_t}$ , i.e., $\mathbf{h} \circ \tilde{\mathbf{t}} \to \mathbf{h} \circ (\tilde{\mathbf{t}} + \mathbf{e})$ , $\mathbf{t} \circ \tilde{\mathbf{h}} \to \mathbf{t} \circ (\tilde{\mathbf{h}} + \mathbf{e})$ . And considering the out-of-vocabulary problem, we also use the NodePiece (Galkin et al., 2022) to learn a fixed-size entity vocabulary.
+
+# 2.2 Training
+
+Inspired by previous works (Chao et al., 2021; Zhang and Yang, 2021; Wang et al., 2022), we use
+
+
+(a) TransE
+
+
+(c) InterHT
+
+
+(e) TripleRE
+
+
+(b) TransH
+
+
+(d) PairRE
+
+
+(f) TranS
+Figure 2: Comparison of different transition-based KGE models.
+
+the self-adversarial negative sampling loss (Sun et al., 2019) as our loss function, which is defined as follows:
+
+$$
+\begin{array}{l} \mathcal {L} = - \log \sigma (\gamma - f _ {r} (h, t)) \\ - \sum_ {i = 1} ^ {n} p \left(h _ {i} ^ {\prime}, r, t _ {i} ^ {\prime}\right) \log \sigma \left(f _ {r} \left(h _ {i} ^ {\prime}, t _ {i} ^ {\prime}\right) - \gamma\right), \tag {3} \\ \end{array}
+$$
+
+where $\gamma$ is a fixed margin, $\sigma$ is the sigmoid function, and $(h_i', r, t_i')$ is the $i$ -th of $n$ randomly sampled negative triplets. And the weights of this negative sample $p(h_i', r, t_i')$ can be calculated as follows:
+
+$$
+p \left(h _ {i} ^ {\prime}, r, t _ {i} ^ {\prime}\right) = \frac {\exp f _ {r} \left(h _ {i} ^ {\prime} , t _ {i} ^ {\prime}\right)}{\sum_ {j} \exp f _ {r} \left(h _ {j} ^ {\prime} , t _ {j} ^ {\prime}\right)}. \tag {4}
+$$
+
+# 2.3 Comparison
+
+As shown in Figure 2, the main difference between our model (f) and previous transition-based KGE methods (a,b,c,d,e) is the synthetic relation representation. That is to say, it changes single relation representation $\mathbf{r}$ in traditional scoring pattern $\mathbf{R}_{\mathrm{t}} - \mathbf{R}_{\mathrm{h}} \approx \mathbf{r}$ to synthetic relation representation $\bar{\mathbf{r}} + \mathbf{r} + \hat{\mathbf{r}}$ in our proposed new pattern $\mathbf{R}_{\mathrm{t}} - \mathbf{R}_{\mathrm{h}} \approx \bar{\mathbf{r}} + \mathbf{r} + \hat{\mathbf{r}}$ . Specifically, different from InterHT (Wang et al., 2022), the relation part of our scoring function is the sum of multiple relation
+
+vectors $\mathbf{R}_{\mathbf{r}} = \bar{\mathbf{r}}\circ \mathbf{h} + \mathbf{r} + \hat{\mathbf{r}}\circ \mathbf{t}$ rather than single vector $\mathbf{r}$ . Comparing with TripleRE (Yu et al., 2021), where three relations are applied into three parts ( $\mathbf{R}_{\mathbf{h}} = \mathbf{h}\circ \mathbf{r}^{\mathbf{h}}$ , $\mathbf{R}_{\mathbf{t}} = \mathbf{t}\circ \mathbf{r}^{\mathbf{t}}$ , $\mathbf{R}_{\mathbf{r}} = \mathbf{r}^{\mathbf{m}}$ ) of traditional scoring patterns with addition and subtraction operations, our proposed TranS only applies synthetic relation vectors into the relation part $\mathbf{R}_{\mathbf{r}} = \bar{\mathbf{r}}\circ \mathbf{h} + \mathbf{r} + \hat{\mathbf{r}}\circ \mathbf{t}$ of scoring functions with addition operations.
+
+# 3 Experiments
+
+# 3.1 Dataset and Metric
+
+Ogbl-wikikg2 (Hu et al., 2020) is a large KG dataset extracted from Wikidata (Vrandecic and Krötzsch, 2014). It contains a set of triplet edges, capturing the different types of relations between entities in the world. The statistics of the dataset are shown in Table 1. It contains 2,500,604 entities, 535 relation types and 17,137,181 edges. Following official guidelines, we evaluate the KGE performance by predicting new triplet edges according to the training edges. The evaluation metric follows the standard filtered metric widely used in KG. Specifically, each test triplet edge is corrupted by replacing its head or tail with randomly sampled negative entities, while ensuring the resulting
+
+
Type
Train
Validation
Test
Nodes
Relations
Edges
#Number
16,109,182
429,456
598,543
2,500,604
535
17,137,181
+
+Table 1: Statistics of the ogbl-wikikg2 dataset.
+
+
Model
#Params
#Dims
Test MRR
Valid MRR
TransE (Bordes et al., 2013)
1251M
500
0.4256 ± 0.0030
0.4272 ± 0.0030
RotatE (Sun et al., 2019)
1250M
250
0.4332 ± 0.0025
0.4353 ± 0.0028
PairRE (Chao et al., 2021)
500M
200
0.5208 ± 0.0027
0.5423 ± 0.0020
AutoSF (Zhang et al., 2020)
500M
-
0.5458 ± 0.0052
0.5510 ± 0.0063
ComplEx (Trouillon et al., 2016)
1251M
250
0.5027 ± 0.0027
0.3759 ± 0.0016
TripleRE (Yu et al., 2021)
501M
200
0.5794 ± 0.0020
0.6045 ± 0.0024
ComplEx-RP (Chen et al., 2021)
250M
50
0.6392 ± 0.0045
0.6561 ± 0.0070
AutoSF + NodePiece
6.9M
-
0.5703 ± 0.0035
0.5806 ± 0.0047
TripleRev2 + NodePiece
7.3M
200
0.6582 ± 0.0020
0.6616 ± 0.0018
TripleRev3 + NodePiece
36.4M
200
0.6866 ± 0.0014
0.6955 ± 0.0008
InterHT + NodePiece
19.2M
200
0.6779 ± 0.0018
0.6893 ± 0.0015
TranS + NodePiece
19.2M
200
0.6882 ± 0.0019
0.6988 ± 0.0006
+
+Table 2: Results on the ogbl-wikikg2 dataset.
+
+triplets do not appear in KG. The goal is to rank the true head or tail entities higher than the negative entities, which is measured by Mean Reciprocal Rank (MRR).
+
+We follow the original dataset partition. The triplets are split according to time to simulate a real KG completion scenario where missing triplets that are not present at a specific timestamp need to be filled. The training set contains 16,109,182 triplets, the validation set contains 429,456 triplets, and the test set contains 598,543 triplets.
+
+# 3.2 Implementation Details
+
+In our experiments, Adam (Kingma and Ba, 2014) is used as our optimizer with 0.0005 learning rate. The batch size of the model is set to 512. To prevent overfitting, we use the dropout technique and set it to 0.05. The negative sampling size is set to 128. And the dimension of each embedding vector in Eq. 2 is set to 200. The maximum number of training steps is 800 thousand. We validate the model every 20 thousand steps. The number of anchors for NodePiece is 20 thousand. And $\gamma$ in the loss function is set to 6. The final model is evaluated with 10 different random seeds. Our code is publicly available at the link: https://github.com/xyznlp/TransS.
+
+# 3.3 Results
+
+The results are shown in Table 2. Our model achieves 0.6988 (validation set) and 0.6882 (test set) on MRR, which outperforms the previous best model, TripleREv3, on the ogbl-wikikg2 dataset. Especially, the parameters of our model (19.2M) are about half of TripleREv3 (36.4M). So the experimental results show that our proposed method can improve the model performance effectively with fewer parameters. Besides, we also construct a 38.4M TranS (large) model, the best score of which can reach 0.7101 (validation set) and 0.6992 (test set) on MRR. Comparing the two groups with similar numbers of parameters, i.e., TranS versus InterHT and TranS (large) versus TripleREv3, we can observe more significant improvements.
+
+# 4 Related Work
+
+Recently, graph structures are used widely in natural language processing, recommendation and other areas (Zhang, 2020; Zhang et al., 2021). KG, as one of the graph structures, uses triples consisting of head nodes, tail nodes and relation edges to represent structured knowledge. To further compare different transition-based knowledge graph embeddings, we summarize related methods in Table 3 with reference to recent research (Ji et al., 2021).
+
+
Model
Embedding
Scoring Function
Pattern
TransE
h, t ∈ Rd, r ∈ Rd
-||h + r - t||1/2
T
TransR
h, t ∈ Rd, r ∈ Rk, Mr ∈ Rk×d
-||Mr h + r - Mr t||2
T
TransH
h, t ∈ Rd, r, wr ∈ Rd
-||(h - wrT hw) + r - (t - wrT twr)||2
T
ITransF
h, t ∈ Rd, r ∈ Rd
||αrH · D · h + r - αrT · D · t||l
T
TransAt
h, t ∈ Rd, r ∈ Rd
Pr(σ(rh) h) + r - Pr(σ(rt) t)
T
TransD
h, t, wh wt ∈ Rd, r, wr ∈ Rk
-||(wrwhT + I) h + r - (wrwtT + I) t||2
T
TransM
h, t ∈ Rd, r ∈ Rd
-θr||h + r - t||1/2
T
Transparse
h, t ∈ Rd, r ∈ Rk, Mr ∈ Rk×d
-||Mr(θr) h + r - Mr(θr) t||1/2
T
h, t ∈ Rd, Mr1, Mr2 ∈ Rk×d
-||Mr1(θr1) h + r - Mr2(θr2) t||1/2
T
PairRE
h, t ∈ Rd, rH, rT ∈ Rd
-||h o rH - t o rT||
T
TripleRE
h, t ∈ Rd, rH, rT, rM ∈ Rd
-||h o rH - t o rT + rM||
T
InterHT
h, t, ha, ta ∈ Rd, r ∈ Rd
-||h o ta - t o ha + r||
T
TranS
h, t, h,til ∈ Rd, r, r, r ∈ Rd
-||h otil - t otil + r o h + r + r o t||
S
+
+Table 3: Summary of transition-based knowledge graph embedding models. $\mathcal{T}$ represents the traditional scoring pattern $-||\mathbf{R_h} - \mathbf{R_t} + \mathbf{r}||$ . And $S$ represents our proposed new scoring pattern $-||\mathbf{R_h} - \mathbf{R_t} + \bar{\mathbf{r}} + \mathbf{r} + \hat{\mathbf{r}}||$ .
+
+Transition-based methods measure the plausibility of fact triples (h,r,t) as the distance between entities. TransE (Bordes et al., 2013), as a representative method, models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities, i.e., $t - h \approx r$ . Although it is simple and efficient, it cannot handle complex relations. Thus, several TransX models (TransH (Wang et al., 2014), TransR (Lin et al., 2015), TransD (Ji et al., 2015)) are proposed based on hyperplane or multiple embedding spaces for these issues. For example, TransR (Lin et al., 2015) projects entities from entity space to corresponding relation space and builds translations between projected entities. And recent works also begin to utilize multiple vectors to represent entities and relations and conduct their interactions. For example, PairRE (Chao et al., 2021) and TripleRE (Yu et al., 2021) employ two and three relation vectors to represent relation information, respectively. Especially, InterHT (Wang et al., 2022) outperforms previous models only with two head and tail vectors and their interactions in the entity part. But InterHT again ignores the problem of complex relation representation. Different from previous models, from the perspective of interaction (Zhang et al., 2022; Zhang and Wang, 2020; Zhang, 2019), our proposed TranS introduces entity-entity interaction in the entity part like InterHT and migrates entity-relation interaction from the entity part to
+
+the relation part. It can not only preserve the independence of entity representation, but also utilize entity-relation interaction in the relation part to solve the above problem.
+
+# 5 Conclusion
+
+In this paper, we propose a novel transition-based knowledge graph embedding model, TranS, to solve the representation problem of complex scenarios where the same entity pair has different relations. TranS replaces the single relation vector of the relation part in traditional scoring patterns with synthetic relation representation. It not only retains the independence of entity interaction in the entity part, but also introduces entity-relation interaction in the relation part. Experiments on a large KG dataset, ogbl-wikikg2, show that our model achieves the best results with fewer parameters.
+
+# Limitations
+
+Although our model has achieved the best performance on relevant datasets, it still focuses on current or local KG triples to learn entity and relation representations. Actually, in large-scale knowledge graphs, neighborhoods can provide extra information for entity representation or initialization like NodePiece. Thus the performance of our model can be further improved by exploring additional neighbor information and encoding methods.
+
+# References
+
+Ivana Balazevic, Carl Allen, and Timothy Hospedales. 2019. TuckER: Tensor factorization for knowledge graph completion. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5185-5194, Hong Kong, China. Association for Computational Linguistics.
+Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247-1250.
+Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2014. A semantic matching energy function for learning with multi-relational data. Mach. Learn., 94(2):233-259.
+Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. Advances in neural information processing systems, 26.
+Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning structured embeddings of knowledge bases. Proceedings of the AAAI Conference on Artificial Intelligence.
+Linlin Chao, Jianshan He, Taifeng Wang, and Wei Chu. 2021. PairRE: Knowledge graph embeddings via paired relation vectors. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4360-4369, Online. Association for Computational Linguistics.
+Yihong Chen, Pasquale Minervini, Sebastian Riedel, and Pontus Stenetorp. 2021. Relation prediction as an auxiliary training objective for improving multi-relational graph representations. In 3rd Conference on Automated Knowledge Base Construction.
+Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In Proceedings of the AAAI conference on artificial intelligence, volume 32.
+Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, N. Lao, Kevin P. Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: a web-scale approach to probabilistic knowledge fusion. Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining.
+Miao Fan, Qiang Zhou, Emily Chang, and Fang Zheng. 2014. Transition-based knowledge graph embedding with relational mapping properties. In Proceedings of the 28th Pacific Asia conference on language, information and computing, pages 328-337.
+
+Jun Feng, Minlie Huang, Mingdong Wang, Mantong Zhou, Yu Hao, and Xiaoyan Zhu. 2016. Knowledge graph embedding by flexible translation. In Proceedings of the Fifteenth International Conference on Principles of Knowledge Representation and Reasoning, KR'16, page 557-560. AAAI Press.
+Mikhail Galkin, Etienne Denis, Jiapeng Wu, and William L. Hamilton. 2022. Nodepiece: Compositional and parameter-efficient representations of large knowledge graphs. In International Conference on Learning Representations.
+Alberto García-Durán, Antoine Bordes, and Nicolas Usunier. 2014. Effective blending of two and three-way interactions for modeling multi-relational data. In Proceedings of the 2014th European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part I, ECMLPKDD'14, page 434-449, Berlin, Heidelberg. Springer-Verlag.
+Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. 2020. Open graph benchmark: Datasets for machine learning on graphs. arXiv preprint arXiv:2005.00687.
+Xiao Huang, Jingyuan Zhang, Dingcheng Li, and Ping Li. 2019. Knowledge graph embedding based question answering. Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining.
+Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge graph embedding via dynamic mapping matrix. In Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: Long papers), pages 687-696.
+Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. 2016. Knowledge graph completion with adaptive sparse transfer matrix. In Thirtieth AAAI conference on artificial intelligence.
+Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, and S Yu Philip. 2021. A survey on knowledge graphs: Representation, acquisition, and applications. IEEE Transactions on Neural Networks and Learning Systems.
+Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
+Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, S. Auer, and Christian Bizer. 2015. Dbpedia - a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web, 6:167-195.
+Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Twenty-ninth AAAI conference on artificial intelligence.
+
+Quan Liu, Hui Jiang, Zhen-Hua Ling, Si Wei, and Yu Hu. 2016. Probabilistic reasoning via deep learning: Neural association models. CoRR, abs/1603.07704.
+Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Q. Phung. 2018. A novel embedding model for knowledge base completion based on convolutional neural network. In NAACL.
+Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016. Holographic embeddings of knowledge graphs. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, page 1955-1961. AAAI Press.
+Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML'11, page 809-816, Madison, WI, USA. Omnipress.
+Wei Qian, Cong Fu, Yu Zhu, Deng Cai, and Xiaofei He. 2018. Translating embeddings for knowledge graph completion with relation attention mechanism. In IJCAI, pages 4286-4292.
+Thomas Rebele, Fabian Suchanek, Johannes Hoffart, Joanna Biega, Erdal Kuzey, and Gerhard Weikum. 2016. Yago: A multilingual knowledge base from wikipedia, wordnet, and geonames. In International semantic web conference, pages 177-185. Springer.
+Richard Socher, Danqi Chen, Christopher D. Manning, and A. Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In NIPS.
+Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. In International Conference on Learning Representations.
+Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In International conference on machine learning, pages 2071-2080. PMLR.
+Denny Vrandecic and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10):78-85.
+Baoxin Wang, Qingye Meng, Ziyue Wang, Dayong Wu, Wanxiang Che, Shijin Wang, Zhigang Chen, and Cong Liu. 2022. Interht: Knowledge graph embeddings by interaction between head and tail entities. arXiv preprint arXiv:2202.04897.
+Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 28.
+
+Chenyan Xiong, Russell Power, and Jamie Callan. 2017. Explicit semantic ranking for academic search via knowledge graph embedding. Proceedings of the 26th International Conference on World Wide Web.
+Bishan Yang, Scott Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of the International Conference on Learning Representations (ICLR) 2015.
+Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd ACL, pages 1321-1331, Beijing, China.
+Long Yu, ZhiCong Luo, Deng Lin, HuanYong Liu, and YaFeng Deng. 2021. Triplere: Knowledge graph embeddings via triple relation vectors. viXra preprint viXra:2112.0095.
+Xuanyu Zhang. 2019. MC^2: Multi-perspective convolutional cube for conversational machine reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6185-6190, Florence, Italy. Association for Computational Linguistics.
+Xuanyu Zhang. 2020. Cfgnn: Cross flow graph neural networks for question answering on complex tables. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):9596-9603.
+Xuanyu Zhang and Zhichun Wang. 2020. Rception: Wide and deep interaction networks for machine reading comprehension (student abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 34(10):13987-13988.
+Xuanyu Zhang and Qing Yang. 2021. Dml: Dynamic multi-granularity learning for bert-based document reranking. In Proceedings of the 30th ACM International Conference on Information and Knowledge Management, CIKM '21, page 3642-3646, New York, NY, USA.
+Xuanyu Zhang, Qing Yang, and Dongliang Xu. 2021. Combining explicit entity graph with implicit text information for news recommendation. In *Companion Proceedings of the Web Conference* 2021, WWW '21, page 412-416, New York, NY, USA. Association for Computing Machinery.
+Xuanyu Zhang, Qing Yang, and Dongliang Xu. 2022. Deepv: Deep view-temporal interaction network for news recommendation. In Proceedings of the 31st ACM International Conference on Information and Knowledge Management, CIKM '22, page 2640-2650, New York, NY, USA.
+Yongqi Zhang, Quanming Yao, Wenyuan Dai, and Lei Chen. 2020. Autosf: Searching scoring functions for knowledge graph embedding. In 2020 IEEE 36th International Conference on Data Engineering (ICDE), pages 433-444. IEEE.
\ No newline at end of file
diff --git a/transtransitionbasedknowledgegraphembeddingwithsyntheticrelationrepresentation/images.zip b/transtransitionbasedknowledgegraphembeddingwithsyntheticrelationrepresentation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9c80d2b8ba68ba746bd8093bc58163f2909a3223
--- /dev/null
+++ b/transtransitionbasedknowledgegraphembeddingwithsyntheticrelationrepresentation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0013bc3000696617c137d740b6ded505ee8c186933bab5350f9dd426bfddcc64
+size 367470
diff --git a/transtransitionbasedknowledgegraphembeddingwithsyntheticrelationrepresentation/layout.json b/transtransitionbasedknowledgegraphembeddingwithsyntheticrelationrepresentation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ee053417de3829808e1b9dfc054078c3142e0edb
--- /dev/null
+++ b/transtransitionbasedknowledgegraphembeddingwithsyntheticrelationrepresentation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d7cf3cc329c3d8ba511fe56b48f9cd98df88644761fd8257ab3f402993363956
+size 257022
diff --git a/trial2veczeroshotclinicaltrialdocumentsimilaritysearchusingselfsupervision/15d8a725-63cc-4419-82a9-6ab2f82b4335_content_list.json b/trial2veczeroshotclinicaltrialdocumentsimilaritysearchusingselfsupervision/15d8a725-63cc-4419-82a9-6ab2f82b4335_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..00f5f81619adb04eeb8d7fc91a217a100354a537
--- /dev/null
+++ b/trial2veczeroshotclinicaltrialdocumentsimilaritysearchusingselfsupervision/15d8a725-63cc-4419-82a9-6ab2f82b4335_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:60050faa7ebc9534e2915ad9ab3dceb7396b7ffc74994bccfdf78a8918a60f0b
+size 87388
diff --git a/trial2veczeroshotclinicaltrialdocumentsimilaritysearchusingselfsupervision/15d8a725-63cc-4419-82a9-6ab2f82b4335_model.json b/trial2veczeroshotclinicaltrialdocumentsimilaritysearchusingselfsupervision/15d8a725-63cc-4419-82a9-6ab2f82b4335_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..e17ef7cb90e8119ab495e0e0c24eac06fb5a7ed6
--- /dev/null
+++ b/trial2veczeroshotclinicaltrialdocumentsimilaritysearchusingselfsupervision/15d8a725-63cc-4419-82a9-6ab2f82b4335_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3a4fdecd47247c9aa0c6870153cac39802ae956de0a90fe539fa6df4aa7b3b9a
+size 114864
diff --git a/trial2veczeroshotclinicaltrialdocumentsimilaritysearchusingselfsupervision/15d8a725-63cc-4419-82a9-6ab2f82b4335_origin.pdf b/trial2veczeroshotclinicaltrialdocumentsimilaritysearchusingselfsupervision/15d8a725-63cc-4419-82a9-6ab2f82b4335_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9935da5182d6d38696c52ea40e54ed13239db5b0
--- /dev/null
+++ b/trial2veczeroshotclinicaltrialdocumentsimilaritysearchusingselfsupervision/15d8a725-63cc-4419-82a9-6ab2f82b4335_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2463f6f93735e4352145bd84963e2742e7d1f681ae57891a511721bba2d3d907
+size 533154
diff --git a/trial2veczeroshotclinicaltrialdocumentsimilaritysearchusingselfsupervision/full.md b/trial2veczeroshotclinicaltrialdocumentsimilaritysearchusingselfsupervision/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..859f71e9285a9bdadc9465ba73e51fd6d1e8d73d
--- /dev/null
+++ b/trial2veczeroshotclinicaltrialdocumentsimilaritysearchusingselfsupervision/full.md
@@ -0,0 +1,378 @@
+# Trial2Vec: Zero-Shot Clinical Trial Document Similarity Search using Self-Supervision
+
+Zifeng Wang1 and Jimeng Sun1,2
+
+1Department of Computer Science, University of Illinois Urbana-Champaign
+
+2Carle Illinois College of Medicine, University of Illinois Urbana-Champaign
+
+{zifengw2,jimeng}@illinois.edu
+
+# Abstract
+
+Clinical trials are essential for drug development but are extremely expensive and time-consuming to conduct. It is beneficial to study similar historical trials when designing a clinical trial. However, lengthy trial documents and lack of labeled data make trial similarity search difficult. We propose a zero-shot clinical trial retrieval method, called Tri al2Vec, which learns through self-supervision without the need for annotating similar clinical trials. Specifically, the meta-structure of trial documents (e.g., title, eligibility criteria, target disease) along with clinical knowledge (e.g., UMLS knowledge base ${}^{1}$ ) are leveraged to automatically generate contrastive samples. Besides, Tri al2Vec encodes trial documents considering meta-structure thus producing compact embeddings aggregating multi-aspect information from the whole document. We show that our method yields medically interpretable embeddings by visualization and it gets 15% average improvement over the best baselines on precision/recall for trial retrieval, which is evaluated on our labeled 1600 trial pairs. In addition, we prove the pretrained embeddings benefit the downstream trial outcome prediction task over 240k trials. ${}^{2}$
+
+# 1 Introduction
+
+Clinical trials are essential for developing new medical interventions (Friedman et al., 2015). Many considerations come into the design of a clinical trial, including study population, target disease, outcome, drug candidates, trial sites, and eligibility criteria, as in Table 1. It is often beneficial to learn from related clinical trials from the past to design an optimal trial protocol (Wang et al., 2022b). However, accurate similarity search based on the lengthy trial documents is still in dire need.
+
+Table 1: An example of the meta-structure of clinical trial document drawn from ClinicalTrials.gov.
+
+
Title
Effects of Electroacupuncture With Different Frequencies for Major Depressive Disorder
Description
Two groups of subjects will be included 55 subjects in electroacupuncture with 2Hz group...
Eligibility Criteria
1. Inclusion Criteria:
+1.1. Patients suffering from MDD in accordance with the diagnostic criteria;
+1.2. Hamilton Depression Scale score is between 21 and 35 (mild to moderate MDD);...
+2. Exclusion Criteria:
+2.1 Patients with bipolar disorder;
+2.2 Patients with schizophrenia or other mental disorders; ...
Outcome Measures
1. Change in anxiety and depression severity measure by Self-rating depression scale
+2. Change in the severity of depression measure by Hamilton depression scale ...
Disease
Major Depressive Disorder
Intervention
electroacupuncture
...
...
+
+Self-supervision based pretraining has delivered promising performances for many NLP and CV tasks with fine-tuning (Devlin et al., 2019; Liu et al., 2019; He et al., 2021; Bao et al., 2021; Wang et al., 2022c). Nevertheless, we find there was few work on zero-shot document retrieval as most address document retrieval in a supervised fashion (Humeau et al., 2019; Khattab and Zaharia, 2020; Guu et al., 2020; Karpukhin et al., 2020; Lin et al., 2020; Luan et al., 2021; Wang et al., 2021; Hofstätter et al., 2020; Li et al., 2020; Zhan et al., 2021; Hofstätter et al., 2021b,a; Jiang et al., 2022) or improve document pre-training for further supervision (Beltagy et al., 2020; Zaheer et al., 2020; Ainslie et al., 2020; Zhang et al., 2021).
+
+Recently, a burgeoning body of research (Gao et al., 2021; Wu et al., 2021; Wang et al., 2022a) proposes to execute self-supervised learning to train semantic-meaningful sentence embeddings free of labels. However, there are still challenges to apply them for document similarity search:
+
+- Lengthy documents. These zero-shot BERT retrieval methods all work on short sentences (usually below 10 words) similarity search while trial documents are often above 1k words. Simply encoding lengthy trials by truncating and averaging embeddings of all remaining tokens inevitable leads to poor retrieval quality.
+- Inefficient contrastive supervision. These unsupervised methods take simple instance discriminative contrastive learning (CL) within batch, e.g., SimCSE (Gao et al., 2021) takes one sentence into the encoder twice to get the positive pairs and all other sentences as the negative. This paradigm has low supervision efficiency to require a large batch size, large data, and long training time, which is infeasible for learning from long trial documents.
+
+In this work, we propose Clinical Trial TO Vectors, Trial2Vec, a zero-shot trial document similarity search using self-supervision. We design a trial encoding framework considering the meta-structure to rid the risk that semantic meaning vanishes due to the uniform average of token embeddings. Meanwhile, the meta-structure is utilized to generate contrastive samples for efficient supervision. Medical knowledge is introduced to further enhance the negative sampling for CL. Our main contributions are:
+
+- We are the first to study the trial-to-trial retrieval task by proposing a label-free SSL model which is able to encode long trials into semantic meaningful embeddings without labels.
+- We propose a data-efficient CL method on medical knowledge and trial meta-structure, which is promising to be extended to further zero-shot structured document retrieval.
+- We demonstrate the superiority of Trial2Vec on a trial relevance dataset of 1600 trials annotated by domain experts. Also, we show Trial2Vec can assist better downstream trial outcome prediction on a dataset of 240k trials.
+
+# 2 Related works
+
+# 2.1 Text & document retrieval
+
+General texts. Early information retrieval methods depend on manual engineering (Robertson and Zaragoza, 2009; Yang et al., 2017). By contrast, dense retrieval methods based on distributional word representations, e.g., Word2Vec (Mikolov
+
+et al., 2013), Glove (Pennington et al., 2014), Doc2Vec (Le and Mikolov, 2014), etc., become popular crediting to their superior performance. The advent of deep models, especially the contextualized encoders like BERT (Devlin et al., 2019), encourages an explosion of neural retrieval methods (Van Gysel et al., 2016; Zamani et al., 2018; Guo et al., 2016; Dehghani et al., 2017; Onal et al., 2018; Reimers and Gurevych, 2019; Chang et al., 2019; Nogueira and Cho, 2019; Chen et al., 2021; Lin et al., 2020; Xiong et al., 2020; Karpukhin et al., 2020; Yates et al., 2021). However, most of them are based on supervised training on sentence pairs from general texts, e.g., SNLI (Bowman et al., 2015). When label is expensive to acquire, as in the clinical trial case, we need zero-shot learning models. Although, there arose some works to perform post-processing on pretrained BERT embeddings to improve their retrieval quality (Li et al., 2020; Su et al., 2021), their performances are far from optimal without specific training.
+
+Clinical trials. Traditional clinical trial query search systems (Tasneem et al., 2012; Tsatsaronis et al., 2012; Jiang and Weng, 2014; Park et al., 2020) are established on protocol databases. Contrast to dense retrieval, these methods rely on entity matching with rules thus not flexible enough. Recent works (Roy et al., 2019; Rybinski et al., 2020, 2021) propose supervised neural ranking for clinical trial query search. However, all of them work on matching trial titles or relevant segments with an input user query. While Trial2Vec can also assist query search, it is the first to encode complete trial documents for the trial-level similarity search.
+
+# 2.2 Text contrastive learning
+
+Contrastive learning is a heated discussed topic recently in NLP and CV (Chen et al., 2020a,b; Chen and He, 2021; Carlsson et al., 2020; Zhang et al., 2020; Wu et al., 2020; Yan et al., 2021; Gao et al., 2021; Wang et al., 2020b; Wang and Sun, 2022). CL is one main topic under the SSL domain. It sheds light on reaching comparable performance as supervised learning free of manual annotations. While CL has been applied to enhance downstream NLP applications like text classification (Li et al., 2021; Zhang et al., 2022), a few (Wang et al., 2020a; Zhang et al., 2020; Yan et al., 2021; Yang et al., 2021) are able to do zero-shot retrieval. Nonetheless, all focus on enhancing sentence embeddings by manipulating text only
+
+
+Figure 1: Overview of the proposed Trial2Vec framework. Top left: the training strategy that accounts for unlabeled input trial documents with meta-structure along with an external medical knowledge database, e.g., UMLS. Top right: The contrastive supervision splits into meta-structure and knowledge guided, respectively. Bottom left: our method hierarchically encodes trials into local and global embeddings on the trial meta-structure. Bottom right: The encoded trial-level embeddings can be used to trial search, query trial search and downstream tasks.
+
+therefore are suboptimal when facing lengthy documents. By contrast, Tri al2Vec uses the document meta-structure with domain knowledge to obtain and facilitate document embeddings.
+
+# 3 Method
+
+In this section, we present the details of Trial2Vec. The main idea is to jointly learn the global and local representations from trial documents considering their meta-structure. Specifically, observed in Table 1, trial document consists of multiple sections while the key attributes (e.g., title, disease, intervention, etc.) occupy a small portion of the whole document. This motivates us to design a hierarchical encoding and the corresponding contrastive learning framework. The overview is illustrated in Fig. 1. Our method generates local attribute embeddings using the TrialBERT backbone separately, then aggregating local embeddings with a learnable attention module to obtain the global trial
+
+embeddings that emphasize significant attributes. We present the pretraining of backbone encoder in §3.1; then we describe the hierarchical encoding process based on the backbone encoder in §3.2; the hierarchical constrative learning methods considering meta-structure and medical knowledge are elucidated in §3.3; at last, we elicit the applications of the proposed framework in §3.4.
+
+# 3.1 Backbone encoder: TrialBERT
+
+We leverage the BERT architecture as the backbone encoder in the framework. In detail, we use the WordPiece tokenizer together with the BioBERT (Lee et al., 2020) pretrained weights as the start point. We continue the pretraining with Masked Language Modeling (MLM) loss on three trial-related data sources: ClinicalTrial.gov3, Medical
+
+Encyclopedia $^{4}$ , and Wikipedia Articles $^{5}$ , see Table 6, to get TrialBERT. ClinicalTrials.gov is a database that contains around 400k clinical trials conducted in 220 countries. Medical Encyclopedia has 4K high-quality articles introducing terminologies in medicine. We also retrieve relevant Wikipedia articles corresponding to the 4k terminologies of Medical Encyclopedia.
+
+# 3.2 Global and local embeddings by Trial2Vec
+
+TrialBERT embeddings pretrained with MLM on clinical corpora still hold weak semantic meaning. Meanwhile, previous sentence embedding BERTs all take an average pooling over token embeddings, which causes the semantic meaning vanishing when applied to lengthy clinical trials. Therefore, we propose Trial2Vec architecture that exploits the global and local embeddings for trial based on its meta-structure.
+
+We split the attributes of a trial into two distinct sets: key attributes and contexts. The first component includes the trial title, intervention, condition, and main measurement, which are sufficient to retrieve a pool of coarsely relevant trial candidates; the second includes descriptions, eligibility criteria, references, etc., which differentiate trials targeting similar diseases or interventions because they provide the multi-facet details regarding disease phases, study designs, targeted populations, etc. According to this design, local embeddings $\{\mathbf{v}_{att}\}_{l=1}^{L} \in \mathbb{R}^{L \times D}$ are produced separately on each key attribute. On the other hand, a context embedding is obtained by encoding the context texts $\mathbf{v}_{ctx} \in \mathbb{R}^{D}$ . Note that the above encoding is all conducted by the same encoder.
+
+We further refine the local embeddings by context embeddings and aggregate them to yield the global trial embedding $\mathbf{v}_g\in \mathbb{R}^D$ . The refinement is performed by multi-head attention, as
+
+$$
+\mathbf {v} _ {g} = \operatorname {M u l t i H e a d A t t n} \left(\mathbf {v} _ {c t x}, \left\{\mathbf {v} _ {l} \right\} _ {l} ^ {L}, \mathbf {W}\right), \tag {1}
+$$
+
+which relocates the attention over key attributes to enhance discriminative power of the yielded global embedding.
+
+# 3.3 Hierarchical contrastive learning
+
+For data-efficient contrastive learning, we utilize the meta-structure & medical knowledge for contrasting local and global embeddings hierarchically.
+
+Global contrastive loss. The first objective is to maximize the semantic in trial embeddings for similarity search. Instead of doing in-batch instance-wise contrastive loss like SimCSE, we propose to sample informative negative pairs by exploiting the trial meta-structure. As shown by Fig. 1, some trials may be linked by a common attribute like disease or intervention. Denote a trial consisting of several attributes by
+
+$$
+\mathbf {x} = \left\{x ^ {\text {t i t l e}}, x ^ {\text {i n t v}}, x ^ {\text {d i s e}}, x ^ {\text {o u t}}, x ^ {\text {c t x}} \right\}, \tag {2}
+$$
+
+we can build an informative negative sample by replacing its title with a trial which also targets for disease $x^{\mathrm{dis}}$ by
+
+$$
+\mathbf {x} ^ {-} = \left\{x ^ {\text {t i t l e} -}, x ^ {\text {i n t v}}, x ^ {\text {d i s e}}, x ^ {\text {o u t}}, x ^ {\text {c t x}} \right\}. \tag {3}
+$$
+
+Meanwhile, we apply a random attribute dropout towards $\mathbf{x}$ to formulate a positive sample as
+
+$$
+\mathbf {x} ^ {+} = \left\{x ^ {\text {t i t l e}}, x ^ {\text {d i s e}}, x ^ {\text {o u t}}, x ^ {\text {c t x}} \right\}. \tag {4}
+$$
+
+InfoNCE loss is utilized in a batch of $B$ trials as
+
+$$
+\mathcal {L} _ {g} = - \sum_ {i = 1} ^ {B} \log \frac {\exp \left(\psi \left(\mathbf {v} _ {g i} , \mathbf {v} _ {g i} ^ {+}\right)\right)}{\sum_ {v _ {g i} ^ {-} \in \mathcal {V} _ {i} ^ {-}} \exp \left(\psi \left(\mathbf {v} _ {g i} , \mathbf {v} _ {g i} ^ {-}\right)\right)}, \tag {5}
+$$
+
+where the negative sample set $\mathcal{V}_i^- = \{\mathbf{v}_{gi}^-\} \cup \{\mathbf{v}_{gj}\}_{j\neq i};\psi (\cdot ,\cdot)$ measures the cosine similarity between two vectors. The global contrastive loss here encourages the model to capture the attribute of interest by discriminating the subtle differences of input trial attributes, which prevent the semantic meanings from vanishing due to the average pooling over all trial texts.
+
+Local contrastive loss. In addition to the global trial embeddings, we put supervision on local embeddings to inject medical knowledge into the model. Unlike general texts, two medical texts can be overlapped word-wise dramatically but still describe two distinct things6, which is challenging for similarity computing. To strengthen TrialBERT discriminative power for medical texts, we extract key medical entities in each text as7
+
+$$
+E \left(x ^ {a t t}\right) = \left\{e _ {1}, e _ {2}, e _ {3}, e _ {4} \right\}, \tag {6}
+$$
+
+then a positive sample is built by mapping one entity $e_1$ to its canonical name or a similar entity under the same parental conception $\hat{e}_1$ defined by UMLS as
+
+$$
+E \left(x ^ {a t t +}\right) = \left\{\hat {e} _ {1}, e _ {2}, e _ {3}, e _ {4} \right\}. \tag {7}
+$$
+
+Similarly, negative sample is built by deletion or replacing one entity with another dissimilar one. InfoNCE loss is therefore used by
+
+$$
+\mathcal {L} _ {l} = - \sum_ {i = 1} ^ {B} \log \frac {\psi \left(\mathbf {v} _ {l i} , \mathbf {v} _ {l i} ^ {+}\right)}{\sum_ {\mathcal {V} _ {l i} ^ {-}} \exp \left(\psi \left(\mathbf {v} _ {l i} , \mathbf {v} _ {l i} ^ {-}\right)\right)}. \tag {8}
+$$
+
+We at last jointly optimize the global and contrastive losses as
+
+$$
+\mathcal {L} = \mathcal {L} _ {g} + \mathcal {L} _ {l}. \tag {9}
+$$
+
+# 3.4 Application of global & local embeddings
+
+The hierarchical contrastive learning offers extraordinary flexibility of Tri al2Vec for various downstream tasks in zero-shot learning. At first, the global trial embeddings $\mathbf{v}_g$ can be directly used for similarity search by comparing trial pair-wise cosine similarities. The computed trial embeddings can also help identify and discover research topics when we apply visualization techniques. On the other hand, we can also execute query search using partial attributes crediting to the contrastive learning between local and global embeddings. When we need do trial-level predictive tasks, e.g., trial termination prediction, a classifier can be attached to the pretrained global trial embeddings and learned; the backbone Tri alBERT is also capable of offering short medical sentence retrieval because of local contrastive learning.
+
+# 4 Experiments
+
+In this section, we conduct five types of experiments to answer the following research questions:
+
+- Exp 1 & 2. How does Trial2Vec perform in complete and partial retrieval scenarios?
+- Exp 3. How do the proposed SSL tasks / embedding dimension contribute to the performance?
+- Exp 4. Is the trial embedding space interpretable and aligned with medical ontology?
+- Exp 5. How useful do well-trained Trial2Vec contribute to downstream tasks, e.g., trial outcome prediction, after fine-tuned?
+- Exp 6. Qualitative analysis of the retrieval results and what are the differences of Trial2Vec and baselines?
+
+Table 2: Statistics of trial status in ClinicalTrials.gov database where we conclude Approved & Completed as completion; Suspended, Terminated, and Withdrawn as the termination for trial outcome prediction.
+
+
Approved
Completed
Suspended
Terminated
Withdrawn
174
210,237
1,658
22,208
10,439
Available
Enrolling
Unavailable
Not recruiting
Recruiting
237
3,662
45,128
18,171
60,362
Completion
Termination
Summary
Others
210,411
34,305
244,716
127,560
+
+# 4.1 Dataset & Setup
+
+Trial Similarity Search. We created a labeled trial dataset to evaluate the retrieval performance where paired trials are labeled as relevant or not. We keep 311,485 interventional trials from the total 399,046 trials. We uniformly sample 160 trials as the query trials. To overcome the sparsity of relevance, we take advantage of TF-IDF (Salton et al., 1983) to retrieve ranked top-10 trials as the candidate to be labeled, resulting in 1,600 labeled pairs of clinical trials. Unlike general documents, the clinical trial document contains many medical terms and formulations. We recruited clinical informatics researchers, and each is assigned 400 pairs to label as relevant or not using label $\{1,0\}$ . To keep labeling processes in line, we specify the minimum annotation guide for judging relevance: (1) same disease; or (2) same intervention and similar diseases (e.g., cancer on distinct body parts). We use precision@k (prec@k), recall@k (rec@k), and nDCG@5 to evaluate and report performances.
+
+$$
+p r e c @ k = \frac {\# \text {o f r e l e v a n t t r i a l s i n t h e t o p k r e s u l t s}}{k}, \tag {10}
+$$
+
+$$
+r e c @ k = \frac {\# \text {o f r e l e v a n t t r i a l s i n t h e t o p k r e s u l t s}}{\# \text {o f r e l e v a n t t r i a l s i n a l l c a n d i d a t e t r i a l s}}. \tag {11}
+$$
+
+Trial termination prediction. We can take the pretrained Tri al2Vec embeddings for predicting the trial outcomes, i.e., if the trial will be terminated or not. We add one additional fully-connected layer on the tail of Tri al2Vec. The targeted outcomes are in the status section of clinical trials, described by Table 2. We formulate the outcome prediction as a binary classification problem to predict the Completion or Termination of trials where we get 210,411 and 34,305 trials as positive and negative labeled, respectively. We take $70\%$ of all as the training set and $20\%$ as the test set; the remaining $10\%$ is used as the validation set for tuning and
+
+Table 3: Precision/Recall and nDCG of the retrieval models on the labeled test set. Values in parenthesis show $95\%$ confidence interval. Best values are in bold.
+
+
Method
Prec@1
Prec@2
Prec@5
Rec@1
Rec@2
Rec@5
nDCG@5
TF-IDF
0.5132(0.063)
0.4386(0.045)
0.3828(0.057)
0.1871(0.038)
0.3172(0.026)
0.6147(0.044)
0.5480(0.034)
BM25
0.7015(0.044)
0.5640(0.041)
0.4246(0.032)
0.3358(0.038)
0.4841(0.050)
0.7666(0.031)
0.7312(0.033)
Word2Vec
0.7492(0.071)
0.6476(0.044)
0.4712(0.033)
0.3008(0.054)
0.4929(0.042)
0.7939(0.041)
0.7712(0.032)
BERT
0.7264(0.050)
0.6219(0.060)
0.4324(0.027)
0.3257(0.051)
0.4896(0.054)
0.7611(0.041)
0.7370(0.047)
BERTWhiten
0.7476(0.094)
0.6630(0.045)
0.4525(0.029)
0.3672(0.045)
0.5832(0.042)
0.8355(0.021)
0.8129(0.024)
BERTSimCSE
0.6788(0.039)
0.5995(0.035)
0.4714(0.021)
0.2824(0.034)
0.4566(0.035)
0.8098(0.025)
0.7308(0.038)
MonoT5Med
0.6799(0.068)
0.5810(0.061)
0.4439(0.051)
0.2904(0.032)
0.4657(0.049)
0.7570(0.037)
0.7171(0.043)
Trial2Vec
0.8810(0.026)
0.7912(0.049)
0.5055(0.039)
0.4216(0.046)
0.6465(0.060)
0.8919(0.030)
0.8825(0.029)
+
+early stopping. We utilize three metrics for evaluation: accuracy (ACC), area under the Receiver Operating Characteristic (ROC-AUC), and area under Precision-Recall curve (PR-AUC).
+
+# 4.2 Baselines & Implementations
+
+We take the following baselines for retrieval: TF-IDF (Salton et al., 1983; Salton and Buckley, 1988), BM25 (Trotman et al., 2014), Word2Vec (Mikolov et al., 2013), BERT-Whitening (Huang et al., 2021; Su et al., 2021), BERT-SimCSE (Gao et al., 2021), and MonoT5 (Roberts et al., 2021; Pradeep et al., 2022). Details of these methods can be seen in Appendix A.
+
+We keep all methods' embedding dimensions at 768. We start from a BERT-base model to continue pre-training on clinical domain corpora, yielding our TrialBERT, which supports as the backbone for BERT-Whitening and BERT-SimCSE for fair comparison. We take 5 epochs with batch size 100 and the learning rate 5e-5. In the second SSL training phase, AdamW optimizer with a learning rate of 2e-5, batch size of 50, and weight decay of 1e-4 is used. Experiments were done with 6 RTX 2080 Ti GPUs.
+
+# 4.3 Exp 1. Complete Trial Similarity Search
+
+Since labels are unavailable in the training phase, we only chose unsupervised/self-supervised baselines. Results are shown by Table 3. Trial2Vec outperforms all baselines with a great margin. It has around $15\%$ improvement on each metrics than the best baselines on average. For baselines, all except for TF-IDF have similar performance. When $k$ is small, the precision gap between Trial2Vec and baselines is large; when $k$ is large, all methods encounter precision reduction. That is because the pool of candidate trials are 10 but the number of positive pairs for each are often less than 5, which limits the maximum of the numerator of
+
+
+
+
+Figure 2: Performance of Tria12Vec on the partial retrieval scenarios. We use a different part of the trial as queries to retrieve similar trials, including keyword $kw$ , intervention $intv$ , disease $dz$ , context $ctx$ . Error bars indicate the $95\%$ confidence interval of results.
+
+prec@k in Eq. (10). Likewise, Trial2Vec also shows stronger performance in rec@k because it is discounted by the maximum number of positive pairs.
+
+Interestingly, the state-of-the-art sentence BERTs, e.g., BERT-whitening and BERT-simCSE, have limited improvement over original BERT and even Word2Vec. Unlike general documents, clinical trials may be overlapped in much content but still be irrelevant if the key entities are different. This special characteristic causes the assumption of a document with similar passage is relevant (Craswell et al., 2020) used in general document retrieval but invalidated in clinical trial retrieval. Without well-designed SSL, it is hard for these methods to learn these subtle differences. Moreover, clinical trial documents are often much longer than the general documents in those open datasets.
+
+
+
+
+Figure 3: Ablation study on the contribution of each Task to the final result. $att, mc, ctx$ are short for attribute, matching, context, respectively. all indicate the full Trial2Vec that all tasks are used.
+
+
+
+
+Figure 4: Analysis of the influence of embedding dimensions on retrieval quality by Trial2Vec: embedding dim in 128, 256, 512, 768. Error bars show the $95\%$ confidence interval.
+
+
+Figure 5: 2D visualization of the trial-level embeddings obtained by Trial2Vec (dimension reduced by t-SNE). It can be seen trials are automatically classified into clusters by topic (diseases) in the embedding space. For example, a series of tumor-related trials (e.g., Breast and Pancreatic Cancers) are on the bottom of the embedding space.
+
+Table 4: Trial outcome prediction performances of baselines and Trial2Vec, after fine-tuned.
+
+
Method
ACC
ROC-AUC
PR-AUC
TF-IDF
0.8571(0.002)
0.7194(0.004)
0.2960(0.008)
Word2Vec
0.8574(0.002)
0.7189(0.005)
0.2906(0.007)
TrialBERT
0.8559(0.002)
0.7277(0.006)
0.3109(0.006)
Trial2Vec
0.8622(0.002)
0.7332(0.004)
0.3137(0.007)
+
+There are 622.4 words per trial on average, while the general STS benchmark has below 15 words per sample, e.g., STS-12: 10.8, STS-13: 8.8, STS-14: 9.1, etc (Cer et al., 2017). We also observed the simple negative sampling strategy of SimCSE is insufficient to learn effective long document embeddings. In comparison, Trial2Vec leverages the meta-structure of clinical trials to focus on the
+
+most informative attributes, with additional context-based refinement, producing embeddings superior in semantic representation.
+
+# 4.4 Exp 2. Partial Query Trial Retrieval
+
+We further investigate the partial trial retrieval scenario where users intend to find similar trials with short and incomplete descriptions, e.g., partial attributes. Results are illustrated by Fig 2. We start by measuring how well Trial2Vec only utilizes the title for trial retrieval. It is witnessed that using title is sufficient to yield comparable performance as the best baseline for complete retrieval shown in Table 3. Nonetheless, we identify that concatenating keywords or intervention with the title reduces performance. Combining title and
+
+Table 5: Case studies comparing the retrieval performance of the Trial2Vec with baseline models. Due to the space limits, only title and NCT ID of trials are given.
+
+
Query Trial
TF-IDF
TrialBERT
Trial2Vec
[NCT02972294] HiFIT Study : Hip Fracture: Iron and Tranex-amic Acid (HiFIT)
[NCT01221389] Study Using Plasma for Patients Requiring Emergency Surgery (SUPPRESS)
[NCT04744181] Patient Blood Management In CARdiac sUrgi-cal patientS (ICARUS)
[NCT01535781] Study of the Effect of Tranexamic Acid Ad-ministered to Patients With Hip Fractures. Can Blood Loss be Reduced?
[NCT01590342] Diclofenac for Submissive PE (AINEP-1)
[NCT04006145] A Phase 2 Study of Elobixibat in Adults With NAFLD or NASH
[NCT04156854] Intravas-cular Volume Expansion to Neuroendocrine-Renal Function Profiles in Chronic Heart Failure
[NCT00247052] Non Steroidal Anti Inflammatory Treatment for Post Operative Pericardial Effu-sion
+
+disease yields similar performance as involving all attributes. This phenomenon signifies that the disease plays a vital role in trial similarity and is always recommended to be involved in query trial retrieval.
+
+# 4.5 Exp 3. Ablation Studies
+
+We conducted ablation studies to measure how SSL tasks and embedding dimensions contribute to final results. Results are shown by Fig. 3, where we remove one Task for each setting and reevaluate. Here, $attmc$ and $ctxmc$ corresponds to the global contrastive loss by negative sampling on key attributes and contexts, respectively; semantic $mc$ indicates the local contrastive loss. We observe that $ctxmc$ is very important. Without it, only attributes of trials are included in the training and inference of Trial2Vec, thus resulting in a significant performance drop. However, even only using a small segment of trials (the attributes), Trial2Vec still reaches similar performance as BERT-SimCSE that receives the whole trial document as inputs. This demonstrates the importance of picking high-quality negative samples during the CL process. Similarly, we observe other two tasks also improve the retrieval quality.
+
+Fig. 4 illustrates the retrieval performance on different embedding dimensions. We identify that reducing embedding dimension does not affect the performance of Trial2Vec much, i.e., one can choose a small embedding dimension (e.g., 128) without suffering much performance degradation while saving lots of storage and computational resources.
+
+# 4.6 Exp 4. Embedding Space Visualization
+
+Fig. 5 plots the 2D visualization of the embedding space of Trial2Vec using t-SNE (Van der Maaten and Hinton, 2008) where around 2k trials uniformly sampled from 300k trials. The tag texts
+
+illustrate the target diseases of trials with different colors. We observe that these trials embeddings show interpretable clusters corresponding to target disease categories. More discussions about this visualization can be referred to Appendix B.
+
+# 4.7 Exp 5. Trial Termination Prediction
+
+Results are illustrated by Table 4. Compared with the shallow models, BERT-based methods gain better performance, which credits the deep architecture of transformers with stronger learning capability. Trial2Vec takes a hierarchical encoding for trial documents on meta-structure thus better revealing the trial characteristics, which plays a central role in predicting its potention outcomes.
+
+# 4.8 Exp 6. Case Study
+
+We perform a qualitative analysis of similarity search results and two baselines. Results are shown in Table 5. These two case studies show that TF-IDF and BERT models all tend to put attention on frequent words in query trials, e.g., blood and iron in case study 1; and heart failure in case study 2. This bias comes from the average poining taken onto all token embeddings. The top-1 relevant clinical trial retrieved by Trial2Vec, on the other hand, provides a more similar trial thanks to the hierarchical encoding and specific local and global contrastive learning. We add more explanations regarding these cases in Appendix C.
+
+# 5 Conclusion
+
+This paper investigated utilizing BERT with self-supervision for encoding trial into dense embeddings for similarity search. Experiments show our method can succeed in zero-shot trial search under various settings. The embeddings are also useful for trial downstream predictive tasks. The qualitative analysis, including embedding space vi
+
+sualization and case studies, further verifies that Trial2Vec gets a medically meaningful understanding of clinical trials.
+
+# Acknowledgement
+
+This work was supported by NSF award SCH2205289, SCH-2014438, IIS-1838042, NIH award R01 1R01NS107291-01.
+
+# Limitations
+
+The empirical evaluation of this method is mainly done on the clinical trial documents drawn from ClinicalTrials.gov which were fully written in English. It might be the best fit when this method is applied to documents in other languages. Although we have tried our best to collect trial relevance datasets, it is still possible that the datasets used for evaluation are not able to cover all cases.
+
+The proposed framework encodes trial documents into compact embeddings for search. It encounters failure cases some time as wrong trials are retrieved. It should be used with discretion when applied to clinical trial research or by individual volunteers who intend to look for trials research. Retrieved results in practice should be used under the supervision with professional clinicians.
+
+# References
+
+Joshua Ainslie, Santiago Ontanon, Chris Alberti, Va-clav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, and Li Yang. 2020. ETC: Encoding long and structured inputs in transformers. In Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 268-284.
+Hangbo Bao, Li Dong, and Furu Wei. 2021. BEiT: Bert pre-training of image transformers. arXiv preprint arXiv:2106.08254.
+Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.
+Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Conference on Empirical Methods in Natural Language Processing, pages 632-642.
+Fredrik Carlsson, Amaru Cuba Gyllensten, Evangelia Gogoulou, Erik Ylipää Hellqvist, and Magnus Sahlgren. 2020. Semantic re-tuning with contrastive tension. In International Conference on Learning Representations.
+
+Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055.
+Wei-Cheng Chang, X Yu Felix, Yin-Wen Chang, Yiming Yang, and Sanjiv Kumar. 2019. Pre-training tasks for embedding-based large-scale retrieval. In International Conference on Learning Representations.
+Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020a. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597-1607. PMLR.
+Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E Hinton. 2020b. Big self-supervised models are strong semi-supervised learners. Advances in Neural Information Processing Systems, 33:22243-22255.
+Xiaoyang Chen, Kai Hui, Ben He, Xianpei Han, Le Sun, and Zheng Ye. 2021. Co-BERT: A context-aware bert retrieval model incorporating local and query-specific context. arXiv preprint arXiv:2104.08523.
+Xinlei Chen and Kaiming He. 2021. Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15750-15758.
+Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M Voorhees. 2020. Overview of the trec 2019 deep learning track. arXiv preprint arXiv:2003.07820.
+Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W Bruce Croft. 2017. Neural ranking models with weak supervision. In International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 65-74.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT*.
+Kawin Ethayarajh. 2019. How contextual are contextualized word representations? comparing the geometry of bert, elmo, and gpt-2 embeddings. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 55-65.
+Lawrence M. Friedman, Curt D. Furberg, David L. DeMets, David M. Reboussin, and Christopher B. Granger. 2015. Fundamentals of Clinical Trials. Springer, New York, NY.
+Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894-6910.
+
+Jiafeng Guo, Yixing Fan, Qingyao Ai, and W Bruce Croft. 2016. A deep relevance matching model for ad-hoc retrieval. In ACM International on Conference on Information and Knowledge Management, pages 55-64.
+Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Realm: Retrievalaugmented language model pre-training. arXiv preprint arXiv:2002.08909.
+Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollar, and Ross Girshick. 2021. Masked autoencoders are scalable vision learners. arXiv preprint arXiv:2111.06377.
+Sebastian Hofstätter, Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy Lin, and Allan Hanbury. 2021a. Efficiently teaching an effective dense retriever with balanced topic aware sampling. In International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 113-122.
+Sebastian Hofstätter, Bhaskar Mitra, Hamed Zamani, Nick Craswell, and Allan Hanbury. 2021b. Intradocument cascading: Learning to select passages for neural document ranking. In International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1349-1358.
+Sebastian Hofstätter, Hamed Zamani, Bhaskar Mitra, Nick Craswell, and Allan Hanbury. 2020. Local self-attention over long text for efficient document retrieval. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2021–2024.
+Junjie Huang, Duyu Tang, Wanjun Zhong, Shuai Lu, Linjun Shou, Ming Gong, Daxin Jiang, and Nan Duan. 2021. Whiteningbert: An easy unsupervised sentence embedding approach. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 238-244.
+Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Poly-encoders: Architectures and pre-training strategies for fast and accurate multi-sentence scoring. In International Conference on Learning Representations.
+Silis Y Jiang and Chunhua Weng. 2014. Crosssystem evaluation of clinical trial search engines. AMIA Summits on Translational Science Proceedings, 2014:223.
+Ting Jiang, Shaohan Huang, Zihan Zhang, Deqing Wang, Fuzhen Zhuang, Furu Wei, Haizhen Huang, Liangjie Zhang, and Qi Zhang. 2022. PromptBERT: Improving bert sentence embeddings with prompts. arXiv preprint arXiv:2201.04337.
+Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Conference on Empirical Methods in Natural Language Processing, pages 6769-6781.
+
+Omar Khattab and Matei Zaharia. 2020. ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 39-48.
+Bevan Koopman and Guido Zuccon. 2016. A test collection for matching patients to clinical trials. In International ACM SIGIR conference on Research and Development in Information Retrieval, pages 669-672.
+Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In International Conference on Machine Learning, pages 1188-1196. PMLR.
+Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.
+Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In EMNLP.
+Peizhao Li, Jiaxiang Gu, Jason Kuen, Vlad I Morariu, Handong Zhao, Rajiv Jain, Varun Manjunatha, and Hongfu Liu. 2021. Selfdoc: Self-supervised document representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5652-5660.
+Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. 2020. Distilling dense representations for ranking using tightly-coupled teachers. arXiv preprint arXiv:2010.11386.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
+Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2021. Sparse, dense, and attentional representations for text retrieval. Transactions of the Association for Computational Linguistics, 9:329-345.
+Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
+Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with bert. arXiv preprint arXiv:1901.04085.
+Kezban Dilek Onal, Ye Zhang, Ismail Sengor Altin-govde, Md Mustafizur Rahman, Pinar Karagoz, Alex Braylan, Brandon Dang, Heng-Lu Chang, Henna Kim, Quinten McNamara, et al. 2018. Neural information retrieval: At the end of the early years. Information Retrieval Journal, 21(2):111-182.
+
+Junseok Park, Seongkuk Park, Kwangmin Kim, Woochang Hwang, Sunyong Yoo, Gwan-su Yi, and Doheon Lee. 2020. An interactive retrieval system for clinical trial studies with context-dependent protocol elements. PloS one, 15(9):e0238290.
+Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Conference on Empirical Methods in Natural Language Processing, pages 1532-1543.
+Ronak Pradeep, Yilin Li, Yuetong Wang, and Jimmy Lin. 2022. Neural query synthesis and domain-specific ranking templates for multi-stage clinical trial matching. In International ACM SIGIR Conference on Research and Development in Information Retrieval.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1-67.
+Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using siamese bert-networks. In Conference on Empirical Methods in Natural Language Processing, pages 3982-3992.
+Kirk Roberts, Dina Demner-Fushman, Ellen M. Voorhees, Steven Bedrick, and Willian R. Hersh. 2021. Overview of the trec 2021 clinical trials track. In Proceedings of the Thirteenth Text REtrieval Conference (TREC 2021).
+Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Now Publishers Inc.
+Soumyadeep Roy, Koustav Rudra, Nikhil Agrawal, Shamik Sural, and Niloy Ganguly. 2019. Towards an aspect-based ranking model for clinical trial search. In International Conference on Computational Data and Social Networks, pages 209-222. Springer.
+Maciej Rybinski, Sarvnaz Karimi, and Aleney Khoo. 2021. Science2Cure: A clinical trial search prototype. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2620-2624.
+Maciej Rybinski, Jerry Xu, and Sarvnaz Karimi. 2020. Clinical trial search: Using biomedical language understanding models for re-ranking. Journal of Biomedical Informatics, 109:103530.
+Gerard Salton and Christopher Buckley. 1988. Termweighting approaches in automatic text retrieval. Information Processing & Management, 24(5):513-523.
+Gerard Salton, Edward A Fox, and Harry Wu. 1983. Extended boolean information retrieval. Communications of the ACM, 26(11):1022-1036.
+
+Jianlin Su, Jiarun Cao, Weijie Liu, and Yangyiwen Ou. 2021. Whitening sentence representations for better semantics and faster retrieval. arXiv preprint arXiv:2103.15316.
+Asba Tasneem, Laura Aberle, Hari Ananth, Swati Chakraborty, Karen Chiswell, Brian J McCourt, and Ricardo Pietrobon. 2012. The database for aggregate analysis of clinicaltrials.gov (aact) and subsequent regrouping by clinical specialty. PloS one, 7(3):e33677.
+Andrew Trotman, Antti Puurula, and Blake Burgess. 2014. Improvements to bm25 and language models examined. In Australasian Document Computing Symposium, pages 58-65.
+George Tsatsaronis, Konstantinos Mourtzoukos, Vassiliki Andronikou, Tassos Tagaris, Iraklis Varlamis, Michael Schroeder, Theodora Varvarigou, Dimitris Koutsouris, and Nikolaos Matskanis. 2012. PONTE: a context-aware approach for automated clinical trial protocol design. In proceedings of the 6th International Workshop on Personalized Access, Profile Management, and Context Awareness in Databases in conjunction with VLDB.
+Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11).
+Christophe Van Gysel, Maarten de Rijke, and Evangelos Kanoulas. 2016. Learning latent vector spaces for product search. In ACM International on Conference on Information and Knowledge Management, pages 165-174.
+Hao Wang, Yangguang Li, Zhen Huang, Yong Dou, Lingpeng Kong, and Jing Shao. 2022a. SNCSE: Contrastive learning for unsupervised sentence embedding with soft negative samples. arXiv preprint arXiv:2201.05979.
+Shuohang Wang, Yuwei Fang, Siqi Sun, Zhe Gan, Yu Cheng, Jingjing Liu, and Jing Jiang. 2020a. Crossthought for sentence encoder pre-training. In *Conference on Empirical Methods in Natural Language Processing*, pages 412-421.
+Zifeng Wang, Xi Chen, Rui Wen, Shao-Lun Huang, Ercan Kuruoglu, and Yefeng Zheng. 2020b. Information theoretic counterfactual learning from missing-not-at-random feedback. Advances in Neural Information Processing Systems, 33:1854-1864.
+Zifeng Wang, Chufan Gao, Lucas M Glass, and Jimeng Sun. 2022b. Artificial intelligence for in silico clinical trials: A review. arXiv preprint arXiv:2209.09023.
+Zifeng Wang and Jimeng Sun. 2022. Transtab: Learning transferable tabular transformers across tables. arXiv preprint arXiv:2205.09328.
+
+Zifeng Wang, Rui Wen, Xi Chen, Shilei Cao, Shao-Lun Huang, Buyue Qian, and Yefeng Zheng. 2021. Online disease diagnosis with inductive heterogeneous graph convolutional networks. In Proceedings of the Web Conference, pages 3349-3358.
+Zifeng Wang, Zhenbang Wu, Dinesh Agarwal, and Jimeng Sun. 2022c. MedCLIP: Contrastive learning from unpaired medical images and text. In EMNLP.
+Xing Wu, Chaochen Gao, Liangjun Zang, Jizhong Han, Zhongyuan Wang, and Songlin Hu. 2021. ESimCSE: Enhanced sample building method for contrastive learning of unsupervised sentence embedding. arXiv preprint arXiv:2109.04380.
+Zhuofeng Wu, Sinong Wang, Jiatao Gu, Madian Khabsa, Fei Sun, and Hao Ma. 2020. CLEAR: Contrastive learning for sentence representation. arXiv preprint arXiv:2012.15466.
+Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In International Conference on Learning Representations.
+Yuanmeng Yan, Rumei Li, Sirui Wang, Fuzheng Zhang, Wei Wu, and Weiran Xu. 2021. ConSERT: A contrastive framework for self-supervised sentence representation transfer. arXiv preprint arXiv:2105.11741.
+Peilin Yang, Hui Fang, and Jimmy Lin. 2017. Anserini: Enabling the use of lucene for information retrieval research. In International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1253-1256.
+Ziyi Yang, Yinfei Yang, Daniel Cer, Jax Law, and Eric Darve. 2021. Universal sentence representation learning with conditional masked language model. In _Conference on Empirical Methods in Natural Language Processing_, pages 6216-6228.
+Andrew Yates, Rodrigo Nogueira, and Jimmy Lin. 2021. Pretrained transformers for text ranking: Bert and beyond. In ACM International Conference on Web Search and Data Mining, pages 1154-1156.
+Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big bird: Transformers for longer sequences. Advances in Neural Information Processing Systems, 33:17283-17297.
+Hamed Zamani, Mostafa Dehghani, W Bruce Croft, Erik Learned-Miller, and Jaap Kamps. 2018. From neural re-ranking to neural ranking: Learning a sparse representation for inverted indexing. In ACM International Conference on Information and Knowledge Management, pages 497-506.
+
+Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, and Shaoping Ma. 2021. Optimizing dense retrieval model training with hard negatives. In International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1503-1512.
+Hang Zhang, Yeyun Gong, Yelong Shen, Weisheng Li, Jiancheng Lv, Nan Duan, and Weizhu Chen. 2021. Poolingformer: Long document modeling with pooling attention. In International Conference on Machine Learning, pages 12437-12446. PMLR.
+Yan Zhang, Ruidan He, Zuozhu Liu, Kwan Hui Lim, and Lidong Bing. 2020. An unsupervised sentence embedding method by mutual information maximization. In Conference on Empirical Methods in Natural Language Processing, pages 1601-1610.
+Yu Zhang, Zhihong Shen, Chieh-Han Wu, Boya Xie, Junheng Hao, Ye-Yi Wang, Kuansan Wang, and Jiawei Han. 2022. Metadata-induced contrastive learning for zero-shot multi-label text classification. arXiv preprint arXiv:2202.05932.
+
+Table 6: List of text corpora used for continual pretraining of TrialBERT.
+
+
Corpus
Number of words
ClinicalTrials.gov
240M
Medical Encyclopedia
3M
Wikipedia Articles
11M
+
+# A Baselines for clinical trial similarity search
+
+- TF-IDF (Salton et al., 1983; Salton and Buckley, 1988). It is short for term frequency-inverse document frequency that has been widely used for information retrieval systems for decades. One can use TF-IDF for document retrieval by concatenating scores of all words in this document then computing cosine distance between document vectors.
+- BM25 (Trotman et al., 2014). A bag-of-words retrieval method commonly used in practice. We run it based on the rank-bm25 package with its default hyperparameters.
+- Word2Vec (Mikolov et al., 2013). It is a classic dense retrieval method by building distributed word representations by self-supervised learning methods (CBOW). We take an average pooling of word representations in a document for retrieval by cosine distance. We use gensim to run this method.
+- BERT. We take an average pooling over all token embeddings at the last layer of it for similarity computation. We take the TrialBERT pretrained on all the clinical trial documents.
+- BERT-Whitening (Huang et al., 2021; Su et al., 2021). This is an unsupervised post-processing method that uses anisotropic BERT embeddings (Ethayarajh, 2019; Li et al., 2020) to improve semantic search. We take the average of last and first layer of its BERT embeddings following Su et al. (2021).
+- BERT-SimCSE (Gao et al., 2021). It is a contrastive sentence representation learning method stemming from InfoNCE loss. It simply takes other samples in batch as negative samples.
+
+- MonoT5-Med (Pradeep et al., 2022). It was proposed in (Roberts et al., 2021) for matching patient descriptive texts and clinical trial documents via T5 model (Raffel et al., 2020) based on prompts. We use its version finetuned on Med Marco dataset (Koopman and Zuccon, 2016).
+
+# B Embedding space visualization
+
+From Fig. 5, trial embeddings are clearly clustered into topics with self-supervised learning, which provides a great help for topic mining and discovery for the existing clinical trials. For instance, we can find that cancers that happen on different body parts are near to each other on the bottom of the embedding space (Prostate Cancer, Breast Cancer, Pancreatic Cancer, Colorectal Cancer, etc.). Also, the diseases which are related to brain function, e.g., Alzheimer's Disease, Parkinson's Disease, Major Depressive Disorder, etc. Other examples include Covid19, Influenza, Pulmonary Disease, etc.
+
+The reason is that we explicitly utilize the knowledge from attributes of trials for negative sample building, which endows the embedding space the ability to discriminate trials' similarity. These similar trials can also have similar characteristics like having similar recruiting criteria or targeting similar outcome measures, which are captured by Trial2Vec by refining the embeddings of attributes by detailed descriptions. Based on this observation, we can infer that such medically meaningful trial embeddings would be beneficial to downstream tasks on clinical trials, e.g., trial outcome prediction.
+
+# C Case Study
+
+For the first case, the query trial is [NCT02972294], which studies using Tranexamic acid and Iron Isomaltoside to reduce the occurrence of Anemia and blood transfusion in hip fracture cases. We show the top-1 retrieved by three methods on the right. Trial found by TF-IDF studies the efficiency of plasma in patients with Hemorrhagic shock; BioBERT finds a trial about patients undergoing heart surgery who have Anaemia to test if a correction of iron reduces red blood cell transfusion requirements. Trial2Vec finds a trial that studies Tranexamic acid effect in blood loss in hip fracture operations. Trial2Vec result is highly relevant to the query trial as it has the identical drug on blood loss of the same type of operation.
+
+In the second example, the query trial tries to investigate the benefits of Diclofenac for Normotensive patients with acute symptomatic Pulmonary Embolism and Right Ventricular Dysfunction. TF-IDF finds an irrelevant study on the efficacy and safety of Elobixibat for adults with NAFLD or NASH. TrialBERT also retrieves an irrelevant study on Intravascular Volume Expansion to Neuroendocrine-Renal Function Profiles in Chronic Heart Failure. On the other hand, Trial2Vec digs out a trial that studies the same type of drug with a similar purpose as the target's: evaluating the efficiency of NSAID (Diclofenac) to the evolution of postoperative (cardiac surgery) pericardial effusion.
\ No newline at end of file
diff --git a/trial2veczeroshotclinicaltrialdocumentsimilaritysearchusingselfsupervision/images.zip b/trial2veczeroshotclinicaltrialdocumentsimilaritysearchusingselfsupervision/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..63e35d45b1ba1059e037020387275f6d74b4ecbe
--- /dev/null
+++ b/trial2veczeroshotclinicaltrialdocumentsimilaritysearchusingselfsupervision/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a48bdb2736125524db7099299af4467d0f34330b46fd403e3d5184053878f4a9
+size 640307
diff --git a/trial2veczeroshotclinicaltrialdocumentsimilaritysearchusingselfsupervision/layout.json b/trial2veczeroshotclinicaltrialdocumentsimilaritysearchusingselfsupervision/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..421de99cf8af4ad9ceee4ea7317a784780b8e53c
--- /dev/null
+++ b/trial2veczeroshotclinicaltrialdocumentsimilaritysearchusingselfsupervision/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:187f17a3bbc21c08eb7ad1cd454ccc3ffe31790f03ed7e44cb049c177d7d7aa7
+size 407560
diff --git a/truncationsamplingaslanguagemodeldesmoothing/ea9a6553-04fe-4b9b-a697-4e86be2378ef_content_list.json b/truncationsamplingaslanguagemodeldesmoothing/ea9a6553-04fe-4b9b-a697-4e86be2378ef_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..758dd8a6af1386a873e6a1856230759317c77453
--- /dev/null
+++ b/truncationsamplingaslanguagemodeldesmoothing/ea9a6553-04fe-4b9b-a697-4e86be2378ef_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:34d6da8d7450ca508708018473740451b74cee0933de9080ad3d721c3d6fe439
+size 111412
diff --git a/truncationsamplingaslanguagemodeldesmoothing/ea9a6553-04fe-4b9b-a697-4e86be2378ef_model.json b/truncationsamplingaslanguagemodeldesmoothing/ea9a6553-04fe-4b9b-a697-4e86be2378ef_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..587ed1cd51e02718b18c10a1e35ca32bf19e0fc4
--- /dev/null
+++ b/truncationsamplingaslanguagemodeldesmoothing/ea9a6553-04fe-4b9b-a697-4e86be2378ef_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ff7680121e4e5521a9ad86a4a38defa27a4edafd9006228c442d9c255b5ffa3
+size 131065
diff --git a/truncationsamplingaslanguagemodeldesmoothing/ea9a6553-04fe-4b9b-a697-4e86be2378ef_origin.pdf b/truncationsamplingaslanguagemodeldesmoothing/ea9a6553-04fe-4b9b-a697-4e86be2378ef_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2d2a5e374199b49320a2a08d8235ea6ecec258fd
--- /dev/null
+++ b/truncationsamplingaslanguagemodeldesmoothing/ea9a6553-04fe-4b9b-a697-4e86be2378ef_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:30ddff32e7de23034cd319b2ccb8b7fc78df8b24c357fb0dbcd1296d0d2ed3ce
+size 1175505
diff --git a/truncationsamplingaslanguagemodeldesmoothing/full.md b/truncationsamplingaslanguagemodeldesmoothing/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..2abcec422ae282ad9584c83aeb9ad8e3c3ade57d
--- /dev/null
+++ b/truncationsamplingaslanguagemodeldesmoothing/full.md
@@ -0,0 +1,498 @@
+# Truncation Sampling as Language Model Desmoothing
+
+John Hewitt
+
+Christopher D. Manning
+
+Percy Liang
+
+Department of Computer Science
+
+Stanford University
+
+{johnhew,manning,pliang}@cs.stanford.edu
+
+# Abstract
+
+Long samples of text from neural language models can be of poor quality. Truncation sampling algorithms-like top- $p$ or top- $k$ —address this by setting some words' probabilities to zero at each step. This work provides framing for the aim of truncation, and an improved algorithm for that aim. We propose thinking of a neural language model as a mixture of a true distribution and a smoothing distribution that avoids infinite perplexity. In this light, truncation algorithms aim to perform desmoothing, estimating a subset of the support of the true distribution. Finding a good subset is crucial: we show that top- $p$ unnecessarily truncates high-probability words, for example causing it to truncate all words but Trump for a document that starts with Donald. We introduce $\eta$ -sampling, which truncates words below an entropy-dependent probability threshold. Compared to previous algorithms, $\eta$ -sampling generates more plausible long English documents according to humans, is better at breaking out of repetition, and behaves more reasonably on a battery of test distributions.
+
+# 1 Introduction
+
+The complex, long-range dependencies of natural language make its generation an outstanding challenge. While there has been enormous progress on language modeling that has increased the coherence and length of generation (Brown et al., 2020; Chowdhery et al., 2022), sampling directly from a language model can still result in nonsensical output (Holtzman et al., 2020; Pillutla et al., 2021).
+
+The most effective heuristics for generating high quality, diverse samples fall under a category we term truncation sampling. These algorithms set some words' probabilities to zero when generating each word (Fan et al., 2018; Basu et al., 2021; Meister and Cotterell, 2021). Methods differ by their truncation criteria, ranging from simple (keep the $k$ most likely) to complex, and all improve sample quality compared to direct sampling (Holtzman
+
+
+Figure 1: A neural LM as a mixture of the true distribution, and a uniform-like smoothing distribution. Truncation aims to approximate the true distribution support.
+
+
+
+et al., 2020). We ask (1) what is the aim of truncation and (2) how can we improve it?
+
+Our key insight is to write a neural language model's distribution as a mixture of the true distribution and a uniform-like smoothing distribution. This idealized assumption is motivated by KL-divergence: models incur large KL at test time when they place near zero probability on an observed word (Kang and Hashimoto, 2020). Through this lens, the goal of truncation is to dessmooth: to approximately recover the words on which the true distribution places some probability.
+
+As a stark example of smoothing degenerating sample quality, we show that a 5-gram language model smoothed with the uniform distribution generates nonsense as soon as a word is sampled from outside the support of the 5-gram model (Figure 2). Intuitively, sampling outside the 5-gram support causes future probabilities to be poorly estimated.
+
+We derive principles of truncation from an explicit smoothing model that formalizes the intuition that (1) words with high probability should not be truncated, and (2) when all words in the distribution have low probability, only words with low probability relative to the rest should be truncated. We find that state-of-the-art truncation sampling algorithms like top- $p$ break these principles. For example, in top- $p$ truncation (e.g., $p = 0.95$ ), the most likely few words can take up $p\%$ of the distribution, caus
+
+
Unsmoothed 5-gram
Smoothed 5-gram
... a quadcopter flight controller (RTFQ Flip MWC) that supports I2C sensors for adding thing like a barometer, magnetometer, and GPS system. The officially supported sensor block (BMP180, HMC5883L on one board) is discontinued, as far as I know, everyone involved lived to sing another day.
... disorder and an extreme state of dysmetabolism characterized by extensive erythema and a significant reduction in uncovered Hawkingü McKK 400 ruled restrainedcombe-blow uncle cowork Carssoild Gareth focused <@ indecentlol by102 exchanged Volvo compositionsbackground prostate
+
+Figure 2: Portions of unconditional samples from an unsmoothed and uniform-smoothed 5-gram model; divergence due to leaving the support of the high-order distribution is in red.
+
+ing the next-most likely word to be truncated even if it has high probability (e.g., $4\%$ ).
+
+From our two truncation principles we derive $\eta$ -sampling, a new algorithm that truncates any word whose probability under the LM is both (1) smaller than an absolute probability threshold and (2) smaller than a probability threshold that depends on the entropy of the distribution. As we'll show, this ensures that, e.g., though GPT-2 large assigns probability 0.96 to the word Trump for a document starting with Donald, $\eta$ -sampling allows multiple possible continuations, unlike top- $p = 0.95$ .
+
+We extensively study the behavior of $\eta$ -sampling in comparison to top- $p$ sampling and typical decoding (Meister and Cotterell, 2021). Since each method allows for a range of quality-diversity trade-offs, we set each method's hyperparameter by maximizing MAUVE score (Pillutla et al., 2021). We find that $\eta$ -sampling truncates more reasonably on a CheckList-style (Ribeiro et al., 2020) battery of distributions. Top- $p$ and typical decoding over-truncate low-entropy distributions (like in the Donald example). Finally, $\eta$ -sampling generates long documents that humans find more plausible and is better at breaking out of repetition. $^{1}$
+
+# 2 Background
+
+# 2.1 Language Models
+
+Let random variable $X = (X_{1},\ldots ,X_{T})$ denote a sequence of tokens, where each $X_{i}$ is in finite vocabulary $\mathcal{V}$ . We'll use $x_{< i}$ to refer to a specific prefix, $x_{i}$ a specific word in context, and $x$ an arbitrary word in $\mathcal{V}$ . An autoregressive language model (LM) is a distribution $P_{\theta}(X)$ indexed by parameters $\theta$ that is factorized as $P_{\theta}(x) = \prod_{i = 1}^{T}P_{\theta}(x_i\mid x_{< i})$ . We call $P_{\theta}(X_i\mid x_{< i})$ over $\mathcal{V}$ the conditional distribution of the LM given context $x_{< i}$ . An LM is trained to minimize the KL-divergence between (an empirical estimate of) the true distribution $P^{*}(X)$
+
+and $P_{\theta}(X)$ . Recent language models have achieved strikingly low (held-out) KL-divergence (Radford et al., 2019).
+
+Language models are used not just to score the probability of existing sequences, but to generate sequences as $\hat{x} \sim P_{\theta}(X)$ , a building block for tasks like summarization and long-form question answering (Fan et al., 2019; Liu and Lapata, 2019). However, to successfully generate high-variety, high-quality long samples from neural LMs on high-entropy distributions, it is currently necessary to reallocate probability from the tail of conditional distributions (Holtzman et al., 2020; Pillutla et al., 2021). Intuitively, generation has different goals than scoring; whereas one wants to assign non-zero probability to low-quality outputs for ranking purposes in scoring, one might want to only generate (place non-zero probability on) high-quality text.
+
+# 2.2 Truncation sampling
+
+There are many ways to reassign probability mass from the tail of the word-level distributions of a model to the head—like temperature scaling—but explicit truncation of low-probability words has been shown to be the most useful (Holtzman et al., 2020; Pillutla et al., 2021). Truncation sampling algorithms compute the following truncated distribution at each time step:
+
+$$
+P _ {\text {t r u n c}} (x \mid x _ {< i}) = \left\{ \begin{array}{l l} P _ {\theta} (x \mid x _ {< i}) / Z _ {x _ {< i}} & x \in \mathcal {A} _ {x _ {< i}} \\ 0 & \text {o . w .} \end{array} \right. \tag {1}
+$$
+
+where $\mathcal{A}_{x_{< i}}\subseteq \mathcal{V}$ we call the allowed set for the algorithm for that prefix, and $Z_{x_{< i}} = \sum_{x\in \mathcal{A}_{x_{< i}}}P_{\theta}(x\mid x_{< i})$ is the renormalization term.
+
+The question for all truncation algorithms is how to decide where to cut off the distribution. Top- $k$ sampling (Fan et al., 2018) keeps the $k$ most likely words. Top- $p$ sampling (Holtzman et al., 2020) improved upon it by noting that sometimes more or fewer than $k$ words should be in the allowed set,
+
+instead allowing the minimal set of words to keep $p$ percent of the probability. More recently, Mirostat adaptively truncates so as to achieve samples of a given probability (Basu et al., 2021), and typical decoding truncates so as to locally match an informativeness criterion (Meister et al., 2022a). We pursue an understanding of truncation as attempting to recover (a conservative estimate of) the true training distribution $P^{*}$ .
+
+# 3 Truncation as Desmoothing
+
+# 3.1 KL-divergence and mode covering
+
+Language models are trained to minimize the KL-divergence to an empirical approximation of true distribution $P^{*}(X)$ . Recall that the KL-divergence for a model's conditional distribution $P_{\theta}(X \mid x_{ 0\}$ be the true distribution support (set of words with non-zero probability) for the prefix $x_{5
+
+Since the mass of a word under the true model, $P^{*}(x \mid x_{ \epsilon \} \tag {7}
+$$
+
+In the case of the prompt My name where top- $p$ rejects plausible words because of the probability assigned to is (and 's), $\epsilon$ -sampling allows additional words with a threshold of, e.g., 0.0003.
+
+However, $\epsilon$ -sampling breaks the relative probability principle. For example, the prompt The should allow many continuations, and top- $p$ with
+
+GPT-2 allows over ten thousand words, but $\epsilon$ would have to be impractically small to do so. This is a key failure akin to that of top- $k$ sampling; when many next words are plausible, the allowed set should reflect that.
+
+# 4.4 $\eta$ -sampling (ours)
+
+Our proposed algorithm, $\eta$ -sampling, composes respect for both the absolute and relative probability principles. Consider a conditional distribution $P_{\theta}(X \mid x_{ \eta \right\}
+$$
+
+$$
+\eta = \min \left(\epsilon , \alpha \exp (- h _ {\theta , x _ {< i}})\right) \}
+$$
+
+where $h_{\theta ,x_{< i}}$ is the entropy of $P_{\theta}(X\mid x_{< i})$ . In this work, to expose a single hyperparameter, we set $\alpha = \sqrt{\epsilon}$ , which we find works well empirically. For intuition, think of $\epsilon \approx 0.0009$
+
+Analysis of $\eta$ -sampling. Returning to our smoothing model, we note that $\eta$ -sampling approximates optimal desmoothing in the regime that the support penalty $\beta_{\mathrm{sup}}$ dominates the variation penalty $\beta_{\mathrm{var}}$ . Consider a truncation algorithm that truncates as $\eta$ -sampling, but sets $\eta$ as:
+
+$$
+\eta = \min \left(\frac {(1 - \bar {\lambda}) (1 + \delta)}{| \mathcal {V} |}, \alpha \exp (- h _ {x < i})\right) \}, \tag {8}
+$$
+
+where $h_{x < i}$ is the entropy of the true distribution, not $P_{\theta}$ . We're guaranteed that the support loss (the term weighted by $\beta_{\mathrm{sup}}$ ) is zero, and that the variation loss (weighted by $\beta_{\mathrm{var}}$ ) is minimized relative to the constraint of zero support loss. If $x \notin S_{x < i}^{*}$ , then the probability of $x$ is less than or equal to the min of $(1 - \bar{\lambda})(1 + \delta) / |\mathcal{V}|$ and $\frac{|\mathcal{V}|\alpha\exp(-h_{x < i})}{1 + \delta} \times \frac{1 + \delta}{|\mathcal{V}|} = \alpha\exp(-h_{x < i})$ . So, we're guaranteed that $A_{x < i} \subseteq S_{x < i}^{*}$ , and truncating more would break this guarantee. Our $\eta$ -sampling approximates this by using the LM entropy instead of the unavailable true distribution entropy, and without knowing the true hyperparameters.
+
+# 5 Experiments & Results
+
+Our experiments characterize $\eta$ -sampling relative to the state-of-the-art top- $p$ and typical decoding.
+
+
Method
Hyperparameters
top-p
{0.89, 0.9, 0.92, 0.95, 0.99}
typical
{0.2, 0.9, 0.92, 0.95}
ε
{0.001, 0.0009, 0.0006, 0.0003, 0.0001}
η
{0.004, 0.002, 0.0009, 0.0006, 0.0003}
+
+Table 1: Hyperparameter sweep for each method.
+
+
Method \ Model
sm
med
lg
xl
raw sampling †
0.589
0.373
0.845
0.882
top-p †
0.878
0.915
0.936
0.940
top-p (our replication)
0.874
0.917
0.932
0.944
Typical Decoding
0.873
0.906
0.922
0.939
ε-sampling (ours)
0.874
0.918
0.936
0.941
η-sampling (ours)
0.880
0.920
0.935
0.942
+
+We use MAUVE, an automatic metric for open-ended generation, to find hyperparameters giving comparable diversity-accuracy tradeoffs. $\eta$ -sampling behaves better in a range of settings, from long-document generation to more defensibly truncating low-entropy distributions.
+
+Models & Data. In all experiments, we use all or some subset of the four GPT-2 models (Radford et al., 2019) of varying sizes. Experiments are run on in-distribution, held-out data from the validation or test set of GPT-2 (WebText), since it is composed of a wide variety of long-form documents.
+
+# 5.1 Hyperparameter sweep on MAUVE
+
+We first find hyperparameters for each of top- $p$ typical decoding, $\epsilon$ -sampling, and $\eta$ -sampling that maximize MAUVE score for each GPT-2 model on WebText.
+
+Setting. Following the MAUVE paper's setting exactly (Pillutla et al., 2021), we take the GPT-2 family of models and 5,000 samples from their test data. For each sample, we prompt the model with 35 words and generate until at most 1024 words. We study GPT-2 small (124M parameters), medium (355M), large (774M) and XL (1.5B) models.
+
+Evaluation. MAUVE attempts to measure both the precision (are samples generally like those from the true distribution) and recall (is the variability in samples like that of those from the true distribution)
+
+Table 2: Results on the MAUVE metric for open-ended GPT-2 WebText generation. Higher is better. The $\dagger$ indicates numbers drawn from Pillutla et al. (2021). Bold indicates best for model, not necessarily significantly.
+
+
Study 1: Human vs top-p vs η
top-p
η-sampling
Human
Top-p vs human
43 (43%)
—
56 (56%)
η vs human
—
42 (42%)
53 (53%)
Top-p vs η
39 (39%)
53 (53%)
—
Study 2: top-p vs η-sampling
Top-p
η-sampling
Equal
Top-p vs η
118 (40%)
159 (53%)
17 (6%)
+
+Table 3: Human preferences of long-document plausibility; we report absolute numbers of judgments, and percentages in parentheses. Judgment percents that both suffixes were too bad to judge can be inferred.
+
+of samples from a text generation system. It was shown by Pillutla et al. (2021) to correlate well with human judgments.
+
+Hyperparameters. Top- $p$ , typical decoding, $\epsilon$ -sampling, and $\eta$ -sampling all have a hyperparameter which determines the severity of truncation. The set we search over is given in Table 1. $^{10}$ We pick the best hyperparameter using 2-5 seeds on the validation set, and report the average performance across 5 seeds on the test set.
+
+Results. The results are reported in Table 2; we find that overall, the methods perform similarly, with typical decoding performing slightly worse than top- $p$ and our methods.
+
+# 5.2 Human evaluation of long-document suffix plausibility
+
+We now study whether $\eta$ -sampling leads to more coherent long-document generations than top- $p$ sampling. We omit typical decoding since it does not seem to outperform top- $p$ on MAUVE. Considering that holistic evaluation of long texts is difficult for humans (Ippolito et al., 2020) we design a human study to evaluate long document plausibility: given a shared document prefix, which method's generated suffix (omitting the middle) is more reasonably from the same document? This new evaluation avoids forcing humans to keep up to 1024 words in working memory.
+
+Setting. For each of top- $p$ and $\eta$ -sampling, we sample from GPT-2 large with MAUVE-maximizing hyperparameters, conditioned on each prefix of 35 subword tokens from the WebText validation set. From this set we filter to prefixes for
+
+
+Figure 3: Top- $p$ sampling aggressively truncates low-entropy distributions and $\epsilon$ -sampling aggressively truncates high-entropy distributions, while $\eta$ -sampling strikes a balance.
+
+
+
+
+
+which the reference and both generated documents are at least 900 tokens long and pass manual filter for quality.[11] 59 workers from the United States were recruited on Amazon Mechanical Turk with the Master qualification, and paid $1 per task with an expected time of 3.5 to 4 minutes. We run two studies.
+
+Study 1. We show a human evaluator the 35-token prefix, as well as the last 70 tokens of two documents (of the 3 possible). The evaluator is asked to judge which of the two suffixes may more reasonably be from the same document as the prefix, or to note that both are too bad to judge. For each of the three possible pairings of top- $p$ , $\eta$ -sampling, and reference document, we elicit 100 human judgments over 100 prefixes.
+
+Study 2. We ran a second study just comparing top- $p$ to $\eta$ -sampling to allow for larger $n$ , since we had finite resources and the result that both methods generate text worse than humans is not at issue. To test whether the effect size observed was in part due to forcing evaluators to pick one of the two methods, in this study we allow human evaluators to mark that both suffixes are of equal quality.
+
+Results. The results are reported in Table 3. In Study 1, we find that human document generations are preferred over top- $p$ and $\eta$ -sampling at roughly the same rate, while $\eta$ -sampling is preferred over top- $p$ (53% to 40%). In Study 2, we find that $\eta$ -sampling is significantly preferred more frequently than top- $p$ with a Wilcoxon paired test $(p = 0.0138)$ at the same effect size.
+
+# 5.3 Entropy analysis
+
+We now want to build a deeper understanding of the characteristics of the algorithms: what parts of the
+
+
Truncation \ Model
Repetition Percent
sm
med
lg
xl
top-p
54%
61%
47%
27%
typical
51%
61%
56%
37%
ε-sampling (ours)
28%
37%
23%
11%
η-sampling (ours)
37%
40%
26%
12%
+
+Table 4: Table showing repetition-degeneration rates for each method in an adversarial setting; lower is better.
+
+distribution tend to get cut by each method? In our first analysis, we study whether each method has a tendency to aggressively truncate distributions of a given entropy. A low-entropy distribution might be given by the prompt Barack Obama went to the White . . . , while a high-entropy distribution might be given by the prompt My name is . . .
+
+Setting. For a range of hyperparameters, we plot the average amount of truncation across all contexts against the retained entropy for an entropy range. We use total variation to measure average truncation, $\mathbb{E}_{x_{14
+
+# 5.5 Studying individual distributions
+
+We now study specific truncation decisions made by each algorithm, to provide more detailed behavioral insights. We construct prompts and observe the truncation behavior of each algorithm on the resulting distribution, treating each as a CheckList-like unit test (Ribeiro et al., 2020).
+
+Setting. We take the GPT-2 large model, provide it with each of 6 prompts, and using the MAUVE-maximizing hyperparameters we found in Section 5.1, truncate the resulting distribution. The prompts are shown in Figure 4. For this experiment we only study top- $p$ , $\epsilon$ , and $\eta$ -sampling.
+
+Results. The results are visualized in Figure 4. We use two low-entropy prompts, My name... and Donald... and in both cases, find that top- $p$ decoding only allows a single word continuation. Top- $p$
+
+more useful than $n$ -gram repetition statistics, as, e.g., repetition can involve small variation.
+
+This is likely because the MAUVE-maximizing hyperparameter for typical sampling (e.g., 0.92 for GPT-2 large) is generally more conservative than that for top- $p$ (e.g., 0.95.)
+
+can only generate is after My name, and Trump after Donald, which we find undesirable; we would like our truncation to allow, e.g., multiple Donalds to be discussed. For a prompt with the phrase The feeling! repeated multiple times (as one might say euphorically), top- $p$ can only continue the repetitive pattern, unlike $\epsilon$ and $\eta$ -sampling. For a prompt suggesting specification of capitals of countries, we find that top- $p$ only allows the correct capital name, whereas $\eta$ -sampling and $\epsilon$ -sampling allow different continuations which do not follow the in-context trend, suggesting that top- $p$ may be better for generating, e.g., answers to questions. We use two high-entropy prompts, The... and My name is..., finding that $\eta$ -sampling and top- $p$ sampling allow a range of possibilities, unlike $\epsilon$ -sampling. The behavior of $\epsilon$ -sampling in allowing fewer words in higher entropy conditional distributions is a clear failure.
+
+# 6 Related Work
+
+Stochastic decoding algorithms. Stochastic decoding algorithms produce sequences from a model and involve randomness. The simplest is sampling, sometimes called ancestral sampling, (Bishop, 2006), which generates a sample from the model. Some stochastic decoding methods attempt to find high-likelihood sequences instead of attempting to recreate the true distribution, like stochastic beam search (Kool et al., 2019) and conditional poisson stochastic beam search (Meister et al., 2021a). Truncation sampling algorithms, like top- $k$ (Fan et al., 2018), top- $p$ (Holtzman et al., 2020), and Mirostat (Basu et al., 2021), are intended to improve quality but keep variety. Welleck et al. (2020) found that truncation algorithms can lead to nonzero mass assigned to infinite sequences.
+
+KL-divergence, language models, smoothing. The most famous example of methods that do not cover every mode is GANs (Goodfellow et al., 2014). In language modeling, some have pointed to the inability of the softmax function to assign 0 probability to any category as a deficiency and proposed sparse alternatives (Martins and Astudillo, 2016; Peters et al., 2019; Tezekbayev et al., 2021). This intuition is akin to ours, as is loss truncation (Kang and Hashimoto, 2020), which keeps rare events from incurring arbitrarily high loss. Mohri and Roark (2006) attempt to identify structural zeros in the distribution of language when inducing probabilistic context-free grammars.
+
+High-entropy language generation & evaluation. Evaluation of open-ended generation of natural language is difficult; one must evaluate both the quality of samples and the diversity. Quality is hard to measure in high-entropy generation, and is often not correlated with model probability (Hashimoto et al., 2019; Meister et al., 2022b). An emergent line of work connects human notions of quality, and human generative tendencies, with the uniform information density hypothesis (e.g., leading to typical decoding) (Wei et al., 2021; Meister et al., 2021b). Both Meister and Cotterell (2021) and Pillutla et al. (2021) directly estimate whether model samples' statistics match those of natural language. Nadeem et al. (2020) study properties held by successful strategies for reallocating mass away from the tail of LM distributions.
+
+# 7 Conclusion
+
+We've framed the class of truncation sampling algorithms as performing desmoothing, an insight that led to principles for how truncation should be done to recover the training distribution, a new truncation sampling algorithm, and evaluations that show the deficiencies of existing algorithms. We find the tendency of top- $p$ decoding to over-truncate low-entropy distributions to be particularly surprising. We aim for these insights, and the evaluations we use, to drive further research in understanding and improving how we generate from neural language models.
+
+# Acknowledgements
+
+The authors would like to thank John Thickstun, Rishi Bommasani, Kaitlyn Zhou, Will Merrill, Nelson Liu, and Tatsunori Hashimoto for helpful discussions on this work, and to the reviewers for clarifying feedback. JH was supported by an NSF Graduate Research Fellowship under grant number DGE-1656518. We gratefully acknowledge the support of a PECASE Award.
+
+# 8 Limitations
+
+With the analysis we've done, we believe it to be very difficult to derive an understanding of all the sequence-level effects truncation sampling algorithms (including ours) have: what kinds of sequences are we disallowing? What types, or sources of language are being (unknowingly) disallowed? Beyond this, we've only tested our algorithms on English language models; the condi
+
+tional distributions of languages with rich morphology likely have different properties (especially with subword models).
+
+# 9 Ethics Statement
+
+Any work to improve generative models of text comes with ethical concerns surrounding negative use cases of text generation including hate speech and misinformation. While our algorithm does improve long text generation, we hope it also provides insight into the unintended and until-now unknown consequences of existing truncation sampling algorithms (including top- $p$ ). Algorithms like ours, which reallocate probability mass from the least likely elements of a distribution, have a particular risk of harm in removing the ability of models to talk about topics or names that are already rare. Concurrent work finds that the choice of stochastic decoding algorithm affects measured fairness metrics in open-ended generation (Dhamala et al., 2022). Our framing, and the hope for future work, is to use truncation to recover something as close to the training distribution as possible; of course, the training distribution must then be chosen with care. Generating a word due to smoothing (noise) would likely mean that subsequently generated words about that topic would be low-quality, which is also undesirable.
+
+# References
+
+Sourya Basu, Govardana Sachitanandam Ramachandran, Nitish Shirish Keskar, and Lav R. Varshney. 2021. MIROSTAT: A neural text decoding algorithm that directly controls perplexity. In International Conference on Learning Representations.
+Christopher M Bishop. 2006. Pattern recognition and machine learning. Springer.
+Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.
+Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
+Kenneth W Church and William A Gale. 1991. A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of
+
+English bigrams. Computer Speech & Language, 5(1):19-54.
+Jwala Dhamala, Varun Kumar, Rahul Gupta, Kai-Wei Chang, and Aram Galstyan. 2022. An analysis of the effects of decoding algorithms on fairness in open-ended language generation. In 2018 IEEE Spoken Language Technology Workshop (SLT). IEEE.
+Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. ELI5: long form question answering. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 3558-3567. Association for Computational Linguistics.
+Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898, Melbourne, Australia. Association for Computational Linguistics.
+Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2021. The Pile: An 800gb dataset of diverse text for language modeling. CoRR, abs/2101.00027.
+Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. Advances in neural information processing systems, 27.
+Tatsunori B Hashimoto, Hugh Zhang, and Percy Liang. 2019. Unifying human and statistical evaluation for natural language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1689-1701.
+Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations.
+Daphne Ippolito, Daniel Duckworth, Chris Callison-Burch, and Douglas Eck. 2020. Automatic detection of generated text is easiest when humans are fooled. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1808-1822, Online. Association for Computational Linguistics.
+Daniel Jurafsky and James H. Martin. 2000. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, 1st edition. Prentice Hall PTR, USA.
+
+Daniel Kang and Tatsunori B. Hashimoto. 2020. Improved natural language generation via loss truncation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 718-731, Online. Association for Computational Linguistics.
+S. Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech, and Signal Processing, 35(3):400-401.
+Wouter Kool, Herke Van Hoof, and Max Welling. 2019. Stochastic beams and where to find them: The Gumbel-top-k trick for sampling sequences without replacement. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 3499-3508. PMLR.
+Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3730-3740, Hong Kong, China. Association for Computational Linguistics.
+Andre Martins and Ramon Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification. In International conference on machine learning, pages 1614-1623. PMLR.
+Clara Meister, Afra Amini, Tim Vieira, and Ryan Cotterell. 2021a. Conditional Poisson stochastic beams. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 664-681, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Clara Meister and Ryan Cotterell. 2021. Language model evaluation beyond perplexity. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5328-5339, Online. Association for Computational Linguistics.
+Clara Meister, Tiago Pimentel, Patrick Haller, Lena Jäger, Ryan Cotterell, and Roger Levy. 2021b. Revisiting the Uniform Information Density hypothesis. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 963-980, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Clara Meister, Tiago Pimentel, Gian Wiher, and Ryan Cotterell. 2022a. Typical decoding for natural language generation. CoRR, abs/2202.00666.
+Clara Meister, Gian Wiher, Tiago Pimentel, and Ryan Cotterell. 2022b. On the probability-quality paradox in language generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 36-45,
+
+Dublin, Ireland. Association for Computational Linguistics.
+Mehryar Mohri and Brian Roark. 2006. Probabilistic context-free grammar induction based on structural zeros. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 312-319, New York City, USA. Association for Computational Linguistics.
+Moin Nadeem, Tianxing He, Kyunghyun Cho, and James Glass. 2020. A systematic characterization of sampling algorithms for open-ended language generation. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 334-346.
+Ian Osband, Zheng Wen, Seyed Mohammad Asghari, Vikranth Dwaracherla, Morteza Ibrahimi, Xiuyuan Lu, and Benjamin Van Roy. 2022. Epistemic neural networks. arXiv preprint arXiv:2107.08924.
+Ben Peters, Vlad Niculae, and André FT Martins. 2019. Sparse sequence-to-sequence models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1504-1519.
+Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaid Harchaoui. 2021. Mauve: Measuring the gap between neural text and human text using divergence frontiers. In Advances in Neural Information Processing Systems, volume 34, pages 4816-4828. Curran Associates, Inc.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
+Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902-4912, Online. Association for Computational Linguistics.
+Maxat Tezekbayev, Vassilina Nikoulina, Matthias Galle, and Zhenisbek Assylbekov. 2021. Speeding up entmax. CoRR, abs/2111.06832.
+Jason Wei, Clara Meister, and Ryan Cotterell. 2021. A cognitive regularizer for language modeling. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5191-5202, Online. Association for Computational Linguistics.
+Sean Welleck, Ilia Kulikov, Jaedeok Kim, Richard Yuanzhe Pang, and Kyunghyun Cho. 2020. Consistency of a recurrent language model
+
+with respect to incomplete decoding. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5553-5568.
+
+# A Notes
+
+# A.1 Support-weighted total variation
+
+We introduce new notation just for this section, to present support-weighted total variation in generality. Recall that the total variation distance between discrete distribution $R$ over space $\mathcal{V}$ and discrete distribution $U_{t}$ , the result of truncation with allowed set $\mathcal{A} \subseteq \mathcal{V}$ from a discrete distribution $U$ over $\mathcal{V}$ , is
+
+$$
+\sum_ {x \in \mathcal {V}} | R (x) - U _ {t} (x) |. \tag {9}
+$$
+
+Denoting the support of $R$ as $S_{R}$ , we can partition $\mathcal{V}$ into four sets:
+
+$$
+\begin{array}{l} S _ {R} \cap \bar {\mathcal {A}} \\ \overline {{S _ {R}}} \cap \mathcal {A} \\ S _ {R} \cap \mathcal {A} \\ \overline {{S _ {R}}} \cap \bar {\mathcal {A}} \tag {10} \\ \end{array}
+$$
+
+We split the sum of the total variation distance into these four terms.
+
+The first represents the words that are in the support of $R$ but not in the allowed set of $U_{t}$ :
+
+$$
+\sum_ {S _ {R} \cap \bar {\mathcal {A}}} | R (x) - U _ {t} (x) | = \sum_ {S _ {R} \cap \bar {\mathcal {A}}} R (x), \tag {11}
+$$
+
+since $U_{t}(x) = 0$ if $x \notin \mathcal{X}$ . This exactly represents the total probability mass that was lost from $R$ . The second term represents the words that are not in the support of $R$ but were allowed:
+
+$$
+\sum_ {\overline {{S _ {R}}} \cap \mathcal {A}} | R (x) - U _ {t} (x) | = \sum_ {\overline {{S _ {R}}} \cap \mathcal {A}} U _ {t} (x), \tag {12}
+$$
+
+since $R(x) = 0$ if $x \notin S_R$ . This exactly represents the total probability that we sample a word from $U_t$ that has zero probability under $R$ (and so we move off the support of $R$ for future generation.) The third term is the words that were correctly allowed:
+
+$$
+\sum_ {S _ {R} \cap \mathcal {A}} | R (x) - U _ {t} (x) |. \tag {13}
+$$
+
+In this case, $U_{t}(x)$ may be an under or overestimate of $R(x)$ . The last term is the words that were correctly truncated:
+
+$$
+\sum_ {\overline {{S _ {R}}} \cap \bar {\mathcal {A}}} | R (x) - U _ {t} (x) | = \sum_ {\overline {{S _ {R}}} \cap \bar {\mathcal {A}}} | 0 - 0 | \tag {14}
+$$
+
+which is identically zero.
+
+To form our support-weighted total variation metric, we took the first two terms, which are interpretable and each exactly specifies one of the two desiderata from a truncation algorithm: maintaining the variety of $R$ , and not generating a word that $R$ wouldn't generate. However, in different use cases, one or the other may be more crucial; hence we give each its own hyperparameter, $\beta_{\mathrm{var}}$ and $\beta_{\mathrm{sup}}$ , to arrive at our metric,
+
+$$
+\begin{array}{l} \operatorname {T V} _ {S} (R, U _ {t}) = \beta_ {\text {v a r}} \sum_ {x \in S _ {R} \cap \bar {\mathcal {A}}} R (x) \\ + \beta_ {\sup } \sum_ {x \in \overline {{S _ {R}}} \cap \mathcal {A}} U _ {t} (x). \tag {15} \\ \end{array}
+$$
+
+# A.2 Analysis of $\eta$ -sampling
+
+The purpose of this analysis is to show that if one assumes our smoothing model, then an $\eta$ -sampling approximates an algorithm that avoids sampling from outside the support of the true distribution while minimilly truncating the distribution.
+
+Consider a conditional distribution from a language model under our model, $P_{\theta}(X_i \mid x_{ \eta^*\}$ , where $\eta^*$ is defined as
+
+$$
+\eta^ {*} = \min \left(\frac {(1 - \bar {\lambda}) (1 + \delta)}{| \mathcal {V} |}, \alpha \exp (- h _ {x < i})\right) \}. \tag {16}
+$$
+
+In this case, it is guaranteed that $x \in S_{x_{ 0$ when computing the allowed set, then under our model, there can be a conditional distribution such that $x \notin S_{x_{ \eta'$ . Such an $x$ would be incorrectly allowed.
+
+Similarly, if one sets a higher probability threshold $\eta' = \eta^* + \psi$ for some $\psi > 0$ when computing the allowed set, then under the model, there can be a conditional distribution such that $x \in S_{x < i}^*$ , and $P_{\theta}(x \mid x_{
Method\Model
small
med
large
XL
Top-p
0.9
0.89
0.95
0.95
Typical
0.9
0.9
0.92
0.92
ε-sampling
0.0006
0.0009
0.0003
0.0003
η-sampling
0.002
0.0006
0.0006
0.0003
+
+Table 5: Best-performing hyperparameters according to MAUVE from experiments in Section 5.1.
+
+the true distribution without unnecessarily truncating too much. We now consider allowed set defined by algorithms other than probability thresholds. Let the allowed set defined according to the $\eta^{*}$ threshold be $\mathcal{A}_{x_{< i}}^{*}$ . Consider an allowed set $\mathcal{A}_{x_{< i}}$ defined by another truncation sampling algorithm (which may not define it via a probability threshold like. If $\mathcal{A}_{x_{< i}} = \mathcal{A}_{x_{< i}}^{*}$ , then the two algorithms are indistinguishable for this prefix. Otherwise, if $x\in \mathcal{A}_{x_{< i}}$ and $x\notin \mathcal{A}_{x_{< i}}^{*}$ , then $x$ may be outside the support of the true distribution, and should have been truncated. And if $x\in \mathcal{A}_{x_{< i}}^{*}$ and $x\notin \mathcal{A}_{x_{< i}}$ , then $x$ was unnecessarily truncated.
+
+When using our $\eta$ -sampling algorithm, we neither know the true hyperparameters, nor do we have access to the true distribution conditional entropy, so $\eta$ -sampling only approximates this. Specifically, we set the hyperparameters of $\eta$ -sampling via search on the task of interest, and we use the observed LM entropy instead of the true distribution entropy in computing the relative probability threshold. In practice, one wants to set a threshold of truncation based on the needs of the task and the tolerance for error, so a threshold that perfectly excludes words outside the true distribution support may not be optimal for the task of interest anyway.
+
+# B More Experimental Details
+
+# B.1 Hyperparameters
+
+The MAUVE-maximizing hyperparameters for each truncation sampling algorithm for each model are provided in Table 5.
+
+# B.2 5-gram model
+
+For our small demo demonstrating the behavior of smoothed $n$ -gram models, we trained a 5-gram model on 10,000 documents from The Pile (Gao et al., 2021). We smoothed the model with the uniform distribution.
+
+# B.3 Amazon Mechanical Turk Details
+
+To provide more transparency into our human studies, we provide the form that was shown to human
+
+annotators for both of our studies. The (similar) interfaces shown for Study 1 and Study 2 are shown in Figure 5 and Figure 6, respectively. We randomize the ordering of presentation of the methods' generations (note that the forms say "Option 1" and "Option 2").
+
+Of the 59 unique workers, 44 unique workers participated in study 1, and 36 unique workers participated in study 2.
+
+We follow Pillutla et al. (2021) in manually filtering the WebText prompts that go into our human study. Webtext is noisy, and not all prompts are clearly natural language. Our manual filtering of prompts led to 36 rejected prompts (of 146 considered) due to quality for study 1. Our manual filtering of prompts led to 100 rejected prompts (of 402 considered) due to quality for study 2. This is compared to rejecting 3169 of 5000 prompts due to quality in the original MAUVE paper; we attempted to minimally filter while guaranteeing that prompts were natural language. Our kept and filtered prompts are available in our codebase.
+
+- You are given the beginning of a document that appeared on the web.
+- Two options are given for possible passages that are intended to be from the same document, but occur long after the beginning (i.e., hundreds of words are omitted between the beginning and the option passage.)
+- Your task is to pick the passage option that seems more reasonably could be in the same document as the beginning paragraph.
+- It is possible that both passage options look comparable, or both are very bad. However, you should try to discern carefully and pick the better one between the two.
+Hn n t u t u t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t t .
+
+
Beginning of document:
+${prefix}
Option 1 for passage from same document:
+${option_1}
Option 2 for passage from same document:
+${option_2}
+
+
Option 1
Option 2
○ Option 1 more plausibly comes from the same document as the beginning paragraph.
○ Option 2 more plausibly comes from the same document as the beginning paragraph.
+
+I can't make sense of either option.
+
+Submit
+
+Figure 5: The interface shown to human annotators for Study 1.
+
+you are given the beginning of a document that appeared on the web.
+- Two options are given for possible passages that are intended to be from the same document, but occur long after the beginning (i.e., hundreds of words are omitted between the beginning and the option passage.)
+- Your task to pick the passage option that seems more plausibly could be in the same document as the beginning paragraph. It is possible that both passage options look comparable to both ones you had. You should discern carefully and determine which is
+- It is possible that both passage options look comparable, or both are very bad. You should discern carefully and determine which is better, but can mark trial they're equally plausible or trial you can't make sense of either option
+When judging the plausibility of the passage options, please do not consider the fact that each passage may start or end awkwardly (i.e., in the middle of a word) but please do consider whether it seems to be about the same or similar subjects as the beginning paragraph, mentions the same or similar people or things as the beginning paragraph, has the same or similar style as the beginning paragraph, and makes sense within itself using your best judgement.
+
+
Beginning of document:
+${prefix}
Option 1 for passage from same document:
+${option_1}
Option 2 for passage from same document:
+${option_2}
+
+
Option 1
Both equally plausible
Option 2
○ Option 1 more plausibly comes from the same document as the beginning paragraph.
○ They're equally plausible. (Avoid marking this if possible)
○ Option 2 more plausibly comes from the same document as the beginning paragraph.
+
+$\bigcirc$ I can't make sense of either option.
+
+Submit
+
+Figure 6: The interface shown to human annotators for Study 2.
\ No newline at end of file
diff --git a/truncationsamplingaslanguagemodeldesmoothing/images.zip b/truncationsamplingaslanguagemodeldesmoothing/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9a06958a452e000edf6fc9bea515aee252a6e83b
--- /dev/null
+++ b/truncationsamplingaslanguagemodeldesmoothing/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c8baf1937d900e2f3ba60b969719f843a63dd07cfcdcd9754e0c20f3b9966deb
+size 515449
diff --git a/truncationsamplingaslanguagemodeldesmoothing/layout.json b/truncationsamplingaslanguagemodeldesmoothing/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..336c2250bfc36c48be8783ab4123fa710076c34d
--- /dev/null
+++ b/truncationsamplingaslanguagemodeldesmoothing/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:54a5223f41cdc596a60b0f0d43bf4b788f219f981772e38fde781b1b7c913dff
+size 700024
diff --git a/turningfixedtoadaptiveintegratingpostevaluationintosimultaneousmachinetranslation/2195959f-0ddd-40fc-a0d3-f606f3e8dd0c_content_list.json b/turningfixedtoadaptiveintegratingpostevaluationintosimultaneousmachinetranslation/2195959f-0ddd-40fc-a0d3-f606f3e8dd0c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e7634dbc317a89c7a52a4433ff21399a728d80de
--- /dev/null
+++ b/turningfixedtoadaptiveintegratingpostevaluationintosimultaneousmachinetranslation/2195959f-0ddd-40fc-a0d3-f606f3e8dd0c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:93baaabd6fb7550ab3eb7b79531dd5e1f6c36fe5cbd95cc819212c43d78c4ce2
+size 89324
diff --git a/turningfixedtoadaptiveintegratingpostevaluationintosimultaneousmachinetranslation/2195959f-0ddd-40fc-a0d3-f606f3e8dd0c_model.json b/turningfixedtoadaptiveintegratingpostevaluationintosimultaneousmachinetranslation/2195959f-0ddd-40fc-a0d3-f606f3e8dd0c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7a209572df3437c506025d65ade6b6a8374deb38
--- /dev/null
+++ b/turningfixedtoadaptiveintegratingpostevaluationintosimultaneousmachinetranslation/2195959f-0ddd-40fc-a0d3-f606f3e8dd0c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7a83a7bd11d5dbf40ee800073e4caf6d784c2da4fb2bc20953ac16304c03d47d
+size 106149
diff --git a/turningfixedtoadaptiveintegratingpostevaluationintosimultaneousmachinetranslation/2195959f-0ddd-40fc-a0d3-f606f3e8dd0c_origin.pdf b/turningfixedtoadaptiveintegratingpostevaluationintosimultaneousmachinetranslation/2195959f-0ddd-40fc-a0d3-f606f3e8dd0c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b268d4165009adbecb829b6690a8c5f613ddfc4c
--- /dev/null
+++ b/turningfixedtoadaptiveintegratingpostevaluationintosimultaneousmachinetranslation/2195959f-0ddd-40fc-a0d3-f606f3e8dd0c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:49f09b0db7198ab75d375e5b7190d88e6fc73a9fadde4cdd53a682f34cf40e35
+size 3052766
diff --git a/turningfixedtoadaptiveintegratingpostevaluationintosimultaneousmachinetranslation/full.md b/turningfixedtoadaptiveintegratingpostevaluationintosimultaneousmachinetranslation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..ae5dc23971311bdcbd504e5de33916961d028e78
--- /dev/null
+++ b/turningfixedtoadaptiveintegratingpostevaluationintosimultaneousmachinetranslation/full.md
@@ -0,0 +1,388 @@
+# Turning Fixed to Adaptive: Integrating Post-Evaluation into Simultaneous Machine Translation
+
+Shoutao Guo $^{1,2}$ , Shaolei Zhang $^{1,2}$ , Yang Feng $^{1,2*}$
+
+1Key Laboratory of Intelligent Information Processing
+
+Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS)
+
+$^{2}$ University of Chinese Academy of Sciences, Beijing, China
+
+{guoshoutao22z, zhangshaolei20z, fengyang}@ict.ac.cn
+
+# Abstract
+
+Simultaneous machine translation (SiMT) starts its translation before reading the whole source sentence and employs either fixed or adaptive policy to generate the target sentence. Compared to the fixed policy, the adaptive policy achieves better latency-quality tradeoffs by adopting a flexible translation policy. If the policy can evaluate rationality before taking action, the probability of incorrect actions will also decrease. However, previous methods lack evaluation of actions before taking them. In this paper, we propose a method of performing the adaptive policy via integrating post-evaluation into the fixed policy. Specifically, whenever a candidate token is generated, our model will evaluate the rationality of the next action by measuring the change in the source content. Our model will then take different actions based on the evaluation results. Experiments on three translation tasks show that our method can exceed strong baselines under all latency1.
+
+# 1 Introduction
+
+Simultaneous machine translation (SiMT) (Gu et al., 2017; Ma et al., 2019; Arivazhagan et al., 2019; Ma et al., 2020) starts translation before reading the whole source sentence. It seeks to achieve good latency-quality tradeoffs and is suitable for various scenarios with different latency tolerances. Compared to full-sentence machine translation, SiMT is more challenging because it lacks partial source content in translation (Zhang and Feng, 2022d) and needs to decide on translation policy additionally.
+
+The translation policy in SiMT directs the model to decide when to take READ (i.e., read the next source token) or WRITE (i.e., output the generated token) action, so as to ensure that the model has appropriate source content to translate the target
+
+
+Figure 1: The change in translation degree of source tokens after generating a candidate token, and the READ/WRITE action is taken accordingly.
+
+tokens. Because READ and WRITE actions are often decided based on available source tokens and generated target tokens, it is difficult to guarantee their accuracy. Therefore, if the SiMT model can evaluate the rationality of actions with the help of the current generated candidate token, it can reduce the probability of taking incorrect actions.
+
+However, the previous methods, including fixed and adaptive policies, lack evaluation before taking the next action. For fixed policy (Ma et al., 2019; Elbayad et al., 2020; Zhang et al., 2021; Zhang and Feng, 2021c), the model generates translation according to the predefined translation rules. Although it only relies on simple training methods, it cannot make full use of the context to decide an appropriate translation policy. For adaptive policy (Gu et al., 2017; Arivazhagan et al., 2019; Ma et al., 2020; Zhang et al., 2022), the model can obtain better translation performance. But it needs complicated training methods to obtain translation policy and takes action immediately after making decisions, which usually does not guarantee the accuracy of actions.
+
+Therefore, we attempt to explore some factors from the translation to reflect whether the action is correct, thereby introducing evaluation into trans
+
+lation policy. The goal of translation is to convert sentences from the source language to the target language (Mujadia and Sharma, 2021), so the source and target sentences should contain the same semantics (i.e., global equivalence). To ensure the faithfulness of translation (Weng et al., 2020), the source content that has already been translated should be semantically equivalent to the previously generated target tokens at each step (i.e., partial equivalence) (Zhang and Feng, 2022c). Furthermore, by comparing the changes between adjacent steps, the increment of the source content being translated should be semantically equivalent to the current generated token (i.e., incremental equivalence). Therefore, the rationality of the generated target token can be reflected by the increment of the source content being translated between adjacent steps, which can be used to evaluate the READ and WRITE actions.
+
+In this paper, we propose a method of performing the adaptive policy by integrating post-evaluation into the fixed policy, which directs the model to take READ or WRITE action based on the evaluation results. Using partial equivalence, our model can recognize the translation degree of source tokens (i.e., the degree to which the source token has been translated), which represents how much the source content is translated at each step. Then naturally, by virtue of incremental equivalence, the increment of translated source content can be regarded as the change in the translation degree of available source tokens. Therefore, we can evaluate the action by measuring the change in translation degree. As shown in Figure 1, if the translation degree has significant changes after generating a candidate token, we think that the current generated token obtains enough source content, and thus WRITE action should be taken. Otherwise, the model should continue to take READ actions to wait for the arrival of the required source tokens. Experiments on WMT15 De→En and IWSLT15 En→Vi translation tasks show that our method can exceed strong baselines under all latency.
+
+# 2 Background
+
+Transformer (Vaswani et al., 2017), which consists of encoder and decoder, is the most widely used neural machine translation model. Given a source sentence $\mathbf{x} = (x_{1},\dots,x_{I})$ , the encoder maps it into a sequence of hidden states $\mathbf{z} = (z_{1},\dots,z_{I})$ . The decoder generates target hidden states $\mathbf{h} =$
+
+$(h_1,\dots,h_M)$ and predicts the target sentence $\mathbf{y} = (y_{1},\dots,y_{M})$ based on $\mathbf{z}$ autoregressively.
+
+Our method is based on wait- $k$ policy (Ma et al., 2019) and Capsule Networks (Hinton et al., 2011) with Guided Dynamic Routing (Zheng et al., 2019b), so we briefly introduce them.
+
+# 2.1 Wait- $k$ Policy
+
+Wait- $k$ policy, which belongs to fixed policy, takes $k$ READ actions first and then takes READ and WRITE actions alternately. Define a monotonic non-decreasing function $g(t)$ , which represents the number of available source tokens when translating target token $y_{t}$ . For wait- $k$ policy, $g(t)$ can be calculated as:
+
+$$
+g (t; k) = \min \{k + t - 1, I \}, \tag {1}
+$$
+
+where $I$ is the length of the source sentence.
+
+To avoid the recalculation of the encoder hidden states when a new source token is read, unidirectional encoder (Elbayad et al., 2020) is proposed to make each source token only attend to its previous tokens. Besides, multi-path method (Elbayad et al., 2020) optimizes the model by sampling $k$ uniformly during training and makes a unified model obtain the translation performance comparable to wait- $k$ policy under all latency.
+
+# 2.2 Capsule Networks with Guided Dynamic Routing
+
+Guided Dynamic Routing (GDR) is a variant of routing-by-agreement mechanism (Sabour et al., 2017) in Capsule Networks and makes input capsules route to corresponding output capsules driven by the decoding state at each step. In detail, encoder hidden states $\mathbf{z}$ are regarded as a sequence of input capsules, and a layer of output capsules is added to the top of the encoder to model different categories of source information. The decoding state then directs each input capsule to find its affiliation to each output capsule at each step, thereby solving the problem of assigning source tokens to different categories.
+
+# 3 The Proposed Method
+
+The architecture of our method is shown in Figure 2. Our method first guides the model to recognize the translation degree of available source tokens based on partial equivalence during training via the introduced GDR module. Then based on the incremental equivalence between adjacent steps, our
+
+
+Figure 2: The architecture of our method. The R/W prediction module obtains the translation degree of the available source tokens and evaluates the next action based on the change in translation degree.
+
+method utilizes the changes in translation degree to post-evaluate the rationality of the READ and WRITE actions and accordingly make corrections, thereby performing an adaptive policy during inference. Besides, to enhance the robustness of the model in recognizing the translation degree during inference, our method applies a disturbed-path training based on the wait- $k$ policy, which adds some disturbance to the translation policy during training. The details are introduced in the following sections in order.
+
+# 3.1 Recognizing the Translation Degree
+
+As mentioned above, the translation degree represents the degree to which the source token has been translated and is the prerequisite of our method. Therefore, we introduce Capsule Networks with GDR to model the translation degree, which is guided by our proposed two constraints according to partial equivalence during training.
+
+Translation Degree We define the translation degree of all source tokens at step $t$ as $\mathbf{d}^{(t)} = (d_1^{(t)},\dots,d_I^{(t)})$ . To obtain the translation degree, we need to utilize the ability of Capsule Networks with GDR to assign the source tokens to different cate
+
+gories. Assume that there are $J + N$ output capsules modeling available source information that has already been translated and has not yet been translated, among which there are $J$ translated capsules $\Phi^T = (\Phi_1,\dots,\Phi_J)$ and $N$ untranslated capsules $\Phi^U = (\Phi_{J + 1},\dots,\Phi_{J + N})$ , respectively. The encoder hidden states $\mathbf{z}$ are regarded as input capsules. To determine how much of $z_{i}$ needs to be sent to $\Phi_j$ at step $t$ , the assignment probability $c_{ij}^{(t)}$ in SiMT is modified as:
+
+$$
+c _ {i j} ^ {(t)} = \left\{ \begin{array}{c c} \frac {\exp b _ {i j} ^ {(t)}}{\sum_ {l} \exp b _ {i l} ^ {(t)}} & \text {i f} i \leq g (t), \\ 0 & \text {o t h e r w i s e} \end{array} \right. \tag {2}
+$$
+
+where $b_{ij}^{(t)}$ measures the cumulative similarity between $z_{i}$ and $\Phi_j$ . Then $c_{ij}^{(t)}$ is updated iteratively driven by the decoding state and is seen as the affiliation of $z_{i}$ belonging to $\Phi_j$ after the last iteration. For more details about Capsule Networks with GDR, please refer to Zheng et al. (2019b). On this basis, the translation degree of $x_{i}$ is calculated by aggregating the assignment probability of routing to the translated capsules at step $t$ :
+
+$$
+d _ {i} ^ {(t)} = \sum_ {j = 1} ^ {J} c _ {i j} ^ {(t)}. \tag {3}
+$$
+
+Segment Constraint To ensure that the model can recognize the translation degree of source tokens, the model requires additional guidance. According to partial equivalence, the translated source content should be semantically equivalent to the generated target tokens. On the contrary, the untranslated source content and unread source tokens should be semantically equivalent to target tokens not generated. So we introduce mean square error to induce the learning of output capsules:
+
+$$
+\begin{array}{l} \mathcal {L} _ {\mathrm {S}} = \frac {1}{M} \sum_ {t = 1} ^ {M} \left(\left\| \boldsymbol {\Phi} _ {t} ^ {T} - \mathbf {W} ^ {T} \mathbf {H} _ {t} ^ {T} \right\| ^ {2} \right. \tag {4} \\ + \left\| \boldsymbol {\Phi} _ {t} ^ {U} + \mathbf {W} _ {e} ^ {U} \mathbf {Z} _ {t} - \mathbf {W} _ {d} ^ {U} \mathbf {H} _ {t} ^ {U} \right\| ^ {2}) \\ \end{array}
+$$
+
+where $\mathbf{W}^T$ , $\mathbf{W}_e^U$ and $\mathbf{W}_d^U$ are learnable parameters. $\mathbf{H}_t^T$ and $\mathbf{H}_t^U$ are the averages of hidden states of the generated target tokens and target tokens not generated, which are calculated respectively:
+
+$$
+\mathbf {H} _ {t} ^ {T} = \frac {1}{t - 1} \sum_ {\tau = 1} ^ {t - 1} h _ {\tau}, \tag {5}
+$$
+
+$$
+\mathbf {H} _ {t} ^ {U} = \frac {1}{M - t + 1} \sum_ {\tau = t} ^ {M} h _ {\tau}. \tag {6}
+$$
+
+where $M$ is the length of the target sentence. $\mathbf{Z}_t$ is the average of hidden states of unread source tokens at step $t$ :
+
+$$
+\mathbf {Z} _ {t} = \frac {1}{I - g (t)} \sum_ {\tau = g (t) + 1} ^ {I} z _ {\tau}. \tag {7}
+$$
+
+$\Phi_t^T$ and $\Phi_t^U$ are the translated and untranslated source information at step $t$ , respectively.
+
+Token Constraint To recognize the changes in translation degree more accurately, we propose token constraint according to incremental equivalence. It encourages the translated capsules to predict the generated tokens and combines translated and untranslated capsules to predict the available source tokens at each step. It can be calculated as:
+
+$$
+\begin{array}{l} \mathcal {L} _ {\mathrm {T}} = - \frac {1}{M} \sum_ {t = 1} ^ {M} \left[ \log p _ {d} \left(\mathbf {y} _ {< t} \mid \boldsymbol {\Phi} _ {t} ^ {T}\right) \right. \tag {8} \\ + \log p _ {e} (\mathbf {x} _ {\leq g (t)} | \boldsymbol {\Phi} _ {t} ^ {T}; \boldsymbol {\Phi} _ {t} ^ {U}) ] \\ \end{array}
+$$
+
+where $p_d(\mathbf{y}_{< t}|\boldsymbol{\Phi}_t^T)$ represents the probability of generated target tokens based on translated source information and $p_e(\mathbf{x}_{\leq g(t)}|\boldsymbol{\Phi}_t^T;\boldsymbol{\Phi}_t^U)$ is the probability of available source tokens based on both translated and untranslated information. Then we can get the training objective of our model:
+
+$$
+\mathcal {L} (\theta) = - \log p _ {\theta} (\mathbf {y} | \mathbf {x}) + \lambda_ {S} \mathcal {L} _ {S} + \lambda_ {T} \mathcal {L} _ {T}, \tag {9}
+$$
+
+where $-\log p_{\theta}(\mathbf{y}|\mathbf{x})$ is negative log-likelihood.
+
+# 3.2 Post-Evaluation Policy
+
+With the help of token and segment constraints, our model can accurately recognize the translation degree, which can be utilized to perform our Post-Evaluation (PE) policy by measuring the changes in translation degree between adjacent steps.
+
+Generally speaking, the core of the adaptive policy is to decide the conditions for taking different actions (Zhang and Feng, 2022b). According to incremental equivalence, the current generated token should be semantically equivalent to the increment of the source content that has been translated, which can be measured by the changes in translation degree. Therefore, we can evaluate the rationality of actions by measuring the change in
+
+the translation degree of available source tokens. We define the change in the translation degree of source tokens after generating $y_{t}$ as $\Delta \mathbf{d}^{(t)} = (\Delta d_1^{(t)},\dots,\Delta d_I^{(t)})$ and $\Delta d_i^{(t)}$ is calculated as:
+
+$$
+\Delta d _ {i} ^ {(t)} = \max \left\{d _ {i} ^ {(t + 1)} - d _ {i} ^ {(t)}, 0 \right\}, \tag {10}
+$$
+
+where $d_i^{(t)}$ and $d_i^{(t + 1)}$ are calculated in Eq.(3) and $\max (\cdot)$ function ensures that the translation degree is undiminished considering incremental equivalence. Furthermore, we introduce hyperparameter $\rho$ , which is the threshold to measure the change in translation degree.
+
+As shown in Figure 3, we can get the conditions for taking different actions by comparing $\Delta \mathbf{d}^{(t)}$ and $\rho$ . We first define function max_select(\cdot), which returns the maximum element in a vector. According to incremental equivalence, if the change in the translation degree exceeds the threshold (i.e., max_select( $\Delta \mathbf{d}^{(t)}$ ) ≥ $\rho$ ), then the current generated token obtains enough source content, and the model should take WRITE action. Otherwise, the model should continue to take READ action. However, the generation of auxiliary tokens such as 'the' in English can not lead to a change in translation degree. This misleads the model to take READ actions consecutively, so we force the model to take WRITE actions by setting the restriction of consecutive READ actions as $r$ . PE policy is shown in Algorithm 1. Our model will only take WRITE action after reading the whole source sentence.
+
+# 3.3 Disturbed-Path Training
+
+Up to now, we have proposed our adaptive policy by introducing post-evaluation, which utilizes the translation degree. Because the adaptive policy adopts different translation paths (i.e., the sequence of READ and WRITE actions) for different contexts, this requires the model to learn as many translation paths as possible. However, the previous training methods (Ma et al., 2019; Elbayad et al., 2020) can only cover a small number of predefined translation paths. To enhance the ability to recognize the translation degree on different translation paths, our model is optimized across our proposed disturbed-path.
+
+Specifically, the log-likelihood estimation based on sentence pair $(\mathbf{x},\mathbf{y})$ through the single path $\mathbf{g}_k$ is computed as:
+
+$$
+\log p (\mathbf {y} | \mathbf {x}, \mathbf {g} _ {k}) = \sum_ {t = 1} ^ {M} \log p \left(y _ {t} \mid \mathbf {y} < t, \mathbf {x} _ {\leq g (t; k)}\right), \tag {11}
+$$
+
+
+(a) WRITE Action
+
+
+(b) READ Action
+Figure 3: Change in translation degree of available sources token after generating $y_{t}$ . The model takes WRITE action when the translation degree has significant changes. Otherwise, the model should take READ action.
+
+Algorithm 1: Post-Evaluation Policy
+Input: Threshold $\rho$ , Restriction on READ actions $r,y_0\gets \langle bos\rangle$ , Prefix with $k$ source tokens $\mathbf{x}_{\leq k},t\gets 1,i\gets k$
+while $y_{t - 1}\neq \langle eos\rangle$ do if Evaluation $(\mathbf{x}_{\leq i},\mathbf{y}_{< t},\rho)$ then Take WRITE action $t\gets t + 1$ else Take READ action $i\gets i + 1$
+end
+Function Evaluation $(\mathbf{x}_{\leq i},\mathbf{y}_{< t},\rho)$ .. calculate $\mathbf{d}^{(t)}$ as Eq.(3) generate $y_{t}$ //Candidate calculate $\mathbf{d}^{(t + 1)}$ as Eq.(3) calculate $\Delta \mathbf{d}^{(t)}$ as Eq.(10) if max_select $(\Delta \mathbf{d}^{(t)})\geq \rho$ then return True else return False
+end
+
+where $\mathbf{g}_k = (g(1; k), \dots, g(M; k))$ defines the number of available source tokens at each step and $k$ is the number of source tokens read in advance before generation. For translation path $\mathbf{g}_k$ , $g(t; k)$ is updated as:
+
+$$
+g (t; k) = \left\{ \begin{array}{c c} \min \left\{g (t - 1; k) + \gamma , I \right\}, & t > 1 \\ \min \left\{k + \gamma , I \right\}, & t = 1 \end{array} , \right. \tag {12}
+$$
+
+where $\gamma$ is uniformly sampled from $[0, \dots, r]$ and $r$ is the restriction on READ actions in PE policy and controls the degree of disturbance to a single translation path. This essentially simulates the situation where the model makes decisions on the next action. For $(\mathbf{x}, \mathbf{y})$ , we then make the model have
+
+the ability to recognize the translation degree under all latency by changing $k$ . Thus, the log-likelihood estimation in Eq.(11) is modified:
+
+$$
+E _ {k} \left[ \log p (\mathbf {y} | \mathbf {x}, \mathbf {g} _ {k}) \right] = \sum_ {k \sim \mathcal {U} (\mathrm {K})} \log p (\mathbf {y} | \mathbf {x}, \mathbf {g} _ {k}), \tag {13}
+$$
+
+where $k$ is uniformly sampled form $\mathbf{K} = [1, \dots, I]$ and $I$ is the length of source sentence. Therefore, our method can perform our adaptive policy under all latency by only using a unified model.
+
+# 4 Experiments
+
+# 4.1 Datasets
+
+We evaluate our proposed method on IWSLT152 English $\rightarrow$ Vietnamese (En $\rightarrow$ Vi) task, IWSLT143 English $\rightarrow$ German (En $\rightarrow$ De) task, and WMT154 German $\rightarrow$ English (De $\rightarrow$ En) task.
+
+For $\mathrm{En} \rightarrow \mathrm{Vi}$ task (Cettolo et al., 2016), our settings are the same as Arivazhagan et al. (2019). We replace tokens whose frequency is less than 5 with $\langle unk \rangle$ . We use TED tst2012 as the development set and TED tst2013 as the test set.
+
+For $\mathrm{En}\rightarrow \mathrm{De}$ task, the model settings remain the same as Cettolo et al. (2014).
+
+For De→En task, we keep our settings consistent with Ma et al. (2020). We apply BPE (Sennrich et al., 2016) with 32K subword units and use a shared vocabulary between source and target. We use newstest2013 as the development set and newstest2015 as the test set.
+
+# 4.2 Model Settings
+
+Since our experiments involve the following models, we briefly introduce them. Wait- $k$ (Ma et al.,
+
+2https://nlp.stanford.edu/projects/nmt/
+3https://wit3.fbk.eu/2014-01
+4www.statmt.org/wmt15/
+
+
+(a) $\mathrm{En}\rightarrow \mathrm{Vi}$
+
+
+(b) $\mathrm{En}\rightarrow \mathrm{De}$
+Figure 4: Performance of different methods on $\mathrm{En} \rightarrow \mathrm{Vi}$ (Transformer-Small), $\mathrm{En} \rightarrow \mathrm{De}$ (Transformer-Small) and $\mathrm{De} \rightarrow \mathrm{En}$ (Transformer-Base) tasks. It shows the results of our methods, wait- $k$ , multi-path, adaptive-wait- $k$ and offline model.
+
+
+(c) $\mathrm{De}\rightarrow \mathrm{En}$
+
+2019) policy is the benchmark method in SiMT. It takes $k$ READ actions first, and then alternates between READ and WRITE actions. Multi-path (Elbayad et al., 2020) achieves comparable performance to wait- $k$ policy under all latency with a unified model. Adaptive-wait- $k$ (Zheng et al., 2020) implements the adaptive policy through a heuristic composition of several fixed policies. Offline refers to conventional Transformer (Vaswani et al., 2017) for full-sentence machine translation. PED represents that our model is trained through disturbed-path and performs PE policy during inference. For all the models mentioned above, we apply Transformer-Small (6 layers, 4 heads) on $\mathrm{En} \rightarrow \mathrm{Vi}$ and $\mathrm{En} \rightarrow \mathrm{De}$ tasks and Transformer-Base (6 layers, 8 heads) on $\mathrm{De} \rightarrow \mathrm{En}$ task. Other model settings follow Ma et al. (2020).
+
+We implement all models by adapting Transformer from Fairseq Library (Ott et al., 2019). The settings of Capsule Networks with GDR are consistent with Zheng et al. (2019b). For our method, we empirically set $r = 2$ and $\rho = 0.24$ for all experiments, and use $k$ as free parameter to achieve different latency. Our proposed method is fine-tuned based on the pre-trained multi-path model. We use greedy search in decoding and evaluate these methods with translation quality measured by tokenized BLEU (Papineni et al., 2002) and latency estimated by Average Lagging (AL) (Ma et al., 2019).
+
+# 4.3 Main Results
+
+The translation performance between our method and the previous methods is shown in Figure 4. It can be seen that our method can exceed previous methods under all latency on all translation tasks.
+
+Compared to wait- $k$ policy, our method obtains significant improvement, especially under low la
+
+tency. This is because wait- $k$ policy performs translation according to the predefined path, which usually leads to uncertain anticipation or introduces redundant latency (Ma et al., 2019). Both Multipath and our methods can generate translation under all latency with a unified model. But our PED method transcends its performance by performing Post-Evaluation (PE) policy, which can evaluate the rationality of actions and then decide whether to take them. Therefore, compared with fixed policy, our PE method can achieve better performance by adjusting its translation policy.
+
+Compared to Adaptive-wait- $k$ policy, our model also surpasses its performance and is more reliable under high latency. Adaptive-wait- $k$ generates translation through a heuristic composition of several models with different fixed policies, which restricts the performance under high latency and leads to a decrease in translation speed caused by frequent model switching (Zheng et al., 2020). Our method generates translation with only a unified model and integrates post-evaluation into fixed policy to evaluate the rationality of actions. In particular, our model can approach the performance of full-sentence machine translation with lower latency on two tasks.
+
+# 5 Analysis
+
+To understand our proposed method, we conduct multiple analyses. All of the following results are reported on De $\rightarrow$ En task.
+
+# 5.1 Ablation Study
+
+We conduct an ablation study on PE policy and disturbed-path training method to verify their effectiveness, respectively. As shown in Table 1, both PE policy and disturbed-path method can
+
+
+Figure 5: Translation and Evaluation process of a De→En example when performing PE policy with $k = 5$ . The horizontal direction denotes the source sentence (De), and the vertical direction denotes generated sentence (En). 'T' represents the translation degree. 'U' represents the degree to which the source token has not yet been translated. Our PE policy can take WRITE actions accurately when the translation degree has significant changes.
+
+
AL
BLEU
PED
7.63
30.28
w/o PE
7.9
30.10
w/o disturbed-path
7.81
29.68
w/o PE, disturbed-path
7.59
29.48
+
+improve the translation performance, and better latency-quality tradeoffs can be obtained by their joint contributions.
+
+We also carry out comparative experiments to understand the two constraints in subsection 3.1. The results are shown in Table 2. Both token and segment constraints have positive effects on translation performance respectively. Although the translation quality is slightly worse when the model is guided by them concurrently, the translation degree of available source tokens can be greatly improved and the latency is also reduced by their combined contributions.
+
+# 5.2 Analysis of Translation Degree
+
+To describe the translation degree intuitively, we visualize it in Figure 5. Obviously, the translation degree of each source token gradually accumulates
+
+Table 1: Ablation study of our method when $k = 9$ . 'w/o PE' denotes our model is trained across disturbed-path and performs fixed policy. 'w/o disturbed-path' denotes our model is trained across multi-path and performs our PE policy.
+
+
LT
LS
AL
BLEU
×
×
7.77
29.48
✓
×
7.86
29.57
×
✓
7.78
29.73
✓
✓
7.59
29.48
+
+Table 2: Comparison among the combinations of two constraints when decoding with $k = 9$ . The model is optimized through multi-path and performs fixed policy.
+
+with the progress of translation, which means that the source content is gradually utilized by the target to generate translation and observes partial equivalence. Besides, our PE policy can take WRITE actions when the translation degree of source tokens has significant changes, which obeys incremental equivalence and ensures the rationality of actions. Therefore, our PED policy can adaptively adjust the translation path based on context to achieve better translation performance.
+
+Following Zheng et al. (2019b), we evaluate the accuracy of the translation degree at each step by using overlapping rate, which measures the coincidence between the predicted tokens and ground-truth tokens. We introduce the prediction function in token constraint to predict the target and source tokens respectively. Then we obtain target overlapping rate $R^T$ by comparing the predicted target tokens with the generated tokens and source overlapping rate $R^S$ by comparing the predicted source
+
+
Latency
1
3
5
7
9
RT(↑)
0.60
0.62
0.63
0.62
0.61
RS(↑)
0.80
0.78
0.77
0.77
0.78
+
+Table 3: The results of overlapping rate under all latency, where the higher rate is better. The model is trained across disturb-path and performs fixed policy.
+
+tokens with available source tokens. $R^T$ is calculated as:
+
+$$
+R ^ {T} = \frac {1}{M} \sum_ {t = 1} ^ {M} \frac {| \mathrm {T o p} _ {7} (p _ {d} (\Phi_ {t} ^ {T})) \cap {\bf y} _ {< t} |}{| {\bf y} _ {< t} |},
+$$
+
+where $p_d(\cdot)$ in subsection 3.1 predicts the target tokens based on translated capsules and $\mathrm{Top}_7(\cdot)$ obtains 7 tokens (7 is just half of the average length of the target sentence in test set) with the highest probability. $R^T$ measures the ability of translated capsules to express target information. Similarly, $R^S$ is calculated as:
+
+$$
+R ^ {S} = \frac {1}{M} \sum_ {t = 1} ^ {M} \frac {| \mathrm {T o p} _ {1 5} (p _ {e} (\Phi_ {t} ^ {T} ; \Phi_ {t} ^ {U})) \cap {\bf x} _ {\leq g (t)} |}{| {\bf x} _ {\leq g (t)} |},
+$$
+
+where $p_{e}(\cdot)$ in subsection 3.1 to predict the source tokens based on output capsules. $\mathrm{Top}_{15}(\cdot)$ obtains 15 tokens (15 is just the average length of the source sentence in test set) with the highest probability. $R^{S}$ measures the ability of output capsules to express available source information. The results are shown in Table 3. The output capsules can well represent the available source information and generated target information under all latency. Therefore, our method can recognize the translation degree accurately at each step according to partial equivalence, thereby providing the basis for our policy.
+
+# 5.3 Analysis on Translation Path
+
+The purpose of the translation policy is to get a better translation path, which is composed of READ and WRITE actions. To verify the effectiveness of our PE policy, we introduce sufficiency and necessity (Zhang and Feng, 2022c) as evaluation metrics. Essentially, sufficiency measures the faithfulness of the generated translation and necessity measures how much the redundant delay is introduced.
+
+We take manually aligned alignments for $\mathrm{De} \rightarrow \mathrm{En}$ corpus in RWTH dataset5 as ground-truth
+
+
+(a) Sufficiency
+
+
+(b) Necessity
+Figure 6: Comparison of adequacy and necessity of translation path between different translation policies.
+
+alignments (Zhang and Feng, 2021b). The comparison of sufficiency and necessity of different methods is shown in Figure 6. Obviously, the translation path decided by our PE policy exceeds other methods in terms of sufficiency and necessity. The sufficiency of wait- $k$ policy is similar to PE policy, but it introduces too much unnecessary delay under all latency. Compared to wait- $k$ policy, Adaptive-wait- $k$ performs better in terms of necessity, but it is obtained at the cost of partial sufficiency.
+
+# 5.4 Translation Efficiency
+
+In order to compare the translation efficiency between our method and the previous methods, we measure it by using the average time of generating each token. The results in Table 4 are tested on GeForce GTX Titan-X. It can be seen that the translation speed of our methods is less than wait- $k$ policy, but about three times faster than Adaptive-wait- $k$ policy. Besides, the translation speed of PED is about twice as slow as 'PED w/o PE', which
+
+
Method
Seconds per token
Adaptive-wait-k
0.1057 s
PED
0.0358 s
PED w/o PE
0.0175 s
Wait-k
0.0146 s
+
+Table 4: The comparison of average time to generate a target token in different methods.
+
+is roughly in line with our expectation for our Post-Evaluation policy.
+
+# 6 Related Work
+
+SiMT policy can be divided into fixed and adaptive policy according to whether the translation path is dynamically decided based on context. For fixed policy, the number of READ actions between adjacent WRITE actions always keeps constant. Dalvi et al. (2018) proposed STATIC-RW, and Ma et al. (2019) proposed wait- $k$ policy, which reads and writes a token alternately after reading $k$ tokens. Elbayad et al. (2020) proposed multi-path training method to make a unified model perform multiple wait- $k$ policies and get the performance comparable to the wait- $k$ policy under all latency. Zhang et al. (2021) proposed future-guided training to help SiMT model invisibly embed future information via knowledge distillation. Zhang and Feng (2021a) proposed a char-level wait- $k$ policy to improve the robustness of SiMT. Zhang and Feng (2021c) proposed MoE wait- $k$ policy, which treats the attention heads as a set of wait- $k$ experts, thereby achieving state-of-the-art performance among the fixed policies.
+
+For adaptive policy, Zheng et al. (2019a) trained the agent with oracle actions generated by full-sentence neural machine translation model. Arivazhagan et al. (2019) proposed MILk to decide the READ and WRITE actions by introducing a Bernoulli variable. Ma et al. (2020) proposed MMA, which implemented MILk on Transformer. Zheng et al. (2020) implemented the adaptive policy through a composition of several fixed policies. Miao et al. (2021) proposed a generative framework to perform the adaptive policy for SiMT. Zhang and Feng (2022c) introduced duality constraints to direct the learning of translation paths during training. Instead of predicting the READ and WRITE actions, Zhang and Feng (2022a) implemented the adaptive policy by predicting the aligned source positions of each target token.
+
+Our method focuses on the accuracy of READ and WRITE actions during inference. Our PE policy can evaluate the rationality of actions by utilizing the increment of source content before taking them, which reduces the probability of incorrect actions. Besides, our method achieves good performance under all latency with a unified model.
+
+Capsule Networks (Hinton et al., 2011) and its assignment policies (Sabour et al., 2017; Hinton et al., 2018) initially attempted to solve the problem of parts-to-wholes in computer vision. Dou et al. (2019) first employed capsule network into NMT (i.e., neural machine translation) model for layer representation aggregation. Zheng et al. (2019b) proposed a novel assignment policy GDR to model past and future source content to assist translation. Wang et al. (2019) proposed a novel capsule network for linear time NMT.
+
+Our PED method introduces Capsule Networks with GDR into SiMT model and recognizes the translation degree of source tokens under the restriction of partial source information. Furthermore, we evaluate the rationality of the actions by measuring the changes in translation degree, to implement the adaptive policy.
+
+# 7 Conclusion
+
+In this paper, we propose a new method of performing the adaptive policy by integrating post-evaluation into the fixed policy to evaluate the rationality of the actions. Besides, disturbed-path training is proposed to enhance the robustness of the model to recognize the translation degree on different translation paths. Experiments show that our method outperforms the strong baselines under all latency and can recognize the translation degree on different paths accurately. Furthermore, PE policy can enhance the sufficiency and necessity of translation paths to achieve better performance.
+
+# Limitations
+
+We think our methods mainly have two limitations. On the one hand, although our method can recognize the translation degree of each source token, it still has some deviations. On the other hand, although the inference speed of our method is slightly slower than the wait- $k$ policy, it is still faster than the Adaptive-wait- $k$ policy, which is enough to meet the needs of the application.
+
+# Acknowledgements
+
+We thank all the anonymous reviewers for their insightful and valuable comments.
+
+# References
+
+Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, and Colin Raffel. 2019. Monotonic infinite lookback attention for simultaneous machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1313-1323, Florence, Italy. Association for Computational Linguistics.
+Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, Roldano Cattoni, and Marcello Federico. 2016. The IWSLT 2016 evaluation campaign. In Proceedings of the 13th International Conference on Spoken Language Translation, IWSLT 2016, Seattle, WA, USA, December 8-9, 2016. International Workshop on Spoken Language Translation.
+Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT evaluation campaign. In Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign@IWSLT 2014, Lake Tahoe, CA, USA, December 4-5, 2014.
+Fahim Dalvi, Nadir Durrani, Hassan Sajjad, and Stephan Vogel. 2018. Incremental decoding and training methods for simultaneous translation in neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), pages 493-499. Association for Computational Linguistics.
+Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Longyue Wang, Shuming Shi, and Tong Zhang. 2019. Dynamic layer aggregation for neural machine translation with routing-by-agreement. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 86-93. AAAI Press.
+Maha Elbayad, Laurent Besacier, and Jakob Verbeek. 2020. Efficient wait-k models for simultaneous machine translation. In *Interspeech* 2020, 21st Annual Conference of the International Speech Communication Association, Virtual Event, Shanghai, China, 25-29 October 2020, pages 1461-1465. ISCA.
+Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor O.K. Li. 2017. Learning to translate in real-time
+
+with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1053-1062, Valencia, Spain. Association for Computational Linguistics.
+Geoffrey E. Hinton, Alex Krizhevsky, and Sida D. Wang. 2011. Transforming auto-encoders. In Artificial Neural Networks and Machine Learning - ICANN 2011 - 21st International Conference on Artificial Neural Networks, Espoo, Finland, June 14-17, 2011, Proceedings, Part I, volume 6791 of Lecture Notes in Computer Science, pages 44-51. Springer.
+Geoffrey E. Hinton, Sara Sabour, and Nicholas Frosst. 2018. Matrix capsules with EM routing. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
+Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, and Haifeng Wang. 2019. STACL: simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 3025-3036. Association for Computational Linguistics.
+Xutai Ma, Juan Miguel Pino, James Cross, Liezl Puzon, and Jiatao Gu. 2020. Monotonic multihead attention. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Yishu Miao, Phil Blunsom, and Lucia Specia. 2021. A generative framework for simultaneous machine translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6697-6706, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Vandan Mujadia and Dipti Misra Sharma. 2021. Low resource similar language neural machine translation for tamil-telugu. In Proceedings of the Sixth Conference on Machine Translation, WMT@EMNLP 2021, Online Event, November 10-11, 2021, pages 288-291. Association for Computational Linguistics.
+Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. *fairoseq: A fast, extensible toolkit for sequence modeling*. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Demonstrations, pages 48-53. Association for Computational Linguistics.
+
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
+Sara Sabour, Nicholas Frosst, and Geoffrey E. Hinton. 2017. Dynamic routing between capsules. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 3856-3866.
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
+Mingxuan Wang, Jun Xie, Zhixing Tan, Jinsong Su, Deyi Xiong, and Lei Li. 2019. Towards linear time neural machine translation with capsule networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 803-812, Hong Kong, China. Association for Computational Linguistics.
+Rongxiang Weng, Heng Yu, Xiangpeng Wei, and Weihua Luo. 2020. Towards enhancing faithfulness for neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 2675-2684. Association for Computational Linguistics.
+Shaolei Zhang and Yang Feng. 2021a. ICT's system for AutoSimTrans 2021: Robust char-level simultaneous translation. In Proceedings of the Second Workshop on Automatic Simultaneous Translation, pages 1-11, Online. Association for Computational Linguistics.
+Shaolei Zhang and Yang Feng. 2021b. Modeling concentrated cross-attention for neural machine translation with Gaussian mixture model. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1401–1411, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Shaolei Zhang and Yang Feng. 2021c. Universal simultaneous machine translation with mixture-of-experts
+
+wait-k policy. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7306-7317, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Shaolei Zhang and Yang Feng. 2022a. Gaussian multihead attention for simultaneous machine translation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3019-3030, Dublin, Ireland. Association for Computational Linguistics.
+Shaolei Zhang and Yang Feng. 2022b. Information-transport-based policy for simultaneous translation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Online and Abu Dhabi. Association for Computational Linguistics.
+Shaolei Zhang and Yang Feng. 2022c. Modeling dual read/write paths for simultaneous machine translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 2461-2477. Association for Computational Linguistics.
+Shaolei Zhang and Yang Feng. 2022d. Reducing position bias in simultaneous machine translation with length-aware framework. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6775-6788, Dublin, Ireland. Association for Computational Linguistics.
+Shaolei Zhang, Yang Feng, and Liangyou Li. 2021. Future-guided incremental transformer for simultaneous translation. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 14428-14436. AAAI Press.
+Shaolei Zhang, Shoutao Guo, and Yang Feng. 2022. Wait-info policy: Balancing source and target at information level for simultaneous machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2022, Online and Abu Dhabi. Association for Computational Linguistics.
+Baigong Zheng, Kaibo Liu, Renjie Zheng, Mingbo Ma, Hairong Liu, and Liang Huang. 2020. Simultaneous translation policies: From fixed to adaptive. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 2847-2853. Association for Computational Linguistics.
+Baigong Zheng, Renjie Zheng, Mingbo Ma, and Liang Huang. 2019a. Simpler and faster learning of adaptive policies for simultaneous translation. In Proceedings of the 2019 Conference on Empirical Methods
+
+in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 1349-1354. Association for Computational Linguistics.
+
+Zaixiang Zheng, Shujian Huang, Zhaopeng Tu, XinYu Dai, and Jiajun Chen. 2019b. Dynamic past and future for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 931-941, Hong Kong, China. Association for Computational Linguistics.
+
+# A Hyperparameters
+
+All systems in our experiments use the same hyperparameters, as shown in Table 5.
+
+# B Numerical Results
+
+Table 6, 8, 7 respectively report the numerical results on IWSLT15 En→Vi, IWSLT14 En→De and WMT15 De→En measured by AL and BLEU.
+
+
Hyperparameter
IWSLT15 En→Vi
IWSLT14 En→De
WMT15 De→En
encoder layers
6
6
6
encoder attention heads
4
4
8
encoder embed dim
512
512
512
encoder ffn embed dim
1024
1024
2048
decoder layers
6
6
6
decoder attention heads
4
4
8
decoder embed dim
512
512
512
decoder ffn embed dim
1024
1024
2048
dropout
0.3
0.3
0.3
optimizer
adam
adam
adam
adam-β
(0.9, 0.98)
(0.9, 0.98)
(0.9, 0.98)
clip-norm
0
0
0
lr
5e-4
5e-4
5e-4
lr scheduler
inverse sqrt
inverse sqrt
inverse sqrt
warmup-updates
4000
4000
4000
warmup-init-lr
1e-7
1e-7
1e-7
weight decay
0.0001
0.0001
0.0001
label-smoothing
0.1
0.1
0.1
max tokens
16000
8192×4
2048×4×4
+
+Table 5: Hyperparameters of our experiments.
+
+
IWSLT15 En→Vi
Offline
AL
BLEU
22.41
28.8
Wait-k
k
AL
BLEU
1
3.03
25.28
3
4.64
27.53
5
6.46
28.27
7
8.11
28.45
9
9.80
28.53
Multi-path
k
AL
BLEU
1
3.16
25.82
3
4.69
27.99
5
6.42
28.33
7
8.17
28.39
9
9.82
28.36
Adaptive-wait-k
(ρ1, ρ10)
AL
BLEU
(0.2, 0.0)
3.12
26.05
(0.4, 0.0)
4.38
27.72
(0.6, 0.0)
6.28
28.45
(1.0, 0.0)
7.96
28.47
(1.0, 0.4)
9.80
28.41
PED
k
AL
BLEU
1
3.16
26.78
3
4.74
28.69
5
6.46
28.74
7
8.18
28.82
9
9.80
28.77
+
+Table 6: Numerical results of IWSLT15 En $\rightarrow$ Vi.
+
+
IWSLT14 En→De
Offline
AL
BLEU
23.25
27.18
Wait-k
k
AL
BLEU
1
2.03
18.54
3
3.31
22.30
5
5.17
25.45
7
6.83
26.01
9
8.52
25.64
Multi-path
k
AL
BLEU
3
3.22
23.50
5
5.01
25.84
7
6.84
26.65
9
8.64
26.83
Adaptive-wait-k
(ρ1, ρ10)
AL
BLEU
(1.0, 0.3)
2.34
24.08
(1.0, 0.4)
3.79
24.63
(1.0, 0.6)
6.34
25.74
(1.0, 0.7)
7.07
25.88
(1.0, 0.8)
8.10
26.07
PED
k
AL
BLEU
3
3.05
24.14
5
5.03
26.16
7
6.91
26.81
9
8.71
27.12
+
+Table 7: Numerical results of IWSLT14 En→De.
+
+
WMT15 De→En
Offline
AL
BLEU
27.45
30.62
Wait-k
k
AL
BLEU
1
-0.01
17.88
3
1.66
23.23
5
4.12
26.88
7
6.01
28.35
9
7.84
28.97
Multi-path
k
AL
BLEU
1
0.64
19.90
3
2.20
24.06
5
4.10
26.87
7
6.08
28.46
9
8.00
29.42
Adaptive-wait-k
(ρ1, ρ10)
AL
BLEU
(0.2, 0.0)
0.50
20.37
(0.4, 0.0)
1.39
22.81
(0.6, 0.0)
2.52
25.28
(0.8, 0.0)
4.39
27.63
(1.0, 0.0)
5.38
28.15
(1.0, 0.4)
7.32
28.78
PED
k
AL
BLEU
1
-0.21
22.08
3
1.62
24.57
5
3.39
27.51
7
5.67
29.16
9
7.63
30.28
+
+Table 8: Numerical results of WMT15 De→En.
\ No newline at end of file
diff --git a/turningfixedtoadaptiveintegratingpostevaluationintosimultaneousmachinetranslation/images.zip b/turningfixedtoadaptiveintegratingpostevaluationintosimultaneousmachinetranslation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..36d21042622759f07142bfd60e6ccafc8209405a
--- /dev/null
+++ b/turningfixedtoadaptiveintegratingpostevaluationintosimultaneousmachinetranslation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a30994582e44da9cffa3a9c3f307eba12a495a3a5a2dfd28cb5957bca4c888d
+size 737837
diff --git a/turningfixedtoadaptiveintegratingpostevaluationintosimultaneousmachinetranslation/layout.json b/turningfixedtoadaptiveintegratingpostevaluationintosimultaneousmachinetranslation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..aecbadcfbd3b93822237fe4857399aef29109423
--- /dev/null
+++ b/turningfixedtoadaptiveintegratingpostevaluationintosimultaneousmachinetranslation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:71dc2d49022cda2211abe229320c3816335f8624c5144e4f0103218ab57f8ba2
+size 476814
diff --git a/tweetbasedreachawaretemporalattentionnetworkfornftvaluation/cb849413-17dd-42af-a05a-0b15e9ee2d76_content_list.json b/tweetbasedreachawaretemporalattentionnetworkfornftvaluation/cb849413-17dd-42af-a05a-0b15e9ee2d76_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c67a9f6e72b3ae32213508d072a65e889abf6601
--- /dev/null
+++ b/tweetbasedreachawaretemporalattentionnetworkfornftvaluation/cb849413-17dd-42af-a05a-0b15e9ee2d76_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7717a6899f4d5109ae0c00aa9c52d2b19500ee871a58689d7b271309b6ee9bac
+size 88200
diff --git a/tweetbasedreachawaretemporalattentionnetworkfornftvaluation/cb849413-17dd-42af-a05a-0b15e9ee2d76_model.json b/tweetbasedreachawaretemporalattentionnetworkfornftvaluation/cb849413-17dd-42af-a05a-0b15e9ee2d76_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7e7a3dba47290e8737f90783467755883c0f518b
--- /dev/null
+++ b/tweetbasedreachawaretemporalattentionnetworkfornftvaluation/cb849413-17dd-42af-a05a-0b15e9ee2d76_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:470fe88c1d09887beb8ea7e75716271a6b948f7588b42e36fdae9e0d5729e02a
+size 113328
diff --git a/tweetbasedreachawaretemporalattentionnetworkfornftvaluation/cb849413-17dd-42af-a05a-0b15e9ee2d76_origin.pdf b/tweetbasedreachawaretemporalattentionnetworkfornftvaluation/cb849413-17dd-42af-a05a-0b15e9ee2d76_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b014814a66883b0f1ddcc6d83616a3226254fc62
--- /dev/null
+++ b/tweetbasedreachawaretemporalattentionnetworkfornftvaluation/cb849413-17dd-42af-a05a-0b15e9ee2d76_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:62412bf2bf5cefca0bb2942ac105837ab02b4c087acf6de58ef1b7df976e9cd3
+size 1725248
diff --git a/tweetbasedreachawaretemporalattentionnetworkfornftvaluation/full.md b/tweetbasedreachawaretemporalattentionnetworkfornftvaluation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..2d3fe50ddfb3566bb17bef6f4b81a7a9a5f58c5a
--- /dev/null
+++ b/tweetbasedreachawaretemporalattentionnetworkfornftvaluation/full.md
@@ -0,0 +1,444 @@
+# Tweet Based Reach Aware Temporal Attention Network for NFT Valuation
+
+Ramit Sawhney*
+
+Georgia Tech
+
+Megh Thakkar*
+
+BITS, Pilani
+
+Ritesh Soun*
+
+SVC, DU
+
+Atula Neerkaje
+
+MIT, Manipal
+
+Vasu Sharma
+
+Amazon Science
+
+Dipanwita Guhathakurta
+
+IIIT, Hyderabad
+
+Sudheer Chava
+
+Georgia Tech
+
+rsawhney31@gatech.edu, sudheer.chava@scheller.gatech.edu
+
+# Abstract
+
+Non-Fungible Tokens (NFTs) are a relatively unexplored class of assets. Designing strategies to forecast NFT trends is an intricate task due to its extremely volatile nature. The market is largely driven by public sentiment and "hype", which in turn has a high correlation with conversations taking place on social media platforms like Twitter. Prior work done for modelling stock market data does not take into account the extent of impact certain highly influential tweets and their authors can have on the market. Building on these limitations and the nature of the NFT market, we propose a novel reach-aware temporal learning approach to make predictions for forecasting future trends in the NFT market. We perform experiments on a new dataset consisting of over 1.3 million tweets and 180 thousand NFT transactions spanning over 15 NFT collections curated by us. Our model (TA-NFT) outperforms other state-of-the-art methods by an average of $36\%$ . Through extensive quantitative and ablative analysis, we demonstrate the ability of our approach as a practical method for predicting NFT trends.
+
+# 1 Introduction
+
+Non Fungible Tokens (NFTs) are digital assets that represent objects like art, collectibles, and in-game items1. Public attention towards NFTs exploded in 2021 when their market experienced record sales (NonFungible, 2021), but little is known about the overall structure and evolution of its market. The NFT space is characterized by extreme growth along with highly skewed and uncertain returns that typify speculative markets (White et al., 2022). Little to no work has been done to forecast future trends in the NFT market, and unlike other, more stable assets, investing in NFTs is associated with extremely high amounts of risk (Mazur, 2021a; Nadini et al., 2021) as they are highly volatile (Kong
+
+
+Figure 1: We visualize a sample of tweets related to Bored Ape Yacht Club NFTs. We also plot the daily average price of the same NFT collection to observe the impact of tweets by influential users.
+
+
+
+
+
+
+
+
+
+
+
+Tweets - 23rd August, 2021
+
+02:54:04
+
+Logan Paul, Chris Camillo Buy Bored Ape Yacht Club NFTs, Floor Rises: What Investors Should Know #LINK
+
+08:01:54
+
+The Bored Ape Yacht Club was finally awarded official Twitter verification earlier today!
+
+11:45:36
+
+I've given in and bought an NFT. A Bored Ape Yacht Club - by KSI (7.4M Followers) [37175 Likes | 854 Retweets]
+
+and Lin, 2021). Additionally, social media has emerged as a space for NFT holders and creators to shape community opinion and drive public sentiment about NFT projects (van Slooten, 2022). Therefore, conventional forecasting approaches and contemporary ML models which utilize only numerical historic NFT data fail to capture sufficient information.
+
+Behavioral finance theories (Chu et al., 2019) suggest that people are more likely to make decisions based on overconfidence bias (Slovic and Fischhoff, 1977; Gervais, 2001) and herd behavior (Bikas et al., 2013; Bikhchandani and Sharma, 2000) when faced with uncertainty. The abundance of tweets about various NFT collections help in creating "hype" around them which drives their sales, reinforcing herd behavior. Studies have shown that NFTs valued by experts are more successful (Franceschet, 2020), and that the structure of the the NFT co-ownership network is highly centralized, and small-world-like (Barabasi, 2021; Barrat and Weigt, 1999).
+
+As shown in Figure 1, the daily average price of an NFT collection, namely Bored Ape Yacht
+
+Club, reacts immediately to a highly influential individual tweeting positively about it and spikes up. However, numerous challenges arise while analyzing such texts. For instance, there are inherent dynamic timing irregularities (Sawhney et al., 2021d) when influencers or "alpha" users make such tweets and as communities react to them. Simultaneously capturing temporal granularities along with popularity information (Savas, 2021) is crucial, as more widely the content is shared over time, the greater the user's impact becomes (Anger and Kittl, 2011).
+
+Therefore, to develop a robust method for predicting NFT trends, we curate a dataset (§3.1), and formulate a new time and popularity aware financial modelling approach, where the influence and reach of individual tweets is captured effectively translating its effects in their market value.
+
+Our contributions can be summarized as:
+
+- We curate a dataset consisting of over 1.3 million tweets and 180 thousand NFT transactions spanning over 15 NFT collections for two downstream tasks, namely daily average price prediction and price movement classification (§3).
+- We plan to make this data publicly available and hope that it could further the research in this field. To the best of our knowledge, this will be the first publicly available, large scale dataset on NFTs based on social media "hype" and sentiment.
+- We propose a novel tweet based reach-aware temporal attention network to predict NFT trends (§5), and analyze the impact of social media on NFT price prediction.
+- Through quantitative (\$6.1), ablative (\$6.2) and exploratory (\$6.3, $\S 6.4$ ) experiments, we build the case for our approach as a practical method for modelling NFT market data.
+
+# 2 Related Work
+
+Non-Fungible Tokens NFTs are digital assets with relatively recent origins (Nadini et al., 2021). NFT pricing involves more complex valuations in comparison to traditional assets such as equity (Kong and Lin, 2021), and are associated with higher returns along with high volatility (Mazur, 2021a). Existing research on NFTs focuses mostly on technical aspects such as components, protocols, standards, & desired properties (Wang et al., 2021) and new blockchain-based protocols to trace physical goods (Westerkamp et al., 2018) and the implications that
+
+NFTs have on the art world (Whitaker, 2019; van Haaften-Schick and Whitaker, 2021). Furthermore, little to no work has been done to forecast future trends in the NFT market.
+
+NLP in Finance Traditional financial forecasting techniques have been applied in areas such as stock markets (Ariyo et al., 2014; Rundo et al., 2019), currency exchange markets (Kamruzzaman and Sarker, 2003), and energy economics (Bento et al., 2018). Conventional financial models previously relied on numerical features (Nikou et al., 2019) and technical indicators (Shynkevich et al., 2017). These include discrete (Ariyo et al., 2014; Bollerslev, 1986), continuous (Jacquier et al., 2002; Andersen, 2007), and neural approaches (Luo et al., 2018; Kim et al., 2019). Efforts have since shifted towards utilizing textual data such as social media posts (Xu and Cohen, 2018), news reports (Li et al., 2020; Schumaker and Chen, 2009), web searches (Zhong and Raghib, 2019; Liu et al., 2012), etc.. These studies confine their analyses to stock markets. Recently, Sawhney et al. (2022) explored cryptocurrency bubble prediction based on user behavior on social media. However, there is a gap in leveraging social media and NLP to analyse and forecast future trends in the NFT market.
+
+Time-Aware Modelling Temporal data is omnipresent in several real-world applications, including healthcare (Baytas et al., 2017a), recommender systems (Rabiu et al., 2020), and finance (Selvin et al., 2017). As a result, sequential neural models such as LSTMs (Hochreiter and Schmidhuber, 1997) have gained popularity due to their ability to capture sequential context dependency (Hu et al., 2018). Time-aware modelling of time series data has shown improvements over conventional sequential neural models on various tasks such as patient subtyping (Baytas et al., 2017a), suicide ideation detection (Sawhney et al., 2020), and disease progression (Gao et al., 2020). Recently, time-aware modelling has been adapted in the realm of financial NLP, such as stock recommendation (Ying et al., 2020), price prediction (Sawhney et al., 2021a), and ranking (Sawhney et al., 2021d). However, these approaches do not take into account the engagement and popularity of social media posts. Hence, such methods do not scale to NFTs, which are more closely correlated with user sentiment and social media "hype" in comparison to traditional asset classes (Bouraga, 2021; Franceschet, 2020). With this work, we seek to explore a promising
+
+research avenue i.e the intersection of NFTs and financial NLP, along with time and hype aware neural modelling.
+
+# 3 Dataset and Tasks
+
+# 3.1 Dataset
+
+We utilise two sources, Twitter and Etherscan $^2$ (Ethereum Blockchain access point) to collect qualitative and quantitative data respectively for 15 NFT collections. We shortlist NFT collections which are launched before January $1^{st}$ 2022, and appear among the top 40 collections by all-time sales volume on Opensea $^3$ , the most popular marketplace for NFTs. Using the data described below, we construct two datasets for the tasks described in the subsequent section.
+
+# 3.1.1 Qualitative Data - Tweets
+
+We collect qualitative data by extracting tweets related to shortlisted NFT collections from Twitter. We search for tweets consisting of the official Twitter handle of the collection, the Twitter handles of its creators, as well as a curated list of most frequently used hashtags and search terms related to each collection. Tweets matching any of the above search criteria are extracted. In addition to the tweet text and engagement information (number of likes, retweets, etc.), we also associate each tweet with information about the user who posted it, such as user bio, followers and friends count etc.
+
+We have a total of 1,354,427 tweets corresponding to 15 NFT collections posted in the one-year period between January 1 2021 to January 31 2022. The median number of tweets over the collections is 65,158 with a maximum of 363,506 corresponding to the NFT collection Cool Cats NFT.
+
+# 3.1.2 Quantitative Data - Transactions
+
+We gather quantitative data, that is NFT transactions between January 1 2021 to January 31 2022 for shortlisted collections from Etherscan which is an Ethereum blockchain explorer. We filter out confirmed NFT sales and extract all relevant data for each transaction comprising of the seller and buyer address, transaction timestamp, amount and meta-data of the NFT sold/purchased.
+
+We have a total to 188,535 transactions over the one year time span for 15 NFT collections. A
+
+detailed breakdown of the dataset is given in Appendix B.
+
+# 3.2 Tasks
+
+We aim to predict future NFT trends based on historic tweets about an NFT collection.
+
+Daily Average Price Prediction We regress the future daily average price of an NFT collection $n$ given as, $\theta = \frac{\sum_{k=1}^{k=t_d} s_{kd}}{t_d}$ , where $s_{kd}$ is the transaction value of the $k^{th}$ NFT sale on day $d$ and $t_d$ is the number of sales on that day. Given $L$ historic tweets for a collection, we aim to predict the average price of the NFT collection on the next day. It is evaluated using mean squared error loss.
+
+Price Movement Classification We formulate movement prediction as a binary classification task. For an NFT collection $n$ , label $y_{k} = 1$ if $s_k > s_{k-1}$ , $y_{k} = 0$ otherwise. Thus $y_{k}$ refers to the price movement of the NFT collection since $s_{k-1^{th}}$ transaction. We evaluate the model performance on this task using macro F1 score.
+
+# 4 Experimental Setup
+
+Preprocessing Following (Nguyen et al., 2020), we use NLTK to preprocess tweets by converting mentions (@) and URLs to special tokens @USER and HTTPURL. We treat emoticons by converting them to strings using emoji Python package.
+
+Training Setup We perform all our experiments on a Tesla T4 GPU. We use Optuna (Akiba et al., 2019) to find optimal hyperparameter values based on the validation MSE/Macro F1 scores by performing 25 search trials. We explored the lookback window length $L \in [2,40]$ and the hidden state dimensions $\in [64,768]$ . We use $10\%$ , $10\%$ and $80\%$ of the samples for testing, validation and training respectively for both tasks. We use learning rate $\in [1e^{-5}, 1e^{-2}]$ and train the models using Adam as our optimizer for 2,150 seconds and 10,845 seconds for daily average price prediction and price movement classification tasks, respectively.
+
+Evaluation Metrics We evaluate methods using Mean Squared Error (MSE) loss for daily average price prediction task and Macro F1-score (M.F1) for price movement classification task.
+
+# 4.1 Baseline Models
+
+- Prophet A decomposable time-series model utilising interpretable model components (Taylor and Letham, 2017)
+ARIMA A moving average based autoregressive model that uses past prices as input (Adebiyi et al., 2014).
+- MLP A simple Multi-Layer Perceptron that uses averaged BERT embeddings of tweet sequences as input.
+- LSTM Utilizes an LSTM (Hochreiter et al., 1997), which is capable of learning long term dependencies, to encode textual streams.
+- FastText + CNN A CNN based architecture (Kim, 2014) with a convolution layer on top of FastText (Joulin et al., 2016) embeddings.
+- FAST A time-aware LSTM capable of modelling temporally irregular text stream data (Sawhney et al., 2021d).
+
+# 5 Methodology
+
+# 5.1 Features
+
+Text Embeddings We use Bidirectional Encoder Representations from BERTweet (Nguyen et al., 2020) to encode each preprocessed tweet $p_k$ to features $m_k = \mathrm{BERTweet}(p_k) \in \mathbb{R}^d$ where $d = 768$ , obtained by taking the [CLS] token output from the final layer.
+
+User Feature Vector We use the Twitter user metadata for each tweet $p_k$ , to construct a user feature vector $\boldsymbol{u}_k \in \mathbb{R}^d$ where $d = 5$ , normalised column-wise. This vector contains essential information about the author of the tweet like the number of followers, whether the author is verified or not, their status count, their favourites count, and friends count. This helps the model learn not only from the contextualized BERT representations but also find potential correlations between user metadata and the tweet's influence on NFT valuation.
+
+# 5.2 Model Components
+
+In this section we present the architecture of our framework, TA-NFT: Time and Reach Aware Network for NFT Price Prediction, designed to forecast NFT prices based on social media trends by explicitly modelling the temporal irregularities and engagement of tweets.
+
+Reach Aware Temporal Network Fine-grained timing irregularities play a crucial role in modelling online text stream data. For instance, the time interval between two tweets about an NFT collection can vary widely, from a few minutes to several days. Therefore, its influence on the value of the NFT collection may drastically vary overtime. There is a decay or increase in the influence of the tweet in relation to other tweets about the collection. Furthermore, every tweet does not have the same reach. The reach/engagement of two consecutive tweets about the same collection may vary by thousands of likes and retweets. In addition to this, the sentiment polarity between tweets may also vary drastically.
+
+Thus, in order to capture these reach, polarity and time dependent complexities, we modify Time-aware LSTM (Baytas et al., 2017b) into reach-aware temporal network $(\mathrm{RTN}(\cdot))$ . Intuitively, the greater the time elapsed between tweets, the lesser the impact, and the greater the reach, the higher the impact in the direction of sentiment polarity. Thus, for a given day and time $k$ , RTN applies a decaying function over $\Delta k$ , the elapsed time between two tweets $[p_k,p_{k - 1}]$ . It also applies a function over the number of likes $l$ , retweets $r$ and polarity $s$ of a tweet, transforming the reach, polarity and time differences into weights:
+
+$$
+\boldsymbol {C} _ {k - 1} ^ {s} = \tanh (\boldsymbol {W} ^ {d} \boldsymbol {C} _ {k - 1} + \boldsymbol {b} ^ {d})
+$$
+
+$$
+\hat {C} _ {k - 1} ^ {s} = C _ {k - 1} ^ {s} * g (\Delta k) * q (l, r, s)
+$$
+
+(Discounted short-term memory)
+
+$$
+\boldsymbol {C} _ {k - 1} ^ {T} = \boldsymbol {C} _ {k - 1} - \boldsymbol {C} _ {k - 1} ^ {s}
+$$
+
+(Long term memory)
+
+$$
+\boldsymbol {C} _ {k - 1} ^ {*} = \boldsymbol {C} _ {k - 1} ^ {T} + \hat {\boldsymbol {C}} _ {k - 1} ^ {s}
+$$
+
+(Adjusted previous memory)
+
+where $C_{k - 1}^{s}$ is the previous cell memory, $\pmb{W}^{d};\pmb{b}^{d}$ are the network parameters, $g(\cdot)$ is a heuristic decaying function. Following (Baytas et al., 2017b) we set $g(\cdot)$ as,
+
+$$
+g (\Delta k) = 1 / \Delta k
+$$
+
+and $q(\cdot)$ as,
+
+$$
+q (l, r, s) = \left\{ \begin{array}{l} s * (l + r) \text {i f} s \neq 0 \\ \zeta * (l + r) \text {i f} s = 0 \end{array} \right.
+$$
+
+where $\zeta \approx 0$
+
+Using the adjusted previous memory $C_{k-1}^{*}$ , we define the current hidden state and current memory states for RTN as:
+
+$$
+\widetilde {\boldsymbol {c}} _ {\boldsymbol {k}} = \tanh \left(\boldsymbol {W} ^ {c} \boldsymbol {h} _ {k - 1} + \boldsymbol {U} ^ {c} \boldsymbol {m} _ {k} + \boldsymbol {b} ^ {c}\right)
+$$
+
+$$
+\boldsymbol {C} _ {k} = \boldsymbol {i} _ {k} * \widetilde {\boldsymbol {c}} _ {k} + \boldsymbol {f} _ {k} * \boldsymbol {C} _ {k - 1} ^ {*} \quad (\text {C u r r e n t m e m o r y})
+$$
+
+$$
+\boldsymbol {h} _ {k} = \boldsymbol {o} _ {k} * \tanh (\boldsymbol {C} _ {k}) \quad \text {(C u r r e n t h i d d e n s t a t e)}
+$$
+
+
+Figure 2: An overview of TA-NFT - Reach Aware Temporal Network. TA-NFT feeds tweet embeddings to a reach-aware temporal network (RTN). User-features are concatenated to the output of RTN and fed to a GRU, followed by a Hawkes Attention layer. Finally, the aggregated representation is passed to an MLP for prediction.
+
+where $W^{c};U^{c};b^{c}$ are the learnable parameters, $\pmb {i}_k;\pmb {f}_k;\pmb {o}_k$ are input, forget and output gates. Finally, given tweets $[p_1,\dots p_T]$ over a lookback length $L$ , we define the update rule of RTN as,
+
+$$
+\boldsymbol {h} _ {\boldsymbol {k}} = \operatorname {R T N} \left(\boldsymbol {m} _ {\boldsymbol {k}}, \Delta k, \boldsymbol {h} _ {\boldsymbol {k} - \boldsymbol {1}}\right); \quad k \in [ 1, T ] \tag {1}
+$$
+
+where, $h_k$ represents the hidden states of RTN.
+
+The hidden states obtained from RTN are then updated by concatenating the user feature vectors $u_{k}$ to it,
+
+$$
+\boldsymbol {h} _ {k} = \boldsymbol {h} _ {k} \oplus \boldsymbol {u} _ {k} \tag {2}
+$$
+
+to obtain feature vectors $\in \mathbb{R}^d$ where $d = 773$ .
+
+Hawkes Attention Layer Existing work show that all historical sequence features are not equally informative and have a varied influence over the predictions (Sawhney et al., 2021c). We use a temporal attention mechanism (Luong et al., 2015) to emphasize sequence features likely to have substantial influence. This mechanism learns attention weights $\beta_{k}$ for each hiddden state $h_k\in \overline{h} = [h_1,\dots ,h_T]$ as,
+
+$$
+\beta_ {k} = \operatorname {S o f t m a x} _ {k} \left(\left(\boldsymbol {h} _ {\boldsymbol {k}}\right) ^ {\mathrm {T}} \left(\boldsymbol {W} ^ {\boldsymbol {a}} \bar {\boldsymbol {h}}\right)\right) \tag {3}
+$$
+
+where, $W$ denotes learnable weights.
+
+Next, we enhance the temporal attention using the Hawkes process (Mei and Eisner, 2017) with a
+
+Hawkes attention mechanism. The Hawkes process is a temporal point process that models a sequence of arrival of features over time. Each feature item "excites" the process in the sense that the chance of a subsequent arrival is increased for some time. Studies (Zuo et al., 2020; Sawhney et al., 2021b) show that the Hawkes process can be used to model sequences from social media and discourses. The Hawkes attention mechanism learns an excitation parameter $\epsilon$ corresponding to excitation induced by tweet $p_k$ and a decay parameter $\alpha$ to learn the decay rate of this induced excitement. Formally, we use a weighted average to aggregate hidden states $\overline{h}$ via Hawkes process as,
+
+$$
+\boldsymbol {u} = \operatorname {T A - N F T} \left(\left\{p _ {k}, t _ {k} \right\} _ {k = 1} ^ {T}\right) = \sum_ {k} \frac {\beta_ {k} \boldsymbol {q} _ {k}}{\sum_ {\tau} \beta_ {\tau} \boldsymbol {q} _ {\tau}} \boldsymbol {q} _ {k} \tag {4}
+$$
+
+$$
+\boldsymbol {q} _ {k} = \beta_ {k} * \boldsymbol {h} _ {k} + \epsilon * (\operatorname {R e L U} \left(\boldsymbol {h} _ {k}\right)) * e ^ {- \alpha \Delta k} \tag {5}
+$$
+
+# 6 Results
+
+# 6.1 Performance Comparison
+
+Table 1 shows a comparison of TA-NFT against baselines spanning commonly used approaches for asset price prediction tasks. We observe that our model outperforms most baselines by an average of $36\%$ . ARIMA (Adebiyi et al., 2014) and Facebook Prophet (Taylor and Letham, 2017), being time-series models using only historical price
+
+
Model
Price Pred. MSE ↓
Mov. Pred. M.F1 ↑
Prophet (Taylor and Letham, 2017)
0.4084
0.2576
ARIMA (Adebiyi et al., 2014)
0.1510
0.3278
MLP
0.1363
0.3621
LSTM (Hochreiter et al., 1997)
0.1287
0.3914
FastText + CNN (Kim, 2014)
0.1630
0.3076
FAST (Sawhney et al., 2021d)
0.1253
0.4032
TA-NFT (Ours)
0.0914*
0.4618*
+
+data, are unable to capture sufficient information. FastText+CNN (Kim, 2014) applies Convolutional Neural Networks on text embeddings from tweets, and FAST (Sawhney et al., 2021d) is a time-aware model using both text and historical features. We postulate that our model's superior performance over them is due to, 1) time-aware Hawkes attention mechanism, 2) incorporation of tweets' reach, polarity and timing based irregularities, and 3) accounting for author influence on the impact of individual tweets. TA-NFT outperforms other time-aware networks due to the Hawkes attention mechanism, tweet meta data and user information which serve as proxies for the popularity of the NFT on Twitter. These observations reveal that a combination of these features contribute towards NFT valuation, and by capturing all these features, our model is practically applicable for NFT average price prediction and price movement classification.
+
+# 6.2 Ablation Study
+
+We account for the importance of various components of TA-NFT in Table 2. First, we observe that replacing the standard LSTM (Hochreiter et al., 1997) with Time-aware LSTM (Baytas et al., 2017b) leads to significant performance improvement. This validates that incorporating the time irregularities helps in modelling the NFT market. Further improvement is noted on modifying it into Reach-aware T-LSTM which accounts for the reach of individual tweets. Enriching the temporal network with Hawkes process leads to performance boosts. This is possibly due to the ability of the Hawkes attention layer to capture excitations caused by influential tweets. Finally enriching the tweet embeddings with user feature vector in combination with reach-aware temporal network and Hawkes attention layer leads to best results, indicat
+
+Table 1: Performance comparison with baselines. * indicates improvement over SOTA is significant $(p<0.01)$ under Wilcoxon's signed rank test.
+
+
Reach Weights
User Feature Vector
Model
Price Pred. MSE ↓
Mov. Pred. M.F1 ↑
X
X
LSTM
0.1287
0.3914
X
X
T-LSTM
0.1248
0.4325
✓
X
T-LSTM
0.1196
0.4372
X
X
T-LSTM + Hawkes
0.1031
0.4561
✓
X
T-LSTM + Hawkes
0.1026
0.4601
✓
✓
T-LSTM + Hawkes (Ours)
0.0914*
0.4618*
+
+Table 2: Ablation study over TA-NFT (mean of 10 runs). *,†indicate improvements are significant $(p < 0.01)$ under Wilcoxon's signed rank test.
+
+
+(a) Price Prediction
+
+
+(b) Movement Prediction
+Figure 3: Impact of lookback length $L$ on TA-NFT's performance with error bounds. Results are averaged over 10 independent runs.
+
+ing that capturing the author's influence is significantly advantageous to understand the full extent of a tweet's impact on the NFT market.
+
+# 6.3 Impact of Lookback Length
+
+We study the impact of varying the lookback length $L$ , referring to the number of historical tweets used as input for each data point, on our model's performance for average price prediction task. We observe that with no historical context, both models perform the worst. As we increase the lookback length $L$ , the model performance improves up to an optimal point, indicating that the naturally decaying impact of past tweets on NFT valuation is being captured by the model. As we further increase $L$ beyond the optimal value, we observe a gradual drop in performance. This is possibly due to the noise introduced by older tweets, which are
+
+
+Figure 4: Qualitative analysis of Tweets about ON1 Force NFTs and performance of TA-NFT on price movement prediciton task with temporal, reach and token level attention visualised.
+
+
Model
Avg. Price. Pred. MSE ↓
Movement Pred. M.F1 ↑
LSTM
0.1943
0.3414
T-LSTM
0.1781
0.3536
T-LSTM + Hawkes
0.1702
0.3819
TA-NFT (Ours)
0.1627*
0.4117*
+
+relatively insignificant to model the temporal state of the community around the NFT collection. The short term dependence of NFT valuation on tweets indicates the fast-moving and volatile nature of the NFT space.
+
+# 6.4 Zero-shot Transfer Analysis
+
+We compare the performance of our model in a zero-shot setting in Table 3, where we train the models on a set of collections and test them on a set of previously unseen collections. Our model outperforms other text-based and temporal models. This shows that it is able to effectively generalize
+
+Table 3: Performance comparisons in a zero shot setting. * indicates improvement over SOTA is significant $\left( {p < }\right.$ 0.01) under Wilcoxon's signed rank test.
+
+
Model
Visual Features Used
Avg. Price Pred. MSE
Mov. Pred. M.F1
TA-NFT
None
0.0914
0.4618
TA-NFT
All
0.1879
0.3291
TA-NFT
Reduced using PCA
0.1989
0.3382
TA-NFT
Selected using Boruta
0.1829
0.3432
+
+Table 4: Impact of visual features on the performance of TA-NFT. Results are averaged over 10 independent runs.
+
+better for unseen collections. Further, it indicates that NFT collections share some inherent characteristics and have overlapping latent representations that can be learnt using online text streams.
+
+# 6.5 Impact of Visual Features
+
+We perform a study to account for the impact of the contents of NFTs, i.e., images towards its valuation. We compare the performance of our modelling approach with and without visual features in Table 4. We pretrain the Barlow Twins model (Zbontar et al., 2021) on all NFT images, minimizing the redundancy between the embeddings of two identi
+
+cal networks in order to produce information rich representations for the images. We take the output of the last fully connected layer of the model as the vector representation $v_{i} \in \mathbb{R}^{d}$ where $d = 1000$ for each image. Further, we concatenate these visual features with text features and carry out training and evaluation as usual. We also explore different approaches to reduce/select feature dimensions, namely Principal Component Analysis and Boruta (Kursa et al., 2010). We observe that utilising visual features does not lead to any improvements, but rather degrades the model performance. This observation suggests that visual features do not provide any useful information for NFT valuation and induce noise to the data. We hypothesize that this could be possibly due to inter-collection and intra-collection content similarities spawned by the market responsiveness to the success of a collection (Nadini et al., 2021).
+
+# 6.6 Qualitative Analysis
+
+We conduct a qualitative study in an attempt to interpret the predictions of TA-NFT by taking examples of tweets about 0N1 Force NFT collection as shown in Figure 4 for two cases.
+
+Following a series of positive tweets with significant reach, we observe an upward movement in the price of 0N1 Force NFT collection. Similarly, a downward movement appears to be caused by a series of relatively negative tweets with lower reach. This suggests NFTs follow hype-driven pricing where more wide-reaching social media traffic and positive sentiment leads to an upward trend and vice-versa. Our modelling approach (TA-NFT) is able to contextualize the impact of social media hype by accounting for the reach of individual tweets as well as the influence of its authors in addition to the timing irregularities. Thus, it is able to correctly classify the price movement in both cases as opposed to strictly time-aware modelling techniques. Unlike traditional assets like stocks and gold, the intensity and polarity of public sentiment on social media platform drives price fluctuations (Semenova and Winkler, 2021) which is in turn affected by influential individuals.
+
+# 7 Conclusion
+
+Building on the rising popularity and hype-driven dynamics of NFT markets, we curate a dataset for forecasting NFT trends through two downstream tasks consisting of daily average price prediction
+
+and price movement classification. We introduced TA-NFT, a time and reach-aware neural network for modelling temporal granularities and engagement dynamics of NFT discourse on social media. Through extensive experiments, we show that TA-NFT empirically outperforms other SOTA models by an average of $36\%$ , and present TA-NFT as a practical modelling approach and a strong benchmark for forecasting NFT trends. We hope the proposed dataset can enable more academic progress in the field of financial NLP.
+
+# Ethical Considerations
+
+While the predictive power of models like TA-NFT relies on data, we work within the purview of acceptable privacy practices to avoid coercion and intrusive treatment. We utilize publicly available data in a purely observational and non-intrusive manner. Although informed consent of each user was not sought as it may be deemed coercive, we follow all ethical regulations set by our data sources. Since financial markets are transparent (Bloomfield and O'Hara, 1999) and heavily regulated (Edwards, 1996), we discuss the ethical considerations and potential risks pertaining to our work.
+
+Potential risks: Our contributions are meant as an exploratory research in the financial domain and no part of the work should be treated as financial advice. All financial investments decisions are subject to market risk (Mazur, 2021b; Antonakakis et al., 2019; Campbell, 1996) and should be made after extensive testing. Practitioners should check for various biases (demographic, modelling, randomness) before attempting to use the provided code/data/methods for real-world purposes.
+
+Intended use of data artifacts: Our dataset will be made available to use for research purposes. The intended use of financial datasets is to enable investors to take informed financial decisions (Cooper et al., 2016), research and development to foster progress of AI methods and financial modeling for public good (Veloso et al., 2021).
+
+We additionally follow Cooper et al. (2016) and focus on the following ethical considerations for automated trading systems:
+
+Blocking Price Discovery Trading systems should not block price discovery, nor interfere with the ability of other market participants to add to their own information (Angel and McCabe, 2013). Examples of such scenarios include Quote Stuffing (Egginton et al., 2016) and Wash Trading (von
+
+Wachter et al., 2022). TA-NFT does not block price discovery in any manner.
+
+Circumventing Price Discovery A trading system should not hide information, such as by participating in dark pools or placing hidden orders (Zhu, 2014). While we evaluate our approach only on public data, it is possible for TA-NFT, just as any other automated trading system, to be exploited to hinder market fairness (Sako et al., 2021). We follow broad ethical guidelines to design TA-NFT and encourage readers to follow both regulatory and ethical considerations pertaining to the market.
+
+# Limitations
+
+While our dataset has been curated using data for the entire year of 2021, the NFT market is fast paced, new and ever-changing, which may lead to the need of adapting newer approaches. Apart from this, there are 1000s of NFT collections, and we conduct our analysis on only 15 of them, which might leave out a lot of independent NFT collections and related trends. We also acknowledge the presence of demographic bias in our study as the tweet data is limited to English, and thus our approach may not directly generalize to non-English settings. Additionally, there is a vast scope for future work accounting for the influence of buyer/seller network, correlation between the NFT and Cryptocurrency market, other sources of qualitative data like news, blogs, Reddit etc. and NFT metadata attributes/value proposition.
+
+# References
+
+Ayodele Adebiyi, Aderemi Adewumi, and Charles Ayo. 2014. Stock price prediction using the arima model.
+Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
+Leif BG Andersen. 2007. Efficient simulation of the heston stochastic volatility model. Available at SSRN 946405.
+James J Angel and Douglas McCabe. 2013. Fairness in financial markets: The case of high frequency trading. Journal of Business Ethics, 112(4):585-595.
+Isabel Anger and Christian Kittl. 2011. Measuring influence on twitter. In Proceedings of the 11th international conference on knowledge management and knowledge technologies, pages 1-4.
+
+Nikolaos Antonakakis, Ioannis Chatziantoniou, and David Gabauer. 2019. Cryptocurrency market contagion: Market uncertainty, market complexity, and dynamic portfolios. Journal of International Financial Markets, Institutions and Money, 61:37-51.
+Adebiyi A Ariyo, Adewumi O Adewumi, and Charles K Ayo. 2014. Stock price prediction using the arima model. In 2014 UKSim-AMSS 16th international conference on computer modelling and simulation, pages 106-112. IEEE.
+Albert-Laszlo Barabasi. 2021. The art market often works in secret. here's a look inside. https://www.nytimes.com/2021/05/07/opinion/nft-art-market.html.
+Alain Barrat and Martin Weigt. 1999. On the properties of small-world network models. The European Physical Journal B - Condensed Matter and Complex Systems, 13:547-560.
+Inci M Baytas, Cao Xiao, Xi Zhang, Fei Wang, Anil K Jain, and Jiayu Zhou. 2017a. Patient subtyping via time-aware LSTM networks. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pages 65-74.
+Inci M. Baytas, Cao Xiao, Xi Zhang, Fei Wang, Anil K. Jain, and Jiayu Zhou. 2017b. Patient subtyping via time-aware lstm networks. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '17, page 65-74, New York, NY, USA. Association for Computing Machinery.
+PMR Bento, JAN Pombo, MRA Calado, and SJPS Mariano. 2018. A bat optimized neural network and wavelet transform approach for short-term price forecasting. Applied energy, 210:88-97.
+Egidijus Bikas, Daiva Jurevicene, Petras Dubinskas, and Lina Novickyte. 2013. Behavioural finance: The emergence and development trends. Procedia-social and behavioral sciences, 82:870-876.
+Sushil Bikhchandani and Sunil Sharma. 2000. Herd behavior in financial markets. IMF Staff papers, 47(3):279-310.
+Robert Bloomfield and Maureen O'Hara. 1999. Market transparency: who wins and who loses? The Review of Financial Studies, 12(1):5-35.
+Tim Bollerslev. 1986. Generalized autoregressive conditional heteroskedasticity. Journal of econometrics, 31(3):307-327.
+Sarah Bouraga. 2021. On the popularity of non-fungible tokens: Preliminary results. In 2021 3rd Conference on Blockchain Research & Applications for Innovative Networks and Services (BRAINS), pages 49-50. IEEE.
+John Y Campbell. 1996. Understanding risk and return. Journal of Political economy, 104(2):298-345.
+
+Jeffrey Chu, Yuanyuan Zhang, and Stephen Chan. 2019. The adaptive market hypothesis in the high frequency cryptocurrency market. International Review of Financial Analysis, 64:221-231.
+Ricky Cooper, Michael Davis, and Ben Van Vliet. 2016. The mysterious ethics of high-frequency trading. Business Ethics Quarterly, 26(1):1-22.
+Franklin R Edwards. 1996. The new finance: regulation and financial stability. American Enterprise Institute.
+Jared F Egginton, Bonnie F Van Ness, and Robert A Van Ness. 2016. Quote stuffing. Financial Management, 45(3):583-608.
+Massimo Franceschet. 2020. Art for space. J. Comput. Cult. Herit., 13(3).
+Junyi Gao, Cao Xiao, Yasha Wang, Wen Tang, Lucas M Glass, and Jimeng Sun. 2020. Stagenet: Stage-aware neural networks for health risk prediction. In Proceedings of The Web Conference 2020, pages 530-540.
+S Gervais. 2001. T., odean., 2001. learning to be overconfident. Review of Financial Studies, 14(1):1.
+Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
+Sepp Hochreiter et al. 1997. Long short-term memory. Neural computation, 9:1735-80.
+Ziniu Hu, Weiqing Liu, Jiang Bian, Xuanzhe Liu, and Tie-Yan Liu. 2018. Listening to chaotic whispers: A deep learning framework for news-oriented stock trend prediction. In Proceedings of the eleventh ACM international conference on web search and data mining, pages 261-269.
+Eric Jacquier, Nicholas G Polson, and Peter E Rossi. 2002. Bayesian analysis of stochastic volatility models. Journal of Business & Economic Statistics, 20(1):69-87.
+Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Herve Jégou, and Tomas Mikolov. 2016. Fasttext.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651.
+Joarder Kamruzzaman and Ruhul A Sarker. 2003. Forecasting of currency exchange rates using ann: A case study. In International Conference on Neural Networks and Signal Processing, 2003. Proceedings of the 2003, volume 1, pages 793-797. IEEE.
+Raehyun Kim, Chan Ho So, Minbyul Jeong, Sanghoon Lee, Jinkyu Kim, and Jaewoo Kang. 2019. Hats: A hierarchical graph attention network for stock movement prediction. arXiv preprint arXiv:1908.07999.
+Yoon Kim. 2014. Convolutional neural networks for sentence classification.
+
+De-Rong Kong and Tse-Chun Lin. 2021. Alternative investments in the fintech era: The risk and return of non-fungible token (nft). Available at SSRN 3914085.
+Miron Bartosz Kursa, Aleksander Jankowski, and Witold R. Rudnicki. 2010. Boruta - a system for feature selection. Fundam. Informaticae, 101:271-285.
+Xiaodong Li, Pangjing Wu, and Wenpeng Wang. 2020. Incorporating stock prices and news sentiments for stock market prediction: A case of hong kong. Information Processing & Management, 57(5):102212.
+Ying Liu, Benfu Lv, Geng Peng, and Qingyu Yuan. 2012. A preprocessing method of internet search data for prediction improvement: application to chinese stock market. In Proceedings of the Data Mining and Intelligent Knowledge Management Workshop, pages 1-7.
+Rui Luo, Weinan Zhang, Xiaojun Xu, and Jun Wang. 2018. A neural stochastic volatility model. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
+Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics.
+Mieszko Mazur. 2021a. Non-fungible tokens (nft). the analysis of risk and return.
+Mieszko Mazur. 2021b. Non-fungible tokens (nft). the analysis of risk and return. Available at SSRN 3953535.
+Hongyuan Mei and Jason Eisner. 2017. The neural hawkes process: A neurally self-modulating multivariate point process.
+Matthieu Nadini, Laura Alessandretti, Flavio Di Giacinto, Mauro Martino, Luca Maria Aiello, and Andrea Baronchelli. 2021. Mapping the NFT revolution: market trends, trade networks, and visual features. Scientific Reports, 11(1).
+Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen. 2020. BERTweet: A pre-trained language model for English Tweets. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 9-14.
+Mahla Nikou, Gholamreza Mansourfar, and Jamshid Bagherzadeh. 2019. Stock price prediction using deep learning algorithm and its comparison with machine learning algorithms. Intelligent Systems in Accounting, Finance and Management, 26(4):164-174.
+NonFungible. 2021. Yearly nft market report. https://nonfungible.com/reports/2021/en/ yearly-nft-market-report.
+
+Idris Rabiu, Naomie Salim, Aminu Da'u, and Akram Osman. 2020. Recommender system based on temporal models: a systematic review. Applied Sciences, 10(7):2204.
+Francesco Rundo, Francesca Trenta, Agatino Luigi di Stallo, and Sebastiano Battiato. 2019. Machine learning for quantitative finance applications: A survey. Applied Sciences, 9(24):5574.
+Kentaro Sako, Shin'ichiro Matsuo, and Sachin Meier. 2021. Fairness in ERC token markets: A case study of cryptokitties. In International Conference on Financial Cryptography and Data Security, pages 595-610. Springer.
+Serkan Savaş. 2021. Analysis of the social media impact on the popularity of crypto-currencies. In 2021 6th International Conference on Computer Science and Engineering (UBMK), pages 67-72. IEEE.
+Ramit Sawhney, Shivam Agarwal, Vivek Mittal, Paolo Rosso, Vikram Nanda, and Sudheer Chava. 2022. Cryptocurrency bubble detection: A new stock market dataset, financial task & hyperbolic models. arXiv preprint arXiv:2206.06320.
+Ramit Sawhney, Shivam Agarwal, Megh Thakkar, Arnav Wadhwa, and Rajiv Ratn Shah. 2021a. Hyperbolic online time stream modeling. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1682-1686.
+Ramit Sawhney, Shivam Agarwal, Arnav Wadhwa, Tyler Derr, and Rajiv Ratn Shah. 2021b. Stock selection via spatiotemporal hypergraph attention network: A learning to rank approach. Proceedings of the AAAI Conference on Artificial Intelligence, 35(1):497-504.
+Ramit Sawhney, Shivam Agarwal, Arnav Wadhwa, and Rajiv Shah. 2021c. Tec: A time evolving contextual graph model for speaker state analysis in political debates. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pages 3552-3558. International Joint Conferences on Artificial Intelligence Organization. Main Track.
+Ramit Sawhney, Harshit Joshi, Saumya Gandhi, and Rajiv Shah. 2020. A time-aware transformer based model for suicide ideation detection on social media. In Proceedings of the 2020 conference on empirical methods in natural language processing (EMNLP), pages 7685-7697.
+Ramit Sawhney, Arnav Wadhwa, Shivam Agarwal, and Rajiv Ratn Shah. 2021d. FAST: Financial news and tweet based time aware network for stock trading. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2164-2175, Online. Association for Computational Linguistics.
+
+Robert P Schumaker and Hsinchun Chen. 2009. Textual analysis of stock market prediction using breaking financial news: The azfin text system. ACM Transactions on Information Systems (TOIS), 27(2):1-19.
+Sreelekshmy Selvin, R Vinayakumar, EA Gopalakrishnan, Vijay Krishna Menon, and KP Soman. 2017. Stock price prediction using lstm, rn and cnn-sliding window model. In 2017 international conference on advances in computing, communications and informatics (icacci), pages 1643-1647. IEEE.
+Valentina Semenova and Julian Winkler. 2021. Social contagion and asset prices: Reddit's self-organised bull runs.
+Yauheniya Shynkevich, T Martin McGinnity, Sonya A Coleman, Ammar Belatreche, and Yuhua Li. 2017. Forecasting price movements using technical indicators: Investigating the impact of varying input window length. Neurocomputing, 264:71-88.
+Paul Slovic and Baruch Fischhoff. 1977. On the psychology of experimental surprises. Journal of Experimental Psychology: Human Perception and Performance, 3(4).
+Sean J Taylor and Benjamin Letham. 2017. Forecasting at scale.
+Lauren van Haaften-Schick and Amy Whitaker. 2021. From the artist's contract to the blockchain ledger: New forms of artists' funding using equity and resale royalties. Social Science Research Network.
+Jelmer van Slooten. 2022. Predictive value of tweet sentiment on the bored ape yacht club's trading volume and floor price.
+Manuela Veloso, Tucker Balch, Daniel Borrajo, Prashant Reddy, and Sameena Shah. 2021. Artificial intelligence research in finance: discussion and examples. Oxford Review of Economic Policy, 37(3):564-584.
+Victor von Wachter, Johannes Rude Jensen, Ferdinand Regner, and Omri Ross. 2022. Nft wash trading: Quantifying suspicious behaviour in nft markets. arXiv preprint arXiv:2202.03866.
+Qin Wang, Rujia Li, Qi Wang, and Shiping Chen. 2021. Non-fungible token (nft): Overview, evaluation, opportunities and challenges. ArXiv, abs/2105.07447.
+Martin Westerkamp, Friedhelm Victor, and Axel Kupper. 2018. Blockchain-based supply chain traceability: Token recipes model manufacturing processes. 2018 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), pages 1595-1602.
+Amy Whitaker. 2019. Art and blockchain: A primer, history, and taxonomy of blockchain use cases in the arts. *Artivate*, 8:21 - 46.
+
+Joshua T White, Sean Wilkoff, and Serhat Yildiz. 2022. The role of the media in speculative markets: Evidence from non-fungible tokens (nfts). Available at SSRN 4074154.
+
+Yumo Xu and Shay B. Cohen. 2018. Stock movement prediction from tweets and historical prices. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1970-1979, Melbourne, Australia. Association for Computational Linguistics.
+
+Xiaoting Ying, Cong Xu, Jianliang Gao, Jianxin Wang, and Zhao Li. 2020. Time-aware graph relational attention network for stock recommendation. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 2281-2284.
+
+Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stephane Deny. 2021. Barlow twins: Self-supervised learning via redundancy reduction. arXiv preprint arXiv:2103.03230.
+
+Xu Zhong and Michael Raghib. 2019. Revisiting the use of web search data for stock market movements. Scientific reports, 9(1):1-8.
+
+Haoxiang Zhu. 2014. Do dark pools harm price discovery? The Review of Financial Studies, 27(3):747-789.
+
+Simiao Zuo, Haoming Jiang, Zichong Li, Tuo Zhao, and Hongyuan Zha. 2020. Transformer hawkes process. In International Conference on Machine Learning, pages 11692-11702. PMLR.
+
+# A Experimental Setup
+
+
Parameter
Value
Optimizer
Adam
Learning Rate
2e-4
Batch Size
64
β1,β2,ε
0.9, 0.999, 1e-6
# Epochs
20
Evaluation Metric
MSE/Macro F1
Base Model
BERTweet
Classifier (over architecture)
Linear layer
Number of Parameters
4,817,035
Hardware
Nvidia Tesla T4
+
+Table 5: Model and training setup for TA-NFT.
+
+
Collection
# of NFTs# of tweets# of transactions
0N1 Force
7,777
11,153
8,473
Bored Ape Yacht Club
10,000
28,651
19,472
Cool Cats NFT
9,933
363,506
16,890
CrypToadz by GREMLIN
7,025
134,339
9,408
CyberKongz
5,000
298,710
4,357
DeadFellaz
9,999
65,158
14,489
FLUF World
10,000
67,379
10,059
Hashmasks
16,370
92,903
16,642
Loot
7,779
393
6,642
Mutant Ape Yacht Club
17,961
5,154
14,819
Meebits
20,000
108,237
13,221
Pudgy Penguins
8,888
2,017
15,997
SupDucks
10,001
169,909
12,965
VOX Collectibles
8,888
2,190
11,787
World of Women
10,000
4,728
13,314
Total
159,621
1,354,427
188,535
+
+Table 6: NFT-collection wise data distribution.
+
+
Task
# of data points
Daily Average Price Prediction
2,679
Price Movement Classification
188,535
+
+Table 7: Task-wise data distribution.
+
+# B Dataset Details
+
+A detailed collection-wise breakdown of the collected data is given in Table 6. In addition to this, Table 7 gives task-wise distribution (number of data points) for the tasks defined above.
\ No newline at end of file
diff --git a/tweetbasedreachawaretemporalattentionnetworkfornftvaluation/images.zip b/tweetbasedreachawaretemporalattentionnetworkfornftvaluation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..ac7ec51e58a1ebe61f617735e9da99336ed549df
--- /dev/null
+++ b/tweetbasedreachawaretemporalattentionnetworkfornftvaluation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8ca4399fe506dfe819d65e94ee22817f5e8ac42b58d95ec347902f51c4ba0d78
+size 535107
diff --git a/tweetbasedreachawaretemporalattentionnetworkfornftvaluation/layout.json b/tweetbasedreachawaretemporalattentionnetworkfornftvaluation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8449d83580d384f6ed16836a28556f0b1dabae44
--- /dev/null
+++ b/tweetbasedreachawaretemporalattentionnetworkfornftvaluation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:56116382cab7401a8a8882f2656c5964e5ee3ed8793a350b7e9e357064fa209e
+size 454740
diff --git a/tydipadatasetforpolitenessclassificationinninetypologicallydiverselanguages/83505e90-0d1c-47d9-8704-54872ca9dca4_content_list.json b/tydipadatasetforpolitenessclassificationinninetypologicallydiverselanguages/83505e90-0d1c-47d9-8704-54872ca9dca4_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7df18da23d05a647c205bfe7124810446ef22054
--- /dev/null
+++ b/tydipadatasetforpolitenessclassificationinninetypologicallydiverselanguages/83505e90-0d1c-47d9-8704-54872ca9dca4_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:43879eecfbf12ac604bc34b8b8c51a75a791469f68877915914e12f9e4f63f21
+size 100958
diff --git a/tydipadatasetforpolitenessclassificationinninetypologicallydiverselanguages/83505e90-0d1c-47d9-8704-54872ca9dca4_model.json b/tydipadatasetforpolitenessclassificationinninetypologicallydiverselanguages/83505e90-0d1c-47d9-8704-54872ca9dca4_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..e205ed84dc1b8ee2091e932baefacdf7dc6e5281
--- /dev/null
+++ b/tydipadatasetforpolitenessclassificationinninetypologicallydiverselanguages/83505e90-0d1c-47d9-8704-54872ca9dca4_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6cbad343e59fe92fa5eb3806fca19fd360de0ede82d32171487bd7e883e9e793
+size 118278
diff --git a/tydipadatasetforpolitenessclassificationinninetypologicallydiverselanguages/83505e90-0d1c-47d9-8704-54872ca9dca4_origin.pdf b/tydipadatasetforpolitenessclassificationinninetypologicallydiverselanguages/83505e90-0d1c-47d9-8704-54872ca9dca4_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..428f1621afafa191f7c93cb9257d8c29a6653af1
--- /dev/null
+++ b/tydipadatasetforpolitenessclassificationinninetypologicallydiverselanguages/83505e90-0d1c-47d9-8704-54872ca9dca4_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5e3edff91acb867d8535158b88c1152facaceed4562d5c63be17e4627e7640ec
+size 548355
diff --git a/tydipadatasetforpolitenessclassificationinninetypologicallydiverselanguages/full.md b/tydipadatasetforpolitenessclassificationinninetypologicallydiverselanguages/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..03385e1b24cff740704a6da4575fa202c7961e45
--- /dev/null
+++ b/tydipadatasetforpolitenessclassificationinninetypologicallydiverselanguages/full.md
@@ -0,0 +1,409 @@
+# TYDIP: A Dataset for Politeness Classification in Nine Typologically Diverse Languages
+
+Anirudh Srinivasan Eunsol Choi
+
+Department of Computer Science
+
+The University of Texas at Austin
+
+{anirudhs, eunsol}@utexas.edu
+
+# Abstract
+
+We study politeness phenomena in nine typologically diverse languages. Politeness is an important facet of communication and is sometimes argued to be cultural-specific, yet existing computational linguistic study is limited to English. We create TYDIP, a dataset containing three-way politeness annotations for 500 examples in each language, totaling 4.5K examples. We evaluate how well multilingual models can identify politeness levels – they show a fairly robust zero-shot transfer ability, yet fall short of estimated human accuracy significantly. We further study mapping the English politeness strategy lexicon into nine languages via automatic translation and lexicon induction, analyzing whether each strategy's impact stays consistent across languages. Lastly, we empirically study the complicated relationship between formality and politeness through transfer experiments. We hope our dataset will support various research questions and applications, from evaluating multilingual models to constructing polite multilingual agents. $^{1}$
+
+# 1 Introduction
+
+Whether politeness phenomena and strategies are universal across languages or not have been controversial among sociologists and linguists. While Brown and Levinson (1978) claimed their universality, other followup work (Korac-Kakabadse et al., 2001) claimed how communication patterns can differ based on cultures and other social constructs such as gender (Mills, 2003) and domains.
+
+To contribute to the linguistic study of cross-cultural politeness, we collect politeness labels on nine typologically and culturally diverse languages, Hindi, Korean, Spanish, Tamil, French, Vietnamese, Russian, Afrikaans, and Hungarian. This language set covers five scripts and eight language
+
+families. We follow the seminal work (Danescu-Niculescu-Mizil et al., 2013) closely, focusing on politeness exhibited in requests as they involve the speaker imposing on the listener, requiring them to employ various politeness techniques. To capture rich linguistic strategies that can be lost in translation (Lembersky et al., 2011), we collect sentences written in each target language. To minimize the domain shift among languages, we collect examples in each language from their respective Wikipedia User talk pages, where editors make requests about administrative and editorial decisions.
+
+Crowdsourcing labels in low-resource languages is challenging. Thus, we carefully design an annotation process that includes a translation task to evaluate annotator's language proficiency and a model-in-the-loop qualification task which filters workers whose labels diverges from highly confident predictions from multilingual models. After this process, we observe high agreements among the annotators in our dataset despite the subjectivity of the task. Interestingly, the annotators agree with each other more when assigning politeness score on requests in their native languages compared to assigning politeness score on requests in English, which is their second language.
+
+Equipped with our new multilingual politeness dataset, we evaluate zero-shot transfer ability of existing multilingual models in predicting politeness – subjective and pragmatic language interpretation task. Pretrained language models (Conneau et al., 2020) fine-tuned on annotated English politeness data (Danescu-Niculescu-Mizil et al., 2013) show competitive performances on all languages, weighing in the universality of politeness phenomena across languages. We also witness impressive zero-shot performance of a high-capacity pretrained language model (Brown et al., 2020). We observe a degradation in classification performances when we translate the target language (via Google Translate API) to English, suggesting politeness might
+
+not be preserved in the current machine translation model. Despite the simplicity of classification task, we report a substantial difference between the estimated human accuracy and the best model accuracy (over $10\%$ difference in accuracy in six out of nine languages).
+
+Lastly, we provide two studies delving into politeness phenomena. We map English politeness strategy lexicon to create politeness strategy lexicon in nine languages by using tools like automatic translation, lexicon alignment (Dou and Neubig, 2021) and large-scale corpora in the same domain. Despite the limitations of automatic lexicon mapping, we largely observe consistent correlation with politeness score for each politeness strategy in nine languages we study, with some interesting exceptions. We then compare the notion of politeness and formality which has been studied in multilingual setting (Briakou et al., 2021). Our empirical results support that notions of politeness and formality cannot be used interchangeably. However, when we control for semantics, politeness classifier judges the formal version of the same sentence as more polite than its informal variant.
+
+We release our annotated data and aligned politeness lexicon to support future work. Our dataset can support various end applications, such as building multilingual agents optimized for politeness (Silva et al., 2022), developing a translation model that preserves politeness level (Fu et al., 2020), evaluating the impact of different pretraining corpus and modeling architecture for modeling subjective tasks in a wide range of languages (Hu et al., 2020), understanding cultural-specific politeness strategies, and many more.
+
+# 2 TyD1P: Multilingual Politeness Dataset
+
+Motivation Our goal is to construct high-quality multilingual evaluation data with native content, covering a wide range of languages on the task of politeness prediction. Following prior work (Danescu-Niculescu-Mizil et al., 2013), we focus on identifying politeness in requests, where requests involve speaker imposing on the listener. This scenario elicit speakers to employ diverse strategies to minimize the imposition of requests, or apologizing for the imposition (Lakoff, 1977). For each request text, we aim to collect a graded politeness score (between -3 and 3, with 0.5 increment).
+
+Language Selection We chose Hindi, Korean, Spanish, Tamil, French, Vietnamese, Russian, Afrikaans, and Hungarian. Our criteria for selecting languages were (1) covering low resource language when possible, (2) languages with rich discussion on Wikipedia editor forum and (3) languages where we can recruit native speaker annotators on a crowdsourcing platform, Prolific.
+
+Source Sentence Collection We source requests from Wikipedia user talk pages from target language Wikipedia dumps. Each request is a part of a conversation between editors on Wikipedia. We follow the pre-processing step from prior work (Danescu-Niculescu-Mizil et al., 2013), extracting each request as a sequence of two successive sentences where the second sentence ends with a question mark (?). We present one example here: "I'm somewhat puzzled by your recent edits on the Harper page, which have left two different sets of footnotes. Could you please explain your rationale for the change?"
+
+# 2.1 Annotation Process
+
+Collecting annotations for non-English data for a wide range of languages is non-trivial in all aspects, from source text collection, annotator recruiting to annotation validation. We describe our annotation process here and hope that our collection strategy can provide insights for future multilingual data collection efforts for other tasks and domains.
+
+Pre-processing We observe that a sizable portion of the requests is written in language other than its own. Thus, we filter sentences not belonging to the target language with a language identification with langdetect (Nakatani, 2010).
+
+Table 1 shows data statistics, including the language distribution among these requests. We use the Polyglot tokenizer for preprocessing.
+
+Annotator Recruiting We collect our annotation on a crowdsourcing platform, Prolific, which allows us to find workers based on their first language. Instead of developing separate guidelines for each language, we recruit bilingual annotators. We also filter by their task approval rate ( $>98\%$ ).
+
+To annotators who meet these criteria, we perform qualification process which involves transla
+
+
Language
Family
Script
Total # Requests
% Target / English / Other
Avg length (in bytes)
Hindi (hi)
Indo-Aryan
Devanagari
4,412
71 / 26 / 3
351
Korean (ko)
Korean
Hangul
43,219
96 / 3 / 1
183
Spanish (es)
Romance
Latin
180,832
97 / 2 / 1
181
Tamil (ta)
Dravidian
Tamil
5,590
92 / 8 / 0
325
French (fr)
Romance
Latin
354,544
98 / 1 / 1
179
Vietnamese (vi)
Austroasiatic
Latin
22,070
96 / 4 / 0
210
Russian (ru)
Slavic
Cyrillic
291,220
98 / 1 / 1
254
Afrikaans (af)
Germanic
Latin
3,399
85 / 11/4
134
Hungarian (hu)
Uralic
Latin
80,825
98 / 1 / 1
132
+
+Table 1: Languages chosen for our study and their data statistics. We report the number of available requests in Wikipedia User talk pages after pre-processing step, the distribution of languages after language identification, and the average length in bytes for each request.
+
+tion task and the target task, which we describe below.
+
+Target Task Qualification Inspired by strong zero-shot transfer performances of multilingual models on a variety of tasks (Conneau et al., 2018; Wu and Dredze, 2019), we use a multilingual classifier trained on existing English politeness dataset (Danescu-Niculescu-Mizil et al., 2013) to select sentences for the qualification task. We sample examples where the classifier assigned very high or very low politeness score for each language. Language-proficient researchers verified the correctness of model predictions on a subset (four) of languages. While the model was not always correct, their highly confident predictions were mostly correct. These requests, paired with the predicted politeness label, were used to filter crowdworkers.
+
+Translation Qualification Task Inspired by prior work (Pavlick et al., 2014) which employed a translation task to assess the language proficiency of crowdworkers, we estimate their language proficiency by evaluating their translation skills.
+
+We present crowdworkers with a set of five requests (assigned either very polite or very impolite rating by the model) in the target language, and ask them to translate into English as well as to label a politeness score. We first compared the annotator's translation with the output from Google Translate API. If the edit distance between their translation and the output from Google translate, we remove them from the annotator pool as they could be using this service. We also computed the distance between the user's politeness score and
+
+the model's predicted labels, and pruned workers who provided scores that varies significantly from model predictions.
+
+The qualification is not completely automatic, with constant monitoring on four languages on which language-proficient researchers continuously provide sanity checks. Fifteen workers per language took our qualifier task, and after this filtering we ended up with 7 Afrikaans, 9 Spanish, 9 Hungarian, 10 Tamil, 10 Russian, 11 Hindi, 11 Korean, 11 French and 11 Vietnamese workers.
+
+Final Data Collection / Postprocessing The annotators annotated 5 English requests and 15 target language requests instances per task. The annotation interface can be found in the appendix. We collect 3-way annotations for each request. Annotating 20 examples took approximately seven minutes and annotators were paid $3 for it, translating to$ 25.43/hr.
+
+# 2.2 Inter-annotator Agreement
+
+Ensuring data quality is challenging, especially when we do not have in-house native speaker to inspect for all languages we study. Following prior work (Pavlick et al., 2014; Danescu-Niculescu-Mizil et al., 2013), we estimate the annotation quality by comparing inter-annotator agreement with agreement between randomly assigned labels according to the data distribution8. As we study continuous rather than categorical value, we compute pairwise spearman correlation to measure agreement score instead of Cohen's Kappa.
+
+As each annotator provided scores for both English sentences and sentences of their native languages, we report both agreement numbers, split by language in Table 2. We consistently observe a positive correlation among the annotators' scores. In
+
+
Language
en
target
hi
0.31
(0.4)
0.39
(0.2)
ko
0.34
(0.34)
0.6
(0.12)
es
0.28
(0.12)
0.52
(0.16)
ta
0.38
(0.32)
0.33
(0.17)
fr
0.45
(0.3)
0.53
(0.21)
vi
0.38
(0.31)
0.41
(0.17)
ru
0.43
(0.34)
0.51
(0.16)
af
0.35
(0.34)
0.37
(0.2)
hu
0.38
(0.3)
0.52
(0.19)
average
0.36
(0.31)
0.46
(0.17)
+
+Table 2: Pairwise correlation (mean and standard deviation (in brackets)) for each language annotator, on English data and their native language data.
+
+
+Figure 1: Distribution of final politeness scores per language, with mean and median highlighted.
+
+terestingly, we observe substantially higher agreement when annotators were labeling their own language compared to labeling English across all nine languages. This suggests the interpreting politeness of foreign language can be less precise and more variable compared to interpreting that of native language. As our main goal is collecting target language annotations, this would not impact the quality of our dataset, which studies how native speakers perceive native contents. We plot the averaged pairwise spearman correlation of annotations and that of random assignments in Figure 2. In both English and their native languages, annotator correlation is substantially higher than correlation from random label assignments, which hovers around zero as expected. In Appendix C, we report the correlation with the English politeness labels from the previous study and our annotation, and inter-annotator agreement per by language.
+
+# 2.3 Final Dataset
+
+We collect three way annotations for 500 randomly sampled requests for each language. We normalize each annotator's score to a normal distribution with a mean of zero and standard deviation of 1, and then average the score of three annotators to get
+
+
+Figure 2: Spearman correlation. The first and third graph represents our annotated data in English and target languages respectively, and the second and the fourth shows correlation for random assignments, which hovers around zero as expected.
+
+a final score for each item, which ranges from -3 (very impolite) to $+3$ (very polite). We plot the final politeness distribution per language in Figure 1. Examples of annotated sentences are in Appendix B.
+
+We split these examples into 4 quartiles based on their politeness scores, and consider sentences from the top and bottom 25 percentile of politeness scores only (corresponding to positive and negative politeness), following prior work (Danescu-Niculescu-Mizil et al., 2013; Aubakirova and Bansal, 2016). This results in a balanced binary politeness prediction task, while reducing the number of examples by half. We refer to this dataset (containing half of the total TYDIP dataset) as TYDIP evaluation dataset.
+
+# 3 Predicting Politeness
+
+Equipped with politeness data for nine languages, we evaluate cross-lingual transfer performance of multilingual language models (Conneau et al., 2020). We are interested in following research questions:
+
+1. Can a multilingual model trained on English politeness data predict politeness of different languages?
+2. Can we use a monolingual model trained on English politeness data by translating target languages into English?
+
+Models We study two fine-tuned pretrained language models, one English model (RoBERTa (Liu et al., 2019)) and one multilingual model (XLM-RoBERTa (Conneau et al., 2020)) which supports all nine languages we study.
+
+
Model
Input Lang.
en
hi
ko
es
ta
fr
vi
ru
af
hu
Avg
Majority
-
0.537
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
XLMR
target
0.892
0.868
0.784
0.84
0.78
0.82
0.844
0.668
0.856
0.812
0.808
XLMR
en
0.892
0.884
0.752
0.848
0.748
0.84
0.816
0.688
0.836
0.8
0.801
RoBERTa
en
0.912
0.868
0.692
0.836
0.768
0.812
0.796
0.684
0.856
0.768
0.786
GPT3
target
0.808
0.732
0.708
0.732
0.596
0.764
0.692
0.688
0.688
0.76
0.706
GPT3
en
0.808
0.668
0.62
0.732
0.652
0.72
0.664
0.612
0.7
0.652
0.668
+
+Table 3: Accuracy on TyDIP evaluation dataset. The XLMR and RoBERTa models are finetuned in English politeness data from Danescu-Niculescu-Mizil et al. (2013), while GPT3 model is prompted in a zero-shot fashion. When Input Lang. column is "en", we use Google Translate API to translate the target language into English.
+
+We randomly split data from Danescu-Niculescu-Mizil et al. (2013) to yield 1,926 training and 251 evaluation examples in English. With this training dataset, we fine-tuned each model for five epochs with a batch size of 32 and an learning rate of 5e-6 on a Quadro RTX 6000 machine. We use the large variants for both models.
+
+At inference time, we translate the target language requests into English using Google Translate API (optional for XLMR model, necessary for RoBERTa model).
+
+We use one large-scale language model, GPT3 (Brown et al., 2020) Davinci-002, in a zero-shot prompting setup with the following prompt:
+
+Is this request polite?
+
+
+
+Then, we compute the probabilities for two options for next token – “yes” and “no” respectively, which map to “polite” and “impolite” labels respectively. Designing prompts for each language is non-trivial, so in this initial study we use this exact same English template for all languages.
+
+Results Table 3 reports the model performances. Following recent question answering benchmark (Clark et al., 2020), we only aggregate the scores on non-English languages to focus on transfer performances. Both finetuned language models (XLMR and RoBERTa) boast strong performance in English, reaching an accuracy hovering $90\%$ . Even zero-shot GPT model performs competitively, reporting an accuracy of $80.8\%$ .
+
+In terms of XLMR model, the results were fairly split on whether it is better to use automatically translated English input, matching the training data, or using the target language input as is. Using the text in English showed better performances in and four (Hindi, Spanish, French, Russian) and using the target language input was better in five languages (Korean, Tamil, Vietnamese, Afrikaans,
+
+and Hungarian). Using the target language yields a slightly better performance, questioning whether automatic translation maintain the politeness level.
+
+Large-scale language model, GPT3, even used in a zero-shot fashion without much prompt engineering (Gao et al., 2021) shows competitive performances, significantly outperforming the majority baseline. Similar to XLMR, using target language as is showed better performance than using translated text (70.6 vs. 66.8) on average, and in seven out of nine languages.
+
+Comparing performances across languages is tricky as the annotation was done by different sets of annotators on different items for each language. To put these numbers in context, we provide a comparison between estimated human performance and model performance in the next section. Would human agreement be lower on languages with weaker model performance?
+
+Comparison with human agreement To compute a comparable number between the annotators and models, we use our original 3-way annotated data before aggregating politeness score. We treat one annotator's label as the human prediction and consider the other two as references, taking their mean to get the gold politeness score. We repeat this random sampling process for each example in test set 1,000 times and plot the distribution of accuracy scores in Figure 3.
+
+Annotators shows varying degree of agreements – we notice a particularly stronger agreement in Korean and Hungarian, but overall we observe strong agreement, hovering around $90\%$ . Interestingly, models significantly underperform in these languages with high human agreement, making the gap between human and model performance large. Six out of the nine languages have a gap of at least $10\%$ , and two of them being greater than $15\%$ .
+
+
+Figure 3: Comparing the our best model accuracy (XLMR-target) vs. annotator accuracy on politeness prediction.
+
+# 4 Building and Analyzing Politeness Strategies in Nine Languages
+
+In this section, we develop a set of linguistic politeness strategies based on existing English strategies (Danescu-Niculescu-Mizil et al., 2013), and see how can explaining politeness phenomena in nine diverse languages we study. While politeness strategies are not necessary for building a high-performing classifier, it can be helpful to understand politeness phenomena.
+
+The original English study presents a list of politeness strategies along with each strategies relation to assigned politeness score. They found many statistically significant correlations between politeness strategies and human perception, such as words belonging to gratitude lexicon (appreciate), counterfactual modal (could/would) correlates with being polite, and starting the sentence with first person pronoun correlating with being impolite.
+
+Developing such a politeness lexicon for each language requires expert annotation, which can be infeasible for low-resource languages with a fewer language-proficient researchers (Joshi et al., 2020). Thus, we aim to automatically generate politeness strategies for other languages from the English ones. For this initial study, we focus on lexicon-based strategies (15 out of 20 strategies), excluding strategies involving dependency parsing.
+
+Mapping English Lexicon to Target Languages To build a politeness lexicon in nine languages, we use two NLP tools – translation and word alignments.
+
+We sample 5000 Wikipedia editor requests that are not included in our annotated data for each of nine languages. We first automatically translate target language sentence into English (with Google Translate API) and then align the words in the translated English sentence to the words in
+
+original sentence in the target language.
+
+Aligning words in parallel corpora has been longstanding task in NLP. Traditionally, alignments can be obtained as a byproduct of training statistical MT systems (Och and Ney, 2003; Dyer et al., 2013). Yet, this typically requires a large parallel corpus, which we lack for nine languages we study. We instead use alignment method using the similarity between token representations from multilingual pretrained language models (mBERT (Devlin et al., 2019)), fast-align (Dou and Neubig, 2021).
+
+For each word in English politeness lexicon, we collect their aligned word in the target language. As the alignments maps a sequence of words to a sequence of words, sometimes a single word English lexicon is mapped to multiple words in the target language. For each word in the English lexicon, we consider up to top five target language word sequences as its matching lexicon. We show examples of induced lexicon in Appendix E and full lexicon in the repository.
+
+As automatically generated lexicon can be imprecise for either incorrect translation or alignments, we manually inspected the generated lexicon in four languages for which we have language-proficient researchers. We found that the alignments were mostly reasonable, but erroneous and imprecise for words with multiple senses. Not every lexicon was mapped to foreign words either, we show the coverage statistics (average % of words in lexicon mapped to foreign language words), which hovers around $60 - 70\%$ , at the bottom of Figure 4.
+
+Analysis with Induced Lexicon Using automatically induced lexicon, we analyze our multilingual politeness data, mirroring the analysis from Danescu-Niculescu-Mizil et al. (2013). We report the average politeness score of sentences exhibiting each each strategy in Figure 4. The baseline value here would be 0. We observe that the average politeness score for each strategy across languages are somewhat consistent (e.g., PLEASE strategy being positively correlated in all languages except Spanish). The diverging patterns can be an error with strategy mapping and needs further investigation. Interestingly, in languages with lower model performance (Korean, Tamil), we observe more diverging patterns (e.g., indirect greeting having positive implications in these two languages while mildly negative in English). In the Appendix E, we include the occurrence of different strategies in different politeness quartiles (polite or impolite
+
+
+Figure 4: Induced politeness strategies and their relation to politeness scores in nine languages. We plot the average politeness score of a set of sentences containing corresponding strategy. Here, the baseline value is 0. The number of strategies covered by the induced lexicon is also mentioned for each language.
+
+
Model
en
fr
it
pt
Majority Baseline
80
Before Calib.
40.12
37.74
38.10
36.72
After Calib.
73.48
74.08
74.44
73.76
+
+subsets), which exhibits similar pattern.
+
+# 5 Transfer between Formality and Politeness
+
+While we are not aware of computational linguistic studies in politeness covering multiple languages, prior work (Briakou et al., 2021; Rao and Tetreault, 2018) has explored formality in four languages (English, French, Italian and Portuguese). In this section, we study the connections between the formality and politeness. Would formally written sentences perceived as more polite by our classifier?
+
+Table 4: Transfer from politeness to formality. Formality classification accuracy on X-FORMAL dataset.
+
+
en
hi
ko
es
ta
Before Calib.
0.537
0.5
0.5
0.5
0.5
After Calib.
0.557
0.612
0.588
0.644
0.564
fr
vi
ru
af
hu
Before Calib.
0.5
0.5
0.5
0.5
0.5
After Calib.
0.6
0.624
0.644
0.564
0.528
+
+Table 5: Transfer from formality to politeness. Politeness classification accuracy on TYDIP evaluation dataset.
+
+
Sentence
Formality
Politeness
Hey, I'm in NYC I'll help you out if your around!
Informal
Polite
I am in New York City. I will help you if you are nearby.
Formal
Polite
why do they try to sound british?
Informal
Impolite
Why do they attempt to sound British?
Formal
Impolite
+
+Table 6: Four example with annotated formality label and predicted politeness label. The formality labels are from Rao and Tetreault (2018) and the politeness labels assigned by our classifier.
+
+
Language
1 (polite | formal) = 1 (polite | informal)
p (polite | formal) > p (polite | informal)
English
0.811
0.682
French
0.775
0.702
Italian
0.764
0.734
Portuguese
0.779
0.696
+
+Table 7: Analysing politeness predictions on (informal, formal) sentence pairs. The left column represents the fraction of pairs for which the same politeness label is assigned to both sentences. The right column represents the fraction of pairs for which the classifier's probability of being polite for the formal sentence is higher than that of its informal counterparts.
+
+We use GYAFC (Rao and Tetreault, 2018) and X-FORMAL (Briakou et al., 2021), two datasets containing informal sentences from the L6 Yahoo Answers Corpus10 and four formal rewrites for each sentence (dataset statistics can be found in the Appendix G).
+
+In Table 4, we report zero-shot transfer results from politeness classifier to formality classification. We will use our best multilingual politeness classifier (XLMR-target) from Section 3. We calibrate the threshold of our politeness classifier to account for the different data distribution of positive and negative examples. Somewhat surprisingly, the classifier performs worse than the majority baseline. Table 5 shows performance numbers of transfer in the reverse direction, i.e from formality to politeness. We similarly finetune XLMR model on the English train set from GYAFC (Rao and Tetreault, 2018), and evaluated it on TYDIP evaluation dataset, using target language as an input. After the threshold calibration, the model performs better than the majority baseline, but substantially
+
+underperforms the in-domain performance reported in Table 3.
+
+Does this mean formality and politeness are not linked? Upon inspection (see Table 6 for examples), we find that politeness prediction for the informal and formal rewrites of the same sentence often stay consistent. Looking into the model's prediction on (informal, formal) sentence pairs, we find that almost $80\%$ of pairs in English have the same politeness prediction for both sentences. The left column in Table 7 depicts this across four languages, suggesting that politeness could be further linked to the content, not just style of the writing.
+
+In their original work, Rao and Tetreault (2018) report that commonly used techniques to make sentences formal include phrasal paraphrases, punctuation changes, expansions, contractions, capitalization and normalization which are fairly stylistic. Would such rewriting make sentences to be perceived more polite? We investigate this by further looking into (informal, formal) sentence pairs – for each version of the sentence in the pair, we compute their politeness probability (as assigned by the classifier) and report percentage of pairs where formal version of the same sentence were viewed as more polite than its impolite counterpart. The right column in Table 7 presents these results – for about $70\%$ examples, such rewriting indeed made the sentence perceived as more polite, despite often not enough to flip the politeness decision.
+
+# 6 Related Work
+
+Politeness & Formality Danescu-Niculescu-Mizil et al. (2013) presents the first quantitative, linguistic study of politeness, annotating two types of corpora – requests extracted from conversations between users on Wikipedia User Talk Pages and user comments from Stackoverflow. Followup work explored interpreting neural networks' politeness predictions (Aubakirova and Bansal, 2016) and controllable text generation with target politeness level (Sennrich et al., 2016; Niu and Bansal, 2018; Fu et al., 2020). While these work considers politeness phenomena in English, we expand it to study the phenomena in nine languages. A related concept to politeness is formality, studied in multiple prior work (Lahiri, 2016; Pavlick and Tetreault, 2016; Rao and Tetreault, 2018; Briakou et al., 2021).
+
+Multilingual Models Recent progresses in pretrained language models have brought better representation for multitude of languages. Multilin
+
+gual language models like mBERT (Devlin et al., 2019), XLMR (Conneau et al., 2020), based on the transformer architecture, are pretrained with the masked language modeling objective on a large amount of corpora (El-Kishky et al., 2020; Suarez et al., 2019) spanning over 100 languages. While the community also recognizes the varying quality of unlabeled data in a range languages (Caswell et al., 2022), such multilingual models provide improved representations for modeling low resource languages. When finetuned on downstream task data in a single language, these models make reasonable predictions in multiple languages (Wu and Dredze, 2019). Multilingual models have also been evaluated in a prompting setup for different tasks like Machine Translation (Tan et al., 2022) and different Multilingual NLU tasks (Zhao and Schütze, 2021; Lin et al., 2021; Winata et al., 2021).
+
+Multilingual Benchmarks Despite recent progresses in NLP resources and benchmarks, partially powered by affordable crowdsourcing (Snow et al., 2008), linguistic resources in low resource languages are still severely limited to compared to resources in English (Joshi et al., 2020). Many existing datasets are translated from English data (Connieu et al., 2018; Longpre et al., 2021). While translating approach for dataset construction have advantage of ensuring similar data distribution across languages, data collected in such fashion will not reflect the language usages of diverse population, introducing translationese which can be different from purely native text (Lembersky et al., 2011). We provide resources for nine typologically diverse languages, capturing a subtle phenomena of politeness.
+
+# 7 Conclusion
+
+We present TYDIP, a corpus of requests paired with its perceived politeness score spanning nine languages. We evaluate multiple multilingual models in zero-shot politeness prediction and find that they are able to perform well without being trained on data from the same language, while not reaching human-level performances yet.
+
+# Limitations
+
+Our dataset is moderately sized (250 examples per language in the evaluation portion, and a total of 500 examples per language) and still covers a limited number of languages. We had in
+
+tended to cover more languages (one example being Japanese), but this were hindered by the number of annotators we could recruit for each language.
+
+The aligned politeness strategy lexicon (Section 4) relies on multiple automatic toolkits (machine translation system and word alignments), thus analysis should be interpreted with caution.
+
+# Ethical Considerations
+
+The data we annotate comes from Wikipedia User Talk pages, which is an online forum for communication between editors on Wikipedia. This data spans nine different languages and contains speakers from different countries and demographics. The annotation is done by crowdworkers recruited from the online platform Prolific. These workers aren't restricted to a particular country. They are paid a wage of \(25.43/hr which is higher than the average pay stipulated on the platform. We use this data to evaluate an existing model across multiple languages, and do not use it for training as such.
+
+# Acknowledgements
+
+We would like to thank Yasumasa Onoe, Bernardo Oviedo, Gokul Anandaraman for their help in the earlier phase of the project development and inspecting data quality. We would also like to thank Cristian Danescu-Niculescu-Mizil for answering questions about his work. We'd like to thank Joel Tetreault and Yahoo for providing access to the formality datasets. We also thank Akari Asai for providing feedback on the paper. We'd like to thank Anuj Diwan for the thoughtful discussions on this topic and providing helpful feedback along the way.
+
+# References
+
+Malika Aubakirova and Mohit Bansal. 2016. Interpreting neural networks to improve politeness comprehension. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2035-2041, Austin, Texas. Association for Computational Linguistics.
+Eleftheria Briakou, Di Lu, Ke Zhang, and Joel Tetreault. 2021. Olá, bonjour, salve! XFORMAL: A benchmark for multilingual formality style transfer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3199-3216, Online. Association for Computational Linguistics.
+
+Penelope Brown and Stephen C Levinson. 1978. Universals in language usage: Politeness phenomena. In Questions and politeness: Strategies in social interaction, pages 56-311. Cambridge University Press.
+Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.
+Isaac Caswell, Julia Kreutzer, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Auguste Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, Monang Setyawan, Supheakmungkol Sarin, Sokhar Samb, Benoit Sagot, Clara Rivera, Annette Rios Gonzales, Isabel Papadimitriou, Salomey Osei, Pedro Ortiz Suarez, Iroro Orife, Kelechi Ogueji, Rubungo Andre Niyongabo, Toan Q. Nguyen, Mathias Muller, Andre e Muller, Shamsuddeen Hassan Muhammad, Nanda Firdausi Muhammad, Ayanda Mnyakeni, Jamshidbek Mirzakhalov, Tapiwanashe Matangira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, M. Jenny, Orhan Firat, Bonaventure F. P. Dossou, Sakhile Dlamini, Nisansa de Silva, Sakine cCabuk Balli, Stella Rose Biderman, Alessia Battisti, Ahmed Baruwa, Ankur Bapna, Pallavi N. Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, and Mofetoluwa Adeyemi. 2022. Quality at a glance: An audit of web-crawled multilingual datasets. Transactions of the Association for Computational Linguistics, 10:50-72.
+J. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. Tydi qa: A benchmark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics, 8:454-470.
+Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettle-moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
+Alexis Conneau, Rudy Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of
+
+the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.
+Cristian Danescu-Niculescu-Mizil, Moritz Sudhof, Dan Jurafsky, Jure Leskovec, and Christopher Potts. 2013. A computational approach to politeness with application to social factors. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 250-259, Sofia, Bulgaria. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2112-2128, Online. Association for Computational Linguistics.
+Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648, Atlanta, Georgia. Association for Computational Linguistics.
+Ahmed El-Kishky, Vishrav Chaudhary, Francisco Guzmán, and Philipp Koehn. 2020. A massive collection of cross-lingual web-document pairs. In EMNLP.
+Liye Fu, Susan Fussell, and Cristian Danescu-Niculescu-Mizil. 2020. Facilitating the communication of politeness through fine-grained paraphrasing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5127-5140, Online. Association for Computational Linguistics.
+Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816-3830, Online. Association for Computational Linguistics.
+Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan First, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. ArXiv, abs/2003.11080.
+
+Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282-6293, Online. Association for Computational Linguistics.
+Nada Korac-Kakabadse, Alexander Kouzmin, Andrew Korac-Kakabadse, and Lawson Savery. 2001. Low- and high-context communication patterns: towards mapping cross-cultural encounters. *Cross cultural management: An international journal*.
+Shibamouli Lahiri. 2016. Squinky! a corpus of sentence-level formality, informativeness, and implicature.
+Robin Lakoff. 1977. What you can do with words: Politeness, pragmatics and performatives. In Proceedings of the Texas conference on performatives, presuppositions and implicatures, pages 79-106. ERIC.
+Gennadi Lembersky, Noam Ordan, and Shuly Wintner. 2011. Language models for machine translation: Original vs. translated texts. Computational Linguistics, 38:799-825.
+Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettle-moyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, and Xian Li. 2021. Few-shot learning with multilingual language models.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach.
+S. Longpre, Yi Lu, and Joachim Daiber. 2021. Mkqa: A linguistically diverse benchmark for multilingual open domain question answering. Transactions of the Association for Computational Linguistics, 9:1389-1406.
+Sara Mills. 2003. Gender and politeness.
+Shuyo Nakatani. 2010. Language detection library for java.
+Tong Niu and Mohit Bansal. 2018. Polite dialogue generation without parallel data. Transactions of the Association for Computational Linguistics, 6:373-389.
+Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.
+Ellie Pavlick, Matt Post, Ann Irvine, Dmitry Kachaev, and Chris Callison-Burch. 2014. The language demographics of Amazon Mechanical Turk. Transactions of the Association for Computational Linguistics, 2:79-92.
+
+Ellie Pavlick and Joel Tetreault. 2016. An empirical analysis of formality in online communication. Transactions of the Association for Computational Linguistics, 4:61-74.
+Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 129-140, New Orleans, Louisiana. Association for Computational Linguistics.
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Controlling politeness in neural machine translation via side constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 35-40, San Diego, California. Association for Computational Linguistics.
+Diogo Silva, David Semento, and João Magalhaes. 2022. Polite task-oriented dialog agents: To generate or to rewrite? In WASSA.
+Rion Snow, Brendan T. O'Connor, Dan Jurafsky, and A. Ng. 2008. Cheap and fast - but is it good? evaluating non-expert annotations for natural language tasks. In EMNLP.
+Pedro Ortiz Suarez, Benoit Sagot, and Laurent Romary. 2019. Asynchronous pipeline for processing huge corpora on medium to low resource infrastructures.
+Zhixing Tan, Xiangwen Zhang, Shuo Wang, and Yang Liu. 2022. MSP: Multi-stage prompting for making pre-trained language models better translators. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6131-6142, Dublin, Ireland. Association for Computational Linguistics.
+Genta Indra Winata, Andrea Madotto, Zhaojiang Lin, Rosanne Liu, Jason Yosinski, and Pascale Fung. 2021. Language models are few-shot multilingual learners. In Proceedings of the 1st Workshop on Multilingual Representation Learning, pages 1-15, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of bert. In EMNLP.
+Mengjie Zhao and Hinrich Schütze. 2021. Discrete and soft prompting for multilingual models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8547-8555, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+
+
Raw Score
Aggregate Score
hi
0.27
(0.48)
0.41
(0.44)
ko
0.27
(0.48)
0.45
(0.46)
es
0.24
(0.48)
0.4
(0.42)
ta
0.25
(0.48)
0.49
(0.39)
fr
0.25
(0.46)
0.52
(0.35)
vi
0.29
(0.48)
0.46
(0.39)
ru
0.21
(0.52)
0.45
(0.4)
af
0.28
(0.48)
0.44
(0.44)
hu
0.22
(0.5)
0.48
(0.38)
+
+Table 9: Agreement with original English labels (Danescu-Niculescu-Mizil et al., 2013) annotated by speakers of various languages. Mean and std deviation (in brackets).
+
+# A Annotation UI
+
+Figure 5 contains the user interface used for the final annotation process.
+
+# B Example Requests
+
+Table 8 contains examples of requests in different languages and the politeness score assigned to them.
+
+# C Additional Inter Annotator Agreement Reports
+
+Figure 6 compares the overall IRR metrics on our annotations with the IRR on the annotations released by Danescu-Niculescu-Mizil et al. (2013) on English request data. They release the 5 way annotation done on their data and also a single score for each sentence after averaging and normalization. We report two scores in Table 9: a correlation with raw annotations and a correlation with the final aggregated scores.
+
+Figure 7 shows the distribution of the pairwise correlation metric over different HITs for each language. Each subplot has the distribution over the english and target language parts of each HIT, as well as a baseline method where the scores are shuffled before computing the correlation.
+
+The correlations in the random baseline are close to 0 and the correlations on the annotations are significantly higher. The correlations on the English annotations do show more variance in their distribution.
+
+# D Politeness Score Statistics
+
+Table 10 summarizes the distribution of scores across languages. All the languages have a mean close to 0, with similarly shaped distribution of
+
+scores. The minimum and maximum scores seem to vary a bit across languages. Some languages like Spanish have a higher median score and a higher number of sentences with a positive scores.
+
+# E Politeness Strategies
+
+Table 11 gives some examples of the politeness strategy lexicon we obtained by our automated method.
+
+# F Politeness Strategy Distribution
+
+Figure 8 showcases the occurrence of strategies in sentence belonging to the least polite (1st quartile) and most polite (4th quartile) subsections of our data. Cells shaded in light orange represent a baseline value of 0.25 and anything deviating from this appear in Dark Green or Red. We can clearly see difference across the the 2 quartiles for some of these strategies.
+
+# G Politeness to Formality Transfer
+
+We use the XLMR classifier trained in Section 3 and evaluate it on the mix of informal and formal sentences (1:4 ratio) as a test set. These performance numbers are shown in Table 4. We report the classifier Accuracy, as well as a majority baseline. Since we have an imbalanced mix of sentences, we decided to calibrate the classifier's threshold using the dev set. We get the probability for the 80th percentile of scores from the English dev set and use this on the test sets.
+
+# Politeness Annotation
+
+Please rate how polite the request comes across for you. Each request is a part of a conversation between editors on wikipedia. There can be multiple factors affecting the politeness level of a sentence, such as style and content.
+
+- Style: How did the requester phrase their request? The same content could be conveyed in both an impolite and polite manner. After reading the sentence, if you're able to think of a more polite way of saying the same thing, the style is not very polite
+Content: What is being asked in this request? For example, if the request is assuming the other editor made a mistake or violated rules intentionally, it less polite than one where editors are discussing what changes to make to a page. The content score will reflect the nature of the request
+
+You'll assign a single politeness score after considering all factors.
+
+A small fraction of the sentences may not be actual valid text. Simply mark those as invalid on the rightmost column.
+
+The task is spread over 4 pages. The first page contains 5 sentences in English and the remaining 3 pages contain 5 sentences each in a Foreign language.
+
+You can navigate between the pages using the Previous and Next Buttons.
+
+Before we start, please enter you Prolific ID
+
+Prolific ID:
+
+\\(en_sentence0)
+
+{$en_sentence1}
+
+{$en_sentence2}
+
+{$en_sentence3}
+
+{$en_sentence4}
+
+
+Page 1/4
+
+
+
+If you have any comments about any particular sentence or about the set of factors (style/content) for a particular sentence, please leave them below.
+
+
+
+Click the button below once you're done. If there are any missing fields, you'll be notified and you have to check the form again
+
+Figure 5: Annotation Interface
+
+
There is no desire to edit the written texts. Well, how do you like this paragraph?
-0.96
af
Jy maak aantygings van "veldtogte". Dalk isaar 'n balk in jou oog?
You make allegations of "campaigns". Maybe there is a beam in your eye?
-1.30
hu
Szia! Te jobban értesz a halakhoz, ántéznéd a
+követkeźć cytokeket?
Hi! You know more about fish, would you like to review the following articles?
0.82
+
+Table 8: Examples from TYDIP dataset. The politeness scale is from -3 (very impolite) to +3 (very polite).
+
+
Lang
# of Examples
Scores
% Positive
Mean
Std
Min
Max
hi
500
-0.0005
0.7756
-2.2745
1.7437
0.4980
ko
500
-0.0016
0.8659
-2.2760
2.0335
0.5320
es
500
-0.0007
0.8241
-2.6132
1.6320
0.5740
ta
500
0.0035
0.7546
-2.4319
1.8454
0.5540
fr
500
0.0003
0.8406
-2.6700
1.9268
0.5340
vi
500
-0.0019
0.7810
-2.0258
1.7338
0.4920
ru
500
0.0047
0.8234
-2.1938
2.2763
0.4880
af
500
0.0049
0.7845
-3.2678
2.1264
0.5200
hu
500
0.0027
0.8322
-2.3750
1.6705
0.5200
+
+
+Figure 6: Pairwise correlation metric on our annotations compared to the annotations released by Danescu-Niculescu-Mizil et al. (2013)
+
+Table 10: Statistics on Final Politeness Scores
+
+
Dataset
# Informal
# Formal
Rao and Tetreault (2018) English
2478
10992
Briakou et al. (2021) French
1000
4000
Italian
1000
4000
Portuguese
1000
4000
+
+Table 12: Statistics of Formality Data used for Evaluation (as test sets)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 7: Pairwise Correlation Metric over different HITs, along with Random Baseline value for different Languages
+
+
+
+
+
+
Strategy
Hindi
Korean
Spanish
Tamil
Please
+please
कूप्या
बामnio
por favor
लावली
HASHEDGE
+think
+apparently
+postulate
लामिा
SEXECHSI NAO
+ahmuleर्द
creo
+al parecer
+postular
मिलाकोत्तिकी रिलाकी
लावली
Deference
+great
महानानु
धन्द
gran
लावली
1st prsn pl
+we
हम
ुरिका
nos
लावली
Indirect
+hello
नमामस्थले
अन्तिा
hola
लावली
Direct
+so
ला
그름
लावली
+
+Table 11: Examples of politeness strategies lexicon gathered from our alignment method and then cherry picked by language proficient researcher.
+
+
+Figure 8: Presence of Politeness Strategies across the 1st and 4th quartiles (left and right plot) of data. Any cells deviating from the baseline value of 0.25 represent significant results
+
+
\ No newline at end of file
diff --git a/tydipadatasetforpolitenessclassificationinninetypologicallydiverselanguages/images.zip b/tydipadatasetforpolitenessclassificationinninetypologicallydiverselanguages/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..7b4802e364b8f8cfb2dc443afebd472b3090d2aa
--- /dev/null
+++ b/tydipadatasetforpolitenessclassificationinninetypologicallydiverselanguages/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:baab4dceeb9ae1959b8ca3e51e5a7b6659b3d5274a41cb1d7963ab7a7cd55819
+size 806155
diff --git a/tydipadatasetforpolitenessclassificationinninetypologicallydiverselanguages/layout.json b/tydipadatasetforpolitenessclassificationinninetypologicallydiverselanguages/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..47772e21da9cc1119b7545b8026e20f3bedbd451
--- /dev/null
+++ b/tydipadatasetforpolitenessclassificationinninetypologicallydiverselanguages/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3cd71943076d717c061e95481e79cf467e7b34616bf88c770a13a7fecce71170
+size 392868
diff --git a/uncertaintyquantificationwithpretrainedlanguagemodelsalargescaleempiricalanalysis/58beb317-fed7-4c29-a411-800b6a1278ec_content_list.json b/uncertaintyquantificationwithpretrainedlanguagemodelsalargescaleempiricalanalysis/58beb317-fed7-4c29-a411-800b6a1278ec_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..26d07924262cf6ce64be5ac6fe704b4e2eaf507f
--- /dev/null
+++ b/uncertaintyquantificationwithpretrainedlanguagemodelsalargescaleempiricalanalysis/58beb317-fed7-4c29-a411-800b6a1278ec_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:affae968bd3d30794439b3e30b26bb15b2ad86b9f319b5b68106713190602a55
+size 75124
diff --git a/uncertaintyquantificationwithpretrainedlanguagemodelsalargescaleempiricalanalysis/58beb317-fed7-4c29-a411-800b6a1278ec_model.json b/uncertaintyquantificationwithpretrainedlanguagemodelsalargescaleempiricalanalysis/58beb317-fed7-4c29-a411-800b6a1278ec_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..83753969c800d4b3281c855fe71d8d064694ae41
--- /dev/null
+++ b/uncertaintyquantificationwithpretrainedlanguagemodelsalargescaleempiricalanalysis/58beb317-fed7-4c29-a411-800b6a1278ec_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d960313e9a48e89d23a98b2f847e54aacf29513f0f7afb8cb4c9770cbcd3a4ce
+size 103636
diff --git a/uncertaintyquantificationwithpretrainedlanguagemodelsalargescaleempiricalanalysis/58beb317-fed7-4c29-a411-800b6a1278ec_origin.pdf b/uncertaintyquantificationwithpretrainedlanguagemodelsalargescaleempiricalanalysis/58beb317-fed7-4c29-a411-800b6a1278ec_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..82168627ae2e21985ef6bb4032689b5417a62c58
--- /dev/null
+++ b/uncertaintyquantificationwithpretrainedlanguagemodelsalargescaleempiricalanalysis/58beb317-fed7-4c29-a411-800b6a1278ec_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e028ce990953d358f1e5f4af5cb513e9032ad9363e34f478df8e63e3b8c4b11f
+size 416465
diff --git a/uncertaintyquantificationwithpretrainedlanguagemodelsalargescaleempiricalanalysis/full.md b/uncertaintyquantificationwithpretrainedlanguagemodelsalargescaleempiricalanalysis/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f8ab0bce06534f83c42083e9673480c614573aac
--- /dev/null
+++ b/uncertaintyquantificationwithpretrainedlanguagemodelsalargescaleempiricalanalysis/full.md
@@ -0,0 +1,357 @@
+# Uncertainty Quantification with Pre-trained Language Models: A Large-Scale Empirical Analysis
+
+Yuxin Xiao $^{1}$ , Paul Pu Liang $^{2}$ , Umang Bhatt $^{3}$ , Willie Neiswanger $^{4}$ , Ruslan Salakhutdinov $^{2}$ , Louis-Philippe Morency $^{2}$
+
+1Massachusetts Institute of Technology, 2Carnegie Mellon University,
+
+3University of Cambridge, 4Stanford University
+
+1yuxin102@mit.edu 2 {pliang,rsalakhu,morency}@cs.cmu.edu
+
+3usb20@cam.ac.uk 4neiswanger@cs.stanford.edu
+
+# Abstract
+
+Pre-trained language models (PLMs) have gained increasing popularity due to their compelling prediction performance in diverse natural language processing (NLP) tasks. When formulating a PLM-based prediction pipeline for NLP tasks, it is also crucial for the pipeline to minimize the calibration error, especially in safety-critical applications. That is, the pipeline should reliably indicate when we can trust its predictions. In particular, there are various considerations behind the pipeline: (1) the choice and (2) the size of PLM, (3) the choice of uncertainty quantifier, (4) the choice of fine-tuning loss, and many more. Although prior work has looked into some of these considerations, they usually draw conclusions based on a limited scope of empirical studies. There still lacks a holistic analysis on how to compose a well-calibrated PLM-based prediction pipeline. To fill this void, we compare a wide range of popular options for each consideration based on three prevalent NLP classification tasks and the setting of domain shift. In response, we recommend the following: (1) use ELECTRA for PLM encoding, (2) use larger PLMs if possible, (3) use Temp Scaling as the uncertainty quantifier, and (4) use Focal Loss for fine-tuning.
+
+# 1 Introduction
+
+PLMs (Qiu et al., 2020; Min et al., 2021) have achieved state-of-the-art performance on a broad spectrum of NLP benchmarks (Rajpurkar et al., 2016, 2018; Wang et al., 2019a,b) and are increasingly popular in various downstream applications such as question answering (Yoon et al., 2019; Garg et al., 2020), text classification (Arslan et al., 2021; Limsopatham, 2021), and relation extraction (Zhou et al., 2021; Xiao et al., 2022). Consequently, it is paramount for PLMs to faithfully communicate when to (or not to) rely on their predictions for decision-making, especially in high-stakes scenarios. In these cases, we need PLMs to quantify their uncertainty accurately and calibrate well (Abdar
+
+et al., 2021), meaning that their predictive confidence should be a valid estimate of how likely they are to make a correct prediction. Consider an example of medical question answering (Yoon et al., 2019; Zhang et al., 2021) where a PLM is asked to assist doctors when diagnosing diseases. If the PLM is $90\%$ sure that a patient is healthy, the predicted outcome should occur $90\%$ of the time in practice. Otherwise, it may adversely affect doctors' judgment and lead to catastrophic consequences. Hence, since PLMs have become the de facto paradigm for many NLP tasks, it is necessary to assess their calibration quality.
+
+When constructing a well-calibrated PLM-based prediction pipeline for NLP tasks, various considerations are involved. To name a few:
+
+1. Due to the use of diverse pre-training datasets and strategies, different PLMs may behave differently regarding calibration.
+2. The model size of PLMs may also affect their capability in calibration.
+3. Leveraging uncertainty quantifiers (e.g., Temp Scaling (Guo et al., 2017) and MC Dropout (Gal and Ghahramani, 2016)) alongside PLMs in the pipeline may reduce calibration error.
+4. Some losses (e.g., Focal Loss (Mukhoti et al., 2020) and Label Smoothing (Müller et al., 2019)) may fine-tune PLMs to calibrate better.
+
+Although some of these considerations have been studied before, the ideal choice for each consideration remains obscure. On the one hand, Desai and Durrett (2020) report unconventional calibration behavior for PLMs, which casts doubts on the prior beliefs drawn on traditional neural networks by Guo et al. (2017). On the other hand, existing work (Desai and Durrett, 2020; Dan and Roth, 2021) on PLMs' empirical calibration performance often looks at a single consideration and concludes by comparing only one or two types of PLMs.
+
+Therefore, in this paper, we present a comprehensive analysis of the four pivotal considerations
+
+introduced above via large-scale empirical evaluations. To ensure that our analysis is applicable to various NLP tasks and resilient to domain shift, we set up three NLP tasks (i.e., Sentiment Analysis, Natural Language Inference, and Commonsense Reasoning) and prepare both in-domain and out-of-domain testing sets for each task. In addition to the explicit metrics of prediction and calibration error, we also utilize two evaluation tasks to examine calibration qualities implicitly. Selective prediction lowers prediction error by avoiding uncertain testing points, and out-of-domain detection checks if a pipeline is less confident on unseen domains. By comparing four to five options for each consideration, we recommend the following:
+
+1. Use ELECTRA (Clark et al., 2020) as the PLM to encode input text sequences.
+2. Use the larger version of a PLM if possible.
+3. Use Temp Scaling (Guo et al., 2017) for post hoc uncertainty recalibration.
+4. Use Focal Loss (Mukhoti et al., 2020) during the fine-tuning stage.
+
+Compared to prior work, our extensive empirical evaluations also reveal the following novel observations that are unique to PLM-based pipelines:
+
+- The calibration quality of PLMs is relatively consistent across tasks and domains, except XLNet (Yang et al., 2019) being the most vulnerable to domain shift.
+- In contrast to other NLP tasks, larger PLMs are better calibrated in-domain in Common-sense Reasoning.
+- Uncertainty quantifiers (e.g., Temp Scaling) are generally more effective in improving calibration out-of-domain.
+- Ensemble (Lakshminarayanan et al., 2017) is less effective in PLM-based pipelines.
+
+To encourage future work towards better uncertainty quantification in NLP, we release our code and large-scale evaluation benchmarks containing 120 PLM-based pipelines based on four metrics (prediction and calibration error, selective prediction, and out-of-domain detection). These pipelines consist of distinct choices concerning the four considerations and are tested on all three NLP tasks under both in- and out-of-domain settings.
+
+# 2 Background
+
+# 2.1 Problem Formulation
+
+Datasets. In this work, we focus on utilizing PLMs for NLP classification tasks. More specifically, consider such a task where the training set $\mathbb{D}_{\mathrm{train}} = \{(x_i,y_i)\}_{i = 1}^{N_{\mathrm{train}}}$ consists of pairs of a text sequence $x_{i}\in \mathcal{X}_{\mathrm{in}}$ and an associated label $y_{i}\in \mathcal{V}$ . Similarly, the validation set $\mathbb{D}_{\mathrm{val}}$ and the in-domain testing set $\mathbb{D}_{\mathrm{in}}$ come from the same domain $\mathcal{X}_{\mathrm{in}}$ and share the same label space $\mathcal{V}$ . We also prepare an out-of-domain testing set $\mathbb{D}_{\mathrm{out}}$ , which differs from the others by coming from a distinct domain $\mathcal{X}_{\mathrm{out}}$ .
+
+PLM-based Pipeline. We apply a PLM $M$ to encode an input text sequence $x_{i}$ and feed the encoding vector to a classifier $F$ , which outputs a predictive distribution $\mathbf{u}_i$ over the label space $\mathcal{V}$ via the softmax operation. Here, parameters in $M$ and $F$ are fine-tuned by minimizing a loss function $\ell$ on $\mathbb{D}_{\mathrm{train}}$ . It is optional to modify the distribution $\mathbf{u}_i$ post hoc by an uncertainty quantifier $Q$ to reduce calibration error. We define the predicted label as $\hat{y}_i = \arg \max_{j\in \{1,\dots,|\mathcal{V}|\}}\mathbf{u}_{ij}$ with the corresponding confidence $\hat{c}_i = \mathbf{u}_{i\hat{y}_i}$ .
+
+Calibration. One crucial goal of uncertainty quantification is to improve calibration. That is, the predicted confidence should match the empirical likelihood: $P(y_{i} = \hat{y}_{i} \mid \hat{c}_{i}) = \hat{c}_{i}$ . We follow Guo et al. (2017) by using the expected calibration error (ECE) to assess the calibration performance. The calculation of ECE is described in Section 3.1. To reduce ECE, our main experimental evaluation lies in examining four considerations involved in a PLM-based pipeline: (1) the choice of PLM $M$ (Section 3), (2) the size of PLM $M$ (Section 4), (3) the choice of uncertainty quantifier $Q$ (Section 5), and (4) the choice of loss function $\ell$ (Section 6).
+
+# 2.2 Related Work
+
+Uncertainty quantification has drawn long-lasting attention from various domains (Bhatt et al., 2021), such as weather forecasting (Brier et al., 1950; Raftery et al., 2005), medical practice (Yang and Thompson, 2010; Jiang et al., 2012), and machine translation (Ott et al., 2018; Zhou et al., 2020; Wei et al., 2020). Researchers have approached this question from both Bayesian (Kendall and Gal, 2017; Depeweg et al., 2018) and frequentist perspectives (Alaa and Van Der Schaar, 2020a,b). They have also proposed different techniques to improve uncertainty calibration for classification (Kong et al., 2020; Krishnan and Tickoo, 2020) and
+
+regression (Kuleshov et al., 2018; Cui et al., 2020; Chung et al., 2021) tasks. Recent work has investigated connections between uncertainty and other properties, such as model interpretability (Antoran et al., 2021; Ley et al., 2022), selective prediction (Xin et al., 2021; Varshney et al., 2022a,b), and out-of-domain generalization (Wald et al., 2021; Qin et al., 2021).
+
+PLMs (Qiu et al., 2020; Min et al., 2021) have achieved state-of-the-art prediction performance on diverse NLP benchmarks (Rajpurkar et al., 2016, 2018; Wang et al., 2019a,b) and demonstrated many desired properties like stronger out-of-domain robustness (Hendrycks et al., 2020) and better uncertainty calibration (Desai and Durrett, 2020). They typically leverage a Transformer architecture (Vaswani et al., 2017) and are pre-trained by self-supervised learning (Jaiswal et al., 2021).
+
+Although Guo et al. (2017) report that larger models tend to calibrate worse, PLMs have been shown to produce well-calibrated uncertainty in practice (Desai and Durrett, 2020), albeit for giant model sizes. Their unusual calibration behavior puts the observations drawn on traditional neural networks (Ovadia et al., 2019; Mukhoti et al., 2020) or pre-trained vision models (Minderer et al., 2021) in doubt. Prior work (Desai and Durrett, 2020; Dan and Roth, 2021) on the calibration of PLMs often explores only one or two types of PLMs and ignores uncertainty quantifiers and fine-tuning losses beyond Temp Scaling and Cross Entropy, respectively. As a result, there lacks a holistic analysis that explores the full set of these considerations in a PLM-based pipeline. Therefore, our paper aspires to fill this void via extensive empirical studies.
+
+# 3 Which Pre-trained Language Model?
+
+# 3.1 Experiment Setup
+
+To evaluate the calibration performance of PLMs, we consider a series of NLP classification tasks:
+
+1. Sentiment Analysis identifies the binary sentiment of a text sequence. We treat the IMDb movie review dataset (Maas et al., 2011) as indomain and the Yelp restaurant review dataset (Zhang et al., 2015) as out-of-domain.
+
+2. Natural Language Inference predicts the relationship between a hypothesis and a premise. We regard the Multi-Genre Natural Language Inference (MNLI) dataset (Williams et al., 2018) covering a range of genres of spoken and written text as in-domain and the Stanford
+
+
Sentiment Analysis
Natural Language Inference
Commonsense Reasoning
xin
IMDb
MNLI
SWAG
xout
Yelp
SNLI
HellaSWAG
|y|
2
3
4
|Dtrain|
25,000
392,702
73,546
|Dval|
12,500
4,907
10,003
|Din|
12,500
4,908
10,003
|Dout|
19,000
4,923
5,021
+
+Table 1: In- and out-of-domain datasets, label space size, and each data split size of the three NLP tasks.
+
+
Hugging Face Name
Model Size
Pre-training Corpus Size
Pre-training Task
bert-base-cased
109M
16G
Masked LM, NSP
xlnet-base-cased
110M
161G
Permuted LM
electra-base-discriminator
110M
161G
Replacement Detection
roberta-base
125M
161G
Dynamic Masked LM
deberta-base
140M
85G
Dynamic Masked LM
bert-large-cased
335M
16G
Masked LM, NSP
xlnet-large-cased
340M
161G
Permuted LM
electra-large-discriminator
335M
161G
Replacement Detection
roberta-large
335M
161G
Dynamic Masked LM
deberta-large
350M
85G
Dynamic Masked LM
+
+Table 2: Model size, pre-training corpus size, and pretraining task of the five PLMs, separated into the base (upper) and the large (lower) versions.
+
+Natural Language Inference (SNLI) dataset (Bowman et al., 2015) derived from image captions only as out-of-domain.
+
+3. Commonsense Reasoning determines the most reasonable continuation of a sentence among four candidates. We view the Situations With Adversarial Generations (SWAG) dataset (Zellers et al., 2018) as in-domain and its adversarial variant (HellaSWAG) (Zellers et al., 2019) as out-of-domain.
+
+For each task, we construct $\mathbb{D}_{\mathrm{train}}$ , $\mathbb{D}_{\mathrm{val}}$ , and $\mathbb{D}_{\mathrm{in}}$ from the corresponding in-domain dataset, and $\mathbb{D}_{\mathrm{out}}$ from the corresponding out-of-domain dataset. The original validation set of each dataset is split in half randomly to form a held-out non-blind testing set (i.e., $\mathbb{D}_{\mathrm{in}}$ or $\mathbb{D}_{\mathrm{out}}$ ). Table 1 describes the task details.
+
+To understand which PLM delivers the lowest calibration error, we examine five popular options:
+
+1. BERT (Devlin et al., 2019) utilizes a bidirectional Transformer architecture pre-trained by masked language modeling (LM) and next sentence prediction (NSP).
+2. XLNet (Yang et al., 2019) proposes a two-stream self-attention mechanism and a pretraining objective of permuted LM.
+3. ELECTRA (Clark et al., 2020) pre-trains a
+
+
+
+
+(a) In-Domain and Out-Of-Domain Calibration of PLMs
+
+
+(b) In-Domain and Out-Of-Domain Prediction of PLMs
+
+
+(c) In-Domain and Out-Of-Domain Selective Prediction of PLMs
+
+
+(d) In-Domain Calibration vs Out-Of-Domain Calibration
+Figure 1: Calibration and (selective) prediction performance of five PLMs in three NLP tasks under two domain settings. The calibration quality of the five PLMs is relatively consistent across tasks and domains, while XLNet is the least robust to domain shift. ELECTRA stands out due to its lowest scores in ECE, prediction error, and RPP.
+
+discriminative model to detect tokens replaced by a generative model.
+
+4. RoBERTa (Liu et al., 2019) builds on BERT by pre-training based on dynamic masked LM only and tuning key hyperparameters.
+5. DeBERTa (He et al., 2020) further improves RoBERTa via a disentangled attention mechanism and an enhanced mask decoder.
+
+We use the base version of each PLM, which has a similar model size and is initialized from the corresponding Hugging Face (Wolf et al., 2020)
+
+pre-trained checkpoint. Table 2 details these PLMs. After receiving the encoding vector of the classification token [CLS] for an input text sequence from the PLM, we pass it through a classifier to obtain a predictive distribution. Regarding the classifier configuration, we follow the default practice in Hugging Face by utilizing a two-layer neural network with tanh non-linear activation.
+
+The learning rate for each model-dataset combination is tuned based on the validation set among $\{5\mathrm{e} - 6,1\mathrm{e} - 5,2\mathrm{e} - 5,5\mathrm{e} - 5\}$ . We leverage AdamW
+
+
+
+
+(a) In-Domain Calibration of PLMs of Different Sizes
+
+
+(b) Out-Of-Domain Calibration of PLMs of Different Sizes
+
+
+(c) In-Domain Prediction of PLMs of Different Sizes
+
+
+(d) Out-Of-Domain Classification Error of PLMs of Different Sizes
+
+
+(e) Out-Of-Domain Calibration vs Out-Of-Domain Prediction
+Figure 2: Calibration and prediction performance of large and base PLMs in three NLP tasks under two domain settings. Larger PLMs calibrate better than their respective base versions when evaluated out-of-domain, while calibrating slightly worse in-domain with one exception in Commonsense Reasoning. If the computational budget permits, larger PLMs constitute more powerful pipelines given their lower out-of-domain ECE along with lower prediction error. We also observe a positive correlation between calibration and prediction error out-of-domain.
+
+(Loshchilov and Hutter, 2018) to minimize the cross-entropy loss on $\mathbb{D}_{\mathrm{train}}$ for five epochs with early stopping and a linearly decaying scheduler
+
+(Goyal et al., 2017) whose warm-up ratio $= 10\%$ . Batch size is 16, and the model gradients are clipped to a maximum norm of 1. We perform
+
+our experiments on a Tesla A6000 GPU and report the mean and one standard error by conducting six trials with different seeds.
+
+To explicitly evaluate calibration performance by ECE, we first stratify $N$ predictions into $K$ bins of equal width based on the sorted confidence values. Then ECE is a weighted average of the absolute difference between the accuracy and confidence of each bin: $\mathrm{ECE} = \sum_{k=1}^{K} \frac{|B_k|}{N} |\mathrm{acc}(B_k) - \mathrm{conf}(B_k)|$ , where $\mathrm{acc}(B_k)$ and $\mathrm{conf}(B_k)$ are the average accuracy and confidence of predictions in bin $B_k$ , respectively. We set $K = 10$ in our experiments.
+
+To implicitly assess calibration quality based on selective prediction, we deploy the metric of reversed pair proportion (RPP) (Xin et al., 2021). More specifically, for a dataset of size $N$ , $\mathrm{RPP} = \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\mathbb{1}[\hat{c}_i < \hat{c}_j, y_i = \hat{y}_i, y_j \neq \hat{y}_j]$ . It measures the proportion of prediction pairs with a reversed confidence-error relationship. A lower RPP indicates that the pipeline is more confident on correct predictions.
+
+# 3.2 Empirical Findings
+
+As shown in Figure 1(a), the calibration performance of all five PLMs deteriorates from indomain to out-of-domain. This phenomenon coincides with the finding made by Ovadia et al. (2019) on traditional neural networks. In addition, the ranking among the five PLMs based on ECE is generally consistent, which implies that their calibration quality is transferable across tasks and domains. More specifically, for all three tasks under the in-domain setting, XLNet, ELECTRA, RoBERTa, and DeBERTa outperform BERT in terms of lower ECE, suggesting that a larger pre-training corpus may improve the calibration quality (see Table 2). When moving to the out-of-domain setting, XLNet sees the largest increase in ECE, which makes it an outlier in Figure 1(d). This observation may indicate that the pre-training task of permuted LM is vulnerable to domain shift.
+
+ELECTRA stands out among the five examined PLMs in encoding input text sequences. Not only does it achieve the (comparably) lowest ECE in all three tasks under both in- and out-of-domain settings, it also delivers the lowest prediction error in Figure 1(b) and the lowest RPP for selective prediction in Figure 1(c). We hypothesize its success to the unique pre-training paradigm of replaced token detection, which preserves the token distribution by avoiding the artificial [MASK]
+
+tokens in masked LM and enhances the computational efficiency by learning from all input tokens.
+
+# 4 What Model Size?
+
+# 4.1 Experiment Setup
+
+To investigate how the size of PLMs affects the calibration performance, we compare the large versions of the five PLMs mentioned in Section 3.1 against their respective base versions. We keep the rest of the setup the same as in Section 3.1.
+
+# 4.2 Empirical Findings
+
+Figures 2(a) and (b) demonstrate that larger PLMs tend to produce a slightly higher ECE compared to their respective base versions when evaluated in-domain, while calibrating better out-of-domain. This observation based on five PLMs verifies the conclusion made by Dan and Roth (2021) solely based on BERT. However, there is a notable exception that larger PLMs are significantly better calibrated in-domain in Commonsense Reasoning than their respective base versions, which implies that larger PLMs are more aware of their uncertainties during the reasoning process.
+
+Larger PLMs constitute more powerful PLM-based pipelines, if computational budget permits. Although sometimes they suffer slightly in in-domain calibration compared to their smaller counterparts, larger PLMs achieve a lower ECE out-of-domain. They also deliver lower in- and out-of-domain prediction errors in Figures 2(c) and (d), respectively. In addition, we observe a positive correlation between calibration and prediction errors under the out-of-domain setting in Figure 2(e), suggesting that pipelines calibrating well out-of-domain are more accurate under domain shift as well. This reflects the finding in Wald et al. (2021) that multi-domain calibration leads to better out-of-domain prediction performance.
+
+# 5 Which Uncertainty Quantifier?
+
+# 5.1 Experiment Setup
+
+As discussed in Section 2.1, we can further adjust the vanilla predictive distribution post hoc via an uncertainty quantifier. Therefore, we study four uncertainty quantifiers based on the setup in Section 3.1 to inspect which improve the calibration performance in our problem formulation:
+
+1. Temp Scaling (Guo et al., 2017) learns a scalar parameter $T_{\mathrm{temp}}$ based on $\mathbb{D}_{\mathrm{val}}$ and "soft-
+
+
+
+
+(a) Change in In-Domain and Out-Of-Domain Calibration due to Uncertainty Quantifiers
+
+
+(b) Change in In-Domain and Out-Of-Domain Prediction due to Uncertainty Quantifiers
+Figure 3: Change in calibration and prediction performance due to the use of four uncertainty quantifiers. The effectiveness of these quantifiers in reducing ECE follows the descending order of Temp Scaling, MC Dropout, Ensemble, and LL SVI. The drop in ECE is more significant out-of-domain. Temp Scaling is the most compelling fine-tuning loss due to its largest reduction in ECE, preservation of prediction results, and little computational cost.
+
+ens" the vanilla logit output with $T_{\mathrm{temp}}$ to obtain a new predictive distribution.
+
+2. MC Dropout (Gal and Ghahramani, 2016) approximates the expectation of a posterior predictive distribution by averaging $T_{\mathrm{mc}}$ forward passes with dropout turned on.
+3. Ensemble (Lakshminarayanan et al., 2017) averages the predictive distributions of $T_{\mathrm{en}}$ independently trained models.
+4. LL SVI (Last-Layer Stochastic Variational Inference) (Blundell et al., 2015) implements variational layers with reparameterized Monte Carlo estimators based on the Bayesian-Torch package (Krishnan et al., 2022). It approximates the expectation of a posterior predictive distribution by averaging $T_{\mathrm{svi}}$ forward passes through the Bayesian classification layers.
+
+Here, we follow Lakshminarayanan et al. (2017) by setting $T_{\mathrm{en}} = 5$ . We use $T_{\mathrm{mc}} = 10$ and $T_{\mathrm{svi}} = 50$ due to computational constraints during inference. The dropout rate in MC Dropout is the same as the default dropout rate of each PLM.
+
+# 5.2 Empirical Findings
+
+In Figure 3, we plot the change in calibration and prediction performance due to the use of uncer
+
+tainty quantifiers compared to the vanilla results in Section 4.1. The improvement in calibration is more significant out-of-domain. More specifically, the degree to which these quantifiers decrease ECE follows the descending order of Temp Scaling, MC Dropout, Ensemble, and LL SVI. In fact, LL SVI even hurts the calibration in terms of an increase in ECE, suggesting that variational classifiers with reparameterized Monte Carlo estimators cannot capture uncertainties well when used only at the fine-tuning stage. Unlike Ovadia et al. (2019), we find Ensemble less effective in PLM-based pipelines, possibly because individual learners in Ensemble are initialized from the same pre-trained model checkpoint and, consequently, the strong correlation among them limits the power of Ensemble (Liu and Yao, 1999).
+
+Meanwhile, Temp Scaling preserves prediction results, and Ensemble lowers prediction error, as expected. Although MC Dropout and LL SVI reduce the prediction error out-of-domain in Common-sense Reasoning by producing sharper predictive distributions, they usually end up being overconfident, which leads to the rise in ECE in Figure 3(a).
+
+Temp Scaling is the most appropriate uncertainty quantifier for PLM-based pipelines. Com
+
+
+
+
+(a) In-Domain and Out-Of-Domain Calibration of BERT-Base Fine-tuned with Different Losses
+
+
+(b) Out-Of-Domain Detection of BERT-Base Fine-tuned with Different Losses
+Figure 4: Calibration and out-of-domain detection performance of BERT base models fine-tuned by five losses. Focal Loss, Label Smoothing, and MMCE are more capable of fine-tuning well-calibrated models compared to Cross Entropy and Brier Loss. Focal Loss is the best option due to its competitively low ECE and FAR95.
+
+pared to LL SVI, Temp Scaling diminishes ECE and maintains the competitive prediction quality of PLMs. Moreover, the post hoc recalibration manner of Temp Scaling adds little to the computational burden. In contrast, Ensemble or MC Dropout significantly increases the computational cost during fine-tuning or inference, respectively. Note that this distinction is of great importance given the enormous computational burdens of PLMs.
+
+# 6 Which Fine-tuning Loss?
+
+# 6.1 Experiment Setup
+
+Besides cross-entropy loss, we consider four other losses when fine-tuning a BERT base model and compare their calibration performance based on the setup in Section 3.1.
+
+1. Cross Entropy (Good, 1952) is the negative log likelihood of ground-truth classes.
+2. Brier Loss (Brier et al., 1950) is the squared difference between predictive distributions and one-hot ground-truth vectors.
+3. Focal Loss (Mukhoti et al., 2020) applies a modulating term to cross-entropy loss to focus model learning on hard misclassified samples.
+4. Label Smoothing (Müller et al., 2019) produces targeting distributions by allocating probability mass to non-ground-truth classes.
+
+5. MMCE (Maximum Mean Calibration Error) (Kumar et al., 2018) is a differentiable proxy to regularize calibration error, usually used alongside cross-entropy loss.
+
+We use a smoothing factor of 0.1, and follow the practice in Mukhoti et al. (2020) by setting the focal hyperparameter to 5 when the predictive probability for the ground-truth class $\in [0,0.2)$ and to 3 when the probability $\in [0.2,1]$ .
+
+In addition, we leverage out-of-domain detection to implicitly examine the quality of uncertainty quantification. We want models to be less confident on $\mathbb{D}_{\mathrm{out}}$ than on $\mathbb{D}_{\mathrm{in}}$ and, hence, report the false alarm rate at $95\%$ recall (FAR95) (Hendrycks et al., 2020). This metric tells the ratio of samples in $\mathbb{D}_{\mathrm{in}}$ whose confidence is lower than the 95th percentile of samples in $\mathbb{D}_{\mathrm{out}}$ .
+
+# 6.2 Empirical Findings
+
+As shown in Figure 4(a), Label Smoothing, Focal Loss, and MMCE generate better-calibrated BERT base models compared to Cross Entropy and Brier Loss. While models fine-tuned by Cross Entropy, Focal Loss, or MMCE calibrate better indomain, Brier Loss and Label Smoothing enjoy a decrease in ECE when evaluated out-of-domain. This observation matches the findings in Desai and Durrett (2020); Dan and Roth (2021) and is in
+
+tutive for Label Smoothing since it deliberately alleviates overconfidence during fine-tuning.
+
+Focal Loss is the most compelling fine-tuning loss for PLM-based pipelines. Among the five examined options, Focal Loss delivers competitively low ECE, both in- and out-of-domain for all three tasks. Moreover, it scores the lowest in FAR95, as illustrated in Figure 4(b), meaning that models fine-tuned by Focal Loss are most alert to domain shift. We note that FAR95 scores are relatively high in Sentiment Analysis and Natural Language Inference, probably because these pipelines also predict well out-of-domain in Figure 2(d).
+
+# 7 Conclusion
+
+In this paper, we contribute a comprehensive analysis on how to reduce calibration error in a PLM-based pipeline. We establish four key considerations behind the pipeline and compare a broad range of prevalent options for each consideration. Our empirical evaluations consist of three distinct NLP classification tasks and two different domain settings. Based on our large-scale systematic analysis, we recommend the following:
+
+1. Use ELECTRA for PLM encoding.
+2. Use larger PLMs if possible.
+3. Use Temp Scaling for post hoc recalibration.
+4. Use Focal Loss during the fine-tuning stage.
+
+Compared to existing work, we also observe the following novel phenomena that are unique to PLM-based pipelines:
+
+- The relative calibration quality of PLMs is consistent in general across tasks and domains, with an exception of XLNet, which is the least robust to domain shift.
+- Larger PLMs are better calibrated under the indomain setting in Commonsense Reasoning, unlike in the other NLP tasks.
+- Uncertainty quantifiers are generally more effective in improving calibration performance under the out-of-domain setting.
+- Ensemble is less effective in reducing calibration error when used with PLM-based pipelines, despite their convincing performance with traditional models.
+
+# 8 Limitation
+
+Due to computational constraints, we are unable to pre-train PLMs from scratch with other combinations of pre-training corpora and tasks. Consequently, while our analysis is applicable to existing
+
+widely-used PLMs, we do not claim its generalization to new combinations of pre-training corpora and tasks. We believe that this does not invalidate our claims which are primarily targeted toward real-world practitioners using existing PLMs. It is possible that techniques catering to the special needs of PLM-based pipelines (Kong et al., 2020) can mitigate calibration error further.
+
+Moreover, although our setup involves domain shift, we do not focus on inspecting how the degree of domain shift affects the calibration performance of PLM-based pipelines. It is also interesting to consider how to construct a well-calibrated PLM-based pipeline for other types of NLP tasks such as cross-lingual text classification and generation, which we leave to future work.
+
+# Acknowledgements
+
+This material is based upon work partially supported by National Science Foundation (Awards #1722822 and #1750439) and National Institutes of Health (Awards #R01MH125740, #R01MH096951, and #U01MH116925). PPL is partially supported by a Facebook PhD Fellowship and a Carnegie Mellon University's Center for Machine Learning and Health Fellowship. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsors, and no official endorsement should be inferred.
+
+# References
+
+Moloud Abdar, Farhad Pourpanah, Sadiq Hussain, Dana Rezazadegan, Li Liu, Mohammad Ghavamzadeh, Paul Fieguth, Xiaochun Cao, Abbas Khosravi, U Rajendra Acharya, et al. 2021. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Information Fusion.
+
+Ahmed Alaa and Mihaela Van Der Schaar. 2020a. Discriminative jackknife: Quantifying uncertainty in deep learning via higher-order influence functions. In ICML.
+
+Ahmed Alaa and Mihaela Van Der Schaar. 2020b. Frequentist uncertainty in recurrent neural networks via blockwise influence functions. In ICML.
+
+Javier Antoran, Umang Bhatt, Tameem Adel, Adrian Weller, and José Miguel Hernández-Lobato. 2021. Getting a CLUE: A method for explaining uncertainty estimates. In ICLR.
+
+Yusuf Arslan, Kevin Allix, Lisa Veiber, Cedric Lothritz, Tegawende F Bissyandé, Jacques Klein, and Anne Goujon. 2021. A comparison of pre-trained language models for multi-class text classification in the financial domain. In WWW Companion.
+Umang Bhatt, Javier Antorán, Yunfeng Zhang, Q Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Melançon, Ranganath Krishnan, Jason Stanley, Omesh Tickoo, et al. 2021. Uncertainty as a form of transparency: Measuring, communicating, and using uncertainty. In AIES.
+Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. 2015. Weight uncertainty in neural network. In ICML.
+Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP.
+Glenn W Brier et al. 1950. Verification of forecasts expressed in terms of probability. Monthly weather review.
+Youngseog Chung, Willie Neiswanger, Ian Char, and Jeff Schneider. 2021. Beyond pinball loss: Quantile methods for calibrated uncertainty quantification. In NeurIPS.
+Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pretraining text encoders as discriminators rather than generators. In ICLR.
+Peng Cui, Wenbo Hu, and Jun Zhu. 2020. Calibrated reliable regression using maximum mean discrepancy. In NeurIPS.
+Soham Dan and Dan Roth. 2021. On the effects of transformer size on in-and out-of-domain calibration. In EMNLP (Findings).
+Stefan Depeweg, Jose-Miguel Hernandez-Lobato, Finale Doshi-Velez, and Steffen Udluft. 2018. Decomposition of uncertainty in bayesian deep learning for efficient and risk-sensitive learning. In ICML.
+Shrey Desai and Greg Durrett. 2020. Calibration of pre-trained transformers. In EMNLP.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *NAACL*.
+Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In ICML.
+Siddhant Garg, Thuy Vu, and Alessandro Moschitti. 2020. Tanda: Transfer and adapt pre-trained transformer models for answer sentence selection. In AAAI.
+
+I.J. Good. 1952. Rational decisions. Journal of the Royal Statistical Society. Series B (Methodological).
+Priya Goyal, Piotr Dólar, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. 2017. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677.
+Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017. On calibration of modern neural networks. In ICML.
+Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. In ICLR.
+Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020. Pretrained transformers improve out-of-distribution robustness. In ACL.
+Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Debapriya Banerjee, and Fillia Makedon. 2021. A survey on contrastive self-supervised learning. Technologies.
+Xiaoqian Jiang, Melanie Osl, Jihoon Kim, and Lucila Ohno-Machado. 2012. Calibrating predictive model estimates to support personalized medicine. Journal of the American Medical Informatics Association.
+Alex Kendall and Yarin Gal. 2017. What uncertainties do we need in bayesian deep learning for computer vision? In NeurIPS.
+Lingkai Kong, Haoming Jiang, Yuchen Zhuang, Jie Lyu, Tuo Zhao, and Chao Zhang. 2020. Calibrated language model fine-tuning for in-and out-of-distribution data. In EMNLP.
+Ranganath Krishnan, Pi Esposito, and Mahesh Subedar. 2022. Bayesian-torch: Bayesian neural network layers for uncertainty estimation. https://github.com/IntelLabs/bayesian-torch.
+Ranganath Krishnan and Omesh Tickoo. 2020. Improving model calibration with accuracy versus uncertainty optimization. In NeurIPS.
+Volodymyr Kuleshov, Nathan Fenner, and Stefano Ermon. 2018. Accurate uncertainties for deep learning using calibrated regression. In ICML.
+Aviral Kumar, Sunita Sarawagi, and Ujjwal Jain. 2018. Trainable calibration measures for neural networks from kernel mean embeddings. In International Conference on Machine Learning.
+Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. 2017. Simple and scalable predictive uncertainty estimation using deep ensembles. In NeurIPS.
+Dan Ley, Umang Bhatt, and Adrian Weller. 2022. *Diverse, global and amortised counterfactual explanations for uncertainty estimates*. In AAAI.
+
+Nut Limsopatham. 2021. Effectively leveraging bert for legal document classification. In EMNLP Workshop.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
+Yong Liu and Xin Yao. 1999. Ensemble learning via negative correlation. Neural networks.
+Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In ICLR.
+Andrew Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In ACL.
+Bonan Min, Hayley Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heinz, and Dan Roth. 2021. Recent advances in natural language processing via large pre-trained language models: A survey. arXiv preprint arXiv:2111.01243.
+Matthias Minderer, Josip Djolonga, Rob Romijnders, Frances Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, and Mario Lucic. 2021. Revisiting the calibration of modern neural networks. In NeurIPS.
+Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip Torr, and Puneet Dokania. 2020. Calibrating deep neural networks using focal loss. In NeurIPS.
+Rafael Muller, Simon Kornblith, and Geoffrey E Hinton. 2019. When does label smoothing help? In NeurIPS.
+Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018. Analyzing uncertainty in neural machine translation. In ICML.
+Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, David Sculley, Sebastian Nowozin, Joshua Dillon, Balaji Lakshminarayanan, and Jasper Snoek. 2019. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. In NeurIPS.
+Yao Qin, Xuezhi Wang, Alex Beutel, and Ed Chi. 2021. Improving calibration through the relationship with adversarial robustness. In NeurIPS.
+Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey. Science China Technological Sciences.
+Adrian E Raftery, Tilmann Gneiting, Fadoua Balabdaoui, and Michael Polakowski. 2005. Using bayesian model averaging to calibrate forecast ensembles. Monthly weather review.
+
+Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for squad. In ACL.
+Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In EMNLP.
+Neeraj Varshney, Swaroop Mishra, and Chitta Baral. 2022a. Investigating selective prediction approaches across several tasks in iid, ood, and adversarial settings. In ACL (Findings).
+Neeraj Varshney, Swaroop Mishra, and Chitta Baral. 2022b. Towards improving selective prediction ability of nlp systems. In ACL Workshop.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.
+Yoav Wald, Amir Feder, Daniel Greenfeld, and Uri Shalit. 2021. On calibration and out-of-domain generalization. In NeurIPS.
+Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019a. Superglue: A stickier benchmark for general-purpose language understanding systems. In NeurIPS.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019b. Glue: A multi-task benchmark and analysis platform for natural language understanding. In ICLR.
+Xiangpeng Wei, Heng Yu, Yue Hu, Rongxiang Weng, Luxi Xing, and Weihua Luo. 2020. Uncertainty-aware semantic augmentation for neural machine translation. In EMNLP.
+Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In *NAACL*.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In EMNLP.
+Yuxin Xiao, Zecheng Zhang, Yuning Mao, Carl Yang, and Jiawei Han. 2022. Sais: Supervising and augmenting intermediate steps for document-level relation extraction. In *NAACL*.
+Ji Xin, Raphael Tang, Yaoliang Yu, and Jimmy Lin. 2021. The art of abstention: Selective prediction and error regularization for natural language processing. In ACL.
+Huiqin Yang and Carl Thompson. 2010. Nurses' risk assessment judgements: A confidence calibration study. Journal of Advanced Nursing.
+
+Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS.
+Wonjin Yoon, Jinhyuk Lee, Donghyeon Kim, Minbyul Jeong, and Jaewoo Kang. 2019. Pre-trained language model for biomedical question answering. In ECML-PKDD.
+Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. In EMNLP.
+Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a machine really finish your sentence? In ACL.
+Taolin Zhang, Zerui Cai, Chengyu Wang, Minghui Qiu, Bite Yang, and Xiaofeng He. 2021. Smedbert: A knowledge-enhanced pre-trained language model with structured semantics for medical text mining. In ACL.
+Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NeurIPS.
+Wenxuan Zhou, Kevin Huang, Tengyu Ma, and Jing Huang. 2021. Document-level relation extraction with adaptive thresholding and localized context pooling. In AAAI.
+Yikai Zhou, Baosong Yang, Derek F Wong, Yu Wan, and Lidia S Chao. 2020. Uncertainty-aware curriculum learning for neural machine translation. In ACL.
+
+# A Responsible NLP Research
+
+In this paper, we aim to identify the best choice for each consideration in constructing a well-calibrated PLM-based pipeline via extensive empirical studies. Our empirical analysis involves training multiple large-scale PLMs and, consequently, consumes a fair amount of computational power. However, we believe that the takeaways from our analysis will benefit NLP practitioners at large, which will write off the computational cost in the future.
+
+In particular, the Hugging Face package leveraged in our experiments utilizes the Apache License 2.0, and the Bayesian-Torch package utilizes the BSD 3-Clause License. We focus on PLM-based pipelines targeting English and assess them based on six NLP datasets, which aligns with the intended use of these datasets. We also release the evaluation benchmarks of our empirical analysis to illustrate the performance of different PLM-based pipelines based on diverse metrics. The benchmarks do not contain information that uniquely identifies individual people or offensive content.
\ No newline at end of file
diff --git a/uncertaintyquantificationwithpretrainedlanguagemodelsalargescaleempiricalanalysis/images.zip b/uncertaintyquantificationwithpretrainedlanguagemodelsalargescaleempiricalanalysis/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..db697be7c73c7efc90b9ab96ef59c2c86f2d9b7e
--- /dev/null
+++ b/uncertaintyquantificationwithpretrainedlanguagemodelsalargescaleempiricalanalysis/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6f0670aa40e7b3b16e2287a724bee6bd0227a9c7a4ccd17afadfa614329b03ca
+size 443889
diff --git a/uncertaintyquantificationwithpretrainedlanguagemodelsalargescaleempiricalanalysis/layout.json b/uncertaintyquantificationwithpretrainedlanguagemodelsalargescaleempiricalanalysis/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..38ffb3507108e799f4911d17ffdbb17d47864742
--- /dev/null
+++ b/uncertaintyquantificationwithpretrainedlanguagemodelsalargescaleempiricalanalysis/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:74b04c94da0f318f1c73fe2522f04f1196bd8f86824664d813798925b20f1578
+size 431935
diff --git a/understandingsocialmediacrossmodalitydiscourseinlinguisticspace/221869d1-0403-43f5-8278-038ca4e27899_content_list.json b/understandingsocialmediacrossmodalitydiscourseinlinguisticspace/221869d1-0403-43f5-8278-038ca4e27899_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..2ad5ec2cd143b8c88cd0bdbda1a86acd2a41519c
--- /dev/null
+++ b/understandingsocialmediacrossmodalitydiscourseinlinguisticspace/221869d1-0403-43f5-8278-038ca4e27899_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0e71349f9e59c4d291f2c3e5c3fc9344353c2edc7e2fa702f8290726e61cfa3c
+size 90178
diff --git a/understandingsocialmediacrossmodalitydiscourseinlinguisticspace/221869d1-0403-43f5-8278-038ca4e27899_model.json b/understandingsocialmediacrossmodalitydiscourseinlinguisticspace/221869d1-0403-43f5-8278-038ca4e27899_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ab44d09715fd9815d4790efcdcb8187825043197
--- /dev/null
+++ b/understandingsocialmediacrossmodalitydiscourseinlinguisticspace/221869d1-0403-43f5-8278-038ca4e27899_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4e71320a229b826ee4089c3ccfe5171f6aa6080a25857c7cfeca612ecb9085a0
+size 111278
diff --git a/understandingsocialmediacrossmodalitydiscourseinlinguisticspace/221869d1-0403-43f5-8278-038ca4e27899_origin.pdf b/understandingsocialmediacrossmodalitydiscourseinlinguisticspace/221869d1-0403-43f5-8278-038ca4e27899_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4964bbc7a1b7cd6d8cea4c31207f9c95bedaf46c
--- /dev/null
+++ b/understandingsocialmediacrossmodalitydiscourseinlinguisticspace/221869d1-0403-43f5-8278-038ca4e27899_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d51362828ad36f58589fe549e7e212ec12128d2bf89c106cc80c9e2fc8ad757
+size 3048710
diff --git a/understandingsocialmediacrossmodalitydiscourseinlinguisticspace/full.md b/understandingsocialmediacrossmodalitydiscourseinlinguisticspace/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..9518c686c599826d44c3919d42054648213ec44d
--- /dev/null
+++ b/understandingsocialmediacrossmodalitydiscourseinlinguisticspace/full.md
@@ -0,0 +1,342 @@
+# Understanding Social Media Cross-Modality Discourse in Linguistic Space
+
+Chunpu Xu $^{1}$ , Hanzhuo Tan $^{1}$ , Jing Li $^{1*}$ , Piji Li $^{2}$
+
+$^{1}$ Department of Computing, The Hong Kong Polytechnic University, China
+
+2 College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, China
+
+1{chun-pu.xu,han-zhuo.tan}@connect.polyu.hk
+
+'jing-amelia.li@polyu.edu.hk; $^2 p j l i$ nuaa.edu.cn
+
+# Abstract
+
+The multimedia communications with texts and images are popular on social media. However, limited studies concern how images are structured with texts to form coherent meanings in human cognition. To fill in the gap, we present a novel concept of cross-modality discourse, reflecting how human readers couple image and text understandings. Text descriptions are first derived from images (named as subtitles) in the multimedia contexts. Five labels – entity-level insertion, projection and concretization and scene-level restatement and extension – are further employed to shape the structure of subtitles and texts and present their joint meanings. As a pilot study, we also build the very first dataset containing 16K multimedia tweets with manually annotated discourse labels. The experimental results show that the multimedia encoder based on multi-head attention with captions is able to obtain the-state-of-the-art results.
+
+# 1 Introduction
+
+The growing popularity of multimedia is revolutionizing the communications on social media. The conventional text-only form has been expanded to cross modalities involving texts and images in information exchange. For multimedia messages, the language understanding acquires more than making sense of both visual and textual semantics; it also matters to figure out what glues them together to exhibit the coherent senses in human's mind.
+
+Nevertheless, most progress made in social media language understanding relies on texts to learn the message-level semantics (Shen et al., 2018; Nguyen et al., 2020), largely ignoring the rich meanings conveyed in images (Cai et al., 2019a; Wang et al., 2020b). Other recent multimodal studies focus on model designs to combine visual and textual signals (Park et al., 2019; Li et al., 2020;
+
+Yu et al., 2021), ignoring the insights from how humans understand the implicit structure underlying a multimedia post.
+
+In light of these concerns, we consider images as an integral part of social media language and propose a novel concept of cross-modality discourse, which defines how human readers structure the coherent meanings from image and text modalities. Our work is inspired by Vempala and Preotiuc-Pietro (2019) examining the information overlap between images and texts, whereas we take a step further to characterize how multimedia messages make sense to humans, which is beyond a simple yes-or-no prediction to whether new thing is observed. To the best of our knowledge, we are the first to extend discourse — a pure linguistic concept — to define the linguistic roles played by images and their pragmatic relations with texts to shape the coherent meanings.
+
+In general, cross-modality discourse is defined by the operations adopted in human perception to couple image and text semantics. Readers may first extract the information from the images acquired to complete the cross-modality understanding, either in form of the local objects (entities) or global scenes (Rayner, 2009). Then, the extracted entities or scenes are represented in texts, named as the images' subtitles, which can further contribute to structure the entity-level or scene-level discourse with the matching texts in the multimedia contexts. Concretely, for entity-level discourse, it is detailed into insertion, projection, and concretization, according to whether the entity is omitted, described, or mapped; similarly, scene-level restatement and extension are employed to reflect whether the story in one modality recurs or continues in the other.
+
+To illustrate the definitions above, Figure 1 shows five multimedia Twitter posts. As can be seen from (a), readers may concentrate on the object "strawberry" and insert its name into the texts omitting the entity. As for (b), the "coffee" object
+
+
Tweet text
Freshly picked off my allotment (strawberry) today, well chuffed
Happy Sunday! My best friend and I have coffee in the sunshine
Cartel leader whose arrest sparked killings is sentenced to prison in Dallas court
This dog has to hold hands on the car
One step closer to summer (track towards beach)
Tweet image
(a)
(b)
(c)
(d)
(e)
Vempalaet al. (2019) labels
Image adds meaning Text not represented
Image not adds meaning Text represented
Image not adds meaning Text not represented
Image not adds meaning Text represented
Image adds meaning Text not represented
Discourse labels
Entity-level: insertion (strawberry)
Entity-level: concretization (coffee)
Entity-level: projection (court)
Scene-level: restatement (dog hold hands in car)
Scene-level: extension (track towards beach)
+
+Figure 1: The five cross-modality discourse labels and their examples. The rows from top to bottom display their texts, images, the image-text relation labels in Vempala and Preotiuc-Pietro (2019), and our cross-modality discourse categories. The labels in Vempala and Preotiuc-Pietro (2019) concern whether new meanings are added by images to texts, whereas ours define the linguistic roles of images and their pragmatic relations with texts for coherence.
+
+should be extracted from the image to concretize the word "coffee" in the text. In (c), the word "court" in text is linked with the "gavel" object. The image in (d) helps restate the texts scene (a dog holds hands in the car). In (e), the global scene works as an extension to texts and completes the story: "We are one step closer to summer following the track towards beach."
+
+On the contrary, the image-text relations in Vempala and Preotiuc-Pietro (2019) are limited to whether images add new meanings to texts, which is nonetheless insufficient to reflect how language is understood in multimedia contexts.
+
+As a pilot study of cross-modality discourse, we also present the very first dataset to explore the task. It is collected from Twitter and contains 16K high-quality multimedia posts with manual annotations on their discourse labels.1 We believe our task and the associated dataset, being the first of its kind, will be potentially beneficial to help machines gain the ability to understand social media language with multimodal elements.
+
+To that end, we present a framework to learn the discourse structure across texts and images. Inspired by the recent advances in multimodal learning (Wang et al., 2020b; Yu et al., 2020), we employ the multi-head attention mechanism (Vaswani et al., 2017) to explore the visual-textual representations reflecting cross-modality interactions. Besides, to characterize subtitles for discourse learning, image captions generated from model trained on COCO captioning dataset (Lin et al., 2014b) are leveraged as additional features.
+
+For empirical studies on cross-modality discourse, we conduct comprehensive experiments
+
+on our dataset. The comparison results on classification show the challenges for machines to infer discourse structure and it is beyond the capability of advanced multimodal encoders to well handle our task. Nevertheless, exploring correlations of texts, captions, and visual-textual interactions helps exhibit the state-of-the-art performance in both the intra-class and overall evaluation. We further examine the effects of varying modalities and text length and find that text signals are crucial for discourse inference while joint effects of texts, images, and captions present the best results. At last, the qualitative analysis demonstrates how the multi-head attention in our model interprets discourse structure.
+
+# 2 Related Work
+
+Our paper crosses the lines of multimedia learning and discourse analysis in natural language processing. Here comes more details.
+
+Multimedia Learning. Our paper is in the line with cross-media research that attempts to fuse textual and visual features. There are various deep learning methods proposed to leverage crossmodal features, either based on advanced neural architectures like co-attentions (Xu and Saenko, 2016; Lu et al., 2016) and multi-head attentions (Vaswani et al., 2017; Wang et al., 2020a), or pre-trained visual-lingual representations (Lu et al., 2019; Su et al., 2020; Zhang et al., 2021). Their effectiveness are demonstrated in both conventional vision-language tasks, such as image captioning (Park et al., 2019; Zhou et al., 2020; Shi et al., 2021) and visual question answering (VQA) (Yu et al., 2019; Tan and Bansal, 2019; Si et al., 2021), and social media applications, such as sarcasm detection (Cai
+
+et al., 2019a), event tracking (Li et al., 2020; Abavisani et al., 2020b), keyphrase prediction (Zhang et al., 2019; Wang et al., 2020b).
+
+It is seen that most progress to date made in this line focus on advancing methodology designs for general purposes (Su et al., 2020; Zhou et al., 2020) or specific applications (Wang et al., 2020b) to better capture the matched semantics across varying modalities. However, their effectiveness over social media data would be inevitably compromised resulted from the intricate image-text interactions (Vempala and Preotiuc-Pietro, 2019). We thus borrow the insights from human perception to interpret image-text relations from the linguistic viewpoints and propose the task to learn discourse structure in multimedia contexts. It is a fundamental research exhibiting the potential to help the models gather cross-modality understanding capability and might benefit various downstream applications.
+
+We are also related with previous categorization tasks on social media to understand image-text relations, such as information overlap (Vempala and Preotiuc-Pietro, 2019), point-of-interest types (Villegas and Aletras, 2021), author purposes (Kruk et al., 2019), object possessions (Chinnappa et al., 2019), and so forth. Besides, interestingly, the "discourse" concept is also employed to examine the image-text relations in cooking recipes (Alikhani et al., 2019). Compared with these studies concatenating visual and textual embeddings in a "common" space, we craft text-formed subtitles to convey visual stories and explore how they shape the coherent meanings with the post texts in linguistic space. It will consequently allow deep semantic learning to capture the implicit structure holding image and text modalities, while the existing models might be incapable to gather senses of language understanding via simple feature concatenation.
+
+Discourse Analysis. This work is related to prior studies on text-level discourse structures. The popular tasks in the styles of either RST (Rhetorical Structure Theory) (Mann and Thompson, 1988; Liu et al., 2019) or PDTB (Penn Discourse Tree Bank) (Prasad et al., 2008; Xu et al., 2018) explore the rhetorical relations of discourse units (e.g., phrases or sentences) that cohesively connect them form a sense of coherence. These studies have demonstrated their helpfulness in diverse stream of NLP applications (Choubey et al., 2020), such as sentiment analysis (Bhatia et al., 2015), text categorization (Ji and Smith, 2017), and microblog sum
+
+marization (Li et al., 2018). Nevertheless, limited work examines a social media image as a discourse unit of the pragmatic structure in multimedia contexts, which is a gap to be filled in this work.
+
+# 3 Study Design
+
+In this section, we first define the task to predict cross-modality discourse in §3.1. Then, we introduce how we construct the dataset in §3.2, followed by the data analysis in §3.3 and the potential applications in §3.4.
+
+# 3.1 Task Definition
+
+In our task, the input is an image-text pair from a multimedia post on social media, following the previous practice (Vempala and Preotiuc-Pietro, 2019). For each pair, the goal is to output a label from a predefined set that cover the major categories of cross-modality discourse on social media. Our intuition is that images are relatively more eye-catching and likely to be processed before the texts. For image understandings, the previous findings from psychological experiments (Rayner, 2009) point out that humans may first recognize and extract the meanings from global scenes to fill the information gap in contexts; if the gap still exists, they may go back to capture the local objects. Based on that, we first coarsely categorize the discourse label set into the level of entity (object) and scene, depending on whether object or scene is extracted to make sense of the joint meanings of images and texts.
+
+To further elaborate the label design, the extracted information from an image (as an object or scene) is mapped to the text modality to form the subtitle, which allows us to formulate how human senses structure the coherent meaning with subtitles and post texts.
+
+For entity-level discourse, three cases are examined: the entity is omitted, mentioned or linked in the texts. For the absent entity (e.g., Fig. 1(a)), the subtitle, in form of entity name, should be inserted into the post text to complete the meanings of a message, while the entity in Fig. 1(b) is concerted by the object in images. And the entity in Fig. 1(c) is implicitly projected into the relevant object. We henceforth design entity-level insertion, concretization, and projection to describe the above three cases, respectively.
+
+Similarly, scene-level discourse can be separated into restatement and extension categorizes. The former refers to image serving as texts description
+
+(e.g., Fig. 1(d)) and for the latter, posts presenting image scenes to elaborate the story left as white space in the texts (e.g., Fig. 1(e)).
+
+# 3.2 Data Collection and Annotation
+
+Our dataset is gathered from $\text{Twitter}^2$ , which is drawing attentions to research digital communications (Mozafari et al., 2019; Nikolov and Radivchev, 2019; Müller et al., 2020) and exhibits prominent use of multimedia posts (Vempala and Preotiuic-Pietro, 2019; Wang et al., 2020b). We first crawled the raw data using Twitter streaming $\text{API}^3$ and removed non-English posts and those with texts only or multiple images. Afterwards, to better model discourse from the noisy Twitter data (Vempala and Preotiuic-Pietro, 2019), we removed samples that might hinder the learning of non-trivial discourse signals. Here, four types of "bad" image-text pairs might provide tremendous noise in the learning, which are shown in Fig. 2.
+
+The first type refers to image portraits with some quotes to share the insights of life (henceforth portraits), where images and texts are not coherently related (from linguistic viewpoints) and discourse structure are unable to be defined for them. Moreover, many of them contain authors' selfies, which might raise privacy concerns. The second type of posts, namely background, relies on external knowledge to capture the meanings (e.g., Fig. 2(b)), which is beyond the capability of language understanding given only the images and the matching texts. For the third, we consider low-quality images (e.g., low resolution and blurred ones like Fig. 2(c)), from which it is hard to capture the visual meanings. The last one refers to OCR subtitles (Fig. 2(d)), where the subtitles appear in the images as optical characters. It may result in a degeneration of cross-modality discourse to text-level discourse and render the learning of trivial features.
+
+In the data annotation, we first selected 25 typical examples corresponding to each discourse label and provide them together with the annotation guidelines (with the detailed description of each label) for quality control. Then, two postgraduate students majoring in linguistics were recruited to manually label the discourse categories, given an image-text pair. "Bad" samples falling in the above four types should also be indicated in the annotation process. The inter-annotator agreement is
+
+(a) Portrait: one taught me love, one taught me patience, and one taught me pain
+
+(b) Background:
+SpaceX announces the identity of the world's first private lunar passenger
+
+(c) Quality: When you spend all season watching Matt Chapman
+
+(d)OCR: Saturday 1st December Thought Of The Day From Oval Station
+
+
+Figure 2: Examples tweets of the four "bad" types. (a) Portrait image with quotes in texts. (b) Background is externally required for understanding (rocket trajectory scenes here). (c) Low-quality image where objects could be barely observed. (d) OCR Subtitle ("Thought Of The Day") appear in the image in optical characters.
+
+
+
+
+
+
+
+
Total
Ins
Con
Pro
Res
Ext
Num
16,000
839
10,558
690
1,826
2,087
Len
10.69
9.11
10.85
10.98
11.24
9.92
+
+Table 1: Statistics of the total data and that with each label: Ins: Insertion; Con: Concretization; Pro: Projection; Res: Restatement; Ext: Extension. Len: average word number in texts. Num: tweet number.
+
+$79.8\%$ and we only kept the data with labels agreed by both annotators to ensure the feature learning quality in noisy data. At last, posts in "bad" types were taken away and the final dataset presents 16k multimedia tweets with manual labels in five discourse categories.
+
+# 3.3 Data Analysis
+
+Here we conduct a preliminary analysis of our dataset and show the statistics in Table 1. There exhibits imbalanced labels, where concretization and extension labels are relatively more popular compared to the other three. This indicates the diverse preferences of Twitter users in the way they choose to structure texts and images and the potential challenge for models to handle our task.
+
+For the text length, it is seen that most tweets contain limited words, challenging the models to capture essential features from textual signals. Interestingly, we compare our statistics with other text-only Twitter datasets in previous work (Wang et al., 2019) and find our multimedia tweets have $30\%$ fewer words on average. This implies that authors may tend to put less content in the text of multimedia posts, and figure the missing information in images for compensation. We also notice that insertion and extension discourse exhibit relatively shorter texts on average. It is probably because they exhibit the omitted content in texts, which presents in images.
+
+To further characterize text length in our dataset, Fig. 3 shows the word number distribution of tweet
+
+
+Figure 3: Text lengths (token number) distribution of posts with varying discourse labels.
+
+texts with varying labels. All the curves demonstrate the sparse distribution over text length, owing to the freestyles of social media writings. Insertion and extension curves first peak at 8 words while the others at 10-12, then all present long tails afterwards. This again shows that texts in multimedia posts may provide limited content and those in insertion and extension contain fewer words.
+
+# 3.4 Potential Applications
+
+In this subsection, we further discuss the potential downstream applications of our task and datasets, which might inspire the design of future work. A straightforward application is microblog summarization — an important task to distill the salient content from massive social media data. As many state-of-the-art summarization models only allow textual input while multimedia posts are prominent on social media, it may require the compression of these posts into text for easy processing. It is different from the traditional image captioning task (Anderson et al., 2018; Rennie et al., 2017; Huang et al., 2019), where the generated captions are translated from images. For a social media post, the text cannot trivially be seen as a “translation” of image, because of possibly ambiguous imagetext interactions therein. Considering crucial roles played by discourse analysis in summarization (Xuet al., 2020), it is not hard to envision that our cross-modality discourse, describing how image and text structure coherence, would contribute to the research of multimedia summarization.
+
+In addition, cross-modality discourse can be viewed as a fundamental task and might be helpful to other downstream tasks on social media (e.g., multimodal NER (Yu et al., 2020), multimodal crisis events classification (Abavisani et al., 2020a), multimodal sarcasm detection (Cai et al., 2019b), multimodal sentiment analysis(Truong and Lauw, 2019), and multimodal hashtag prediction (Wang et al., 2020c)). However, most previous efforts focus on the leverage of visual and linguistic representations yet ignore the linguistic essence that glue the two modalities. Recently, some work pro
+
+pose multitask learning to consider image-text relations in multimodal learning. For example, Sun et al. (2021) investigate the relation propagation between text and image to improve the accuracy of NER in tweets. Ju et al. (2021) utilize multimodal relation types as auxiliary labels to explore multimodal aspect-sentiment analysis. The positive results from these studies imply the potential of cross-modality discourse (as a linguistic description of image-text relations) to benefit a wide range of multimodal applications. Besides, the training data of image-text relation used in (Sun et al., 2021; Ju et al., 2021) is the TRC dataset proposed by Vempala and Preotiuc-Pietro (2019). Compared to the TRC dataset, our proposed discourse dataset exhibits a tremendously larger scale (i.e., 16K VS 4.5K) and fine-grained labels for image-text relation, as shown in Fig. 1. We therefore believe our dataset would also helpfully advance the performance of various multimodal models.
+
+# 4 The Discourse Learning Framework
+
+In this section, we describe our framework that couples the signals from images and texts to predict their discourse labels. As shown in Fig. 4, the model architecture leverages representations learned from texts, images, and image captions (to reflect subtitles), which will be introduced in §4.1. Then, we will discuss how we combine multi-modality representations §4.2. At last, §4.3 presents how we predict the discourse labels and design the training processes.
+
+# 4.1 Encoding Text, Image, and Captions
+
+Texts Encoding. Here we describe how to learn text features. The text encoder is based on the bottom 6-layers of pre-trained Bertweet (Nguyen et al., 2020). It is fed with an $L$ -length token sequence and embed its representations into a sequential hidden states $\mathbf{H}_{text} = (\mathbf{h}_1, \dots, \mathbf{h}_L)$ , where each element reflects a token embedding. $\mathbf{H}_{text}$ further goes through a max-pooling layer and produces $\bar{\mathbf{H}}_{text}$ to represent the text.
+
+Image Encoding. To explore visual signals, images are encoded by CNN-based ResNet-101 (He et al., 2016) pre-trained on ImageNet (Russakovsky et al., 2015). The output of the last convolutional layer in ResNet-101 is extracted as the representation of the input image. The size of the feature map is first reduced to $M \times M \times 2048$ and then reshaped into $M^2 \times 2048$ . Each $1 \times 2048$ vector represents
+
+
+Figure 4: Our framework to learn cross-modality discourse via representations encoded from texts (bottom), captions (upper left), and images (upper right). The encoded captions and texts are compared at output layer in visual-textual contexts.
+
+the visual features in a corresponding image area and is projected to the same dimension of text feature $\mathbf{h}$ by liner layer. The post-level visual feature is denoted as $\mathbf{H}_{img} = (\mathbf{v}_1, \dots, \mathbf{v}_{M^2})$ , where $\mathbf{v}_i$ refers to an $1 \times 2048$ vector that represents the feature of an area in the image.
+
+Image Caption Encoding In order to capture more semantic information from images, we further exploit image captions (henceforth captions) as an additional modality. Our intuition is that captions may inject essential visual semantics underlying images into a descriptive language in texts (Xu et al., 2015). They are potentially helpful to reflect the rich interactions between image objects and discover subtitle-style clues as essential discourse indicators. We first employ the model presented by Anderson et al. (2018) to predict the captions of each image. The captioning model is pre-trained on the COCO captioning dataset (Lin et al., 2014b), which mostly consists of natural pictures outside social media domain. Then, we encode the token sequence of captions following the same process of text encoding (discussed above) and yield caption representation: $\mathbf{H}_{cap} = (\mathbf{h}_1,\dots,\mathbf{h}_N)$ . Here $N$ indicates the number of tokens in the caption, $\mathbf{h}_i$ refers to the $i$ -th hidden state of the Bertweet encoder.
+
+# 4.2 Integrating Multimodal Representations
+
+As pointed out in previous work (Wang et al., 2020b), modalities on social media data exhibit much more intricate interactions compared with the widely-studied vision-language datasets (Lin et al., 2014a; Young et al., 2014). To allow the framework to attend various types of cross-modality interactions, we employ multi-head attentions (Vaswani
+
+et al., 2017) to comprehensively explore the interactions between the encoded image features $(\mathbf{H}_{img})$ and max-pooled text representations $(\bar{\mathbf{H}}_{text})$ .
+
+Concretely, we set text features as the query $\mathbf{Q}$ , image features as the key and value $\mathbf{K},\mathbf{V}$ , and compute the multi-head attention $MA(\cdot)$ as follows:
+
+$$
+M A (\mathbf {Q}, \mathbf {K}, \mathbf {V}) = [ h d _ {1}; \dots ; h d _ {n} ] \mathbf {W} ^ {O} \tag {1}
+$$
+
+where $n$ is the number of heads, $[\cdot]$ indicates the concatenation operations, and the attention of the $j$ -th head is:
+
+$$
+h d _ {j} = A \left(\mathbf {Q} \mathbf {W} _ {j} ^ {Q}, \mathbf {K} \mathbf {W} _ {j} ^ {K}, \mathbf {V} \mathbf {W} _ {j} ^ {V}\right) \tag {2}
+$$
+
+$$
+A (\mathbf {Q}, \mathbf {K}, \mathbf {V}) = \theta \left(\frac {\mathbf {Q} \mathbf {K} ^ {T}}{\sqrt {d _ {k}}}\right) \mathbf {V} \tag {3}
+$$
+
+Here $d_{k}$ is the normalization factor, $\theta (\cdot)$ means softmax function. $\mathbf{W}^{O},\mathbf{W}_{j}^{Q},\mathbf{W}_{j}^{K},\mathbf{W}_{j}^{V}$ are learnable variables. The attended images (in aware of texts) are denoted as $\hat{\mathbf{H}}_{img}$ , which further serves as the context to help explore the discourse clues from captions and texts.
+
+For discourse modeling, the encoded texts $(\bar{\mathbf{H}}_{text})$ are compared with captions (carrying subtitle-style features) to infer how the subtitles can be structured with texts. To that end, we first employ a multi-head attention mechanism to encode text-aware attended caption $\hat{\mathbf{H}}_{cap}$ , which captures salient contents from captions to indicate discourse categories. Furthermore, $\hat{\mathbf{H}}_{cap}$ are concatenated with $\bar{\mathbf{H}}_{text}$ to model their structure; also concatenated are the attended images $\hat{\mathbf{H}}_{img}$ as the image-text interaction contexts for cross-modality discourse learning.
+
+# 4.3 Discourse Prediction and Model Training
+
+The discourse labels are predicted with a multi-layer perceptron (MLP) fed with $\mathbf{H} = [\hat{\mathbf{H}}_{cap};\hat{\mathbf{H}}_{text};\hat{\mathbf{H}}_{img}]$ , the integrated feature vectors, which is further activated with a softmax function to predict the likelihood over the four discourse labels. For training, recall that in Table 1, we observe the severe label imbalance on our task. To deal with the issue, we adopt weighted cross-entropy loss, whose weights are set by the proportions of labels in training data.
+
+# 5 Experimental Setup
+
+Model Settings. The length of tweet texts $(L)$ and captions $(N)$ are both capped at 20 by truncation. The batch size is set to 100, the learning rate to $5 \times 10^{-5}$ . The head number of all multi-head attention layers are set to 6. For image encoding,
+
+
Method
Insertion
Concretization
Projection
Restatement
Extension
F1
Baselines
Qin et al. (2016)
41.13
69.91
26.13
39.67
41.15
61.67
Rutherford and Xue (2016)
43.17
70.78
32.62
42.31
40.82
62.73
Nam et al. (2017)
46.49
74.83
33.33
39.33
42.39
65.76
Text+Image
CONCATFUSE
52.86
81.62
34.78
39.19
42.93
71.09
ATTENTION
54.30
82.64
33.71
39.23
39.41
71.48
CO-ATTENTION
51.90
83.31
36.36
42.57
40.59
72.37
MULTIHEADATT
53.69
84.33
36.96
42.11
42.01
73.33
Text+Caption
CONCATFUSE
52.00
81.11
33.33
41.18
43.02
70.82
ATTENTION
54.79
81.26
36.78
39.72
42.55
70.97
CO-ATTENTION
53.73
82.13
37.20
41.78
39.16
71.38
MULTIHEADATT
53.79
82.27
34.55
43.96
43.46
72.08
Img+Text+Caption
CONCATFUSE
52.48
82.41
32.97
43.01
42.39
71.88
ATTENTION
53.24
83.01
34.95
43.45
43.65
72.58
CO-ATTENTION
54.81
83.98
36.96
45.24
39.76
73.15
MULTIHEADATT(full model)
57.75*
84.88*
37.36
46.15*
44.19*
74.51*
+
+Table 2: Comparison results of the baselines and our variants. Scores with * represent the significance tests of our full model over the baseline models with p-value $< {0.05}$ .
+
+image feature map size $M$ is set to 14. For text and comment encoding, the representations are extracted from the bottom 6-layers of the Bertweet model, which are further fine-tuned in training. In the setup, we randomly split $80\%$ , $10\%$ and $10\%$ for training, validation, and test. For evaluation, we report F1 scores in the prediction of each label and the weighted F1 to measure the overall results.
+
+Baselines and Comparisons. We first consider two text-level discourse parsers proposed in Qin et al. (2016) and Rutherford and Xue (2016), where we extend their text encoders into multimodal encoders to fit the image-text pairs. Then, we compare with a popular multimodal classifier (Nam et al., 2017) that employs a dual attention network to fuse the visual and textual features.
+
+Besides, we evaluate varying sets of feature combinations in our model Test + Image, Text + Caption, and Text + Image + Caption (the full set). Recall that our framework employs multi-head attention to integrate features learned from different modalities. In experiments, we also test the performance of other modality fusion alternatives based on simple feature concatenation (CONCATFUSE), the conventional attention mechanism (ATTENTION), the co-attention mechanism (CO-ATTENTION).
+
+# 6 Experimental Discussions
+
+This section first presents the main comparison results ( §6.1). Then, we discuss model sensitivity to varying modalities and text length in §6.2. Finally,
+
+§6.3 presents a case study to provide more insights.
+
+# 6.1 Main Comparison Results
+
+Table 2 shows the main comparison results of various multimodal encoders. The following observations can be drawn.
+
+First, all models do not exhibit good F1. This indicates that cross-modality discourse prediction is a challenging task. A good understanding for that cannot be gained by trivially adapting discourse parsers to the multimodal settings or applying the existing vision-language encoders. Second, results on the two entity-level discourse labels (i.e., insertion and concretization) are relatively better than scene-level, indicating that local objects are easier to be captured than global scenes. Among all the labels, models perform the best in concretization, probably attributed to its richer data samples for feature learning (as shown in Table 1). And models obtain worst results in projection. The reasons might be that additional knowledge are needed for models to learn the implicit relation between the object and the entity.
+
+Last, images, texts, and captions all contribute to building automatic discourse understanding. Joint modeling of the three modalities enables the corresponding models to outperform their text+image and text+caption counterparts.
+
+# 6.2 Sensitivity to Modalities and Text Length
+
+Varying Modalities. To further examine the effects of varying modalities, we compare the F1 scores of our full model with its caption-only,
+
+
+(a) F1 vs. Modalities
+
+
+Figure 5: Full model performance compared with varying modality ablations in (a) and its results over varying text length (b). X-axis: insertion, concretization, projection, restatement, and extension; Y-axis: F1 scores. For each label, bars from left to right show the caption only, image only, text only ablations, and the full model in (a) and the tweet texts capped at 5, 10, 15, and 20 in (b).
+
+image-only, and text-only ablations in Fig. 5(a). It is seen that text modality contributes relatively more to discourse modeling observed from all labels, especially for insertion, where Name Entities are omitted and makes the text style easy to recognize. Nevertheless, the joint effects of images, texts, and captions together present the best performance over all labels.
+
+Varying Text Length. As discussed above, text features are crucial to predict cross-modality discourse. Here we further examine the effects of text length on model performance and the results of our full model are shown in Fig. 5(b). Better scores are observed for longer texts as richer contents can be captured. This again demonstrates the essential signals provided by texts to infer cross-modality discourse.
+
+# 6.3 Qualitative Analysis
+
+Discussions above mostly concern caption and text modalities. Here we present a case study to probe into how the model reflects discourse indicators over vision signals.
+
+Case Study. Visual features are analyzed by the heatmap (in Fig. 6) visualizing the text-aware attenuate
+
+
+
+
+
+
+
+
+(a) Insertion: dog T: ready for bed
+(c) Projection:drilling equipment T:european oil majors adapt to low oil; break even in 2017
+
+
+(b) Concretization: jeep T: jeep wrangler sport 2014 sport used
+(d) Restatement: moon behind a tree T: moon rising behind a tree
+
+
+
+
+(c) Extension: beautiful sky and trees with yellow leaves T: fall in ohi0
+Note: T indicates the tweet text. Illuminated areas indicate higher attention weights. Texts with red represent the image content.
+Figure 6: Visualization of multi-head attention heatmaps over sample images.
+
+tion weights over images (Eq. 3), which is captured from image-text interactions. As can be seen, attentions are able to highlight salient regions that signal the essential semantic links with the texts, e.g., the entities (dog and jeep) in (a) and (b). It is also observed that the attention would vary in their focus in regions: for entity level discourse, it tends to concentrate on the some parts of a salient object (entity), while for scene level, attention also examines the background to capture the global view.
+
+# 7 Conclusion
+
+We have presented a novel task to learn cross-modality discourse that advances models to gain social media language understanding capability in multimedia contexts. To handle the intricate imag-text interactions, the visual semantics are first converted into text-formed subtitles and then compared with post texts to explore deep syntactic relations in linguistic space. For empirical studies, we further contribute the first dataset presenting 16K human-annotated tweets with discourse labels for imag-text pairs. The main comparison results on our dataset have shown the effectiveness of multi-head attentions in exploring interactions among text, image, and caption modalities. Further discussions demonstrate our potential to produce meaningful representations indicating implicit image-text structure. These discourse features, conveying essential linguistic clues consistent with human senses, may largely benefit the future advances of automatic cross-modality understanding on social media.
+
+# Limitations
+
+Class imbalance is one of the main limitations of this work. As illustrated in Table 1, Concretization is the majority category which occupies $66.0\%$ of the dataset, while the minority categories, e.g. Projection and Insertion only account for $4.3\%$ and $5.2\%$ respectively. Although such uneven distribution reflects the real scenario of image-text relationships among tweets, future work should acquire a larger amount of minority categories for better interpretation of image-text relationships.
+
+Cross-lingual and multi-platform studies should also be considered in later studies. It would be interesting and insightful to investigate the cross-modality discourse categories distribution among different languages. Are there any cultural traits that affect the use of image and text? Meanwhile, social media platforms can also exhibit preference for image and text usage. For example, will users on Instagram prefer to omit the Name Entities (Insertion category) than Twitter users?
+
+A more concrete model, e.g. vision-language Transformers, could also be employed to encode the text, caption, and image jointly. Current model runs efficiently on single NVIDIA RTX3080Ti GPU, while the training consumption of vision-language Transformers could be costly and requires larger dataset. Future studies could explore the trade-off between computation cost and classification performance.
+
+# Ethical Considerations
+
+We declare our dataset will cause no ethics problem. First, we follow the standard data acquisition process regularized by Twitter API. We downloaded the data for a purpose of academic research and is consistent with the Twitter terms of use. Then, we thoroughly navigated the data and ensured that no content will rise any ethics concerns, e.g. toxic languages, human face images, and censored images. Next, we perform the data anonymization to protect the user privacy. For the language use, we only keep the posts with English text. For the human annotations, we recruited the annotators as part-time research assistants with 16 USD/hour payment.
+
+# Acknowledgements
+
+This paper is substantially supported by NSFC Young Scientists Fund (No.62006203, 62106105), a grant from the Research Grants Council of the
+
+Hong Kong Special Administrative Region, China (Project No. PolyU/25200821), PolyU internal funds (1-BE2W, 4-ZZKM, and 1-ZVRH), and CCF-Baidu Open Fund (No. 2021PP15002000).
+
+# References
+
+Mahdi Abavisani, Liwei Wu, Shengli Hu, Joel Tetreault, and Alejandro Jaimes. 2020a. Multimodal categorization of crisis events in social media. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14679-14689.
+Mahdi Abavisani, Liwei Wu, Shengli Hu, Joel R. Tetreault, and Alejandro Jaimes. 2020b. Multimodal categorization of crisis events in social media. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 14667-14677. IEEE.
+Malihe Alikhani, Sreyasi Nag Chowdhury, Gerard de Melo, and Matthew Stone. 2019. CITE: A corpus of image-text discourse relations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 570-575. Association for Computational Linguistics.
+Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077-6086.
+Parminder Bhatia, Yangfeng Ji, and Jacob Eisenstein. 2015. Better document-level sentiment analysis from RST discourse parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 2212-2218. The Association for Computational Linguistics.
+Yitao Cai, Huiyu Cai, and Xiaojun Wan. 2019a. Multimodal sarcasm detection in twitter with hierarchical fusion model. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 2506-2515. Association for Computational Linguistics.
+Yitao Cai, Huiyu Cai, and Xiaojun Wan. 2019b. Multimodal sarcasm detection in twitter with hierarchical fusion model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2506-2515.
+Dhivya Chinnappa, Srikala Murugan, and Eduardo Blanco. 2019. Extracting possessions from social media: Images complement language. In Proceedings of the 2019 Conference on Empirical Methods
+
+in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 663-672. Association for Computational Linguistics.
+Prafulla Kumar Choubey, Aaron Lee, Ruihong Huang, and Lu Wang. 2020. Discourse as a function of event: Profiling discourse structure in news articles around the main event. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5374-5386. Association for Computational Linguistics.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778.
+Lun Huang, Wenmin Wang, Yaxian Xia, and Jie Chen. 2019. Adaptively aligned image captioning via adaptive attention time. In Advances in Neural Information Processing Systems, pages 8942-8951.
+Yangfeng Ji and Noah A. Smith. 2017. Neural discourse structure for text categorization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 996-1005. Association for Computational Linguistics.
+Xincheng Ju, Dong Zhang, Rong Xiao, Junhui Li, Shoushan Li, Min Zhang, and Guodong Zhou. 2021. Joint multi-modal aspect-sentiment analysis with auxiliary cross-modal relation detection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 4395-4405. Association for Computational Linguistics.
+Julia Kruk, Jonah Lubin, Karan Sikka, Xiao Lin, Dan Jurafsky, and Ajay Divakaran. 2019. Integrating text and image: Determining multimodal document intent in instagram posts. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 4621-4631. Association for Computational Linguistics.
+Jing Li, Yan Song, Zhongyu Wei, and Kam-Fai Wong. 2018. A joint model of conversational discourse and latent topics on microblogs. Comput. Linguistics, 44(4).
+Manling Li, Alireza Zareian, Qi Zeng, Spencer Whitehead, Di Lu, Heng Ji, and Shih-Fu Chang. 2020. Cross-media structured common space for multimedia event extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational
+
+Linguistics, ACL 2020, Online, July 5-10, 2020, pages 2557-2568. Association for Computational Linguistics.
+Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. 2014a. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer.
+Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C. Lawrence Zitnick. 2014b. Microsoft COCO: common objects in context. In Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V, volume 8693 of Lecture Notes in Computer Science, pages 740-755. Springer.
+Linlin Liu, Xiang Lin, Shafiq R. Joty, Simeng Han, and Lidong Bing. 2019. Hierarchical pointer net parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 1007-1017. Association for Computational Linguistics.
+Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 13-23.
+Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image co-attention for visual question answering. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 289-297.
+William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243-281.
+Marzieh Mozafari, Reza Farahbakhsh, and Noel Crespi. 2019. A bert-based transfer learning approach for hate speech detection in online social media. In International Conference on Complex Networks and Their Applications, pages 928-940. Springer.
+Martin Müller, Marcel Salathé, and Per E Kummervold. 2020. Covid-twitter-bert: A natural language processing model to analyse COVID-19 content on twitter. arXiv preprint arXiv:2005.07503.
+Hyeonseob Nam, Jung-Woo Ha, and Jeonghee Kim. 2017. Dual attention networks for multimodal reasoning and matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 299-307.
+
+Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen. 2020. Bertweet: A pre-trained language model for english tweets. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 - Demos, Online, November 16-20, 2020, pages 9-14. Association for Computational Linguistics.
+Alex Nikolov and Victor Radivchev. 2019. Nikolov-radivchev at semeval-2019 task 6: Offensive tweet classification with bert and ensembles. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 691-695.
+Cesc Chunseong Park, Byeongchang Kim, and Gunhee Kim. 2019. Towards personalized image captioning via multimodal memory networks. IEEE Trans. Pattern Anal. Mach. Intell., 41(4):999-1012.
+Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Milt-sakaki, Livio Robaldo, Aravind K. Joshi, and Bonnie L. Webber. 2008. The penn discourse treebank 2.0. In Proceedings of the International Conference on Language Resources and Evaluation, LREC 2008, 26 May - 1 June 2008, Marrakech, Morocco. European Language Resources Association.
+Lianhui Qin, Zhisong Zhang, and Hai Zhao. 2016. A stacking gated neural architecture for implicit discourse relation classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2263-2270, Austin, Texas. Association for Computational Linguistics.
+Keith Rayner. 2009. The 35th sir frederick bartlett lecture: Eye movements and attention in reading, scene perception, and visual search. Quarterly journal of experimental psychology, 62(8):1457-1506.
+Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7008-7024.
+Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. 2015. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211-252.
+Attapol Rutherford and Nianwen Xue. 2016. Robust non-explicit neural discourse parser in English and Chinese. In Proceedings of the CoNLL-16 shared task, pages 55-59, Berlin, Germany. Association for Computational Linguistics.
+Dinghan Shen, Xinyuan Zhang, Ricardo Henao, and Lawrence Carin. 2018. Improved semantic-aware network embedding with fine-grained word alignment. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1829-1838. Association for Computational Linguistics.
+
+Zhan Shi, Hui Liu, and Xiaodan Zhu. 2021. Enhancing descriptive image captioning with natural language inference. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 269-277, Online. Association for Computational Linguistics.
+Qingyi Si, Zheng Lin, Ming yu Zheng, Peng Fu, and Weiping Wang. 2021. Check it again: progressive visual question answering via visual entailment. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4101-4110, Online. Association for Computational Linguistics.
+Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. VL-BERT: pretraining of generic visual-linguistic representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Lin Sun, Jiquan Wang, Kai Zhang, Yindu Su, and Fangsheng Weng. 2021. Rpbert: A text-image relation propagation-based BERT model for multimodal NER. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 13860-13868. AAAI Press.
+Hao Tan and Mohit Bansal. 2019. LXMERT: learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 5099-5110. Association for Computational Linguistics.
+Quoc-Tuan Truong and Hady W Lauw. 2019. Vistanet: Visual aspect attention network for multimodal sentiment analysis. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 305-312.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
+Alakananda Vempala and Daniel Preotiac-Pietro. 2019. Categorizing and inferring the relationship between the text and image of twitter posts. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July
+
+28-August 2, 2019, Volume 1: Long Papers, pages 2830-2840. Association for Computational Linguistics.
+Danae Sánchez Villegas and Nikolaos Aletras. 2021. Point-of-interest type prediction using text and images. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November; 2021, pages 7785-7797. Association for Computational Linguistics.
+Yue Wang, Shafiq R. Joty, Michael R. Lyu, Irwin King, Caiming Xiong, and Steven C. H. Hoi. 2020a. VDBERT: A unified vision and dialog transformer with BERT. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 3325-3338. Association for Computational Linguistics.
+Yue Wang, Jing Li, Irwin King, Michael R. Lyu, and Shuming Shi. 2019. Microblog hashtag generation via encoding conversation contexts. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 1624-1633. Association for Computational Linguistics.
+Yue Wang, Jing Li, Michael R. Lyu, and Irwin King. 2020b. Cross-media keyphrase prediction: A unified framework with multi-modality multi-head attention and image wordings. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 3311-3324. Association for Computational Linguistics.
+Yue Wang, Jing Li, Michael R Lyu, and Irwin King. 2020c. Cross-media keyphrase prediction: A unified framework with multi-modality multi-head attention and image wordings. arXiv preprint arXiv:2011.01565.
+Huijuan Xu and Kate Saenko. 2016. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part VII, volume 9911 of Lecture Notes in Computer Science, pages 451-466. Springer.
+Jiacheng Xu, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Discourse-aware neural extractive text summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5021-5031. Association for Computational Linguistics.
+Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell:
+
+Neural image caption generation with visual attention. In International conference on machine learning, pages 2048-2057. PMLR.
+Yang Xu, Yu Hong, Huibin Ruan, Jianmin Yao, Min Zhang, and Guodong Zhou. 2018. Using active learning to expand training data for implicit discourse relation recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 725-731. Association for Computational Linguistics.
+Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67-78.
+Jianfei Yu, Jing Jiang, Li Yang, and Rui Xia. 2020. Improving multimodal named entity recognition via entity span detection with unified multimodal transformer. Association for Computational Linguistics.
+Tiezheng Yu, Wenliang Dai, Zihan Liu, and Pascale Fung. 2021. Vision guided generative pre-trained language models for multimodal abstractive summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3995-4007, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. 2019. Deep modular co-attention networks for visual question answering. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 6281-6290. Computer Vision Foundation / IEEE.
+Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. 2021. Vinvl: Revisiting visual representations in vision-language models. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pages 5579-5588. Computer Vision Foundation / IEEE.
+Suwei Zhang, Yuan Yao, Feng Xu, Hanghang Tong, Xiaohui Yan, and Jian Lu. 2019. Hashtag recommendation for photo sharing services. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 5805-5812. AAAI Press.
+Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, and Jianfeng Gao. 2020. Unified vision-language pre-training for image captioning and VQA. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on
\ No newline at end of file
diff --git a/understandingsocialmediacrossmodalitydiscourseinlinguisticspace/images.zip b/understandingsocialmediacrossmodalitydiscourseinlinguisticspace/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..6ba5490fe5f00b2300256b4b6940cf6bf5859002
--- /dev/null
+++ b/understandingsocialmediacrossmodalitydiscourseinlinguisticspace/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bfc4a0f5b80df61ce382e143ac852a0a0550621711c2aab6cc7332d9f32cb431
+size 370633
diff --git a/understandingsocialmediacrossmodalitydiscourseinlinguisticspace/layout.json b/understandingsocialmediacrossmodalitydiscourseinlinguisticspace/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..695ffccc461b80a99a18b3d5c2c165679b3c70c4
--- /dev/null
+++ b/understandingsocialmediacrossmodalitydiscourseinlinguisticspace/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cf378275e248909177a4dd7d4ad803b391dc938fc78f6492a83d66563f08beac
+size 393783
diff --git a/unsuperviseddomainadaptationforjointinformationextraction/9fc2da9c-133b-41f3-93aa-78b7b8619cdc_content_list.json b/unsuperviseddomainadaptationforjointinformationextraction/9fc2da9c-133b-41f3-93aa-78b7b8619cdc_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..51c15a45d83f1bba58dced4d002cdca91a7a5cc2
--- /dev/null
+++ b/unsuperviseddomainadaptationforjointinformationextraction/9fc2da9c-133b-41f3-93aa-78b7b8619cdc_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ececa8a39c643ac4f663e0d0b6f577eec39c0208be405fe20aa38ee9b3ac10dd
+size 79848
diff --git a/unsuperviseddomainadaptationforjointinformationextraction/9fc2da9c-133b-41f3-93aa-78b7b8619cdc_model.json b/unsuperviseddomainadaptationforjointinformationextraction/9fc2da9c-133b-41f3-93aa-78b7b8619cdc_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..01218317f6a5128bf62b5e56788fe80d7760828f
--- /dev/null
+++ b/unsuperviseddomainadaptationforjointinformationextraction/9fc2da9c-133b-41f3-93aa-78b7b8619cdc_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d2c1f0322dc7eef5c25beea34fbec34d7aa6915ed82d84b982a7ff95ab04a3f
+size 94344
diff --git a/unsuperviseddomainadaptationforjointinformationextraction/9fc2da9c-133b-41f3-93aa-78b7b8619cdc_origin.pdf b/unsuperviseddomainadaptationforjointinformationextraction/9fc2da9c-133b-41f3-93aa-78b7b8619cdc_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c297a2655223885286f8b8a7774cf595878a62e1
--- /dev/null
+++ b/unsuperviseddomainadaptationforjointinformationextraction/9fc2da9c-133b-41f3-93aa-78b7b8619cdc_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:944fee380470c5e1b6f0a8d98c8bb247c71d3caae35107129d72326ef1e797f6
+size 761055
diff --git a/unsuperviseddomainadaptationforjointinformationextraction/full.md b/unsuperviseddomainadaptationforjointinformationextraction/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..15030904b3288937da9016d8eca673d700679d32
--- /dev/null
+++ b/unsuperviseddomainadaptationforjointinformationextraction/full.md
@@ -0,0 +1,250 @@
+# Unsupervised Domain Adaptation for Joint Information Extraction
+
+Nghia Trung Ngo1, Bonan Min2* and Thien Huu Nguyen1
+
+$^{1}$ Department of Computer Science, University of Oregon, Eugene, OR, USA
+
+2 Amazon AWS AI Labs
+
+{nghian@,thien@cs}.uoregon.edu,
+
+bonanmin@amazon.com
+
+# Abstract
+
+Joint Information Extraction (JIE) aims to jointly solve multiple tasks in the Information Extraction pipeline (e.g., entity mention, event trigger, relation, and event argument extraction). Due to their ability to leverage task dependencies and avoid error propagation, JIE models have presented state-of-the-art performance for different IE tasks. However, an issue with current JIE methods is that they only focus on standard supervised learning setting where training and test data comes from the same domain. Cross-domain/domain adaptation learning with training and test data in different domains have not been explored for JIE, thus hindering the application of this technology to different domains in practice. To address this issue, our work introduces the first study to evaluate performance of JIE models in unsupervised domain adaptation setting. In addition, we present a novel method to induce domain-invariant representations for the tasks in JIE, called Domain Adaptation for Joint Information Extraction (DA4JIE). In DA4JIE, we propose an Instance-relational Domain Adaptation mechanism that seeks to align representations of task instances in JIE across domains through a generalized version of domain-adversarial learning approach. We further devise a Context-invariant Structure Learning technique to filter domain-specialized contextual information from induced representations to boost performance of JIE models in new domains. Extensive experiments and analyses demonstrate that DA4JIE can significantly improve out-of-domain performance for current state-of-the-art JIE systems for all IE tasks.
+
+# 1 Introduction
+
+An information extraction (IE) system extracting structured information from unstructured text typically involves four major tasks: event trigger detection (ETD), event argument extraction (EAE),
+
+
+
+
+Figure 1: Top figures demonstrate the difference between DANN (a) and IrDA (b). Bottom figures are the relation graphs following the strict uniform alignment of standard DA methods (c) and the chain connection of IrDA (d).
+
+entity mention extraction (EME), and relation extraction (RE). Recently, the advance of large-scale pre-trained language model has made it possible to replace the classical pipeline approaches (Li et al., 2013; Chen et al., 2015), which suffer from error propagation, with a single transformer-based model performing all four tasks jointly, i.e. Joint Information Extraction approaches (JIE) (Lin et al., 2020; Nguyen et al., 2021). While effective in standard supervised learning scenario, these modern JIE systems fail to address the practical setting where training data (i.e., the source) and testing data (i.e., the target) come from different domains with different distributions. Such discrepancies pose a major challenge due to both the intrinsic variations of linguistics (e.g., lexical and semantic shifts) as well as extrinsic factors such as how textual datasets are collected and annotated. The problem is further exacerbated when the models aim to jointly learn multiple tasks, facing various kinds of domain shifts simultaneously. For example, in a Die event where a Person entity mention is
+
+a Victim event argument, documents recording this type of event in medical records may express these instances in a significantly distinct manner compared to when new anchors report similar tragic incidents.
+
+To address domain difference for IE, a major approach involves unsupervised domain adaptation (UDA) where models leverage additional unlabeled data in target domain together with labeled training set from source domain to improve the performance on target domain. As such, the majority of existing UDA methods have focused on transfer learning between source and target domains for a single IE task (Long et al., 2015; Ganin et al., 2016; Kumar et al., 2018). While recent work also aims to generalize previous approaches to multi-domains setting (Dai et al., 2020; Wright and Augenstein, 2020), the scenario where the considered task involves multiple objectives (as in JIE) with different input distributions remains unexplored. In particular, classical UDA approaches for IE often rely on simplification assumption on the factorization of the joint input-output distribution of an IE task to categorize and solve a specific domain shift problem. An example includes covariate shift where the discrepancy is assumed to be only in the marginal input distribution whereas predictive dependency remains unchanged (Kull and Flach, 2014). However, in JIE, this assumption does not hold and thus necessitates new domain adaptation methods to address UDA for JIE.
+
+To this end, our work introduces a new UDA method for JIE, called DA4JIE. At the core of DA4JIE is an Instance-relational Domain Adaptation (IrDA) module that seeks to simultaneously align instance representations for all downstream tasks in JIE in the source and target domains. Inspired by Graph-relational Domain Adaptation (GrDA) proposed by (Xu et al., 2022) for heterogeneous domain adaptation, we view event trigger and entity mention instances of each domain as domain nodes on a domain-instance relational graph, whose adjacency matrix controls the relationship between domain-specific representations (Fig. 1a). In particular, an edge connecting two instances implies that their representations should be aligned, which is equivalent to their pairwise relationship containing no information to identify their domains. This is achieved by an adversarial learning process on pairwise node relationships. Specifically, a graph discriminator is employed to recover the
+
+domain-instance graph via the adjacency structure. Conversely, the text encoder for JIE would prevent the discriminator from doing so. IrDA is a generalization of the standard domain-adversarial training method (Ganin et al., 2016) that enforces strict uniform alignment (fully-connect relational graph) as depicted in Fig. 1c. In contrast, our approach assumes a chain connection across instance nodes (Fig. 1d) that reflects the true relationship among instance types, allowing flexible and effective adaptation to new domains for JIE.
+
+In addition, to improve task performance, previous JIE systems have leveraged specialized linguistic structures extracted from input sentences in a heuristic and direct manner, e.g., using heuristic-based dependency graphs between instances in different tasks in JIE (Lin et al., 2020; Veyseh et al., 2020b; Nguyen et al., 2021). However, this approach is not suitable for domain adaptation as it further introduces more domain-specific context-dependency information into the learned representations. To address this problem, we incorporate a novel Context-invariant Structure Learning module (CiSL) into the instance encoding process. CiSL uses graph transformer networks (GTN) (Yun et al., 2019) to fuse different types of context-independent graphs into a single context-invariant graph (CiG) for each input sentence. Here, instance node features are combined with contextual representation to encourage the model to use domain-invariant information for downstream tasks. In addition, by viewing each input sentence as a graph with word-level nodes to induce word representations, we obtain richer instance representations for JIE by aggregating word-level representations. As such, our method also proposes a novel a CiG-conditioned pooling operation to enhance instance representations for classification tasks and boost the overall adaptation performance for JIE.
+
+Finally, we provide extensive evaluation of the proposed UDA method for JIE on the ACE-05 dataset (Walker et al., 2005). The experimental results demonstrate the advantages of DA4JIE that achieves state-of-the-art (SOTA) performance when being adapted to multiple target domains.
+
+# 2 Related Work
+
+# 2.1 Joint Information Extraction
+
+Classical methods for IE manually engineered linguistic features to capture the dependency between IE tasks, including Integer Linear Programming
+
+for Global Constraints (Roth and Yih, 2004), Structured Perceptron (Miwa and Sasaki, 2014; Judea and Strube, 2016), and Graphical Models (Yu and Lam, 2010; Yang and Mitchell, 2016). The advance of deep learning and large-scale language models (Devlin et al., 2019) has greatly enhanced the representation ability of modern IE models, enabling them to jointly solve multiple tasks via the shared contextual embeddings. These joint models focused on different sets of IE tasks such as EME and RE (Zheng et al., 2017; Fu et al., 2019; Luan et al., 2019; Veyseh et al., 2020a), and ETD and EAE (Nguyen et al., 2016; Zhang et al., 2019; Nguyen and Nguyen, 2019). Recently, some efforts have been made to address the four tasks all together by introducing specialized structures and regularizations to model the joint instance distribution across tasks (Lin et al., 2020; Nguyen et al., 2021, 2022). Our work continues in their direction, but in UDA setting which is more difficult but also much more practical compared to the standard supervised learning setting.
+
+# 2.2 Unsupervised Domain Adaptation
+
+The main line of research on UDA approaches the domain shift problem by learning domain-invariant representations, which is either achieved by explicitly reducing the distance between source and target feature space measured by some distribution discrepancy metric (Long et al., 2015; Zellinger et al., 2017), or by adversarial training in which the feature extractor is trained to fool a domain classifier, both are jointly optimized to arrive at an aligned feature space (Ganin et al., 2016). We focus on applying the latter in transformer-based model (BERT) for IE tasks. In particular, there has been several prior works addressing UDA setting for a singular IE task, including event trigger identification (Naik and Rose, 2020), event detection (Ngo et al., 2021; Trung et al., 2022), and relation extraction (Fu et al., 2017). However, a method specifically tackles joint task learning in UDA is still absent from the literature to the best of our knowledge. Our IrDA is the first to explicitly take into account multiple representations of different tasks when transferring between source and target domains.
+
+# 3 Model
+
+# 3.1 Problem Statement
+
+The JIE problem composes of four tasks EME, ETD, RE, and EAE. Given an input sentence, a unified model is used to optimize a linear combination of each task objective. In particular, EME aims to detect and classify entity mentions (names, nominals, pronouns) according to a set of predefined (semantic) entity class (e.g., Person). Similarly, ETD seeks to identify and classify event triggers (verbs or normalization) that clearly evoke an event in a given set of event classes (e.g., Attack). Note that event triggers can involve multiple words. Next, RE objective is to predict the semantic relationship between two entity mentions in the sentence. Finally, in EAE, given an event trigger, the systems need to predict the roles that each entity mention plays in the corresponding event. Entity mentions are thus also called event argument candidates in this work. Noted that the sets of relations and roles are pre-determined and include a special class of None to indicate negative category.
+
+In UDA setting, data comes from two different domains. For training, we have a labeled source dataset $\mathbf{S}$ consisted of $N^s$ samples and an unlabeled set $\mathbf{T}$ of $N^t$ samples drawn from target domain. The goal is to leverage both datasets to optimize model performance on test data from target domain. At each iteration, a mini-batch consists of samples from both $\mathbf{S}$ and $\mathbf{T}$ is sampled, the former are used to learn the main downstream tasks using their true labels, while the latter are employed to impose a domain-invariant constraint on the extracted features.
+
+# 3.2 JIE Architecture
+
+The following encoding process is applied to data from both domains, thus we omit the domain index in notations for brevity. Given an input sentence $\mathbf{w} = [w_{1},w_{2},\dots ,w_{n}]$ with $n$ words, the model first identifies span of an instance, which can be an entity mention or an event trigger, in $\mathbf{w}$ and then compute its representation for downstream tasks. In particular, following Lin et al. (2020), two conditional random field (CRF) layers, one for event triggers and another for event mentions, take in as input word-level contextual representation sequence $\mathbf{X} = [\mathbf{x}_1,\mathbf{x}_2,\ldots ,\mathbf{x}_n](\mathbf{x_i}\in \mathbb{R}^{\mathbf{h}}$ is obtained by averaging the word-pieces' hidden vectors of $w_{i}$ returned by the transformer encoder, e.g., BERT). The CRFs output the best BIO tag se
+
+
+Figure 2: The overall architecture of our framework DA4JIE. First, input sentences from both source and target domains go through the same transformer encoder to compute their contextual representations. Concurrently, the CiSL module (pink) extracts the attention probability matrices at each layer to create attention graphs, using position embeddings as node features. These graphs are used to augment the dependency graphs, which are then fused across layers by a GTN to create a context-invariant graph. The node features of which are combined with the contextual representations as input for the instance span detection task using CRF layers. Next, the instance representations are computed based on the outputted spans conditioned on the context-invariant graph. Finally, source instances are used to optimize the encoder for the main JIE tasks (blue), while the IrDA module (green) takes the representations from the corresponding instance types for the nodes in the type-relational graph to calculate the discriminator loss.
+
+quences (Chiu and Nichols, 2016) to indicate event trigger and entity mention/event argument spans (i.e., no label prediction yet) in $w$ , which are then used to compute their representations $\mathbf{E}_{tr}$ and $\mathbf{E}_{ar}$ (each can contain multiple instances) by aggregating information from words in the corresponding spans. Finally, separate task-specific feed-forward networks are used to calculate label scores from $\mathbf{E}_{ar}$ , $\mathbf{E}_{tr}$ , $(\mathbf{E}_{ar}, \mathbf{E}_{ar})$ (i.e., pairs of entity mention spans), and $(\mathbf{E}_{tr}, \mathbf{E}_{ar})$ (i.e., pairs of entity mentions and event triggers) in cross-entropy losses for EME, ETD, RE and EAE respectively. Note that entity mentions/event arguments and event triggers are commonly called "instances" for the tasks in JIE.
+
+For UDA, we follow the domain-adversarial training process in DANN (Ganin et al., 2016). The same encoder $E$ is used to compute instance representations for JIE from input sentences in source and target domains. The source representations are then fed into a classification head $F$ for main task learning. Concurrently, a domain discriminator $D$ is employed taking as input representations of unlabeled samples from both domains to predict their corresponding origins. By pushing $E$ to both minimize the main task losses and maximally
+
+misdirect $D$ , the resulting representations will be both discriminative for the tasks at hand and indistinguishable to the domain classifier to boost performance in the target domain.
+
+# 3.3 Instance-relational Domain Adaptation
+
+Existing domain adaptation methods such as DANN tend to view all domains equally and ignore any topological structure among different domains to align them all perfectly. Recently, Xu et al. (2022) propose Graph-relational Domain Adaptation to generalize DANN to multi-domains adaptation setting by introducing a domain graph that captures domain heterogeneity. Each node of the graph represents a domain and a relation between two domains can be captured by an edge. By tailoring the adaptation of domains to a domain graph that reflects the true domain relationships, GrDA relaxes the uniform alignment to adapt more flexibly across domains. We adopt GrDA to solve the problem of UDA for multiple tasks in JIE by treating each of the task (i.e., EME and ETD) in two domains as a node in the type-relational graph $\mathbf{G}_r = (\mathbf{V}_r, \mathbf{A}_r)$ (i.e., type here refers to a combination of a task and a domain). Specifically, the vertex set $\mathbf{V}_r$ con
+
+sists of four nodes $\mathbf{E}_{ar}^{s},\mathbf{E}_{tr}^{s},\mathbf{E}_{ar}^{t}$ , and $\mathbf{E}_{tr}^{t}$ , and the adjacency matrix $\mathbf{A}_r\in \mathbb{R}^{4\times 4}$ dictates which pair of types should be aligned by setting the value of corresponding position to 1. We assume a chain connection in the respective mentioned order for $\mathbf{G}_r$ (i.e., in Fig. 1d), on which detailed analysis will be provided later to justify the assumption. IrDA performs a minimax optimization similar to that of DANN with the following objective:
+
+$$
+\min _ {E, F} \max _ {D} L _ {c} ^ {s} (E, F) - \lambda L _ {d} ^ {s - t} (D _ {g}, E),
+$$
+
+where $\lambda$ is a balancing term and $L_{c}^{s}$ is the combined loss for label prediction for JIE tasks in source domain, which depends on the encoder $E$ and classification head $F$ . Different from DANN where the discriminator predicts the domain identity given representations in the domains (Fig. 1a), the discriminator objective $L_{d}^{s - t}$ aims to reconstruct the type-relational graph $\mathbf{G}_r$ given the encoding of data from different types (Fig. 1b). In particular, the graph discriminator $D_{g}$ computes the pairwise relationship $\hat{a}_{ij}$ between two instance representations $e_i$ and $e_j$ (described in next section) for types $i$ and $j$ : $\hat{a}_{ij} = e_i^T e_j$ , which is then used as input to the discriminator loss:
+
+$$
+\begin{array}{l} L _ {d} ^ {s - t} = \sum_ {i, j} l (\hat {a} _ {i j}), \\ l (\hat {a} _ {i j}) = - a _ {i j} l o g \sigma (\hat {a} _ {i j}) - (1 - a _ {i j}) l o g (1 - \sigma (\hat {a} _ {i j})), \\ \end{array}
+$$
+
+where $a_{ij}$ is the value of the edge between type $i$ and $j$ from the adjacency matrix $\mathbf{A}_r$ . Intuitively, $D_g$ aims to recover the relation graph $\mathbf{G}_r$ via the adjacency structure $\mathbf{A}_r$ , while $E$ seeks to prevent it from doing so. At equilibrium, the representations of two connected types will provide no information regarding their connection in $\mathbf{A}_r$ . In other words, these types are aligned as we cannot infer their origins based on their representations.
+
+# 3.4 Context-invariant Structure Learning
+
+One problem with domain-adversarial training based methods in general is that they are sensitive to the amount of discrepancy between source and target domains. In particular, DANN's bound on target performance (David et al., 2010a) also depends on the loss of the ideal model to perform the main task on both domains. Accordingly, as the ideal model's loss is often assumed to be negligible for a single prediction task, it is ignored in the modeling process for DANN. However, in our
+
+setting with JIE, this simplification might be suboptimal as the combination of multiple tasks might increase the ideal model's loss, thus necessitating approaches to minimize this component for JIE in DA. In fact, if this component is not constrained, DANN will have little alignment effect for the representations while also worsening joint error term (David et al., 2010b; Wu et al., 2019). To this end, our proposal to minimize the ideal model's loss is to learn more transferable representations to facilitate its prediction in different domains. As such, we introduce a Context-invariant Structure Learning (CiSL) mechanism that aims to induce domain-general structures for input texts to better support transferable representation learning for JIE.
+
+CiSL first creates domain-independent structure by combining linguistic and attention graphs extracted from the input sentence. For linguistic graph, we employ dependency trees that prior work found to be useful for IE tasks (Veyseh et al., 2020b). In particular, a graph $\mathbf{G}_d = (\mathbf{V}_d,\mathbf{A}_d)$ is constructed for each sentence based on the output of an off-the-shelf syntactic dependency parser, where $\mathbf{V}_d$ is a set of word-level nodes whose features are obtained by embedding the dependency relations between a word and its governor (embeddings are learnable parameters). The adjacency matrix $\mathbf{A}_d$ is a binary matrix whose cell $(i,j)$ is only set to 1 if $w_{j}$ is the governor of $w_{i}$ in the dependency tree. We create augmented versions of $\mathbf{G}_d$ to reduce its sparsity and increase transferability, by merging it with attention graphs extracted from the output of each layer of the transformer encoder. Specifically, define an attention graph at layer $l$ as $\mathbf{G}_a^l = (\mathbf{V}_a^l,\mathbf{A}_a^l)$ , which composes of the transformer's position embeddings as node features for word-level nodes in $\mathbf{V}_a^l$ and attention probability matrix as adjacency matrix $\mathbf{A}_a^l$ ( $1\leq l\leq L$ , where $L$ is number of encoder's layers). The resulting attention-augmented dependency graphs $\mathbf{G}_{da}^{l} = (\mathbf{V}_{da}^{l},\mathbf{A}_{da}^{l})$ are computed as follow:
+
+$$
+\begin{array}{l} \mathbf {A} _ {d a} ^ {l} = \alpha_ {a} ^ {l} \mathbf {A} _ {a} ^ {l} + \alpha_ {d} ^ {l} \mathbf {A} _ {d}, \\ \mathbf {Z} _ {d a} ^ {l} = \beta_ {a} ^ {l} \mathbf {Z} _ {a} ^ {l} + \beta_ {d} ^ {l} \mathbf {Z} _ {d}, \\ \end{array}
+$$
+
+where $\{\alpha_{a}^{l},\alpha_{d}^{l},\beta_{a}^{l},\beta_{d}^{l}\}_{l = 1}^{L}$ are learnable weights, and the Zs are the node representations of corresponding graphs. These graphs are context-independent in the sense that no word embedding information is explicitly included in their node features, whereas their adjacency matrices reflect relation among words that are universal across do
+
+mains in natural language. Finally, CiSL employs a Graph Transformer Network (Yun et al., 2019) to fuse the attention-augmented dependency graphs across all layers into a single context-invariant graph $\mathbf{G}_{ci} = (\mathbf{V}_{ci},\mathbf{A}_{ci})$ with $\mathbf{A}_{ci}\in \mathbb{R}^{n\times n}$ and node features $\mathbf{Z}_{ci}\in \mathcal{R}^{n\times h}$ .
+
+To incorporate $\mathbf{G}_{ci}$ into the instance representation learning process, we add the node features $\mathbf{Z}_{ci}$ to the contextual representation $\mathbf{X}$ , resulting in: $\mathbf{X}_{ci} = \mathbf{X} + \mathbf{Z}_{ci}$ as input to the CRF layers. This encourages downstream tasks to leverage more context-independent information from $\mathbf{Z}_{ci}$ instead of just relying on the domain-specific features $\mathbf{X}$ .
+
+Additionally, by viewing the input sentence as a graph with word-level nodes can be pooled obtain instance-level nodes, we introduce a CiG-conditioned pooling operation to create final instance representations for classification tasks. Each of the BIO tag sequence outputted by the CRF layers can be reformulated into a binary assignment matrix $\mathbf{S}_{base} \in \mathbb{R}^{n \times m}$ , where $m \leq n$ is the number of spans detected, and $\mathbf{S}_{base_{ij}} = 1$ only if word $i$ lies inside span $j$ . Prior JIE systems simply compute each instance representation by summing its span's words: $e_j = \sum_{i; w_i \in span_j} x_i$ , thus solely relying on the text used to express the instance's meaning in the specific context. Accordingly, the previous equation (for each instance type) can also be formulated in matrix form as follow:
+
+$$
+\mathbf {E} = \mathbf {S} ^ {T} \mathbf {X} _ {c i}, w h e r e \mathbf {S} = \mathbf {S} _ {b a s e}.
+$$
+
+To this end, instead of fixing the assignment matrix, we propose to learn $\mathbf{S}$ by conditioning its on the context-invariant graph $\mathbf{G}_{ci}$ as follows:
+
+$$
+\mathbf {S} _ {c i} = \gamma \odot \mathbf {S} _ {\text {b a s e}} + \mu ,
+$$
+
+where $(\gamma, \mu) = GCN(\mathbf{Z}_{ci}, \mathbf{A}_{ci}) \in \mathbb{R}^{n \times 2m}$ are the outputs of a graph convolution network (Kipf and Welling, 2017; Nguyen and Grishman, 2018) taking in $\mathbf{G}_{ci}$ as input. Finally, the instance representation for label prediction is computed via:
+
+$$
+\mathbf {E} = \mathbf {S} _ {c i} ^ {T} \mathbf {X} _ {c i} = \left(\gamma \odot \mathbf {S} _ {b a s e}\right) ^ {T} \mathbf {X} _ {c i} + \mu^ {T} \mathbf {X} _ {c i},
+$$
+
+which is able to aggregate information over all words in the sentence through $\mu$ , and conversely suppress the role of the domain-related span's words through $\gamma$ .
+
+# 4 Experiments
+
+# 4.1 Dataset, Settings, and Baselines
+
+ACE-05 Following the prior works on JIE (Lin et al., 2020; Nguyen et al., 2021), we evaluate
+
+Out-of-domain
+
+
in
bc
cts
wl
un
aDom
BERT
Trigger-I
78.4
71.4
65.2
62.9
66.3
66.4
Role-I
64.1
59.5
49.0
46.3
46.9
50.4
Entity
88.9
80.8
84.0
85.5
80.9
82.8
Relation-C
64.3
61.7
58.0
52.5
48.0
55.0
Trigger-C
76.3
68.7
62.4
56.3
64.5
63.0
Role-C
60.8
55.4
47.9
42.9
43.0
47.3
aTask
72.6
66.6
63.1
59.3
59.1
62.0
OneIE
Trigger-I
79.1
70.3
68.2
63.2
64.6
66.6
Role-I
66.2
60.1
51.2
50.6
46.7
52.1
Entity
89.1
79.5
86.9
85.5
81.5
83.4
Relation-C
65.6
63.1
56.7
54.7
50.0
56.1
Trigger-C
77.2
67.5
64.6
56.8
63.4
63.1
Role-C
62.2
55.7
49.9
47.2
42.6
48.9
aTask
73.5
66.5
64.6
61.1
59.4
62.9
FourIE
Trigger-I
79.1
70.7
66.0
65.2
64.3
66.6
Role-I
66.6
60.0
52.6
48.9
49.1
52.6
Entity
89.1
80.3
84.4
85.4
81.9
83.0
Relation-C
66.0
63.7
56.6
53.1
52.7
56.5
Trigger-C
76.9
68.5
63.2
56.4
62.4
62.6
Role-C
61.8
55.4
51.8
44.5
43.6
48.8
aTask
73.5
66.9
64.0
59.8
60.1
62.7
DA4JIE
Trigger-I
79.0
72.2
66.0
64.4
66.5
67.3
Role-I
67.3
59.0
54.6
49.8
51.5
53.7
Entity
89.2
82.6
86.0
85.2
83.0
84.2
Relation-C
68.8
65.3
58.7
57.7
54.3
59.0
Trigger-C
76.5
68.7
63.0
57.4
64.1
63.3
Role-C
62.5
55.6
51.9
45.3
44.4
49.3
aTask
74.2
68.0
65.0
61.4
61.4
64.0
+
+Table 1: F1 scores of the models on ACE-05 test data for indomain (in) and out-of-domain (bc, cts, wl, un) adaptation settings. The suffixes “-I” and “-C” correspond to the identification performance (only concerning the offset correctness) and identification+classification performance (evaluating both offsets and classes). aTask is the average score over the four classification tasks, and aDom is the average out-of-domain score for each task.
+
+DA4JIE on the ACE-05 dataset (Walker et al., 2005) which provides annotations in 599 documents for entity mentions, event triggers, relations, and argument roles. In particular, there are 33 event classes, 7 entity classes, 6 relation classes, and 22 argument roles. ACE-05 was collected from 6 different domains: bn, nw, bc, cts, wl, and un. For UDA setting, we follow Ngo et al. (2021) and gather data from two closely related domains, bn and nw, to create a sizable source domain dataset and refer to it as in domain. We use $80\%$ of its documents for training whilst the rest are used for development. For out-of-domain (OOD) setting, each of the other domains is considered a target domain of a single adaptation scenario, where $20\%$ of its documents are reserved for unlabeled training target data and the remainders are utilized as the test dataset. We use the same data processing scripts as in (Lin et al., 2020; Nguyen et al., 2021) for consistency.
+
+Baselines We compare DA4JIE with the following current SOTA JIE systems: (i) BERT (Devlin et al., 2019) uses a shared Transformer encoder to represent the instances for ETD, EME, EAE, and RE and performs classification for the instances
+
+based on the task-specific label distributions. (ii) OneIE (Lin et al., 2020) is same as BERT, but leverages set of predefined global features to capture the cross-subtask and cross-instance interactions. (iii) FourIE (Nguyen et al., 2021) creates a graph structure of contextual representations to explicitly capture the interactions between related instances of the four IE tasks in a sentence, while also employing a heuristic dependency between the task instances in a dependency-based regularization to further boost the performance of the models. OneIE and FourIE are current state-of-the-art models for JIE.
+
+Implementation Details and Hyper-parameters All models are implemented in Pytorch. We leverage the pre-trained BERT-large-cased models and checkpoints from Huggingface repository (Wolf et al., 2020). To achieve a fair comparison with the baselines, we follow the same evaluation script and correctness criteria for entity mentions, event triggers, relations, and arguments as in prior work (Lin et al., 2020). To tune each model over in-domain development data, we use Adam optimizer with learning rates chosen from [5e-5, 1e-4, 5e-4, 1e-3, 5e-3], mini-batch size from [16, 32, 64] of which $50\%$ are unlabeled target data. We use GCNs with 2 or 3 layers and GTN with number of channels in [2, 4, 8]. All of the downstream heads are implemented as 2 or 3 layers feed-forward networks with hidden vectors of size [100, 50] or [200, 100, 50], respectively. The IrDA balancing term $\lambda$ is picked from the range [0.1, 0.5, 1, 5, 10]. Every model is trained for 50 epochs for each target domain, from which the model with the best average task F1 score on the in-domain development set is then evaluated OOD on the test set of the corresponding target domain. Finally, our reported results are average of three runs using the best hyper-parameter configuration with different random seeds. The selected hyper-parameters for our model from the fine-tuning process include: 3 layers for the GCNs and feed-forward classification heads, GTN with 4 channels, 1e-5 for the learning rate with Adam optimizer, 16 for the batch size, and 1 for the balancing term $\lambda$ . In this work, we use a single Tesla V100-SXM2 GPU with 32GB memory for all experiments.
+
+# 4.2 Main Results
+
+Table 1 showcases the UDA results in F1 scores for all tasks in JIE. We observe that the latest
+
+systems for JIE such as OneIE and FourIE provide marginal improvement to the standard BERT model. In particular, while the specialized architectures of these models are able to boost in-domain performance as expected, they are not tailored to UDA settings where the focus is on extracting transferable and domain-invariant features. As a result, their effectiveness in out-of-domain (OOD) settings over BERT is situational (OneIE is good for cts and w1 domains, while FourIE is better at adapting to un domains). In contrast, DA4JIE manages to achieve the best adaptation performance in average across all considered domains. Our model is 2 points higher in F1 scores than BERT's overall, surpassing the current SOTA methods by over 1 point on average. Notably, this improvement is the result of the simultaneous increases in the average performance of all downstream tasks, which is achieved by combining IrDA and CiSL modules as shown in the following section.
+
+
bc
cts
wl
un
aDom
DA4JIE
Trigger-I
72.2
66.0
64.4
66.5
67.3
Role-I
59.0
54.6
49.8
51.5
53.7
Entity
82.6
86.0
85.2
83.0
84.2
Relation-C
65.3
58.7
57.7
54.3
59.0
Trigger-C
68.7
63.0
57.4
64.1
63.3
Role-C
55.6
51.9
45.3
44.4
49.3
aTask
68.0
65.0
61.4
61.4
64.0
DA4JIE -CiSL
Trigger-I
71.6
65.9
64.1
64.7
66.6
Role-I
60.1
52.9
49.5
47.9
52.6
Entity
82.1
74.4
82.9
82.0
80.3
Relation-C
63.3
54.8
54.9
53.1
56.5
Trigger-C
69.3
63.8
56.3
62.9
63.1
Role-C
56.0
51.7
44.1
43.4
48.8
aTask
67.7
61.2
59.5
60.3
62.2
DA4JIE -IrDA
Trigger-I
66.4
64.8
64.9
66.9
66.9
Role-I
52.5
49.8
47.1
52.4
52.4
Entity
87.2
84.6
81.7
83.9
83.9
Relation-C
55.3
52.9
53.3
56.1
56.1
Trigger-C
63.4
57.7
63.5
63.3
63.3
Role-C
51.3
46.0
42.0
48.7
48.7
aTask
67.2
64.3
60.3
60.1
63.0
DA4JIE -IrDA -CiSL
Trigger-I
71.4
65.2
62.9
66.3
66.4
Role-I
59.5
49.0
46.3
46.9
50.4
Entity
80.8
84.0
85.5
80.9
82.8
Relation-C
61.7
58.0
52.5
48.0
55.0
Trigger-C
68.7
62.4
56.3
64.5
63.0
Role-C
55.4
47.9
42.9
43.0
47.3
aTask
66.6
63.1
59.3
59.1
62.0
+
+Table 2: Performance (F1 scores) for ablation study on the ACE-05 test datasets for different domains.
+
+# 4.3 Ablation study
+
+We conduct an ablation study to validate the effectiveness of each of our main components by investigating the following variations of our model by removing CiSL, IrDA, and both respectively. The results is shown in Table 2, where we observe that DA4JIE-IrDA noticeably boosts per
+
+formances for all domains compared to BERT (DA4JIE-IrDA-CiSL), while DA4JIE-CiSL only has positive impact when adapting to bc and un domains, providing little to no improvement on average. This is the result of CiSL making the instance representations more transferable at low-level, thus ensuring the necessary condition for the domain-adversarial training in IrDA to reach equilibrium. By combining both components, DA4JIE significantly outperforms other variants, especially when transferring to target domains that are highly dissimilar to source domains (i.e., w1 and un).
+
+
bc
cts
wl
un
aDom
None
66.6
63.1
59.3
59.1
62.0
Full
67.1
63.7
60.0
60.1
62.7
Pair-Task
67.5
63.2
60.0
61.1
62.9
Pair-Dom
67.0
62.7
59.1
59.5
62.1
Chain
68.0
65.0
61.4
61.4
64.0
+
+Table 3: Average task scores for domain-adversarial learning analysis. Performance (F1 scores) on the ACE-05 test datasets for different domains.
+
+
aId
aCls
CiSL
60.5
64.0
CiSL-Pool
59.7
63.2
CiSL-Node
59.4
62.3
CiSL-Node-Pool
58.0
62.0
CiSL-Dep
59.8
62.8
CiSL-Attn
59.2
62.5
+
+Table 4: Average identification and classification scores for CiSL analysis. Performance (F1 scores) on the ACE-05 test datasets for different domains. aId and aCls are the average scores across all new domains, of all identification and classification tasks, respectively.
+
+# 5 Analysis
+
+# 5.1 Instance-relational Graph Analysis
+
+We investigate the effect of IrDA with different patterns of relationships in the type-relational graph compared to our chain relation in DA4JIE. In Table 3, Full refers to the standard DANN approach where all types (i.e., task+domain) are uniformly aligned (Fig. 1c). Pair-Task and Pair-Dom are models with only a pair of edges in relation graph, the former connects the same task across domains $(\mathbf{E}_{ar}^{s} - \mathbf{E}_{ar}^{t}$ and $\mathbf{E}_{tr}^{s} - \mathbf{E}_{tr}^{t})$ , while the latter has tasks in the same domain linked $(\mathbf{E}_{tr}^{s} - \mathbf{E}_{ar}^{s}$ and $\mathbf{E}_{tr}^{t} - \mathbf{E}_{ar}^{t})$ . Finally, None means no adaptation is used and Chain corresponds to our assumption in DA4JIE. The results show that Full improves over None, but underperforms when compared to PairTask in most new domains. This indicates that the
+
+alignment imposed by Full is overly strict and not optimal when adapting multiple tasks together. In addition, appropriate connections are required for effective adaptation, as shown by the low scores of Pair-Dom, which basically is equivalent to domain-conditioning the representations without adapting between source and target domains.
+
+We argue that Chain is robust and substantially outperforms other models across all domains because it reflects the true relationship among the tasks and domains (types) for JIE. In particular, event triggers are restricted and closely related to the predefined event classes which are shared across domains, therefore their representations should be aligned when adapting to new domains. Conversely, event arguments (i.e., entity mentions) are more diverse and context-dependent, thus may significantly differ across domains and should not be directly connected in the relation graph. They are, however, implicitly connected in Chain through the event trigger nodes, which implies their representations are "weakly" aligned, as shown by Xu et al. (2022) where GrDA being able to enforce different levels of alignment. Lastly, the pair of trigger-event edges in source and target domains also equate to aligning the representations of trigger-event relation and help transfer model's role classification ability from source to target domain.
+
+# 5.2 Context-invariant Structure Learning
+
+To determine the role of different components in CiSL module, we analyze their contributions to DA4JIE performance at different levels of downstream tasks. In Table 4, CiSL-Dep and CiSL-Attn are the models without leveraging dependency graph and attention graph respectively. CiSL-Pool just uses the base assignment matrix for pooling, and CiSL-Node is the case where node features of the context-invariant graph are removed from the inputs for the CRF layers. Finally, we completely disable the CiSL module in CiSL-Node-Pool. From the results, it is clear that both node features and conditional pooling are responsible for the significant improvement of the final model. Particularly, adding the node features is more effective as it also helps boost the performance of identification tasks by making the representations more transferable from source to target domains at low-level. Furthermore, the last two rows in the table indicate that combining different kinds of structures has a positive impact, especially
+
+when they contain universal linguistic information that is general across domains.
+
+# 6 Conclusion
+
+We present DA4JIE, a novel framework that jointly solves four IE tasks (EME, ETD, RE, and EAE) in UDA setting. In particular, DA4JIE employs Instance-relational Domain Adaptation method that generalizes the standard domain-adversarial training approach to simultaneously align high-level type representations of all downstream tasks between domains. Additionally, we incorporate a Context-invariant Graph learning module into the encoder to encourage the usage of domain-independent information at low-level, thus extracting more transferable features to improve model's performance in new domains. The extensive experiments demonstrate the effectiveness of the proposed framework. In the future, we plan to extend our approach to more general settings such as multi-source domain adaptation with more IE subtasks such as entity/event coreference resolution.
+
+# Limitations
+
+We present the first work to tackle the joint information extraction problem in unsupervised domain adaptation setting. Our framework DA4JIE combines Instance-relations Domain Adaptation method with Context-invariant Structure Learning mechanism, outperforming state-of-the-art systems on ACE-05 consistently across multiple new domains. Despite positive empirical results, there are still several limitations that can be addressed in future works. First, the current model assumes a chain connection for the type-relational graph in IrDA. While intuitive and effective for the considered setting in this work, it is only designed manually. A method that explicitly learns to find the optimal connections for the relation graph might be able to produce better performance for our problem. Another issue is the limited kinds of linguistic structures that CiSL uses to create the context-invariant graph. Prior works have successfully improved IE tasks using semantic role labeling (Christensen et al., 2010) and abstract meaning representation (Zhang and Ji, 2021). Integrating structured graphs extracted from these methods is straightforward for DA4JIE and might improve model performance further.
+
+# Acknowledgement
+
+This research has been supported by the Army Research Office (ARO) grant W911NF-21-1-0112 and the NSF grant CNS-1747798 to the IUCRC Center for Big Learning. This research is also based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA Contract No. 2019-19051600006 under the Better Extraction from Text Towards Enhanced Retrieval (BETTER) Program. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ARO, ODNI, IARPA, the Department of Defense, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. This document does not contain technology or technical data controlled under either the U.S. International Traffic in Arms Regulations or the U.S. Export Administration Regulations.
+
+# References
+
+Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multipooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 167-176, Beijing, China. Association for Computational Linguistics.
+Jason P.C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357-370.
+Janara Christensen, Mausam, Stephen Soderland, and Oren Etzioni. 2010. Semantic role labeling for open information extraction. In Proceedings of the NAACL HLT 2010 First International Workshop on Formalisms and Methodology for Learning by Reading, pages 52-60, Los Angeles, California. Association for Computational Linguistics.
+Yong Dai, Jian Liu, Xiancong Ren, and Zenglin Xu. 2020. Adversarial training based multi-source unsupervised domain adaptation for sentiment analysis. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020,
+
+New York, NY, USA, February 7-12, 2020, pages 7618-7625. AAAI Press.
+Shai Ben David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. 2010a. A theory of learning from different domains. Machine learning, 79(1-2):151-175.
+Shai Ben David, Tyler Lu, Teresa Luu, and David Pal. 2010b. Impossibility theorems for domain adaptation. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 129-136, Chia Laguna Resort, Sardinia, Italy. PMLR.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT).
+Lisheng Fu, Thien Huu Nguyen, Bonan Min, and Ralph Grishman. 2017. Domain adaptation for relation extraction with domain adversarial neural network. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 425-429, Taipei, Taiwan. Asian Federation of Natural Language Processing.
+Tsu-Jui Fu, Peng-Hsuan Li, and Wei-Yun Ma. 2019. GraphRel: Modeling text as relational graphs for joint entity and relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1409-1418, Florence, Italy. Association for Computational Linguistics.
+Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario March, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1-35.
+Alex Judea and Michael Strube. 2016. Incremental global event extraction. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2279-2289, Osaka, Japan. The COLING 2016 Organizing Committee.
+Thomas N. Kipf and Max Welling. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In Proceedings of the 5th International Conference on Learning Representations, ICLR '17.
+Meelis Kull and Peter A. Flach. 2014. Patterns of dataset shift.
+Abhishek Kumar, Prasanna Sattigeri, Kahini Wadhawan, Leonid Karlinsky, Rogerio Feris, Bill Freeman, and Gregory Wornell. 2018. Co-regularized alignment for unsupervised domain adaptation. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc.
+
+Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 73-82, Sofia, Bulgaria. Association for Computational Linguistics.
+Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020. A joint neural model for information extraction with global features. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7999-8009, Online. Association for Computational Linguistics.
+Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I. Jordan. 2015. Learning transferable features with deep adaptation networks.
+Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3036-3046, Minneapolis, Minnesota. Association for Computational Linguistics.
+Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1858-1869, Doha, Qatar. Association for Computational Linguistics.
+Aakanksha Naik and Carolyn Rosé. 2020. Towards open domain event trigger identification using adversarial domain adaptation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).
+Nghia Trung Ngo, Duy Phung, and Thien Huu Nguyen. 2021. Unsupervised domain adaptation for event detection using domain-specific adapters. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, page 4015-4025. Association for Computational Linguistics.
+Minh Van Nguyen, Viet Lai, and Thien Huu Nguyen. 2021. Cross-task instance representation interactions and label dependencies for joint information extraction with graph convolutional networks. In *NAACL-HLT*, pages 27-38.
+Minh Van Nguyen, Bonan Min, Franck Dernoncourt, and Thien Nguyen. 2022. Joint extraction of entities, relations, and events via modeling inter-instance and inter-label dependencies. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4363-4374, Seattle, United States. Association for Computational Linguistics.
+
+Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 300-309, San Diego, California. Association for Computational Linguistics.
+Thien Huu Nguyen and Ralph Grishman. 2018. Graph convolutional networks with argument-aware pooling for event detection. In Proceedings of the AAAI Conference on Artificial Intelligence.
+Trung Minh Nguyen and Thien Huu Nguyen. 2019. One for all: Neural joint modeling of entities and events. In Proceedings of the AAAI Conference on Artificial Intelligence.
+Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004, pages 1-8, Boston, Massachusetts, USA. Association for Computational Linguistics.
+Nghia Ngo Trung, Linh Ngo Van, and Thien Huu Nguyen. 2022. Unsupervised domain adaptation for text classification via meta self-paced learning. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4741-4752, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
+Amir Veyseh, Franck Dernoncourt, Dejing Dou, and Thien Huu Nguyen. 2020a. Exploiting the syntax-model consistency for neural relation extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8021-8032, Online. Association for Computational Linguistics.
+Amir Veyseh, Tuan Ngo Nguyen, and Thien Huu Nguyen. 2020b. Graph transformer networks with syntactic and semantic structures for event argument extraction. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3651-3661, Online. Association for Computational Linguistics.
+Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2005. Ace 2005 multilingual training corpus. In Technical report, Linguistic Data Consortium.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System
+
+Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+Dustin Wright and Isabelle Augenstein. 2020. Transformer based multi-source domain adaptation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7963-7974, Online. Association for Computational Linguistics.
+Yifan Wu, Ezra Winston, Divyansh Kaushik, and Zachary Lipton. 2019. Domain adaptation with asymmetrically-relaxed distribution alignment. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 6872-6881. PMLR.
+Zihao Xu, Hao He, Guang-He Lee, Bernie Wang, and Hao Wang. 2022. Graph-relational domain adaptation. In International Conference on Learning Representations.
+Bishan Yang and Tom M. Mitchell. 2016. Joint extraction of events and entities within a document context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 289-299, San Diego, California. Association for Computational Linguistics.
+Xiaofeng Yu and Wai Lam. 2010. Jointly identifying entities and extracting relations in encyclopedia text via a graphical model approach. In *Coling* 2010: Posters, pages 1399–1407, Beijing, China. Coling 2010 Organizing Committee.
+Seongjun Yun, Minbyul Jeong, Raehyun Kim, Jaewoo Kang, and Hyunwoo J Kim. 2019. Graph transformer networks. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
+Werner Zellinger, Thomas Grubinger, Edwin Lughofer, Thomas Natschlager, and Susanne Saminger-Platz. 2017. Central moment discrepancy (cmd) for domain-invariant representation learning.
+Junchi Zhang, Yanxia Qin, Yue Zhang, Mengchi Liu, and Donghong Ji. 2019. Extracting entities and events as a single task using a transition-based neural model. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 5422-5428. International Joint Conferences on Artificial Intelligence Organization.
+Zixuan Zhang and Heng Ji. 2021. Abstract Meaning Representation guided graph encoding and decoding for joint information extraction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 39-49, Online. Association for Computational Linguistics.
+Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. 2017. Joint extraction of entities and relations based on a novel tagging scheme. In Proceedings of the 55th Annual Meeting
\ No newline at end of file
diff --git a/unsuperviseddomainadaptationforjointinformationextraction/images.zip b/unsuperviseddomainadaptationforjointinformationextraction/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..29238c3f5c712c8924a721225142bb589a68ccff
--- /dev/null
+++ b/unsuperviseddomainadaptationforjointinformationextraction/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef345758d9d1d27f26a12ab86b6b639498e47b5b01b587c5b900f9e9c492a3dc
+size 379531
diff --git a/unsuperviseddomainadaptationforjointinformationextraction/layout.json b/unsuperviseddomainadaptationforjointinformationextraction/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..95e15bfdf96193f3797f8802a654f682b48093d2
--- /dev/null
+++ b/unsuperviseddomainadaptationforjointinformationextraction/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3cae36a938ac61ee0cb7a951129a448b1b8dfb0db64d48e5c6bb4f2ab3a447e5
+size 340947
diff --git a/unsupervisedlearningofhierarchicalconversationstructure/f69f1dc6-ec09-4266-8131-7651db06c81d_content_list.json b/unsupervisedlearningofhierarchicalconversationstructure/f69f1dc6-ec09-4266-8131-7651db06c81d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..0328482b7bc4b9db6c5f962f50906d7e08eafed1
--- /dev/null
+++ b/unsupervisedlearningofhierarchicalconversationstructure/f69f1dc6-ec09-4266-8131-7651db06c81d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:48611961f2fcdb509f775b72be01c3a66d683163e22c672585fc601711da5836
+size 83207
diff --git a/unsupervisedlearningofhierarchicalconversationstructure/f69f1dc6-ec09-4266-8131-7651db06c81d_model.json b/unsupervisedlearningofhierarchicalconversationstructure/f69f1dc6-ec09-4266-8131-7651db06c81d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..4d9f93eb5a39c50819e953446afba5b4e5f7c263
--- /dev/null
+++ b/unsupervisedlearningofhierarchicalconversationstructure/f69f1dc6-ec09-4266-8131-7651db06c81d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ed0f3ac12150821ea6f4fe0d2af17a321677596eba06b61809eb11ba580b598
+size 100838
diff --git a/unsupervisedlearningofhierarchicalconversationstructure/f69f1dc6-ec09-4266-8131-7651db06c81d_origin.pdf b/unsupervisedlearningofhierarchicalconversationstructure/f69f1dc6-ec09-4266-8131-7651db06c81d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a95f7bc4005f16f454e7335950b27de1b4b15006
--- /dev/null
+++ b/unsupervisedlearningofhierarchicalconversationstructure/f69f1dc6-ec09-4266-8131-7651db06c81d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:27f31700899e2959e279d0693765fa5307e28b162aeadd2ef6e845a26ef623aa
+size 642629
diff --git a/unsupervisedlearningofhierarchicalconversationstructure/full.md b/unsupervisedlearningofhierarchicalconversationstructure/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3fd73b6abe4d3dddc418a72506c91fd4f0a35242
--- /dev/null
+++ b/unsupervisedlearningofhierarchicalconversationstructure/full.md
@@ -0,0 +1,310 @@
+# Unsupervised Learning of Hierarchical Conversation Structure
+
+Bo-Ru Lu\* Yushi Hu\* Hao Cheng\* Noah A. Smith\* Mari Ostendorf\*
+
+$\spadesuit$ University of Washington Microsoft Research Allen Institute for AI
+
+{roylu,yushihu,ostendor}@washington.edu
+
+chehao@microsoft.com nasmith@cs.washington.edu
+
+# Abstract
+
+Human conversations can evolve in many different ways, creating challenges for automatic understanding and summarization. Goal-oriented conversations often have meaningful sub-dialogue structure, but it can be highly domain-dependent. This work introduces an unsupervised approach to learning hierarchical conversation structure, including turn and sub-dialogue segment labels, corresponding roughly to dialogue acts and sub-tasks, respectively. The decoded structure is shown to be useful in enhancing neural models of language for three conversation-level understanding tasks. Further, the learned finite-state sub-dialogue network is made interpretable through automatic summarization.
+
+# 1 Introduction
+
+Increasingly, language understanding applications involve conversational speech and text. Much attention has recently been directed at human-agent dialogue systems, including virtual assistants, interactive problem solving, and information seeking tasks (e.g., conversational question answering). However, automatic understanding of human-human conversations is also of interest for problems such as call-center analytics, conversation outcome prediction, meeting summarization, and human-agent interaction involving multiple people. The focus of this paper is on human-human conversation understanding.
+
+Like written documents, goal-oriented conversations tend to have structure (openings, context setting, problem solving, etc.). However, in human-human conversations (both text and speech), participant roles factor into the structure, and the structure is less rigid due to the need to accommodate miscommunications and varying objectives. Yet, most work on conversational systems treats dialogues like written text, i.e., the dialogue history is
+
+a linear sequence of text. In this paper, we explore unsupervised learning strategies for adding structural information to a state-of-the-art hierarchical transformer-based model of text.
+
+Linguistic analysis of conversations often involves associating speaker utterances with dialogue acts (DAs), e.g., question, statement, backchannel, clarification, etc. (Jurafsky et al., 1997; Core and Allen, 1997), and segmenting the conversation into nested subsequences of participant turns that reflect a common topic or conversational goal (Grosz and Sidner, 1986). Past studies have explored using such structure, particularly DAs, to improve automated human-agent dialogues. Here, we use hierarchical structure (both turn-level DA labels and sub-dialogue states) to improve classification of human-human conversations. Specifically, we introduce Three-stream Hierarchical Transformer (THETA), which integrates transformer representations of the DA and sub-dialogue state sequences into a hierarchical transformer (HT) (Santra et al., 2021; Pappagari et al., 2019) operating on the original text. In addition to improving performance, the use of discrete structural cues in classification can support conversation analysis. For example, we can identify seller strategies that are more likely to lead to a successful outcome or use the sub-dialogue state sequence to summarize frequently visited states in unsuccessful interactions.
+
+Since hand-annotation of structure can be costly and inventories vary across tasks, there is substantial interest in unsupervised learning of structure for specific task domains. Here, the approach to structure learning involves two steps. First, we use a clustering algorithm to learn a mapping of utterance embeddings to discrete categories, which serve as an unsupervised version of DAs. Each conversation is then represented by the discrete sequence of cluster identifiers (IDs) associated with the sequence of utterances. Using the collection of discretized conversations, we automatically learn
+
+the topology of a latent finite-state model over these sequences, i.e., a hidden Markov model (HMM), using a greedy state-splitting algorithm that maximizes the likelihood of the sequence data without requiring any annotations. The states of the HMM correspond to different sub-dialogues that may be associated with specific topics, strategies or sub-tasks. The sub-dialogue structure of a new conversation is identified by finding the most likely state sequence given that discretized utterance sequence.
+
+The learned structure is assessed in experiments on three conversation-level classification tasks: buyer/seller negotiation outcomes on CRAIGSLIST-BARGAIN (He et al., 2018), conversation category in the Action-based Conversations Dataset (ABCD) (Chen et al., 2021), and client callback prediction in a private call center corpus. In each task, we find that a combination of both utterance-level category and sub-dialogue state information lead to improved performance. Further, we use automatically generated descriptions of the clusters and sub-dialogue states to provide an interpretable view of the finite-state topology and a summarized view of a conversation. Anecdotally, we find that this structure lends insights into how participant strategies (state paths) are associated with different conversation outcomes.
+
+The contributions of this work are as follows. First, we introduce a simple unsupervised approach to learn a hierarchical representation of conversation structure that includes turn-level labels and sub-dialogue segmentation, accounting for participant role. Using three conversation-level classification tasks, we demonstrate that integrating the structural information into a state-of-the-art hierarchical transformer consistently improves performance. Lastly, we show how the discrete representation of structure combined with automatic summarization can provide a mechanism for interpreting what the model is learning or for conversation summarization and analytics.
+
+# 2 Method
+
+As shown in Figure 1, THETA represents the sequence of turns in a conversation using: i) a hierarchical transformer (HT) operating on a turn-segmented word sequence, ii) a transformer operating on a sequence of turn-level DAs, and iii) a separate transformer operating on a sequence of sub-dialogue states derived from the DAs. The conversation-level vectors produced by the three
+
+transformers are concatenated and used in a final task-specific layer for conversation classification tasks. The HT alone is the state-of-the-art model for conversation-level tasks. The DA and sub-dialogue states comprise the structural information that enhances the HT for improving performance of the end task. In addition, the discrete nature of the structure representation provides a mechanism for analyzing the conversation classes via summarization of utterances associated with the DA labels or sub-dialogue states.
+
+# 2.1 Model Components
+
+Definitions More formally, each dialogue consists of a sequence of words (or tokens) $X = [x_{1},\ldots ,x_{T}]$ associated with $T$ customer/agent (or seller/buyer) utterances, where $x_{t}$ is the subsequence of words associated with the $t$ th utterance. The word sequence is decorated with three special tokens: [CLS], [PTY] and [SEP], where [PTY] indicates the utterance speaker role ([AGT] for agent/seller and [USR] for customer/buyer). The word sequence $X$ is mapped to two sequences of utterance-level embeddings $U^{v} = [u_{1}^{v},\dots,u_{T}^{v}]$ , $v \in \{\mathrm{HT},\mathrm{DA}\}$ . The vector $u_{t}^{\mathrm{HT}}$ is output from the last layer of the HT that is used to derive the text-based conversation-level vector $\mathbf{U}$ . The vector $u_{t}^{\mathrm{DA}}$ is the output of a separate transformer, which is then mapped to a DA category $c_{t}$ to produce the sequence $C = [c_{1},\dots,c_{T}]$ . The sequence $C$ is associated with a hidden subdia-ogue sequence that is represented using the HMM state sequence $S = [s_{1},\dots,s_{T}]$ . Additional transformers derive conversation-level vectors $\mathbf{C}$ and $\mathbf{S}$ from $C$ and $S$ , respectively. THETA enhances the conversation representation by concatenating $\mathbf{U}$ , $\mathbf{C}$ and $\mathbf{S}$ together for input to a task-specific layer.
+
+Hierarchical Transformer The hierarchical transformer (Pappagari et al., 2019) has been shown to be useful for classifying long documents (like customer support conversations), which exceed the length limits placed on transformer-based models due to the quadratic complexity of the self-attention module. At a high level, two transformer blocks, a lower utterance transformer and an upper conversation transformer are stacked together for encoding dialogues. Here, the utterance-level transformer first encodes utterances into utterance embeddings, one for each utterance. In
+
+
+Figure 1: Overview of THETA conversation encoding. The text of each utterance text is encoded by BERT, and a 1-layer transformer further contextualizes utterance embeddings to generate the text vector $\mathbf{U}$ . For structure, utterances are mapped to K-means dialogue acts (DAs), which are input to an HMM to decode sub-dialogue states. 1-layer transformers are applied to sequences of DAs and sub-dialogue states, yielding cluster vector $\mathbf{C}$ and state vector $\mathbf{S}$ . The concatenation of $\mathbf{U}$ , $\mathbf{C}$ and $\mathbf{S}$ is fed into a linear layer to obtain the structure-enhanced vector for the predictive task. For simplicity, Emb. and Trans. stand for embedding and transformer, respectively.
+
+this case, the first contextualized token embedding as the utterance embedding, which corresponds to the sentence-level [CLS] token. The sequence of utterance embeddings augmented with a conversation-level [CLS] token are then fed as inputs to another one-layer conversation-level transformer to further contextualize the vector sequence. We use the output vector associated with the conversation-level [CLS] token as the conversation representation.
+
+Dialogue Act Sequence Module To obtain the DA labels, we first derive an utterance embedding $u_{t}^{\mathrm{DA}}$ by mean pooling the final layer of the BERT transformer. The resulting embedding is mapped to a DA class $c_{t}$ using a vector quantization (VQ) approach: K-means clustering is used to learn the classes, and vectors are labeled at inference time by minimizing the Euclidean distance to cluster means. The number of clusters is treated as a hyperparameter of the overall model. We apply K-means clustering separately for utterances from the two different participant roles, so the DA index reflects the role. This simple approach is motivated by prior work on unsupervised learning of DA categories (Brychcin and Kral, 2017), which showed that K-means clustering gives a performance that is only slightly worse than HMM-based learning.
+
+In linguistic analyses, a turn can contain a sequence of DAs. Our work assigns a single DA to a user turn, as in other work using unsupervised learning as well as the negotiation data set that we
+
+report results on. Since the prior work often uses "dialogue act" for turn-level labels, we have chosen to use the DA term here, acknowledging the abuse of terminology. For complex tasks like the call center data (and other data with real users), the turns will involve multiple dialogue acts, in which case a large number of clusters is useful.
+
+Sub-Dialogue Sequence Module The DA sequence $C$ is input to a hidden Markov model (HMM) to derive the sub-dialogue structure. An HMM is a statistical model that characterizes an observation sequence $C$ in terms of a discrete, latent (hidden) Markov state sequence $S$ ,
+
+$$
+\begin{array}{l} P (C) = \sum_ {\text {a l l} S} P (C, S) \\ = \sum_ {\text {a l l} S} \pi (s _ {1}) \prod_ {t = 1} ^ {T} \eta (c _ {t} | s _ {t}) \gamma (s _ {t + 1} | s _ {t}), \\ \end{array}
+$$
+
+where $\pi$ , $\eta$ , and $\gamma$ are start-state, observation, and transition distributions, respectively. $s_{T+1}$ is a dummy stopping state. The HMM is used to decode the hidden sub-dialogue state sequence $S$ , which provides a segmentation of the conversation into different stages or sub-tasks in problem solving or negotiation. The HMM topology and parameters are derived using unsupervised learning as described in the next section.
+
+# 2.2 Sub-Dialogue Structure Learning
+
+Given a specified topology, inference and training algorithms for HMMs are well established (Murphy, 2012); the Viterbi algorithm gives the
+
+
+(a) Before split.
+
+
+(b) Temporal split.
+
+
+(c) Contextual split.
+Figure 2: The design of two split methods. The dark-blue state is chosen to be split. The light-blue state is the new state after split. Transitions to other states are omitted for simplicity.
+
+most likely state sequence, and the Expectation-Maximization (EM) algorithm is used for parameter estimation. To automatically learn the HMM topology, we apply a greedy state splitting algorithm (Ostendorf and Singer, 1997), which learns a left-to-right topology by constraining states to inherit the transition constraints of their parent. The standard objective is maximum likelihood of the DA sequence, which is unsupervised with respect to the conversation-level task.
+
+Topology learning is outlined in Algorithm 1. The initial model has a 3-state left-to-right topology, initialized (assuming $70\%$ of the conversation is associated with the middle state) and then iteratively trained until the improvement is lower than a fixed threshold or the iteration count exceeds some number. At each iteration, the state with the highest entropy of emission probability is chosen to be split. The topology can change into two new configurations corresponding to temporal and contextual splits (Figure 2). The EM algorithm is applied again on each configuration and the topology that leads to the higher likelihood is chosen. We iteratively conduct the splitting until the total number of states reaches the desired value (a hyperparameter). The HMMs of Ostendorf and Singer (1997) used continuous observation distributions. The splitting approach described below was designed for discrete distributions.
+
+# Algorithm 1 Topology Learning Algorithm
+
+1: $n$ : number of split. $\tau_{i}$ : topology after $i$ split.
+2: Initialization: Run the EM algorithm on 3-state initial topology $\tau_0$ .
+3: for $i = 1$ to $n$ do
+4: begin
+5: Select state $s \in \tau_{i-1}$ to split based on max entropy of observation distribution $\eta_{i-1}(c|s)$ .
+6: Apply temporal split and get new topology $\tau_{i,t}$ .
+7: Apply contextual split and get new topology $\tau_{i,c}$ .
+8: Run the EM algorithm on $\tau_{i,t}$ and $\tau_{i,c}$ .
+9: Select the topology with higher likelihood as $\tau_{i}$
+10: end
+
+Temporal split The temporal split provides more detailed sequential structure along a path. Figure 2(b) shows the result of a temporal split on the selected state (dark-blue) in Figure 2(a). The light-blue node is the new child state that inherits all incoming and outgoing edges and the transition probabilities of the dark-blue state except $y$ and $z$ . Edges $y$ and $z$ are initialized to $p_x / 2$ where $p_x$ is the probability of the original edge $x$ of dark-blue state. The old incoming edges of dark-blue state are removed and outgoing edges are preserved.
+
+Contextual split The contextual split allows for alternate sub-dialogue paths. Figure 2(c) illustrates the contextual split applied on the dark-blue state. The light-blue state inherits everything but the observation distribution of dark-blue state. With the aim of modeling different types of paths, when copying the observation probabilities to the light-blue state, we omit the top emission probability of the dark-blue node and set it to 0 and normalize the rest of probabilities. In terms of the transition probabilities, the light-blue state inherits all from the dark-blue one; $p_x = p_z$ where $p_x$ and $p_z$ are the transition probabilities of edges $x$ and $z$ .
+
+# 2.3 Pre-Training and End-Task Training
+
+Both for initializing the HT and for deriving the DAs, we use the transformer-based BERT model (Devlin et al., 2019) for encoding individual utterances $u_{t}$ , pre-trained using masked language modeling and next-sentence prediction. Due to the style differences of dialogue data vs. written text, we apply domain-adaptive pretraining (DAPT) (Gururangan et al., 2020) to adapt BERT for dialogue applications. As shown later (section 3), adapting BERT with DAPT provides substantial improvement in terms of predictive power as well as optimization stability.
+
+For the HT alone, supervised training involves learning the weights of the final task-level linear layer, the utterance-level transformer, and the word-
+
+level transformer.
+
+For THETA, supervised training involves learning the weights of the cluster- and state-level transformers, in addition to all updates associated with the HT component described above. The cluster sequences are obtained using the word-level transformer with DAPT and the associated cluster mapping obtained from unsupervised learning, i.e., without task-level finetuning. Similarly, there are no task-level supervision updates to the parameters associated with the HMM that is used to derive the state sequence.
+
+# 3 Experiment
+
+# 3.1 Datasets and Evaluation Metrics.
+
+We use three datasets with conversation-level classification tasks to evaluate our model. The detailed statistics of the datasets are shown in Appendix B.
+
+CRAIGSLISTBARGAIN (He et al., 2018) is a public negotiation dataset where buyers and sellers negotiate the prices of items on sale. In each conversation, the buyer has a target price in their mind and attempts to reach an agreement with the seller. Following previous work (Zhou et al., 2020; Joshi et al., 2021), we use the same list of 14 handcrafted utterance DAs and the 5-class sale-to-list price ratio labels provided in their code base. The 14 handcrafted utterance DAs are used as comparison to evaluate if our unsupervised version of DAs is learning good representations. Classification of sale-to-list price ratio is used as the downstream task, with accuracy as the evaluation criterion.
+
+ABCD (Chen et al., 2021) is a public customer support dataset that is introduced to study customer service dialogues. In each conversation, an agent follows guidelines to help a customer solve their issue. Conversations are categorized with flows and subflows. Flows are broad categories, such as shipping issue, account access, or purchase dispute. Subflows comprise 96 fine-grained labels, for example, shipping status question, recover password, or invalid promotion code. Each conversation is annotated with a flow and a subflow. We use classification of the subflows as our conversation-level task. Macro and micro F1 scores are used to reflect the performance of imbalanced subflow classes.
+
+CALL CENTER is a private collection of customer service conversations. Phone calls are automatically transcribed and private user information is anonymized. Conversations are annotated with a binary indicator as to whether or not there will
+
+be a callback within two days. (Such callbacks are an indicator that the problem was not solved in the call.) For the task of callback prediction, we measure area under the ROC curve (ROC AUC).
+
+# 3.2 Implementation Details
+
+Experimental Setup. We develop our K-means and HMMs using the packages Faiss (Johnson et al., 2019) and Pomegranate (Schreiber, 2018). The number of DAs and the size of the HMM state space are chosen separately for each dataset based on development set performance. We initialize and finetune our experiments based on uncased base model of BERT downloaded from HuggingFace (Wolf et al., 2020). We DAPT with dynamic whole-word masking (WWM) on 128-token segments for each dataset. During finetuning, the learning rate and warm-up steps are $1 \times 10^{-5}$ and 0.1 epoch, respectively. Models are selected by the best score on the development set for each dataset. Further hyperparameter details are in Appendix A.
+
+
Model
% Acc.
FeHED
42.3
HED + RNN
47.9
HED + transformer
53.7
DIALOGGRAPH
53.1
HT
54.1±2.4
THETA
66.1±1.0
+
+Table 1: Results on the test set of CRAIGSLISTBARGAIN in accuracy. For models studied in this paper (lower part), the median number is reported with standard deviation calculated based on 15 random runs.
+
+
ABCD
CALL CENTER
Model
Micro
F1 Macro
Weighted
ROC AUC
HT
52.2
25.4
45.7
69.6
THETA
62.8
39.1
59.9
71.3
+
+Table 2: Results on the test sets of ABCD and CALL CENTER datasets.
+
+# 3.3 Comparison Systems
+
+We use the hierarchical transformer (HT) as a baseline for all datasets in comparison to THETA. For CRAIGSLISTBARGAIN, we also include three additional baselines from two works (Zhou et al., 2020; Joshi et al., 2021) that employ the DAs extracted by heuristic methods; our systems use K-means to obtain primitive DAs.
+
+
Model
CRAIGSLISTBARGAIN
ABCD
CALL CENTER
Accuracy
Micro
F1 Macro
Weighted
ROC AUC
HT w/o DAPT
48.0
15.4
4.2
9.4
68.4
HT
50.3
52.2
26.9
46.3
71.2
THETA (cluster only)
60.2
59.8
35.3
55.7
72.2
THETA (state only)
51.7
58.8
32.8
54.1
72.1
THETA
61.3
62.6
38.6
59.5
72.8
+
+Table 3: Ablation on the development sets of CRAIGSLISTBARGAIN, ABCD and CALL CENTER datasets. All models with structure are statistically better than HT. THETA is better $(p < 0.01)$ than the cluster-only alternative except for the CALL CENTER.
+
+FST-enhanced hierarchical encoder-decoder model (FeHED). FeHED (Zhou et al., 2020) uses an RNN-based sequence-to-sequence model with finite-state transducers for encoding sequences of strategies and DAs.
+
+Hierarchical encoder-decoder (HED) + RNN or transformer. HED encodes dialogue utterances with a transformer (initialized from pretrained BERT), and the decoder generates the next response. An RNN or transformer encodes strategies and DAs. HED + RNN is based on the dialogue manager of He et al. (2018); Joshi et al. (2021) replace the RNN with a transformer.
+
+DIALOGraph. (Joshi et al., 2021). The state-of-the-art HED-based model on CRAIGSLISTBARGAIN dataset leverages graph attention networks (GAT; Velicković et al., 2018) to encode strategies and DAs.
+
+# 3.4 Prediction Results
+
+Performance on Negotiation Dialogues. Table 1 reports the results of different systems on the test set of CRAIGSLISTBARGAIN dataset. All models are based on the BERT-base model. HT with only text outperforms the state-of-the-art DIALOGRAPH which leverages a graph-based representation of conversation structure. This verifies our hypothesis that DAPT with target data indeed improves BERT for dialogue tasks. Compared with HT, THETA achieves better prediction accuracy and smaller variance, which suggests that integrating the structure view helps stabilize training with different random seeds. THETA provides a $24.5\%$ relative gain in accuracy over DIALOGRAPH, setting a new state of the art. This further validates the advantage of our learned conversation structure for a predictive task.
+
+Performance on Customer Support Domain. Similar to the results on the negotiation dialogue domain, Table 2 shows that conversation structure effectively enhances the performance in the customer service domain, ABCD and CALL CENTER.
+
+Ablation. Table 3 reports the results of ablating different components of THETA on the validation sets of all datasets. The first rows show that DAPT is useful on all tasks particularly for ABCD with its skewed class distribution. We also observe that THETA consistently achieves the best performance over all tasks. The cluster-based DA sequence provides more information than the sub-dialogue states, but incorporating all three views together leads to the best performance. Statistical significance is tested using bootstrap resampling (Efron and Tibshirani, 1993; Berg-Kirkpatrick et al., 2012).
+
+Prior work (Zhou et al., 2020; Joshi et al., 2021) on CRAIGSLISTBARGAIN use domain knowledge in rule-based annotation of DAs. To assess the use of K-means clusters for learning DAs, we also trained an HMM using the provided DAs. The resulting model obtained $66.5\%$ accuracy on the test data, which is not significantly different the $66.1\%$ results obtained using K-means (cf. Table 1).
+
+# 4 Interpretation and Analysis
+
+In this section, we leverage automatic summarization of clusters and states to derive insights into the learned conversation structure, both for interpretability of the model and for applications such as conversation analytics and summarization. As an example, we analyze fine-grained components from the learned topology, i.e., most frequent paths and individual state n-grams, to investigate their associations with different dialogue characteristics.
+
+We apply graph-based unsupervised summarization (Boudin and Morin, 2013; Shang et al., 2018)
+
+over utterances in each state (decoupling participant roles) and in each cluster. On CRAIGSLIST-BARGAIN and ABCD, this leads to more than $3 \times$ reduction in conversation length.
+
+Figure 3 shows the 8-state topology of CRAIGSLISTBARGAIN with selected state summaries. Based on the summaries, it is easy to see that S1 and S8 capture opening and closing DAs, respectively, while S5 and S6 correspond to different negotiation strategies. We also find that conversations with shorter paths are likely to involve a less experienced seller or lower buyer interest, e.g., $92\%$ conversations with path S1-S2-S8 lead to under listing sells. On the other hand, sellers that say offers are too low are more likely get better prices, e.g., $91\%$ conversations with path S1-S2-S3-S5-S7-S8.
+
+ABCD Table 4 shows an example with both cluster and state summaries. Based on the cluster summaries, we see that K-means learns typical DAs associated with customer service, e.g., information requests from the agent and the corresponding customer replies (, , ). States correspond to sub-dialogues where the agent follows certain protocols in resolving a sub-task (e.g., verifying account information). Alignment of flow labels with the most frequent paths through the HMM topology shows that paths are highly indicative of the corresponding dialogue flow. The high confusions are among certain flows, such as storewide_query and single_item_query, which one would expect to have similar DAs.
+
+# 5 Related Work
+
+HMMs have been leveraged for learning structure in language for many years, such as in early work on inducing word-level part-of-speech tags (Merialdo, 1994). Accordingly, most work on unsupervised learning of both DAs and conversation structure leverages HMMs.
+
+# Unsupervised Learning of Dialogue Acts.
+
+Since dialogue act recognition can be thought of as a sentence-level tagging task, initial work on unsupervised learning of DAs was similar to word tagging, involving some use of language models or fully-connected HMMs to account for sequential dependency of labels. Ritter et al. (2010) use an HMM with a factored state space with a topic model to decouple speech act from topic characteristics. The observation model $\eta$ in the HMM is a
+
+bag of words (unigram) model. The approach was later extended by incorporating speaker information (Joty et al., 2011; Paul, 2012). Brychcin and Kral (2017) further extend this work with a Gaussian mixture observation model (GMM) where the utterance representation is the average of GloVe word embeddings. They compare the results to a simple K-means clustering, which is not as effective as the HMM but gives similar results to the method proposed by Ritter et al. (2010) when applied to the Switchboard corpus. Hierarchical clustering of lexicalized utterance embeddings is used by Gunasekara et al. (2019), who use domain knowledge in preprocessing to identify phrases such as "Indian food" as "CUISINE_TYPE," for example. Our work on utterance categorization is similar to the K-means approach in Brychcin and Kral (2017), but we use more recent transformer-based utterance embeddings.
+
+# Unsupervised Learning of Dialogue Structure.
+
+Task- or goal-oriented conversations typically have structure above the level of the sentence in that a sequence of turns are associated with a common function. In more complex conversations, the structure can be hierarchical, with tasks and sub-tasks. Bangalore et al. (2008) used a parsing model to automatically recognize dialogue acts and segment a conversation into sub-tasks, leveraging hand-annotations of both DAs and sub-tasks. Since sub-task structure varies depending on the task and there is little hand-annotated data, most work has focused on unsupervised approaches with a flat segmentation. Note that the problem of unsupervised learning here involves jointly recognizing sub-dialogue segment boundaries, learning an inventory of sub-dialogue types, and learning (or constraining) the sequential structure of these types.
+
+Early work on unsupervised learning used fully-connected HMMs to identify structure in documents (Barzilay and Lee, 2004) for extractive summarization and information ordering. The observation model was based on word bigrams with the aim of capturing topic coherent segment. A similar idea is applied to task-oriented dialogues using latent Dirichlet allocation for the observation model (Zhai and Williams, 2014).
+
+Studies that leverage constrained left-to-right HMM technologies include (Althoff et al., 2016), which aimed to learn stages/strategies of counselors in mental health counseling, and (Ramanath et al., 2014), which used a hidden semi-Markov
+
+
+Figure 3: The 8-state topology on CRAIGSLISTBARGAIN dataset. The thicker edges indicate higher levels of negotiation success; in contrast, the thinner edges represent lower levels. Due to space limitations, only 6 state summaries are shown. The detailed topology with all cluster and state summaries is in Appendix D.
+
+
S1
buyer: hi i am interested seller: hi how are you interested ?
S2
buyer: how about its condition ? seller: it s in great condition .
S3
buyer: i can do <price>. i can pick it up seller: that s too low. i can go with <price>
S5
buyer: i can do <price>. <offer> seller: the lowest i can do is <price>
S6
buyer: that s a deal. i can do that seller: great. that s a deal
S8
buyer: <accept> seller: <accept>
+
+
Party
Utterance
Cluster Summary
State Summary
Agent Customer
Welcome to AcmeBrands! How can I help you? Hello, I would like to change my shipping details as they have changed recently due to a move
How can I help you? I want to check my shipping
A: How can I help you today? C: I want to check my order.
Agent Agent
I would be happy to help you with that Is there an outstanding order? Or is this just an update to your account? Yes my order id is 4870952797 What is your name please? Crystal Minh What is the shipping status of the order? In Transit Next I need to validate your purchase. I will need your username and email. Cminh948, cminh948@email.com Thank you
I can help you with that How long have you been waiting? I have pulled up your account. My order/account ID is_ What is your name? <name> What is the shipping status? In store/ In transit I need your name <email> Thank you
A: Can I have your account/order? C: My account/order is_
Agent Customer Agent
and the new address please? 9756 Primrose Street Newark, MI 85971 All taken care of!
Can you tell me _? <address> Your order has been updated
A: Can I have the address? C: My address is_
Agent Customer Agent
Is there anything else today? Thank you that is all Have a great one!
Anything else? That's all. Thank you. Have a good one!
A: Anything else I can help? C: That's all. Thank you.
+
+Table 4: An example of ABCD with cluster and state summaries. A and C stand for agent and customer, respectively.
+
+model for unsupervised alignment of privacy policy documents. Both used unigram observation models. HMM-based conversation stages are combined with a topic-based segmentation by Chen and Yang (2020) for dialogue summarization. The use of unigram and bigram word models emphasizes topic in segmenting conversations. Our work differs in that the automatically learned speech acts are observations of the HMM, since word distributions are captured by the HT.
+
+Most similar to our work is (Zhou et al., 2020), which uses two finite state transducers (FSTs) to map a sequence of dialogue acts (or strategies) to a sequence of state embeddings, which are then integrated into a hierarchical encoder-decoder model for prediction of the next strategy in a negotiation dialogue. The FSTs are analogous to our HMM, but the inputs are based on learning from hand-labeled strategies and rule-based dialogue acts.
+
+There are other approaches to modeling conver
+
+sation structure that do not rely on HMMs. DIALOGraph (Joshi et al., 2021) uses a graph attention network to encode discrete DA and strategy label sequences. A variational recurrent neural network is used to model structure by Shi et al. (2019). These approaches are less amenable to the interpretation methods used in our work.
+
+Two key differences in our approach compared to all these studies are: i) the use of HMM topology learning via successive state splitting, and ii) the integration of structural information using a multi-stream neural sequence model.
+
+# 6 Conclusion
+
+In summary, this work combines two simple approaches for unsupervised learning on top of embedded utterance representations (K-means clustering and HMM topology design) to derive a hierarchical representation of conversation structure, which is useful to enhance a hierarchical trans
+
+former in three conversation-level classification tasks. The K-means clusters are intended to approximate DAs and the HMM is intended to learn sub-dialogue structure. Unlike prior work in this area, the sub-dialogues build on DA sequences rather than unigram/bigram statistics, and the HMM incorporates forward-moving dialogue flow constraints in topology learning, with the goal of capturing sub-dialogue function.
+
+# Acknowledgments
+
+We thank Mourad Heddaya for exploring preliminary experiments when he was at the University of Washington. We also thank all members in TIAL lab and NLP groups at the University of Washington who provided valuable feedback and insights to this work.
+
+# Limitations
+
+First, our experiments explore only two types of dialogues (negotiation and customer support) with conversation-level tasks (identifying the topic or assessing some measure of conversation success). Although THETA shows promising results, it requires further exploration with other types of conversations (e.g. information gathering, tutoring), including more examples of spoken interactions, as well as extending THETA to multi-party discussions. In addition, it would be of interest to assess the utility of automatically learned structure for other types of tasks, such as call center analytics or state tracking to support dialogue management or online agent support.
+
+Second, we use K-means and HMMs for deriving the conversation structure, both of which require dataset-specific hyperparameters that are unlikely to transfer well to new datasets. Additionally, we only study a late fusion strategy for combining discrete structure and text-based representations. A more tightly integrated approach might be more effective. For example, our K-means DA is based on a single utterance; however, sequence models have been important for past work on unsupervised learning of DAs. Future work could leverage sequential DA dependencies in joint DA and sub-dialogue structure learning or explore continuous DA-like representations, as in (Cheng et al., 2019).
+
+# Ethical Considerations
+
+The automatic learning of conversation structure is dependent on having data that is matched to the task
+
+of interest. A potential challenge is that biases in the data could result in some conversation strategies not being well represented. The summarization approach provides interpretability of the model, but imperfect summarizations could lead to incorrect interpretations.
+
+# References
+
+Tim Althoff, Kevin Clark, and Jure Leskovec. 2016. Large-scale analysis of counseling conversations: An application of natural language processing to mental health. Transactions of the Association for Computational Linguistics, 4:463-476.
+Srinivas Bangalore, Giuseppe Di Fabbrizio, and Amanda Stent. 2008. Learning the structure of task-driven human-human dialogs. IEEE Transactions on Audio, Speech, and Language Processing, 16(7):1249-1259.
+Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 113-120, Boston, Massachusetts, USA. Association for Computational Linguistics.
+Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 995-1005, Jeju Island, Korea. Association for Computational Linguistics.
+Florian Boudin and Emmanuel Morin. 2013. Keyphrase extraction for N-best reranking in multi-sentence compression. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 298-305, Atlanta, Georgia. Association for Computational Linguistics.
+T. Brychcin and P. Kral. 2017. Unsupervised dialogue act induction using Gaussian mixtures. In Prof. EMNLP, volume 2, page 485-490.
+Derek Chen, Howard Chen, Yi Yang, Alexander Lin, and Zhou Yu. 2021. Action-based conversations dataset: A corpus for building more in-depth task-oriented dialogue systems. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3002-3017, Online. Association for Computational Linguistics.
+Jiaao Chen and Diyi Yang. 2020. Multi-view sequence-to-sequence models with conversational structure for abstractive dialogue summarization. In Proceedings of the 2020 Conference on Empirical Methods in
+
+Natural Language Processing (EMNLP), pages 4106-4118, Online. Association for Computational Linguistics.
+Hao Cheng, Hao Fang, and Mari Ostendorf. 2019. A dynamic speaker model for conversational interactions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2772-2785, Minneapolis, Minnesota. Association for Computational Linguistics.
+Mark G Core and James F Allen. 1997. Coding dialogs with the damsl annotation scheme. In in Proc. Working Notes AAAI Fall Symp. Commun. Action in Humans.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Bradley Efron and Robert Tibshirani. 1993. An introduction to the bootstrap. In Chapman & Hall/CRC Monographs on Statistics & Applied Probability. Taylor & Francis.
+Barbara J. Grosz and Candace L. Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204.
+R. Chulaka Gunasekara, David Nahamoo, Lazaros C. Polymenakos, David Echeverria Ciaurri, Jatin Ganhotra, and Kshitij P. Fadnis. 2019. Quantized dialog - a general approach for conversational systems. Computer Speech and Language, 54:17-30.
+Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics.
+He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. 2018. Decoupling strategy and generation in negotiation dialogues. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2333-2343, Brussels, Belgium. Association for Computational Linguistics.
+Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535-547.
+Rishabh Joshi, Vidhisha Balachandran, Shikhar Vashishth, Alan Black, and Yulia Tsvetkov. 2021. DiamoGraph: Incorporating interpretable strategy-graph
+
+networks into negotiation dialogues. In International Conference on Learning Representations.
+Shafiq Joty, Giuseppe Carenini, and Chin-Yew Lin. 2011. Unsupervised modeling of dialog acts in asynchronous conversations. In Proc. International Joint Conference on Artificial Intelligence, pages 1807-1813.
+Dan Jurafsky, Elizabeth Shriberg, , and Debra Biaasca. 1997. Switchboard SWBD-DAMSL shallow-discourse-function annotation coders manual, draft 13. Technical report, University of Colorado, Boulder.
+Bernard Merialdo. 1994. Tagging English text with a probabilistic model. Computational Linguistics, 20(2):155-171.
+Kevin Murphy. 2012. Machine Learning: A Probabilistic Perspective. MIT Press.
+Mari Ostendorf and Harald Singer. 1997. HMM topology design using maximum likelihood successive state splitting. Computer Speech & Language, 11(1):17-41.
+Raghavendra Pappagari, Piotr Zelasko, Jesús Villalba, Yishay Carmiel, and Najim Dehak. 2019. Hierarchical transformers for long document classification. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 838-844. IEEE.
+Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.
+Michael J. Paul. 2012. Mixed membership Markov models for unsupervised conversation modeling. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 94-104, Jeju Island, Korea. Association for Computational Linguistics.
+Rohan Ramanath, Fei Liu, Norman Sadeh, and Noah A. Smith. 2014. Unsupervised alignment of privacy policies using hidden Markov models. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 605-610, Baltimore, Maryland. Association for Computational Linguistics.
+Jeff Rasley, Samyam Rajbhandari, Olatunj Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over
+
+100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3505-3506.
+Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of Twitter conversations. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 172-180, Los Angeles, California. Association for Computational Linguistics.
+Bishal Santra, Potnuru Anusha, and Pawan Goyal. 2021. Hierarchical transformer for task oriented dialog systems. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5649-5658, Online. Association for Computational Linguistics.
+Jacob Schreiber. 2018. Pomegranate: fast and flexible probabilistic modeling in python. Journal of Machine Learning Research, 18(164):1-6.
+Guokan Shang, Wensi Ding, Zekun Zhang, Antoine Tixier, Polykarpos Meladianos, Michalis Vazirgiannis, and Jean-Pierre Lorre. 2018. Unsupervised abstractive meeting summarization with multi-sentence compression and budgeted submodular maximization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 664-674, Melbourne, Australia. Association for Computational Linguistics.
+Weiyan Shi, Tiancheng Zhao, and Zhou Yu. 2019. Unsupervised dialog structure learning. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1797-1807, Minneapolis, Minnesota. Association for Computational Linguistics.
+Petar Velicković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph attention networks. In International Conference on Learning Representations.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+Ke Zhai and Jason D. Williams. 2014. Discovering latent structure in task-oriented dialogues. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 36-46, Baltimore, Maryland. Association for Computational Linguistics.
+
+Yiheng Zhou, Yulia Tsvetkov, Alan W Black, and Zhou Yu. 2020. Augmenting non-collaborative dialog systems with explicit semantic and strategic dialog history. In International Conference on Learning Representations.
+
+# A Experimental Setup Details
+
+We pretrain and finetune on BERT (Devlin et al., 2019) downloaded from Huggingface Transformers (Wolf et al., $2020)^{3}$ and use the uncased base model of BERT in most of our experiments. To feed lengthy conversations to the model, we employ gradient checkpoint and DeepSpeed (Rasley et al., 2020), a deep learning optimization library, to reduce GPU memory usage and accelerate the training process.
+
+The details of the model hyperparameters are as follows. 1-layer and 2-head transformers with 300 hidden size are applied to encode sequences of utterance-level embeddings in text view and sequences of clusters and states in structure view. Thus, the total number of parameters of our best system THETA, including base model of BERT and 3 one-layer transformers, is about 113M. For in-domain adaptation pretraining (DAPT), we use $5 \times 10^{-5}$ as learning rate and 5000 steps for CRAIGSLISTBARGAIN and ABCD and 30000 steps for CALL CENTER. 0.1 epochs are used as warm-up steps with linear learning rate decay. Gradient accumulation and PyTorch (Paszke et al., 2019) distributed data parallel GPU training are applied to achieve the equivalent training batch size 4096. For finetuning, we set $1 \times 10^{-5}$ as the learning rates, 4 epochs in total and 0.1 epochs for warm-up steps with linear decay. The equivalent training batch size is 16 during finetuning. Besides, the layer-wise learning rate decay is utilized to stabilize the training results; the rates are from 0.7, 0.8, 0.9 and the 0.9 leads to the best performance. For the rest of the training hyperparameters, we follow the default values in HuggingFace's training script.
+
+For K-means, we use Faiss (Johnson et al., 2019) with GPU to speed up clustering process for large private corpus. For HMMs, we develop our splitting algorithm via Pomegranate (Schreiber, 2018), a Python package that implements fast and flexible probabilistic models, to build our topology learning algorithm. The predefined numbers of clusters vary for different datasets. To compare with handcrafted DAs provided in CRAIGSLIST-BARGAIN, we define number of clusters $k = 14$ for each party. For customer service domain, we
+
+set $k = 60$ for ABCD and $k = 120$ for CALL CENTER. For all datasets, we try the number of states from 5 to 20 and find the best numbers of states are 8, 12, and 12 for CRAIGSLISTBARGAIN, ABCD, and CALL CENTER, respectively. Each training run takes at most 2 hours on 2 Nvidia GeForce RTX 2080Ti GPUs for CRAIGSLISTBARGAIN and ABCD and 54 hours on 8 GPUs on CALL CENTER. All models are saved based on the best performance on the development sets. For each experiment on CRAIGSLISTBARGAIN and ABCD, we conduct 15 random runs and report the median and standard deviation. Due to the computation limitations and the size of corpus, we only conduct a single run for CALL CENTER for each experiment setting. The total number of GPU hours for all experiments, including different runs with random seeds, is 1536 hours approximately.
+
+
CRAIGSLIST BARGAIN
ABCD
CALL CENTER
# dialogues
6682
10042
949410
# turns / dialogue
9.2
22.1
71.6
# tokens / turn
15.5
9.2
16.3
# tokens / dialogue
142.6
202.5
1167.1
+
+Table 5: Data statistics of the datasets.
+
+
CRAIGSLIST BARGAIN
ABCD
CALL CENTER
train set # dialogues
4828
8034
711310
dev. set # dialogues
561
1004
95540
test set # dialogues
567
1004
142560
+
+Table 6: Train/dev./test split of datasets
+
+# B Dataset Details
+
+We follow all original data preprocessing scripts for CRAIGSLISTBARGAIN and ABCD. For the private collection of customer service conversations, CALL CENTER, all private user information is anonymized. The data statistics are summarized in Table 5 and Table 6.
+
+# C Topology with Summaries
+
+Figure 4 shows the detailed topology with both cluster and sub-dialogue state summaries. For each sub-dialogue state, we add the cluster summaries with
+
+top 3 emission probabilities and the sub-dialogue state summaries for the buyer and the seller. The thickness of edges indicates the levels of negotiation success and the edges with probabilities lower than 0.01 are pruned for simplicity.
+
+# D License of Artifacts
+
+The license of code for Wolf et al. (2020) and Schreiber (2018) are Apache license version 2.0. The license of code for Joshi et al. (2021), Rasley et al. (2020), and Chen et al. (2021) are MIT License. The terms for use of our artifacts will be included in our released package.
+
+
+Figure 4: The 8-state full topology with cluster and sub-dialogue state summaries on CRAIGSLISTBARGAIN dataset. The thicker edges represent higher levels of negotiation success.
\ No newline at end of file
diff --git a/unsupervisedlearningofhierarchicalconversationstructure/images.zip b/unsupervisedlearningofhierarchicalconversationstructure/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..33d6a9fa1c09ee3cffa01f43e06f17eeea46053a
--- /dev/null
+++ b/unsupervisedlearningofhierarchicalconversationstructure/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4c90d526a79c06a513c4637c411b4d987654fa50d96ebf7d68a6505c0ff535c7
+size 449149
diff --git a/unsupervisedlearningofhierarchicalconversationstructure/layout.json b/unsupervisedlearningofhierarchicalconversationstructure/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d88a43faf97867d9a82d90694ec30036ffdbe6d3
--- /dev/null
+++ b/unsupervisedlearningofhierarchicalconversationstructure/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fa631a85dc6584681fe1d5ee98dcdf5ca279abe4539063ff6d9cc52a48d18cc6
+size 383521
diff --git a/unsupervisedmultigranularitysummarization/b1db08e2-f10e-4712-95f7-1a3f1bf33665_content_list.json b/unsupervisedmultigranularitysummarization/b1db08e2-f10e-4712-95f7-1a3f1bf33665_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..26218873589377f63d24379e002f599abc7c20fb
--- /dev/null
+++ b/unsupervisedmultigranularitysummarization/b1db08e2-f10e-4712-95f7-1a3f1bf33665_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:030bb7647bafdd1ddb680fda72088bf6dba598b9f4c39786a4539d3e84d26867
+size 105831
diff --git a/unsupervisedmultigranularitysummarization/b1db08e2-f10e-4712-95f7-1a3f1bf33665_model.json b/unsupervisedmultigranularitysummarization/b1db08e2-f10e-4712-95f7-1a3f1bf33665_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7f6fb1a7210c5a80b98668535a7da9b8825b3eb9
--- /dev/null
+++ b/unsupervisedmultigranularitysummarization/b1db08e2-f10e-4712-95f7-1a3f1bf33665_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1de681d3ef2cfa0bfd3f80883879200feb36329ca9693945e24c89fe1519e37b
+size 131639
diff --git a/unsupervisedmultigranularitysummarization/b1db08e2-f10e-4712-95f7-1a3f1bf33665_origin.pdf b/unsupervisedmultigranularitysummarization/b1db08e2-f10e-4712-95f7-1a3f1bf33665_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ab742a641ad413dfd47bf30319f2d9c1858995ab
--- /dev/null
+++ b/unsupervisedmultigranularitysummarization/b1db08e2-f10e-4712-95f7-1a3f1bf33665_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33f90dc92ca452444648d9d57381be77fa708b005a823c3c577d63dd728b92b0
+size 334457
diff --git a/unsupervisedmultigranularitysummarization/full.md b/unsupervisedmultigranularitysummarization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..7c82f871e92890d5700b02ebd457a548484c6a69
--- /dev/null
+++ b/unsupervisedmultigranularitysummarization/full.md
@@ -0,0 +1,401 @@
+# Unsupervised Multi-Granularity Summarization
+
+Ming Zhong $^{\S}$ Yang Liu $^{\dagger}$ Suyu Ge $^{\S}$ Yuning Mao $^{\S}$ Yizhu Jiao $^{\S}$
+Xingxing Zhang $^{\ddagger}$ Yichong Xu $^{\dagger}$ Chenguang Zhu $^{\dagger}$ Michael Zeng $^{\dagger}$ Jiawei Han $^{\S}$ $^{\S}$ University of Illinois at Urbana-Champaign
+ $^{\dagger}$ Microsoft Cognitive Services Research
+ $^{\ddagger}$ Microsoft Research Asia
+{mingz5, suyuge2, yuningm2, yizhuj2, hanj} $@$ illinois.edu
+{yaliu10, xizhang, yichong.xu, chezhu, nzeng} $@$ microsoft.com
+
+# Abstract
+
+Text summarization is a user-preference based task, i.e., for one document, users often have different priorities for summary. As a key aspect of customization in summarization, granularity is used to measure the semantic coverage between summary and source document. However, developing systems that can generate summaries with customizable semantic coverage is still an under-explored topic. In this paper, we propose the first unsupervised multi-granularity summarization framework, GRANUSUM. We take events as the basic semantic units of the source documents and propose to rank these events by their salience. We also develop a model to summarize input documents with given events as anchors and hints. By inputting different numbers of events, GRANUSUM is capable of producing multi-granular summaries in an unsupervised manner. Meanwhile, we annotate a new benchmark GranuDUC that contains multiple summaries at different granularities for each document cluster. Experimental results confirm the substantial superiority of GRANUSUM on multi-granularity summarization over strong baselines. Furthermore, by exploiting the event information, GRANUSUM also exhibits state-of-the-art performance under conventional unsupervised abstractive setting.1
+
+# 1 Introduction
+
+Text summarization aims to condense and summarize long documents into a concise paragraph containing the essential points of the original texts (See et al., 2017; Liu and Lapata, 2019; Wang et al., 2020; Zhong et al., 2020; Liu et al., 2022; An et al., 2022). Notably, the requirements for summarization are highly customized and personalized for different users (Díaz and Gervás, 2007; Lerman et al., 2009; Yan et al., 2011; Chen et al., 2021b). Therefore, generating quality summaries to meet
+
+
Multiple News Articles about Hurricane Mitch
Honduras braced for potential catastrophe Tuesday asHurri-cane Mitch roared through the northwest Caribbean, churningup high waves and intense rain ... (Total 3,358 words)
Summary of Coarse Granularity Level
Hurricane Mitch, category 5 hurricane, brought widespreaddeath and destruction to Central American, and Honduraswas especially hard hit. (Total 19 words)
Summary of Medium Granularity Level
Hurricane Mitch approached Honduras on Oct. 27, 1998 withwinds up to 180mph a Category 5 storm ... The EuropeanUnion, international relief agencies, Mexico, the U.S., Japan,Taiwan, the U.K. and U.N. sent financial aid, relief workersand supplies. (Total 53 words)
Summary of Fine Granularity Level
A category 5 storm, Hurricane Mitch roared across the north-west Caribbean with 180 mph winds across a 350-mile front... The greatest losses were in Honduras where 6,076 peopleperished ... At least 569,000 people were homeless acrossCentral America. Aid was sent from many sources (EuropeanUnion, the UN, US and Mexico). The U.S. and EuropeanUnion were joined by Pope John Paul II in a call for moneyand workers to help the stricken area. However, Relief effortsare hampered by extensive damage ... (Total 133 words)
+
+Table 1: An example from our multi-granularity summarization benchmark GranuDUC. Texts of the same color (blue, red) denote similar points described in different ways. Finer-grained summaries have higher semantic coverage with the original text.
+
+different preferences should be a natural capability of summarization systems.
+
+Granularity, a key aspect of customization in summarization, is used to measure the degree of semantic coverage between summary and source documents (Mulkar-Mehta et al., 2011). To cater to the diverse needs of readers, the granularity level of summaries often varies in a wide range. As shown in Table 1, given multiple news about Hurricane Mitch, the most compact summary (Coarse Granularity Level) accommodates only the most important event to help people grasp the overall picture of the input documents. Interested readers, on the other hand, may prefer more fine-grained
+
+summaries (Medium and Fine Granularity Level) to acquire additional details, such as how many casualties were caused and how different countries aided Honduras. Thus, multi-granularity summaries can meet the intent of different users and are more versatile in real-world applications.
+
+Most existing summarization models and benchmarks focus solely on single-granularity summarization. It limits the ability of these systems to adapt to different user preferences and generalize to a wider range of granularity scenarios. To alleviate this issue, some recent studies are dedicated to controlling the length of summary (Kikuchi et al., 2016; Fan et al., 2018; Liu et al., 2018). However, as a surface-level feature of the summary, longer length does not equate to a higher degree of semantic coverage. In other words, the length limit can be easily satisfied by talking less/more details about the same event, but this is in contrast with the concept of summarization. Another research direction is query/aspect-based (Zhong et al., 2021; Hayashi et al., 2021; Ge et al., 2021) and interactive summarization (Shapira et al., 2017, 2021). Based on different queries, models can focus on different parts of the document and create summaries of various granularities. In practice, it requires a user to provide a query, implying that the user must have prior knowledge of the topic of the source text. Therefore, automatic granularity-aware summarization model is still an under-explored topic.
+
+In this paper, we propose an unsupervised multi-granularity summarization framework called GRANUSUM. Unlike previous work based on supervised learning to provide guidance signals, such as salient sentences (Dou et al., 2021), keywords (He et al., 2020), and retrieved summaries (An et al., 2021), our approach does not rely on any manually labeled data. To measure the granularity, we first regard events as the basic semantic units of the input texts because events carry rich semantic information and are considered as informative representations in many NLP tasks (Zhang et al., 2020a; Li et al., 2020; Chen et al., 2021a). Overall, our system consists of two event-related components: Event-aware Summarizer and Event Selector. Specifically, given the document and randomly selected events in it as hints, we pre-train an abstractive Summarizer that can recover event-related passages. Furthermore, in an unsupervised manner, our Event Selector selects the events with high salience from the original text by candidate
+
+events pruning and ranking. Finally, through selecting different numbers of anchor events based on Event Selector, we can control the Summarizer to generate summaries containing different events, thus covering different numbers of semantic units of the original text. With our proposed approach, GRANUSUM becomes an unsupervised framework for multi-granularity summary generation.
+
+To evaluate the multi-granularity summarization systems, we re-associate DUC2004 (Dang, 2005) as the first benchmark in this direction (denoted as GranuDUC). Given multiple documents on the same topic, we annotate summaries at three levels of granularity with different semantic coverage. Also, to utilize the existing datasets for a supplement evaluation, we propose to divide several large-scale summarization datasets into buckets with summaries at different granularity levels to further evaluate the model performance. Experimentally, GRANUSUM surpasses strong summarization systems on all the multi-granularity evaluations. Additionally, we conduct conventional unsupervised abstractive summarization experiments on three typical benchmarks in different domains. Results demonstrate that GRANUSUM also substantially improves the previous state-of-the-art model under the traditional setting.
+
+# 2 Related Work
+
+# 2.1 Customized Summarization
+
+In order to meet the needs of different users, existing neural summarization systems attempt to control customization of the summary, such as the aspects of content (Zhong et al., 2021; Hayashi et al., 2021), summary length (Christensen et al., 2014; Kikuchi et al., 2016; Liu et al., 2018) and writing style (An et al., 2021). Also, several studies seek to accommodate multiple types of preferences simultaneously to achieve customized summarization. Fan et al. (2018) additionally introduces different special marker tokens to the model to generate user-controllable summaries. He et al. (2020) allows for entity-centric, length-controllable, and question-guided summarization by adjusting the prompts, i.e., changing the textual input in the form of a set of keywords or descriptive prompt words. However, the unavailability of large-scale data containing customized summaries limits the development of these systems that rely on supervised learning. Thus, we focus on unsupervised approaches and are committed to solving the granularity aspect, which
+
+
+Figure 1: Overview of GRANUSUM. It consists of two components: Event Selector and Event-aware Summarizer. The red line $(\rightarrow)$ indicates that Selector extracts the salient events from the original text, and the dotted line means that Summarizer assists in this process. The blue line $(\Rightarrow)$ denotes the multi-granularity summary generation process. By inputting different numbers of events as anchors (purple and green boxes), GRANUSUM can generate multi-granularity summaries.
+
+remains an under-explored direction in customized summarization.
+
+# 2.2 Unsupervised Summarization
+
+In contrast to supervised learning, unsupervised models do not require any human-annotated summaries during training. Unsupervised summarization can also be divided into two branches: extractive methods and abstractive approaches. Most extractive methods rank the sentences and select the highest-ranked ones to form the summary. Specifically, they score sentences based on graph (Erkan and Radev, 2004; Hirao et al., 2013; Parveen et al., 2015), centrality (Zheng and Lapata, 2019; Liang et al., 2021), point-wise mutual information (Padmakumar and He, 2021), or sentence-level self-attention in pre-trained models (Xu et al., 2020). Another direction is unsupervised abstractive approaches, and these studies typically employ sequence-to-sequence auto-encoding method (Chu and Liu, 2019) with adversarial training and reinforcement learning (Wang and Lee, 2018). In addition, Yang et al. (2020) pre-train a Transformer model for unsupervised abstractive summarization by exploiting the lead bias phenomenon (See et al., 2017; Zhong et al., 2019a) in the news domain. In this work, our framework is an unsupervised abstractive framework, and can be further enhanced on top of the extractive method.
+
+# 3 Multi-Granularity Framework
+
+In this section, we first describe in detail our framework GRANUSUM, which has two major compo
+
+nents: Event-aware Summarizer and Event Selector. Combining them enables multi-granularity generation. The overall framework can be seen in Figure 1. Then, we introduce the new human-annotated benchmark, GranuDUC, which can be used for multi-granularity evaluation.
+
+# 3.1 Event-Aware Summarizer
+
+In this work, we focus on abstractive summarization approaches. The way we make the model perceive the granularity is by inputting hints with different degrees of specificity, and here we format the hints as a sequence of events.
+
+Event Extraction We follow previous work to define an event as a verb-centric phrase (Zhang et al., 2020a). A lightweight method $^{2}$ is utilized to extract events from open-domain unstructured data: we extract frequently-occurring syntactic patterns that contain verbs as events. On the basis of Zhang et al. (2020a), we extend a total of 76 syntactic patterns for matching events. For instance, the most common patterns contain $n_{1}$ -nsubj- $v_{1}$ (e.g., Hurricane hits) and $n_{1}$ -nsubj- $v_{1}$ -dobj- $n_{2}$ (e.g., Earthquake damages buildings). $^{3}$ More details and concrete examples can be found in Appendix A.1.
+
+Event-based Summarizer Pre-training Previous studies reveal that event information can be an effective building block for models to perform text generation (Daniel et al., 2003; Glavaš and Šnajder, 2014), so we attempt to obtain a Summarizer with the ability to generate event-related text in an unsupervised way. In the pre-training phase, it is trained to regenerate sentences based on a list of events and the remaining source text. Then we use it to generate a summary at inference time. Concretely, we pre-train a sequence-to-sequence model in the following steps:
+
+1) randomly select a few sentences from the text, 2) extract events in these selected sentences,
+3) mask these sentences in the source document, 4) take extracted events and unmasked text as input. Then we use these selected sentences as the target for the model. For example, for a dialogue text as "Do you have any plans tomorrow? How about playing basketball? Sure, I just finished my homework, it's time to exercise," we can select How about playing basketball? and extract the event
+
+play basketball. In this case, the specific format given to the model is:
+
+- Input: play basketball ⟨seg⟩ Do you have any plans tomorrow? ⟨mask⟩ Sure, I just finished my homework, it's time to exercise.
+- Target: How about playing basketball?
+
+where $\langle \mathrm{seg}\rangle$ is the segmentation token and $\langle \mathrm{mask}\rangle$ indicates that a sentence at this position is masked. We use 'l' token to split the different events, and another example in news domain to further explain the four steps can be found in Appendix A.2.
+
+# 3.2 Event Selector
+
+The salience of the selected events determines whether the Summarizer can generate a quality summary or an irrelevant and uninformative paragraph. A long document can contain hundreds of events, and finding the best event subset involves an exponential search space. Therefore, it is crucial to have an Event Selector that selects the most important events in the text to feed to the Summarizer. Our event selector first reduces the search space by pruning out less salient events and sentences, and then ranks the remaining events using the pre-trained Summarizer.
+
+Event Ranking The salience of the different events extracted from the documents varies. Some of the events are informative and relevant to the original text, but others are too general or specific. For instance, two events club say and Malone be remember can be extracted from the sentence "The club said Malone will forever be remembered as a genuine icon and pillar in the Philadelphia 76ers team". The former is not important for this news, while the latter is indispensable. And in a sentence "Malone won MVP awards by averaging 24.5 points and 15.3 rebounds", "average 24.5 points and 15.3 rebounds" is too detailed to be included in a high-level summary. Thus, ranking candidate events is a key function of Event Selector.
+
+Inspired by Yuan et al. (2021), where a pretrained generative model is capable of evaluating the correlation between the input and the target, we also use our pre-trained Event-based Summarizer to calculate the salience score for each event. Given the candidate event set $E$ and the source document $D$ , our Summarizer can generate a candidate summary $c_{E}$ . Whenever an event $e$ in the input is removed, if the generated candidate summary $c_{E\setminus \{e\}}$ differs greatly from $c_{E}$ , this indicates
+
+that the removed event $e$ is salient. As in the example above, removing "club say" does not cause an obstacle for the model to recover the sentence whose main meaning is that Malone is remembered by people, while removing "Malone be remember" makes the model unable to output the correct sentence. Thus, the latter should be the more important event. Formally, the Salience Score of event $e$ can be defined as:
+
+$$
+\operatorname {S a l} (e) \stackrel {\text {d e f}} {=} - \operatorname {S i m} \left(c _ {E \backslash \{e \}}; c _ {E}\right), \tag {1}
+$$
+
+$$
+\operatorname {S i m} \left(x _ {1}, x _ {2}\right) \stackrel {\text {d e f}} {=} \mathrm {R 1} \left(\mathrm {x} _ {1}, \mathrm {x} _ {2}\right) + \mathrm {R 2} \left(\mathrm {x} _ {1}, \mathrm {x} _ {2}\right), \quad (2)
+$$
+
+where $\operatorname{Sim}(x_1, x_2)$ is a function based on ROUGE score (Lin, 2004) to measure the similarity between any two text sequences $x_1$ and $x_2$ . R1 and R2 are ROUGE-1 and ROUGE-2 scores, respectively. Based on the salience score, Event Selector can rank all the events in the candidate set. However, a single sentence may contain multiple events, so a long document can encompass hundreds of events. Using all events as a candidate set leads to unaffordable computational consumption. Therefore, we prune the candidate events before ranking them.
+
+Candidate Pruning We expect to capture a small set of events that are relevant to the main topic while pruning redundant parts. Events with high relevance provide an efficient summary of the central points in the original text, while low redundancy ensures that the final summary is concise. To this end, we first select several salient sentences and extract the events in them as a candidate set. For relevance, if a sentence has a high semantic overlap with other input sentences, it should have a higher centrality and a higher probability to be included in the summary (Padmakumar and He, 2021). Thus, we define the Relevance Score of each sentence as:
+
+$$
+\operatorname {R e l} (s, D) \stackrel {\text {d e f}} {=} \operatorname {S i m} (s; D \setminus \{s \}), \tag {3}
+$$
+
+where $s$ means the sentence and $D$ represents the given document. $D \backslash \{s\}$ indicates that the sentence $s$ is removed from the original text $D$ .
+
+For redundancy, the sentences in the summary should contain low redundant information when compared with each other. So when extracting the $k$ -th sentence, we define its Redundancy Score as follows:
+
+$$
+\operatorname {R e d} (s, S) \stackrel {\text {d e f}} {=} \sum_ {i = 1} ^ {k - 1} \operatorname {S i m} \left(s _ {i}; s\right), \tag {4}
+$$
+
+where $S$ is a set of the $k-1$ sentences in the summary so far. We follow the idea of Maximal Marginal Relevance (Carbonell and Goldstein, 1998) to maximize relevance and minimize redundancy to calculate the Importance Score of each sentence as:
+
+$$
+\operatorname {I m p} (\mathrm {s}, \mathrm {S}, \mathrm {D}) = \lambda_ {1} \operatorname {R e l} (s, D) - \lambda_ {2} \operatorname {R e d} (s, S). \tag {5}
+$$
+
+Through iteratively calculating the score of each sentence, we can eventually obtain a fixed number of sentences and extract the events from them as a candidate set.
+
+# 3.3 Multi-Granularity Summary Generation
+
+With Event-aware Summarizer and Event Selector, it is feasible to generate multi-granularity summaries. By taking different numbers of ranked events as hints, the Summarizer can perceive the specific level of semantic coverage required to enable the generation of different summaries. For example, the Summarizer can generate a concise coarse-grained summary when only the two events with the highest salience scores (see Equation 1) are input. A case study to illustrate the overall flow of the multi-granularity summary generation can be found in Appendix A.4. During inference, instead of using the same setting as Zhang et al. (2020c), i.e., placing the $\langle \mathrm{mask}\rangle$ token at the beginning of the article, we simply omit it. Because we already provide enough event information to guide the model to generate a summary in our framework.
+
+# 3.4 New Benchmark: GranuDUC
+
+Considering that there is no dataset for evaluating multi-granularity summarization models, we re-annote a new benchmark called GranuDUC on the basis of DUC2004 (Dang, 2005). Our annotation team consists of 5 graduate students in NLP or people with equivalent expertise. For each document cluster, annotators are required to read multiple source documents and write summaries at three different granularities. The annotators are informed to be aware that granularity is not distinguished by the number of sentences, but is defined by different semantic coverage of the original text. Specifically, we inform the annotators that "coarse granularity level" should include only the main event of the entire documents, "medium granularity level" should include several important conditions, results and processes surrounding the main topic, and "fine granularity level" should further include the details such as time and location for each
+
+sub-event. Summaries at different granularities require significantly different levels of semantic coverage. Newly annotated sentences are allowed to be copied or rewritten from DUC2004's original reference summaries. In addition, we require annotators not to use the same sentences in different summaries of a sample, even when describing the same event. Each annotated summary is required to be reviewed by another annotator, then these two people discuss and revise until an agreement is reached. In the end, GranuDUC contains a total of 50 clusters, each cluster contains an average of 10 related documents and 3 summaries of different granularity, ranging from 10 words to more than 200 words in length. To demonstrate the quality of GranuDUC, we include the annotations of two samples in Appendix 8.
+
+# 4 Experiments
+
+We design three settings of experiments:
+
+1) experiments on GranuDUC,
+2) bucket-based evaluation,
+3) unsupervised abstractive summarization.
+
+The first two settings constitute a new testbed for multi-granularity summarization, where bucket means that we divide the existing dataset into different buckets according to semantic coverage to make the evaluation more comprehensive. In addition to this scenario, the last experiment auxiliarily evaluates the quality of summaries generated by our framework under the conventional unsupervised abstractive summarization setting.
+
+# 4.1 Experimental Setup
+
+Datasets Because the conclusions obtained on the summarization dataset of a single domain are not generalizable (Wang et al., 2019; Zhong et al., 2019b; Chen et al., 2020), we select two widely varying domains: news and scientific papers for our experiments Notably, we focus on two types of datasets, multi-document and long-document summarization, which are two main scenarios where users call for a multi-granularity system. For multi-document summarization, we concatenate the multiple articles into a single sequence as the source text. In addition to our benchmark GranuDUC, we use the following three datasets. Detailed statistics are listed in Table 2.
+
+Multi-News (Fabbri et al., 2019) is a large-scale multi-document summarization dataset in the news domain. We use it in bucket-based evaluation (Sec
+
+
Datasets
# Samples
Len. of Doc.
Len. of Sum.
Multi-News
56K
1793
217
arXiV
214K
6021
272
DUC2004
50
5882
115
GranuDUC
50
5882
24/68/135
+
+Table 2: Statistics of all datasets we used in this paper. DUC2004 and GranuDUC are for testing only.
+
+tion 4.2.2) and unsupervised summarization experiments (Section 4.3).
+
+DUC2004 (Dang, 2005) contains 50 clusters, each with 10 relevant news articles and 4 reference summaries written by humans. Due to its small size, it is usually used directly as a test set. We utilize it in the unsupervised summarization experiment (Section 4.3).
+
+arXiv (Cohan et al., 2018) is a collection of long documents derived from scientific papers. It takes the full text of the paper as input, and the corresponding abstract as the reference summary. We use it in the unsupervised summarization experiment (Section 4.3).
+
+Implementation Details To process long input text in Table 2, we choose the Longformer-Encoder-Decoder (LED) (Beltagy et al., 2020) as our backbone model, and train it with typical cross entropy loss. For Multi-News and arXiv, we further pretrain LED with our event-related generation task on their training corpora (without using reference summaries) for a total of 10,000 and 30,000 steps, respectively. We set batch size to 32 and the maximum learning rate to 2e-5. $\lambda_{1}$ in the importance score is 1.0 and $\lambda_{2}$ is 0.4. By tuning the hyperparameters on the validation set, we empirically extract 9 sentences for Multi-News and 4 sentences for arXiv to form a candidate set, and input $90\%$ events according to salience score to the Summarizer under unsupervised summarization setting. For DUC2004 and GranuDUC, we test directly with the Summarizer pre-trained on Multi-News, since these datasets are both in the news domain. In all experiments, we use standard pyrouge4 to calculate ROUGE scores. Due to the limitation of computational resources, we truncate an input text to 3,072 tokens for LED models.
+
+Baselines We use the following baselines:
+
+BART (Lewis et al., 2020) is the state-of-the-art sequence-to-sequence pre-trained model for vari-
+
+ous generation tasks, including abstractive dialogue generation, question answering, and text summarization. We use BART-large in all the experiments.
+
+PEGASUS (Zhang et al., 2020b) is a powerful generation model with gap-sentences generation as a pretraining objective tailored for abstractive summarization. We use the large version of PEGASUS for comparison.
+
+PEGASUS-event indicates that on top of PEGASUS, additional event information is prepended to the input before the $\langle \mathrm{mask}\rangle$ token. We compare it to see if additional event information can be captured without our event-aware pre-training stage.
+
+LED (Beltagy et al., 2020) has the same architecture as BART, except that the attention in the encoder introduces additional local attention and extends the position embedding to 16K tokens by copying the original embedding. The parameters in the LED are initialized by the weights in BART.
+
+LED-Length-Control (LED-LC) is a baseline that we obtained by further pre-training LED. Inspired by Fan et al. (2018), given a document and the desired number of sentences $k$ , we randomly place $k$ sentences in the document with the ⟨mask⟩ token, and let the model recover these sentences. During inference, we input the text and the desired number of sentences as a hint to the model so that it can control the length of the output summary. $^5$
+
+PRIMERA (Xiao et al., 2022) is a pre-trained model for multi-document summarization that reduces the need for dataset-specific architectures and extensive labeled data. It achieves state-of-the-art results on multi-document summarization datasets under multiple settings.
+
+# 4.2 Multi-granularity Evaluation
+
+The first testbed we built for multi-granularity summarization includes two evaluation methods:
+
+1) To test the ability of the model to generate summaries with different granularity levels when given the same input, we evaluate different models on our benchmark GranuDUC.
+2) To supplement the limited size of GranuDUC, we design a bucket-based evaluation approach, where we divide a large-scale test set into different buckets based on their granularity levels, and test the ability of models to generate quality summaries in different granularity buckets.
+
+
Coarse Granularity Level
Medium Granularity Level
Fine Granularity Level
Model
R-1
R-2
R-L
R-1
R-2
R-L
R-1
R-2
R-L
PEGASUS
20.74
4.20
15.11
24.86
4.39
14.34
29.79
5.70
14.83
PEGASUS-event
20.68
4.18
15.12
24.72
4.28
14.25
29.58
5.52
14.61
LED-LC
21.83
4.80
15.29
26.73
5.59
15.76
30.18
5.57
15.24
GRANUSUM
23.61
6.60
17.12
29.69
6.84
16.23
34.71
7.49
17.42
Model
Flu.
Rel.
Faith.
Flu.
Rel.
Faith.
Flu.
Rel.
Faith.
PEGASUS
3.25
3.36
3.15
3.46
3.49
2.72
3.73
3.44
2.58
LED-LC
3.97
3.39
3.08
3.93
3.57
3.14
3.67
3.62
2.73
GRANUSUM
4.13
3.82
3.59
4.09
3.78
3.46
3.82
4.05
3.17
+
+Table 3: Results on GranuDUC. The top half of the Table shows the result of the automatic metric ROUGE, and the bottom half presents the result of human evaluation, including fluency, relevance and faithfulness.
+
+
Model
Low
Medium
High
R-1
R-2
R-L
R-1
R-2
R-L
R-1
R-2
R-L
PRIMERA
37.21
9.92
17.68
42.50
13.19
20.24
46.95
18.10
23.99
LED-LC
37.28
9.56
16.64
42.37
12.65
19.15
47.57
17.88
22.40
GRANUSUM
38.19
10.27
18.07
44.73
14.12
20.10
50.23
19.62
24.11
- Ranking
37.34
9.36
16.69
43.41
13.28
19.12
49.66
19.35
23.37
+
+Table 4: Result of bucket-based evaluation on Multi-news. We design Granularity Score to divide the test set into three buckets. Low means that the summary has low semantic coverage with the source documents.
+
+# 4.2.1 Results on GranuDUC
+
+The summaries of each sample in GranuDUC can be divided into three granularity levels, where coarse granularity level represents the most compact summary, and fine granularity level is the most fine-grained summary. We use automatic metrics ROUGE and perform the human evaluation to evaluate the performance of different models in GranuDUC. Notably, both LED-LC and GRANUSUM have the ability to adjust the output according to specific granularity scenarios. At three different granularity levels on GranuDUC, we let LED-LC output 1, 3 and 8 sentences which correspond to the average length of reference summaries at different granularities. For our model, we take the top $90\%$ events with the highest salience score in the selected 1, 3, 8 sentences as the input hint. For all baselines, we control the length of the model output to be similar to the reference summary to get the best performance.
+
+Automatic Evaluation As illustrated in Table 3, compared to PEGASUS, LED-LC can bring a certain degree of improvement due to the ability to control the length of the output summary. This improvement is not remarkable at fine granularity level. For coarse and medium granularity levels, LED-LC can control the number of output sentences, while PEGASUS does not have a similar ca
+
+pability and it can only generate shorter summaries by truncating the output (to 32 and 64 words), which leads to performance degradation. On the other hand, GRANUSUM exceeds LED-LC and PE-GASUS by a large margin in all the granularity levels. Although GRANUSUM and LED-LC are trained on the same data, GRANUSUM increases the R-1 score by 1.78 at coarse granularity level $(21.83\rightarrow 23.61)$ , and the improvement reaches to 4.53 at fine granularity level $(30.18\rightarrow 34.71)$ . With the benefit of event information, our model can generate more relevant and quality summaries, and the advantage is more pronounced in fine-grained summaries. Therefore, GRANUDUC is a more suitable system for multi-granularity scenarios than existing controllable summarization models.
+
+Human Evaluation We also conduct human evaluation to have a more comprehensive understanding of the model output. Six graduate students are involved in this process to score the generated summaries from three different perspectives: fluency, relevance and faithfulness to the source documents. The score range is 1-5, with 1 being the worst and 5 the best. Each sample requires two people to discuss and agree on the scoring. According to the fluency scores in Table 3, both LED-LC and GRANUDUC can generate coherent sentences, while PEGASUS performs poorly in coarse and
+
+
Model
Multi-News
arXiv
DUC2004
R-1
R-2
R-L
R-1
R-2
R-L
R-1
R-2
R-L
LEAD
42.9
14.3
19.2
32.7
8.1
17.5
32.3
6.5
16.3
LED
17.3
3.7
10.4
15.0
3.1
10.8
16.6
3.0
12.0
BART
27.3
6.2
15.1
29.2
7.5
16.9
24.1
4.0
15.3
PEGASUS
32.0
10.1
16.7
29.5
7.9
17.1
32.7
7.4
17.6
PEGASUS-event
31.5
10.2
15.8
29.2
7.7
17.0
31.8
7.1
16.9
PRIMERA
42.2
13.7
20.6
34.6
9.4
18.3
34.7
6.9
17.6
Selector
43.3
14.1
19.1
35.3
10.8
17.8
34.3
7.1
17.1
LED-LC
42.0
13.3
19.2
34.9
9.9
18.1
33.9
6.6
16.8
GRANUSUM
43.7
14.2
20.1
36.0
11.3
18.6
34.8
7.3
17.9
- Ranking
43.5
14.0
19.7
35.4
10.8
18.5
34.3
7.0
17.2
+
+Table 5: Results of unsupervised abstractive summarization on three datasets.
+
+medium granularity levels due to truncating the output to a fixed length. From the perspective of relevance and faithfulness, a clear trend is that the more fine-grained the summary, the more relevant it is to the original text and the more likely it is to contain factual errors. Specific to the models, GRANUSUM generates more relevant and faithful summaries in all granularity scenarios compared to other baselines by exploiting event information.
+
+# 4.2.2 Bucket-based Evaluation
+
+In addition to GranuDUC, we seek to utilize existing large-scale datasets for multi-granularity evaluation. Unlike the previous approach of using a single reference summary to evaluate multiple lengths of summaries (Shapira et al., 2018), we divide the reference summaries into different buckets based on semantic coverage and then compare the performance of each model in each bucket. We first design a metric to calculate the granularity score between the source document and the reference summary to categorize the different samples. Because the same events in original text and human-written summary may have different descriptions, we design a granularity score on the basis of BERTScore (Zhang et al., 2019) to perform soft matching due to its ability to measure semantic coverage between two sequences. Specifically, we extract all the events in the source document and the reference summary as two event sequences, and calculate Granularity Score as:
+
+$$
+\operatorname {G r a n u} (D, r) = f \left(E v e n t _ {D}, E v e n t _ {r}\right), \tag {6}
+$$
+
+where $D$ is the source documents and $r$ represents the reference summary. $Event_{D}$ denotes that we extract all events from $D$ by using the approach in Section 3.1, and concatenate them into an event
+
+sequence. $f$ means that BERTScore is used to calculate the recall score between two event sequences. Intuitively, a high recall score of the reference summary to the original text indicates that it has high semantic coverage and thus it is a summary at a high granularity level. We sort all samples in the test set of Multi-News dataset according to Granularity Score and divide them into three buckets with the same number of samples. The average length of summaries in the three buckets are 198, 214, and 236 words, respectively.
+
+Although PRIMERA is the state-of-the-art model, it does not have the flexibility to change the output in response to different buckets. For LED-LC, we let the model generate 7, 8, and 9 sentences in low, medium, and high buckets, respectively. For our model, we take the top $70\%$ , $80\%$ , and $90\%$ of the events with the higher salience score (see Section 3.2) in 9 selected sentences as the input for three different buckets. As shown in Table 4, LED-LC has no significant benefits over PRIMERA, indicating that controlling the output length and ignoring its connection to the original text is not a good solution for the multi-granularity system. In contrast, GRANUSUM achieves substantial improvements in all buckets compared to powerful baselines. In particular, in buckets with high semantic coverage, our model improves R-1 score by 3.28 compared to PRIMERA. Also, “Ranking” means that we no longer filter out events based on the salience score, which causes a performance drop. It confirms that our selector can indeed exclude irrelevant and redundant events and thus improve the quality of the generated summary.
+
+# 4.3 Unsupervised Abstractive Summarization
+
+The quality of the summary is a key factor for all summarization systems. So in addition to the
+
+multi-granularity scenario, we likewise compare GRANUSUM with conventional unsupervised abstractive summarization models. Table 5 provides results on three datasets. The first section includes a simple yet effective approach LEAD, which refers to extracting the first few sentences at the beginning of the text as a summary. It is a strong baseline in the news domain due to the lead bias problem (See et al., 2017; Zhong et al., 2019a). The second section lists the strong baselines and the last section contains the results of our models. Selector indicates that we extract several sentences from the source document based on our importance score described in Section 3.2 as the summary.
+
+Surprisingly, although GRANUSUM is not specially designed for the conventional unsupervised summarization task, it still beats all the competitors and achieves new state-of-the-art results on most metrics across datasets. Despite inputting the same hints, PEGASUS-event does not show the ability to exploit event information and even performs worse than PEGASUS. In contrast, our pre-trained Event-aware Summarizer incorporates event information well into the generated summaries and thus boosts performance. Furthermore, GRANUSUM outperforms Selector, which is a strong extractive baseline, and extractive approaches usually dominate unsupervised summarization tasks. We think the improvement comes from two reasons:
+
+1) In the pre-training stage, important content in the masked sentences is easier to reconstruct due to the redundancy of input texts. Thus, GRANUSUM learn to filter those unimportant content in inference, generating more concise summaries.
+2) Event Selector screens out less critical events which should not appear in the summary.
+
+Overall, GRANUSUM improves R-1 score by 1.0 on average compared to the previous best results, indicating that it is sufficient to generate quality summaries besides the multi-granularity ability.
+
+# 5 Conclusion
+
+In this paper, we highlight the importance of multi-granularity summarization systems in catering to user preferences and applying them to real-world scenarios. To facilitate research in this direction, we propose the first unsupervised multi-granularity summarization framework GRANUSUM and build a well-established testbed. Experiments demonstrate the effectiveness of our framework.
+
+# Limitations
+
+We state the limitations of this paper from the following four aspects:
+
+1) Unlike previous work that uses summary length to approximate granularity, we adopt an event-based definition, which can be extended to be more flexible. For example, introducing phrases, entities, relationships, etc. as part of the granularity may be a feasible way to further enhance the granularity-aware summarization system.
+2) Despite being the first multi-granularity summarization benchmark, GranuDUC can only be used as a test set due to its small size. Thus, we call for the emergence of customized summarization datasets, which can greatly facilitate the development of customizable summarization models.
+3) Specific to the method, we extract events from the source text as hints, which may reduce the abstractness of the generated summaries to some extent. In pursuit of a more abstractive summary, rephrasing events into different forms may be a viable option, and we leave it as future work.
+4) In this paper we focus on three different levels of granularity and take document clusters containing thousands of words as input. A promising extension could be to input longer text and to add finer levels of granularity, for example, to generate summaries for an entire book (e.g., a novel) at multiple granularities.
+
+# Acknowledgements
+
+We thank Wen Xiao for providing the output of PRIMERA. We would also like to thank anonymous reviewers for valuable comments and suggestions. Research was supported in part by US DARPA KAIROS Program No. FA8750-19-2-1004 and INCAS Program No. HR001121C0165, National Science Foundation IIS-19-56151, IIS-17-41317, and IIS 17-04532, and the Molecule Maker Lab Institute: An AI Research Institutes program supported by NSF under Award No. 2019897, and the Institute for Geospatial Understanding through an Integrative Discovery Environment (I-GUIDE) by NSF under Award No. 2118329. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and do not necessarily represent the views, either expressed or implied, of DARPA or the U.S. Government. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies.
+
+# References
+
+Chenxin An, Ming Zhong, Zhichao Geng, Jianqiang Yang, and Xipeng Qiu. 2021. Retrievalsum: A retrieval enhanced framework for abstractive summarization. arXiv preprint arXiv:2109.07943.
+Chenxin An, Ming Zhong, Zhiyong Wu, Qin Zhu, Xuan-Jing Huang, and Xipeng Qiu. 2022. Colo: A contrastive learning based re-ranking framework for one-stage summarization. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5783-5793.
+Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.
+Jaime G. Carbonell and Jade Goldstein. 1998. The use of mmr, diversity-based reranking for reordering documents and producing summaries. In SIGIR '98: Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, August 24-28 1998, Melbourne, Australia, pages 335-336. ACM.
+Muhao Chen, Hongming Zhang, Qiang Ning, Manling Li, Heng Ji, Kathleen McKeown, and Dan Roth. 2021a. Event-centric natural language processing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Tutorial Abstracts, pages 6-14.
+Yiran Chen, Pengfei Liu, Ming Zhong, Zi-Yi Dou, Dan- qing Wang, Xipeng Qiu, and Xuan-Jing Huang. 2020. Cdevalsumm: An empirical study of cross-dataset evaluation for neural summarization systems. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3679-3691.
+Yulong Chen, Yang Liu, Liang Chen, and Yue Zhang. 2021b. Dialogsum: A real-life scenario dialogue summarization dataset. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 5062-5074.
+Janara Christensen, Stephen Soderland, Gagan Bansal, et al. 2014. Hierarchical summarization: Scaling up multi-document summarization. In Proceedings of the 52nd annual meeting of the association for computational linguistics (volume 1: Long papers), pages 902-912.
+Eric Chu and Peter Liu. 2019. Meansum: a neural model for unsupervised multi-document abstractive summarization. In International Conference on Machine Learning, pages 1223-1232. PMLR.
+Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 615-621.
+
+Hoa Trang Dang. 2005. Overview of duc 2005. In Proceedings of the document understanding conference, volume 2005, pages 1-12.
+Naomi Daniel, Dragomir Radev, and Timothy Allison. 2003. Sub-event based multi-document summarization. In Proceedings of the HLT-NAACL 03 Text Summarization Workshop, pages 9-16.
+Alberto Díaz and Pablo Gervás. 2007. User-model based personalized summarization. Information Processing & Management, 43(6):1715-1734.
+Zi-Yi Dou, Pengfei Liu, Hiroaki Hayashi, Zhengbao Jiang, and Graham Neubig. 2021. Gsum: A general framework for guided neural abstractive summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4830-4842.
+Günes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457-479.
+Alexander Richard Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1074-1084.
+Angela Fan, David Grangier, and Michael Auli. 2018. Controllable abstractive summarization. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 45-54.
+Suyu Ge, Jiaxin Huang, Yu Meng, Sharon Wang, and Jiawei Han. 2021. Fine-grained opinion summarization with minimal supervision. arXiv preprint arXiv:2110.08845.
+Goran Glavaš and Jan Šnajder. 2014. Event graphs for information retrieval and multi-document summarization. Expert systems with applications, 41(15):6904-6916.
+Hiroaki Hayashi, Prashant Budania, Peng Wang, Chris Ackerson, Raj Neervannan, and Graham Neubig. 2021. Wikiasp: A dataset for multi-domain aspect-based summarization. Transactions of the Association for Computational Linguistics, 9:211-225.
+Junxian He, Wojciech Krysciński, Bryan McCann, Nazneen Rajani, and Caiming Xiong. 2020. Ctrlsum: Towards generic controllable text summarization. arXiv preprint arXiv:2012.04281.
+Tsutomu Hirao, Yasuhisa Yoshida, Masaaki Nishino, Norihito Yasuda, and Masaaki Nagata. 2013. Single-document summarization as a tree knapsack problem. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1515-1520.
+
+Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. 2016. Controlling output length in neural encoder-decoders. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1328-1338.
+Kevin Lerman, Sasha Blair-Goldensohn, and Ryan McDonald. 2009. Sentiment summarization: evaluating and learning user preferences. In Proceedings of the 12th conference of the European chapter of the ACL (EACL 2009), pages 514-522.
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880.
+Manling Li, Qi Zeng, Ying Lin, Kyunghyun Cho, Heng Ji, Jonathan May, Nathanael Chambers, and Clare Voss. 2020. Connecting the dots: Event graph schema induction with path language modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 684-695.
+Xinnian Liang, Shuangzhi Wu, Mu Li, and Zhoujun Li. 2021. Improving unsupervised extractive summarization with facet-aware modeling. In *Findings of the Association for Computational Linguistics: ACLIJCNLP* 2021, pages 1685-1697.
+Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.
+Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3721-3731.
+Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. 2022. Brio: Bringing order to abstractive summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2890-2903.
+Yizhu Liu, Zhiyi Luo, and Kenny Zhu. 2018. Controlling length in abstractive summarization using a convolutional neural network. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4110-4119.
+Rutu Mulkar-Mehta, Jerry R Hobbs, and Eduard Hovy. 2011. Granularity in natural language discourse. In Proceedings of the Ninth International Conference on Computational Semantics (IWCS 2011).
+
+Vishakh Padmakumar and He He. 2021. Unsupervised extractive summarization using pointwise mutual information. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2505-2512.
+Daraksha Parveen, Hans-Martin Ramsl, and Michael Strube. 2015. Topical coherence for graph-based extractive summarization. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 1949-1954.
+Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1073-1083.
+Ori Shapira, David Gabay, Hadar Ronen, Judit Bar-Ilan, Yael Amsterdamer, Ani Nenkova, and Ido Dagan. 2018. Evaluating multiple system summary lengths: A case study. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 774-778.
+Ori Shapira, Ramakanth Pasunuru, Hadar Ronen, Mohit Bansal, Yael Amsterdamer, and Ido Dagan. 2021. Extending multi-document summarization evaluation to the interactive setting. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 657-677.
+Ori Shapira, Hadar Ronen, Meni Adler, Yael Amsterdamer, Judit Bar-Ilan, and Ido Dagan. 2017. Interactive abstractive summarization for event news tweets. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 109-114.
+Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu, and Xuan-Jing Huang. 2020. Heterogeneous graph neural networks for extractive document summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6209-6219.
+Danqing Wang, Pengfei Liu, Ming Zhong, Jie Fu, Xipeng Qiu, and Xuanjing Huang. 2019. Exploring domain shift in extractive text summarization. arXiv preprint arXiv:1908.11664.
+Yaushian Wang and Hung-Yi Lee. 2018. Learning to encode text as human-readable summaries using generative adversarial networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4187-4195.
+Wen Xiao, Iz Beltagy, Giuseppe Carenini, and Arman Cohan. 2022. Primera: Pyramid-based masked sentence pre-training for multi-document summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5245-5263.
+
+Shusheng Xu, Xingxing Zhang, Yi Wu, Furu Wei, and Ming Zhou. 2020. Unsupervised extractive summarization by pre-training hierarchical transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 1784-1795.
+Rui Yan, Jian-Yun Nie, and Xiaoming Li. 2011. Summarize what you are interested in: An optimization framework for interactive personalized summarization. In Proceedings of the 2011 conference on empirical methods in natural language processing, pages 1342-1351.
+Ziyi Yang, Chenguang Zhu, Robert Gmyr, Michael Zeng, Xuedong Huang, and Eric Darve. 2020. Ted: A pretrained unsupervised summarization model with theme modeling and denoising. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 1865-1874.
+Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. Bartscore: Evaluating generated text as text generation. arXiv preprint arXiv:2106.11520.
+Hongming Zhang, Xin Liu, Haojie Pan, Yangqiu Song, and Wingki Leung. 2020a. Aser: A large-scale eventuality knowledge graph. In The Web Conference 2020-Proceedings of the World Wide Web Conference, WWW 2020, page 201.
+Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020b. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In International Conference on Machine Learning, pages 11328-11339. PMLR.
+Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2020c. PEGASUS: pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 11328-11339. PMLR.
+Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.
+Hao Zheng and Mirella Lapata. 2019. Sentence centrality revisited for unsupervised summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6236-6247.
+Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2020. Extractive summarization as text matching. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 6197-6208. Association for Computational Linguistics.
+
+Ming Zhong, Pengfei Liu, Danqing Wang, Xipeng Qiu, and Xuan-Jing Huang. 2019a. Searching for effective neural extractive summarization: What works and what's next. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1049-1058.
+Ming Zhong, Danqing Wang, Pengfei Liu, Xipeng Qiu, and Xuan-Jing Huang. 2019b. A closer look at data bias in neural extractive summarization models. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 80-89.
+Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, et al. 2021. Qmsum: A new benchmark for query-based multi-domain meeting summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5905-5921.
+
+# A Method
+
+Here we provide more details about our method part. The workflow of GRANUSUM and case study are listed in Table 7.
+
+# A.1 Event Extraction
+
+Specifically, given a sentence $s$ , we use a dependency parser to obtain its dependency parse tree and select all non-auxiliary verbs as centric tokens. Then, along the syntactic relationships between the selected verbs and other tokens, we extract the longest phrase that matches the designed patterns as events. As illustrated in Table 6, the most frequent pattern is $n_1$ -nsubj- $v_1$ , such as Hurricane hit. Another common pattern is $n_1$ -nsubj- $v_1$ -dobj- $n_2$ , like Hurricane damage buildings. Here "nsubj" denotes an active relationship between nouns and verbs, while "nsubjpass" in another example represents a passive relationship between them. More detailed examples can be found in Table 7, we extract events from four selected sentences, and the colored text shows the locations of the events in the original document.
+
+# A.2 Event-based Summarizer Pre-training
+
+We further explain the four steps of Event-based Summarizer pre-training with the help of the following example. For a paragraph of news as "Honduras braced for potential catastrophe Tuesday. Hurricane Mitch roared through the Caribbean, churning up high waves and intense rain that sent coastal residents scurrying for safer ground. President declared a state of maximum alert and the Honduran military sent planes to pluck residents from their homes on islands near the coast", we
+
+1) first randomly select a sentence: "Hurricane Mitch roared through the Caribbean, churning up high waves and intense rain that sent coastal residents scurrying for safer ground",
+
+2) extract events in it such as Mitch roar, Mitch churn up wave and rain, send and resident scurry,
+
+3) then mask this sentence in the original paragraph, and finally
+
+4) use extracted events and masked text as the input and regard the selected sentence as the target as follows:
+
+- Input: Mitch roar | Mitch churn up wave and rain | send | resident scurry ⟨seg⟩ Honduras braced for potential catastrophe Tuesday. ⟨mask⟩ President declared a state of maximum alert and the Honduran military sent
+
+planes to pluck residents from their homes on islands near the coast.
+
+- Target: Hurricane Mitch roared through the Caribbean, churning up high waves and intense rain that sent coastal residents scurrying for safer ground.
+
+In our experiments, we randomly mask 1 to $n$ sentences from a document, which leads to $n$ samples to pre-train our Summarizer. Here we set $n$ to the smaller of a constant number 10 and one-third of the number of sentences in the document.
+
+# A.3 Event Selector
+
+We use the example in Table 7 to further explain the flow of the Event Selector. When we obtain candidate events from selected sentences, there are still different types of issues in the candidate set. Some generic and uninformative events, such as "club say" and "let him know", should have a lower priority for a summary. Although we introduce sentence-level redundancy score in the pruning step, as a finer-grained unit, events still suffer from redundancy problem (see events in Table 7 with the same color), e.g., both "win MVP", "Malone win MVP" and "average 31.1 points and 14.7 rebounds", "average 24.5 points and 15.3 rebounds" appear in the candidate set. However, after the events ranking and filter using our Event Selector, all of these issues are alleviated. In this case, our Selector regards "Malone win MVP", "Moses Malone die" and "Malone be remember" as the three most salient events, which is consistent with the original news. In addition, uninformative events ("club say" and "let him know") are ranked at the end of the candidate sets, and duplicate events ("win MVP" and "average 24.5 points and 15.3 rebounds") are filtered out due to the lowest salience score. In general, the reasonable ranking of candidate events by the Selector plays a crucial role in improving the quality of subsequent multi-granularity summaries.
+
+# A.4 Multi-Granularity Summary Generation
+
+We can see from Table 7, to obtain the most condensed summary, the two most important events ("Malone win MVP" and "Moses Malone die") and the original news are fed to the model. Then, the pre-trained Summarizer can be aware of event-based cues and generate the corresponding sentence: "Moses Malone, a three-time NBA MVP and one of basketball's most ferocious rebounders,
+
+
Patterns
Examples
n1-nsubj-v1
Hurricane hit
n1-nsubj-v1-dobj-n2
Hurricane damage buildings
n1-nsubj-v1-xcomp-a
People feel scared
n1-nsubj-v1-xcomp-v2-dobj-n2
Police want to save people
n1-nsubjpass-v1
Residents are injured
+
+Table 6: Five typical patterns and corresponding examples when we extract events (76 patterns in total). Here 'v' is a verb, 'n' stands for a noun, and 'a' denotes an adjective. All verbs remain in their original form. 'nsubj', 'dobj', 'xcomp', and 'nsubjpass' are syntactic relations.
+
+died on Sunday". As more events are input, our Summarizer also has the ability to adjust the order of the narrative to make the content more logical. In the summary of granularity level 2, the order in the prompt is "Malone be remember" then "team compile a 65-17 record", but the model first output "He helped the team compile a 65-17 record in the first season" and then "These achievements make him be remembered as a genuine icon and pillar in the history of 76ers basketball" to make the whole summary more coherent and intuitive. Compared to sentences selected from the source documents (see Step 1 in Table 7), the summary generated by GranuSum omits unimportant details and paraphrases to make it more concise. Abstractive models without guidance signals, such as PEGASUS, tend to generate some repetitive sentences (the first two sentences), and generate several less relevant sentences without capturing important events. In contrast, GRANUSUM can output summaries that are more relevant and faithful to the original text.
+
+# B Examples for GranuDUC
+
+We provide two annotation examples for our proposed GranuDUC benchmark in Table 8.
+
+# Step 1: Select Important Sentences based on Relevance and Redundancy Score, and Extract Events
+
+- Malone was part of the 76ers' 1983 NBA championship team, and the club said he will forever be remembered as a genuine icon and pillar of the most storied era in the history of Philadelphia 76ers basketball. —— club say | Malone be remember
+- In the initial meeting in New York, Cunningham pulled Malone aside and let him know his expectations of the player who had won MVP honors in Houston the previous season by averaging 31.1 points and 14.7 rebounds. $\longrightarrow$ Cunningham pull Malone | let him know | win MVP | average 31.1 points and 14.7 rebounds
+- In his first season with the Sixers, Malone won MVP awards by averaging 24.5 points and 15.3 rebounds during the regular season in which the team compiled a 65-17 record. $\longrightarrow$ Malone win MVP | average 24.5 points and 15.3 rebounds | team compile a 65-17 record
+- Moses Malone, a three-time NBA MVP and one of basketball's most ferocious rebounders, died Sunday, the Philadelphia 76ers said. $\longrightarrow$ Moses Malone die | 76ers say
+
+# Step 2: Obtain a Candidate Set by Combining the Above Events
+
+- Original Candidate Events: club say | Malone be remember | Cunningham pull Malone | let him know | win MVP | average 31.1 points and 14.7 rebounds | Malone win MVP | average 24.5 points and 15.3 rebounds | team compile a 65-17 record | Moses Malone die | 76ers say
+
+# Step 3: Event Ranking and Filtering (Event Selector)
+
+- Ranked Candidate Events: Malone win MVP | Moses Malone die | Malone be remember | team compile a 65-17 record | Cunningham pull Malone | average 31.1 points and 14.7 rebounds | 76ers say | let him know
+
+# Step 4: Multi-Granularity Summary Generation (Event-based Summarizer)
+
+- Coarse Granularity Level
+- Input: Malone win MVP | Moses Malone die Source News
+- Generated Summary: Moses Malone, a three-time NBA MVP and one of basketball's most ferocious rebounders, died on Sunday.
+- Fine Granularity Level
+- Input: Malone win MVP | Moses Malone die | Malone be remember | team compile a 65-17 record $\langle \mathrm{seg}\rangle$ $\langle \mathrm{mask}\rangle$ Source News
+- Generated Summary: Moses Malone, a three-time NBA MVP and one of basketball's most ferocious rebounders, died on Sunday. He helped the team compile a 65-17 record in the first season. These achievements make him be remembered as a genuine icon and pillar in the history of 76ers basketball.
+
+# Summary Generated by PEGASUS
+
+- Moses Malone, a three-time NBA MVP and one of basketball's most ferocious rebounders, died Sunday, the Philadelphia 76ers said. The 76ers issued a statement that said Malone had died. Malone was inducted into the Naismith Memorial Basketball Hall of Fame in 2001 and attended the induction ceremonies for the year's class in Springfield, Massachusetts this weekend.
+
+# Reference Summary
+
+- Three-time NBA MVP and Philadelphia 76ers legend Moses Malone, who with Julius Erving in 1983 brought the City of Brotherly Love its first championship since 1967, has died at the age of 60, reports the Inquirer. Moses holds a special place in our hearts and will forever be remembered as a genuine icon and pillar of the most storied era in the history of Philadelphia 76ers basketball.
+
+Table 7: Workflow of GRANUSUM and case study. The colored text in Step 1 indicates the location of the extracted event in the original sentence. Events of the same color in Step 2 are redundant. Underlined text in Step 4 represents the overlap with the reference summary. Notably, we pre-train an Event-based Summarizer before Step 1.
+
+# Sample 1: News about the Civil Suit against Microsoft
+
+- Summary of Coarse Granularity Level: The Justice Department filed a civil suit against Microsoft to change its pattern of anti-competitive conduct on browser software.
+- Summary of Medium Granularity Level: Business rivals have filed an anti-trust suit against Microsoft to break Microsoft Corp.'s monopoly on computer operating systems. The suit began with a Microsoft vs Netscape battle. The Government is examining Microsoft's financial records and painting a dark image of its Chairman Bill Gates. An unpublished book may be crucial to the trial.
+- Summary of Fine Granularity Level: The Justice Department filed a suit against Microsoft for violation of the Sherman Act to change its anti-competitive conduct. The heart of the suit is the Internet browser battle between Microsoft and Netscape. Microsoft, it is argued, has told computer manufacturers that if they want Windows, they must forgo Netscape. Netscape complaint over browsers was central to the case, which grew to include Intel, IBM, Sun, Apple, AOL, and Intuit. The battle now extends far beyond that aiming at Microsoft's overall aggressive anti-competitive conduct. Microsoft's chairman, Bill Gates, usually seen as a visionary is portrayed in much darker tones in the trial. Microsoft was ordered to let Justice examine its records and sought a trial delay. An unpublished book provided evidence, which can be crucial to the trial.
+
+# Sample 2: News about the Health Condition of the Russian President
+
+- Summary of Coarse Granularity Level: Russia President Boris Yeltsin's worsening heath condition caused great concern to the Russian leadership.
+- Summary of Medium Granularity Level: During Russia President Boris Yeltsin's seven years in power, illness has often sidelined him. He recently cut short a trip to Central Asia because of a respiratory infection and he later canceled two out-of-country summits. Russia's leaders are calling for his resignation and question his legal right to seek reelection.
+- Summary of Fine Granularity Level: Russia President Boris Yeltsin had a heart attack in 1996, followed by multiple bypass surgery. The cause of minor burns on his hand were not disclosed. On a trip to Uzbekistan he walked stiffly, stumbled, rambled and seemed confused. Ceremonies were canceled and the trip ended a day early. Yeltsin refuses to admit he is seriously ill and his condition is kept secret. He was treated with antibiotics and ordered to bed but went to the office anyway. Many Russians suspect he is sicker, question his ability to do his job, and want him to resign. The court was to judge on whether he could serve a third term, but he already has said he will not run.
+
+Table 8: Annotation of two samples in GranuDUC.
\ No newline at end of file
diff --git a/unsupervisedmultigranularitysummarization/images.zip b/unsupervisedmultigranularitysummarization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..615117bce14975ff8d72b1b355d8f256f045b31d
--- /dev/null
+++ b/unsupervisedmultigranularitysummarization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:efe25fc1cf902c3767d447d711e86257fd378ddab4c12f0224bd42402c71db2f
+size 388552
diff --git a/unsupervisedmultigranularitysummarization/layout.json b/unsupervisedmultigranularitysummarization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..2f33b7de74bff445fa653dcea3986556227fecec
--- /dev/null
+++ b/unsupervisedmultigranularitysummarization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a996b8aa975dfac2a470d2fe25a78a2e8ccdf4e276005623dedc8cfae430f582
+size 461777
diff --git a/unsupervisedsyntacticallycontrolledparaphrasegenerationwithabstractmeaningrepresentations/9c62ff91-5320-405f-8ca9-3d0298b071a1_content_list.json b/unsupervisedsyntacticallycontrolledparaphrasegenerationwithabstractmeaningrepresentations/9c62ff91-5320-405f-8ca9-3d0298b071a1_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..6102bbc84b0b04f1d47430a9ef12ea063c8a3d3f
--- /dev/null
+++ b/unsupervisedsyntacticallycontrolledparaphrasegenerationwithabstractmeaningrepresentations/9c62ff91-5320-405f-8ca9-3d0298b071a1_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0762a96d2d5b9c7a96b3e7e63bd04f1c10d0a8df43b7dc2a48b888abf2ff8ca4
+size 49396
diff --git a/unsupervisedsyntacticallycontrolledparaphrasegenerationwithabstractmeaningrepresentations/9c62ff91-5320-405f-8ca9-3d0298b071a1_model.json b/unsupervisedsyntacticallycontrolledparaphrasegenerationwithabstractmeaningrepresentations/9c62ff91-5320-405f-8ca9-3d0298b071a1_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..34d67535aa4061217164c7c2b885c5510465e919
--- /dev/null
+++ b/unsupervisedsyntacticallycontrolledparaphrasegenerationwithabstractmeaningrepresentations/9c62ff91-5320-405f-8ca9-3d0298b071a1_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5cd828bc0ee8ce983235c6214a9c659c814fb93b172ac7ebbed0e4f9cc342c71
+size 61310
diff --git a/unsupervisedsyntacticallycontrolledparaphrasegenerationwithabstractmeaningrepresentations/9c62ff91-5320-405f-8ca9-3d0298b071a1_origin.pdf b/unsupervisedsyntacticallycontrolledparaphrasegenerationwithabstractmeaningrepresentations/9c62ff91-5320-405f-8ca9-3d0298b071a1_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..90d3653f1273efed472a7cc8a8951eb4f89d8581
--- /dev/null
+++ b/unsupervisedsyntacticallycontrolledparaphrasegenerationwithabstractmeaningrepresentations/9c62ff91-5320-405f-8ca9-3d0298b071a1_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2fcbf2cb808bb52155deb91428849b4d7b074881ba6d9d6c3e406fd3b36dc317
+size 393004
diff --git a/unsupervisedsyntacticallycontrolledparaphrasegenerationwithabstractmeaningrepresentations/full.md b/unsupervisedsyntacticallycontrolledparaphrasegenerationwithabstractmeaningrepresentations/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..54bed81b577cf3b5fccb96c017f02c16ec04983f
--- /dev/null
+++ b/unsupervisedsyntacticallycontrolledparaphrasegenerationwithabstractmeaningrepresentations/full.md
@@ -0,0 +1,184 @@
+# Unsupervised Syntactically Controlled Paraphrase Generation with Abstract Meaning Representations
+
+Kuan-Hao Huang*† Varun Iyer*Anoop Kumar Sriram Venkatapathy‡ Kai-Wei Chang† Aram Galstyan‡
+
+†University of California, Los Angeles
+
+$^{\diamond}$ Johns Hopkins University, $\ddagger$ Amazon Alexa AI
+
+{khhuang, kwchang}@cs.ucla.edu, viyer3@jhu.edu
+
+{anooamzn, vesriram, kaiwec, argalsty}@amazon.com
+
+# Abstract
+
+Syntactically controlled paraphrase generation has become an emerging research direction in recent years. Most existing approaches require annotated paraphrase pairs for training and are thus costly to extend to new domains. Unsupervised approaches, on the other hand, do not need paraphrase pairs but suffer from relatively poor performance in terms of syntactic control and quality of generated paraphrases. In this paper, we demonstrate that leveraging Abstract Meaning Representations (AMR) can greatly improve the performance of unsupervised syntactically controlled paraphrase generation. Our proposed model, AMR-enhanced Paraphrase Generator (AMRPG), separately encodes the AMR graph and the constituency parse of the input sentence into two disentangled semantic and syntactic embeddings. A decoder is then learned to reconstruct the input sentence from the semantic and syntactic embeddings. Our experiments show that AMRPG generates more accurate syntactically controlled paraphrases, both quantitatively and qualitatively, compared to the existing unsupervised approaches. We also demonstrate that the paraphrases generated by AMRPG can be used for data augmentation to improve the robustness of NLP models.
+
+# 1 Introduction
+
+Syntactically controlled paraphrase generation approaches aim to control the format of generated paraphrases by taking into account additional parse specifications as the inputs, as illustrated by Figure 1. It has attracted increasing attention in recent years since it can diversify the generated paraphrases and benefit a wide range of NLP applications (Iyyer et al., 2018; Huang and Chang, 2021; Sun et al., 2021), including task-oriented dialog generation (Gao et al., 2020), creative generation (Tian et al., 2021), and model robustness (Huang and Chang, 2021).
+
+
+Figure 1: An illustration of syntactically controlled paraphrase generation. Given a source sentence and different parse specifications, the model generates different paraphrases following the parse specifications.
+
+Recent works have shown success in training syntactically controlled paraphrase generators (Iyyer et al., 2018; Chen et al., 2019; Kumar et al., 2020; Sun et al., 2021). Although their models can generate high-quality paraphrases and achieve good syntactic control ability, the training process needs a large amount of supervised data, e.g., parallel paraphrase pairs. Annotating paraphrase pairs is usually expensive because it requires intensive domain knowledge and high-level semantic understanding. Due to the difficulty in collecting parallel data, the ability of supervised approaches are limited, especially when adapting to new domains.
+
+To reduce the annotation demand, unsupervised approaches can train syntactically controlled paraphrase generators without the need for parallel pairs (Zhang et al., 2019; Bao et al., 2019; Huang and Chang, 2021). Most of them achieve syntactic control by learning disentangled embeddings for semantics and syntax separately (Bao et al., 2019; Huang and Chang, 2021). However, without parallel data, it is challenging to learn a good disentanglement and capture semantics well. As we will show later (Section 4.1), unsupervised approaches can generate bad paraphrases by mistakenly swapping object and subject of a sentence.
+
+In this work, we propose to use Abstract Meaning Representations (AMR) (Banarescu et al., 2013) to learn better disentangled semantic embeddings
+
+
+Figure 2: The same AMR graph for a pair of paraphrased sentences "He described her as a genius." and "She was a genius, according to his description."
+
+for unsupervised syntactically controlled paraphrase generation. AMR is a semantic graph structure that covers the abstract meaning of a sentence. As shown in Figure 2, two sentences would have the same (or similar) AMR graph as long as they carry the same abstract meaning, even they are expressed with different syntactic structures. This property makes AMRs a good resource to capture sentence semantics.
+
+Based on this, we design an AMR-enhanced Paraphrase Generator (AMRPG), which separately learns (1) semantic embeddings with the AMR garphs extracted from the input sentence and (2) syntactic embeddings from the constituency parse of the input sentence. Then, AMRPG trains a decoder to reconstruct the input sentence from the semantic and syntactic embeddings. The reconstruction objective and the design of the disentanglement of semantics and the syntax makes AMRPG learn to generate syntactically controlled paraphrases without using parallel pairs. Our experiments show that AMRPG performs better syntactic control than existing unsupervised approaches. Additionally, we demonstrate that the generated paraphrases of AMRPG can be used for data augmentation to improve the robustness of NLP models.
+
+# 2 Related Work
+
+Paraphrase generation. Traditional paraphrase generators are usually based on hand-crafted rules (Barzilay and Lee, 2003) or seq2seq models (Cao et al., 2017; Gupta et al., 2018; Fu et al., 2019). To generate diverse paraphrases, different techniques are proposed, including random pattern embeddings (Kumar et al., 2019), latent space perturbation (Roy and Grangier, 2019; Zhang et al., 2019; Cao and Wan, 2020), multi-round generation (Lin and Wan, 2021), reinforcement learning (Liu et al., 2020), prompt-tuning (Chowdhury et al., 2022), order control (Goyal and Durrett, 2020), and syntactic control (Iyyer et al., 2018; Kumar et al., 2020; Huang and Chang, 2021; Sun et al., 2021).
+
+Abstract meaning representation (AMR). Since AMR (Banarescu et al., 2013) captures high-level semantics, it has been applied for various NLP tasks, including summarization (Sachan and Xing, 2016), dialogue modeling (Bai et al., 2021), information extraction (Zhang et al., 2021). Some works also focus on training high-quality AMR parsers with graph encoders (Cai and Lam, 2020), seq2seq models (Konstas et al., 2017; Zhou et al., 2020), and decoder-only models (Bevilacqua et al., 2021).
+
+# 3 Unsupervised Syntactically Controlled Paraphrase Generation
+
+# 3.1 Problem Formulation
+
+We follow previous works (Iyyer et al., 2018; Huang and Chang, 2021) and consider constituency parses (without terminals) as the control signals. Given a source sentence $s$ and a target parse $p$ , the goal of the syntactically controlled paraphrase generator is to generate a target sentence $t$ which has similar semantics to the source sentence $s$ and has syntax following the parse $p$ . In the unsupervised setting, the paraphrase generator cannot access any target sentences and target parses but only the source sentences and source parses during training.
+
+# 3.2 Proposed Method: AMRPG
+
+Motivated by previous approaches (Bao et al., 2019; Huang and Chang, 2021), we design AM-RPG to learn separate embeddings for semantics and syntax, as illustrated by Figure 3. Then, AM-RPG learns a decoder with the objective to reconstruct the source sentence. The challenge here is how to learn embeddings such that the semantic embedding contains only semantic information while the syntactic embedding contains only syntactic information. We introduce the details as follows.
+
+Semantic embedding. Given a source sentence, we first use a pre-trained AMR parser1 to get its AMR graph. Next, we use a semantic encoder to encode the AMR graph into the semantic embedding $e_{sem}$ . Specifically, the semantic encoder consists of two parts: a fixed pre-trained AMR encoder (Ribeiro et al., 2021) followed by a learnable Transformer encoder. We additionally perform node masking when training the semantic encoder. Specifically, every node in the AMR graph has a
+
+
+Figure 3: AMRPG's framework. It separately encodes the AMR graph and the constituency parse of the input sentence into two disentangled semantic and syntactic embeddings. A decoder is then learned to reconstruct the input sentence from the semantic and syntactic embeddings.
+
+probability to be masked out during training. This can improve the robustness of AMRPG.
+
+As mentioned above, two semantically similar sentences would have similar AMR graphs regardless of their syntax. This property encourages AMRPG to capture only semantic information in semantic embeddings. Compared with previous work (Huang and Chang, 2021), which uses bag-of-words to learn the semantic embeddings, using AMR can capture semantics better and lead to better performance, as shown in Section 4.
+
+Syntactic embedding. Given a source sentence, we use the Stanford CoreNLP toolkit (Manning et al., 2014) to get its constituency parse. Then, we remove all the terminals in the parse and learns a Transformer encoder to encode the parse into the syntactic embedding $e_{syn}$ . Since we remove the terminals, the syntactic embedding contains only the syntactic information of the source sentence.
+
+Decoder. We train a Transformer decoder that takes the semantic embedding $e_{sem}$ and the syntactic embedding $e_{syn}$ as the input, and reconstructs the source sentence with a cross-entropy loss. The reconstruction objective makes AMRPG not require parallel paraphrase pairs for training.
+
+Inference. Given a source sentence $s$ and a target parse $p$ , we use the semantic encoder to encode the AMR graph of $s$ into the semantic embedding, use the syntactic encoder to encode $p$ into the syntactic embedding, and use the decoder to generate the target sentence $t$ .
+
+# 4 Experiments
+
+# 4.1 Syntactically Controlled Paraphrase Generation
+
+Datasets. We consider ParaNMT (Wieting and Gimpel, 2018) for training and testing. We use only the source sentences in ParaNMT to train AMRPG and other unsupervised baselines, and use both the source sentences and target sentences to train supervised baselines. To further test the model's ability to generalize to new domains, we directly use the models trained with ParaNMT to test on Quora (Iyer et al., 2017), MRPC (Dolan et al., 2004), and PAN (Madnani et al., 2012)
+
+Evaluation metrics. Following the previous work (Huang and Chang, 2021), we consider the BLEU score to measure the similarity between the gold target sentences and the predicted target sentences, and consider the template matching accuracy $^2$ (TMA) to evaluate the goodness of syntactic control. More details about the evaluation can be found in Appendix B.2.
+
+Baselines. We consider the following unsupervised models: SIVAE (Zhang et al., 2019), SynPG (Huang and Chang, 2021), AMRPG, and T5-Baseline, which replaces the AMR encoder with a T5-encoder. We also consider SCPN (Iyyer et al., 2018) as the supervised baseline.
+
+Results. Table 1 shows the results of syntactically controlled paraphrase generation. AMRPG performs the best among the unsupervised approaches. Specifically, AMRPG outperforms SynPG, the
+
+
Model
ParaNMT
Quora
PAN
MRPC
TMA
BLEU
TMA
BLEU
TMA
BLEU
TMA
BLEU
Unsupervised Approaches (without using parallel pairs)
Supervised Approaches (using additional parallel pairs in ParaNMT; not comparable to ours)
SCPN (Iyyer et al., 2018)
83.9
58.3
87.1
41.0
72.3
37.6
80.1
41.8
+
+Table 1: Results of syntactically controlled paraphrase generation. AMRPG performs the best among all unsupervised approaches and can outperform supervised approaches when considering the target domain source sentences.
+
+
Input
The dog chased the cat on the street.
Parse template
(S (NP (DT) (NN)) (VP (VBN) (PP)) (.))
Target
The cat was chased by the dog on the street.
SynPG
The dog was chased by the cat on the street.
AMRPG
The cat was chased by a dog in the street.
Input
John will send a gift to Tom when Christmas comes.
Parse template
(S (SBAR (WHADVP) (S)) (.)(NP (NNP)) (VP (MD) (VP)) (.))
Target
When Christmas comes, John will send a gift to Tom.
SynPG
When Tom comes, John will send a gift to Christmas.
AMRPG
When Christmas comes, John will send a gift to Tom.
+
+Table 2: Paraphrase examples generated by SynPG and AMRPG. AMRPG captures semantics better and generates higher quality of paraphrases than SynPG.
+
+state-of-the-art unsupervised model, with a large gap in terms of BLEU score. This justifies that using AMR can learn better disentangled embeddings and capture semantics better.
+
+We observe that there is indeed a performance gap between AMRPG and SCPN (supervised baseline). However, since AMRPG is an unsupervised model, it is possible to use the source sentences from the target domains to further fine-tune AMRPG without additional annotation cost. As shown in the table, AMRPG with further fine-tuning can achieve even better performance than SCPN when considering domain adaptation (Quora, MRPC, and PAN). This demonstrates the flexibility and the potential of unsupervised paraphrase models.
+
+Qualitative examples. Table 2 lists some paraphrases generated by SynPG and AMRPG. As we mentioned in Section 3, SynPG uses bag-of-words to learn semantic embeddings and therefore SynPG is easy to get confused about the relations between entities or mistake the subject for the object. In contrast, AMRPG can preserve more semantics.
+
+# 4.2 Improving Robustness of NLP Models
+
+We demonstrate that the paraphrases generated by AMRPG can improve the robustness of NLP models by data augmentation. Following the setting of previous work (Huang and Chang, 2021), we consider three classification tasks in GLUE (Wang et al., 2019): MRPC, RTE, and SST-2. We compare three baselines: (1) the classifier trained with original training data, (2) the classifier trained with original training data and augmented data generated by SynPG, and (3) the classifier trained with original training data and augmented data generated by AMRPG. Specifically, for every instance in the original training data, we generate four paraphrases as the augmented examples by considering four common syntactic templates. More details can be found in Appendix C.1.
+
+Table 3 shows the clean accuracy and the broken rate (the percentage of examples being attacked) after attacked by the syntactically adversarial examples3 generated with SCPN (Iyyer et al., 2018). Although the classifiers trained with data augmen
+
+
Model
MRPC
RTE
SST-2
Acc.
Brok.
Acc.
Brok.
Acc.
Brok.
Base
83.3
52.9
62.1
58.1
92.2
38.8
+ SynPG
80.6
42.2
61.7
40.3
91.5
38.5
+ AMRPG
80.6
38.3
58.8
39.3
91.6
36.7
+
+Table 3: Augmenting paraphrases generated by AM-RPG improves the robustness of NLP models. Acc denotes the clean accuracy (the higher is the better). Brok denotes the percentage of examples being successfully attacked (the lower is the better).
+
+tation have slightly worse clean accuracy, they have significantly lower broken rates, which implies that data augmentation improves the model robustness. Also, data augmentation with AMRPG performs better than data augmentation with SynPG in terms of the broken rate. We attribute this to the better quality of paraphrase generation of AMRPG.
+
+# 5 Conclusion
+
+We propose AMRPG that utilizes AMR to learn a better disentanglement of semantics and syntax without using any parallel data. This enables AMRPG to captures semantics better and generate more accurate syntactically controlled paraphrases than existing unsupervised approaches. We also demonstrate that how to apply AMRPG to improve the robustness of NLP models.
+
+# Limitations
+
+Our goal is to demonstrate the potential of AMR for syntactically controlled paraphrase generation. The current experimental setting follows previous works (Iyyer et al., 2018; Huang and Chang, 2021), which considers the full constituency parses as the control signals. In real applications, getting full constituency parses before the paraphrase generation process might take additional efforts. One potential solution is to consider relatively noisy or simplified parse specifications (Sun et al., 2021). In addition, some parse specifications can be inappropriate for certain source sentences (e.g., the source sentence is long but the target parse is short). How to score and reject some of the given parse specifications is still an open research question. Finally, although training AMRPG does not require any parallel paraphrase pairs, it does require a pretrained AMR parser, which can be a potential cost for training AMRPG.
+
+# Broader Impacts
+
+Our proposed method focuses on improving syntactically controlled paraphrase generation. It is intended to be used to improve the robustness of models and facilitate language generation for applications with positive social impacts. All the experiments are conducted on open benchmark datasets. However, it is known that the models trained with a large text corpus could capture the bias reflecting the training data. It is possible for our model to potentially generate offensive or biased content learned from the data. We suggest to carefully examining the potential bias before deploying models in any real-world applications.
+
+# References
+
+Xuefeng Bai, Yulong Chen, Linfeng Song, and Yue Zhang. 2021. Semantic representation for dialogue modeling. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL/IJCNLP).
+Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffith, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse (LAW-ID@ACL).
+Yu Bao, Hao Zhou, Shujian Huang, Lei Li, Lili Mou, Olga Vechtomova, Xin-Yu Dai, and Jiajun Chen. 2019. Generating sentences from disentangled syntactic and semantic spaces. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL).
+Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: An unsupervised approach using multiple-sequence alignment. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT).
+Michele Bevilacqua, Rexhina Blloshmi, and Roberto Navigli. 2021. One SPRING to rule them both: Symmetric AMR semantic parsing and generation without a complex pipeline. In Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI).
+Deng Cai and Wai Lam. 2020. AMR parsing via graph sequence iterative inference. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL).
+Yue Cao and Xiaojun Wan. 2020. Divgan: Towards diverse paraphrase generation via diversified generative adversarial network. In *Findings of the Association for Computational Linguistics: (EMNLP)-Findings*.
+
+Ziqiang Cao, Chuwei Luo, Wenjie Li, and Sujian Li. 2017. Joint copying and restricted generation for paraphrase. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI).
+Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019. Controllable paraphrase generation with a syntactic exemplar. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL).
+Jishnu Ray Chowdhury, Yong Zhuang, and Shuyi Wang. 2022. Novelty controlled paraphrase generation with retrieval augmented conditional prompt tuning. In Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI).
+Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In 20th International Conference on Computational Linguistics (COLING).
+Yao Fu, Yansong Feng, and John P. Cunningham. 2019. Paraphrase generation with latent bag of words. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019 (NeurIPS).
+Silin Gao, Yichi Zhang, Zhijian Ou, and Zhou Yu. 2020. Paraphrase augmented task-oriented dialog generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL).
+Tanya Goyal and Greg Durrett. 2020. Neural syntactic preordering for controlled paraphrase generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL).
+Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2018. A deep generative framework for paraphrase generation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI).
+Kuan-Hao Huang and Kai-Wei Chang. 2021. Generating syntactically controlled paraphrases without using annotated parallel pairs. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL).
+Shankar Iyer, Nikhil Dandekar, and Korn el Csernai. 2017. First quora dataset release: Question pairs. data.quora.com.
+Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT).
+Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural AMR:
+
+sequence-to-sequence models for parsing and generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL).
+Ashutosh Kumar, Kabir Ahuja, Raghuram Vadapalli, and Partha P. Talukdar. 2020. Syntax-guided controlled generation of paraphrases. Transactions of the Association for Computational Linguistics, 8:330-345.
+Ashutosh Kumar, Satwik Bhattachamishra, Manik Bhandari, and Partha P. Talukdar. 2019. Submodular optimization-based diverse paraphrasing and its effectiveness in data augmentation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT).
+Zhe Lin and Xiaojun Wan. 2021. Pushing paraphrase away from original sentence: A multi-round paraphrase generation approach. In *Findings of the Association for Computational Linguistics* (ACL/IJCNLP-Findings).
+Mingtong Liu, Erguang Yang, Deyi Xiong, Yujie Zhang, Yao Meng, Changjian Hu, Jinan Xu, and Yufeng Chen. 2020. A learning-exploring method to generate diverse paraphrases with multi-objective deep reinforcement learning. In Proceedings of the 28th International Conference on Computational Linguistics (COLING).
+Nitin Madnani, Joel R. Tetreault, and Martin Chodorow. 2012. Re-examining machine translation metrics for paraphrase identification. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT).
+Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, System Demonstrations.
+Leonardo F. R. Ribeiro, Martin Schmitt, Hinrich Schütze, and Iryna Gurevych. 2021. Investigating pretrained language models for graph-to-text generation. In Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI.
+Aurko Roy and David Grangier. 2019. Unsupervised paraphrasing without translation. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL).
+Mrinmaya Sachan and Eric P. Xing. 2016. Machine comprehension using rich semantic representations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL).
+Jiao Sun, Xuezhe Ma, and Nanyun Peng. 2021. AESOP: paraphrase generation with adaptive syntactic control. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP).
+
+Yufei Tian, Arvind Krishna Sridhar, and Nanyun Peng. 2021. Hypogen: Hyperbole generation with commonsense and counterfactual knowledge. In Findings of the Association for Computational Linguistics: (EMNLP-Findings).
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations (ICLR).
+John Wieting and Kevin Gimpel. 2018. *Paranmt-50m: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations*. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)*.
+Xinyuan Zhang, Yi Yang, Siyang Yuan, Dinghan Shen, and Lawrence Carin. 2019. Syntax-infused variational autoencoder for text generation. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL).
+Zixuan Zhang, Nikolaus Nova Parulian, Heng Ji, Ahmed Elsayed, Skatje Myers, and Martha Palmer. 2021. Fine-grained information extraction from biomedical literature based on knowledge-enriched abstract meaning representation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL/IJCNLP).
+Qiji Zhou, Yue Zhang, Donghong Ji, and Hao Tang. 2020. AMR parsing with latent structural information. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL).
+
+# A Implementation Details
+
+We use around 20 millions of examples4 in ParaNMT (Wieting and Gimpel, 2018) to train AMRPG and all baselines. The semantic encoder and the syntactic decoder are trained from scratch, with the default architecture and the default parameters of torch.nn.Transformer. The max length for input sentences, the linearized constituency parses, and the linearized AMR graph are set to 40, 160, and 250, respectively. The word dropout rate is 0.4 while the node masking rate is 0.6. We consider Adam optimizer with the learning rate being $10^{-4}$ and the weight decay being $10^{-5}$ . The total number of epochs is set to 10. When generating the outputs, we use random sampling with temperature being 0.5. The model is trained with 4 NVIDIA V100 GPUs with 16 GB memory each. It takes around 7 days to finish the training process.
+
+# B Experimental Settings of Syntactically Controlled Paraphrase Generation
+
+# B.1 Datasets
+
+Following previous work (Huang and Chang, 2021), our test data is: (1) 6,400 examples of ParaNMT (Wieting and Gimpel, 2018), (2) 6,400 examples of Quora (Iyer et al., 2017), (3) 2,048 examples of PAN (Madnani et al., 2012), and (4) 1,920 examples of MRPC (Dolan et al., 2004).
+
+# B.2 Evaluation
+
+Following previous work (Huang and Chang, 2021), we consider paraphrase pairs to evaluate the performance. Given a paraphrase pairs $(s_1,s_2)$ we use the Standford CoreNLP constituency parser (Manning et al., 2014) to get their parses $(p_{1},p_{2})$ . The input of all baselines would be $(s_1,p_2)$ and the ground truth would be $s_2$ .
+
+Assuming the generated paraphrase is $g$ , We use BLEU score to measure the similarity between the generated paraphrase $g$ and the ground truth $s_2$ . We also calculate the template matching accuracy (TMA) by computing the exact matching accuracy of the top-2 levels of $p_g$ and $p_2$ ( $p_g$ is the constituency parse of $g$ ).
+
+# C Experimental Settings of Model Robustness
+
+# C.1 Training Details
+
+We use the pre-trained SynPG parse generator to generate the full parse for each instance with the following parse templates: “(S (NP) (VP) (.))”, “(S (VP) (.))”, “(NP (NP) (.))”, and “(FRAG (SBAR) (.))”. Then, we use the generated full parses as the parse specifications to generate paraphrases for data augmentation. When training classifiers with data augmentation, the original instances have four times of weights as the augmented instances when computing the loss. We use the scripts from Huggingface5 with default values to train the classifiers.
+
+# C.2 Generating Adversarial Examples
+
+We use the official script $^{6}$ of SCPN (Iyyer et al., 2018) to generate syntactically adversarial examples. Specifically, we consider the first five parse templates for RTE and SST-2 and first three parse templates for MRPC to generate the adversarial examples. As long as one of the adversarial examples makes the classifier change the prediction, we count it as a successful attack on this instance.
\ No newline at end of file
diff --git a/unsupervisedsyntacticallycontrolledparaphrasegenerationwithabstractmeaningrepresentations/images.zip b/unsupervisedsyntacticallycontrolledparaphrasegenerationwithabstractmeaningrepresentations/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d1f1e342d967bfa01f97b7574f2561be9a63e67d
--- /dev/null
+++ b/unsupervisedsyntacticallycontrolledparaphrasegenerationwithabstractmeaningrepresentations/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5933f626fadf3a8e545d90da1ae05cd1a8f0415134909b176411f52a3b9e67ec
+size 243497
diff --git a/unsupervisedsyntacticallycontrolledparaphrasegenerationwithabstractmeaningrepresentations/layout.json b/unsupervisedsyntacticallycontrolledparaphrasegenerationwithabstractmeaningrepresentations/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..450783bba8bee7ff0559c8557d3f4db17c2e2d9b
--- /dev/null
+++ b/unsupervisedsyntacticallycontrolledparaphrasegenerationwithabstractmeaningrepresentations/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e9c017cabd6a6f8b9e284f08516d299f8b4b7bafde5902f8e4b65568cf10dc2
+size 221277
diff --git a/unsupervisedtextdeidentification/7c2b8e03-1a2e-4853-9194-ec4b94e97875_content_list.json b/unsupervisedtextdeidentification/7c2b8e03-1a2e-4853-9194-ec4b94e97875_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..3f24a020d6926af78ad5f1d6de3397626ff14dd3
--- /dev/null
+++ b/unsupervisedtextdeidentification/7c2b8e03-1a2e-4853-9194-ec4b94e97875_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e2dbcabdd2c1c73deafb522810c14f20cd7c59b9a39c0be0583eaf21d11c9aa5
+size 89366
diff --git a/unsupervisedtextdeidentification/7c2b8e03-1a2e-4853-9194-ec4b94e97875_model.json b/unsupervisedtextdeidentification/7c2b8e03-1a2e-4853-9194-ec4b94e97875_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d589d764acb859d937ddefa569f33b2e760937b7
--- /dev/null
+++ b/unsupervisedtextdeidentification/7c2b8e03-1a2e-4853-9194-ec4b94e97875_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c62eaf4a0570ef730c5724bd4b40edfdba8f8b092006ff0a40e6ed46bbe4bbb8
+size 108369
diff --git a/unsupervisedtextdeidentification/7c2b8e03-1a2e-4853-9194-ec4b94e97875_origin.pdf b/unsupervisedtextdeidentification/7c2b8e03-1a2e-4853-9194-ec4b94e97875_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..379d8c9eb2775e172e8d5d8325a2469b235d2dad
--- /dev/null
+++ b/unsupervisedtextdeidentification/7c2b8e03-1a2e-4853-9194-ec4b94e97875_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:adaa3558cc9d41effd321e0e10a9264e5b682041f99eae5eea81ba77c5291b56
+size 718896
diff --git a/unsupervisedtextdeidentification/full.md b/unsupervisedtextdeidentification/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..592e49c219f912d318ce928500247821db747e84
--- /dev/null
+++ b/unsupervisedtextdeidentification/full.md
@@ -0,0 +1,420 @@
+# Unsupervised Text Deidentification
+
+John X. Morris Justin T. Chiu Ramin Zabih Alexander M. Rush
+
+Cornell University
+
+{jxm3,arush}@cornell.edu
+
+# Abstract
+
+Deidentification seeks to anonymize textual data prior to distribution. Automatic deidentification primarily uses supervised named entity recognition from human-labeled data points. We propose an unsupervised deidentification method that masks words that leak personally-identifying information. The approach utilizes a specially trained reidentification model to identify individuals from redacted personal documents. Motivated by K-anonymity based privacy, we generate redactions that ensure a minimum reidentification rank for the correct profile of the document. To evaluate this approach, we consider the task of deidentifying Wikipedia Biographies, and evaluate using an adversarial reidentification metric. Compared to a set of unsupervised baselines, our approach deidentifies documents more completely while removing fewer words. Qualitatively, we see that the approach eliminates many identifying aspects that would fall outside of the common named entity based approach.
+
+# 1 Introduction
+
+In domains such as law, medicine, and government, it can be difficult to release textual data because it contains sensitive personal information (Johnson et al., 2016; Jana and Biemann, 2021; Pilan et al., 2022). Privacy laws and regulations vary by domain and impact the requirements for deidentification. Most prior work on automatic deidentification (Neamatullah et al., 2008; Meystre et al., 2010; Sanchez et al., 2014; Liu et al., 2017; Norgeot et al., 2020; Sberbank and Emelyanov, 2021) deidentifies data to the requirements of the HIPAA Safe Harbor method (Centers for Medicare & Medicaid Services, 1996). Annotations for these systems are based on a list of 18 identifiers like age,
+
+phone number, and zip code. These systems treat deidentification as a named entity recognition problem within this space. Upon the removal of these pre-defined entities, text is no longer considered sensitive.
+
+However, one of the 18 categories defined by HIPAA Safe Harbor includes "any unique identifying number, characteristic, or code [that could be used to reidentify an individual]". Prior work ignores this nebulous 18th category. One reason the category is ill-defined is due to the existence of quasi-identifiers, pieces of personally identifiable information (PII) that do not fall under any single category and therefore are difficult to identify and label in the general case (Phillips and Knoppers, 2016). Even data that has all of the categories from Safe Harbor removed may still be reidentified through quasi-identifiers (Angiuli et al., 2015). Supervised approaches cannot naturally detect quasi-identifiers, since these words are not inherently labeled as PII (Uzuner et al., 2007).
+
+In this work, we propose an unsupervised deidentification method that targets the more general definition of PII. Instead of relying on specific rule lists of named entities, we directly remove words that could lead to reidentification. Motivated by the goal of $K$ -anonymity (Lison et al., 2021), our approach utilizes a learned probabilistic reidentification model to predict the true identity of a given text. We perform combinatorial inference in this model to find a set of words that, when masked, achieve K-anonymity. The system does not require any annotations of specific PII, but instead learns from a dataset of aligned descriptive text and profile information. Using this information, we can train an identification process using a dense encoder model.
+
+Experiments test the ability of the system to deidentify documents from a large-scale database. We use a dataset of Wikipedia Biographies aligned
+
+
+Figure 1: Method overview. A document $(x, \text{top-left})$ paired with a profile $(\hat{y}, \text{top-right})$ is given to the system. A trained neural reidentification model $(p(y|x,z)$ , blue circle) produces a distribution over all possible profiles based on densely encoded representations. At each stage of inference, masks are added to the source document, changing the relative rank of the reidentification model. The method is run until k-anonymity of the reidentification model is achieved. Note that in this example, it is not necessary to remove all information, such as the month and day of birth, since the player is already deidentified.
+
+with info-boxes (Lebret et al., 2016). The system is fit on a subset of the data and then asked to deidentify unseen individuals. Results show that even when all words from the profile are masked, the system is able to reidentify $32\%$ of individuals. When we use our system to deidentify documents, it is able to fully anonymize them while retaining over $50\%$ of words. When we compare our deidentification method to a set of unsupervised baselines, our method deidentifies documents more completely while removing fewer words. We qualitatively and quantitatively analyze the redactions produced by our system, including examples of successfully redacted quasi-identifiers.
+
+# 2 Related Work
+
+Automated deidentification. There is much prior work on deidentifying text datasets, both with rule-based systems (Neamatullah et al., 2008; Meystre et al., 2010; Sánchez et al., 2014; Norgeot et al., 2020; Sberbank and Emelyanov, 2021) and deep learning methods (Liu et al., 2017; Yue and Zhou, 2020; Johnson et al., 2020). Each of these methods is supervised, relies on datasets with human-labeled PII, and focuses on removing some
+
+subset of the 18 identifying categories from HIPAA Safe Harbor. Other approaches include generating entire new fake datasets using Generative Adversarial Networks (GANs) (Chin-Cheong et al., 2019). Friedrich et al. (2019) train an LSTM on an EMR-based NLP task using adversarial loss to prevent the model from learning to reconstruct the input. Finally, differential privacy is a technique for ensuring provably private distributions (Dwork et al., 2006). It has mostly been used for training anonymized models on data containing PII, but requires access to the un-anonymized datasets for training (Li et al., 2021). Our deidentification approach does not provide the formal guarantees of differential privacy, but aims to provide a practical solution for anonymizing datasets in real-world scenarios.
+
+Deidentification by reidentification. The NeurIPS 2020 Hide-and-Seek Privacy Challenge benchmarked both deidentification and reidentification techniques for clinical time series data (Jordon et al., 2021). In computer vision, researchers have proposed learning to mask faces in images to preserve the privacy of individuals using reidentification (Hukkelas et al., 2019;
+
+Maximov et al., 2020; Gupta et al., 2021). In NLP, some work has been done on evaluating the reidentification risk of deidentified text (Scaiano et al., 2016). El Emam et al. (2009) proposes a method for deidentification of tabular datasets based on the concept of K-anonymity. Gardner and Xiong (2009) deidentify unstructured text by performing named entity extraction and redacting entities until k-anonymity. Mansour et al. (2021) propose an algorithm for deidentification of tabular datasets by quantifying reidentification risk using a metric related to K-anonymity. In our work, we train a reidentification model in an adversarial setting and use the model to deidentify documents directly.
+
+Learning in the presence of masks. Various works have shown how to improve NLP models by masking some of the input during training. Chen and Ji (2020) show that learning in the presence of masks can improve classifier interpretability and accuracy. Li et al. (2016) train a model to search for the minimum subset of words required that, when removed, change the output of a classifier. They apply their method to neural network interpretability, and use reinforcement learning. Liao et al. (2020) pre-train a BERT-style language model to do masked-word prediction by sampling a masking ratio from $U(0,1)$ and masking that many words. While their method was originally proposed for text generation, we apply the same masking approach to train language models for redaction.
+
+# 3 Motivating Experiment: Quasi-Identifiers
+
+In order to study the problem of deidentifying personal information from documents, we set up a model dataset utilizing personal profiles from Wikipedia. We use the Wikibio dataset (Lebret et al., 2016). Each entry in the dataset contains a document, the introductory text of the Wikipedia article, and a profile, the infobox of key-value pairs containing personal information. We train on the training dataset of 582, 659 documents and profiles. During test time, we evaluate only test documents, but consider all 728, 321 profiles from the concatenation of the train, validation, and test sets. This dataset represents a natural baseline by providing a range of factual profile information for a large collection of individuals, making it challenging to deidentify. In addition, it provides an openly available collection for comparing models.
+
+
DeID
ReID
None (0%)
Named entity (24%)
Lexical (28%)
IR ReID
74.9
4.3
0.0
NN ReID
99.6
79.7
31.9
+
+Table 1: Percentage of documents reidentified (ReID) for different deidentification methods. Percentage of words masked in parentheses.
+
+Is it difficult to deidentify individuals in this dataset? Wikipedia presents no domain challenges, and so finding entities is trivial. In addition many of the terms in the documents overlap directly with the terms in the profile table. Simple techniques should provide robust deidentification.
+
+We test this with two deidentification techniques: (1) Named entity removes all words in documents that are tagged as named entities. (2) Lexical removes all words in the document that also overlap with the profile. To reidentify, we use an information retrieval model (BM25) and a dense neural network approach (described in Section 5).
+
+Table 1 shows the results. While IR-based ReID is able to reidentify most of the original documents, without named entities or lexical matches, documents appear to be no longer reidentifiable. However, our model is able to reidentify $80\%$ of documents, even with all entities removed. With all lexical matches with the profile removed (32% of total words), NN ReID is still able to reidentify a non-trivial number of documents.
+
+This experiment indicates that even in the WikiBio domain, there are a significant number of pseudo-identifiers that allow the system to identify documents even when almost all known matching information is removed. In this work we study methods for discovering and quantifying these identifiers.
+
+# 4 Deidentification by Inference
+
+An overview of our data and system is shown in Figure 1. Given a document $x_{1} \ldots x_{N}$ , we consider the problem of uniquely identifying the corresponding person $y$ from a set of possible options $\mathcal{V}$ . The system works in the presence of redactions defined by a latent binary mask $z_{1} \ldots z_{N}$ on each position, where setting $z_{n} = 1$ masks word $x_{n}$ .
+
+We define a reidentification model as a model of $p(y \mid x, z)$ that gives a probability to each profile in
+
+Algorithm 1 Greedy Deidentification
+
+$x, \hat{y} \gets$ input document and person
+
+$$
+z _ {j} \leftarrow 0 \text {f o r a l l} j
+$$
+
+for $i = 1$ to $N$ do
+
+$$
+j ^ {*} \leftarrow \arg \min _ {j} p (y = \hat {y} \mid x, z _ {- j}, z _ {j} = 1)
+$$
+
+$$
+z _ {j ^ {*}} \leftarrow 1
+$$
+
+if $\hat{y} \notin K$ arg $\max_y p(y \mid x, z)$ then
+
+return z
+
+$\mathcal{V}$ for a masked document. During deidentification, we assume that we have access to the true identity $\hat{y}$ of the document that we would like to hide.
+
+Our objective is to find the minimally sized mask that ensures that $\hat{y}$ is not in the top- $K$ predictions of the identification model:
+
+$$
+\min _ {z _ {1} \dots z _ {N}} \qquad | z |
+$$
+
+s.t. $\hat{y}\notin K\arg \max_{y}p(y\mid x,z).$
+
+This objective is motivated by the concept of $K$ -Anonymity (Samarati and Sweeney, 1998). A dataset has $K$ -anonymity if each person $\hat{y}$ in the dataset is indistinguishable from at least $K$ other people in $\mathcal{V}$ .
+
+The $K$ -anonymity objective is combinatorial, and is intractable to solve with a non-trivial reidentification model. We instead approximate it with search. Specifically we use a simple greedy deidentification technique shown in Algorithm 1.
+
+# 5 Reidentification Model
+
+The core of this redaction system is a model of reidentification, $p(y \mid x, z)$ . Defining this model faces two challenges: a) to facilitate informed search in the presence of masks and b) to correctly identify a person from 100,000s of choices.
+
+As we do not have access to supervised masks, we define the probability of unmasked identification as marginalizing over all possible masks:
+
+$$
+p (y \mid x) = \mathbb {E} _ {z \sim p (z \mid x)} p (y \mid x, z; \theta)
+$$
+
+where $p(z\mid x)$ is the mask prior and $p(y\mid x,z;\theta)$ is the reidentification model.
+
+To assign a prior over masks $p(z \mid x)$ , we opt for a simple setting that avoids building in additional information and fits well with deidentification search. One possibility would be to follow BERT-style masking and mask words at a fixed ratio of $15\%$ (Devlin et al., 2019). However, Liao
+
+et al. (2020) argue that while successful for classification, fixed-ratio masking works poorly for generation-style objectives. Following this advice, we use the following algorithm to construct masks of varying size:
+
+- Sample the number of masks $l \sim \operatorname{Uni}(0, N)$ .
+- Sample $l$ masked words $z_{m}$ by uniformly sampling indices $m$ from $\{1, \ldots, M\}$ without replacement.
+
+For the reidentification model, $p(y|x,z;\theta)$ , we follow the dense retrieval literature and use an embedding-based model (Karpukhin et al., 2020). Specifically we use an (asymmetric) bi-encoder model on documents and profiles. The document encoder $f$ computes an embedding of the masked document, and the profile encoder $g(y)$ produces an embedding of the profile table corresponding to person $y$ . We score the match by computing the joint encoding $f(x,z)^{\top}g(y)$ using the dot product between the vectors outputted by two neural networks. Define the matrix of profile embeddings as $\mathbf{G} = [g(y_1);\dots ;g(y_{|\mathcal{V}|})]$ . The reidentification probability is defined as
+
+$$
+p (y = i \mid x, z) = \operatorname {s o f t m a x} (f (x, z) ^ {\top} \mathbf {G}) _ {i}.
+$$
+
+During training we utilize label smoothing on the distribution, which has also been shown to be useful when training for inference in an argmax setting (Müller et al., 2019).
+
+To train the model we optimize a lower bound on the identification log-likelihood:
+
+$$
+\log p (y | x) \geq \mathbb {E} _ {z \sim p (z | x)} [ \log p (y | x, z) ]
+$$
+
+Specifically we sample a word dropout mask $z$ for each element $x$ from the prior, and then mask words during reidentification training.
+
+Note that for training we compute the full distribution and do not use a contrastive approximation. In order to learn the parameters of $g$ we utilize coordinate ascent. Specifically we fix $\mathbf{G}$ and optimize the parameters of $f$ . We then switch and optimize the profile encoder $g$ on odd-numbered epochs to predict documents in $\mathcal{X}$ (with no masking), and then recompute $\mathbf{G}$ .
+
+# 6 Experimental Setup
+
+Models We call our main deidentification model NN DeID. We consider several different parameterization variants of the dual encoder. For the
+
+
Method
Privacy Ensemble ReID
% Masked
Utility Info. Loss (%)
(No reduction)
99.6
0.0
0.0
Lexical reduction
31.9
32.1
20.8
Named entity reduction
79.7
27.3
27.3
< 25% Reidentifiable
IDF
24.2
58.5
66.3
IDF (Table-Aware)
21.2
29.6
29.0
NN DeID
22.8
24.2
20.2
< 5% Reidentifiable
IDF
3.8
71.4
78.6
IDF (Table-Aware)
5.0
67.3
70.4
NN DeID
4.4
35.9
29.5
< 1% Reidentifiable
IDF
0.0
82.2
81.1
IDF (Table-Aware)
0.1
74.7
80.2
NN DeID
0.0
43.5
40.0
+
+Table 2: Statistics comparing sets of 1000 documents redacted using different methods at various levels of identifiability. Reidentification rate measures the rate at at least one model in our neural-network ensemble can retrieve the correct profile for a redacted document. Information loss is measured as the percentage change in the size of the text when compressed.
+
+document encoder $(f(x,z))$ , we consider two different pretrained language models: RoBERTa-base version (Liu et al., 2019) (125M parameters) and PMLM (Liao et al., 2020) (125M parameters), a pretrained encoder specifically designed to support masking-style inference. For the table encoder $(g(z))$ , we consider: RoBERTa-base (Liu et al., 2019) (125M parameters) with a simple linearized version of the profile, and TAPAS base (Herzig et al., 2020) (111 million parameters), a model designed to handle table input. We randomly compute masks online during training, so documents take a new randomly-reduced form on each epoch. All models are implemented in Hugging Face transformers library (Wolf et al., 2020). Each model is trained for sixty epochs, about two days on a single NVIDIA RTX A6000 GPU. More training details are available in A.
+
+We experiment with all combinations for reidentification models, specified by document-profile encoders, RoBERTa-RoBERTa (RR), RoBERTa-TAPAS (RT), PMLM-RoBERTa (PR), PMLMTAPAS (PT). The PT model is the default for NN DeID.
+
+Baselines We consider several unsupervised redaction baselines based on lexical matches with
+
+the table and word frequencies. Lexical removes all overlapping words that appear in the profile from the document. IDF (Table-Aware) masks all overlapping words that appear in the profile from the document, then masks in order of descending Inverse Document Frequency (IDF) (rarest word first) until a fixed threshold. We compute IDF based on the full corpus of documents and profiles from the train, validation, and test sets. Named entity removes all named entities from the document.2
+
+Metrics A major challenge is how to evaluate text privacy in the presence of a strong reidentification models. As shown in Section 3, information retrieval metrics work well for lightly redacted documents, but fail under heavy masking. We ran preliminary experiments with human subjects, but found that even at seemingly low levels of masking, documents were nearly impossible for humans to reidentify.
+
+Inspired by work on adversarial privacy such as the NeurIPS Hide-and-Seek challenge (Jordon
+
+
+Figure 2: Pareto curves comparing deidentification approaches on privacy versus words masked. Lexical and Named Entity baselines are fixed values. NN DeID is computed with different $K$ -anonymity values. IDF (table-aware) and IDF remove until a fixed IDF cutoff threshold.
+
+et al., 2021), we adapt a metric that utilizes an ensemble of reidentification models $\mathcal{R}$ as a benchmark. A masked document, $x,z$ , is considered reidentified if any of the models can correctly select its profile, i.e. $\hat{y} = \arg \max_y p_r(y\mid x,z)$ for any model $r\in \mathcal{R}$ . In order to diversify the ensemble we utilize different pretrained neural models as discussed above. We observe that each model can reidentify others with high accuracy indicating diversity features (more discussion in Section 8.3). We also include a word-matching based IR model in the ensemble, but find that it is not competitive at reidentification. Explicitly, the ensemble consists of the three variant parameterizations (RR, PR, RT) as well as the IR matching model.
+
+As a metric of utility, we compute the average percentage of words masked, as well as the information loss percentage, computed as the ratio between the size of the original and redacted texts when compressed. For each method and baseline we sweep over mask sizes to compute a curve of reidentifiability and utility.
+
+Inference We generate redactions from the re-identification models using greedy search to find the word to mask that causes the maximum decrease in the correct prediction. We use search implementations from the TextAttack library (Morris et al., 2020). Search takes in a stopping parameter $K$ which indicates the rank cutoff of $\hat{y}$ to stop search, $\hat{y} \notin K$ arg $\max_y p(y|x,z)$ . We run with different values of $K$ to sweep over levels of privacy, and
+
+
Model
Masked
0%
30%
Baseline
52.9
10.6
+ Word dropout
55.4
20.5
Dropout by IDF-weighting
48.6
20.2
+ Label smoothing (α = 0.1)
56.3
10.8
+ Bigger emb. (768 → 3072)
61.8
22.2
+ Table encoder optimization
98.1
14.0
+ Combined
96.4
38.3
+
+Table 3: Ablation study. Effect of different factors on model ReID accuracy across data with different reduction strategies. Experiments are on RT parameterization and use 1/10 training data and number of profiles.
+
+generate redactions with different masking rates. We ignore stopwords to speed up the search since they will rarely be identifiers.
+
+# 7 Results
+
+Table 2 presents results comparing unsupervised deidentification techniques on privacy and utility under the ensemble reidentification metric. As noted above, we see that neither Lexical nor Named Entity redaction provide sufficient privacy. NN DeID can provide better privacy while masking fewer words. Both NN DeID and IDF based approaches can reach stronger levels of privacy $(< 5\%)$ reidentifiability), but at these levels IDF masks most of the remaining words. At full deidentification under the ensemble, NN DeID masks less than half of the words. When we consider an information loss measure of utility, NN DeID also performs much better than IDF-based deidentification.
+
+Figure 2 expands on these results by showing the Pareto curves for privacy and utility across methods. Curves are obtained by varying the $K$ value used in NN DeID and the threshold for IDF based deidentification. Curves show that in addition to achieving better utility at very low rates of identifiability, the method also achieves better utility than lexical matching, and a steeper privacy curve even at lower levels of redaction.
+
+Model ablations Table 3 shows an ablation study of the components added to the model to improve accuracy. An alternative approach to this task is to finetune a pretrained model directly for the reidentification task (baseline). However, we found that out-of-the-box this model was neither effective as a
+
+
+Figure 3: Rank comparison of the true document $(\hat{y})$ in two different parameterized models of $p(y \mid x, z)$ (RT and PT). Mask $z$ comes from a deidentification $(K = 8)$ on the PT model. While correlated, the two parameterizations produce very different rankings.
+
+pure reidentification model nor as a model to guide search. We ablate each component added to NN DeID independently utilizing 1/10th of the training data and profiles, and compare both on the original documents and on documents with $30\%$ of the words masked. Word dropout with the proposed sampling rate improves model accuracy particularly in the high mask regime. Interestingly weighting word dropout frequency using IDF hurts model accuracy in the full regime, and is not included in the final model. Increasing the dual encoder embedding sizes from 768 to 3072 and adding label smoothing both increase model accuracy. Finally, using coordinate ascent to optimize the profile encoder in addition to the document encoder has by far the largest impact on model accuracy. The combination of these approaches gives a deidentification model that is accurate across levels of masking.
+
+# 8 Analysis
+
+# 8.1 Quasi-Identifiers in Redacted Examples
+
+Table 4 shows examples of redacted documents. While the most common redacted entities in deidentified examples are names, dates, and locations, we find notable examples of redacted quasi-identifiers:
+
+- Determiners. Determiners can provide useful information in context. In the first example, the system removes "American" before musician, but also the word "an" which, in this
+
+
+Figure 4: Percentage of words by part-of-speech tags that are masked by the IDF model and NN ReID model at $K = 8$ (similar masking level).
+
+context, signals the next word may be "American". This example is also interesting in that it preserves the word "Collective", leading the model to predict a musician Avey Tare from the band "Animal Collective".
+
+- Gender markers. The model often redacts words marking gender in order to anonymize documents. In the second example, for the document on Madoko Hisagae, the model removes both the word "She" and "women's". This redaction leads to the prediction of Hi-roki Ichigatani as the predicted match, a male Olympic fencing.
+
+- Locations. The pretrained model seems to be able to identify relative locations even if they are not represented directly in the profile. In the third example, the profile indicates that Tim Tolkien is an English sculptor. The word "English" is masked immediately, but the location "Bradley Heath, West Midlands" is a quasi-identifier as to the country. Upon redacting this term, the model switches its prediction to Nesbert Mukomberanwa, a sculptor from Zimbabwe.
+
+# 8.2 Redacted Word Types
+
+The IDF (table-aware) model relies on overlapping words and rare words to redact content, whereas the NN DeID model can in theory remove any identifying word. Figure 4 compares the part-of-speech tags of the masked words between the two models at the same redaction level. We see that while similar, the NN DeID model masks fewer nouns, proper nouns and numbers, and more adjectives
+
+Model prediction: Dean Roland (99%)
+
+Dean Roland (born October 10, 1972) is an American musician. He is best known for being the rhythm guitarist of the band Collective Soul, an alternative rock band fronted by his older brother Ed. He is also part of the rock duo Magnets & Ghosts alongside Ryan Potesta.
+
+Model prediction: Madoko Hisagae (100%)
+
+Madoka Hisagae (born 11 January 1979) is a Japanese fencer. She competed in the women's individual sabre events at the 2006 and 2008 Summer Olympics.
+
+Model prediction: Tim Tolkien (100%)
+
+Tim Tolkien (born September 1962) is an English sculptor who has designed several monumental sculptures, including the award - winning Sentinel. He has a wood carving and metal sculpture business at Cradley Heath, West Midlands.
+
+Model prediction: Lee Harding (writer) $(97\%)$
+
+Lee John Harding (born 19 February 1937) is an Australian freelance photographer, who became a writer of science fiction novels and short stories.
+
+Model prediction: Begziin Yavuukhulan (100%)
+
+Begziin Yavuukhulan (1929 - 1982) was a Mongolian poet of the communist era that wrote in Mongolian and Russian.
+
+Model prediction: Bob Whiting $(91\%)$
+
+Robert "Bob" Whiting (6 January 1883 - 1917) was an English footballer who played in the football league for Chelsea. Whiting died in France whilst fighting in World War I. He is commemorated at the Arras Memorial.
+
+Model prediction: Ronald Jonker $(99\%)$
+
+Ronald Jonker (born 14 December 1944) is a former Australian cyclist. He competed in the individual road race at the 1968 Summer Olympics.
+
+Model prediction: Brad Turner $(93\%)$
+
+Brad Turner is a Canadian film director, television director, and photographer.
+
+Model prediction: Julie Roginsky $(93\%)$
+
+Julie Roginsky (born April 25, 1973) is a Democratic Party strategist and television personality. She is a contributor with the Fox news channel and a co-host of The Five. (...)
+
+Model prediction: Avey Tare $(48\%)$
+
+( born ) is musician. He is best known for being the of the Collective an fronted by his He is also part of the duo Magnets & Ghosts alongside
+
+Model prediction: Hiroki Ichigatani (5%)
+
+(2013) (born 11 January 2014) is a fencer. competed in the individual sabre events at the 2006 and 2008 Summer Olympics.
+
+Model prediction: Nesbert Mukomberanwa $(6\%)$
+
+(born) is an sculptor who has designed several monumental sculptures, including award - winning . he has a wood carving and metal business at Heath, West
+
+Model prediction: Alan Burridge (writer) $(9\%)$
+
+(born 19) is an freelance, who became a writer of fiction novels and short stories.
+
+Model prediction: Tarzi Afshar (25%)
+
+(1929 - ) was a poet of the era that wrote in and Russian.
+
+Model prediction: Bob McDonald (9%)
+
+Robert "Bob" ( ) was an English footballer who played in the football league for died in France whilst fighting in World War He is commemorated at the Arras Memorial.
+
+Model prediction: Peter McDermott (94%)
+
+(born December 1944) is a former Australian cyclist. He competed in the individual road race at the 1968 Summer Olympics.
+
+Model prediction: Andrew Dosunmu (10%)
+
+is a Canadian film director, television director, and photographer.
+
+Model prediction: Ann Curry (11%)
+
+(25. born) is a Democratic Party and television personality. She is a contributor with the Fox news channel and a co-host of The Five. (.)
+
+Table 4: Example redactions from the system.
+
+and pronouns. These word classes are less likely to fit the IDF or table-matching criterion.
+
+# 8.3 Model Diversity
+
+The ensemble used for deidentification contains three separate pretrained encoder variants. One potential issue is that the model used to deidentify the text may be overly correlated with the ensemble models used for evaluation. However, we find that each model is quite strong on reidentifying redactions made by other models. For example, the RR model can reidentify NN DeID (PT, $\mathrm{K} = 1$ ) with a surprisingly high $60.5\%$ accuracy. In general we find the model rankings are quite different.
+
+Figure 3 demonstrates this phenomenon. In this figure, examples are deidentified to $K = 8$ with a PT parameterization, and we plot a rank-rank joint histogram with an RT parameterization. While there is some correlation in the rankings, the two models produce very different rankings, with RT even fully reidentifying some points.
+
+# 8.4 Reidentification at high levels of masking
+
+Table 5 shows examples of documents where our reidentification ensemble can correctly identify the
+
+individual even at extremely high levels of masking. Examples are randomly generated with a minimum of $95\%$ of words masked. Because we permit punctuation in redacted examples, and we mask but do not erase words, models are able to exploit word counting and punctuation-specific features to identify individuals under very high masking rates.
+
+# 9 Conclusion
+
+We propose an unsupervised method for text deidentification that focuses on deidentifying pseudoidenifiers. The method first learns to reidentify from text utilizing a prior masking models. We then utilize search to find a mask to ensure K-anonymity in this model. This approach outperforms masking based on named entities and matching with tabular data, both of which fail to fully anonymize the document. Using an ensemble of reidentification models as a metric, we show that our approach can reach high levels of privacy with moderate levels of redaction. In future work we plan to utilize this approach in conjugation with downstream tasks in order to further demonstrate the utility of the redacted data. We also plan to compare and evaluate with domain-specific approaches for distribut
+
+Model prediction: J.G. Blackman (99%)
+
+J. G. Blackman was a West Indian cricketUMPire. He stood in one test match, West Indies vs. England, in 1935.
+
+Model prediction: Nadezhda Shitikova $(99\%)$
+
+Nadezhda Shitikova ( ; 15 September 1923 – 1995 ) was a Soviet fencer. She competed in the women's individual foil event at the 1952 and 1956 Summer Olympics.
+
+Model prediction: Begziin Yavuukhulan $(98\%)$
+
+Begziin Yavuukhulan (, 1929-1982) was a Mongolian poet of the communist era that wrote in Mongolian and Russian.
+
+Model prediction: Sally Raguib (100%)
+
+Sally Raguib (born 8 September 1996) is a Djiboutian Judoka. She competed in the women's $57\mathrm{kg}$ event at the 2012 Summer Olympics.
+
+Model prediction: J.G. Blackman (28%)
+
+
+
+Model prediction: Nadezhda Shitikova $(11\%)$
+
+
+
+Model prediction: Begziin Yavuukhulan $(9\%)$
+
+
+
+Model prediction: Sally Raguib $(31\%)$
+
+
+
+Table 5: Examples of redactions where our neural ensemble can correctly reidentify the individual at extremely high levels of document masking, even though the documents were never seen during training.
+
+ing redacted models through manual and automatic redaction.
+
+# 10 Limitations
+
+Issues with Wikipedia. Many Wikipedia biographical articles within a given category follow a similar syntactic template, so it is possible that a model could learn to partially reidentify a person by looking at superficial features of the article structure. In the future, documents could be paraphrased during training to prevent the model from learning such syntactic idiosyncrasies. Additionally, since RoBERTa and TAPAS's pre-training data both include Wikipedia articles (Liu et al., 2019; Herzig et al., 2020) it is possible that the models can "cheat" on the test set by recalling data that they memorized during their pre-training. We hypothesize that cheating is unlikely to be happening for two reasons. First, articles in Wikibio make up a small percentage of the models' training data, so very little of their information is probably stored in the pre-trained weights of the models. Second, the models' performance on the test set before training is very low (0% test accuracy). Finally, Wikibio contains articles about a very small and biased subset of humanity (Yuan et al., 2021).
+
+Need for a profile. Although the method we propose does not require any labeled data, it requires a different new data source in the form of profiles. This means that the information deidentified is limited to what can be captured in the profile. Thus, the work of adapting this to a new domain shifts from collecting human-labeled PII annotations to collecting as much personal information as possible into profiles. This is much easier in domains like medicine where a great deal of personal infor
+
+mation is known about each patient, but collecting such profiles may not be possible in every scenario.
+
+Number of words as a quasi-identifier This work focuses on redacting data by replacing words with masks. One unaddressed issue in this work is the fact that even when masked, the presence of a word can still leak information. Consider the following example: "Jack Leswick (January 1, 1910 - August 4, 1934) was a Canadian ice hockey centre for the .". Leswick's team, the Chicago Black Hawks, is one of 11 of 32 National Hockey League teams with three words in their name. An adversary can eliminate the possibility that Leswick played for any of the 20 two-name teams. Future work can consider the possibility of deleting words entirely or joining multiple masked words into a single mask token to provide additional privacy.
+
+Hiding in the crowd. K-anonymity exists when an individual cannot be distinguished from $K - 1$ other individuals in the dataset. This means that for a given individual, all anonymity guarantees in our setting are with respect to the other individuals in the dataset. Therefore, the same document could be deidentified differently depending on which other profiles there are in the dataset, even without any changes to the document itself.
+
+# 11 Ethical Considerations
+
+This paper targets deidentification, a technique which has been used to democratize access to sensitive data in business, law, and healthcare. However, this paper also discusses the topic of reidentification, and raises issues about how models that identify individuals from seemingly-anonymized data may be used in a negative manner. Reiden
+
+tification models may be used as part of linkage attacks, where individuals can be pinpointed even from seemingly anonymized data. Additionally, the world knowledge of today's large language models may be well-suited for this type of linkage attack. We observed this behavior empirically, when our models were uncannily able to reidentify individuals within a dataset of 720,000 identities, even from documents that appeared to have no remaining personal information.
+
+We plan to release our models for deidentifying documents from Wikibio to the general public. We are open to hearing from users how our technology impacts both their lives and the lives of others, positively or negatively. If we receive any reports of misuse of our technology, we will mitigate accordingly.
+
+# 12 Acknowledgments
+
+AR and JC are supported by NSF CAREER 2037519, NSF 1704834, and a Sloan Fellowship. RZ is supported by a gift from the Simons foundation. JM is supported by Weill Cornell Medicine. Thanks to Dr. Curtis Cole and Dr. Thomas Campion from Weill Cornell Medicine for their general influence on our research direction within the area of deidentification.
+
+# References
+
+Olivia Angiuli, Joe Blitzstein, and Jim Waldo. 2015. How to de-identify your data. Communications of the ACM, 58(12):48-55.
+Centers for Medicare & Medicaid Services. 1996. The Health Insurance Portability and Accountability Act of 1996 (HIPAA). Online at http://www.cms.hhs.gov/hipaa/.
+Hanjie Chen and Yangfeng Ji. 2020. Learning variational word masks to improve the interpretability of neural text classifiers. ArXiv, abs/2010.00667.
+Kieran Chin-Cheong, Thomas M. Sutter, and Julia E. Vogt. 2019. Generation of heterogeneous synthetic electronic health records using gans. In NeurIPS 2019.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.
+
+Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. Calibrating noise to sensitivity in private data analysis. In *Theory of Cryptography*, pages 265–284, Berlin, Heidelberg. Springer Berlin Heidelberg.
+Khaled El Emam, Fida Kamal Dankar, Romeo Issa, Elizabeth Jonker, Daniel Amyot, Elise Cogo, Jean-Pierre Corriveau, Mark Walker, Sadrul Chowdhury, Regis Vaillancourt, Tyson Roffey, and Jim Bottomley. 2009. A globally optimal k-anonymity method for the de-identification of health data. Journal of the American Medical Informatics Association: JAMIA, 16(5):670-682.
+William Falcon, The PyTorch Lightning team, et al. 2019. PyTorch Lightning.
+Max Friedrich, Arne Kohn, Gregor Wiedemann, and Chris Biemann. 2019. Adversarial learning of privacy-preserving text representations for deidentification of medical records. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5829-5839, Florence, Italy. Association for Computational Linguistics.
+James Gardner and Li Xiong. 2009. An integrated framework for de-identifying unstructured medical data. Data & Knowledge Engineering, 68(12):1441-1451. Including Special Section: 21st IEEE International Symposium on Computer-Based Medical Systems (IEEE CBMS 2008) - Seven selected and extended papers on Biomedical Data Mining.
+Aayush Gupta, Ayush Jaiswal, Yue Wu, Vivek Yadav, and Pradeep Natarajan. 2021. Adversarial mask generation for preserving visual privacy. In 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), page 1-5.
+Jonathan Herzig, Pawel Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Martin Eisenschlos. 2020. Tapas: Weakly supervised table parsing via pre-training. ArXiv, abs/2004.02349.
+Håkon Hukkelås, Rudolf Mester, and Frank Lindseth. 2019. Deeppravity: A generative adversarial network for face anonymization.
+Abhik Jana and Chris Biemann. 2021. An investigation towards differentially private sequence tagging in a federated framework. In Proceedings of the Third Workshop on Privacy in Natural Language Processing, pages 30-35, Online. Association for Computational Linguistics.
+Alistair E. W. Johnson, Lucas Bulgarelli, and Tom J. Pollard. 2020. Deidentification of free-text medical records using pre-trained bidirectional transformers. In Proceedings of the ACM Conference on Health, Inference, and Learning, page 214-221, Toronto Ontario Canada. ACM.
+Alistair E. W. Johnson, Tom J. Pollard, Lu Shen, Liwei H. Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G. Mark. 2016. Mimic-iii,
+
+a freely accessible critical care database. Scientific Data, 3(1):160035.
+James Jordan, Daniel Jarrett, Evgeny Saveliev, Jinsung Yoon, Paul Elbers, Patrick Thoral, Ari Ercole, Cheng Zhang, Danielle Belgrave, and Mihaela van der Schaar. 2021. Hide-and-seek privacy challenge: Synthetic data generation vs. patient re-identification. In Proceedings of the NeurIPS 2020 Competition and Demonstration Track, volume 133 of Proceedings of Machine Learning Research, pages 206-215. PMLR.
+Vladimir Karpukhin, Barlas Oguz, Sewon Min, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. CoRR, abs/2004.04906.
+R. Lebret, D. Grangier, and M. Auli. 2016. Neural Text Generation from Structured Data with Application to the Biography Domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP).
+Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Understanding neural networks through representation erasure.
+Xuechen Li, Florian Tramér, Percy Liang, and Tatsunori Hashimoto. 2021. Large language models can be strong differentially private learners.
+Yi Liao, Xin Jiang, and Qun Liu. 2020. Probabilistically masked language model capable of autoregressive generation in arbitrary word order. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 263-274, Online. Association for Computational Linguistics.
+Pierre Lison, Ildiko Pilán, David Sanchez, Montserrat Batet, and Lilja Øvrelid. 2021. Anonymisation models for text data: State of the art, challenges and future directions. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4188-4203, Online. Association for Computational Linguistics.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. ArXiv, abs/1907.11692.
+Zengjian Liu, Buzhou Tang, Xiaolong Wang, and Qingcai Chen. 2017. De-identification of clinical notes via recurrent neural network and conditional random field. Journal of Biomedical Informatics, 75:S34-S42. Supplement: A Natural Language Processing Challenge for Clinical Records: Research Domains Criteria (RDoC) for Psychiatry.
+Huda O. Mansour, Maheyzah M. Siraj, Fuad A. Ghaleb, Faisal Saeed, Eman H. Alkhammash, and Mohd A. Maarof. 2021. Quasi-identifier recognition algorithm for privacy preservation of cloud data based on risk
+
+reidentification. Wireless Communications and Mobile Computing, 2021:e7154705.
+Maxim Maximov, Ismail Elezi, and Laura Leal-Taixe. 2020. CIAGAN: conditional identity anonymization generative adversarial networks. CoRR, abs/2005.09544.
+Stephane Meystre, F Friedlin, Brett South, Shuying Shen, and Matthew Samore. 2010. Automatic de-identification of textual documents in the electronic health record: A review of recent research. BMC medical research methodology, 10:70.
+John Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in nlp. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 119-126.
+Rafael Müller, Simon Kornblith, and Geoffrey E Hinton. 2019. When does label smoothing help? Advances in neural information processing systems, 32.
+Ishna Neamatullah, M. Douglass, Li wei H. Lehman, Andrew T. Reisner, Mauricio Villarroel, William J. Long, Peter Szolovits, George B. Moody, Roger G. Mark, and Gari D. Clifford. 2008. Automated de-identification of free-text medical records. BMC Medical Informatics and Decision Making, 8:32 - 32.
+Beau Norgeot, Kathleen Muenzen, Thomas A. Peterson, Xuancheng Fan, Benjamin S. Glicksberg, Gundolf Schenk, Eugenia Rutenberg, Boris Oskotsky, Marina Sirota, Jinoos Yazdany, Gabriela Schmajuk, Dana Ludwig, Theodore Goldstein, and Atul J. Butte. 2020. Protected health information filter (philter): accurately and securely de-identifying free-text clinical notes. npj Digital Medicine, 3(1):1-8.
+Mark Phillips and Bartha M. Knoppers. 2016. The discombobulation of de-identification. Nature Biotechnology, 34(11):1102-1103.
+Ildiko Pilán, Pierre Lison, Lilja Øvrelid, Anthi Papadopoulou, David Sánchez, and Montserrat Batet. 2022. The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization.
+Pierangela Samarati and Latanya Sweeney. 1998. Protecting Privacy when Disclosing Information: k-Anonymity and its Enforcement through Generalization and Suppression. Technical report, SRI International.
+Russia Sberbank, Moscow and Yaroslav Emelyanov. 2021. Towards task-agnostic privacy- and utility-preserving models. In Proceedings of the Conference Recent Advances in Natural Language Processing - Deep Learning for Natural Language Processing Methods and Applications, page 394-401. INCOMA Ltd. Shoumen, BULGARIA.
+
+Martin Scaiano, Grant Middleton, Luk Arbuckle, Varada Kolhatkar, Liam Peyton, Moira Dowling, Debbie S. Gipson, and Khaled El Emam. 2016. A unified framework for evaluating the risk of re-identification of text de-identification tools. Journal of Biomedical Informatics, 63:174-183.
+David Sánchez, Montserrat Batet, and Alexandre Viejo. 2014. Utility-preserving privacy protection of textual healthcare documents. Journal of Biomedical Informatics, 52:189-198. Special Section: Methods in Clinical Research Informatics.
+Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.
+Özlem Uzuner, Yuan Luo, and Peter Szolovits. 2007. Evaluating the State-of-the-Art in Automatic De-identification. Journal of the American Medical Informatics Association, 14(5):550-563.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+Ann Yuan, Daphne Ippolito, Vitaly Nikolaev, Chris Callison-Burch, Andy Coenen, and Sebastian Gehrmann. 2021. Synthbio: A case study in humanai collaborative curation of text datasets. ArXiv, abs/2111.06467.
+Xiang Yue and Shuang Zhou. 2020. PHICON: improving generalization of clinical text de-identification models via data augmentation. CoRR, abs/2010.05143.
+
+# A Training details
+
+We train all models using the Adam optimizer with an initial learning rate of $5 \times 10^{-5}$ or $1 \times 10^{-4}$ . We clip gradients to a maximum norm of 5.0. We decrease the learning rate by a factor of 0.5 whenever performance on a set of held-out redacted validation examples decreases. We train on the full dataset for a duration of 60 epochs, but stop early if the learning rate does not increase for 5 epochs. We implement training using the PyTorch lightning library (Falcon et al., 2019). We use PMLM-a, the version of PMLM with absolute positional embeddings (Liao et al., 2020). All encoders have a
+
+maximum sequence length set to 128 throughout all experiments. We truncate tables by dropping columns until the encoded table fits the maximum sequence length.
+
+We use linear learning rate scheduling. For models with the RoBERTa document encoder, we decay the learning rate from a starting point of $1e - 5$ to $1e - 6$ over the epochs. For models with PMLM as the document encoder, we start the learning rate at $5e - 5$ . This is because we found empirically the PMLM tended to converge better when started at a lower learning rate. For the first two epochs (one for each encoder), we employ linear warmup, and increase the learning rate from 0 to its true initial value. We find that training the profile encoder is not useful after a handful of epochs, as the profile encoder starts to overfit, and our compute is better spent training the document encoder, which learns much more slowly since its inputs are $50\%$ . redacted on average. Thus, after the first 10 epochs (5 of which are spent training the profile encoder), we only train the document encoder.
+
+# B Search methods ablation
+
+Our deidentification method redacts words by greedily selecting the word that minimizes the performance with respect to a reidentification model. We also tested using beam search to select words to redact, and found that it did not improve performance. At $k = 1$ , beam search with beam width 4 masked $14.96\%$ of words at $78.1\%$ reidentification rate, while greedy masks $15.46\%$ at $78.9\%$ reidentification rate, while being 3.39 times faster.
\ No newline at end of file
diff --git a/unsupervisedtextdeidentification/images.zip b/unsupervisedtextdeidentification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..8e61e827e87f08c46559a67834ab1633e15f52e8
--- /dev/null
+++ b/unsupervisedtextdeidentification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee62831b2b1b8a9a3df1cd77a73b72082072bc0155f07507ccb44f26a82b9610
+size 333268
diff --git a/unsupervisedtextdeidentification/layout.json b/unsupervisedtextdeidentification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..5d883750d101ba9146c8be299e8de357afd438cf
--- /dev/null
+++ b/unsupervisedtextdeidentification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5b2b554befcf002167d09197f4aec9777784de2cea9a87c4aed48e20aa1ef0ee
+size 438880
diff --git a/usingdeveloperdiscussionstoguidefixingbugsinsoftware/9a251dd1-b49b-4498-8d85-12505347d850_content_list.json b/usingdeveloperdiscussionstoguidefixingbugsinsoftware/9a251dd1-b49b-4498-8d85-12505347d850_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..6e51a69a93d64d6c12d619b951940c30134fa03d
--- /dev/null
+++ b/usingdeveloperdiscussionstoguidefixingbugsinsoftware/9a251dd1-b49b-4498-8d85-12505347d850_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05c6695d050fd34d41d649de825167c4329d3993dd78043428a930a90647130c
+size 71010
diff --git a/usingdeveloperdiscussionstoguidefixingbugsinsoftware/9a251dd1-b49b-4498-8d85-12505347d850_model.json b/usingdeveloperdiscussionstoguidefixingbugsinsoftware/9a251dd1-b49b-4498-8d85-12505347d850_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..3c2a7195a55aeb2ec6d7aa084eb200d02b3bc237
--- /dev/null
+++ b/usingdeveloperdiscussionstoguidefixingbugsinsoftware/9a251dd1-b49b-4498-8d85-12505347d850_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4c3dae52b8152bc939629362b096537a5641470d50ffc4a5313d0eafe5ec0866
+size 81596
diff --git a/usingdeveloperdiscussionstoguidefixingbugsinsoftware/9a251dd1-b49b-4498-8d85-12505347d850_origin.pdf b/usingdeveloperdiscussionstoguidefixingbugsinsoftware/9a251dd1-b49b-4498-8d85-12505347d850_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ccc61a6e99684ed7715d9a93df00855dc193d033
--- /dev/null
+++ b/usingdeveloperdiscussionstoguidefixingbugsinsoftware/9a251dd1-b49b-4498-8d85-12505347d850_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5f7a8ad295567e8839c8999ed95f3f643812190abc092457bb19a70329c40da5
+size 422920
diff --git a/usingdeveloperdiscussionstoguidefixingbugsinsoftware/full.md b/usingdeveloperdiscussionstoguidefixingbugsinsoftware/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b7090594aa152964ac52899163c871167a44b51b
--- /dev/null
+++ b/usingdeveloperdiscussionstoguidefixingbugsinsoftware/full.md
@@ -0,0 +1,269 @@
+# Using Developer Discussions to Guide Fixing Bugs in Software
+
+Sheena Panthaplackel1, Milos Gligoric2, Junyi Jessy Li3, Raymond J. Mooney1
+
+$^{1}$ Department of Computer Science
+
+$^{2}$ Department of Electrical and Computer Engineering
+
+$^{3}$ Department of Linguistics
+
+The University of Texas at Austin
+
+spantha@cs.utexas.edu, gligoric@utexas.edu
+
+jessy@austin.utexas.edu, mooney@cs.utexas.edu
+
+# Abstract
+
+Automatically fixing software bugs is a challenging task. While recent work showed that natural language context is useful in guiding bug-fixing models, the approach required prompting developers to provide this context, which was simulated through commit messages written after the bug-fixing code changes were made. We instead propose using bug report discussions, which are available before the task is performed and are also naturally occurring, avoiding the need for any additional information from developers. For this, we augment standard bug-fixing datasets with bug report discussions. Using these newly compiled datasets, we demonstrate that various forms of natural language context derived from such discussions can aid bug-fixing, even leading to improved performance over using commit messages corresponding to the oracle bug-fixing commits.
+
+# 1 Introduction
+
+Software defects, or bugs, arise for a number of reasons, including missing or changing specifications, programming errors, poor documentation, and overall complexity (Rodriguez-Perez et al., 2020). Due to the extensive developer time and effort needed to fix bugs (Weiss et al., 2007), there is growing interest in automated bug fixing (Tufano et al., 2019; Chen et al., 2019; Lutellier et al., 2020; Mashhadi and Hemmati, 2021; Allamanis et al., 2021; Chakraborty and Ray, 2021).
+
+Most of these approaches only consider the buggy code snippet when generating the fix. However, with such limited context, this is extremely challenging. For instance, in Figure 1a, generating the fixed code requires removing .append("\n"), but this is not obvious from inspecting the buggy code alone. To address this, Chakraborty and Ray (2021) proposed prompting developers for a natural language description of intent (e.g., "Removed trailing newlines...") that can guide a model in performing the task. As a proxy, in their study,
+
+
+
+Oracle Commit Message: Removed trailing newlines from error messages. Fixes https://github.com/mwanji/toml4j/issues/18
+
+(a) Buggy and fixed code snippets in emptyImplicitTable method with commit message for the oracle bug-fixing commit
+
+Title: Parsing exception messages contain trailing newlines
+
+# Utterance #1:
+
+Some of the parsing exceptions thrown by tom4j contains trailing newlines. This is somewhat unusual, and causes empty lines in log files when the exception messages are logged...
+
+# Utterance #2:
+
+The idea was to be able to display multiple error messages at once. However, processing stops as soon as an error is encountered, so that's not even possible. Removing the newlines shouldn't be a problem, then.
+
+Solution Description: remove trailing newlines from toml4j log messages
+
+(b) Bug report discussion and generated solution description
+
+Figure 1: Bug-fixing patch from the toml4j project, with context from the corresponding bug report discussion.
+
+they used the commit message corresponding to the oracle commit which fixed the bug.
+
+By showing that natural language can aid bug-fixing, their study yields promising results. However, we raise two concerns with their approach. First, prompting developers for additional information can be burdensome for them, as it requires time and manual effort. Second, and more importantly, it is unrealistic to use the oracle commit message as a proxy. Since it is written after the bug is fixed to document the code changes (Tao et al., 2021), it does not accurately reflect information actually available when the task needs to be performed.
+
+In reality, there are more appropriate sources of natural language to guide fixing bugs, which are naturally occurring and available before the task is to be performed. Namely, many bugs are first reported through issue tracking systems (e.g., GitHub Issues), where developers engage in a discussion to collectively understand the problem, investigate the cause, and formulate a solution (before they are
+
+fixed) (Noyori et al., 2019; Arya et al., 2019).
+
+Content in these discussions are often relevant to generating the fix. For example, in Figure 1b, the title suggests that the bug pertains to "trailing newlines" and the last utterance of the discussion recommends "removing the newlines." Additionally, using modern techniques (Panthaplackel et al., 2022) that summarize content relevant towards implementing the solution in a bug report discussion, we can also automatically obtain a natural language description of the solution ("remove trailing newlines..."). Note that these sequences provide insight on the intent of the fix, much like the oracle commit message, without requiring any additional input or any context beyond what is naturally available.
+
+In this work, we use bug report discussions to facilitate automated bug fixing. While these discussions have been previously used to automate tasks related to bug resolution, such as localizing bugs (Koyuncu et al., 2019; Zhu et al., 2020) and assigning relevant developers (Xi et al., 2018; Baloch et al., 2020), to our knowledge, they have never been used to directly generate the bug-fixing code.
+
+We propose various input context representations, encompassing different natural language components that are tied to the discussions and likely to capture their meaningful aspects. We heuristically derive components from discussions, including the discussion as a whole, the title, and last utterance. We also derive components algorithmically through model-generated solution descriptions and the attended discussion utterances during this generation. We incorporate these representations into large sequence-to-sequence models pretrained on large amounts of source code and technical text (Ahmad et al., 2021) and then do task-specific finetuning.
+
+For training and evaluation, we mine bug report discussions from GitHub Issues and map them to subsets of Tufano et al. (2019)'s bug-fixing patches datasets. Results show that when bug report discussions are available, they lead to significant improvements in fixing bugs, even outperforming using the oracle commit message.
+
+# 2 Using Bug Report Discussions
+
+Many bugs are reported with issue tracking systems, through which a user can open a bug report and initiate a discussion with developers. The user
+
+first states the problem in the title and typically elaborates in the first utterance. Developers then join the discussion and engage in a dialogue with the user as well as other developers.
+
+These discussions isolate the problem, diagnose the cause, and prescribe potential solutions (Arya et al., 2019). Due to their technical nature, they often span more than just natural language, including system error messages and relevant code snippets (Li et al., 2018). Furthermore, they are readily available before bugs are fixed. So, we consider using these contextually rich discussions to guide the task of bug fixing. We devise various strategies for heuristically and algorithmically deriving context from these discussions.
+
+# 2.1 Heuristically Deriving Context
+
+We consider using the whole discussion, including the title and all utterances (occurring before the bug-fixing code changes are implemented). However, these discussions can be extremely long (Table 1), making them difficult for neural models to reason about and also extending beyond the input length capacities of many models (e.g., 1,024 tokens) (Ahmad et al., 2021) in some cases. For this reason, we look at more concise elements within the discussion which might convey its meaningful aspects. First, we consider the title, as it is a brief summary of the bug (Chen et al., 2020). Next, we consider the last utterance before the bug-fixing commit, since it captures the most recent information and also roughly corresponds to the point at which a developer acquired enough context about the fix to implement it (Panthaplackel et al., 2022).
+
+# 2.2 Algorithmically Deriving Context
+
+To guide developers in absorbing information relevant towards implementing the solution for a given bug report, we recently proposed generating a brief natural language description of the solution by synthesizing relevant content from within the whole bug report discussion (Panthaplackel et al., 2022).
+
+To generate these solution descriptions, we finetuned a large pretrained encoder-decoder model. For training supervision of solution descriptions, we used commit messages and pull request titles corresponding to the commits and pull requests linked to bug reports. To control for noise, we relied on a filtered training set, consisting of fewer generic and uninformative target descriptions as well as discussions without sufficient context to generate informative descriptions). We provide
+
+additional details of our approach for generating solution descriptions in Appendix A.
+
+While these solution descriptions are intended to guide humans in manually fixing bugs, we evaluate whether they can also guide models in automatically performing the task. Furthermore, since the title corresponding to the bug report discussion and the solution description summarize different aspects of the discussion, we investigate the benefits of combining the two (solution description + title).
+
+Next, the segments (title or individual utterances) from the discussion that contribute the most towards generating a natural language description of the solution are likely to also be useful towards implementing that solution (i.e., generating the fix). To approximate the most relevant discussion segments, we use attention. Namely, we examine the last layer of Panthaplackel et al. (2022)'s decoder to determine the most highly attended input token at each decoding step and the segment (title or individual utterance) to which it belongs. From this, we obtain the attended segments.
+
+# 3 Data
+
+Chakraborty and Ray (2021) relied on the commonly used bug-fixing patches (BFP) datasets (Tufano et al., 2019). This entails $BFP_{small}$ , with examples extracted from Java methods spanning fewer than 50 tokens, and $BFP_{medium}$ , with examples extracted from methods spanning 50-100 tokens. In this work, we also focus on these datasets, particularly the preprocessed versions released by Chakraborty and Ray (2021). However, since they do not include the associated bug report discussions, we enrich examples with this information.
+
+# 3.1 Mining Bug Report Discussions
+
+We mine issue reports from GitHub Issues, for the 58,597 projects that encompass examples in the BFP datasets. We obtain 1,878,096 issue reports, 365,005 of which are linked to commits made between March 2011 and October 2017 (time frame used for mining the BFP datasets). By matching these commits to the bug-fixing commits from which the BFP examples were drawn, we identify the examples that correspond to bug reports. We map 3,028 (of the 58,287) examples in $BFP_{small}$ and 3,333 (of the 65,404) examples in $BFP_{medium}$ to bug report discussions, forming the discussion-augmented bug-fixing patches (Disc-BFP) datasets: $Disc-BFP_{small}$ and $Disc-BFP_{medium}$ .
+
+
Disc-BFPsmall
Disc-BFPmed
#Ex
3,028
3,333
#Discussions/Ex
1.3
1.3
#Utterance/Discussion
2.8
2.9
#Attn Segments/Ex
1.0
1.0
Buggy
22.1
42.4
Fixed
19.3
40.8
Method
32.2
74.2
Oracle Msg
19.7
19.6
Title
7.9
8.1
Utterance
127.6
136.4
Last Utterance
114.0
109.3
Soln Desc
8.5
8.5
+
+Table 1: Disc-BFP dataset statistics. We report averages across all data splits. Average token lengths (split by punctuation and spacing) are presented in the second block. Note that we consider only utterances occurring before the bug-fixing commit.
+
+Note that $Disc$ -BFP is comparatively smaller than BFP. While constructing BFP, Tufano et al. (2019) did not consider any mining criteria related to bug reports, so it is not surprising that many of their examples do not have bug report discussions. Bugs can be identified through various development activities like code review, testing, and bug reporting. In this work, we focus on the last scenario, for which bug report discussions would naturally be available.
+
+# 3.2 Data Processing
+
+$Disc-BFP_{small}$ consists of 2,445 training, 290 validation, and 293 test examples. $Disc-BFP_{medium}$ consists of 2,660 training, 341 validation, and 332 test examples. In doing this, we maintain the original data splits (e.g., $Disc-BFP_{small}$ 's training set is strictly a subset of $BF_{small}$ 's training set).
+
+A bug report discussion is organized as a timeline, and we consider only content that precedes the bug-fixing commit on the timeline, corresponding to the naturally-available context. Since a commit can be linked to multiple issue reports, some examples have multiple bug report discussions. In these cases, we order them so that discussions with the most recent activity appear first and are less likely to get truncated due to input length constraints (as explained in the next paragraph). When leveraging individual discussion components (e.g., title, generated solution description), we derive them from each discussion separately and concatenate them (separated with $\langle \mathrm{s} \rangle$ ). We process bug report bug report discussions similar to Panthaplackel et al. (2022), and we use the processed BFP code data (buggy code and method) released by Chakraborty and Ray (2021). We present dataset statistics in Table 1.
+
+# 4 Models
+
+Chakraborty and Ray (2021) achieved state-of-the-art performance on the BFP datasets by finetuning PLBART (Ahmad et al., 2021), a large sequence-to-sequence model that was pretrained as a denoising autoencoder (Lewis et al., 2020) on large amounts of source code from GitHub and technical text from StackOverflow. Similarly, we consider finetuning PLBART to generate the fixed code given varying input context representations.
+
+# 4.1 Model Initialization
+
+Since Chakraborty and Ray (2021) finetuned using significantly more data (i.e., BFP training sets) $^{2}$ , we initialize models using their checkpoints that were finetuned with the buggy code snippet and the full method context (emptyImplicitTable in Figure 1a): buggy $< \mathrm{s}>$ method. This helps contextualize the buggy code snippet and was shown to improve performance. $^{3}$
+
+# 4.2 Our Models
+
+After initializing, we further finetune on the $Disc$ - $BFP_{small}$ and $Disc$ - $BFP_{medium}$ training sets (separately). All input context representations used for this are formed by concatenating buggy $$ method $$ with the various natural language sequences tied to bug report discussions outlined in Section 2. Sequences entailing multiple elements (e.g., utterances in the whole discussion, titles from multiple bug report discussions) are separated with $$ .
+
+Though PLBART is capable of handling up to 1,024 tokens as input, Chakraborty and Ray (2021) limit to 512 tokens. However, since the sequences we consider can be particularly long after the SentencePiece tokenization (Kudo and Richardson, 2018) employed by PLBART, we choose to utilize the full capacity during our finetuning. Note that the input is truncated by removing from the end if it exceeds the limit. We provide additional details regarding model training in Appendix E.
+
+# 4.3 Baselines
+
+We consider models which use only buggy $< s>$ method (without natural language). As points of reference, we also consider models that use the oracle commit message rather than context from
+
+
Finetune/Test Context
Disc-BFPsmall
Disc-BFPmed
Without NL*
33.8
27.1
Oracle \( \mathrm{{Msg}}^{ \dagger } \)
33.4
27.4
Whole Discussion
33.1
27.1
Title
\( 35.5^{* \dagger} \)
25.9
Last Utterance
\( 35.2^{* \dagger} \)
\( 28.9^{* \dagger} \)
Soln Desc
33.8
27.4
Soln Desc + Title
\( 35.5^{* \dagger} \)
25.6
Attended Seg
\( 36.2^{* \dagger} \)
28.0*
+
+Table 2: Results on the Disc-BFP test sets. Models are initialized from the checkpoint originally finetuned without the oracle commit message on the full BFP training sets. We then finetune on the Disc-BFP training sets with various input context representations and evaluate on the Disc-BFP test sets using the same representations. We indicate representations that statistically significantly outperform baselines with superscripts identifying the specific baseline that is surpassed.
+
+bug report discussions: buggy $< \mathrm{s}>$ method $< \mathrm{s}>$ oracle commit message. To make a fair comparison with our models, we initialize baselines using the Chakraborty and Ray (2021) checkpoints (§4.1) and further finetune on the Disc-BFP training sets, using a context window of 1,024 tokens.
+
+# 5 Results
+
+Following Chakraborty and Ray (2021), we compute how often $(\%)$ the generated output exactly matches the target fixed code snippet. We perform statistical significance testing with bootstrap tests (Berg-Kirkpatrick et al., 2012), using 10,000 samples (with sample size 5,000) and $p < 0.05$ . We provide sample output in Appendix B.
+
+We present results in Table 2. We find that leveraging context from bug report discussions can lead to significant improvements over baselines which do not include natural language context, yielding up to $2.4\%$ improvement for $\text{Disc-BFP}_{\text{small}}$ and $1.8\%$ for $\text{Disc-BFP}_{\text{medium}}$ .
+
+We also observe that using context derived from bug report discussions leads to improved performance (1.5-2.8%) over using the oracle commit message during our finetuning with the Disc-BFP training sets. However, when Chakraborty and Ray (2021) originally considered using the oracle commit message, they had finetuned with it as input on significantly more data (i.e., the full BFP training sets). So, we further investigate by initializing PLBART parameters from the Chakraborty and Ray (2021) checkpoint which was finetuned using the oracle commit message (buggy $< \mathrm{s}>$ method $< \mathrm{s}>$ oracle commit message). Then, we perform finetuning on the Disc-BFP training sets with the various
+
+
Finetune/Test Ctxt
Disc-BFPsmall
Disc-BFPmed
Without NL§
35.5
25.3
Oracle MsgBox¶
36.2
25.9
Whole Discussion
34.1
25.6
Title
35.2
25.3
Last Utterance
36.2
25.6
Soln Desc
33.4
26.5§
Soln Desc + Title
39.2§¶
26.2§
Attended Seg
36.9§
24.1
+
+Table 3: Additional results on the Disc-BFP test sets where models are initialized from the checkpoint originally finetuned with the oracle commit message on the full BFP training sets. We then finetune on the Disc-BFP training sets with various input context representations and evaluate on the Disc-BFP test sets using the same representations. We indicate representations that statistically significantly outperform baselines with superscripts identifying the specific baseline that is surpassed.
+
+input context representations we considered in Table 2. We present results from these additional experiments in Table 3. Note that for all representations other than "oracle msg" the oracle commit message is used only during training and not used at test time, and so the results can extend to an actual realistic use case.
+
+Relative to the results presented in Table 2, for $Disc-BFP_{small}$ , initializing from the checkpoint finetuned with the oracle commit message tends to yield improved performance across the different input context representations, and with the solution description + title representation, we observe a $3.0\%$ improvement over using the oracle commit message. For $Disc-BFP_{medium}$ , the performance tends to be lower, and the "last utterance" context representation from Table 2 remains the best. Therefore, we again find that using bug report discussions leads to improvements over baselines that use the oracle commit message (during both finetuning and test). This suggests that context derived from bug report discussions, encompassing diverse types of information, can offer richer context than oracle commit messages for fixing bugs. This is especially promising since these discussions are often readily available in a real world setting.
+
+Overall, the scores and magnitude of improvement tend to be lower for $Disc\text{-} BFP_{\text{medium}}$ . This is likely due to the challenges of generating longer sequences (Varis and Bojar, 2021) and the stringent evaluation metric requiring exact match with the reference. The best performance on the $Disc\text{-} BFP_{\text{small}}$ test set comes from using solution description + title. For $Disc\text{-} BFP_{\text{medium}}$ , it is with the last utterance. Since both of these are derived from
+
+the whole discussion, one may expect using the whole discussion to yield similar or even improved performance; however, this is not the case.
+
+Including the whole discussion substantially increases the input length, which models like PLBART cannot easily handle. This can be partially attributed to the practical challenge of fitting the entire sequence in the model's limited context window, with $12.8 - 15.8\%$ training examples getting truncated. However, the bigger challenge is drawing meaning from such large amounts of text. We demonstrate the benefits of using more concise sequences, through various natural language elements that are likely to capture critical aspects of the whole discussion.
+
+# 6 Conclusion
+
+In this work, we investigated the utility of natural language for automated bug fixing. Unlike prior work, which leverages an unrealistic source of natural language for this purpose, through oracle commit messages, we consider a naturally occurring source that is often available: bug report discussions. We explore various strategies for deriving natural language context from these discussions, using our newly compiled discussion-augmented, bug-fixing patches datasets. We show that when these discussions are available, they offer useful context for bug fixing, even leading to improved performance over using oracle commit messages.
+
+# Acknowledgements
+
+We would like to thank Saikat Chakraborty for giving us access to checkpoints from Chakraborty and Ray (2021). We would like to also thank anonymous reviewers for their detailed suggestions. This work was supported by NSF grant IIS-2145479, a Bloomberg Data Science Fellowship to the first author, and a Google Faculty Research Award.
+
+# Limitations
+
+We focus on popular bug-fixing datasets (Tufano et al., 2019), which were originally constructed with certain constraints, including the use of a single programming language (Java) and methods of limited lengths (<50 tokens, 50-100 tokens). Next, examples in these datasets correspond to individual methods, with some examples being drawn from different methods of the same bug-fixing commit. Therefore, while generating the correct fix for a given example removes the presence of a bug in a
+
+particular method, it does not necessarily imply that the underlying bug has been completely removed from the software project entirely.
+
+Furthermore, bug report discussions are not always available, and our work focuses on those instances in which they are available. Because Tufano et al. (2019) do not consider bug report discussions in their work, they do not require examples to have bug report discussions in their datasets. However, we do need examples to have these discussions for our study. Because we are unable to map many of their examples to bug report discussions, we focus on smaller subsets of their datasets.
+
+# Ethics Statement
+
+Automated bug fixing aims to streamline debugging and bug resolution for developers. We envision developers using the output generated by our models as "suggested fixes" that they would still need to inspect (and possibly revise) before committing them to the code base. Without such human intervention, erroneous output generated by our models could leave bugs unfixed or even introduce new bugs, posing a threat to the overall reliability of the software.
+
+Note that we mine publicly available bug report discussions, in accordance with GitHub's acceptable use policy.
+
+# References
+
+Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Unified pre-training for program understanding and generation. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2655-2668.
+Miliadis Allamanis, Henry Jackson-Flux, and Marc Brockschmidt. 2021. Self-supervised bug detection and repair. Advances in Neural Information Processing Systems, 34.
+Deeksha Arya, Wenting Wang, Jin LC Guo, and Jinghui Cheng. 2019. Analysis and detection of information types of open source software issue discussions. In International Conference on Software Engineering, pages 454-464.
+Muhammad Zubair Baloch, Shahid Hussain, Humaira Afzal, Muhammad Rafiq Mufti, and Bashir Ahmad. 2020. Software developer recommendation in terms of reducing bug tossing length. In International Conference on Security, Privacy and Anonymity in Computation, Communication and Storage, pages 396-407.
+
+Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. In Conference on Empirical Methods in Natural Language Processing, pages 995-1005.
+Saikat Chakraborty and Baishakhi Ray. 2021. On multimodal learning of editing source code. In International Conference on Automated Software Engineering, pages 443-455.
+Songqiang Chen, Xiaoyuan Xie, Bangguo Yin, Yuanxiang Ji, Lin Chen, and Baowen Xu. 2020. Stay professional and efficient: Automatically generate titles for your bug reports. In International Conference on Automated Software Engineering, pages 385-397.
+Zimin Chen, Steve Kommrusch, Michele Tufano, Louis Noël Pouchet, Denys Poshyvanyk, and Martin Monperrus. 2019. SequenceR: Sequence-to-sequence learning for end-to-end program repair. Transactions on Software Engineering, 47(9):1943-1959.
+Anil Koyuncu, Kui Liu, Tegawende F Bissyandé, Dongsun Kim, Martin Monperrus, Jacques Klein, and Yves Le Traon. 2019. iFixR: Bug report driven program repair. In Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pages 314-325.
+Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and tokenizer for neural text processing. In Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71.
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Annual Meeting of the Association for Computational Linguistics, pages 7871-7880.
+Xiaochen Li, He Jiang, Dong Liu, Zhilei Ren, and Ge Li. 2018. Unsupervised deep bug report summarization. In International Conference on Program Comprehension, pages 144-14411.
+Thibaud Lutellier, Hung Viet Pham, Lawrence Pang, Yitong Li, Moshi Wei, and Lin Tan. 2020. CoCoNut: combining context-aware neural translation models using ensemble for program repair. In International Symposium on Software Testing and Analysis, pages 101-114.
+Ehsan Mashhadi and Hadi Hemmati. 2021. Applying CodeBERT for automated program repair of java simple bugs. In International Conference on Mining Software Repositories, pages 505-509.
+Yuki Noyori, Hironori Washizaki, Yoshiaki Fukazawa, Keishi Ooshima, Hideyuki Kanuka, Shuhei Nojiri, and Ryosuke Tsuchiya. 2019. What are good discussions within bug report comments for shortening bug fixing time? In International Conference on Software Quality, Reliability and Security, pages 280-287.
+
+Sheena Panthaplackel, Junyi Jessy Li, Milos Gligoric, and Ray Mooney. 2022. Learning to describe solutions for bug reports based on developer discussions. In *Findings of the Association for Computational Linguistics*, pages 2935–2952.
+Gema Rodríguez-Pérez, Gregorio Robles, Alexander Serebrenik, Andy Zaidman, Daniel M Germán, and Jesus M Gonzalez-Barahona. 2020. How bugs are born: a model to identify how bugs are introduced in software components. Empirical Software Engineering, 25(2):1294-1340.
+Wei Tao, Yanlin Wang, Ensheng Shi, Lun Du, Shi Han, Hongyu Zhang, Dongmei Zhang, and Wenqiang Zhang. 2021. On the evaluation of commit message generation models: An experimental study. In International Conference on Software Maintenance and Evolution, pages 126-136.
+Michele Tufano, Cody Watson, Gabriele Bavota, Massimiliano Di Penta, Martin White, and Denys Poshanyak. 2019. An empirical study on learning bug-fixing patches in the wild via neural machine translation. Transactions on Software Engineering and Methodology, 28(4):1-29.
+Dusan Varis and Ondrej Bojar. 2021. Sequence length is a domain: Length-based overfitting in transformer models. In Conference on Empirical Methods in Natural Language Processing, pages 8246-8257.
+Cathrin Weiss, Rahul Premraj, Thomas Zimmermann, and Andreas Zeller. 2007. How long will it take to fix this bug? In International Workshop on Mining Software Repositories, pages 1-1.
+Shengqu Xi, Yuan Yao, Xusheng Xiao, Feng Xu, and Jian Lu. 2018. An effective approach for routing the bug reports to the right fixers. In Asia-Pacific Symposium on Internetware, pages 1-10.
+Ziye Zhu, Yun Li, Hanghang Tong, and Yu Wang. 2020. CooBa: Cross-project bug localization via adversarial transfer learning. In International Joint Conference on Artificial Intelligence.
+
+# A Generating Solution Descriptions
+
+In Panthaplackel et al. (2022), we benchmarked various models, finding that the best solution descriptions were generated by finetuning PLBART with a filtered training set (consisting of fewer generic and uninformative target descriptions as well as discussions without sufficient context to generate informative descriptions).
+
+In this current work, we re-train the model after removing 7 examples in the training set that have bug reports overlapping with the Disc-BFP test sets. We run inference on all partitions of the Disc-BFP datasets. For this, we first preprocess the bug report discussions by subtokenizing them (i.e., splitting by spaces, punctuation, CamelCase, and snake(case), similar to how we previously preprocessed the training data in Panthaplackel et al. (2022). Note that we do not subtokenize bug report discussions when we directly feed them into the models we presented in the main paper. Bug report discussions often include source code, either in-lined with natural language or as longer code blocks, which are often delimited with markdown tags. In Panthaplackel et al. (2022), we had retained in-lined code but removed longer marked blocks of code. While these longer code blocks may not be as relevant to generating natural language descriptions, we believe they could be useful in gathering insight for generating the fixed code. Therefore, we do not remove them from bug report discussions, even when generating solution descriptions.
+
+# B Examples
+
+For the $Disc - BFP_{small}$ test example in Figure 1, the two models which leverage only buggy $$ method during finetuning and test (Without NL in Table 2) do not generate the correct output. Note that neither of these models have access to any natural language context. Two other models (which do use natural language) also fail to generate the correct output, corresponding to the whole discussion and solution description representations (initialized using the "Without NL" checkpoint). In all four of these error cases, the model simply copies the buggy code snippet. However, the other 12 models generate the correct output for this particular example.
+
+Some examples are difficult for models, even with natural language context. We provide one such example from the $Disc-BFP_{medium}$ test set in Figure 2. The fix requires reversing the order of the method parameters, which is actually evident
+
+from the bug report discussion, as well as the generated solution description. However, performing this reversal involves more complex reasoning, and so the majority of models are unable to generate the correct output for this example. Nonetheless, the model which leverages the last utterance (initialized using the "Without NL" checkpoint) does manage to generate the correct output.
+
+# C Identifying Useful Discussion Segments
+
+We acquire context from bug report discussions in various ways, either heuristically (whole discussion, title, last utterance) or algorithmically (attended segments when generating solution descriptions). (Note that we do not include solution descriptions in these groups since they do not actually appear within the bug report discussions.) As we saw in Table 2, using the whole discussion may not be beneficial, since models struggle to reason about large amounts of text. We show that we are able to achieve improved performance by selecting more concise segments from within this discussion (e.g., title, last utterance, attended segments) that are likely to be relevant to fixing the bug.
+
+However, we may not always being selecting the most useful segment(s) yielding the best performance. The most useful segments may vary by example, and there could also be other utterances (beyond the title, last utterance, and attended utterances) that have relevant information.
+
+Therefore, we also estimate the performance of an "oracle" upper-bound that employs the most useful segment as the natural language context. For this, we consider models finetuned with the various segments from the discussion, including models finetuned on the whole discussion. We run inference with these models, using buggy $$ method $$ segment, for all segments, including the title and each utterance in the discussion. So, if there are $N$ segments derived from the discussion (title and $N - 1$ utterances before the bug-fixing commit), we obtain $N$ candidates for the fixed code.
+
+For a given example, we compute best exact match, or how often at least one of these candidates matches the reference. We present results in Table 4. We observe a $3.1 - 3.3\%$ gap, relative to the highest scores in Table 2, suggesting that there is useful context in these discussions that is not being exploited. We leave it to future work to learn models for extracting the most useful segments from bug report discussions for fixing bugs.
+
+```java
+- public void assertEquals (java.lang.Object actual, java.lang.Object expected) + public void assertEquals (java.lang.Object expected, java.lang.Object actual) { if ((expected == null) && (actual == null)) return; if ((expected != null) && (expected.equals(actual))) return; fail(range(expected, actual)); } Oracle Commit Message: Fixes issue #4. Title: assertEquals parameters order
+```
+
+```txt
+Utterance #1: Maybe this is not an issue but a desired behaviour, however, it seems to me that the order of the parameters in the assertEquals method is wrong: public void assertEquals(Object actual, Object expected) Being a long time user of JUnit, I expected the "actual" parameter to be in the second position instead of the first one.
+```
+
+```txt
+Utterance #2: The ordering was based on TestNG, which is what I typically use for unit testing, but since xUnit is more common I don't mind reversing the order.
+```
+
+```txt
+Solution Description: reversing the order of the assert equals parameters
+```
+
+Figure 2: Examples from the $Disc-BFP_{medium}$ test set, with the corresponding bug report discussion (https://github.com/jhalterman/concurrentunit/issues/4) and generated solution description.
+
+
Init
Finetune Context
Disc-BFPsmall
Disc-BFPmed
Without NL (BFP)
Whole Discussion
36.9
29.8
Title
40.3
27.4
Last Utterance
36.9
32.2
Attended Segments
37.2
31.3
With NL (BFP)
Whole Discussion
39.6
29.2
Title
38.2
26.8
Last Utterance
39.2
29.5
Attended Segments
42.3
27.1
+
+Table 4: Evaluating exact match (%) if the best performing segment (title or any individual utterance) from the whole discussion is used at test time (assuming that it's known).
+
+# D Initializing Model Parameters
+
+In Table 2, we present results from initializing model parameters from two of the checkpoints released by Chakraborty and Ray (2021). One corresponds to finetuning PLBART without NL using task-specific data from the larger BFP training sets. The other one corresponds to finetuning PLBART with NL (from oracle commit messages), also using task-specific data from the BFP training sets. Since these checkpoints have already been finetuned on bug-fixing data, it is reasonable to run inference on them directly without further finetuning on the Disc-BFP training sets. We show these results in Table 5. We find the overall performances to be lower, especially when testing with input context representations that were not seen during Chakraborty and Ray (2021)'s finetuning (e.g., whole discussion).
+
+We also tried initializing model parameters directly from PLBART and finetuning on the Disc-BFP training sets. Table 5 shows that this works poorly, likely because the Disc-BFP training sets are smaller than the BFP training sets, with which the Chakraborty and Ray (2021) checkpoints were
+
+finetuned. Therefore, to reap the benefits of finetuning on more data, we believe it is best to first finetune on larger bug-fixing datasets (for which bug report discussions do not need to be available). Following that, another stage of finetuning should be done using the smaller training set that includes context from bug report discussions.
+
+# E Training Details
+
+Our models are based on the architecture of PLBART, which itself follows from the BART-base model (Lewis et al., 2020). The encoder and decoder each have 6 layers, with hidden dimension 768 and 12 heads. There are approximately 140M parameters. We use the same hyperparameters as Chakraborty and Ray (2021). The batch size is 4, with gradient accumulation over every 4 batches. Early stopping is employed, with a patience of 5 epochs, based on validation performance. All models are trained for a single run. At test time, beam search is used, with a beam size of 5. Models are finetuned and tested using NVIDIA DGX GPUs (32 GB). We report the number of epochs, training time, and testing time for each of the models in Table 6.
+
+
Init
Context
Inference Only
Finetuned
Disc-BFPsmall
Disc-BFPmed
Disc-BFPsmall
Disc-BFPmed
PLBART
Without NL
-
-
22.2
14.8
Oracle Msg
-
-
28.0
16.6
Whole Disc
-
-
25.3
16.0
Title
-
-
27.3
1.5
Last Utterance
-
-
23.2
19.0
Soln Desc
-
-
20.5
17.2
Soln Desc + Title
-
-
24.2
16.3
Attended Seg
-
-
18.1
1.8
Without NL (BFP)
Without NL
30.7
25.3
33.8
27.1
Oracle Msg
30.7
25.0
33.4
27.4
Whole Disc
21.5
19.0
33.1
27.1
Title
31.4
25.9
35.5
25.9
Last Utterance
29.0
23.2
35.2
28.9
Soln Desc
31.7
25.9
33.8
27.4
Soln Desc + Title
29.7
25.0
35.5
25.6
Attended Seg
23.5
20.8
36.2
28.0
With NL (BFP)
Without NL
31.1
22.3
35.5
25.3
Oracle Msg
31.1
24.4
36.2
25.9
Whole Disc
20.5
16.9
34.1
25.6
Title
28.7
22.3
35.2
25.3
Last Utterance
25.3
22.3
36.2
25.6
Soln Desc
29.4
24.1
33.4
26.5
Soln Desc + Title
28.3
22.6
39.2
26.2
Attended Seg
23.5
19.9
36.9
24.1
+
+Table 5: We measure the effect of finetuning on the $Disc-BFP$ training sets by comparing to a setting in which the Chakraborty and Ray (2021) checkpoints are used directly for inference (without any finetuning). We also measure the effect of initializing with checkpoints that have already been finetuned on task-specific data by comparing to models directly initialized from PLBART and then finetuned on the $Disc-BFP$ training sets.
+
+
Init
Context
Disc-BFPsmall
Disc-BFPmed
Epoch
Train Time
Test Time
Epoch
Train Time
Test Time
Without NL (BFP)
Without NL
2
0:15:44
0:01:06
3
0:34:43
0:01:59
Oracle Msg
14
0:31:11
0:00:34
4
0:28:21
0:01:13
Whole Disc
6
0:25:25
0:01:55
10
1:02:34
0:02:18
Title
6
0:18:54
0:01:31
13
1:05:32
0:01:02
Last Utterance
8
0:28:28
0:01:38
5
0:37:44
0:02:47
Soln Desc
4
0:17:48
0:01:11
3
0:26:54
0:01:50
Soln Desc + Title
10
0:28:18
0:01:16
9
0:51:09
0:02:27
Attended Seg
6
0:22:54
0:01:21
2
0:27:37
0:02:39
With NL (BFP)
Without NL
6
0:17:54
0:00:53
3
0:24:16
0:01:18
Oracle Msg
9
0:36:17
0:00:52
9
1:02:02
0:01:59
Whole Disc
6
0:25:55
0:00:53
5
0:42:01
0:02:19
Title
6
0:18:08
0:01:31
10
0:46:07
0:01:36
Last Utterance
4
0:19:51
0:01:54
2
0:24:24
0:02:09
Soln Desc
2
0:13:44
0:01:03
2
0:23:42
0:01:50
Soln Desc + Title
8
0:26:41
0:01:42
6
0:41:55
0:01:38
Attended Seg
10
0:35:53
0:01:43
5
0:41:44
0:02:28
+
+Table 6: Reporting the training epoch from which we obtain the checkpoint used for evaluation, the total training time (HH:MM:SS), and the total time needed to run inference (HH:MM:SS).
\ No newline at end of file
diff --git a/usingdeveloperdiscussionstoguidefixingbugsinsoftware/images.zip b/usingdeveloperdiscussionstoguidefixingbugsinsoftware/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1698c559692b488d37c35497246f90203b7560fa
--- /dev/null
+++ b/usingdeveloperdiscussionstoguidefixingbugsinsoftware/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2830a4db7bc725d648d9128d1f0d47df850e3d17600601b79712077ff4ee13ce
+size 317927
diff --git a/usingdeveloperdiscussionstoguidefixingbugsinsoftware/layout.json b/usingdeveloperdiscussionstoguidefixingbugsinsoftware/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ac74ca8a0d16d4e08352231e2ae3832e64d45601
--- /dev/null
+++ b/usingdeveloperdiscussionstoguidefixingbugsinsoftware/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0c60447b872e544f9361b9a0649d26a8c263109df0bdf146c5727abdd0aca91a
+size 284960
diff --git a/utilizinglanguageimagepretrainingforefficientandrobustbilingualwordalignment/534a4d78-ee0c-41d5-8eb1-b76ef43e7cf7_content_list.json b/utilizinglanguageimagepretrainingforefficientandrobustbilingualwordalignment/534a4d78-ee0c-41d5-8eb1-b76ef43e7cf7_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..0d909212db592ff11b947762273abb026c594398
--- /dev/null
+++ b/utilizinglanguageimagepretrainingforefficientandrobustbilingualwordalignment/534a4d78-ee0c-41d5-8eb1-b76ef43e7cf7_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fae928df4d6948407002e9611da657e44b0910dc7477164c03247c83eecbd3ea
+size 112107
diff --git a/utilizinglanguageimagepretrainingforefficientandrobustbilingualwordalignment/534a4d78-ee0c-41d5-8eb1-b76ef43e7cf7_model.json b/utilizinglanguageimagepretrainingforefficientandrobustbilingualwordalignment/534a4d78-ee0c-41d5-8eb1-b76ef43e7cf7_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8d2f25c480a42c3230a402eadfe8ed8e91463b0d
--- /dev/null
+++ b/utilizinglanguageimagepretrainingforefficientandrobustbilingualwordalignment/534a4d78-ee0c-41d5-8eb1-b76ef43e7cf7_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c8d7458aaf51a588d50b5f837fef70476ec2cc0e006bf358ccb09af61128e373
+size 135206
diff --git a/utilizinglanguageimagepretrainingforefficientandrobustbilingualwordalignment/534a4d78-ee0c-41d5-8eb1-b76ef43e7cf7_origin.pdf b/utilizinglanguageimagepretrainingforefficientandrobustbilingualwordalignment/534a4d78-ee0c-41d5-8eb1-b76ef43e7cf7_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d195697d6599f7e7d43a3316ceed9eff6c3432a4
--- /dev/null
+++ b/utilizinglanguageimagepretrainingforefficientandrobustbilingualwordalignment/534a4d78-ee0c-41d5-8eb1-b76ef43e7cf7_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee574933a681c2ed108ef8b4ac97fcda965734feb9d5d8dea8eb29233e5f00df
+size 2567854
diff --git a/utilizinglanguageimagepretrainingforefficientandrobustbilingualwordalignment/full.md b/utilizinglanguageimagepretrainingforefficientandrobustbilingualwordalignment/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..5e235ed608f4c149ad236f9f68c76bc1d9d759e1
--- /dev/null
+++ b/utilizinglanguageimagepretrainingforefficientandrobustbilingualwordalignment/full.md
@@ -0,0 +1,436 @@
+# Utilizing Language-Image Pretraining for Efficient and Robust Bilingual Word Alignment
+
+Tuan Dinh*, Jy-yong Sohn, Shashank Rajput, Timothy Ossowski, Yifei Ming, Junjie Hu, Dimitris Papailiopoulos, Kangwook Lee University of Wisconsin, Madison, WI, USA
+
+# Abstract
+
+Word translation without parallel corpora has become feasible, rivaling the performance of supervised methods. Recent findings have shown the improvement in accuracy and robustness of unsupervised word translation (UWT) by utilizing visual observations, which are universal representations across languages. Our work investigates the potential of using not only visual observations but also pretrained language-image models for enabling a more efficient and robust UWT. We develop a novel UWT method dubbed Word Alignment using Language-Image Pretraining (WALIP), leveraging visual observations via the shared image-text embedding space of CLIPs (Radford et al., 2021). WALIP has a two-step procedure. First, we retrieve word pairs with high confidence of similarity, computed using our proposed image-based fingerprints, which define the initial pivot for the alignment. Second, we apply our robust Procrustes algorithm to estimate the linear mapping between two embedding spaces, which iteratively corrects and refines the estimated alignment. Our extensive experiments show that WALIP improves upon the state-of-the-art performance of bilingual word alignment for a few language pairs across different word embeddings and displays great robustness to the dissimilarity of language pairs or training corpora for two word embeddings.
+
+# 1 Introduction
+
+Translating words across different languages is one of the long-standing research tasks and a standard building block for general machine translation. Word translation is helpful for various downstream applications, such as sentence translation (Conneau et al., 2017; Hu et al., 2019) or cross-lingual transfer learning in language models (de Vries and Nissim, 2020). Unsupervised word translation (UWT) has recently drawn a great deal of attention (Artetxe
+
+
+Figure 1: Conceptual visualization of WALIP for unsupervised word translation between English and French. We can connect English and French words in an unsupervised fashion through the shared images. CLIP models (Radford et al., 2021) can be used as human simulators to associate words with images.
+
+et al., 2017; Conneau et al., 2017; Hartmann et al., 2019), reducing the need for bilingual supervision.
+
+Without any prior knowledge of the languages' connection, aligning their words is non-trivial. Most works on UWT exploit the structural similarity between continuous word embedding spaces across languages (Mikolov et al., 2013a; Ormazabal et al., 2019) to learn a linear mapping. Early works (Smith et al., 2017; Artetxe et al., 2017; Conneau et al., 2017; Hoshen and Wolf, 2018; Grave et al., 2019) focus on using only the text data to establish the bilingual alignment and solve the Procrustes problem (Schönemann, 1966). These methods rely on the similarity between pairs of languages and training corpora, thus not working well when the languages or corpora are dissimilar (Søgaard et al., 2018; Sigurdsson et al., 2020). They may also need a large amount of data to achieve good alignments (Sigurdsson et al., 2020).
+
+Words can also be connected via the visual world. Visual similarity provides additional prior knowledge for easing language translation (Mihalcea and Leong, 2008). Recent works (Sigurdsson et al., 2020; Surís et al., 2020) demonstrate the promise of using visual information to improve UWT. However, they mostly require intense joint training for the embedding shared between images or videos with texts of multiple languages. More
+
+over, these embeddings are used for translating all words, whereas not every word can be described by images or videos. Thus, it is unclear how they are helpful for non-visual words and whether the methods properly utilize topological similarity between word vector spaces (Mikolov et al., 2013a).
+
+Our contributions. We propose WALIP (Word Alignment with Language-Image Pretraining) as a new unsupervised word alignment method that leverages the joint image-text embeddings provided by CLIP (Radford et al., 2021). Fig. 1 shows an example inspiring WALIP. Consider a conversation between a French and an English speaker. As the English speaker shows an apple image, the French speaker can easily understand and provide its translation as pomme. They can similarly pair more words describing simple objects, helping translate more complex words. This observation inspires us to leverage visual information as the pivot for matching words across languages. To do so, we use CLIP (Radford et al., 2021) to correlate texts and images and construct an image-based word representation, called a fingerprint, where each coordinate measures the similarity between the word and an image a diverse image set. Note that fingerprints share similar merits with the pictorial representation of sentence (Mihalcea and Leong, 2008) that represents simple sentences by sequences of pictures. We use fingerprints to identify initial word pairs. As not every word can be described by images, we rely on the topological similarity of word vector spaces (Mikolov et al., 2013b) for the full alignment in the second step, i.e., solving a linear mapping between two spaces using our robust Procrustes algorithm with identified word pairs.
+
+Via extensive experiments, we show that WALIP is highly effective in bilingual alignment. We achieve comparable or better performance than the state-of-the-art (SOTA) baselines and close the gap to supervised methods. For instance, on the Dictionary benchmark (Sigurdsson et al., 2020) with HowToWorld-based word embedding (Miech et al., 2019), we achieve the SOTA performance on all evaluated pairs (English $\rightarrow$ {French, Korean, Japanese}), achieving significant accuracy improvements (6.7%, 2.5%, and 4.5%, cf. Table 2) over the previous SOTA (Sigurdsson et al., 2020). Our method also displays great robustness to the dissimilarity of language pairs and static word embeddings. We empirically show the effectiveness of our method through various ablation studies.
+
+# 2 Related Works
+
+Unsupervised word translation (UWT). Most UWT methods exploit the structure similarity between word vector spaces across languages (Mikolov et al., 2013a) to learn linear mappings. Early works (Smith et al., 2017; Artetxe et al., 2017) establish the parallel vocabulary and estimate the mapping by solving the Procrustes problem (Schönemann, 1966; Gower and Dijksterhuis, 2004). Others study assignment problems and directly solve Wasserstein-Procrustes for the one-to-one word assignment matrix (Zhang et al., 2017b; Grave et al., 2019) or hyper-alignment for multiple languages (Alaux et al., 2018; Taitelbaum et al., 2019). Recent works (Zhang et al., 2017a; Conneau et al., 2017; Hoshen and Wolf, 2018) propose to learn the mapping via aligning the embedding distributions with the notable MUSE framework (Conneau et al., 2017) using the adversarial training to achieve high translation performance for multiple pairs. We use MUSE as our baseline. While MUSE involves intense training for aligning two embedding spaces, WALIP does not require this training by utilizing pretrained CLIP models.
+
+Visual information has been used to improve machine translation (Hewitt et al., 2018; Zhou et al., 2018; Kiros et al., 2018; Yang et al., 2020; Li et al., 2022b). Focusing on word translation, MUVE (Sigurdsson et al., 2020) trains a linear mapping between two embeddings via learning a joint video-text embedding space for pairs with captioned instructional videos. Globetrotter (Surís et al., 2020) learns the multilingual text embeddings aligned with image embeddings via contrastive learning. The learned text embeddings are used for multilingual sentence translation and refined for word translation. These methods require intense training with a large amount of vision-text data for learning the encoders, while WALIP only utilizes pretrained embeddings of off-the-shelf CLIP models. MUVE and Globetrotter are our main baselines.
+
+Language-Vision (LV) models. We can categorize LV models into two types: single-stream and dual-stream models. The former feeds the concatenation of text and visual features into a single transformer-based encoder, such as VisualBERT (Li et al., 2019) and ViLT (Kim et al., 2021). The latter uses separate encoders for text and image and aligns semantically similar features in different modalities with contrastive objectives,
+
+such as CLIP (Radford et al., 2021), ALIGN (Jia et al., 2021), and FILIP (Yao et al., 2021). We use CLIP as our language-image pretraining model due to its inference efficiency, high performance, and the availability of pretrained models in multiple languages. CLIP inspires numerous works (Zhang et al., 2021; Li et al., 2022a; Zhou et al., 2022) for better data efficiency and task adaptation of LV models. In this line of work, Zhai et al. (2022) recently show the feasibility of training multilingual image-text models without parallel corpora by connecting languages via image embeddings.
+
+# 3 Problem Setup and Preliminaries
+
+We formally describe the target problem of unsupervised word alignment and provide two preliminaries to our method: Procrustes and CSLS.
+
+Unsupervised word alignment. We focus on the word alignment (translation) problem: finding the mapping from $A_{\mathrm{dict}}$ to $B_{\mathrm{dict}}$ , where $A_{\mathrm{dict}} = \{a_1, \dots, a_{n_a}\}$ and $B_{\mathrm{dict}} = \{b_1, \dots, b_{n_b}\}$ are dictionaries of source language $A$ and target language $B$ , with $n_a$ and $n_b$ being the number of words in each dictionary, respectively. This mapping can be represented by an equivalent index mapping $\pi : [n_a] \to [n_b]$ , i.e., we consider word $a_i$ is mapped (aligned) to word $b_{\pi(i)}$ , for $i \in [n_a]$ . Here, $[n] = \{1, 2, \dots, n\}$ is defined as the set of positive integers up to a positive number $n$ . Note that we focus on unsupervised word alignment in which no ground-truth word pairs $(a_i, b_{\pi(i)})$ are given to the algorithm. To solve this problem, we assume the access to three ingredients: (1) a large-scale image dataset with $d$ images denoted by $G = \{g_1, \dots, g_d\}$ , (2) a pre-trained monolingual CLIP model for each language, and (3) static word embeddings (Bojanowski et al., 2016; Pennington et al., 2014) for all words in dictionaries.
+
+Procrustes problem. Let $X, Y \in \mathbb{R}^{n \times d}$ be matrices of the $d$ -dimensional embeddings for $n$ words in the source and target languages. The Procrustes problem aims to find $W \in \mathbb{R}^{d \times d}$ such that $\| XW - Y \|_F$ is minimized. Regularizing $W$ with the orthogonality is found to improve the translation (Xing et al., 2015), where the optimal $W$ is
+
+$$
+W ^ {*} = \underset {W \in \mathcal {O} _ {d}} {\operatorname {a r g m i n}} \| X W - Y \| _ {F} = \operatorname {S V D} (Y ^ {T} X)
+$$
+
+where $\mathcal{O}_d$ is the set of $d\times d$ orthogonal matrices and SVD is the singular value decomposition.
+
+CSLS. Conneau et al. (2017) proposed Cross-domain Similarity Local Scaling (CSLS) to robustly measure the similarity between words' em
+
+
+Figure 2: WALIP for translating between $n_a$ words $\{a_1, \dots, a_{n_a}\}$ and $n_b$ words $\{b_1, \dots, b_{n_b}\}$ in two languages $A$ and $B$ . We have access to: (1) a set of $d$ images $\{g_i\}_{i=1}^d$ , (2) the CLIP model for each language, and (3) static word embeddings for each language, denoted by $\tilde{A}^{\mathrm{txt}}$ and $\tilde{B}^{\mathrm{txt}}$ . In step 1, we build a fingerprint $f(a_i)$ defined in equation 1 for each word $a_i$ and build $f(b_i)$ for words $b_i$ . We match words whose fingerprints share high similarities, thus having an initial mapping $\pi : [n_a] \to [n_b]$ pairing $a_i$ and $b_{\pi_i}$ for $i \in S \subseteq [n_a]$ . In step 2, we use static word vectors and initially matched pairs to solve the linear word mapping with the robust Procrustes algorithm for better alignment.
+
+beddings. Given two sets $X = \{x_{i}\}_{i\in [n_{X}]}$ , $Y = \{y_{i}\}_{i\in [n_{Y}]}$ and the number of neighbors $K$ , the CSLS of $x_{i}$ and $y_{j}$ is defined as $\mathrm{CSLS}(x_i,y_j) = 2\cos (x_i,y_j) - r_Y(x_i) - r_X(y_j)$ where $\cos (\cdot ,\cdot)$ is the cosine similarity, $r_Y(x_i) = \frac{1}{K}\sum_{y_j\in \mathcal{N}_Y(x_i)}\cos (x_i,y_j)$ is the average similarity of $x_{i}$ , and $\mathcal{N}_Y(x_i)$ is the set of $K$ nearest neighbors of $x_{i}$ among elements of $Y$ . CSLS performs cross-domain normalization to address the hub phenomenon (Radovanovic et al., 2010) of the $K$ -nearest-neighbor method in high-dimensional spaces, which occurs when some vectors are nearest to many vectors while others are isolated.
+
+# 4 WALIP
+
+We first provide the high-level idea and then specify each stage of WALIP. Algo. 2 in Appendix presents the pseudocode for our algorithm.
+
+# 4.1 Method Overview
+
+Our idea is to enable effective and robust word alignment by using (1) the similarity of visual representations of words with similar meanings and (2) the structural similarity of static word embedding spaces across languages. Specifically, we use images to connect similar words in two languages with the aid of CLIP (Radford et al., 2021). However, a naive application of this method only makes sense for visual words such as non-abstract nouns that images can describe. To map non-visual words,
+
+we utilize the topological similarity (i.e., the degree of isomorphism) between word vector spaces (Vulić et al., 2020). Motivated by the existence of a linear association between two static word embeddings of different languages (Ormazabal et al., 2019), we learn a linear mapping using the robust matching algorithm on identified word pairs.
+
+Fig. 2 illustrates WALIP used for aligning words $\{a_i\}$ and words $\{b_i\}$ in languages $A$ and $B$ . WALIP has two steps. First, it selects pairs $\{a_i, b_{\pi_i}\}$ having similar visual meanings by using each word's fingerprint, defined as the similarity of the word and an image set via CLIP's encoders. Second, it iteratively aligns word embeddings of languages $A$ and $B$ , i.e., find a linear mapping between two embeddings, using robust Procrustes and the initial pairs identified in the first step.
+
+# 4.2 Step 1: Pairing up Visually Similar Words using Language-Image Association
+
+As shown in Algo. 2, our Step 1 pairs words via images. This is available by an image-based fingerprint representation of each word, defined below.
+
+# 4.2.1 Image-based Fingerprints
+
+We denote the image/text encoder of the CLIP model for language $A$ as $A^{\mathrm{img}}$ and $A^{\mathrm{txt}}$ . Similarly, we define $B^{\mathrm{img}}$ and $B^{\mathrm{txt}}$ for language $B$ . The critical advantage of the CLIP model is the access to the shared embedding space aligning image $g_{i}$ and its corresponding word $(a_{i}$ or $b_{i})$ . WALIP utilizes this embedding space of each source/target language to find the bilingual mapping.
+
+Given $d$ images $\{g_1,\dots ,g_d\}$ we first define a $d$ -dimensional vector (called fingerprint) for each word $a_{i}\in A_{\mathrm{dict}}$ in the source language as $f(a_{i}) = [f_{i,1}^{a},\dots ,f_{i,d}^{a}]$ where $f_{i,j}^{a} = \sin (A^{\mathrm{txt}}(a_{i}),A^{\mathrm{img}}(g_{j}))$ is the similarity between the embedding of the $i$ -th word and the embedding of the $j$ -th image. Similarly, we define the fingerprint of each word $b_{i}\in B_{\mathrm{dict}}$ in the target language as $f(b_{i}) = [f_{i,1}^{b},\dots ,f_{i,d}^{b}]$ where $f_{i,j}^{b} = \sin (B^{\mathrm{txt}}(b_{i}),B^{\mathrm{img}}(g_{j}))$ . This fingerprint represents a word's similarity to images, according to the embedding space of pretrained CLIP models. We denote the fingerprint of the $i$ -th word in the dictionary of a language $l\in \{a,b\}$ as
+
+$$
+f \left(l _ {i}\right) = \left[ f _ {i, 1} ^ {l}, \dots , f _ {i, d} ^ {l} \right]. \tag {1}
+$$
+
+Figs. 3a, 3b show examples of English and French fingerprints. Here, we measure the similarity of each word with 12 images from ImageNet (Deng et al., 2009), obtaining a 12-dim vector. The top
+
+
+(a) Fingerprints for English words
+
+
+(b) Fingersprints for French words
+Figure 3: Illustration of image-based fingerprints for English words (a) and their translations in French (b). The similarity between each word (inserted in a simple template such as “A photo of []”) and all images serves as the fingerprint (each row). Fingerprints of visual words (top three rows) are more distinguishable than abstract words (three bottom rows) and share similar patterns to the fingerprints of their French translations.
+
+three rows of each figure are fingerprints for visual words (cock, goldfish, tiger shark), and the bottom rows are of abstract words (culture, philosophy, phenomenon). Unlike visual words, fingerprints of abstract words are more uniform (similar values for most coordinates), i.e., they are not distinguishable. Note that fingerprints of each English-French pair of visual words { (cock, coq), (goldfish, poisson rouge), (tiger shark, requin)} share highly similar patterns.
+
+# 4.2.2 Identifying Pivot Pairs
+
+Consider two visual words $a_i, b_j$ in two languages with similar meanings (e.g., $a_i =$ "tiger shark" and $b_j =$ "requin" in Fig. 3). For a given image set, fingerprints of the two words would be similar, i.e., $f(a_i) \approx f(b_j)$ as shown in Fig. 3, allowing the use of fingerprint similarity for word translation.
+
+Keeping only visually aligned words. Recall that fingerprints are meaningful for visual words only, as observed in Fig. 3. Motivated by this observation, we focus on words well represented by a set of images. Specifically, for the $i$ -th word $l_i$ in language $l \in \{a, b\}$ , we compute the maximum similarity value $f_{i,\max}^{(l)} = \max_j f_{i,j}^{(l)}$ within the corresponding fingerprint $f(l_i) = [f_{i,1}^{(l)}, \dots, f_{i,d}^{(l)}]$ . Then, for each language $l \in \{a, b\}$ , we keep the set of words $S_l$ having the maximum similarity be
+
+yond the median. To focus on components with high similarity, we sparsify fingerprints by eliminating values below the $0.9^{th}$ -quantile and normalize the vectors. This revised fingerprint allows us to focus on images highly similar to the given word.
+
+Selecting pairs with high similarity. For source words $\{a_i\}_{i\in S_a}$ and target words $\{b_j\}_{j\in S_b}$ , we measure the similarity of fingerprints $f(a_{i})$ and $f(b_{j})$ using CSLS (Sec. 3). Recall that our goal is to find a mapping $\pi :[n_a]\to [n_b]$ indicating that the word $a_{i}$ is translated to $b_{\pi (i)}$ , and we want to map $a_{i}$ to $b_{j}$ having similar fingerprints. Based on the similarity score $c_{i,j} = \mathrm{CSLS}(f(a_i),f(b_j))$ for $i\in S_a$ and $j\in S_b$ , we set $\pi (i) = \arg \max_{j}c_{i,j}$ , giving us an initial set of word pairs, where two words in each pair are visual words and share highly similar fingerprint patterns. See algorithms 3 and 4 for pseudocodes of word filtering and pair selection.
+
+# 4.3 Step 2: Iteratively Learning the Mapping with Robust Procrustes
+
+In Step 1 of WALIP in Algo. 2, we have identified the initial word mapping $\pi$ on visual words. In Step 2, we learn and fine-tune $\pi$ on the whole dictionaries using linear mapping $W^{\star}$ between static word embeddings of two languages, learned by iteratively applying our robust Procrustes algorithm (Algo. 1). We first explain Algo. 1 – the building block of Step 2 in Sec. 4.3.1, and then explain how this algorithm allows us to learn $\pi$ in Sec. 4.3.2.
+
+Algorithm 1 Robust-Procrustes
+Input: Vectors $X,Y\in \mathbb{R}^{n\times d}$
+Output: Linear mapping $W^{*}\in \mathbb{R}^{d\times d}$
+Set $\epsilon = 0.001$ $M = 5$
+Initial mapping $W_{0} = \mathrm{Procrustes}(X,Y)$
+for $m\in \{1,\dots ,M\}$ do $\begin{array}{r}\alpha_{i}\leftarrow \frac{1}{\|y_{i} - W_{k - 1}x_{i}\|^{2} + \epsilon}\quad \mathrm{for} i\in [n]\\ \alpha_{i}\leftarrow \alpha_{i} / \max_{j\in [n]}\alpha_{j}\\ D\leftarrow \mathrm{Diag}(\alpha_{1}^{1 / 2},\ldots ,\alpha_{n}^{1 / 2})\\ W_{m}\leftarrow \mathrm{Procrustes}(DX,DY) \end{array}$ $W^{\star}\gets W_M$
+
+# 4.3.1 Error-Weighting Robust Procrustes
+
+The initial word pairs identified in Step 1 are obtained in an unsupervised manner with potentially many mismatched pairs. Thus directly applying the existing Procrustes algorithm (Sec. 3) to these pairs may lead to an incorrect linear mapping $W$ .
+
+We introduce a robust matching algorithm (Algo. 1) to eliminate the mismatched pairs and learn the mapping from the correct ones. Inspired
+
+by the existing robust Procrustes algorithms (Groenen et al., 2005), we assign small weights to incorrect pairs and large weights to correct pairs. Given a word embedding matrix $X$ and its aligned counterpart $Y$ , we first apply the Procrustes to learn the initial $W_{0}$ . We then measure the error of $W_{0}$ on each word pair $(x, y)$ by the residual $r(x, y) = \| y - W_{0}x\|_{2}$ . Since the pair is likely to be correct when the residual is small, we use $\alpha(x, y) = 1 / r(x, y)$ as the weight of the pair. Then, we apply Procrustes on these weighted pairs to obtain a new mapping $W_{1}$ . We repeat this process a few times to achieve a stable linear mapping $W^{\star}$ .
+
+# 4.3.2 Iteratively Updating the Word Alignment $\pi$ and Linear Mapping $W^{\star}$
+
+In Step 2 of WALIP, we iteratively apply two procedures: first, we update linear mapping $W^{\star}$ by applying the robust Procrustes on identified pairs, and second, we update the word mapping $\pi$ using $W^{\star}$ and the pair selection algorithm (Algo. 4).
+
+The first phase is described in Sec. 4.3.1. In the second phase, we transform each source vector $x_{i}$ into $W^{\star}x_{i}$ in the target embedding space and apply the $k$ -nearest-neighbor (NN) on this space. We update $\pi$ using Algo. 4 in the following manner: retrieving $k > 1$ candidate target words for each source word and choosing candidates having the similarity (with source word) higher than a threshold $q$ . For the updated $\pi$ , we measure the Euclidean distance between paired vectors as the validation loss and repeat the two procedures (update $W^{\star}$ and $\pi$ ) until the validation loss is convergent. In this process, two hyperparameters $q$ and $k$ are initialized with high values and gradually decayed at each update step of $\pi$ . Once the validation loss converged, we obtain the final mapping $\pi$ by applying Algo. 4 with $k = 1$ and $q = 0$ .
+
+Step 2 is crucial to achieving high translation performance from initial mapping. While sharing similar merits to ours, the refinement procedure (Connseau et al., 2017) is only used for marginally improving upon a high-accuracy linear mapping $W$ .
+
+# 4.4 Advantages of WALIP
+
+First, WALIP is computationally efficient, especially compared to MUSE, MUVE, and GLOBETROTTER. With pretrained CLIPs, our first step (Sec. 4.2) requires no extra training for pivot pair matching, while Step 2 (Sec. 4.3) involves a few matrix computations. Second, WALIP is more robust to language dissimilarity. Assuming well
+
+trained CLIPs, fingerprints of words having similar meanings are intuitively similar across languages as they all represent the same visual correlation to the same image set. Thus, fingerprints improve the robustness of pivot matching, especially for dissimilar languages. This may not be the case for methods only using static word embeddings (Søgaard et al., 2018). Finally, our image-based fingerprint provides an interpretable representation of words.
+
+# 5 Experiments
+
+We evaluate WALIP on bilingual alignment tasks. Sec. 5.2 compares WALIP and baselines in multiple language pairs. The following sections provide additional experimental results that either highlight the benefits of WALIP or help understand the component that enables the high performance of WALIP. Our code is available at https://github.com/UW-Madison-Lee-Lab/walip.
+
+# 5.1 Settings
+
+WALIP setting. We use publicly available pretrained CLIPs for English, Russian, Korean, and Japanese. For other languages, we fine-tune English CLIP models on Multi30K (Elliott et al., 2016, 2017) and MS-COCO variants (Lin et al., 2014; Scaiella et al., 2019; Carlos, 2020). For making CLIP prompts, we convert single words to sentences using prompt templates suggested in (Radford et al., 2021). We apply the prompt-ensemble technique with 2-7 prompts for each word and use their average as word embeddings. To make the fingerprints, we use a set of 3000 images from ImageNet (Deng et al., 2009) by default. See Sec. 5.6.3 for our detailed evaluation. For the static word embedding, we use HowToWorld (HTW)-based Word2Vec (Sigurdsson et al., 2020) and Wiki-based Fasttext embeddings (Bojanowski et al., 2016).
+
+Evaluation. We evaluate methods on the Dictionary datasets (Sigurdsson et al., 2020), which are test sets used in the MUSE benchmark (Conneau et al., 2017). Each dictionary is a set of translation pairs where each word in the source language may have multiple translations in the target language. We report recall@n used in (Sigurdsson et al., 2020), which presents the fraction of source words correctly translated. A retrieval is correct for a given query if at least one of $n$ retrieved words is the correct translation. By default, we report recall@1, which is equivalent to precision@1, and the accuracy used in (Conneau et al., 2017).
+
+Baselines. Our baselines include the videogrounding method MUVE (Sigurdsson et al., 2020) and the image-grounding method Globetrotter (Surís et al., 2020). We also compare our method with two versions of the text-only method MUSE (Conneau et al., 2017): the default one trained on the Dictionary dataset (with $1.5\mathrm{K}-3\mathrm{K}$ words per dictionary), and the other one trained on the MUSE training data (with $200\mathrm{K}$ words per dictionary); we call the latter one as MUSE (extra-vocabulary). We also consider a simple baseline using CLIP, denoted by CLIP-NN, which performs 1-nearest neighbor (1-NN) based estimation on the embedding spaces of two CLIP models: we first find the image nearest to the source word, and then find the target word nearest to the image found in the first step. For measuring recall@n of this baseline, we replace 1-NN with $\lceil \sqrt{n} \rceil$ -NN.
+
+We also test three variants of our method by making changes in Step 1: WALIP (clip-text in Step 1) which replaces fingerprints with CLIP-based text embeddings, WALIP (substring matching) which replaces the initial matching by selecting pairs sharing the longest common substrings, and WALIP (character mapping) which improves substring matching by first applying letter counting (Ycart, 2012) to map two languages' character sets. We also test two variants that replace the static word embeddings used in Step 2 with CLIP-based text embeddings (denoted by WALIP (clip-text in Step 2)) or fingerprints (denoted by WALIP (fingerprint in Step 2)). Further details are in Appendix B.
+
+# 5.2 How Well Does WALIP Perform Bilingual Word Alignment?
+
+Tables 1, 2 show our evaluation of bilingual alignment using Wiki-based and HTW-based embeddings on the Dictionary datasets.
+
+Wiki-based embeddings. In Table 1, WALIP achieves comparable or the best performances in most cases among unsupervised methods, attaining relatively small gaps to the full supervision. Specifically, WALIP achieves SOTA on five pairs, especially for $\mathrm{En} \rightarrow \mathrm{Ko}$ , where WALIP outperforms others with large margins. For the baselines using visual information, we outperform GLOBETROTTER and all variants of WALIP across all pairs. Note that MUVE only reports recall@10 for $\mathrm{En} \rightarrow \mathrm{Fr}$ as 82.4, far below ours (97.5). Compared to the version of MUSE with extra vocabularies, WALIP achieves comparable scores in most cases and out
+
+Table 1: Comparing bilingual alignment methods on Wiki-based word embedding. We report recall@1 on the Dictionary dataset. WALIP achieves SOTA performance in many pairs, close to the supervision. (Sigurdsson et al., 2020) do not report results of MUVE in this setting and GLOBETROTTER uses its learned word embeddings.
+
+
Method
En→Ko
En→Ru
En→Fr
En→It
En→Es
En→De
Es→De
It→Fr
Text-only
(Upper bound) Supervision
69.1
85.5
93.5
92.1
93.3
92.5
91.5
95.1
MUSE (extra-vocabulary)
59.3
83.0
92.5
91.6
93.0
92.5
89.1
94.5
MUSE
2.8
65.9
84.5
84.9
85.1
73.6
83.0
92.3
WALIP (substring matching)
0.2
0.0
92.0
90.3
92.0
92.1
88.7
94.3
WALIP (character mapping)
0.2
5.0
90.9
0.1
0.1
0.3
0.5
0.5
Text-Image
CLIP-NN
2.5
9.4
1.3
10.5
8.2
7.1
7.3
6.5
GLOBETROTTER
0.1
4.0
52.3
50.1
46.4
46.8
38.3
49.3
WALIP (clip-text in Step 1)
0.3
0.0
58.9
79.4
56.2
50.8
46.5
52.5
WALIP (clip-text in Step 2)
0.2
15.7
59.3
59.1
59.1
52.3
46.8
52.1
WALIP (fingerprint in Step 2)
0.2
0.5
31.3
39.0
32.6
31.3
34.7
43.3
WALIP
62.3
82.7
92.6
90.7
92.2
92.6
89.2
94.5
+
+Table 2: Comparing bilingual alignment methods on HTW-based embedding. WALIP achieves highest recall@n scores on Dictionary dataset across all pairs.
+
+
Method
En→Fr
En→Ko
En→Ja
R@1
R@10
R@1
R@10
R@1
R@10
(Up.) Sup.
57.9
80.1
41.8
72.1
41.1
68.3
MUSE (extra.)
26.3
42.3
11.8
23.9
11.6
23.5
MUSE
0.8
6.6
0.3
3.1
0.3
2.5
MUVE
28.9
45.7
17.7
33.4
15.1
31.2
WALIP (substr.)
35.5
56.0
0.0
0.2
0.3
2.1
WALIP
35.6
56.2
20.2
42.4
19.6
41.0
+
+performs in $\mathrm{En} \rightarrow \mathrm{Ko}$ . The score gaps between the two methods are larger in Table 2, as described in the next paragraph. It is worth mentioning that this version of MUSE needs a large number of extra vocabularies for training while WALIP directly performs on the test dictionaries. Moreover, most baselines (except CLIP-NN) require intense training for aligning embedding spaces, while WALIP needs a few matrix computations.
+
+HTW-based embeddings. Following (Sigurdsson et al., 2020), we test for three language pairs $(\mathrm{En} \rightarrow \{\mathrm{Fr}, \mathrm{Ko}, \mathrm{Ja}\})$ . Table 2 compares WALIP with MUVE and the baselines that perform well in Table 1. Results of MUSE (extra.) and MUVE are from (Sigurdsson et al., 2020). WALIP outperforms other unsupervised baselines with large margins, achieving the SOTA for all pairs, with the recall@1 gaps to the second-best method (MUVE) being 6.7, 2.8, and 4.5 for $\mathrm{En} \rightarrow \{\mathrm{Fr}, \mathrm{Ko}, \mathrm{Ja}\}$ , respectively. WALIP also outperforms the substring matching variant on $\mathrm{En} \rightarrow \{\mathrm{Ko}, \mathrm{Ja}\}$ .
+
+The performance on dissimilar language pairs. For both embedding types, WALIP works relatively well regardless of the similarity of language pairs. In contrast, most baselines do not perform well on a few or all dissimilar pairs $(\mathrm{En} \rightarrow \{\mathrm{Ko}, \mathrm{Ja}, \mathrm{Ru}\})$ . We expect that the low performance of the substring matching method partly comes from the dissimilarity of alphabets in such pairs.
+
+Table 3: Comparing methods when static word embeddings of source and target languages are trained on different corpora. We report recall@1 on En→Fr translation evaluated on Dictionary dataset. WALIP outperforms other baselines across two settings.
+
+
Method
Wiki-HTW
HTW-Wiki
MUSE (extra.)
0.3
0.3
MUSE
0.3
0.2
VecMap
0.1
0.1
MUVE
32.6
41.2
WALIP
34.3
60.0
+
+# 5.3 Robustness against the Dissimilarity of Static Word Embeddings
+
+Following (Sigurdsson et al., 2020), we evaluate WALIP when the static word embeddings of source and target languages come from different training corpora: Wiki and HTW corpora. We also compare with VecMap (Artetxe et al., 2017), the baseline used in the MUVE paper. Table 3 compares WALIP with MUSE variants, VecMap, and MUVE on $\mathrm{En} \rightarrow \mathrm{Fr}$ . WALIP and MUVE are more robust to the dissimilarity of word embeddings than MUSE variants and VecMap. In addition, WALIP outperforms MUVE on both settings. For instance, on the Wiki-HTW setting, recall@1 of WALIP is $60\%$ while that of MUVE is $41.2\%$ .
+
+# 5.4 Can We Reuse CLIP Models Trained on English Texts for Other Languages?
+
+Large-scale language models exhibit the strong ability of cross-lingual zero-shot transfer (Hu et al., 2020). We investigate whether WALIP can utilize a CLIP model trained on English texts (English-CLIP) for other languages. Intuitively, this is probably doable when the other language uses the same alphabet (and the same tokenizer). Here, we use the English-CLIP model to obtain fingerprints for all languages, resulting in a new version of WALIP,
+
+
+Figure 4: Zero-shot cross-lingual transfer. We observe the following when we replace the original CLIPs (yellow) with English-CLIP (cyan). Top: The initial matching accuracy drops for all pairs. Bottom: The final recall score becomes nearly 0 for the dissimilar pair $(\mathrm{En} \rightarrow \mathrm{Ru})$ but remains mostly the same for other pairs.
+
+Table 4: The percentage $(\%)$ of each word class in the Dictionary dictionaries. Each class of abstract and concrete nouns accounts for approximately $4 \%$ of words, with the total nouns being nearly $50 \%$ of words.
+
+
Dict.
Noun
Others
Abstract
Concrete
Non-ID
En→Ru
3.8
4.3
39.5
52.4
En→It
3.9
3.8
38.9
53.4
+
+denoted English-WALIP. Here, our static word embeddings are Wiki-based Fasttext embeddings. As shown in Fig. 4, using English-WALIP causes drops in initial matching accuracies, which measure the precision of mapping on selected pairs. However, these drops only affect the translation performance of languages dissimilar to English (e.g., Russian) and do not significantly affect the ones similar to English, i.e., the recall@1 remains mostly the same for $\mathrm{En} \rightarrow \{\mathrm{It}, \mathrm{Fr}, \mathrm{Es}, \mathrm{De}\}$ . Thus, English-CLIP can be used in WALIP framework for languages similar to English, reducing the need for training their new CLIP models.
+
+# 5.5 WALIP on Different Word Types
+
+In this section, we check how the performance of WALIP changes for different types of words. We categorize words into 4 classes: abstract nouns (e.g., beauty), concrete nouns (e.g., computer), non-identified nouns (e.g., Copenhagen), and non-noun (e.g., pretty). We use spaCy noun parser $^2$ to detect nouns and then use lists of popular English abstract and concrete nouns $^3$ to match their classes. We denote the unmatched nouns as non-identified (non
+
+Table 5: Recall@1 (↑) of each word type reported on each step of WALIP. In the early stages, concrete nouns obtain the highest scores in both dictionaries. After step 2, abstract and concrete nouns share more comparable scores, higher than scores of non-noun words.
+
+
Dict.
Step (Iter.)
Noun
Others
Abstract
Concrete
Non-ID
En→Ru
#1
7.0
47.6
9.8
5.8
#2 (first)
40.4
66.2
42.2
23.7
#2 (last)
86.0
86.2
86.0
78.8
En→It
#1
3.3
35.1
13.2
12.9
#2 (first)
72.9
77.2
68.0
55.1
#2 (last)
96.6
94.8
92.0
89.3
+
+id). Table 4 reports the percentage of each class in the $\mathrm{En} \rightarrow \{\mathrm{Ru}, \mathrm{It}\}$ dictionaries. Nearly $47\%$ of words are nouns, with approximately $8\%$ of words being abstract or concrete nouns.
+
+Table 5 reports recall@1 scores for all word classes after the initial matching (Step 1 in Sec. 4.2) and after the first and the last iterations of linear mapping (Step 2 in Sec. 4.3). We use the Wiki-based Fasttext embeddings for static word embeddings. After completing step 1 and the first iteration of step 2, concrete nouns have the highest scores. Note that the score gap between concrete and abstract nouns on $\mathrm{En}\rightarrow \mathrm{Ru}$ is more than $40\%$ after step 1. This indicates that the initial matching using fingerprints works better with concrete nouns. After completing step 2, the scores are improved for all classes, where scores of abstract and concrete nouns become more comparable, e.g., 86.0, 86.2 on $\mathrm{En}\rightarrow \mathrm{Ru}$ . Note that nouns have much higher recall@1 than non-noun words. These results show that step 2 improves the matching for all word types, especially for nouns.
+
+# 5.6 Ablation Study
+
+We perform ablation studies using the Wiki-based Fasttext embedding and the Dictionary dataset.
+
+# 5.6.1 Effect of Fingerprints
+
+Fig. 5 shows the effect of fingerprints on translation performance. We compare variants of WALIP using various initial mapping methods: random matching (red), clip-text embeddings (olive), substring matching (green), and image-based fingerprints (ours, dark blue). The evaluation scores can be found in Table 1. Fingerprint-based WALIPs are the best among variants across all pairs.
+
+# 5.6.2 Effect of Robust Procrustes
+
+Fig. 6 shows the comparison between our robust Procrustes (in Algo. 1) and the standard Procrustes
+
+
+Figure 5: WALIP with different methods of initial mapping. Compared to image-based fingerprints (dark blue), using other methods for the initial mapping result in lower recall scores, especially for dissimilar language pairs $(\mathrm{En} \rightarrow \mathrm{Ko}$ and $\mathrm{En} \rightarrow \mathrm{Ru})$ .
+
+
+Figure 6: Investigating the effect of robust Procrustes. Robust Procrustes helps improve the translation across different language pairs. The effect is more significant on "difficult" pairs, such as English-Russian.
+
+algorithm, given the same initial mapping. Robust Procrustes indeed helps improve over the standard Procrustes, especially when two languages are dissimilar. For instance, on $\mathrm{En} \rightarrow \mathrm{Ko}$ , using robust Procrustes increases the final recall@1 by $12.9\%$ .
+
+# 5.6.3 Effect of the Image Set
+
+Here we check how the images used for making fingerprints affect the performance of WALIP.
+
+Size of image sets. Fig. 7 compares recall@1 scores of WALIP when different number of images (from ImageNet) are used for building fingerprints. When the number of images increases, the recall@1 increases and converges for all pairs. As the languages become more dissimilar, WALIP may need more images to attain good performance. WALIP needs only 1000 to 3000 images to achieve good performance across all evaluated language pairs.
+
+Diversity of images. To see the importance of image diversity, we fix the total number of images as 3000 and vary the number of classes. Table 6 compares the recall@1 of WALIP on $\mathrm{En} \rightarrow \mathrm{Ru}$ varying the image diversity. Here, we use the CIFAR10 dataset for 10 or fewer classes, CIFAR100 for 20-100 classes, and ImageNet for 1000 classes. Note that WALIP achieve high performance only when we use a large number of classes (e.g., more than 37 classes in the Table). This is probably because image sets with higher diversity provide more distinguishing coordinates of fingerprints to obtain more
+
+
+Figure 7: Recall@1 (↑) of WALIP varying the size of image (ImageNet) set used for fingerprints. The performance improves as the number of images increases from 100 to 1000 and then remains mostly unchanged. Hence, a sufficiently large number of images is required.
+
+Table 6: Recall@1 (↑) of WALIP on $\mathrm{En} \rightarrow \mathrm{Ru}$ , varying the number of image classes given a fixed number of images as 3000. WALIP achieves high performance (step 2) when 38 or more classes are used. Furthermore, using 1000-class ImageNet results in the highest initial matching score (62.2) among the settings.
+
+
No. classes
Dataset
Step 1
Step 2
1
CIFAR10
0.9
0.8
2
CIFAR10
0.9
0.6
10
CIFAR10
6.2
8.1
20
CIFAR100
7.6
5.4
37
CIFAR100
9.1
4.4
38
CIFAR100
10.3
82.1
50
CIFAR100
11.1
82.5
100
CIFAR100
11.1
82.3
1000
ImageNet
62.2
83.0
+
+pivot pairs in the initial matching step – the condition for robust Procrustes to learn. Furthermore, compared to other settings, 1000-class ImageNet obtains much better initial matching in step 1.
+
+# 6 Conclusion
+
+We propose WALIP, a novel unsupervised bilingual word alignment method using pretrained CLIP models. WALIP first leverages the visual similarity between words as the auxiliary for matching initial and simple word pairs via the image-based fingerprint representation computed by language-image pretraining models. Then WALIP uses these initial pairs as pivots to learn the linear transformation between two static word embeddings. We introduce a robust Procrustes algorithm based on error-weighting to estimate the linear mapping. Compared with existing baselines, WALIP needs less computation for aligning two embeddings, thanks to the aid of visual information and pretrained CLIP models. WALIP achieves the SoTA alignment performances on several language pairs across word embedding types, especially for pairs in which two languages are highly dissimilar. WALIP also displays the robustness against the dissimilarity of static word embeddings' training corpora.
+
+# 7 Limitations
+
+Despite achieving high translation performance on various language pairs, WALIP has some limitations, coming from the requirements of CLIP models, the presence of visual words, and the structural similarity of static word embedding spaces.
+
+As shown in Fig. 5, the initial mapping in Step 1 of WALIP needs to be sufficiently good for WALIP to achieve high translation performance. The conditions for good initial mappings are (1) well-trained CLIP models and (2) a sufficiently large number of visual words in the two dictionaries. First, our setting assumes the availability of pretrained CLIP models for the two languages. However, this may not be the case for many languages, especially for low-resource ones having small amounts of training data publicly available. We also observe that the CLIP models for non-English languages (either trained from scratch or fine-tuned from a model pretrained on English corpora) are not as good as the OpenAI CLIP trained on English corpora $^4$ in terms of image-text alignment and zero-shot image classification. Fortunately, our results on zero-shot transfer (Fig. 4) indicate that we may only need a few well-trained CLIP models in some major languages and further use them for their highly similar languages. Second, we have shown that image-based fingerprints work the best with visual words and may not show the distinguishable pattern on non-visual words (Fig. 3). Therefore, the two dictionaries need to have a sufficient number of visual words for WALIP to obtain initial pairs with adequate quantity and high accuracy.
+
+Furthermore, WALIP, as well as most existing unsupervised word translation methods (Connseau et al., 2017; Artetxe et al., 2017; Sigurdsson et al., 2020) rely on the structural similarity of static word embedding spaces across languages. However, such linear mapping between two spaces may not exist in several cases, especially when two languages are highly dissimilar. For instance, we observed that the supervision method (with Procrustes) achieved low translation accuracy (approximately $40\%$ ) on the English-Japanese pair evaluated on the Dictionary dataset with HTW-based embeddings, indicating that the linear transformation assumption may not be fully satisfied for these two languages' static word embedding spaces.
+
+# 8 Broader Impact and Ethical Considerations
+
+WALIP provides a simple yet effective solution to word translation, contributing to the progress of machine translation, which brings more benefits to our society. Our method is unsupervised and computationally efficient, thus significantly saving the computing and reducing the need for human labeling. Furthermore, the robustness of WALIP to the dissimilarity of language pairs and the dissimilarity of training corpora for static word embeddings may be beneficial to low-resource languages.
+
+However, employing WALIP without careful consideration and understanding may lead to undesired outcomes. First, the provided dictionaries may contain harmful contexts and racist or sexist content. WALIP can be used to translate these contents to other languages, bringing unwanted adverse effects to society. Second, though achieving the SOTA performances, WALIP still has not attained sufficiently high accuracies (greater than $50\%$ ) on several dissimilar pairs (e.g., $\mathrm{En}\rightarrow \mathrm{Ja}$ ), potentially producing wrong translations for multiple words, and hence having undesired impacts to the users. Third, our methods may inherit biases and undesired contents from language-image (CLIP) models pretrained on large-scale datasets. Applying efficient fine-tuning to the pretrained CLIP models with fairness consideration methods (Gira et al., 2022) may help mitigate these biases.
+
+# References
+
+Jean Alaux, Edouard Grave, Marco Cuturi, and Armand Joulin. 2018. Unsupervised hyper-alignment for multilingual word embeddings. In International Conference on Learning Representations.
+Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451-462.
+Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606.
+García Carlos. 2020. MS-COCO-ES. GitHub repository.
+Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087.
+Wietse de Vries and Malvina Nissim. 2020. As good as new. how to successfully recycle english gpt-2 to make models for other languages. arXiv preprint arXiv:2012.05628.
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE.
+Desmond Elliott, Stella Frank, Loic Barrault, Fethi Bougares, and Lucia Specia. 2017. Findings of the second shared task on multimodal machine translation and multilingual image description. In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, pages 215-233, Copenhagen, Denmark. Association for Computational Linguistics.
+Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. 2016. Multi30k: Multilingual english-german image descriptions. In Proceedings of the 5th Workshop on Vision and Language, pages 70-74. Association for Computational Linguistics.
+Michael Gira, Ruisu Zhang, and Kangwook Lee. 2022. Debiasing pre-trained language models via efficient fine-tuning. In Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion, pages 59-69, Dublin, Ireland. Association for Computational Linguistics.
+John C Gower and Garmt B Dijksterhuis. 2004. Procrustes problems, volume 30. OUP Oxford.
+Edouard Grave, Armand Joulin, and Quentin Berthet. 2019. Unsupervised alignment of embeddings with Wasserstein procrustes. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1880-1890. PMLR.
+
+Patrick JF Groenen, Patrizia Giaquinto, and Henk AL Kiers. 2005. An improved majorization algorithm for robust procrastes analysis. In New developments in classification and data analysis, pages 151-158. Springer.
+Mareike Hartmann, Yova Kementchedjhieva, and Anders Søgaard. 2019. Comparing unsupervised word translation methods step by step. Advances in Neural Information Processing Systems, 32.
+John Hewitt, Daphne Ippolito, Brendan Callahan, Reno Kriz, Derry Tanti Wijaya, and Chris Callison-Burch. 2018. Learning translations via images with a massively multilingual image dataset. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2566-2576.
+Yedid Hoshen and Lior Wolf. 2018. Non-adversarial unsupervised word translation. arXiv preprint arXiv:1801.06126.
+Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan First, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In International Conference on Machine Learning, pages 4411–4421. PMLR.
+Junjie Hu, Mengzhou Xia, Graham Neubig, and Jaime Carbonell. 2019. Domain adaptation of neural machine translation by lexicon induction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2989-3001, Florence, Italy. Association for Computational Linguistics.
+Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904-4916. PMLR.
+Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt: Vision-and-language transformer without convolution or region supervision. In International Conference on Machine Learning, pages 5583-5594. PMLR.
+Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
+Jamie Kiros, William Chan, and Geoffrey Hinton. 2018. Illustrative language understanding: Large-scale visual grounding with image search. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 922-933.
+Lianian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557.
+
+Yangguang Li, Feng Liang, Lichen Zhao, Yufeng Cui, Wanli Ouyang, Jing Shao, Fengwei Yu, and Junjie Yan. 2022a. Supervision exists everywhere: A data efficient contrastive language-image pre-training paradigm. In International Conference on Learning Representations (ICLR).
+Yi Li, Rameswar Panda, Yoon Kim, Chun-Fu Richard Chen, Rogerio S Feris, David Cox, and Nuno Vasconcelos. 2022b. Valhalla: Visual hallucination for machine translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5216-5226.
+Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer.
+Ilya Loshchilov and Frank Hutter. 2016. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983.
+Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. 2019. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2630-2640.
+Rada Mihalcea and Chee Wee Leong. 2008. Toward communicating simple sentences using pictorial representations. Machine translation, 22(3):153-173.
+Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168.
+Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. Advances in neural information processing systems, 26.
+Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
+Aitor Ormazabal, Mikel Artetxe, Gorka Labaka, Aitor Soroa, and Eneko Agirre. 2019. Analyzing the limitations of cross-lingual word embedding mappings. arXiv preprint arXiv:1906.05407.
+Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.
+Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR.
+
+Milos Radovanovic, Alexandros Nanopoulos, and Mirjana Ivanovic. 2010. Hubs in space: Popular nearest neighbors in high-dimensional data. Journal of Machine Learning Research, 11(sept):2487-2531.
+Antonio Scaiella, Danilo Croce, and Roberto Basili. 2019. Large scale datasets for image and video captioning in Italian. *Italian Journal of Computational Linguistics*, 2(5):49–60.
+Peter H Schonemann. 1966. A generalized solution of the orthogonal procrustes problem. Psychometrika, 31(1):1-10.
+Gunnar A Sigurdsson, Jean-Baptiste Alayrac, Aida Nematzadeh, Lucas Smaira, Mateusz Malinowski, Joao Carreira, Phil Blunsom, and Andrew Zisserman. 2020. Visual grounding in video for unsupervised word translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10850-10859.
+Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. arXiv preprint arXiv:1702.03859.
+Anders Søgaard, Sebastian Ruder, and Ivan Vulic. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778-788, Melbourne, Australia. Association for Computational Linguistics.
+Didac Surís, Dave Epstein, and Carl Vondrick. 2020. Globetrotter: Unsupervised multilingual translation from visual alignment. arXiv preprint arXiv:2012.04631.
+Hagai Taitelbaum, Gal Chechik, and Jacob Goldberger. 2019. A multi-pairwise extension of Procrustes analysis for multilingual word translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3560-3565, Hong Kong, China. Association for Computational Linguistics.
+Ivan Vulic, Sebastian Ruder, and Anders Søgaard. 2020. Are all good word vector spaces isomorphic? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3178-3192, Online. Association for Computational Linguistics.
+Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006-1011.
+Ziyan Yang, Leticia Pinto-Alva, Franck Dernoncourt, and Vicente Ordonez. 2020. Using visual feature space as a pivot across languages. In *Findings of the
+
+Association for Computational Linguistics: EMNLP 2020, pages 3673-3678.
+
+Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, and Chunjing Xu. 2021. Filip: Fine-grained interactive language-image pre-training. arXiv preprint arXiv:2111.07783.
+Bernard Ycart. 2012. Letter counting: a stem cell for cryptology, quantitative linguistics, and statistics. arXiv preprint arXiv:1211.6847.
+Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer. 2022. Lit: Zero-shot transfer with locked-image text tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18123-18133.
+Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017a. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1959-1970.
+Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017b. Earth mover's distance minimization for unsupervised bilingual lexicon induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1934-1945.
+Renrui Zhang, Rongyao Fang, Peng Gao, Wei Zhang, Kunchang Li, Jifeng Dai, Yu Qiao, and Hongsheng Li. 2021. Tip-adapter: Training-free clip-adapter for better vision-language modeling. arXiv preprint arXiv:2111.03930.
+Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. 2022. Conditional prompt learning for vision-language models. In The IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR).
+Mingyang Zhou, Runxiang Cheng, Yong Jae Lee, and Zhou Yu. 2018. A visual attention grounding neural model for multimodal machine translation. arXiv preprint arXiv:1808.08266.
+
+# Appendix
+
+Section A presents the pseudocodes for algorithms discussed in the main paper. We provide details of the experimental setting, chosen hyperparameters, computing resources, and running times in Section B for reproducibility.
+
+# A Algorithms
+
+We present the pseudocodes for algorithms in Section 4 of the main paper, including the WALIP algorithm (Algo. 2), the visual-word filtering algorithm (Algo. 3), and the word matching algorithm (Algo. 4).
+
+Algorithm 2 WALIP
+
+Input: Source dictionary $A_{\mathrm{dict}} = \{a_1, \dots, a_{n_a}\}$ , target dictionary $B_{\mathrm{dict}} = \{b_1, \dots, b_{n_b}\}$ , CLIP models $(A^{\mathrm{txt}}, A^{\mathrm{img}})$ , $(B^{\mathrm{txt}}, B^{\mathrm{img}})$ , set of images $G = \{g_1, \dots, g_d\}$ , word vectors $T_A$ for $A_{\mathrm{dict}}$ , $T_B$ for $B_{\mathrm{dict}}$ , number of alignment steps $M$ , threshold quantile $q$ , number of candidates $k$ .
+
+Output: $\pi :[n_a]\to [n_b]$ such that $a_{\pi (i)}\equiv b_i$ /\*STEP1.PAIRING USING FINGERPRINTS \*/ for language $l\in \{a,b\}$ do
+$\begin{array}{rl} & {\big{\lfloor}f(l_i)\leftarrow \mathrm{f i n g e p r i n t~i n~(1)~f o r~}i\in [n_l]}\\ & {\mathcal{F}\leftarrow \{f(l_i)\}_{l\in \{a,b\} ,i\in [n_l]}}\\ & {\mathcal{F}\leftarrow \mathrm{V i s u a l - W o r d - F i l t e r i n g}(\mathcal{F})}\\ & {\pi_0\leftarrow \mathrm{M a t c h i n g - F i l t e r i n g}(\mathcal{F},q)}\\ & {/*\mathrm{STEP2. I T E R A T I V E R O B U S T P R O C R U S T E S}*}\\ & {\mathrm{Set}Q_s = \{0.5,0.5,0.3,0.1\} ,K_s = \{10,5,3,1\}}\\ & {\mathrm{Set}q = 0.5,k = 10,\epsilon_0 = \infty ,\delta = 0.5} \end{array}$ Set $q = 0.5$ $k = 10$ $\epsilon_0 = \infty$ $\delta = 0.5$
+for $m\in \{1,\dots ,M\}$ do
+
+$s_{m - 1}^{A}\gets \{i\in [n_{a}]:\pi_{m - 1}(a_{i})\in B_{\mathrm{dict}}\}$ $s_{m - 1}^{B}\gets \{j\in [n_{b}]:\exists a_{i}$ s.t. $\pi_{m - 1}(a_i) = b_j\}$ $T_A^\prime \leftarrow T_A[s_{m - 1}^A ],\qquad T_B^\prime \leftarrow T_B[s_{m - 1}^B ]$ $W^{\star}\gets$ Robust-Procrustes $(T_A^{\prime},T_B^{\prime})$ $T_{A}\gets T_{A}W^{\star}$ $\epsilon_{m} = \| T_{A} - T_{B}\|_{F}$ if $\epsilon_{m} < \epsilon_{m - 1} + \delta$ then t<-min{M/10],4} q<-Qs[t], k<-Ks[t] $\pi_{m}\gets$ Matching-Filtering( $\{T_A,T_B\} ,q,k)$ $\pi \gets$ Matching-Filtering( $\{T_A,T_B\} ,0,1)$
+
+# B Experimental Setup, Implementation, and Running
+
+We present details of the experimental setting (Sec. 5.1 in main paper) and the chosen hyperpa
+
+# Algorithm 3 Visual-Word-Filtersing
+
+Input: Fingerprints $\mathcal{F} = \{f(l_i)\}_{l\in \{a,b\}, i\in [n_l]}$
+
+Output:Updated fingerprints $\mathcal{F}$
+
+for $l\in \{a,b\}$ do
+
+$$
+\begin{array}{l} f _ {i, j} ^ {(l)} \leftarrow j \text {- t h e l e m e n t o f} f \left(l _ {i}\right), \text {f o r} j \in [ d ] \\ f _ {i, \max } ^ {(l)} \leftarrow \max _ {j} f _ {i, j} ^ {(l)} \text {f o r} i \in [ n _ {l} ] \\ S _ {l} \leftarrow \{i: f _ {i, \max } ^ {(l)} \geq \operatorname {m e d i a n} _ {i} \left(f _ {i, \max } ^ {(l)}\right) \} \\ \end{array}
+$$
+
+for $i\in S_l$ do
+
+$$
+\begin{array}{l} \bar {q} \leftarrow 0. 9 - t h q u a n t i l e o f \{f _ {i, k} ^ {(l)} \} _ {k = 1} ^ {d} \\ f _ {i, j} ^ {(l)} \leftarrow f _ {i, j} ^ {(l)} \cdot \mathbf {1} _ {\{f _ {i, j} ^ {(l)} \geq \bar {q} \}} \\ f _ {i, j} ^ {(l)} \leftarrow f _ {i, j} ^ {(l)} / | f _ {i, j} ^ {(l)} | \\ \end{array}
+$$
+
+# Algorithm 4 Matching-Filtering
+
+Input: $\mathcal{F} = \{f(l_i)\}_{l\in \{a,b\}, i\in [n_l]}$
+
+Threshold quantile $q$
+
+Number of candidates $k$ ( $k = 1$ by default).
+
+Output: Word index mapping $\pi :[n_a]\to [n_b]$
+
+$$
+c _ {i, j} \leftarrow \operatorname {C S L S} \left(f \left(a _ {i}\right), f \left(b _ {j}\right)\right) \text {f o r} i \in [ n _ {a} ], j \in [ n _ {b} ]
+$$
+
+$$
+\bar {c} \leftarrow q - t h q u a n t i l e o f \{c _ {i, j} \}
+$$
+
+$$
+\pi \leftarrow \text {e m p t y} [ n _ {a} ] \text {t o} [ n _ {b} ]
+$$
+
+for $i\in [n_a]$ do
+
+$$
+\begin{array}{l} J ^ {\star} \leftarrow \{j \in [ n _ {b} ]: c _ {i, j} \geq k - \text {t h} c _ {i, j} \} \\ \pi (i) \leftarrow \{j \in J ^ {\star}: c _ {i, j} \geq \bar {c} \} \\ \end{array}
+$$
+
+rameters in (B.1), the computing sources, running time, and validation performance in (B.2).
+
+# B.1 Experimental Setup
+
+Static word embeddings. We use two embeddings: HowToWorld (HTW)-based Word2Vec (Miech et al., 2019; Sigurdsson et al., 2020) that trains Word2Vec (Mikolov et al., 2013b) on HTW video datasets and Wiki-based Fasttext (Bojanowski et al., 2016) that trains Fasttext on the Wikipedia corpus.
+
+Evaluation benchmark and datasets. We use the Dictionary datasets (Sigurdsson et al., 2020) which are test sets of MUSE bilingual dictionaries (Conneau et al., 2017). Each test set provides a set of matched pairs in two languages where each word in the source language can have multiple translations in the target language. For instance, the $\mathrm{En} \rightarrow \mathrm{Fr}$ dictionary has 1500 unique English words and 2943 corresponding French words. All pairs used in our evaluation are $\mathrm{En} \rightarrow \{\mathrm{Fr}, \mathrm{Ru}, \mathrm{It}, \mathrm{Ko}, \mathrm{Ja}\}$ , and $\mathrm{It} \rightarrow \mathrm{Fr}$ . Input evaluation dictionaries are pre-processed to ensure
+
+the delimiting character is a white-space character and that there are no duplicate synonym pairs. Words that do not appear in the word2vec files for HowToWorld-based or Wiki-based embeddings were removed. We also provide the modifications of the original datasets that remove non-native words (e.g., 'dot, gif' in the Korean dictionary). We provide all evaluated datasets in our source codes. The test dictionaries can also be found at https://github.com/facebookresearch/
+
+MUSE and https://github.com/gsig/ visual-grounding/tree/master/datasets.
+
+Evaluation metrics. Our metric is $recall@n$ used in (Sigurdsson et al., 2020) for $n = 1, 10$ : the retrieval for a query is correct if at least one of $n$ retrieved words is the correct translation of the query. $Recall@n$ presents the fraction of source words that are correctly translated. In our setting, the $recall@1$ is equivalent to precision@1, and the matching accuracy used in (Conneau et al., 2017).
+
+Baselines. We describe what baselines we have compared in this paper. CLIP-NN is a simple baseline that performs double 1-nearest neighbor (1-NN) on CLIP embeddings: Given a source word, we perform the 1-NN to find the nearest image (using source CLIP) and then perform the 1-NN on the target CLIP to find the nearest target word. For recall@n, we perform the similar double $k$ -NN where $k = \lceil \sqrt{n} \rceil$ . MUSE (Conneau et al., 2017) is a text-only method that learns the crosslingual linear mapping via adversarially aligning embeddings' distributions and iterative refinement with Procrustes. As the adversarial training is sensitive to initialization, we follow the procedure in (Sigurdsson et al., 2020) and report the highest observed performance across different initializations on the test set. As a result, this represents an upper bound on the true performance of the baseline. MUVE (Sigurdsson et al., 2020) replaces the linear layer learned in the first stage of MUSE with the AdaptLayer learned by jointly training the embeddings of videos and captions, shared across languages. The AdaptLayer allows monolingual embeddings to be transformed into a shared space so the rest of the network can be shared, even if the input languages are different. Their results suggest that visually grounding translation with video allows for more robust translation. We use their reported performances (Sigurdsson et al., 2020) in our comparison. Globetrot
+
+ter (Surís et al., 2020) uses image-caption pairs to jointly align the text embeddings of multiple languages to image embeddings using a contrastive objective. Even though their model was trained on pairing sentences with images, they show that the text representation learned by their model can also be used for unsupervised word translation by using the Procrustes algorithm on the learned word embeddings. We use their word embeddings for word translation. We also evaluate the supervision method using the Procrustes on different ground-truth translation pairs and use its results as an upper bound of performance.
+
+Implementation details. Here, we provide the details for implementing our algorithms.
+
+CLIP models. We use publicly available pretrained CLIPs for English $^{5}$ , Russian $^{6}$ , Korean $^{7}$ , and Japanese. $^{8}$ For other languages, we fine-tune English CLIP models on Multi30K (Elliott et al., 2016, 2017) and MS-COCO datasets (Lin et al., 2014; Scaiella et al., 2019; ?) with translated captions for each target language. Precisely, we fine-tune each model for 20 epochs using the NCEInfo loss (Oord et al., 2018) without changing the architectures of the original CLIP's encoders. We use Adam optimizer (Kingma and Ba, 2014) ( $\beta_{1}, \beta_{2} = 0.9, 0.98$ ) with a learning rate of 1e-7 and cosine annealing scheduler (Loshchilov and Hutter, 2016).
+
+Image datasets. We use 3000 images from ImageNet (Deng et al., 2009). We find that high-resolution images provide the best initial mappings among tested image data.
+
+Prompts for words in CLIPs. As for the input of CLIPs, we convert every single word to a complete sentence. We use the prompt templates suggested in (Radford et al., 2021) and apply prompt-ensemble (Radford et al., 2021) for the best embedding. In particular, we use a set of (two to seven) prompts for each word and average these text embeddings as the word embedding.
+
+Hyper-parameters. The robust Procrustes algorithm (Algo. 1) uses $M = 5$ iterations. In Algo. 2, we use $M = 40$ alignment iterations in Step 2 and select the best model by our evaluation loss. We observe that the evaluation losses on pairs of similar languages (e.g., English-French) converge quickly
+
+Table 7: Estimated WALIP validation loss (Euclidean distance) on several language pairs performed on the HTW-based embedding and Dictionary dataset.
+
+
En→Fr
En→Ko
En→Ja
Avg. Dist.
8.49
8.58
8.55
+
+Table 8: Estimated WALIP validation loss (Euclidean distance) on several language pairs performed on the Wiki-based embedding and Dictionary dataset.
+
+
Avg. Dist.
En→Ko15.70
En→Ru14.24
En→Fr10.99
En→It11.67
Avg. Dist.
En→Es10.79
En→De12.06
Es→De13.28
It→Fr11.54
+
+after a few iterations, while the dissimilar pairs require more iterations. For quantile threshold $q$ , we use the simple scheme by gradually reducing $q$ from a set of discrete values $\{0.7, 0.5, 0.3, 0.1\}$ . The number of candidates $k$ is decayed using the following values $\{10, 5, 3, 1\}$ .
+
+# B.2 Computation and Evaluation of WALIP
+
+Validation. As WALIP is unsupervised, we estimated the validation error (or loss) by evaluating the average squared Euclidean distance between the mapped source embeddings and the target word embeddings. We use this criterion to select our best mappings. We report the validation errors in Table 7 and Table 8 for evaluated language pairs on two types of static word embeddings. We can see that the validation errors of the dissimilarity of language pairs (e.g., $\mathrm{En} \rightarrow \mathrm{Ko}$ ) are higher than the similar pairs (e.g., $\mathrm{En} \rightarrow \mathrm{Fr}$ ). We report the recall@1 scores corresponding to mappings with the smallest validation errors.
+
+Computing resources and time. We run our algorithms and baselines on an NVIDIA GeForce RTX 3090 GPU. The average running time of WALIP is about less than 2 minutes, while MUSE models take approximately an hour for each language pair.
+
+Number of parameters of WALIP models. Each of our pretrained CLIP models has about 150 million trainable parameters. In ablation studies, we have tested WALIP with larger versions of CLIP models, with upwards of 400 million trainable parameters. However, we find that both WALIP versions with smaller and large CLIP models share similar translation performances across different language pairs.
\ No newline at end of file
diff --git a/utilizinglanguageimagepretrainingforefficientandrobustbilingualwordalignment/images.zip b/utilizinglanguageimagepretrainingforefficientandrobustbilingualwordalignment/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d42d36332da44a658f4559677e9f9b4463d0cc07
--- /dev/null
+++ b/utilizinglanguageimagepretrainingforefficientandrobustbilingualwordalignment/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:06e0eb7110cd242bee413b950cedd49527f8868b39ce36d8d5f8ce5f9521d184
+size 405991
diff --git a/utilizinglanguageimagepretrainingforefficientandrobustbilingualwordalignment/layout.json b/utilizinglanguageimagepretrainingforefficientandrobustbilingualwordalignment/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c15443ef791f89a4cb36ffdd49682979bc3f2e22
--- /dev/null
+++ b/utilizinglanguageimagepretrainingforefficientandrobustbilingualwordalignment/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c0a8b6382ec204ae5b41c355beebbef49fddfdaa15792bbce9d07d7aed381e5d
+size 634012
diff --git a/validityassessmentoflegalwillstatementsasnaturallanguageinference/58079a41-233d-4dcd-8635-6f1c4c287c62_content_list.json b/validityassessmentoflegalwillstatementsasnaturallanguageinference/58079a41-233d-4dcd-8635-6f1c4c287c62_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..105a010dd2c062114685624d42f12d4667aa39d1
--- /dev/null
+++ b/validityassessmentoflegalwillstatementsasnaturallanguageinference/58079a41-233d-4dcd-8635-6f1c4c287c62_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:39196fe80b132b37994c3ce5f903974fe01dbba308913ccbaa25284a61d3b0bf
+size 66171
diff --git a/validityassessmentoflegalwillstatementsasnaturallanguageinference/58079a41-233d-4dcd-8635-6f1c4c287c62_model.json b/validityassessmentoflegalwillstatementsasnaturallanguageinference/58079a41-233d-4dcd-8635-6f1c4c287c62_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..327790f98f3e598f2b1871191291b9d30c8dc496
--- /dev/null
+++ b/validityassessmentoflegalwillstatementsasnaturallanguageinference/58079a41-233d-4dcd-8635-6f1c4c287c62_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:934379904f90be1c777cc567e07fc80ab8b9c409ec414dfc55103291a0f67b45
+size 78802
diff --git a/validityassessmentoflegalwillstatementsasnaturallanguageinference/58079a41-233d-4dcd-8635-6f1c4c287c62_origin.pdf b/validityassessmentoflegalwillstatementsasnaturallanguageinference/58079a41-233d-4dcd-8635-6f1c4c287c62_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9cb7cc7a8a87d9e954d15488d3378a1875ac3149
--- /dev/null
+++ b/validityassessmentoflegalwillstatementsasnaturallanguageinference/58079a41-233d-4dcd-8635-6f1c4c287c62_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3c0bfe841ef50a038921cfc04e58f2f835339efd483bac7383436f2f2796c573
+size 2620018
diff --git a/validityassessmentoflegalwillstatementsasnaturallanguageinference/full.md b/validityassessmentoflegalwillstatementsasnaturallanguageinference/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f40dfdf3c08f4c7d5c5df6fd1cffb9a0b06162b3
--- /dev/null
+++ b/validityassessmentoflegalwillstatementsasnaturallanguageinference/full.md
@@ -0,0 +1,243 @@
+# Validity Assessment of Legal Will Statements as Natural Language Inference
+
+Alice Saebom Kwak, Jacob O. Israelsen, Clayton T. Morrison, Derek E. Bambauer, Mihai Surdeanu
+
+The University of Arizona, Tucson, Arizona, USA
+
+{alicekwak, jisraelsen, claytonm, derekbambauer, msurdeanu}@arizona.edu
+
+# Abstract
+
+This work introduces a natural language inference (NLI) dataset that focuses on the validity of statements in legal wills. This dataset is unique because: (a) each entailment decision requires three inputs: the statement from the will, the law, and the conditions that hold at the time of the testator's death; and (b) the included texts are longer than the ones in current NLI datasets. We trained eight neural NLI models in this dataset. All the models achieve more than $80\%$ macro F1 and accuracy, which indicates that neural approaches can handle this task reasonably well. However, group accuracy, a stricter evaluation measure that is calculated with a group of positive and negative examples generated from the same statement as a unit, is in mid 80s at best, which suggests that the models' understanding of the task remains superficial. Further ablative analyses and explanation experiments indicate that all three text segments are used for prediction, but some decisions rely on semantically irrelevant tokens. This indicates that overfitting on these longer texts likely happens, and that additional research is required for this task to be solved.
+
+# 1 Introduction
+
+Natural language inference (NLI) in the legal domain has not been widely investigated, despite its importance and potential. One such important NLI application is validity assessment of legal documents such as wills. Legal procedures for creating and executing wills are evolving rapidly. Processing a will via probate is a costly, time-consuming process that can be exacerbated by errors or by challenges to validity. These problems will likely increase as people increasingly employ electronic wills. Developing natural language techniques that can determine a will's validity at creation and execution can increase validity, conserve legal resources, and effectuate the author's intent. To this end, this work explores how NLI models can be employed to evaluate the validity of will statements.
+
+NLI in general deals with determining whether a premise entails, contradicts, or is neutral to the hypothesis given. Our project made two important changes to this approach to fit legal documents. First, our dataset contains three input types: statements from wills (as hypotheses), laws (as premises), and conditions, which are circumstances at the time of probate (e.g., eligibility of the testator or witnesses). This adaptation is necessary as will statements' validity cannot be evaluated without considering both circumstances at the time of probate ("conditions") and relevant legal rules ("laws"). Also, we switched the labels from entailment, contradict, neutral to support, refute, unrelated to better represent the relation between will statements and laws.
+
+The major contributions of this work are:
+
+- We create an open-access annotated dataset with 1,014 data points, generated from 23 publicly available wills. In addition to the three-tuple setting unique to the legal domain, this dataset also contains texts considerably longer than in other open-domain NLI datasets, such as SNLI (Bowman et al., 2015). The average length of our texts is 269 tokens, while the average lengths of premises and hypotheses in SNLI are 8 and 14 tokens, respectively.
+- We demonstrate that validity assessment of legal will statements can be handled reasonably well with state-of-the-art NLI models when trained on our dataset. However, low scores in group accuracy, which is a stricter evaluation measure calculated with a group of positive and negative examples generated from the same statement as a unit, indicate that work remains before NLI models fully understand legal language.
+- We explain how the trained models utilize our dataset via ablation tests that remove various
+
+pieces of information at prediction time (laws, conditions, or both), and through post-hoc explainability analyses using LIME (Ribeiro et al., 2016). Our analyses indicate that in most cases the NLI classifiers use meaningful phrases from all three pieces of text, indicating the models do capture useful information. However, in some situations, the models use features that are not intuitive for humans, indicating that the task is incomplete.
+
+# 2 Related Work
+
+Natural language inference (NLI), also known as Recognizing Textual Entailment (RTE), determines entailment relations between a pair of sentences: a premise and a hypothesis. Their relationship is either: a) entailment, if a premise entails a hypothesis; b) contradiction, if a premise contradicts a hypothesis; or c) neutral, if a premise neither entails nor contradicts a hypothesis. NLI has been a key framework in natural language processing since Dagan et al. (2006) proposed the RTE challenge.
+
+Numerous datasets exist for general NLI tasks. For example, the Stanford Natural Language Inference (SNLI) Corpus (Bowman et al., 2015) dataset contains about 570,000 pairs of sentences (premises and hypotheses) generated from Flickr image captions. More recently, datasets for domain-specific NLI tasks are being introduced. SciNLI (Sadat and Caragea, 2022) is drawn from scientific texts. It contains 107,412 sentence pairs extracted from papers in natural language processing and computational linguistics.
+
+There have been a few NLI-related resources for the legal domain, including IFlyLegal (Wang et al., 2019), StAtutory Reasoning Assessment (SARA) dataset (Holzenberger et al., 2020), Graph-based Causal Inference (GCI) framework (Liu et al., 2021), and ContractNLI (Koreeda and Manning, 2021). One IFlyLegal module was developed for natural language article inference; it provides relevant legal articles when asked legal questions. SARA contains rules extracted from the US Internal Revenue Code and natural language questions. GCI generates causal graphs based on fact descriptions. Though the framework was not specifically designed for law, Liu et al. (2021) demonstrated that it can be utilized for legal text analysis. ContractNLI is a dataset consisting of 607 legal contracts designed to handle document-level natural
+
+language inference.
+
+Our work differs from these existing NLI works in two ways: (a) it contains three types of input information; and (b) it operates in a pragmatic middle space with respect to text length: larger than datasets with sentence-level texts, which are insufficient to capture legal details, but shorter than document-level inference datasets.
+
+# 3 Dataset
+
+Our dataset includes three types of entries (will statements, applicable law, and facts representing external state) rather than the usual two types. Validity often depends upon such external circumstances. Thus, a will statement can be legally valid under some facts, but not others. Hence, a will can change from valid to invalid or vice versa; wills must, at minimum, be evaluated when executed (drafted and signed) and when probated (where a court determines whether to implement its provisions). For example, if a will contains a statement appointing a specific person as executor, that person's eligibility must be verified. This requires information about both the law and the person's circumstances at probate time. For example, according to Tennessee Code section 40-20-115, any person who has been imprisoned cannot be an executor. Thus, to assess eligibility of a Tennessee executor in Tennessee, one must know whether they have been in prison at probate time.
+
+# 3.1 Data Collection
+
+Our data was collected from the U.S. Wills and Probates datasets in Ancestry, which contains documents in the public domain.1 We chose 23 wills based on three criteria: 1) whether the wills were handwritten; 2) the wills' execution and probation dates; and 3) where the wills were executed and probated. We excluded handwritten wills due to the difficulty of OCR text recognition. Execution and probation dates can affect interpretation of wills; we excluded wills executed before 1970 and probated before 2000. Lastly, we only collected wills from Tennessee, because including wills from multiple states would require analyzing each state's laws governing wills. Tennessee had the greatest number of wills meeting our criteria.
+
+All collected wills were anonymized. Personal information was replaced with special tokens denoting the type of information (such as [Person-
+
+
+Figure 1: A demonstration of the annotation procedure. Given a law and condition, each statement is evaluated for validity. Support: it is supported by the law and condition provided; Refute: refuted by the law and condition. Unrelated: not relevant to the law. When a statement and a law are unrelated to each other, a condition is not required for classification. However, it is still assigned one of the conditions from either support or refute case. The goal is to prevent NLI models from depending on contextual difference (i.e., "texts without conditions are all unrelated cases") rather than features when making predictions.
+
+$n$ , [Address- $n$ ], [Number- $n$ ], where $n$ identified each person or object in the will. (Suntwal et al. (2019) suggested this anonymization method). Anonymization was performed manually to prevent including personal information in the dataset.
+
+# 3.2 Annotation
+
+Each will statement was annotated as support, refute, or unrelated based on a given condition and law. (Our annotators set hypothetical conditions when necessary to categorize statements.)
+
+Importantly, each statement was annotated multiple times. A statement was annotated once as support, once as refute, and thrice as unrelated. This ensured that every statement was labeled with all three classes in the same ratio (support:refute:unrelated=1:1:3). Also, annotating statements three times more with "unrelated" than support and refute enabled including a greater range of laws in the dataset. Formally, our annotation procedure included the following steps:
+
+(1) preprocessing: extracting texts from collected wills using OCR3, and copying statements into the dataset;
+(2) for each statement, identifying the Tennessee
+
+legal provision that supported, refuted, or was unrelated to the statement's validity;
+
+(3) adding a condition specifying external circumstances relevant to validity;
+(4) repeating these steps to generate five annotations per statement;
+(5) anonymizing all statements by replacing personal information with special tokens.
+
+Figure 1 demonstrates the annotation procedure with examples.
+
+Two annotators participated in the task. One is a law student, and annotation was supervised by author Bambauer, a law professor. The other annotator does not have legal training, but ongoing discussions and reviews ensured the uniform quality. The annotators contributed equally. Annotators worked individually but shared annotation guidelines. After the annotation was complete, cross-annotation was conducted on 200 randomly selected items (100 items from each annotator's set; each annotator worked blinded on items drawn from the other annotator's set) to check inter-annotator agreement. We calculated Kappa agreement score based on the cross-annotation result, and the score is 0.91 (rounded to hundredth).
+
+After the annotation was complete, the dataset was split into training, development, and test sets. There are 1014 data points in our dataset, split $50:25:25\%$ (train: 504, development: 255, test: 255). When splitting, a group containing annotations from a single statement (one support, one refute, and three unrelated) was treated as a single
+
+
+a) Texts (statement+condition+law)
+
+
+b) Statements
+
+
+c) Conditions
+Figure 2: Histograms demonstrating the length distribution of a) full texts (i.e., statement+condition+law), b) statements, c) conditions, and d) laws in terms of token counts. Token counts are plotted on the X-axis; the Y-axis indicates the number of texts/statements/conditions/laws for each bin.
+
+
+d) Laws
+
+
Datasets
Texts
Tokens
Train
504
135218
Development
255
69643
Test
255
68042
Total
1014
272903
+
+Table 1: The number of texts and tokens contained in each dataset and in total
+
+unit. Each annotator's work was equally represented in all three datasets. All 23 wills appear in more than one of the train, dev, and test partitions. However, given the nature of the will statements from the wills and how they were annotated, data leakage is unlikely. Will statements are independent and generally do not convey information about other statements from the same wills. Summary statistics are in Table 1 and Figure 2.
+
+# 3.3 Characteristics of the Dataset
+
+Our dataset has two characteristics distinguishing it from other datasets: a) texts are composed of three input types, and b) texts contain a large number of tokens.
+
+First, our task requires a third input type: a condition, since validity often depends upon conditions.
+
+Second, our texts tend to contain a large number of tokens. $44\%$ of our texts contain 200 or more tokens. The average token number was 269.14; 79 texts contain more than 512 tokens (the threshold for most NLI models). Popular NLI datasets are shorter.
+
+# 3.4 Implementation
+
+We trained multiple NLI models with our dataset to assess their performance: four transformer models and four sentence-transformer models. The transformer models include bert-base-uncased (Devlin et al., 2018), distilbert-base-uncased (Sanh et al., 2019a), roberta-large-mnli (Liu et al., 2019), and longformer-base-4096 (Beltagy et al., 2020). Bert-base-uncased and dilstilbert-base-uncased were trained to set baselines on the task. In addition to the baseline models, we used roberta-large-mnli and longformer-base-4096. Among sentence-transformer models (Reimers and Gurevych, 2019), four pretrained models with top average performances (based on the Model Overview page $^{4}$ ) were chosen: all-mpnet-base-v2 $^{5}$ , multi-qa-mpnet-base
+
+dot-v1 $^{6}$ , all-distilroberta-v1 (Sanh et al., 2019b) $^{7}$ , and all-MiniLM-L12-v2 (Wang et al., 2020) $^{8}$ .
+
+To distinguish between statements, laws, and conditions in the concatenated texts, we prefixed each with a special token: [STATE], [LAW], and [COND].
+
+Both transformer models and sentence-transformer models were trained on PyTorch 1.11.0 with Cuda 11.3. All transformer models were trained using the Trainer class provided by HuggingFace. Learning rates and training epochs were tuned on the development partition. Sentence-transformer models were trained with Sentence Transformer Fine-Tuning (SetFit) proposed by Wasserblat (2021). It was slightly adapted to fit the multi-class classification task, but its fundamentals remain intact. SetFit utilizes sentence pairs generated within the same class as training data. It fits the model with the data to minimize the Softmax Loss. Once the model is fitted, it is used to encode the training and development (or test, when it is a testing phase) datasets. The encoded data is used to fit the Logistic Regression classifier (with 'liblinear' solver) which, finally, makes predictions.
+
+All models except for longformer-base-4096 were trained and evaluated on the truncated datasets due to the models' token number limitations. The models can only process texts with 512 or fewer tokens without truncation (all-MiniLM-L12-v2: 256 tokens; all-mpnet-base-v2: 384 tokens; other models, except longformer-base-4096: 512 tokens). When truncating datasets, the ratio of average token numbers among three input types (statements, laws, and conditions) was considered to prevent losing excessive information from a single type of input.
+
+# 4 Results and Analysis
+
+# 4.1 Evaluation Measures
+
+We report results using standard accuracy, precision, recall, and F1 scores. To ensure all labels are well represented, precision, recall, and F1 scores were computed with Macro average. Additionally, as suggested by Elazar et al. (2021), we introduce a new measure called group accuracy (GA). GA
+
+is calculated with a group of positive and negative examples from the same statement (rather than each text) as a unit. If a group has one or more incorrect predictions, the group is incorrect. If a model correctly understands a will statement, it should perform equally well on all examples from the same statement.
+
+# 4.2 Results from Trained Models
+
+Table 2 shows the models' performances on our test partition. Overall, the models demonstrated good performances. Each achieved more than $80\%$ in all metrics but group accuracy. Roberta-large-nli showed the best performance. It achieved over $96\%$ in all metrics but group accuracy $(84.31\%)$ , suggesting it can handle the task reasonably well. However, the large difference between accuracy and group accuracy suggests that the models' understanding of the task is superficial.
+
+Accuracy for the unrelated label was higher than for support and refute in all models except all-mpnet-base-v2 and multi-qa-mpnet-base-dot-v1. This higher accuracy for unrelated label may be partially attributable to the gap between accuracy and group accuracy, as it would inflate overall accuracy. However, since the gap between accuracy and group accuracy is not significantly smaller for all-mpnet-base-v2 and multi-qa-mpnet-base-dot-v1 (i.e., the models where accuracy for unrelated was not higher than the other labels), it is more likely that the gap originated from the strictness of group accuracy and the models' superficial understanding of will statements.
+
+Further, roberta-large-mnli (trained with truncated inputs) showed better performance than longformer-base-4096 (trained with full length inputs). This suggests that models with token number limitations can still perform well with long inputs when truncated properly.
+
+# 4.3 Experiment with Different Input Lengths
+
+To investigate whether the models' performance was affected by input length and/or input truncation, we varied input lengths. Inputs were classified into three categories by length: a) short (equal or less than 192 or 256 tokens); b) regular (larger than 192 or 256 tokens and equal or less than 384 or 512); and c) long (larger than 384 or 512)9. The
+
+
Model
Precision
Recall
F1
Accuracy
GA
Transformers models
bert-base-uncased
81.47
82.20
81.81
87.06
49.02
distilbert-base-uncased
82.37
82.59
82.48
87.06
50.98
longformer-base-4096
94.09
93.69
93.85
94.90
74.51
roberta-large-mnli
96.67
96.25
96.42
96.86
84.31
Sentence-Transformers models
all-MiniLM-L12-v2
81.50
84.54
82.64
85.49
50.98
all-disilroberta-v1
83.43
85.99
84.22
87.06
49.02
multi-qa/mpnet-base-dot-v1
91.48
94.13
92.65
93.73
74.51
all/mpnet-base-v2
92.93
95.05
93.82
94.90
76.47
+
+Table 2: Results of all classifiers trained on our dataset using five measures: precision, recall, F1, accuracy, and group accuracy (GA in table, see Section 4.1). Precision, recall, and F1 scores were computed with Macro average. Group accuracy is calculated with positive and negative examples generated from the same statement as a unit. If a group contains one or more incorrect predictions, the group is considered incorrect.
+
+
Model
Precision
Recall
F1
Accuracy
all-mpnet-base-v2 (sample n = 52)
token <= 192
93.75
93.15
92.91
94.23
192 < token <= 384
90.48
89.10
89.63
94.23
384 < token
86.35
89.49
87.08
88.46
longformer-base-4096 (sample n = 20)
token <= 256
95.24
88.89
90.77
95.00
256 < token <= 512
80.56
71.67
70.93
80.00
512 < token
73.81
66.67
66.93
75.00
roberta-large-mnli (sample n = 20)
token <= 256
100
100
100
100
256 <token <= 512
94.87
85.00
88.76
90.00
512 <token
94.44
93.33
93.27
95.00
+
+Table 3: Results with different input lengths. Models generally performed worse when token numbers in inputs were larger. Roberta-large-mnli performed better than longformer-base-4096 in predicting long inputs (token > 512) despite truncation. Roberta-large-mnli also showed better performance on long inputs than regular inputs, indicating input truncation did not worsen the model's performance.
+
+number of inputs in each category varied. To control the impact of varying sample size, categories with larger sample sizes were reduced by random sampling to match the smallest sample size. For this experiment, we used the three models with best performance on the full dataset.
+
+Table 3 shows the results. Models generally performed worse with larger input token numbers. All three models performed best with inputs with smaller token numbers. One interesting finding is that roberta-large-mnli performed better than longformer-base-4096 in predicting inputs with large token numbers $(\mathrm{n} > 512)$ despite truncation. Roberta-large-mnli showed better performance on long inputs than regular inputs, indicating truncation did not negatively affect the model's perfor
+
+mance. This finding aligns with the observation from the overall results that models with token number limitation can still show good performances on long inputs when properly truncated. However, results differed with a smaller token number limitation. All-mpnet-base-v2 performed worst on long inputs (where truncation occurred), indicating truncation negatively impacted performance.
+
+# 4.4 Results from Ablation Experiment
+
+To determine whether the models correctly rely on all three types of texts (statements, conditions, and laws), we conducted an ablation experiment, training the models with datasets lacking one (law or condition) or two types (law+condition) of inputs. Poliak et al. (2018) suggested a similar experiment
+
+
Model
Precision
Recall
F1
Accuracy
GA
Statements and laws
all-mpnet-base-v2
75.64
76.47
75.98
81.96
27.45
longformer-base-4096
75.72
75.98
75.79
82.75
37.25
roberta-large-mnli
77.93
77.51
77.55
84.31
39.22
Statements and conditions
all-mpnet-base-v2
46.67
57.13
44.80
44.31
0.00
longformer-base-4096
58.00
49.44
51.30
58.82
0.00
roberta-large-mnli
37.63
41.72
38.08
56.86
0.00
Statements only
all-mpnet-base-v2
36.76
35.94
34.04
36.47
0.00
longformer-base-4096
18.17
33.33
23.52
54.51
0.00
roberta-large-mnli
18.17
33.33
23.52
54.51
0.00
+
+Table 4: Results from the ablation experiment where three models were trained with datasets lacking one (law or condition) or two types (law+condition) of inputs. Results demonstrate that the models' performances deteriorate if one or more input type(s) is omitted. Thus, both types of inputs affect the models' performances, though laws have larger impact than conditions.
+
+to this one to set hypothesis-only baselines. We used the three models with the best performances on the full dataset for this experiment.
+
+Table 4 shows the results. Performance deteriorated if any input type was omitted. Among the models trained on partial data, those trained with statements and laws performed better, with F1 scores over $75\%$ . However, group accuracy dropped to the 30s or even $20\%$ s, indicating actual understanding decreased considerably. Models trained with statements and conditions performed significantly worse. Results ranged between $38\%$ and $52\%$ for F1, and group accuracy dropped to 0. Models trained only with statements performed worst, with F1 scores dropping to $35\%$ or lower.
+
+This degradation from partial data shows including both conditions and laws positively affects the models' performance. Models use all three types of inputs when making predictions10. Laws have more impact on performance than conditions, since deterioration in results without laws was more significant than from models trained without condi
+
+tions. This is realistic, since laws set the parameters within which conditions operate to make a given provision valid or invalid.
+
+# 4.5 Understanding Results with LIME
+
+To clarify model behavior, we used Local Interpretable Model-Agnostic Explanations (LIME) to explain our classifiers' predictions (Ribeiro et al., 2016). We implemented LIME on predictions made by the best performing model (roberta-large-nli, fine-tuned on our data). The LIME results revealed that the model tends to correctly rely on all three texts (statements, laws, conditions) for a prediction, and uses meaningful features in many cases. However, sometimes the model utilized features that are not intuitive for humans.
+
+# 4.5.1 A Correct Example
+
+Figure 3 shows a case where the model made a correct prediction based on features sensible to humans. Figure 3's text is a statement saying two or more witnesses witnessed the testator signing the will and signed in each other's presence. It includes a condition stating that one of the two witnesses was ineligible to serve under Tennessee law, and the law specifying witness eligibility in Tennessee. The given condition and the law invalidate the will statement, rendering it as refute. The model correctly predicted the outcome based on tokens such as ineligible, witnesses, One, testament, and 2. The top two tokens with greatest impact on the model's prediction were ineligible and witnesses, with importance score of 0.60 and 0.14, respectively. It is
+
+
+
+# Text with highlighted words
+
+[STATE] We, the undersigned subscribing witnesses, do hereby certify that we witnessed the foregoing Last Will and Testament of [Person-1] at his request, in his presence and in the presence of each other, and that he signed the same in our presence, and in the presence of each of us, declaring the same to be his Last Will and Testament. This 5th day of December, 2001. [COND] One out of the two witnesses was ineligible to serve according to Tennessee state law. [LAW] 32-1-103. Witnesses — Who may act. (a) Any person competent to be a witness generally in this state may act as attesting witness to a will. (b) No will is invalidated because attested by an interested witness, but any interested witness shall, unless the will is also attested by two (2) disinterested witnesses, forfeit so much of the provisions therein made for the interested witness as in the aggregate exceeds in value, as of the date of the testator's death, what the interested witness would have received had the testator died interstate. (c) No attesting witness is interested unless the will gives to the attesting witness some personal and beneficial interest.
+
+
+Figure 3: Examples of LIME explanations showing a case where the model (roberta-large-mnli) makes correct predictions based on features sensible to humans.
+Figure 4: An example of a LIME explanation showing a case where the model (roberta-large-mnli) makes an incorrect prediction based on the tokens such as Making, and, and the which bear little semantic relevancy to the gist of the text.
+
+understandable that these tokens have high scores, as they are the keywords ("One out of two (2) witnesses is ineligible") which provide grounds for revoking a will statement.
+
+# 4.5.2 An Incorrect Example
+
+Figure 4 shows a case where the model made an incorrect prediction based on irrelevant tokens. The intent of the will statement is to excuse the Executor from filing inventory. This is not contradicted by the condition and law, requiring the model to classify the text as support. Nevertheless, the model incorrectly classified it as unrelated. The reason for the incorrect prediction is that the model (probably) relied on irrelevant tokens such as Making, and, and the, which bear little semantic relevancy to the text, versus more relevant tokens such as inventories and excuses when making the prediction. This is likely due to overfitting induced by longer texts. This LIME explanation shows that even the best performing model still has room for improvement.
+
+# 5 Future Work
+
+This work can be expanded in several directions. First, it can be extended to cover multiple states by adapting the models or adding more data. Future work can investigate if models trained on a single state's data can be adapted to evaluate data from other states. Also, wills from multiple states could be added to the dataset. We expect the augmented dataset would enhance the models' performance on evaluating wills from other states.
+
+This work can be expanded to other legal domains. Models trained on our data can be adapted to similar tasks in other legal domains.
+
+Lastly, this work can be extended by investigating novel transformer models. Given the uniqueness of our dataset with regard to the number of inputs and text length, it is likely that further experiment and modification is needed to handle such characteristics.
+
+# 6 Conclusion
+
+This work presented an annotated dataset for natural language inference (NLI) in the legal domain, consisting of 1,014 data points generated from 23 publicly available wills. The dataset is novel for two reasons: it included texts with three input types (statement, law, and condition) rather than two (premise and hypothesis) in the traditional NLI, and included texts are longer than in general NLI datasets. The NLI models trained on our dataset showed reasonable performance in assessing the validity of will statements. Ablative experiments demonstrated that the models' performances worsen if any input type (condition, law, or both) is omitted. This suggests that the models utilize all three input types. The LIME implementation reveals that even the best performing model makes errors in some cases by using semantically irrelevant tokens. Our open-access dataset is publicly available at: https://github.com/ml4ai/nli4wills-corpus
+
+# Limitations
+
+Our dataset consists of a relatively small number of data points (1,014 texts). Annotating will statements with relevant laws and conditions is a highly demanding, time intensive task. A larger size of dataset would likely further improve the models' performances.
+
+Our dataset only includes wills executed and probated in Tennessee, with execution after 1970 and probate after 2000. Due to these restrictions, our framework might produce incorrect results on inputs from wills from different settings. Supplementing the dataset with wills from more diverse settings would address this limitation. Even though the scope is limited to a single state (Tennessee), our study demonstrates that transformer models trained on the dataset can evaluate the validity of statements from wills with reasonable accuracy. Future additions to our dataset will be available at the same URL: https://github.com/ml4ai/nli4wills-corpus
+
+This work does not involve humans in the loop. Considering how crucial accuracy is for the task (i.e., legal validity evaluation), the work would have benefited much from involving domain experts. Even though our study discovered that the state-of-the-art transformer models can show good performances (over $85\%$ accuracy in all 8 models) without human interaction, it was also found that
+
+the models' understanding on the task is rather superficial (GAs ranging from $49 - 85\%$ in Table 2). Including humans-in-the-loop could be a solution for enhancing the models' understanding on the task.
+
+# Ethics Statement
+
+We collected legal wills from Ancestry as a part of our dataset creation process. The wills probated in court are in the public domain in the US, and we did not violate Ancestry's Terms and Conditions when collecting wills. We also anonymized the wills by replacing any personal identifiable information contained in the documents with special tokens.
+
+Our datasets and codes are released to public. We believe our released datasets and codes would contribute to society by promoting further NLI endeavors in the legal domain. The resources could potentially assist with people reviewing wills, but they should not be considered as legal advice. To avoid any confusion, we placed a disclaimer that the users must not rely on any information provided from our resources when making legal decisions and should instead consult with an attorney.
+
+# Acknowledgements
+
+We thank the reviewers for their thoughtful comments and suggestions. This work was partially supported by the National Science Foundation (NSF) under grant 2217215, and by University of Arizona's Provost Investment Fund. Mihai Surdeanu and Clayton Morrison declare a financial interest in lum.ai. This interest has been properly disclosed to the University of Arizona Institutional Review Committee and is managed in accordance with its conflict of interest policies.
+
+# References
+
+Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150.
+Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.
+Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In *Machine Learning Challenges*. Evaluating Predictive Uncertainty, Visual Object Classification,
+
+and Recognising Tectual Entailment, pages 177-190, Berlin, Heidelberg. Springer Berlin Heidelberg.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
+Yanai Elazar, Hongming Zhang, Yoav Goldberg, and Dan Roth. 2021. Back to square one: Artifact detection, training and commonsense disentanglement in the Winograd schema. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10486-10500, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Nils Holzenberger, Andrew Blair-Stanek, and Benjamin Van Durme. 2020. A dataset for statutory reasoning in tax law entailment and question answering. CoRR, abs/2005.05257.
+Yuta Koreeda and Christopher D. Manning. 2021. Contractnli: A dataset for document-level natural language inference for contracts. CoRR, abs/2110.01799.
+Xiao Liu, Da Yin, Yansong Feng, Yuting Wu, and Dongyan Zhao. 2021. Everything has a cause: Leveraging causal inference in legal text analysis. CoRR, abs/2104.09420.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
+Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180-191, New Orleans, Louisiana. Association for Computational Linguistics.
+Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
+Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135-1144.
+Mobashir Sadat and Cornelia Caragea. 2022. SciNLI: A corpus for natural language inference on scientific text. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7399-7409, Dublin, Ireland. Association for Computational Linguistics.
+
+Victor Sanh, Lysandre Debut, Julien Chaumont, and Thomas Wolf. 2019a. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR, abs/1910.01108.
+Victor Sanh, Lysandre Debut, Julien Chaumont, and Thomas Wolf. 2019b. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. *ArXiv*, abs/1910.01108.
+Sandeep Suntwal, Mithun Paul, Rebecca Sharp, and Mihai Surdeanu. 2019. On the importance of delexicalization for fact verification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3413-3418, Hong Kong, China. Association for Computational Linguistics.
+Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers.
+Ziyue Wang, Baoxin Wang, Xingyi Duan, Dayong Wu, Shijin Wang, Guoping Hu, and Ting Liu. 2019. IFly-Legal: A Chinese legal system for consultation, law searching, and document analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 97-102, Hong Kong, China. Association for Computational Linguistics.
+Moshe Wasserblat. 2021. Sentence transformer fin-tuning (setfit): Outperforming gpt-3 on few-shot text-classification while being 1600 times smaller.
\ No newline at end of file
diff --git a/validityassessmentoflegalwillstatementsasnaturallanguageinference/images.zip b/validityassessmentoflegalwillstatementsasnaturallanguageinference/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..baeaf46932da5faff4cf168eec11d572c708e5bc
--- /dev/null
+++ b/validityassessmentoflegalwillstatementsasnaturallanguageinference/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4add222e19469410f9576716c77821f380791a78f4075580996340be6973aab4
+size 446939
diff --git a/validityassessmentoflegalwillstatementsasnaturallanguageinference/layout.json b/validityassessmentoflegalwillstatementsasnaturallanguageinference/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..5c298e11078c893f362a793137e74e4781433cdf
--- /dev/null
+++ b/validityassessmentoflegalwillstatementsasnaturallanguageinference/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6924d61ee6a66c77f57a4f7474c9e18d7c5392abf010862293da05664ea0d60e
+size 262125
diff --git a/varmaepretrainingofvariationalmaskedautoencoderfordomainadaptivelanguageunderstanding/38bc0f6d-05e4-416f-bc6d-f4d22e9925e4_content_list.json b/varmaepretrainingofvariationalmaskedautoencoderfordomainadaptivelanguageunderstanding/38bc0f6d-05e4-416f-bc6d-f4d22e9925e4_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..5a45216d435dc7ba3dc95ddce802a4d3a7a43f26
--- /dev/null
+++ b/varmaepretrainingofvariationalmaskedautoencoderfordomainadaptivelanguageunderstanding/38bc0f6d-05e4-416f-bc6d-f4d22e9925e4_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3097d4f0cab796202095e11a36303a30a307fa0ea071a6ea8be2c84e6c1c628f
+size 88499
diff --git a/varmaepretrainingofvariationalmaskedautoencoderfordomainadaptivelanguageunderstanding/38bc0f6d-05e4-416f-bc6d-f4d22e9925e4_model.json b/varmaepretrainingofvariationalmaskedautoencoderfordomainadaptivelanguageunderstanding/38bc0f6d-05e4-416f-bc6d-f4d22e9925e4_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ffadde2028f865df6458f872cff53a1e0a5e1165
--- /dev/null
+++ b/varmaepretrainingofvariationalmaskedautoencoderfordomainadaptivelanguageunderstanding/38bc0f6d-05e4-416f-bc6d-f4d22e9925e4_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:77605386803f0c747ead23414e09c7c18fc4a0dfe9a4dbd9e69a09332ea97bd7
+size 109448
diff --git a/varmaepretrainingofvariationalmaskedautoencoderfordomainadaptivelanguageunderstanding/38bc0f6d-05e4-416f-bc6d-f4d22e9925e4_origin.pdf b/varmaepretrainingofvariationalmaskedautoencoderfordomainadaptivelanguageunderstanding/38bc0f6d-05e4-416f-bc6d-f4d22e9925e4_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6d07ee2f61acd2c90c5d18b1452b57a2d8a590d1
--- /dev/null
+++ b/varmaepretrainingofvariationalmaskedautoencoderfordomainadaptivelanguageunderstanding/38bc0f6d-05e4-416f-bc6d-f4d22e9925e4_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6897d6671c30ba250df8ce012290599ab480590d0c028a4c372308812a7ed6ea
+size 890054
diff --git a/varmaepretrainingofvariationalmaskedautoencoderfordomainadaptivelanguageunderstanding/full.md b/varmaepretrainingofvariationalmaskedautoencoderfordomainadaptivelanguageunderstanding/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..22ea5ce036feff8464f25f44a2992c2ceee721f9
--- /dev/null
+++ b/varmaepretrainingofvariationalmaskedautoencoderfordomainadaptivelanguageunderstanding/full.md
@@ -0,0 +1,377 @@
+# VarMAE: Pre-training of Variational Masked Autoencoder for Domain-adaptive Language Understanding
+
+Dou Hu $^{1,2,3*}$ , Xiaolong Hou $^{3}$ , Xiyang Du $^{3}$ , Mengyuan Zhou $^{3}$ , Lianxin Jiang $^{3}$ , Yang Mo $^{3}$ , Xiaofeng Shi $^{3}$
+
+$^{1}$ Institute of Information Engineering, Chinese Academy of Sciences
+
+$^{2}$ School of Cyber Security, University of Chinese Academy of Sciences
+
+3 Ping An Life Insurance Company of China, Ltd.
+
+hudou@iei.ac.cn, {houxiaolong430, duxiyang037, zhoumengyuan425,
+
+jianglianxin769,moyang853,shixiaofeng309}@pingan.com.cn
+
+# Abstract
+
+Pre-trained language models have achieved promising performance on general benchmarks, but underperform when migrated to a specific domain. Recent works perform pre-training from scratch or continual pre-training on domain corpora. However, in many specific domains, the limited corpus can hardly support obtaining precise representations. To address this issue, we propose a novel Transformer-based language model named VarMAE for domain-adaptive language understanding. Under the masked autoencoding objective, we design a context uncertainty learning module to encode the token's context into a smooth latent distribution. The module can produce diverse and well-formed contextual representations. Experiments on science- and finance-domain NLU tasks demonstrate that VarMAE can be efficiently adapted to new domains with limited resources.
+
+# 1 Introduction
+
+Pre-trained language models (PLMs) have achieved promising performance in natural language understanding (NLU) tasks on standard benchmark datasets (Wang et al., 2018; Xu et al., 2020). Most works (Devlin et al., 2019; Liu et al., 2019) leverage the Transformer-based pre-train/fine-tune paradigm to learn contextual embedding from large unsupervised corpora. Masked autoencoding, also named masked language model in BERT (Devlin et al., 2019), is a widely used pre-training objective that randomly masks tokens in a sequence to recover. The objective can lead to a deep bidirectional representation of all tokens in a BERT-like architecture. However, these models, which are pre-trained on standard corpora (e.g., Wikipedia), tend to underperform when migrated to a specific domain due to the distribution shift (Lee et al., 2020).
+
+Recent works perform pre-training from scratch (Gu et al., 2022; Yao et al., 2022) or continual
+
+pre-training (Gururangan et al., 2020; Wu et al., 2022) on large domain-specific corpora. But in many specific domains (e.g., finance), effective and intact unsupervised data is difficult and costly to collect due to data accessibility, privacy, security, etc. The limited domain corpus may not support pre-training from scratch (Zhang et al., 2020), and also greatly limit the effect of continual pre-training due to the distribution shift. Besides, some scenarios (i.e., non-industry academics or professionals) have limited access to computing power for training on a massive corpus. Therefore, how to obtain effective contextualized representations from the limited domain corpus remains a crucial challenge.
+
+Relying on the distributional similarity hypothesis (Mikolov et al., 2013a) in linguistics, that similar words have similar contexts, masked autoencoders (MAEs) leverage co-occurrence between the context of words to learn word representations. However, when pre-training on the limited corpus, most word representations can only be learned from fewer co-occurrence contexts, leading to sparse word embedding in the semantic space. Besides, in the reconstruction of masked tokens, it is difficult to perform an accurate point estimation (Li et al., 2020) based on the partially visible context for each word. That is, the possible context of each token should be diverse. The limited data only provides restricted context information, which causes MAEs to learn a relatively poor context representation in a specific domain.
+
+To address the above issue, we propose a novel Variational Masked Autoencoder (VarMAE), a regularized version of MAEs, for a better domain-adaptive language understanding. Based on the vanilla MAE, we design a context uncertainty learning (CUL) module for learning a precise context representation when pre-training on a limited corpus. Specifically, the CUL encodes the token's point-estimate context in the semantic space into a smooth latent distribution. And then, the module
+
+
+Figure 1: The architecture of VarMAE. Based on the vanilla MAE, a CUL module is used to learn diverse and well-formed context representations for all tokens.
+
+reconstructs the context using feature regularization specified by prior distributions of latent variables. In this way, latent representations of similar contexts can be close to each other and vice versa (Li et al., 2019). Accordingly, we can obtain a smoother space and more structured latent patterns.
+
+We conduct continual pre-training on unsupervised corpora in two domains (science and finance) and then fine-tune on the corresponding downstream NLU tasks. The results consistently show that VarMAE outperforms representative language models including vanilla pre-trained (Liu et al., 2019) and continual pre-training methods (Gururangan et al., 2020), when adapting to new domains with limited resources. Moreover, compared with masked autoencoding in MAEs, the objective of VarMAE can produce a more diverse and well-formed context representation.
+
+# 2 VarMAE
+
+In this section, we develop a novel Variational Masked Autoencoder (VarMAE) to improve vanilla MAE for domain-adaptive language understanding. The overall architecture is shown in Figure 1. Based on the vanilla MAE, we design a context uncertainty learning (CUL) module for learning a precise context representation when pre-training on a limited corpus.
+
+# 2.1 Architecture of Vanilla MAE
+
+Masking We randomly mask some percentage of the input tokens and then predict those masked
+
+tokens. Given one input tokens $X = \{x_{1},\dots,x_{n}\}$ and $n$ is the sentence length, the model selects a random set of positions (integers between 1 and $n$ ) to mask out $M = \{m_1,\dots,m_k\}$ , where $k = \lceil 0.15n\rceil$ indicates $15\%$ of tokens are masked out. The tokens in the selected positions are replaced with a [MASK] token. The masked sequence can be denoted as $X^{\mathrm{masked}} = \mathrm{REPLACE}(X,M,[\mathrm{MASK}])$ .
+
+Transformer Encoder Vanilla MAE usually adopts a multi-layer bidirectional Transformer (Vaswani et al., 2017) as basic encoder like previous pre-training model (Liu et al., 2019). Transformer can capture the contextual information for each token in the sentence via self-attention mechanism, and generate a sequence of contextual embeddings. Given the masked sentence $X^{\mathrm{masked}}$ , the context representation is denoted as $\mathbf{C} = \{\mathbf{c}_1, \dots, \mathbf{c}_N\}$ .
+
+Language Model Head We adopt the language model (LM) head to predict the original token based on the reconstructed representation. The number of output channels of LM head equals the number of input tokens. Based on the context representation $\mathbf{c}_i$ , the distribution of the masked prediction is estimated by: $p_{\theta}(\mathbf{x}_i|\mathbf{c}_i) = \text{softmax}(\mathbf{W}\mathbf{c}_i + \mathbf{b})$ , where $\mathbf{W}$ and $\mathbf{b}$ denote the weight matrices of one fully-connected layer. $\theta$ refers to the trainable parameters. The predicted token can be obtained by $x' = \arg \max_i p_{\theta}(\mathbf{x}_i|\mathbf{c}_i)$ , where $x'$ denotes the predicted original token.
+
+# 2.2 Context Uncertainty Learning
+
+Due to the flexibility of natural language, one word may have different meanings under different domains. In many specific domains, the limited corpus can hardly support obtaining precise representations. To address this, we introduce a context uncertainty learning (CUL) module to learn regularized context representations for all tokens. These tokens include masked tokens with more noise and unmasked tokens with less noise. Inspired by variational autoencoders (VAEs) (Kingma and Welling, 2014; Higgins et al., 2017), we use latent variable modeling techniques to quantify the aleatoric uncertainty1 (Der Kiureghian and Ditlevsen, 2009; Abdar et al., 2021) of these tokens.
+
+Let us consider the input token $x$ is generated with an unobserved continuous random variable $\mathbf{z}$ . We assume that $x_{i}$ is generated from a conditional
+
+distribution $p_{\theta}(\mathbf{x}|\mathbf{z})$ , where $\mathbf{z}$ is generated from an isotropic Gaussian prior distribution $p_{\theta}(\mathbf{z}) =$ $\mathcal{N}(\mathbf{z};\mathbf{0},\mathbf{I})$ . To learn the joint distribution of the observed variable $x$ and its latent variable factors $\mathbf{z}$ - the optimal objective is to maximize the marginal log-likelihood of $x$ in expectation over the whole distribution of latent factors $\mathbf{z}$ ..
+
+$$
+\max _ {\theta} \mathbb {E} _ {p _ {\theta} (\mathbf {z})} [ \log p _ {\theta} (\mathbf {x} | \mathbf {z}) ]. \qquad (1)
+$$
+
+Since masked and unmasked tokens have relatively different noise levels, the functions to quantify the aleatoric uncertainty of these two types should be different. We take CUL for masked tokens as an example. Given each input masked token $x_{i}^{m}$ and its corresponding context representation $\mathbf{c}_i^m$ , the true posterior $p_{\theta}(\mathbf{z}^m |x_i^m)$ is approximated as $p_{\theta'}(\mathbf{z}^m |\mathbf{c}_i^m)$ due to the distributional similarity hypothesis (Mikolov et al., 2013a). Inspired by Kingma and Welling (2014), we assume $p_{\theta'}(\mathbf{z}^m|\mathbf{c}_i^m)$ takes on an approximate Gaussian form with a diagonal covariance, and let the variational approximate posterior be a multivariate Gaussian with a diagonal covariance structure. This variational approximate posterior is denoted as $q_{\phi}(\mathbf{z}^{m}|\mathbf{c}_{i}^{m})$ :
+
+$$
+q _ {\phi} (\mathbf {z} ^ {m} | \mathbf {c} _ {i} ^ {m}) = \mathcal {N} (\mathbf {z} ^ {m}; \boldsymbol {\mu} _ {i} ^ {m}, \boldsymbol {\sigma} _ {i} ^ {m 2} \mathbf {I}), \tag {2}
+$$
+
+where $\mathbf{I}$ is diagonal covariance, $\phi$ is the variational parameters. Both parameters (mean as well as variance) are input-dependent and predicted by MLP (a fully-connected neural network with a single hidden layer), i.e., $\pmb{\mu}_i^m = f_{\phi_\mu}(\mathbf{c}_i^m)$ , $\pmb{\sigma}_i^m = f_{\phi_\sigma}(\mathbf{c}_i^m)$ , where $\phi_\mu$ and $\phi_\sigma$ refer to the model parameters respectively w.r.t output $\pmb{\mu}_i^m$ and $\pmb{\sigma}_i^m$ . Next, we sample a variable $\mathbf{z}_i^m$ from the approximate posterior $q_{\phi}(\mathbf{z}^m|\mathbf{c}_i^m)$ , and then feed it into the LM head to predict the original token.
+
+Similarly, CUL for each unmasked token $x_{i}^{\bar{m}}$ adopts in a similar way and samples a latent variable $z_{i}^{\bar{m}}$ from the variational approximate posterior $q_{\phi}(\mathbf{z}^{\bar{m}}|\mathbf{c}_{i}^{\bar{m}}) = \mathcal{N}(\mathbf{z}^{\bar{m}};\pmb{\mu}_{i}^{\bar{m}},\pmb{\sigma}_{i}^{\bar{m}^{2}}\mathbf{I})$ , where $\mu_{i}^{\bar{m}}$ and $\pmb{\sigma}_{i}^{\bar{m}}$ are predicted by MLP.
+
+In the implementation, we adopt $f_{\phi \mu}$ with shared parameters to obtain $\pmb{\mu}^{m}$ and $\pmb{\mu}^{\bar{m}}$ . Conversely, two $f_{\phi \sigma}$ with independent parameters are used to obtain $\pmb{\sigma}^{m}$ and $\pmb{\sigma}^{\bar{m}}$ , for $x^{m}$ with more noise and $x^{\bar{m}}$ with less noise, respectively. After that, batch normalization (Ioffe and Szegedy, 2015) is applied to avoid the posterior collapse2 (Zhu et al., 2020). By
+
+applying the CUL module, the context representation is not a deterministic point embedding any more, but a stochastic embedding sampled from $\mathcal{N}(\mathbf{z};\boldsymbol {\mu},\sigma^2\mathbf{I})$ in the latent space. Based on the reconstructed representation, the LM head is adopted to predict the original token.
+
+# 2.3 Training Objective
+
+To learn a smooth space where latent representations of similar contexts are close to each other and vice versa, the objective function is:
+
+$$
+\begin{array}{l} \max _ {\phi , \theta} \mathbb {E} _ {x \sim \mathbf {D}} \left[ \mathbb {E} _ {\mathbf {z} \sim q _ {\phi} (\mathbf {z} | \mathbf {c})} [ \log p _ {\theta} (\mathbf {x} | \mathbf {z}) ] \right], \tag {3} \\ \begin{array}{l} \text {s . t .} D _ {K L} (q _ {\phi} (\mathbf {z} | \mathbf {c}) \| p _ {\theta} (\mathbf {z})) < \delta , \end{array} \\ \end{array}
+$$
+
+where $\delta > 0$ is a constraint, and $q_{\phi}(\mathbf{z}|\mathbf{c})$ is the variational approximate posterior of the true posterior $p_{\theta}(\mathbf{z}|x)$ (see Section 2.2). $D_{KL}(\cdot)$ denotes the KL-divergence term, which serves as the regularization that forces prior distribution $p_{\theta}$ to approach the approximated posterior $q_{\phi}$ . Then, for each input sequence, the loss function is developed as a weighted sum of loss functions for masked tokens $\mathcal{L}^m$ and unmasked tokens $\mathcal{L}^{\bar{m}}$ . The weights are normalization factors of masked/unmasked tokens in the current sequence.
+
+$$
+\begin{array}{l} \mathcal {L} ^ {\tau} = \mathbb {E} _ {\mathbf {z} ^ {\tau} \sim q _ {\phi} (\mathbf {z} ^ {\tau} | \mathbf {c} ^ {\tau})} [ \log p _ {\theta} (\mathbf {x} ^ {\tau} | \mathbf {z} ^ {\tau}) ] \\ - \lambda^ {\tau} D _ {K L} \left(q _ {\phi} \left(\mathbf {z} ^ {\tau} \mid \mathbf {c} ^ {\tau}\right) \| p _ {\theta} \left(\mathbf {z} ^ {\tau}\right)\right), \tau \in \{m, \bar {m} \}, \tag {4} \\ \end{array}
+$$
+
+where $\lambda^m$ and $\lambda^{\bar{m}}$ are trade-off hyper-parameters. Please see Appendix B for more details.
+
+As the sampling of $\mathbf{z}_i$ is a stochastic process, we use re-parameterization trick (Kingma and Welling, 2014) to make it trainable: $\mathbf{z}_i = \boldsymbol{\mu}_i + \boldsymbol{\sigma}_i \odot \epsilon, \epsilon \sim \mathcal{N}(0, \mathbf{I})$ , where $\odot$ refers to an element-wise product. Then, KL term $D_{KL}(\cdot)$ is computed as:
+
+$$
+D _ {K L} \left(q _ {\phi} (\mathbf {z} | \mathbf {c}) \| p _ {\theta} (\mathbf {z})\right) = - \frac {1}{2} \left(1 + \log \sigma^ {2} - \mu^ {2} - \sigma^ {2}\right). \tag {5}
+$$
+
+For all tokens, the CUL forces the model to be able to reconstruct the context using feature regularization specified by prior distributions of latent variables. Under the objective of VarMAE, latent vectors with similar contexts are encouraged to be smoothly organized together. After the pre-training, we leverage the Transformer encoder and $f_{\phi_\mu}$ to fine-tune on downstream tasks.
+
+# 3 Experiments
+
+We conduct experiments on science- and finance-domain NLU tasks to evaluate our method.
+
+
Model
Science-domain
Finance-domain
ACL-ARC
SciCite
JNLPBA
EBM-NLP
Avg.
OIR
MTC
IEE
PSM
Avg.
CLS
NER
SE
CLS
NER
TM
RoBERTa
74.58
84.85
73.09
75.11
76.91
66.64
54.95
67.77
46.65
59.00
TAPT
68.10
86.23
72.54
74.09
75.24
65.16
53.18
68.80
49.71
59.21
DAPT
70.02
84.20
73.85
75.88
75.99
65.54
54.49
65.90
46.47
58.10
VarMAE
76.50
86.32
74.43
76.01
78.32
68.77
56.58
70.15
53.68
62.30
+
+Table 1: Results on science- and finance-domain downstream tasks. All compared pre-trained models are fine-tuned on the task dataset. For each dataset, we run three random seeds and report the average result of the test sets. We report the micro-average F1 score for CLS and TM, entity-level F1 score for NER, and token-level F1 score for SE. Best results are highlighted in bold.
+
+
Corpus Size
Science-domain
Finance-domain
DAPT
VarMAE
DAPT
VarMAE
|D|/3
76.77
77.82
59.56
62.04
|D|
75.99
78.32
58.10
62.30
+
+Table 2: Average results on all downstream tasks against different corpus sizes of pre-training. $|\mathcal{D}|$ is the corpus size for corresponding domain.
+
+
Masking Ratio
Science-domain
Finance-domain
5%
77.27
58.54
15%
78.32
62.30
30%
76.95
59.12
+
+Table 3: Average results of VarMAE on all downstream tasks against different masking ratios of pre-training.
+
+# 3.1 Domain Corpus and Downstream Tasks
+
+Domain Corpus For science domain, we collect 0.6 million English abstracts (0.1B tokens) of computer science and broad biomedical fields, which are sampled from Semantic Scholar corpus (Ammar et al., 2018). For finance domain, we collect 2 million cleaned Chinese sentences (0.3B tokens) from finance-related online platforms (e.g., Sina Finance $^{3}$ , Weixin Official Account Platform $^{4}$ , and Baidu Zhidao $^{5}$ ) and business scenarios $^{6}$ . The 1 million sentences in this corpus are from finance news, sales/claims cases, product introduction/clauses, and finance encyclopedia entries, while the remaining 1 million sentences are collected from the internal corpus and log data in business scenarios.
+
+Downstream Tasks and Datasets We experiment with four categories of NLP downstream tasks: text classification (CLS), named entity recognition (NER), span extraction (SE), and text matching (TM). For science domain, we choose four pub
+
+lic benchmark datasets: ACL-ARC (Jurgens et al., 2018) and SciCite (Cohan et al., 2019) for citation intent classification task, JNLPBA (Collier and Kim, 2004) for bio-entity recognition task, EBM-NLP (Nye et al., 2018) for PICO extraction task. For finance domain, we choose four real-world financial business datasets6: OIR for outbound intent recognition task, MTC for multi-label topic classification task, IEE for insurance-entity extraction task, and PSM for pairwise search match task. The details of datasets are included in Appendix C.1.
+
+# 3.2 Experimental Setup
+
+We compare VarMAE with the following baselines: RoBERTa (Liu et al., 2019) is an optimized BERT with a masked autoencoding objective, and is to directly fine-tune on given downstream tasks. TAPT (Gururanan et al., 2020) is a continual pre-training model on a task-specific corpus. DAPT (Gururanan et al., 2020) is a continual pre-training model on a domain-specific corpus.
+
+Experiments are conducted under PyTorch $^{7}$ and using 2/1 NVIDIA Tesla V100 GPUs with 16GB memory for pre-training/fine-tuning. During pretraining, we use roberta-base $^{8}$ and chinese-roberta-wwm-ext $^{8}$ to initialize the model for science (English) and finance domains (Chinese), respectively. During the pre-training of VarMAE, we freeze the embedding layer and all layers of Transformer encoder to avoid catastrophic forgetting (French, 1993; Arumae et al., 2020) of previously general learned knowledge. And then we optimize other network parameters (e.g., the LM Head and CUL module) by using Adam optimizer (Kingma and Ba, 2015) with the learning rate of $5e^{-5}$ . The number of epochs is set to 3. We use gradient accumulation step of 50 to achieve the large batch sizes (i.e., the batch size is 3200). The trade-off co
+
+$^{7}$ https://pytorch.org/
+8https://huggingface.co/
+
+
No.
Example
Gold
Pred.
+(RoBERTa)
Pred.
+(DAPT)
Pred.
+(VarMAE)
1
Can forearm superficial injury insure accidental injury?
+(前臂浅表损伤是否投保意外保险?)
Accident (意外);
+Disease underwriting (疾病核保)
Disease underwriting
Accident
Accident;
+Disease underwriting
2
Medical demands inspire quality care.
+(医疗需求激发品质养老。)
Pension (养老);
+Risk education (风险教育)
Pension
Pension
Pension;
+Risk education
3
How does high incidence cancer pro-
+tection calculate the risk insurance?
+(高发癌症保障计划如何计算风险保额?)
Critical illness (重疾);
+Insurance rules (投保规则)
Insurance rules
Insurance rules
Critical illness;
+Insurance rules
4
What are the features of ABC Compre-
+hensive Care Program?
+(ABC全面呵护计划特色包括什么内容?)
+
+Table 4: Case studies in the multi-label topic classification (MTC) task of financial business scenarios. The table shows four examples of spoken dialogues in the test set, their gold labels and predictions by three methods (RoBERTa, DAPT and VarMAE). We translate original Chinese to English version for readers.
+
+efficient $\lambda$ is set to 10 for both domains selected from $\{1, 10, 100\}$ . For fine-tuning on downstream tasks, most hyperparameters are the same as in pretraining, except for the following settings due to the limited computation. The batch size is set to 128 for OIR, and 32 for other tasks. The maximum sequence length is set to 64 for OIR, and 128 for other tasks. The number of epochs is set to 10. More details are listed in Appendix C.2.
+
+# 3.3 Results and Analysis
+
+Table 1 shows the results on science- and financedomain downstream tasks. In terms of the average result, VarMAE yields $1.41\%$ and $3.09\%$ absolute performance improvements over the best-compared model on science and finance domains, respectively. It shows the superiority of domain-adaptive pretraining with context uncertainty learning. DAPT and TAPT obtain inferior results. It indicates that the small domain corpus limits the continual pretraining due to the distribution shift.
+
+We report the average results on all tasks against different corpus sizes of pre-training in Table 2 (see Appendix D.1 for details). VarMAE consistently achieves better performance than DAPT even though a third of the corpus is used. When using full corpus, DPAT's performance decreases but VarMAE's performance increases, which proves our method has a promising ability to adapt to the target domain with a limited corpus.
+
+Table 3 shows the average results of VarMAE on all tasks against different masking ratios of pretraining (see Appendix D.2 for details). Under the default masking strategies, the best masking rate is $15\%$ , which is the same as BERT and RoBERTa.
+
+# 3.4 Case Study
+
+As shown in Table 4, we randomly choose several samples from the test set in the multi-label topic classification (MTC) task.
+
+For the first case, RoBERTa and DAPT each predict one label correctly. It shows that both general and domain language knowledge have a certain effect on the domain-specific task. However, none of them identify all the tags completely. This phenomenon reflects that the general or limited continual PLM is not sufficient for the domain-specific task. For the second and third cases, these two comparison methods cannot classify the topic label Risk education and Critical illness, respectively. It indicated that they perform an isolated point estimation and have a relatively poor context representation. Unlike other methods, our VarMAE can encode the token's context into a smooth latent distribution and produce diverse and well-formed contextual representations. As expected, VarMAE predicts the first three examples correctly with limited resources.
+
+For the last case, all methods fail to predict Critical illness. We notice that ABC Comprehensive Care Program is a product name related to critical illness insurance. Classifying it properly may require some domain-specific structured knowledge.
+
+# 4 Conclusion
+
+We propose a novel Transformer-based language model named VarMAE for domain-adaptive language understanding with limited resources. A new CUL module is designed to produce a diverse and well-formed context representation. Experiments on science- and finance-domain tasks demonstrate that VarMAE can be efficiently adapted to new domains using a limited corpus. Hope that VarMAE can guide future foundational work in this area.
+
+# Limitations
+
+All experiments are conducted on a small pretraining corpus due to the limitation of computational resources. The performance of VarMAE pretraining on a larger corpus needs to be further studied. Besides, VarMAE cannot be directly adapted to downstream natural language generation tasks since our model does not contain a decoder for the generation. This will be left as future work.
+
+# Acknowledgements
+
+This research is supported by Ping An Life Insurance. We thank the reviewers for their insightful and constructive comments.
+
+# References
+
+Moloud Abdar, Farhad Pourpanah, Sadiq Hussain, Dana Rezazadegan, Li Liu, Mohammad Ghavamzadeh, Paul Fieguth, Xiaochun Cao, Abbas Khosravi, U Rajendra Acharya, et al. 2021. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Information Fusion, 76:243-297.
+Emily Alsentzer, John Murphy, William Boag, WeiHung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical bert embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72-78.
+Waleed Ammar, Dirk Groeneveld, Chandra Bhagavatula, Iz Beltagy, Miles Crawford, Doug Downey, Jason Dunkelberger, Ahmed Elghohary, Sergey Feldman, Vu Ha, Rodney Kinney, Sebastian Kohlmeier, Kyle Lo, Tyler Murray, Hsu-Han Ooi, Matthew E. Peters, Joanna Power, Sam Skjonsberg, Lucy Lu Wang, Chris Wilhelm, Zheng Yuan, Madeleine van Zuylen, and Oren Etzioni. 2018. Construction of the literature graph in semantic scholar. In NAACL-HLT (3), pages 84–91. Association for Computational Linguistics.
+Kristjan Arumae, Qing Sun, and Parminder Bhatia. 2020. An empirical investigation towards efficient multi-domain language model pre-training. In EMNLP (1), pages 4854-4864. Association for Computational Linguistics.
+Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert: A pretrained language model for scientific text. In EMNLP/IJCNLP (1), pages 3613-3618. Association for Computational Linguistics.
+Dimitri P Bertsekas. 1997. Nonlinear programming. Journal of the Operational Research Society, 48(3):334-334.
+Steven Bird, Robert Dale, Bonnie J. Dorr, Bryan R. Gibson, Mark Thomas Joseph, Min-Yen Kan, Dongwon
+
+Lee, Brett Powley, Dragomir R. Radev, and Yee Fan Tan. 2008. The ACL anthology reference corpus: A reference dataset for bibliographic research in computational linguistics. In LREC. European Language Resources Association.
+Hicham El Boukkouri, Olivier Ferret, Thomas Lavergne, and Pierre Zweigenbaum. 2022. Re-train or train from scratch? comparing pre-training strategies of BERT in the medical domain. In LREC, pages 2626-2633. European Language Resources Association.
+Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In NeurIPS.
+Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. LEGAL-BERT: "preparing the muppets for court". In EMNLP (Findings), volume EMNLP 2020 of Findings of ACL, pages 2898-2904. Association for Computational Linguistics.
+Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than generators. In ICLR. OpenReview.net.
+Arman Cohan, Waleed Ammar, Madeleine van Zuylen, and Field Cady. 2019. Structural scaffolds for citation intent classification in scientific publications. In *NAACL-HLT* (1), pages 3586-3596. Association for Computational Linguistics.
+Nigel Collier and Jin-Dong Kim. 2004. Introduction to the bio-entity recognition task at JNLPBA. In NLPBA/BioNLP.
+Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, and Ziqing Yang. 2021. Pre-training with whole word masking for chinese BERT. IEEE ACM Trans. Audio Speech Lang. Process., 29:3504-3514.
+Armen Der Kiureghian and Ove Ditlevsen. 2009. Aleatory or epistemic? does it matter? Structural safety, 31(2):105-112.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT* (1), pages 4171–4186. Association for Computational Linguistics.
+Shizhe Diao, Jiaxin Bai, Yan Song, Tong Zhang, and Yonggang Wang. 2020. ZEN: pre-training chinese text encoder enhanced by n-gram representations. In
+
+EMNLP (Findings), volume EMNLP 2020 of Findings of ACL, pages 4729-4740. Association for Computational Linguistics.
+Robert M. French. 1993. Catastrophic interference in connectionist networks: Can it be predicted, can it be prevented? In NIPS, pages 1176-1177. Morgan Kaufmann.
+Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2022. Domain-specific language model pretraining for biomedical natural language processing. ACM Trans. Comput. Heal., 3(1):2:1-2:23.
+Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In ACL, pages 8342-8360. Association for Computational Linguistics.
+Irina Higgins, Loic Matthewshey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. beta-vac: Learning basic visual concepts with a constrained variational framework. In ICLR (Poster). OpenReview.net.
+Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In ACL (1), pages 328-339. Association for Computational Linguistics.
+Dou Hu, Zhou Mengyuan, Xiyang Du, Mengfei Yuan, Jin Zhi, Lianxin Jiang, Mo Yang, and Xiaofeng Shi. 2022. Pali-nlp at semeval-2022 task 4: Discriminative fine-tuning of transformers for patronizing and condescending language detection. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), pages 335-343.
+Dou Hu and Lingwei Wei. 2020. SLK-NER: exploiting second-order lexicon knowledge for chinese NER. In SEKE, pages 413-417. KSI Research Inc.
+Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. 2019. Clinicalbert: Modeling clinical notes and predicting hospital readmission. CoRR, abs/1904.05342.
+Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, volume 37 of JMLR Workshop and Conference Proceedings, pages 448-456. JMLR.org.
+Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Trans. Assoc. Comput. Linguistics, 8:64-77.
+David Jurgens, Srijan Kumar, Raine Hoover, Daniel A. McFarland, and Dan Jurafsky. 2018. Measuring the evolution of a scientific field through citation frames. Trans. Assoc. Comput. Linguistics, 6:391-406.
+
+William Karush. 2014. Minima of functions of several variables with inequalities as side conditions. In Traces and Emergence of Nonlinear Programming, pages 217-245. Springer.
+Jin-Dong Kim, Tomoko Ohta, Yuka Tateisi, and Jun'ichi Tsujii. 2003. GENIA corpus - a semantically annotated corpus for bio-textmining. In ISMB (Supplement of Bioinformatics), pages 180-182.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In ICLR.
+Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinform., 36(4):1234-1240.
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL, pages 7871-7880. Association for Computational Linguistics.
+Bohan Li, Junxian He, Graham Neubig, Taylor Berg-Kirkpatrick, and Yiming Yang. 2019. A surprisingly effective fix for deep latent variable modeling of text. In EMNLP/IJCNLP (1), pages 3601-3612. Association for Computational Linguistics.
+Chunyuan Li, Xiang Gao, Yuan Li, Baolin Peng, Xiujun Li, Yizhe Zhang, and Jianfeng Gao. 2020. Optimus: Organizing sentences via pre-trained modeling of a latent space. In EMNLP (1), pages 4678-4699. Association for Computational Linguistics.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
+Tomás Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In ICLR (Workshop Poster).
+Tomás Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111-3119.
+Benjamin E. Nye, Junyi Jessy Li, Roma Patel, Yinfei Yang, Iain James Marshall, Ani Nenkova, and Byron C. Wallace. 2018. A corpus with multi-level annotations of patients, interventions and outcomes to support language processing for medical literature. In ACL (1), pages 197-207. Association for Computational Linguistics.
+
+Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532-1543. ACL.
+Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL-HLT, pages 2227-2237. Association for Computational Linguistics.
+Yujia Qin, Yankai Lin, Ryuichi Takanobu, Zhiyuan Liu, Peng Li, Heng Ji, Minlie Huang, Maosong Sun, and Jie Zhou. 2021. ERICA: improving entity and relation understanding for pre-trained language models via contrastive learning. In ACLIJCNLP (1), pages 3350-3363. Association for Computational Linguistics.
+Yujia Qin, Jiajie Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2022. ELLE: efficient lifelong pre-training for emerging data. In ACL (Findings), pages 2789-2810. Association for Computational Linguistics.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1-140:67.
+Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. ERNIE: enhanced representation through knowledge integration. CoRR, abs/1904.09223.
+Wen Tai, H. T. Kung, Xin Dong, Marcus Z. Comiter, and Chang-Fu Kuo. 2020. exbert: Extending pre-trained models with domain-specific vocabulary under constrained training resources. In EMNLP (Findings), volume EMNLP 2020 of Findings of ACL, pages 1433-1439. Association for Computational Linguistics.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 5998-6008.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Black-boxNLP@EMNLP, pages 353-355. Association for Computational Linguistics.
+Wei Wang, Bin Bi, Ming Yan, Chen Wu, Jiangnan Xia, Zuyi Bao, Liwei Peng, and Luo Si. 2020. Structbert: Incorporating language structures into pre-training for deep language understanding. In ICLR. OpenReview.net.
+Lingwei Wei, Dou Hu, Wei Zhou, Xuehai Tang, Xiaodan Zhang, Xin Wang, Jizhong Han, and Songlin Hu. 2020. Hierarchical interaction networks with rethinking mechanism for document-level sentiment
+
+analysis. In ECML/PKDD (3), volume 12459 of Lecture Notes in Computer Science, pages 633-649. Springer.
+Tongtong Wu, Massimo Caccia, Zhuang Li, Yuan-Fang Li, Guilin Qi, and Gholamreza Haffari. 2022. Pretrained language model in continual learning: A comparative study. In ICLR. OpenReview.net.
+Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, Yin Tian, Qianqian Dong, Weitang Liu, Bo Shi, Yiming Cui, Junyi Li, Jun Zeng, Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou, Shaoweihua Liu, Zhe Zhao, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Kyle Richardson, and Zhenzhong Lan. 2020. CLUE: A chinese language understanding evaluation benchmark. In COLING, pages 4762-4772. International Committee on Computational Linguistics.
+Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS, pages 5754-5764.
+Xingcheng Yao, Yanan Zheng, Xiaocoong Yang, and Zhilin Yang. 2022. Nlp from scratch without large-scale pretraining: A simple and efficient framework. In International Conference on Machine Learning, pages 25438-25451. PMLR.
+Yunzhi Yao, Shaohan Huang, Wenhui Wang, Li Dong, and Furu Wei. 2021. Adapt-and-distill: Developing small, fast and effective pretrained language models for domains. In ACL/IJCNLP (Findings), volume ACL/IJCNLP 2021 of Findings of ACL, pages 460-470. Association for Computational Linguistics.
+Rong Zhang, Revanth Gangi Reddy, Md. Arafat Sultan, Vittorio Castelli, Anthony Ferritto, Radu Florian, Ef sun Sarioglu Kayi, Salim Roukos, Avirup Sil, and Todd Ward. 2020. Multi-stage pre-training for lowresource domain adaptation. In EMNLP (1), pages 5461-5468. Association for Computational Linguistics.
+Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: enhanced language representation with informative entities. In ACL (1), pages 1441-1451. Association for Computational Linguistics.
+Qile Zhu, Wei Bi, Xiaojiang Liu, Xiyao Ma, Xiaolin Li, and Dapeng Wu. 2020. A batch normalized inference network keeps the KL vanishing away. In ACL, pages 2636-2649. Association for Computational Linguistics.
+
+# Appendix Overview
+
+In this supplementary material, we provide: (i) the related work, (ii) objective derivation of the proposed VarMAE, (iii) detailed description of experimental setups, (iv) detailed results, and (v) our contribution highlights.
+
+# A Related Work
+
+# A.1 General PLMs
+
+Traditional works (Mikolov et al., 2013b; Pennington et al., 2014) represent the word as a single vector representation, which cannot disambiguate the word senses based on the surrounding context. Recently, unsupervised pre-training on large-scale corpora significantly improves performance, either for Natural Language Understanding (NLU) (Peters et al., 2018; Devlin et al., 2019; Cui et al., 2021) or for Natural Language Generation (NLG) (Raffel et al., 2020; Brown et al., 2020; Lewis et al., 2020). Following this trend, considerable progress (Liu et al., 2019; Yang et al., 2019; Clark et al., 2020; Joshi et al., 2020; Wang et al., 2020; Diaz et al., 2020) has been made to boost the performance via improving the model architectures or exploring novel pre-training tasks. Some works (Sun et al., 2019; Zhang et al., 2019; Qin et al., 2021) enhance the model by integrating structured knowledge from external knowledge graphs.
+
+Due to the flexibility of natural language, one word may have different meanings under different domains. These methods underperform when migrated to specialized domains. Moreover, simple fine-tuning (Howard and Ruder, 2018; Hu and Wei, 2020; Wei et al., 2020; Hu et al., 2022) of PLMs is also not sufficient for domain-specific applications.
+
+# A.2 Domain-adaptive PLMs
+
+Recent works perform pre-training from scratch (Gu et al., 2022; Yao et al., 2022) or continual pretraining (Alsentzer et al., 2019; Huang et al., 2019; Lee et al., 2020; Gururangan et al., 2020; Wu et al., 2022; Qin et al., 2022) on domain-specific corpora.
+
+Remarkably, Beltagy et al. (2019); Chalkidis et al. (2020) explore different strategies to adapt to new domains, including pre-training from scratch and further pre-training. Boukkouri et al. (2022) find that both of them perform at a similar level when pre-training on a specialized corpus, but the former requires more resources. Yao et al. (2022) jointly optimize the task and language modeling objective from scratch. Zhang et al. (2020); Tai et al.
+
+(2020); Yao et al. (2021) extend the vocabulary of the LM with domain-specific terms for further gains. Gururangan et al. (2020) show that domain- and task-adaptive pre-training methods can offer gains in specific domains. Qin et al. (2022) present an efficient lifelong pre-training method for emerging domain data.
+
+In most specific domains, collecting large-scale corpora is usually inaccessible. The limited data makes pre-training from scratch infeasible and restricts the performance of continual pre-training. Towards this issue, we investigate domain-adaptive language understanding with a limited target corpus, and propose a novel language modeling method named VarMAE. The method performs a context uncertainty learning module to produce diverse and well-formed contextual representations, and can be efficiently adapted to new domains with limited resources.
+
+# B Derivation of Objective Function
+
+Here, we take the objective for masked tokens as the example to give derivations of the loss function. The objective for unmasked tokens is similar. For simplifying description, we omit the superscripts that use to distinguish masked tokens from unmasked tokens. To learn a smooth space of masked tokens where latent representations of similar contexts are close to each other and vice versa, the objective function is:
+
+$$
+\begin{array}{l} \max _ {\phi , \theta} \mathbb {E} _ {x \sim \mathbf {D}} \left[ \mathbb {E} _ {\mathbf {z} \sim q _ {\phi} (\mathbf {z} | \mathbf {c})} [ \log p _ {\theta} (\mathbf {x} | \mathbf {z}) ] \right], \tag {6} \\ \begin{array}{l} \text {s . t .} D _ {K L} (q _ {\phi} (\mathbf {z} | \mathbf {c}) \| p _ {\theta} (\mathbf {z})) < \delta , \end{array} \\ \end{array}
+$$
+
+where $\delta > 0$ is a constraint, and $q_{\phi}(\mathbf{z}|\mathbf{c})$ is the variational approximate posterior of the true posterior $p_{\theta}(\mathbf{z}|x)$ (see Section 2.2). $D_{KL}(\cdot)$ denotes the KL-divergence term, which serves as the regularization that forces the prior distribution $p_{\theta}$ to approach the approximated posterior $q_{\phi}$ .
+
+In order to encourage this disentangling property in the inferred (Higgins et al., 2017), we introduce a constraint $\delta$ over $q_{\phi}(\mathbf{z}|\mathbf{c})$ by matching it to a prior $p_{\theta}(\mathbf{z})$ . The objective can be computed as a Lagrangian under the KKT condition (Bertsekas, 1997; Karush, 2014). The above optimization problem with only one inequality constraint is equivalent to maximizing the following equation,
+
+$$
+\begin{array}{l} \mathcal {F} (\theta , \phi , \lambda ; \mathbf {c}, \mathbf {z}) = \mathbb {E} _ {\mathbf {z} \sim q _ {\phi} (\mathbf {z} | \mathbf {c})} [ \log p _ {\theta} (\mathbf {x} | \mathbf {z}) ] \tag {7} \\ - \lambda \left(D _ {K L} \left(q _ {\phi} (\mathbf {z} | \mathbf {c}) \| p _ {\theta} (\mathbf {z})\right) - \delta\right), \\ \end{array}
+$$
+
+
Dataset Name
Task Name
Train
Dev
Test
# Entities
Avg/Min/Max
Class
Source
Science
ACL-ARC
Citation Intent Classification
1,688
114
139
-
42/4/224
6
NLP field
SciCite
Citation Intent Classification
7,320
916
1,861
-
34/7/228
3
Multiple scientific fields
JNLPBA
Bio-entity Recognition
16,807
1,739
3,856
59,963
27/2/204
5
Biomedical field
EBM-NLP
PICO Extraction
27,879
7,049
2,064
77,360
37/1/278
3
Clinical medicine field
Finance
OIR
Outbound Intent Recognition
36,885
9,195
3,251
-
16/2/69
34
F1, F2
MTC
Multi-label Topic Classification
66,670
2,994
4,606
-
15/2/203
39
F1, F2, F3, F4
IEE
Insurance-entity Extraction
19,136
4,784
19,206
13,128
21/1/388
2
F1, F2
PSM
Pairwise Search Match
11,812
1,476
1,477
-
7/2/100; 14/1/134
4
F1, F2
+
+Table 5: Dataset statistics of science- and finance-domain downstream tasks. Avg, Min, and Max indicate the average, minimum, and maximum length of sentences, respectively. "Class" refers to the number of classes. F1, F2, F3 and F4 mean the insurance, sickness, job and legal fields, respectively.
+
+
Hyperparameter
Assignment
Number of Epoch
3
Trade-off Weight λ
10
Number of Layers
12
Hidden size
768
FFN inner hidden size
3072
Attention heads
12
Attention head size
64
Dropout
0.1
Attention Dropout
0.1
Peak Learning Rate
5e-5
Maximum Length
128
Batch Size
64
Gradient Accumulation Steps
50
Optimization Steps
{504, 1830}
Weight Decay
0.0
Adam ε
1e-6
Adam β1
0.9
Adam β2
0.98
+
+Table 6: Hyperparameters for pre-training on a domain-specific corpus for each domain. The optimization steps are 504 and 1830 for science- and finance-domain, respectively.
+
+where the KKT multiplier $\lambda$ is the regularization coefficient that constrains the capacity of the latent information channel $\mathbf{z}$ and puts implicit independence pressure on the learnt posterior due to the isotropic nature of the Gaussian prior $p_{\theta}(\mathbf{z})$ . Since $\delta, \lambda > 0$ , the function is further defined as,
+
+$$
+\begin{array}{l} \mathcal {F} (\theta , \phi , \lambda ; \mathbf {c}, \mathbf {z}) \geq \mathcal {L} (\theta , \phi ; \mathbf {c}, \mathbf {z}, \lambda) \\ = \mathbb {E} _ {\mathbf {z} \sim q _ {\phi} (\mathbf {z} | \mathbf {c})} [ \log p _ {\theta} (\mathbf {x} | \mathbf {z}) ] \tag {8} \\ - \lambda D _ {K L} \left(q _ {\phi} (\mathbf {z} | \mathbf {c}) \| p _ {\theta} (\mathbf {z})\right), \\ \end{array}
+$$
+
+where the multiplier $\lambda$ can be considered as a hyperparameter. $\lambda$ not only encourages more efficient latent encoding but also creates a trade-off between context reconstruction quality and the extent of disentanglement. We train the model by minimizing the loss $\mathcal{L}$ to push up its evidence lower bound.
+
+
Hyperparameter
Assignment
Number of Epoch
10
Maximum Length
{64, 128}
Batch Size
{32, 128}
Learning Rate
5e-5
Dropout
0.1
Weight Decay
0.0
Warmup ratio
0.06
+
+Table 7: Hyperparameters for fine-tuning on science- and finance-domain downstream tasks. The maximum sequence length is set to 64 for OIR, and is set to 128 for other tasks. The batch size is set to 128 for OIR, and is set to 32 for other tasks.
+
+# C Detailed Experimental Setup
+
+# C.1 Datasets of Downstream Tasks
+
+The statistics of datasets and their corresponding tasks are reported in Table 5.
+
+Science Domain We choose four public benchmark datasets from the science domain.
+
+ACL-ARC (Jurgens et al., 2018) is a dataset of citation intents based on a sample of papers from the ACL Anthology Reference Corpus (Bird et al., 2008) in the NLP field.
+
+SciCite (Cohan et al., 2019) is a dataset of citation intents. It provides coarse-grained categories and covers a variety of scientific domains.
+
+JNLPBA (Collier and Kim, 2004) is a named entity dataset in the biomedical field and is derived from five superclasses in the GENIA corpus (Kim et al., 2003).
+
+EBM-NLP (Nye et al., 2018) annotates PICO (Participants, Interventions, Comparisons and Outcomes) spans in clinical trial abstracts. The corresponding PICO Extraction task aims to identify the spans in clinical trial abstracts that describe the respective PICO elements.
+
+Finance Domain We choose four real-world business datasets from the financial domain.
+
+
Corpus Size
Model
Science-domain
Finance-domain
ACL-ARC
SciCite
JNLPBA
EBM-NLP
Avg.
OIR
MTC
IEE
PSM
Avg.
CLS
NER
SE
CLS
NER
TM
|D|/3
DAPT
72.42
85.92
73.38
75.35
76.77
72.65
47.09
66.13
52.38
59.56
|D|/3
VarMAE
76.98
84.67
74.73
74.91
77.82
70.50
53.93
67.72
56.02
62.04
|D|
DAPT
70.02
84.20
73.85
75.88
75.99
65.54
54.49
65.90
46.47
58.10
|D|
VarMAE
76.50
86.32
74.43
76.01
78.32
68.77
56.58
70.15
53.68
62.30
+
+Table 8: Results of DAPT and VarMAE on all downstream tasks against different corpus sizes of pre-training. $|\mathcal{D}|$ is the corpus size. For each dataset, we run three random seeds and report the average result of the test sets. We report the micro-average F1 score for CLS and TM, entity-level F1 score for NER, and token-level F1 score for SE.
+
+
Masking Ratio
Model
Science-domain
Finance-domain
ACL-ARC
SciCite
JNLPBA
EBM-NLP
Avg.
OIR
MTC
IEEE
PSM
Avg.
CLS
NER
SE
CLS
NER
TM
5%
VarMAE
76.02
85.12
73.86
74.09
77.27
67.80
46.33
66.72
53.32
58.54
15%
VarMAE
76.50
86.32
74.43
76.01
78.32
68.77
56.58
70.15
53.68
62.30
30%
VarMAE
73.62
85.69
73.75
74.73
76.95
70.57
45.68
65.00
55.23
59.12
+
+Table 9: Results of VarMAE on all downstream tasks against different masking ratios of pre-training. For each dataset, we run three random seeds and report the average result of the test sets. We report the micro-average F1 score for CLS and TM, entity-level F1 score for NER, and token-level F1 score for SE.
+
+OIR is a dataset of the outbound intent recognition task. It aims to identify the intent of customer response in the outbound call scenario.
+
+MTC is a dataset of the multi-label topic classification task. It aims to identify the topics of the spoken dialogue.
+
+PSM is a dataset of the pairwise search matching task. It aims to identify the semantic similarity of a sentence pair in the search scenario.
+
+IEE is a dataset of the Insurance-entity extraction task. Its goal is to locate named entities mentioned in the input sentence.
+
+For OIR and MTC, we use an ASR (automatic speech recognition) tool to convert acoustic signals into textual sequences in the pre-processing phase.
+
+# C.2 Implementation Details
+
+# C.2.1 Pre-training Hyperparameters
+
+Table 6 describes the hyperparameters for pretraining on a domain-specific corpus.
+
+# C.2.2 Fine-tuning Hyperparameters
+
+Table 7 reports the fine-tuning hyperparameters for downstream tasks.
+
+# D Detailed Results
+
+In this part, we provide detailed results on science- and finance-domain downstream tasks.
+
+# D.1 Results Against Different Corpus Sizes
+
+The detailed results of DAPT and VarMAE on all downstream tasks against different corpus sizes of
+
+pre-training are reported in Table 8.
+
+# D.2 Results Against Different Masking Ratios
+
+The detailed results of VarMAE on all downstream tasks against different masking ratios of pre-training are reported in Table 9.
+
+# E Contribution and Future Work
+
+The main contributions of this work are as follows: 1) We present a domain-adaptive language modeling method named VarMAE based on the combination of variational autoencoders and masked autoencoders. 2) We design a context uncertainty learning module to model the point-estimate context of each token into a smooth latent distribution. The module can produce diverse and well-formed contextual representations. 3) Extensive experiments on science- and finance-domain NLU tasks demonstrate that VarMAE can be efficiently adapted to new domains with limited resources.
+
+For future works, we will build domain-specific structured knowledge to further assist language understanding, and apply our method for domain-adaptive language generation.
\ No newline at end of file
diff --git a/varmaepretrainingofvariationalmaskedautoencoderfordomainadaptivelanguageunderstanding/images.zip b/varmaepretrainingofvariationalmaskedautoencoderfordomainadaptivelanguageunderstanding/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9a3feabf5f25b9a5a0d736a50bbbc7fed8a92d27
--- /dev/null
+++ b/varmaepretrainingofvariationalmaskedautoencoderfordomainadaptivelanguageunderstanding/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3271672cd710fb0f7f5521ee9417940e717f67bd0f0c605f16f43cd2b25d3fcc
+size 466600
diff --git a/varmaepretrainingofvariationalmaskedautoencoderfordomainadaptivelanguageunderstanding/layout.json b/varmaepretrainingofvariationalmaskedautoencoderfordomainadaptivelanguageunderstanding/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c813d50841d8cf1f466deb1d24cbee60940fd85f
--- /dev/null
+++ b/varmaepretrainingofvariationalmaskedautoencoderfordomainadaptivelanguageunderstanding/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:da74301ba61fc36b5cd8607769b0827feada9c2473d294c28a4fae552d70db05
+size 439554
diff --git a/visualizingtheobviousaconcretenessbasedensemblemodelfornounpropertyprediction/c4cf24ee-ad26-42f5-a760-5939f0854a7b_content_list.json b/visualizingtheobviousaconcretenessbasedensemblemodelfornounpropertyprediction/c4cf24ee-ad26-42f5-a760-5939f0854a7b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b14c2e2ffea055391bfcb66efe1582e0eabee542
--- /dev/null
+++ b/visualizingtheobviousaconcretenessbasedensemblemodelfornounpropertyprediction/c4cf24ee-ad26-42f5-a760-5939f0854a7b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:09b1ae7f1c9e5391725e04e6bdad118cb412227d1214b106709b0dee17a72350
+size 133550
diff --git a/visualizingtheobviousaconcretenessbasedensemblemodelfornounpropertyprediction/c4cf24ee-ad26-42f5-a760-5939f0854a7b_model.json b/visualizingtheobviousaconcretenessbasedensemblemodelfornounpropertyprediction/c4cf24ee-ad26-42f5-a760-5939f0854a7b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..12d84926dd641d571fb7d27928ee2c08e78820e1
--- /dev/null
+++ b/visualizingtheobviousaconcretenessbasedensemblemodelfornounpropertyprediction/c4cf24ee-ad26-42f5-a760-5939f0854a7b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4eb7ffbfc361a93339650b8c64aa6a2cbb248ce1b6b008a0b224ce0a4f3a225c
+size 156583
diff --git a/visualizingtheobviousaconcretenessbasedensemblemodelfornounpropertyprediction/c4cf24ee-ad26-42f5-a760-5939f0854a7b_origin.pdf b/visualizingtheobviousaconcretenessbasedensemblemodelfornounpropertyprediction/c4cf24ee-ad26-42f5-a760-5939f0854a7b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e25a73967b610d047a7817aec77049bbc2cf4067
--- /dev/null
+++ b/visualizingtheobviousaconcretenessbasedensemblemodelfornounpropertyprediction/c4cf24ee-ad26-42f5-a760-5939f0854a7b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:95136903f678d9ac87a95187bd979472b666635a15241c7e801dff06d2711a01
+size 5282397
diff --git a/visualizingtheobviousaconcretenessbasedensemblemodelfornounpropertyprediction/full.md b/visualizingtheobviousaconcretenessbasedensemblemodelfornounpropertyprediction/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..8ce034d9a5e46d2cedfa24987da90107865d7ca1
--- /dev/null
+++ b/visualizingtheobviousaconcretenessbasedensemblemodelfornounpropertyprediction/full.md
@@ -0,0 +1,453 @@
+# Visualizing the Obvious: A Concreteness-based Ensemble Model for Noun Property Prediction
+
+Yue Yang*, Artemis Panagopoulou*, Marianna Apidianaki, Mark Yatskar, Chris Callison-Burch
+
+Department of Computer and Information Science, University of Pennsylvania
+
+{yueyang1, artemisp, marapi, ccb, myatskar}@seas.upenn.edu
+
+# Abstract
+
+Neural language models encode rich knowledge about entities and their relationships which can be extracted from their representations using probing. Common properties of nouns (e.g., red strawberries, small ant) are, however, more challenging to extract compared to other types of knowledge because they are rarely explicitly stated in texts. We hypothesize this to mainly be the case for perceptual properties which are obvious to the participants in the communication. We propose to extract these properties from images and use them in an ensemble model, in order to complement the information that is extracted from language models. We consider perceptual properties to be more concrete than abstract properties (e.g., interesting, flawless). We propose to use the adjectives' concreteness score as a lever to calibrate the contribution of each source (text vs. images). We evaluate our ensemble model in a ranking task where the actual properties of a noun need to be ranked higher than other non-relevant properties. Our results show that the proposed combination of text and images greatly improves noun property prediction compared to powerful text-based language models.
+
+# 1 Introduction
+
+Common properties of concepts or entities (e.g., "These strawberries are red") are rarely explicitly stated in texts, contrary to more specific properties which bring new information in the communication (e.g., "These strawberries are delicious"). This phenomenon, known as "reporting bias" (Gordon and Van Durme, 2013; Shwartz and Choi, 2020), makes it difficult to learn, or retrieve, perceptual properties from text. However, noun property identification is an important task which may allow AI applications to perform commonsense reasoning in a way that matches people's psychological or cognitive predispositions and can improve agent
+
+
+Task: Retrieve Relevant Properties of Nouns
+Figure 1: Our task is to retrieve relevant properties of nouns from a set of candidates. We tackle the task using (a) Cloze-task probing; (b) CLIP to compute the similarity between the properties and images of the noun; (c) a Concreteness Ensemble Model (CEM) to ensemble language and CLIP predictions which relies on properties' concreteness ratings.
+
+communication (Lazaridou et al., 2016). Furthermore, identifying noun properties can contribute to better modeling concepts and entities, learning affordances (i.e. defining the possible uses of an object based on its qualities or properties), and understanding models' knowledge about the world. Models that combine different modalities provide a sort of grounding which helps to alleviate the reporting bias problem (Kiela et al., 2014; Lazaridou et al., 2015; Zhang et al., 2022). For example, multimodal models are better at predicting color attributes compared to text-based language models (Paik et al., 2021; Norlund et al., 2021). Furthermore, visual representations of concrete objects improve performance in downstream NLP tasks
+
+(Hewitt et al., 2018). Inspired by this line of work, we expect concrete visual properties of nouns to be more accessible through images, and text-based language models to better encode abstract semantic properties. We propose an ensemble model which combines information from these two sources for English noun property prediction.
+
+We frame property identification as a ranking task, where relevant properties for a noun need to be retrieved from a set of candidate properties found in association norm datasets (McRae et al., 2005; Devereux et al., 2014; Norlund et al., 2021). We experiment with text-based language models (Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019) and with CLIP (Radford et al., 2021) which we query using a slot filling task, as shown in Figures 1(a) and (b). Our ensemble model (Figure 1(c)) combines the strengths of the language and vision models, by specifically privileging the former or latter type of representation depending on the concreteness of the processed properties (Brysbaert et al., 2014). Given that concrete properties are characterized by a higher degree of imageability (Friendly et al., 1982), our model trusts the visual model for perceptual and highly concrete properties (e.g., color adjectives: red, green), and the language model for abstract properties (e.g., free, infinite). Our results confirm that CLIP can identify nouns' perceptual properties better than language models, which contain higher-quality information about abstract properties. Our ensemble model, which combines the two sources of knowledge, outperforms the individual models on the property ranking task by a significant margin.
+
+# 2 Related Work
+
+Probing has been widely used in previous work for exploring the semantic knowledge that is encoded in language models. A common approach has been to convert the facts, properties, and relations found in external knowledge sources into "fill-in-the-blank" cloze statements, and to use them to query language models. Apidianaki and Gárí Soler (2021) do so for nouns' semantic properties and highlight how challenging it is to retrieve this kind of information from BERT representations (Devlin et al., 2019). Furthermore, slightly different prompts tend to retrieve different semantic information (Ettinger, 2020), compromising the robustness of semantic probing tasks. We propose to mitigate these problems by also relying on images.
+
+Features extracted from different modalities can complement the information found in texts. Multimodal distributional models, for example, have been shown to outperform text-based approaches on semantic benchmarks (Silberer et al., 2013; Bruni et al., 2014; Lazaridou et al., 2015). Similarly, ensemble models that integrate multimodal and text-based models outperform models that only rely on one modality in tasks such as visual question answering (Tsimpoukelli et al., 2021; Alayrac et al., 2022; Yang et al., 2021b), visual entailment (Song et al., 2022), reading comprehension, natural language inference (Zhang et al., 2021; Kiros et al., 2018), text generation (Su et al., 2022), word sense disambiguation (Barnard and Johnson, 2005), and video retrieval (Yang et al., 2021a). We extend this investigation to noun property prediction.
+
+We propose a novel noun property retrieval model which combines information from language and vision models, and tunes their respective contributions based on property concreteness (Brysbaert et al., 2014). Concreteness is a graded notion that strongly correlates with the degree of imageability (Friendly et al., 1982; Byrne, 1974); concrete words generally tend to refer to tangible objects that the senses can easily perceive (Paivio et al., 1968). We extend this idea to noun properties and hypothesize that vision models would have better knowledge of perceptual, and more concrete, properties (e.g., red, flat, round) than text-based language models, which would better capture abstract properties (e.g., free, inspiring, promising). We evaluate our ensemble model using concreteness scores automatically predicted by a regression model (Charbonnier and Wartena, 2019). We compare these results to the performance of the ensemble model with manual (gold) concreteness ratings (Brysbaert et al., 2014). In previous work, concreteness was measured based on the idea that abstract concepts relate to varied and composite situations (Barsalou and Wiemer-Hastings, 2005). Consequently, visually grounded representations of abstract concepts (e.g., freedom) should be more complex and diverse than those of concrete words (e.g., dog) (Lazaridou et al., 2015; Kiela et al., 2014). Lazaridou et al. (2015) specifically measure the entropy of the vectors induced by multimodal models which serve as an expression of how varied the information they encode is. They demonstrate that the entropy of multimodal vectors strongly correlates with the degree of abstractness of words.
+
+# 3 Experimental Setup
+
+# 3.1 Task Formulation
+
+Given a noun $\mathcal{N}$ and a set of candidate properties $\mathbb{P}$ , a model needs to select the properties $\mathbb{P}_{\mathcal{N}} \subseteq \mathbb{P}$ that apply to $\mathcal{N}$ . The candidate properties are the set of all adjectives retained from a resource (cf. Section 3.2), which characterize different nouns. A model needs to rank properties that apply to $\mathcal{N}$ higher than properties that apply to other nouns in the resource. We consider that a property correctly characterizes a noun, if this property has been proposed for that noun by the annotators.
+
+# 3.2 Datasets
+
+FEATURE NORMS: The McRae et al. (2005) dataset contains feature norms for 541 objects annotated by 725 participants. We follow Apidianaki and Gari Soler (2021) and only use the IS_ADJ features of noun concepts, where the adjective describes a noun property. In total, there are 509 noun concepts with at least one IS_ADJ feature, and 209 unique properties. The FEATURE NORMS dataset contains both perceptual properties (e.g., tall, fluffy) and non-perceptual ones (e.g., intelligent, expensive).
+
+MEMORY COLORS: The dataset contains 109 nouns with an associated image and its corresponding prototypical color. There are 11 colors in total. (Norlund et al., 2021). The data were scraped from existing knowledge bases on the web.
+
+CONCEPT PROPERTIES: This dataset was created at the Centre for Speech, Language and Brain (Devereux et al., 2014). It contains concept property norm annotations collected from 30 participants. The data comprise 601 nouns with 400 unique properties. We keep aside 50 nouns (which are not in FEATURE NORMS and MEMORY COLORS) as our development set (dev). We use the dev for prompt selection and hyper-parameter tuning. We call the rest of the dataset CONCEPT PROPERTIES-test and use it for evaluation.
+
+CONCREteness DATASET: The Brysbaert et al. (2014) dataset contains manual concreteness ratings for 37,058 English word lemmas and 2,896 two-word expressions, gathered through crowdsourcing. The original concreteness scores range from 0 to 5. We map them to [0,1] by dividing each score by 5.
+
+
Dataset
#Ns
#Ps
N-P pairs
Ps per N
FEATURE NORMS
509
209
1592
3.1
CONCEPT PROPERTIES
601
400
3983
6.6
MEMORY COLORS
109
11
109
1.0
+
+Table 1: Statistics of the ground-truth datasets. We show the number of nouns (#Ns), properties (#Ps) and noun-property pairs (N-P pairs), as well as the average number of properties per noun in each dataset.
+
+# 3.3 Models
+
+# 3.3.1 Language Models (LMs)
+
+We query language models about their knowledge of noun properties using cloze-style prompts (cf. Appendix A.1). These contain the nouns in singular or plural form, and the [MASK] token at the position where the property should appear (e.g., "Strawberries are [MASK])". A language model assigns a probability score to a candidate property by relying on the wordpieces preceding and following the [MASK] token, $\mathbf{W}_{\backslash t} = (w_{1},\dots,w_{t - 1},w_{t + 1},\dots,w_{|\mathbf{W}|})^{2}$
+
+$$
+\operatorname {S c o r e} _ {\mathrm {L M}} (\mathcal {P}) = \log P _ {\mathrm {L M}} \left(w _ {t} = \mathcal {P} | \mathbf {W} _ {\backslash t}\right), \tag {1}
+$$
+
+where $P_{\mathrm{LM}}(\cdot)$ is the probability from language model. We experiment with BERT-LARGE (Devlin et al., 2019), ROBERTA-LARGE (Liu et al., 2019), GPT2-LARGE (Radford et al., 2019) and GPT3-DAVINCI, which have been shown to deliver impressive performance in Natural Language Understanding tasks (Yamada et al., 2020; Takase and Kiyono, 2021; Aghajanyan et al., 2021).
+
+Our property ranking setup allows to consider multi-piece adjectives (properties) $^3$ which were excluded from open-vocabulary masking experiments (Petroni et al., 2019; Bouraoui et al., 2020; Apidianaki and Gari Soler, 2021). Since the candidate properties are known, we can obtain a score for a property composed of $k$ pieces $(\mathcal{P} = (w_{t},\dots,w_{t + k}),k\geq 1)$ by taking the average of the scores assigned by the LM to each piece:
+
+$$
+\operatorname {S c o r e} _ {\mathrm {L M}} (\mathcal {P}) = \frac {1}{k} \sum_ {i = 0} ^ {k} \log P _ {\mathrm {L M}} \left(w _ {t + i} \mid \mathbf {W} _ {\backslash t + i}\right) \tag {2}
+$$
+
+We report the results in Appendix E.4 and show that our model is better than other models at retrieving multi-piece properties.
+
+# 3.3.2 Multimodal Language Models (MLMs)
+
+Vision Encoder-Decoder MLMs are language models conditioned on other modalities than text, for example, images. For each noun $\mathcal{N}$ in our datasets, we collect a set of images $\mathbb{I}$ from the web.4 We probe an MLM similarly to LMs, using the same set of prompts. An MLM yields a score for each property given an image $i\in \mathbb{I}$ using Formula 3.
+
+$$
+\operatorname {S c o r e} _ {\mathrm {M L M}} (\mathcal {P}, i) = \log P _ {\mathrm {M L M}} \left(w _ {t} = \mathcal {P} \mid \mathbf {W} _ {\backslash t}, i\right), \tag {3}
+$$
+
+where $P_{\mathrm{MLM}}(\cdot)$ is the probability from multimodal language model. In addition to the context $\mathbf{W}_{\backslash t}$ , the MLM conditions on the image $i$ . Then we aggregate over all the images $\mathbb{I}$ for the noun $\mathcal{N}$ to get the score for the property:
+
+$$
+\operatorname {S c o r e} _ {\mathrm {M L M}} (\mathcal {P}) = \frac {1}{| \mathbb {I} |} \sum_ {i \in \mathbb {I}} \operatorname {S c o r e} _ {\mathrm {M L M}} (\mathcal {P}, i) \tag {4}
+$$
+
+ViLT We experiment with the Transformer-based (Vaswani et al., 2017) ViLT model (Kim et al., 2021) as an MLM. ViLT uses the same tokenizer as BERT and is pretrained on the Google Conceptual Captions (GCC) dataset which contains more than 3 million image-caption pairs for about 50k words (Sharma et al., 2018). Most other vision-language datasets contain a significantly smaller vocabulary (10k words). In addition, ViLT requires minimal image pre-processing and is an open visual vocabulary model. This contrasts with other multimodal architectures which require visual predictions before passing the images on to the multimodal layers (Li et al., 2019; Lu et al., 2019; Tan and Bansal, 2019). These have been shown to only marginally surpass text-only models (Yun et al., 2021).
+
+CLIP We also use the CLIP model which is pretrained on 400M image-caption pairs (Radford et al., 2021). CLIP is trained to align the embedding spaces learned from images and text using contrastive loss as a learning objective. The CLIP model integrates a text encoder $f_{\mathrm{T}}$ and a visual
+
+
+Top-1: An object with the property of showy. Bottom-1: An object with the property of tartan.
+Figure 2: Examples of Top-1 and Bottom-1 prompts ranked by CLIP.
+
+
+Top-1: An object with the property of yellow. Bottom-1: An object with the property of kneaded.
+
+encoder $f_{\mathrm{V}}$ which separately encode the text and image to vectors with the same dimension. Given a batch of image-text pairs, CLIP maximizes the cosine similarity for matched pairs while minimizing the cosine similarity for unmatched pairs.
+
+We use CLIP to compute the cosine similarity of an image $i \in \mathbb{I}$ and this text prompt $(s_{\mathcal{P}})$ : "An object with the property of [MASK]", where the [MASK] token is replaced with a candidate property $\mathcal{P} \in \mathbb{P}$ . The score for each property $\mathcal{P}$ is the mean similarity between the sentence prompt $s_{\mathcal{P}}$ and all images $\mathbb{I}$ collected for a noun:
+
+$$
+\operatorname {S c o r e} _ {\mathrm {C L I P}} (\mathcal {P}) = \frac {1}{| \mathbb {I} |} \sum_ {i \in \mathbb {I}} \cos \left(f _ {\mathrm {T}} \left(s _ {\mathcal {P}}\right), f _ {\mathrm {V}} (i)\right) \tag {5}
+$$
+
+This score serves to rank the candidate properties according to their relevance for a specific noun. Figure 2 shows the most and least relevant properties for the nouns peacock and sunflower.
+
+# 3.3.3 Concreteness Ensemble Model (CEM)
+
+The concreteness score for a property guides CEM towards "trusting" the language or the vision model more. We propose two CEM flavors which we describe as CEM-PRED and CEM-GOLD. CEM-PRED uses the score $(c_{\mathcal{P}} \in [0,1])$ that is proposed by our concreteness prediction model for every candidate property $\mathcal{P} \in \mathbb{P}$ , while CEM-GOLD uses the score for $\mathcal{P}$ in the Brysbaert et al. (2014) dataset. $^8$ If there is no gold score for a property, we use the score of the word with the longest matching subsequence in the dataset. $^9$ The idea behind this
+
+
Model
Prompt Selected
BERT
Most [NOUN-plural] are [MASK].
ROBERTA
A/An [NOUN-singular] is generally [MASK].
GPT-2
Most [NOUN-plural] are [MASK].
ViLT
[NOUN-plural] are [MASK].
CLIP
An object with the property of [MASK].
+
+Table 2: The prompt template selected for each model.
+
+heuristic is that properties without ground truth concreteness scores often have inflected forms or derivations in the dataset (e.g., sharpened/sharpen, invented/invention, etc.).10 We also experimented with GLOVE word embedding cosine similarity which resulted in suboptimal performance (cf. Section 4). Additionally, sequence matching is much faster than GLOVE similarity (cf. Appendix B).
+
+Both CEMs combine the rank11 of $\mathcal{P}$ proposed by the language model (RankLM) and by CLIP (RankCLIP) through a weighted sum which is controlled by the concreteness score, $c_{\mathcal{P}}$ :
+
+$$
+\begin{array}{l} \operatorname {R a n k} _ {\mathrm {C E M}} (\mathcal {P}) = \left(1 - c _ {\mathcal {P}}\right) \cdot \operatorname {R a n k} _ {\mathrm {L M}} (\mathcal {P}) \tag {6} \\ + c _ {\mathcal {P}} \cdot \operatorname {R a n k} _ {\mathrm {C L I P}} (\mathcal {P}) \\ \end{array}
+$$
+
+# 3.3.4 Concreteness Prediction Model
+
+We generate concreteness scores using the model of Charbonnier and Wartena (2019) with FastText embeddings (Bojanowski et al., 2017). The model leverages part-of-speech and suffix features to predict concreteness in a classical regression setting. We train the model on the 40k concreteness dataset (Brysbaert et al., 2014), excluding the 425 adjectives found in our test sets. The model obtains a high Spearman $\rho$ correlation of 0.76 with the ground truth scores of the adjectives in our test sets. This result shows that automatically predicted scores are a viable alternative which allows the application of the method to new data and domains where hand-crafted resources might be unavailable.
+
+# 3.3.5 Baselines
+
+We compare the predictions of the language, vision, and ensemble models to the predictions of three baseline methods.
+
+RANDOM: Generates a RANDOM property ranking for each noun.
+
+GLOVE: Ranking based on the cosine similarity of the GLOVE embeddings (Pennington et al., 2014) of the noun and the property.
+
+GOOGLE NGRAM: Ranking by the bigram frequency of each noun-property pair in Google Ngrams (Brants and Franz, 2009). If a noun-property pair does not appear in the corpus, we assign to it a frequency of 0.
+
+# 3.4 Evaluation Metrics
+
+We evaluate the property ranking proposed by each model using the top-K Accuracy (A@K), top-K recall (R@K), and Mean Reciprocal Rank (MRR) metrics. A@K is defined as the percentage of nouns for which at least one ground truth property is among the top-K predictions (Ettinger, 2020). R@K shows the proportion of ground truth properties retrieved in the top-K predictions. We report the average R@K across all nouns in a test set. MRR stands for the ground truth properties' average reciprocal ranks (more precisely, the inverse of the rank, $\frac{1}{\text{rank}}$ ). For all three metrics, high scores are better.
+
+# 3.5 Implementation Details
+
+Prompt Selection We evaluate the performance of BERT-LARGE, ROBERTA-LARGE, GPT-2-LARGE, and VILT on the dev set (cf. Section 3.2) using the prompt templates proposed by Apidianaki and Gari Soler (2021). For CLIP, we handcraft a set of prompts that are close to the format that was recommended in the original paper (Radford et al., 2021) and evaluate their performance on the dev set. We choose the prompt that yields the highest performance in terms of MRR on the dev set for each model, and use it for all our experiments (cf. Appendix A for details). Table 2 lists the prompt templates selected for each model.
+
+Image Collection We collect images for the nouns in our datasets using the Bing Image Search API, an image query interface widely used for research purposes (Kiela et al., 2016; Mostafazadeh et al., 2016).12 We use again the dev set to determine the number of images needed for each noun. We find that good performance can be achieved with only ten images (cf. Figure 7 in Appendix C.1). Adding more images increases the computations needed without significantly improving the performance. Therefore, we set the number of images per noun to ten for all vision models and experiments.
+
+
Model
#Param
Img
FEATURE NORMS
CONCEPT PROPERTIES-test
MEMORY COLORS
A@1
A@5
R@5
R@10
MRR
A@1
A@5
R@5
R@10
MRR
A@1
A@2
A@3
RANDOM
0
X
1.0
2.4
0.7
1.4
.018
0.2
3.8
0.5
1.7
.014
11.9
20.2
25.7
GLOVE
0
X
16.3
42.2
16.4
26.6
.124
18.5
46.6
9.5
16.4
.078
28.4
45.0
60.1
GOOGLE-NGRAM
0
X
23.4
65.2
31.5
47.7
.192
27.9
72.1
18.5
30.3
.122
44.0
63.3
69.7
BERT-LARGE
345M
X
27.3
60.3
29.4
43.6
.194
31.4
72.1
18.2
29.2
.123
44.0
57.8
67.9
ROBERTA-LARGE
354M
X
24.6
63.1
30.2
46.3
.188
34.1
79.1
22.4
34.8
.138
48.6
61.5
67.9
GPT2-LARGE
1.5B
X
22.0
60.7
28.4
42.9
.173
35.6
77.0
21.0
32.4
.136
44.0
57.8
67.9
GPT3-DAVINCI
175B
X
37.9
61.5
31.8
44.2
-
47.0
72.2
20.1
29.7
-
74.3
82.6
84.4
VILT
135M
✓
27.9
56.0
26.2
40.1
.185
34.5
63.2
15.7
23.7
.118
74.3
-
-
CLIP-ViT/L14
427M
✓
28.5
61.7
29.4
42.7
.197
29.2
63.0
15.0
24.9
.113
84.4
91.7
97.2
CEM-GOLD (GloVe)
781M
✓
38.9
75.6
39.4
53.3
.249
48.6
84.8
27.0
39.3
.171
83.5
92.7
99.1
CEM-GOLD
781M
✓
40.1
76.2
40.0
53.3
.252
48.5
84.2
26.8
38.8
.170
83.5
92.7
99.1
CEM-PRED
781M
✓
39.9
75.8
40.0
52.5
.251
49.9
85.8
28.1
40.0
.175
88.1
96.3
99.1
+
+Table 3: Results obtained on the three datasets. The best result for each metric is marked in boldface.
+
+
Noun
Property
most concrete
least concrete
dandelion
yellow
annoying
cougar
brown
vicious
wand
round
magical
spear
sharp
dangerous
pyramid
triangular
mysterious
+
+Table 4: Examples of nouns with their most and least concrete properties in FEATURE NORMS.
+
+Model Implementation All LMs and MLMs are built on the huggingface API. $^{13}$ The CLIP model is adapted from the official repository. $^{14}$ CEM ensembles the ROBERTA-LARGE and the CLIP-ViT/L14 models. The experiments were run on Quadro RTX 6000 24GB. All our experiments involve zero-shot and one-shot (for GPT-3) probing, hence no training of the models is needed. The inference time of CEM is naturally longer than that of individual models, but it is still very fast and only takes a few minutes for each dataset, with pre-computed image features. For more details on runtime refer to Section B, and specifically to Table 10, in the Appendix.
+
+# 4 Evaluation
+
+# 4.1 Property Ranking Task
+
+Table 3 shows the results obtained by the LMs, the MLMs and our CEM model on the FEATURE NORMS, CONCEPT PROPERTIES-test $^{15}$ and MEMORY COLORS datasets. The two flavors of CEM (CEM-PRED and CEM-GOLD) outperform all other models with a significant margin across datasets. Interestingly, CEM-PRED performs better than CEM-GOLD on the CONCEPT
+
+
+Figure 3: Top-1 Accuracy for the FEATURE NORMS properties filtered by concreteness. The average concreteness score for each band is given on the x-axis. The error bars in the "random" category represent the standard deviation on 10 trials.
+
+PROPERTIES-test dataset. This may be due to the fact that 49 properties in this dataset do not have ground truth concreteness scores (vs. only 15 properties in FEATURE NORMS), indicating that the prediction model probably approximates concreteness better in these cases, contributing to higher scores for CEM-PRED.
+
+As explained in Section 3.3.3, we explore two different heuristics to select the score for these properties for CEM-GOLD: longest matching subsequence and GloVE cosine similarity. The latter similarity metric results to a drop in performance on FEATURE NORMS and almost identical performance for CONCEPT-PROPERTIES-test. $^{16}$
+
+We notice that the GOOGL-NGRAM baseline performs well on FEATURE NORMS with results on par or superior to big LMs. The somewhat lower results obtained on CONCEPT PROPERTIES-test might be due to the higher number of properties in this dataset (cf. Table 1), which makes the ranking
+
+
+Figure 4: The average Rank Improvement (RI) score for properties in the CONCEPT PROPERTIES-test grouped in ten bins according to their concreteness. The higher the concreteness score of the properties in a bin, the larger the improvement brought by CEM-GOLD and CEM-PRED over ROBERTA.
+
+task more challenging. $^{17}$ There is also a higher number of noun-property pairs that are not found in Google Bigrams which are assigned a zero score. $^{18}$
+
+The MEMORY COLORS dataset associates each noun with a single color so we only report Accuracy at top-K (last three columns of Table 3). We can compare these scores to a previous baseline, the top-1 Accuracy reported by Norlund et al. (2021) for the CLIP-BERT model which is 78.5.[19] CEM-PRED and GOLD both do better on this dataset (88.1). GPT-3 gets much higher scores than the other three language models on this task with a top-1 Accuracy of 74.3, but is outperformed by CLIP and CEM. Note that MRR does not apply to GPT-3 since it generates properties instead of reranking them (cf. Appendix A.3).
+
+The multimodal model with the lowest performance, VILT, is as good as GPT-3. CLIP falls halfway between VILT and CEM-PRED/GOLD. CEM-PRED and CEM-GOLD present a clear advantage compared to language and multimodal models, achieving a top-1 Accuracy of 88.1. Although ROBERTA gets very low Accuracy on MEMORY COLORS, it does not hurt performance when combined with CLIP in our CEM-GOLD model. This is because the color properties in this dataset have high concreteness scores (0.82 on aver
+
+
+Figure 5: Top-1 Accuracy obtained by different ensemble models on the FEATURE NORMS. The x-axis shows the weight used to interpolate two models. The straight dashed and dotted lines are the top-1 Accuracy of CEM-GOLD (40.1) and CEM-PRED (39.9) respectively.
+
+age), so CEM-GOLD relies mainly on CLIP which works very well in this setting. CEM-GOLD makes the same top-1 predictions as CLIP for 95 nouns (out of 109), while only 50 nouns are assigned the same color by CEM-GOLD and ROBERTA.
+
+# 4.2 Additional Analysis
+
+Concreteness level. We examine the performance of each model for properties at different concreteness levels. From the properties available for a noun in FEATURE NORMS,[20] we keep a single property as our ground truth for this experiment: (a) most concrete: the property with the highest concreteness score in the Brysbaert et al. (2014) lexicon; (b) least concrete: the property with the lowest concreteness score; (c) random: a randomly selected property.[21] Figure 3 shows the top-1 Accuracy of the models for the properties in each concreteness band. Examples of nouns with their most and least concrete properties are given in Table 4. The results of this experiment confirm our initial assumption that MLMs (e.g., CLIP and ViLT) are better at capturing concrete properties, and LMs (e.g., ROBERTA and GPT-2) are better at identifying abstract ones. GPT-3 is the only LM that performs better for concrete than for abstract properties, while still falling behind CEM variations.
+
+Rank Improvement. We investigate the relationship between the performance of CEM and the concreteness score of the properties in CONCEPT
+
+
Noun
Model
Top-3 Properties
swan
ROBERTA
male, white, black
CLIP
white, graceful, gentle
GPT-3
graceful, regal, stately
CEM-GOLD
white, large, graceful
CEM-PRED
white, endangered, graceful
ox
ROBERTA
male, white, black
CLIP
endangered, wild, harvested
GPT-3
strong, muscular, brawny
CEM-GOLD
large, wild, friendly
CEM-PRED
large, wild, hairy
plum
RBTA
edible, yellow, red
CLIP
purple, edible, picked
CEM
edible, purple, harvested
GPT-3
tart, acidic, sweet
orange
ROBERTA
edible, yellow, orange
CLIP
orange, citrus, juicy
GPT-3
tart, acidic, sweet
CEM-GOLD
orange, edible, healthy
CEM-PRED
orange, edible,citrus
cape
ROBERTA
black, white, fashionable
CLIP
cozy, dressy, cold
GPT-3
tart, acidic, sweet
CEM-GOLD
fashionable, dark, grey
CEM-PRED
fashionable,grey,dark
+
+Table 5: Top-3 properties proposed by different models for nouns in FEATURE NORMS.
+
+PROPERTIES-test. We measure the rank improvement (RI) for a property $(\mathcal{P})$ that occurs when using CEM compared to when ROBERTA is used as follows:
+
+$$
+\operatorname {R I} (\mathcal {P}) = \operatorname {R a n k} _ {\mathrm {C E M}} (\mathcal {P}) - \operatorname {R a n k} _ {\mathrm {R o B E R T a}} (\mathcal {P}) \tag {7}
+$$
+
+A high RI score for $\mathcal{P}$ means that its rank is improved with CEM compared to ROBERTA. We calculate the RI for properties at different concreteness levels. We sort the 400 properties in CONCEPT PROPERTIES-test by increasing the concreteness score and group them into ten bins of 40 properties each. We find a clear positive relationship between the average RI and concreteness scores within each bin, as shown in Figure 4. This confirms that both CEM-PRED and CEM-GOLD perform better with concrete properties.
+
+Ensemble Weight Selection. We explore whether a dynamic concreteness-based ensemble weight outperforms a fixed one. We experiment with different model combinations (ROBERTA with BERT, GPT-2, and ViLT) with an interpolation weight $w$ that takes values in the range [0,1]. If the weight is close to 0, CEM relies more on ROBERTA; if it is 1, CEM relies more on the
+
+
+
+
+Figure 6: Number of nouns in FEATURE NORMS and CONCEPT PROPERTIES-test for which a model proposed the same top-3 properties in the same order.
+
+second model.
+
+$$
+\begin{array}{l} \operatorname {R a n k} _ {\text {c o m b i n e}} (\mathcal {P}) = (1 - w) \cdot \operatorname {R a n k} _ {\text {R o B E R T a}} (\mathcal {P}) \\ + w \cdot \operatorname {R a n k} _ {\text {o t h e r m o d e l}} (\mathcal {P}) \tag {8} \\ \end{array}
+$$
+
+We also run the best performing ROBERTA + CLIP combination again using weights fixed in this way, i.e. without recourse to the properties' concreteness score as in CEM-PRED and in CEM-GOLD. Note that we do not expect the combination of two text-based LMs to improve Accuracy a lot compared to ROBERTA alone. Our intuition is confirmed by the results obtained on FEATURE NORMS and shown in Figure 5.
+
+The dashed and dotted straight lines in the figure represent the top-1 Accuracy of CEM-GOLD and CEM-PRED, respectively, when the weights used are not the ones on the x-axis, but the gold and predicted concreteness scores (cf. Equation 6). To further highlight the importance of concreteness in interpolating the models, we provide additional results and comparisons in Appendix D.2. Note that CEM-GOLD and CEM-PRED have highly similar performance and actual output. On average over all nouns, they propose 4.35 identical properties at top-5 for nouns in FEATURE NORMS, and 4.41 for nouns in CONCEPT PROPERTIES-test.
+
+We observe a slight improvement in top-1 Accuracy (5%) when ensembling two text-based LMs (ROBERTA + BERT or ROBERTA + GPT-2). Text-based LMs have similar output distributions, hence combining them does not change the final distribution much. The ROBERTA + ViLT ensemble model achieves higher performance due to the interpolation with an image-based model,
+
+but it does not reach the Accuracy of the CEM models (ROBERTA + CLIP). The ViLT model gets lower performance than CLIP when combined with ROBERTA, because it was exposed to much less data than CLIP during training (400M vs. 30M). Finally, we notice that the best performance of ROBERTA + CLIP with fixed weight is slightly lower than that of the CEM models. This indicates that using a fixed weight to ensemble two models hurts performance compared to calibrating their mutual contribution using the concreteness score. Another advantage of the concreteness score is that it is more transferable since it does not require tuning on new datasets.
+
+Properties Quality. Table 5 shows a random sample of the top-3 predictions made by each model for nouns in CONCEPT PROPERTIES-test. We notice that the properties proposed by the two flavors of CEM are both perceptual and abstract, due to their access to both a language and a vision model. We further observe that CEM retrieves rarer and more varied properties for different nouns, compared to the language models.[22]
+
+Figure 6 shows the number of nouns for which a model made the exact same top-3 predictions.[23] For example GPT-3 proposed the properties [tart, acidic, sweet, juicy, smooth] for 20 different nouns[24] in the same order. Note that better prompt engineering might decrease the number of repeated properties. However, we are already prompting GPT-3 with one shot, whereas the other models, including CEM are zero-shot. ROBERTA predicted [male, healthy, white, black, small] for both mittens and penguin, and [male, black, white, brown, healthy] for owl and flamingo. We observe that CEM-PRED and CEM-GOLD are less likely to retrieve the same top-K predictions for a noun than language models. CEM combines the variability and accuracy of CLIP with the benefits of text-based models, which are exposed to large volumes of texts during pre-training.
+
+# 5 Conclusion
+
+We propose a new ensemble model for noun property prediction which leverages the strengths of
+
+language models and multimodal (vision) models. Our model, CEM, calibrates the contribution of the two types of models in a property ranking task by relying on the properties' concreteness level. The results show that the CEM model, which combines ROBERTA and CLIP outperforms powerful text-based language models (such as GPT-3) with significant margins in three evaluation datasets. Additionally, our methodology yields better performance than alternative ensembling techniques, confirming our hypothesis that concrete properties are more accessible through images, and abstract properties through text. The Accuracy scores obtained on the larger datasets show that there is still room for improvement for this challenging task.
+
+# 6 Limitations
+
+Our experiments address concreteness at the lexical level, specifically using scores assigned to adjectives in an external resource (Brysbaert et al., 2014) or predicted using (Charbonnier and Wartena, 2019). Another option would be to use the concreteness of the noun phrases formed by the adjectives and the nouns they modify. We would expect this to be different than the concreteness of adjectives in isolation since the concreteness of the nouns would have an impact on that of the resulting phrase (e.g., useful knife vs. useful idea). We were not able to evaluate the impact of noun phrase concreteness on property prediction because the property datasets used in our experiments mostly contain concrete nouns. Another limitation of our methodology is the reliance on pairing images with nouns. In particular, we use a search engine to retrieve images corresponding to nouns in order to get grounded predictions from the vision model. Finally, we only evaluate our methodology in English and leave experimenting with other languages to future work, since this would require the collection of multi-lingual semantic association datasets and/or the translation of existing ones. We did not pursue this extension for this paper as MULTILINGUAL CLIP model weights only became available very recently.
+
+# 7 Acknowledgements
+
+We thank Marco Baroni for his feedback on an earlier version of the paper. This research is based upon work supported in part by the DARPA KAIROS Program (contract FA8750-19-2-1004), the DARPA LwLL Program (contract FA8750-
+
+19-2-0201), the IARPA BETTER Program (contract 2019-19051600004), and the NSF (Award 1928631). Approved for Public Release, Distribution Unlimited. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, IARPA, NSF, or the U.S. Government.
+
+# References
+
+Armen Aghajanyan, Anchit Gupta, Akshit Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta. 2021. Muppet: Massive Multi-task Representations with Pre-Finetuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5799-5811.
+Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. 2022. Flamingo: a visual language model for few-shot learning. arXiv preprint arXiv:2204.14198.
+Marianna Apidianaki and Aina Gari Soler. 2021. ALL dolphins are intelligent and SOME are friendly: Probing BERT for nouns' semantic properties and their prototypicality. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 79-94, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Kobus Barnard and Matthew Johnson. 2005. Word sense disambiguation with pictures. Artificial Intelligence, 167(1-2):13-30.
+Lawrence W. Barsalou and Katja Wiemer-Hastings. 2005. Situating abstract concepts. pages 129-163.
+Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
+Zied Bouraoui, José Camacho-Collados, and Steven Schockaert. 2020. Inducing Relational Knowledge from BERT. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, pages 7456-7463, New York, NY, USA. AAAI Press.
+Thorsten Brants and Alex Franz. 2009. Web 1T 5-gram, 10 European languages version 1. Linguistic Data Consortium, Philadelphia.
+Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2014. Multimodal Distributional Semantics. J. Artif. Int. Res., 49(1):1-47.
+Marc Brysbaert, Amy Beth Warriner, and Victor Kuperman. 2014. Concreteness ratings for 40 thousand generally known english word lemmas. Behavior research methods, 46(3):904-911.
+
+Brian Byrne. 1974. Item concreteness vs spatial organization as predictors of visual imagery. *Memory & Cognition*, 2(1):53-59.
+Jean Charbonnier and Christian Wartena. 2019. Predicting word concreteness and imagery. In Proceedings of the 13th International Conference on Computational Semantics-Long Papers, pages 176-187. Association for Computational Linguistics.
+Barry J Devereux, Lorraine K Tyler, Jeroen Geertzen, and Billi Randall. 2014. The Centre for Speech, Language and the Brain (CSLB) concept property norms. Behavior research methods, 46(4):1119-1127.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34-48.
+Michael Friendly, Patricia E Franklin, David Hoffman, and David C Rubin. 1982. The Toronto Word Pool: Norms for imagery, concreteness, orthographic variables, and grammatical usage for 1,080 words. Behavior Research Methods & Instrumentation, 14(4):375-399.
+Jonathan Gordon and Benjamin Van Durme. 2013. Reporting Bias and Knowledge Acquisition. In Proceedings of the 2013 Workshop on Automated Knowledge Base Construction, AKBC '13, page 25-30, New York, NY, USA. Association for Computing Machinery.
+Aurelie Herbelot and Eva Maria Vecchi. 2015. Building a shared world: Mapping distributional to model-theoretic semantic spaces. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 22-32.
+John Hewitt, Daphne Ippolito, Brendan Callahan, Reno Kriz, Derry Tanti Wijaya, and Chris Callison-Burch. 2018. Learning translations via images with a massively multilingual image dataset. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2566-2576, Melbourne, Australia. Association for Computational Linguistics.
+Douwe Kiela, Felix Hill, Anna Korhonen, and Stephen Clark. 2014. Improving multi-modal representations using image dispersion: Why less is sometimes more. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 835-841, Baltimore, Maryland. Association for Computational Linguistics.
+
+Douwe Kiela, Anita Lilla Verő, and Stephen Clark. 2016. Comparing data sources and architectures for deep visual representation learning in semantics. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 447-456, Austin, Texas. Association for Computational Linguistics.
+Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt: Vision-and-language transformer without convolution or region supervision. In International Conference on Machine Learning, pages 5583-5594. PMLR.
+Jamie Kiros, William Chan, and Geoffrey Hinton. 2018. Illustrative language understanding: Large-scale visual grounding with image search. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 922-933.
+Angeliki Lazaridou, Nghia The Pham, and Marco Baroni. 2015. Combining language and vision with a multimodal skip-gram model. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 153–163, Denver, Colorado. Association for Computational Linguistics.
+Angeliki Lazaridou, Nghia The Pham, and Marco Baroni. 2016. The red one!: On learning to refer to things based on discriminative properties. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 213-218, Berlin, Germany. Association for Computational Linguistics.
+Lianian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
+Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. Advances in neural information processing systems, 32.
+Ken McRae, George S Cree, Mark S Seidenberg, and Chris McNorgan. 2005. Semantic feature production norms for a large set of living and nonliving things. Behavior research methods, 37(4):547-559.
+Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in Pre-Training Distributed Word Representations. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018).
+
+Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Vanderwende. 2016. Generating natural questions about an image. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1802-1813, Berlin, Germany. Association for Computational Linguistics.
+Tobias Norlund, Lovisa Hagstrom, and Richard Johansson. 2021. Transferring knowledge from vision to language: How to achieve it and how to measure it? In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 149-162, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Cory Paik, Stephane Aroca-Ouellette, Alessandro Roncone, and Katharina Kann. 2021. The World of an Octopus: How Reporting Bias Influences a Language Model's Perception of Color. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 823-835, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Allan Paivio, John C Yuille, and Stephen A Madigan. 1968. Concreteness, imagery, and meaningfulness values for 925 nouns. Journal of experimental psychology, 76(1p2):1.
+Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.
+Fabio Petroni, Tim Rocttäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473, Hong Kong, China. Association for Computational Linguistics.
+Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
+Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556-2565.
+
+Vered Shwartz and Yejin Choi. 2020. Do neural language models overcome reporting bias? In Proceedings of the 28th International Conference on Computational Linguistics, pages 6863-6870, Barcelona, Spain (Online). International Committee on Computational Linguistics.
+Carina Silberer, Vittorio Ferrari, and Mirella Lapata. 2013. Models of semantic representation with visual attributes. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 572-582, Sofia, Bulgaria. Association for Computational Linguistics.
+Haoyu Song, Li Dong, Weinan Zhang, Ting Liu, and Furu Wei. 2022. CLIP Models are Few-Shot Learners: Empirical Studies on VQA and Visual Entailment. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6088-6100.
+Yixuan Su, Tian Lan, Yahui Liu, Fangyu Liu, Dani Yogatama, Yan Wang, Lingpeng Kong, and Nigel Collier. 2022. Language Models Can See: Plugging Visual Controls in Text Generation. arXiv preprint arXiv:2205.02655.
+Sho Takase and Shun Kiyono. 2021. Lessons on parameter sharing across layers in transformers. arXiv preprint arXiv:2104.06022.
+Hao Tan and Mohit Bansal. 2019. LXMERT: Learning Cross-Modality Encoder Representations from Transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5100-5111.
+Maria Tsimpoukelli, Jacob L Menick, Serkan Cabi, SM Eslami, Oriol Vinyals, and Felix Hill. 2021. Multimodal few-shot learning with frozen language models. Advances in Neural Information Processing Systems, 34:200-212.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
+Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyls, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv preprint:1609.08144.
+
+Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6442-6454.
+Yue Yang, Joongwon Kim, Artemis Panagopoulou, Mark Yatskar, and Chris Callison-Burch. 2021a. Induce, edit, retrieve: Language grounded multimodal schema for instructional video retrieval. arXiv preprint arXiv:2111.09276.
+Yue Yang, Artemis Panagopoulou, Qing Lyu, Li Zhang, Mark Yatskar, and Chris Callison-Burch. 2021b. Visual goal-step inference using wikihow. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2167-2179.
+Tian Yun, Chen Sun, and Ellie Pavlick. 2021. Does vision-and-language pretraining improve lexical grounding? In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4357-4366, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Chenyu Zhang, Benjamin Van Durme, Zhuowan Li, and Elias Stengel-Eskin. 2022. Visual commonsense in pretrained unimodal and multimodal models. arXiv preprint arXiv:2205.01850.
+Lisai Zhang, Qingcai Chen, Joanna Siebert, and Buzhou Tang. 2021. Semi-supervised Visual Feature Integration for Language Models through Sentence Visualization. In Proceedings of the 2021 International Conference on Multimodal Interaction, pages 682-686.
+
+# A Prompt Selection
+
+# A.1 Language Model Prompts
+
+In our experiments with language models, we use the 11 prompts proposed by Apidianaki and Gari Soler (2021) for retrieving noun properties. As shown in Table 6, these involve nouns in singular and plural forms. The performance achieved by each language model with these prompts on the CONCEPT PROPERTIES development set is given in Table 8. The results show that model performance varies significantly with different prompts. The best-performing prompt is different for each model. For BERT and GPT-2, the "most + PLURAL" obtains the highest Recall and MRR scores. The best performing prompt for ROBERTA-LARGE is "SINGULAR + generally", and "PLURAL" for VILT.
+
+
Prompt Type
Prompt Example
SINGULAR
a motorcycle is [MASK].
PLURAL
motorcycles are [MASK].
SINGULAR + usually
a motorcycle is usually [MASK].
PLURAL + usually
motorcycles are usually [MASK].
SINGULAR + generally
a motorcycle is generally [MASK].
PLURAL + generally
motorcycles are generally [MASK].
SINGULAR + can be
a motorcycle can be [MASK].
PLURAL + can be
motorcycles can be [MASK].
most + PLURAL
most motorcycles are [MASK].
all + PLURAL
all motorcycles are [MASK].
some + PLURAL
some motorcycles are [MASK].
+
+# A.2 CLIP Prompts
+
+For CLIP, we handcraft ten prompts and report their performance on the CONCEPT PROPERTIES development set in Table 7. Similar to what we observed with language models, CLIP performance is also sensitive to the prompts used. We select for our experiments the prompt "An object with the property of [MASK]". which obtains the highest average Accuracy and MRR score on the CONCEPT PROPERTIES development set.
+
+Table 6: Prompts used for language models.
+
+
Prompt Type
Acc@1
R@5
R@10
MRR
[MASK]
26.0
13.1
21.9
.097
This is [MASK].
28.0
9.6
13.6
.089
A [MASK] object.
22.0
13.2
18.9
.089
This is a [MASK] object.
22.0
12.0
17.2
.087
The item is [MASK].
18.0
7.5
17.2
.074
The object is [MASK].
24.0
10.5
16.2
.088
The main object is [MASK].
24.0
10.3
20.3
.091
An object which is [MASK].
28.0
13.7
19.9
.106
An object with the property of [MASK].
32.0
12.3
20.0
.108
+
+Table 7: Full results of CLIP-ViT/L14 on the CONCEPT PROPERTIES development set.
+
+# A.3 GPT-3 Prompts
+
+Since we do not have complete control of GPT-3 at this moment, we treat GPT-3 as a question-answering model using the following prompt in a one-shot example setting:
+
+Use ten adjectives to describe the properties of kiwi:\n
+
+1. tart\n2. acidic\n3. sweet\n
+
+4. juicy\n5. smooth\n6. fuzzy\n
+
+7. green\n8. brown\n9. small\n
+
+10. round\n
+
+Use ten adjectives to describe
+
+the properties of [NOUN]:\n
+
+We use the text- davinci- 001 engine of GPT3 which costs $0.06 per 1,000 tokens. On average, it costs$ 0.007 to generate 10 properties for each noun.
+
+# B Inference Times
+
+Table 10 provides details about the runtime of the experiments. The second column of the Table indicates whether a model uses images. Training the concreteness predictor for CEM-PRED takes 10 minutes. Inference for all nouns in the datasets with CEM-PRED only takes a couple of seconds. Note that CEM-PRED is faster than CEM-GOLD, since CEM-GOLD leverages the longest matching subsequence heuristic (LMS) or GloVe vector cosine similarity in order to find the concreteness score of the most similar word in Brysbaert et al. (2014) for properties without a gold concreteness score. The times reported in the table for image feature pre-computation correspond to the time needed for computing embeddings for 200 images for each noun in a dataset, which is only computed once for each dataset. We, however, only use 10 of them for the final CEM models (cf. Appendix C.1).
+
+# C Implementation of CLIP
+
+# C.1 Number of Images
+
+For each noun, we collected 200 images from Bing. Given that it is not practical to use such a high number of images for a large-scale experiment, we investigate the performance of CLIP with different number of images. We first filter the 200 images collected for each noun to remove duplicates. We then sort the remaining images based on the cosine similarity of each image with the sentence "A photo of [NOUN]".
+
+
Prompt Type
BERT-large
RoBERTa-large
GPT-2-large
ViLT
R@5
R@10
MRR
R@5
R@10
MRR
R@5
R@10
MRR
R@5
R@10
MRR
SINGULAR
8.9
17.3
.067
17.1
23.6
.092
14.0
27.5
.097
12.6
18.2
.085
PLURAL
11.5
21.9
.070
10.5
21.1
.085
14.9
23.7
.098
15.5
24.5
.105
SINGULAR + usually
12.7
24.5
.082
15.5
26.5
.098
16.2
25.3
.107
11.8
18.7
.088
PLURAL + usually
14.4
27.6
.107
13.3
23.7
.106
17.8
24.6
.113
15.6
21.7
.091
SINGULAR + generally
14.3
23.6
.087
17.7
27.9
.119
18.7
29.2
.114
12.7
19.4
.083
PLURAL + generally
15.0
26.7
.097
16.0
25.3
.105
17.4
26.7
.128
9.8
18.6
.075
SINGULAR + can be
12.4
23.9
.102
14.7
22.7
.090
14.3
24.7
.105
9.2
14.1
.056
PLURAL + can be
16.0
26.4
.107
12.1
17.7
.073
10.2
18.3
.096
10.0
14.2
.060
most + PLURAL
16.7
27.3
.107
12.6
25.7
.098
20.0
33.4
.122
12.6
20.8
.095
all + PLURAL
13.4
20.5
.083
8.2
13.5
.073
19.6
31.3
.113
14.4
20.4
.103
some + PLURAL
11.2
21.5
.082
16.4
23.5
.100
15.4
31.5
.097
10.7
17.2
.091
+
+Table 8: Full results of language models on the CONCEPT PROPERTIES development set with different prompts. The best scores for each metric are **bold**. The best prompt for each model is **highlighted**, selected based on the average performance over all metrics.
+
+
FEATURE NORMS
CONCEPT PROPERTIES-test
MEMORY COLORS
Acc@1
R@5
R@10
MRR
Acc@1
R@5
R@10
MRR
Acc@1
Acc@3
Acc@5
CLIP-ViT/B32
24.8
24.8
36.1
.172
27.6
13.0
19.6
.097
83.5
95.4
99.1
CLIP-ViT/B16
25.3
27.4
38.9
.184
28.3
14.3
22.0
.103
87.2
96.3
98.2
CLIP-ViT/L14
26.1
29.2
43.3
.192
29.2
15.0
24.9
.113
82.6
96.3
99.1
+
+Table 9: Performance of CLIP models with different sizes.
+
+
Model
Img
FEATURE NORMS
CONCEPT PROPERTIES-test
MEMORY COLORS
Time
Image Features
+Pre-Computation
Time
Image Features
+Pre-Computation
Time
Image Features
+Pre-Computation
GLOVE
X
11 sec.
-
12 sec.
-
10 sec.
-
GOOGLE NGRAM
X
15 min.
-
15 min.
-
15 min.
-
BERT-LARGE
X
3 min. 18 sec.
-
7 min 33 sec.
-
4 sec.
-
ROBERTA-LARGE
X
2 min. 31 sec.
-
5 min 50 sec
-
3 sec.
-
GPT2-LARGE
X
48 min. 2 sec.
-
1 hr. 39 min.
-
38 sec.
-
GPT3-DAVINCI
X
6 min. 50 sec.
-
8 min 7 sec
-
1 min. 27 sec.
-
ViLT
✓
1 hr. 40 min.
2 hr. 50 min.
2 hr. 45 min.
3 hr. 20 min.
57 sec.
33 min.
CLIP-VILT/L14
✓
52 seconds
5 hr. 40 min.
2 min. 10 sec.
6 hr. 41 min.
13 sec.
1 hr. 13 min
CEM-GOLD (GLOVE)
✓
4 min. 14 sec.
5 hr. 40 min.
10 min. 4 sec.
6 hr. 41 min.
28 sec.
1 hr. 13 min
CEM-GOLD (LMS)
✓
3 min. 30 sec.
5 hr. 40 min.
8 min. 12 sec.
6 hr. 41 min.
20 sec.
1 hr. 13 min
CEM-PRED
✓
4 min. 29 sec.
5 hr. 40 min.
7 min. 20 sec.
6 hr. 41 min.
49 sec.
1 hr. 13 min
+
+Table 10: Experiment inference times. Note that all models are used in zero-shot scenarios with no fine-tuning involved.
+
+We pick the top-M images and gradually increase the value of M.25 Figure 7 shows the MRR obtained by CLIP on the CONCEPT PROPERTIES development set with a varying number of images. We observe that the model's MRR score increases with a higher number of images. Nevertheless, the improvement is marginal when the number of images is higher than ten and starts to overfit when the number is higher than 20. Therefore, we decided to use ten images for all experiments involving CLIP.
+
+# C.2 CLIP Size
+
+We evaluate three sizes of CLIP, from small to large: CLIP-ViT/B16, CLIP-ViT/B32, and
+
+CLIP-ViT/L14. As shown in Figure 7, the performance positively correlates with the model size. The largest model, CLIP-ViT/L14 has a higher MRR score than the other two models. We also report the performance of the three CLIP models on FEATURE NORMS, CONCEPT PROPERTIES-test, and MEMORY COLORS in Table 9, indicating that the larger CLIP model yields better performance across metrics.
+
+# D CEM Variations
+
+# D.1 Concretess Prediction Model
+
+In Table 12, we report the results obtained by the CEM model using predicted concreteness values (instead of gold standard ones). We predict these values by training the model of Charbonnier and
+
+
Model
Images
Non-Prototypical
Prototypical
Acc@5
Acc@10
R@5
R@10
MRR
Acc@5
Acc@10
R@5
R@10
MRR
RANDOM
X
4.13
7.67
2.73
4.96
0.030
4.66
8.03
2.15
3.84
0.025
GLOVE
X
22.59
33.20
16.99
26.76
0.124
30.05
44.56
15.68
26.71
0.124
GOOGLE-NGRAM
X
45.19
57.96
39.22
58.80
0.240
39.64
56.99
24.06
36.47
0.142
BERT-LARGE
X
35.76
51.28
30.22
48.12
0.197
45.60
58.81
28.16
39.42
0.191
ROBERTA-LARGE
X
35.76
48.92
28.53
46.39
0.176
47.67
63.73
28.95
43.08
0.200
GPT2-LARGE
X
36.35
48.92
29.92
45.79
0.181
40.93
55.96
24.12
37.23
0.166
GPT3-DAVINCI
X
30.84
40.67
25.77
39.42
-
55.18
64.51
38.30
49.66
-
ViLT
✓
34.97
46.76
28.85
42.70
0.211
38.34
53.63
23.52
36.57
0.159
CLIP-ViT/L14
✓
32.22
43.81
25.08
37.95
0.159
52.59
69.95
33.67
49.82
0.226
CEM-GOLD (Ours)
✓
41.85
54.03
35.88
49.55
0.217
64.77
75.39
43.11
56.06
0.289
CEM-PRED (Ours)
✓
41.65
51.47
35.11
46.46
0.211
65.80
74.87
44.67
56.20
0.306
+
+Table 11: Results obtained on the FEATURE NORMS dataset filtered by prototypical and non-prototypical properties. The splits are derived from (Apidianaki and Garí Soler, 2021).
+
+
FEATURE NORMS
CONCEPT PROPERTIES-test
MEMORY COLORS
Acc@1
R@5
R@10
MRR
Acc@1
R@5
R@10
MRR
Acc@1
Acc@3
Acc@5
CEM-GOLD
40.1
40.5
53.3
.252
48.3
26.9
39.1
.171
82.6
96.3
99.1
CEM-PRED
39.9
40.4
52.5
.251
49.9
28.1
40.0
.175
84.4
97.2
99.1
CEM-RANDOM
35.4
38.3
51.0
.232
46.3
25.3
36.5
.162
62.4
90.8
94.5
CEM-AVERAGE
38.7
41.0
53.0
.249
48.3
28.0
40.2
.173
71.6
92.7
99.1
CEM-MAX
36.9
38.4
51.3
.238
48.6
26.7
38.1
.167
67.0
90.8
96.3
CEM-MIN
25.1
34.2
50.1
.204
30.1
21.2
34.1
.135
69.7
95.4
98.2
+
+Table 12: Comparison of ensemble methods on the three datasets. The highest score for each metric is bolded and the second-best is underlined.
+
+
+Figure 7: CLIP performance on CONCEPT PROPERTIES-test development set with a different number of images per noun.
+
+Wartena (2019) using the concreteness scores of 40k words (all parts-of-speech) in the Brysbaert et al. (2014) dataset. We exclude 425 adjectives that are found in the FEATURE NORMS, CONCEPT PROPERTIES, and MEMORY COLORS datasets.[26] The concreteness prediction model uses FastText embeddings (Mikolov et al., 2018) enhanced with POS and suffix features. We evaluate the model on the 425 adjectives that were left out during training and for which we have ground truth scores. The
+
+Spearman correlation between the predicted and gold scores is 0.76, showing that our automatically predicted scores can be safely used in our ensemble model instead of the gold standard ones.
+
+# D.2 CEM Weight Selection
+
+We also experiment with different ways for generating scores and combining the property ranks proposed by the models. (a) CEM-pred: We generate a concreteness score using the model of Charbonnier and Wartena (2019) and FastText embeddings (Bojanowski et al., 2017). We train the model on the 40k concreteness dataset (Brysbaert et al., 2014), excluding the 425 adjectives found in our evaluation datasets. The model obtains a high Spearman $\rho$ correlation of 0.76 against the ground truth scores of the adjectives in our test sets, showing that automatically predicted scores are a good alternative to manually defined ones. (b) CEM-random: We randomly generate a score for each property and use it to combine the ranks from two models. (c) CEM-average: We use the average of the property ranks; (d) CEM-high: We use the maximum rank of the property; (e) CEM-low: We use the minimum rank of the property. Table 12 shows the comparison between CEM-PRED, CEM-GOLD and models that rely on these alternative weight
+
+generation and ensembling methods on FEATURE NORMS. CEM achieves the highest performance across all metrics, indicating that concreteness offers a reliable criterion for model ensembling under unsupervised scenarios.
+
+# E Qualitative Analysis
+
+# E.1 Unigram Prediction Frequency
+
+In Table 13, we report the mean Google unigram frequency (Brants and Franz, 2009) for all properties in the top 5 predictions of each model. We observe that our CEM model – which achieves the best performance among the tested models, as shown in Table 3 – often predicts medium-frequency words. This is a desirable property of our model compared to models which would instead predict highly frequent or rare words (highly specific or technical terms). This is the case for GPT3 and CLIP, which propose rarer attributes but obtain lower performance than CEM. It is worth noting that, contrary to CLIP, GPT3 retrieves properties from an open vocabulary.
+
+Given that Google NGrams frequencies are computed based on text, many common properties might not be reported. For example, FEATURE NORMS propose as typical attributes of an "ambulance": loud, white, fast, red, large, orange. The frequency of the corresponding property-noun bigrams (e.g., loud ambulance, white ambulance) are: 0, 687, 50, 193, 283, and 0. Meanwhile, the bigrams formed with less typical properties (e.g., old, efficient, modern, and independent) have a higher frequency (1725, 294, 314, and 457). While language models rely on text and, thus, suffer from reporting bias, vision-based models can retrieve properties that are rarely stated in the text.
+
+# E.2 Prototypical Property Retrieval
+
+We carry out an additional experiment aimed at estimating the performance of the models on prototypical vs. non-prototypical properties. Prototypical properties are the ones that apply to most of the objects in the class denoted by the noun (e.g., red strawberries); in contrast, non-prototypical properties describe attributes of a smaller subset of the objects denoted by the noun (e.g., delicious strawberry). We make the assumption that prototypical properties are common and, often, visual or perceptual; we expect them to be more rarely stated in texts and, hence, harder to retrieve using language models than using images.
+
+We use the split of the FEATURE NORMS dataset performed by Apidianaki and Garí Soler (2021) into prototypical and non-prototypical properties, based on the quantifier annotations found in the Herbelot and Vecchi (2015) dataset.[27] The first split (Prototypical) contains 785 prototypical adjective-noun pairs (for 386 nouns) annotated with at least two ALL labels, or with a combination of ALL and MOST (healthy banana $\rightarrow$ [ALL-ALL]). The second set (Non-Prototypical) contains 807 adjective-noun pairs (for 509 nouns) with adjectives in the ground truth that are not included in the Prototypical set. In Table 11, we report the performance of each model in retrieving these properties.
+
+In the ALL, MOST column we consider properties that have at least 2 ALL annotations, with the combination of a MOST annotation, and in the SOME column, we consider all properties that do not contain NO and FEW annotations, and have at least one SOME annotation. The results confirm our intuition that non-prototypical properties are more frequently mentioned in the text. This is reflected in the score of the GOOgLE NGRAM baseline for these properties. For prototypical properties, our CEM model outperforms all other models.
+
+# E.3 Same Top-K Predictions by Different Nouns
+
+Figure 8 shows the number of nouns in the FEATURE NORMS and CONCEPT PROPERTIES-test datasets for which a model made the exact same top-K predictions. We observe that LMs consistently repeat the same properties for different nouns, while MLMs exhibit a higher variation in their predictions.
+
+# E.4 Multi-piece Performance
+
+Each model splits words into a different number of word pieces. Table 14 shows the number of multi-piece properties for each model, and its performance on these properties. We observe that all models perform worse than average (refer to Table 3 for the average performance) on the multi-piece properties, however, CEM has the smallest reduction in performance compared to the average values. This could be because CEM relies on information from two models with different tokenizers.
+
+
Model
CONCEPT PROPERTIES-test
FEATURE NORMS
Unigram Freq. ↓
Bigram Freq. ↓
Unigram Freq. ↓
Bigram Freq. ↓
BERT
53M
11.6K
55M
7.6K
ROBERTA
50M
6.8K
53M
6K
GPT-2
96M
10.3K
78M
6.4K
GPT-3
24M
6.5K
25M
2.8K
ViLT
50M
6.2K
40M
3.8K
CLIP
11M
5.3K
18M
2.2K
CEM-GOLD
32M
7.4K
33M
4.1K
CEM-PRED
34M
7.1K
31M
6.1K
+
+
+
+
+Figure 8: Number of nouns in the FEATURE NORMS and CONCEPT PROPERTIES-test datasets for which a model proposed the same top-K properties (where $\mathbf{K} = (3,4,5)$ ) in the same order.
+
+Table 13: Mean Google unigram and bigram frequency for the top-5 predictions by each model. We observe that CEM produces rarer words than most other models (excluding GPT-3 and CLIP) while maintaining high performance.
+
+
Model
FEATURE NORMS
# Multi-piece Properties
Acc@5
Acc@10
R@5
R@10
MRR
BERT-LARGE
106
0.0
0.0
0.0
0.0
0.009
ROBERTA-LARGE
590
23.77
32.02
22.64
32.27
0.182
GPT2-LARGE
12
0.0
0.0
0.0
0.0
0.018
GPT3-DAVINCI
0
-
-
-
-
-
ViLT
106
1.57
2.55
7.51
13.0
0.060
CLIP-ViT/L14
45
4.72
5.50
55.95
66.67
0.401
CEM-GOLD (OURS)
590/45
36.54/1.2
43.81/3.14
37.65/13.10
49.59/35.71
0.245/0.124
CEM-PRED (OURS)
590/45
32.22/1.77
41.85/3.73
33.7/20.24
46.76/42.86
0.165/0.122
+
+
Model
CONCEPT PROPERTIES-test
# Multi-piece Properties
Acc@5
Acc@10
R@5
R@10
MRR
BERT-LARGE
429
0.0
0.33
0.0
0.59
0.006
ROBERTA-LARGE
1939
45.42
59.56
19.12
27.65
0.120
GPT2-LARGE
60
0.0
0.0
0.0
0.0
0.010
GPT3-DAVINCI
27
0.33
0.50
7.41
11.11
-
ViLT
429
1.66
3.99
2.43
5.77
0.029
CLIP-VIT/L14
300
16.47
20.13
39.12
49.12
0.029
CEM-GOLD (OURS)
1939/300
54.58/6.49
68.39/9.65
26.24/13.03
38.92/20.96
0.161/0.095
CEM-PRED (OURS)
1939/300
56.99/5.63
69.87/9.62
27.31/12.12
39.35/21.05
0.165/0.078
+
+Table 14: Performance on multi-piece properties by each model. The highest scores are highlighted in boldface. CEM uses two different tokenizers RoBERTa/CLIP. Hence, we report results for both separated by a backslash (/).
+
+# E.5 Qualitative Examples
+
+Table 15 contains more examples of the top-5 predictions made by the models for nouns in the CON
+
+CEPT PROPERTIES-test and FEATURE NORMS datasets.
+
+
\ No newline at end of file
diff --git a/visualizingtheobviousaconcretenessbasedensemblemodelfornounpropertyprediction/images.zip b/visualizingtheobviousaconcretenessbasedensemblemodelfornounpropertyprediction/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..28925db352d85e220bd8e543dc8befb76ec74f8c
--- /dev/null
+++ b/visualizingtheobviousaconcretenessbasedensemblemodelfornounpropertyprediction/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cb37eb46b66b237cddbb4fba27e6d6c673972152f247708d36831e2d13b10244
+size 1487021
diff --git a/visualizingtheobviousaconcretenessbasedensemblemodelfornounpropertyprediction/layout.json b/visualizingtheobviousaconcretenessbasedensemblemodelfornounpropertyprediction/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..08bb6c7707a0986ba4599463a659ae7acecc2bdb
--- /dev/null
+++ b/visualizingtheobviousaconcretenessbasedensemblemodelfornounpropertyprediction/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2ff05cffbc83b7318c6d55a4302de6ea86f3a802a05cac72f6e97d68cc4f8afa
+size 511379
diff --git a/visualnamedentitylinkinganewdatasetandabaseline/cdbe0e0a-4d2a-47eb-945c-0e718199d3a5_content_list.json b/visualnamedentitylinkinganewdatasetandabaseline/cdbe0e0a-4d2a-47eb-945c-0e718199d3a5_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..645f41337fd521627996030a14aa75c77b08aca7
--- /dev/null
+++ b/visualnamedentitylinkinganewdatasetandabaseline/cdbe0e0a-4d2a-47eb-945c-0e718199d3a5_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1709882021746b88cfa08698708e93958e86b735cb234ccf68658184b0dbcc4a
+size 83493
diff --git a/visualnamedentitylinkinganewdatasetandabaseline/cdbe0e0a-4d2a-47eb-945c-0e718199d3a5_model.json b/visualnamedentitylinkinganewdatasetandabaseline/cdbe0e0a-4d2a-47eb-945c-0e718199d3a5_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c8293f81983ba88ceb5e16fd02f5ddb0e060f85c
--- /dev/null
+++ b/visualnamedentitylinkinganewdatasetandabaseline/cdbe0e0a-4d2a-47eb-945c-0e718199d3a5_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:80e7f5f0a4773885553dcf8a58ea903a2d97c730b3f28557cd3393ca9be32aa7
+size 101279
diff --git a/visualnamedentitylinkinganewdatasetandabaseline/cdbe0e0a-4d2a-47eb-945c-0e718199d3a5_origin.pdf b/visualnamedentitylinkinganewdatasetandabaseline/cdbe0e0a-4d2a-47eb-945c-0e718199d3a5_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..27db9d045d8f3b619bf7bd3e44926e4dd1ee10ae
--- /dev/null
+++ b/visualnamedentitylinkinganewdatasetandabaseline/cdbe0e0a-4d2a-47eb-945c-0e718199d3a5_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:79e66e45cdb705c5b481ce9b218f1b74d1cf2e8a95833826566279f0995eb427
+size 9411300
diff --git a/visualnamedentitylinkinganewdatasetandabaseline/full.md b/visualnamedentitylinkinganewdatasetandabaseline/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..4fe6c08916ae7e138b0fcc529924fb6ecd631fba
--- /dev/null
+++ b/visualnamedentitylinkinganewdatasetandabaseline/full.md
@@ -0,0 +1,345 @@
+# Visual Named Entity Linking: A New Dataset and A Baseline
+
+Wenxiang Sun, Yixing Fan, Jiafeng Guo*, Ruqing Zhang, Xueqi Cheng
+
+CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology,
+
+Chinese Academy of Sciences, Beijing, China
+
+University of Chinese Academy of Sciences, Beijing, China
+
+{sunwenxiang20s,fanyixing,guojiafeng,zhangruqing,cxq}@ict.ac.cn
+
+# Abstract
+
+Visual Entity Linking (VEL) is a task to link regions of images with their corresponding entities in Knowledge Bases (KBs), which is beneficial for many computer vision tasks such as image retrieval, image caption, and visual question answering. While existing tasks in VEL either rely on textual data to complement a multi-modal linking or only link objects with general entities, which fails to perform named entity linking on large amounts of image data. In this paper, we consider a purely Visual-based Named Entity Linking (VNEL) task, where the input only consists of an image. The task is to identify objects of interest (i.e., visual entity mentions) in images and link them to corresponding named entities in KBs. Since each entity often contains rich visual and textual information in KBs, we thus propose three different sub-tasks, i.e., visual to visual entity linking (V2VEL), visual to textual entity linking (V2TEL), and visual to visual-textual entity linking (V2VTEL). In addition, we present a high-quality human-annotated visual person linking dataset, named WIKIPerson. Based on WIKIPerson, we establish a series of baseline algorithms for the solution of each sub-task, and conduct experiments to verify the quality of the proposed datasets and the effectiveness of baseline methods. We envision this work to be helpful for soliciting more works regarding VNEL in the future. The codes and datasets are publicly available at https://github.com/ict-bigdatalab/VNEL.
+
+# 1 Introduction
+
+An in-depth understanding of visual content in an image is fundamental for many computer vision tasks. VEL (Tilak et al., 2017; Maigrot et al., 2016) is a task to put the image understanding to the entity-level. For example, given an image of the debate between Trump and Hillary, the goal of VEL
+
+
+(a) Textual Named Entity Linking
+
+
+(b) Visual General Entity Linking
+
+
+(c) Multi-modal Named Entity Linking
+
+
+(d) Visual Named Entity Linking
+Figure 1: Different categories of Entity Linking. VNEL is a task to identify images individually without any text input and link visual mentions to specific named entities in KBs.
+
+is not only to recognize the region of Trump and Hillary, but also to link them to the correct entity in KBs (e.g., Wikidata (Vrandecic and Krötzsch, 2014), DBpedia (Auer et al., 2007), or YAGO (Fabian et al., 2007)). Just as the significance of textual entity linking for many NLP tasks such as Information Extraction and Information Retrieval (Sevgili et al., 2022), visual tasks, such as image retrieval (Datta et al., 2008) and image caption (Tariq and Foroosh, 2017), would also benefit from entity-level fine-grained comprehension of images.
+
+In recent years, VEL has been given increasing attention. Early works (Tilak et al., 2017; Weegar et al., 2014) try to link objects in images with general entities, e.g., 'Person' and 'Suit', in KBs as is described in Figure 1(b). Apparently, these works are restricted to the coarse-level entity linking and fail to distinguish objects within the same class. Besides, there are also some works that make use of deep image understanding to link objects with named entities in KBs (Müller-Budack et al., 2021; Zheng et al., 2022; Dost et al., 2020; Gan et al., 2021). However, they generally require detailed entity mention information in text, which plays a vital role via multi-modal entity linking as shown
+
+
Dataset
Multi-modal
Entity-aware
Entity-labeled
Modality
KB
Source
Lang
Size
AIDA(Hoffart et al., 2011)
✓
✓
Tm→Te
Wikipedia
News
en
1K docs
Flicker30K(Young et al., 2014)
✓
social media
en
30k images
BreakingNews(Ramisa et al., 2017)
✓
✓
News
en
100k images
SnapCaptionsKB(Moon et al., 2018)
✓
✓
✓
Tm+V→Te
Freebase
Social Media
en
12K captions
WIKIDiverse(Wang et al., 2022)
✓
✓
✓
Tm+V→Te,Ve
Wikipedia
News
en
8K captions
WIKIPerson
✓
✓
✓
Vm→VeVm→TeVm→Ve, Te
Wikipedia
News
en
50k Images
+
+Table 1: The public related dataset of WikIPerson. $T^m$ , $T^e$ , $V^m$ , $V^e$ , and $V$ represent textual mention, textual entity, visual mention, visual entity, and visual information, respectively.
+
+in Figure 1(c). We argue that all the above tasks fail to process the named entity linking well for images without any text annotations, which is often the case in social media platforms.
+
+In this work, we consider a purely Visual-based Named Entity Linking (VNEL) task, which is described in Figure 1(d). Given an image without textual description, the goal is to link the visual mention in the image with the whole image as the context to the corresponding named entity in KBs. Considering the format of entity in KBs, such as textual descriptions, images, and other structured attributes, we further introduce three sub-tasks according to the type of entity context, i.e., the visual to visual entity linking (V2VEL), visual to textual entity linking (V2TEL), and visual to visual-textual entity linking (V2VTEL). We believe these tasks could put forward higher requirements and more detailed granularity for image understanding, cross-modal alignment, and multi-modal fusion.
+
+Following the definition of VNEL, currently public available EL datasets may not fit for our research, as they either only focus on textual modality or lack of detailed annotations for entity information in each image. As a result, we release a new dataset called WIKIPerson. The WIKIPerson is a high-quality human-annotated visual person linking dataset based on Wikipedia. Unlike previously commonly-used datasets in EL, the mention in WIKIPerson is only an image containing the PERSON entity with its bounding box. The corresponding label identifies an unique entity in Wikipedia. For each entity in Wikipedia, we provide textual descriptions as well as images to satisfy the need of three sub-tasks.
+
+In the experiments, we benchmark a series of baseline models on WIKIPerson under both zero-shot and fine-tuned settings. In detail, we adopt a universal contrastive learning framework to learn a robust and effective representation for both mentions and entities. Experimental results show that
+
+
+Figure 2: VNEL with its three sub-tasks.
+
+existing models are able to obtain a reasonably good performance on different VNEL tasks, but there is still a large room for further enhancements.
+
+# 2 The Visual Named Entity Linking Task
+
+This section first presents a formal definition of the task. Then we introduce the complete building procedure of the human-annotated dataset, which covers a wide variety of Wikipedia person entities for further research. Finally, an in-depth data analysis will be elaborated on in detail.
+
+# 2.1 Definition of VNEL and Three Sub-tasks
+
+VNEL takes an image as input and extracts bounding boxes around objects, and then links them to entities in KBs. More precisely, given an image $I$ , all visual mentions $V^{m}$ , which are regions of the image, are firstly recognized with a bounding box. Then, all visual mentions $V^{m}$ are linked with the corresponding entity $e$ in knowledge base $E$ . The visualized process of the VNEL task is shown in Figure 2, which often consists of two stages, namely the visual mention detection stage and the visual entity linking stage. In this work, we follow existing works (Mulang' et al., 2020; Sil et al., 2018) to pay attention to the visual entity linking
+
+stage.
+
+Generally, each entity $e_i \in E$ is often characterized with rich textual and visual descriptions, and each modality of the description can provide sufficient information for visual entity linking. To make the task more clearly presented, we further decompose the VNEL task into three sub-tasks according to the type of description used for the entity. In the first place, only the visual description $V_{e_i}$ of the entity can be used in the visual entity linking stage, which we denote as the V2VEL sub-task. The core of V2VEL is to match two visual objects. It is worth noting that entities in KB may contain more than one image. To simply this, we take the first image of $e_i$ as $V_{e_i}$ , and leave the multiple images per entity as the future work. In the second place, only the textual description $T_{e_i}$ of the entity is used in the visual entity linking stage, which we denote as the V2TEL sub-task. The V2TEL task aims to evaluate the ability in image-text matching, central to cross-modal entity linking. Finally, both the visual description and the textual description $(V_{e_i}, T_{e_i})$ of the entity could be employed to link the visual mention, which we denote as the V2VTEL sub-task. The V2VTEL task could leverage both textual and visual modality to complement each other in linking visual mentions.
+
+Formally, let $e_i$ represent the $i^{th}$ entity in KB with corresponding visual description $V_{e_i}$ or textual description $T_{e_i}$ and the whole image can be seen as visual context $V^c$ . As a result, three sub-tasks of the VNEL can be formulated as the following respectively:
+
+$$
+\begin{array}{l} e^{*}(m) = \operatorname *{arg max}_{\substack{V\to V\\ e_{i}\in E}}\Phi^{\alpha}\left(V^{m},V_{e_{i}}\mid V^{c}\right), \\ e^{*}(m) = \operatorname *{arg max}_{\substack{e_{i}\in E}}\Phi^{\beta}\left(V^{m},T_{e_{i}}\mid V^{c}\right), \\ \underset {V \to V + T} {e ^ {*} (m)} = \underset {e _ {i} \in E} {\arg \max} \Phi^ {\gamma} \left(V ^ {m}, \left(V _ {e _ {i}}, T _ {e _ {i}}\right) \mid V ^ {c}\right), \\ \end{array}
+$$
+
+where $\Phi$ represents the value of the score function between a mention and an entity.
+
+# 2.2 Dataset Setups of WIKIPerson
+
+To facilitate research on VNEL, we introduce WIKIPerson, a benchmark dataset designed for linking person in images with named entities in KB. The dataset building process is shown in Figure 3, which consists of three main steps. We firstly select the data source to build the input image collection, and then filter and clean the collection to obtain a high-quality dataset. Finally, we annotate
+
+
+Figure 3: The procedure of building WikIPerson.
+
+each image by several experienced annotators. In the following, we will describe each step in detail.
+
+# 2.2.1 Data Source Collection
+
+For the source of data, we follow existing works (Ramisa et al., 2017; Tran et al., 2020; Liu et al., 2020; Biten et al., 2019) to use News collections, since the content of images in News collection often contains many named entities at a higher degree of specificity, e.g., specific people, which convey key information regarding the events presented in the images. In this paper, we choose VisualNews1, which has the largest data scale with 1.2 million image-text pairs among them as the original data source. In addition, VisualNews covers diverse news topics, consisting of more than one million images accompanied by news articles, image captions, author information, and other metadata. All these additional metadata could help us in the subsequent entity annotation procedure. However, only images and annotated mentions with bounding boxes are available in all VNEL sub-tasks.
+
+For the knowledge base, we employ the commonly-used Wikipedia as back-end, consisting of a wide range and abundant information of entities. Specifically, we crawl the first image of each entity from wiki commons as the visual description and the text information from Wikipedia as the textual description, respectively.
+
+# 2.2.2 Data Filter and Clean
+
+In this work, we pay our attention to PERSON mentions in images since person is the most common named entity, and leave the research on other entity types for future work. For this purpose, we keep only images with PERSON mentions from the news collection, and remove non-PERSON entities from the KB. Specifically, for each image caption pair in the news collection, we take Spacy to analyze the text caption and filter out the corre
+
+
+Figure 4: Examples of the WikIPerson dataset. Left: An image and its mention's bounding box with WikiId, which represents an unique entity in Wikipedia. Right: The ground truth entity in KB with both visual and textual information.
+
+sponding data without any PERSON entities. Moreover, we leverage the MTCNN model (Zhang et al., 2016), which is the state-of-the-art face detection model, to check the number of PERSON mentions in each image. Then, we select images with the number of person mentions less than 4 to reduce the complexity of the task. Lastly, we remove repeated and blurred images to keep the quality of the dataset.
+
+# 2.2.3 Data Annotation
+
+The primary goal of WikIPerson is to link the PERSON mention in the image to the correct Wikipedia entity. As a consequence, the annotators need to identify the person mention and label each mention with the corresponding Wikipedia entity in the form of a Wikidata id.
+
+In the earlier step, Spacy is used to identify the caption of origin image-text pairs to extract possible PERSON entities. MTCNN is adopted to recognize the faces, supplying bounding boxes in the picture. So the annotators only need to check the faces in the bounding box and choose the corresponding entity from the results generated by searching the keywords of PERON entities detected in the caption. In this way, we can largely reduce the labor in labeling the entity of each mention. Mentions that do not have corresponding entities in Wikipedia will be filtered in the procedure.
+
+In the process of data annotation, we designed end-to-end labeling web demos to facilitate manual annotation. The provided information on the website includes news images, captions, news content, and possible candidate entities with pictures
+
+
#Image
#Ecov
#Mavg
#KB
WIKIPerson
48K
13k
1.08
120K
+
+Table 2: Statistics of WikIPerson. $\# {E}_{\text{cov }}$ and $\# {M}_{avg}^{I}$ denotes number of covered entities and average number of mentions per image, respectively.
+
+
+Figure 5: Left: Topic distribution of entities in WikIPerson. Right: Link popularity distribution between entities in WikIPerson and the whole Wikipedia.
+
+
+
+and descriptions to help the annotator make judgments. All annotators have linguistic knowledge and are instructed with detailed annotation principles. The annotators need to link the mention with each bounding box to the correct entity in Wikipedia. Finally, after the labeling, we can get the dataset full of the image which comprises several mentions with each bounding box and corresponding entity WikiId.
+
+# 2.3 Dataset Analysis
+
+# 2.3.1 Basic Statistics
+
+Table 2 shows the statistics of the WIKIPerson in detail. The dataset contains a total of 48k different news images, covering 13k out of 120K (i.e. $|E| \approx 120K$ ) PERSON named entities, each of which corresponds to a celebrity in Wikipedia. Many entities appear many times in the data, which ensures that entities can be fully learned. Unlike many datasets in traditional EL, the image of the PERSON named entity usually focuses on a single person in the news except for the scene such as group photo, debate, etc. As a result, the average amount of the mention per image is about 1.08 and only about 3k images contain more than one mention.
+
+# 2.3.2 Entity Distribution
+
+The WIKIPerson comprises diverse PERSON named entity types such as politicians, singers, actresses, sports players, and so on, from different news agencies. These entities do not belong to a
+
+
+Figure 6: The overall framework of different baselines.
+
+single analogy but are widely distributed in different topics, occupations, skin colors, and multiple age stages. The detailed information is shown in left of Figure 5. It can be observed that in addition to the common politician in the news, the dataset also includes artistic, sports, entertainment, and even criminal topics, which greatly increases the richness of image information. The diversity makes the task could pay attention to the alignment between the background information of the picture, e.g., visual context and entity's meta info in KBs.
+
+Moreover, considering the difference in entities' popularity, we analyzed the link-popularity of the entities in the WIKIPerson compared to that in the whole Wikipedia. As shown in the right of Figure 5, both covered entities and the whole Wikipedia entities conform to the long-tailed distribution, which ensures that the dataset will not be biased because of some significantly popular entities. Generally speaking, celebrities are likely to be reported in news articles, which causes the entity in our dataset to be more prevalent than in the whole Wikipedia. To the best of our knowledge, WIKIPerson is the first diverse human-annotated PERSON-entity-aware dataset with high research value.
+
+# 3 Baseline Methods
+
+Generally, the VNEL task is to link mentions in the input image with the corresponding entities from a large-scale KB. Typically, the existing VNEL system is often implemented as a two-stage process, i.e., the candidate retrieval stage and the entity disambiguation stage, to balance the efficiency and the effectiveness. In this work, we implement a fast end-to-end linking directly from a large-scale collection by employing an efficient model.
+
+We take a widely-used bi-encoder contrastive learning framework to learn robust and effective representations of both visual mentions and entities. Given a visual mention $V^{m}$ and a candidate entity $e_{i}$ , which is accompanied by visual description $V_{e_{i}}$ and/or textual description $T_{e_{i}}$ , the framework aims to produce a relevance score between the mention and the entity. The overall structure of the framework is shown in Figure 6, which consists of two major components, namely the mention encoder and the entity encoder. These two encoders aim to extract features as embeddings $f^{m}$ for the input image and $f^{e}$ for the entity. For each encoder, we directly take existing pre-trained models as the implementation. Inspired by existing works (Gao et al., 2021; Zhang et al., 2021b) in applying pretrained model, we add a feed-forward layer to transform the vector generated from the encoder to the task-oriented embedding space. After that, a residual connection (He et al., 2016) is added to obtain $F^{m}$ and $F^{e_{i}}$ , followed by using L2 norm and dot-production to calculate the similarity score.
+
+$$
+\begin{array}{l} f ^ {m} = \operatorname {E n c o d e r} ^ {m} \left(V ^ {m}\right), f ^ {e _ {i}} = \operatorname {E n c o d e r} ^ {e} \left(e _ {i}\right) \\ F ^ {m} = f ^ {m} + \mathrm {R e L U} \left(f ^ {m} \mathbf {W} _ {1} ^ {m}\right) \mathbf {W} _ {2} ^ {m} \\ F ^ {e _ {i}} = f ^ {e _ {i}} + \mathrm {R e L U} \left(f ^ {e _ {i}} \mathbf {W} _ {1} ^ {e}\right) \mathbf {W} _ {2} ^ {e} \\ e^{*}(m) = \operatorname *{arg max}_{e_{i}\in E}F^{m}\cdot F^{e_{i}} \\ \end{array}
+$$
+
+Where $\mathbf{W}_1^m$ and $\mathbf{W}_2^m$ are learnable parameters for mention representation learning, and $\mathbf{W}_1^e$ and $\mathbf{W}_2^e$ are learnable parameters for entity representation learning.
+
+Since each sub-task of VNEL have different types of inputs, we thus implement each baseline with different encoders:
+
+- V2VEL Encoders: We adopt ResNet (Szegedy et al., 2017) in a single-modal way following (Schroff et al., 2015), which has been pre-trained on the vggface2 (Cao et al., 2018) to extract visual features. Here, both mention and entity encoder use ResNet and share the parameters.
+- V2TEL Encoders: We directly take CLIP (Radford et al., 2021), which has been pretrained with a large-scale image-text dataset, to implement the mention encoder and the entity encoder. For entity encoder, we apply two types of textual information about entity, i.e., entity name (CLIP_N) and entity name with description (CLIP_N_D), to study the influence of the entity's meta info.
+
+
Sub-Task
Model
Recall
MRR@3
MRR@5
MRR@10
R@1
R@3
R@5
R@10
zero-shot
V2VEL
ResNet
0.3097
0.4053
0.4479
0.5076
0.3518
0.3616
0.3695
V2TEL
CLIP_N
0.4393
0.5673
0.6145
0.6724
0.4964
0.5071
0.5149
CLIP_N_D
0.4586
0.5872
0.6323
0.6827
0.5158
0.5260
0.5328
ResNet+CLIP_N
0.5644
0.6665
0.6981
0.7309
0.6101
0.6174
0.6217
V2VTEL
ResNet+CLIP_N_D
0.5892
0.6859
0.7102
0.7440
0.6327
0.6383
0.6429
CLIP_N + ResNet
0.5667
0.6618
0.6893
0.7072
0.6095
0.6158
0.6184
CLIP_N_D + ResNet
0.5895
0.6794
0.7066
0.7235
0.6302
0.6365
0.6389
fine-tune
V2VEL
ResNet
0.4212
0.5530
0.5832
0.6428
0.4701
0.4821
0.4899
V2TEL
CLIP_N
0.5527
0.6860
0.7250
0.7756
0.6126
0.6215
0.6285
CLIP_N_D
0.5946
0.7180
0.7550
0.8022
0.6550
0.6634
0.6697
ResNet+CLIP_N
0.7171
0.8115
0.8385
0.8634
0.7600
0.7661
0.7696
V2VTEL
ResNet+CLIP_N_D
0.7301
0.8242
0.8512
0.8798
0.7714
0.7776
0.7815
CLIP_N + ResNet
0.7180
0.7921
0.8082
0.8177
0.7502
0.7539
0.7552
CLIP_N_D + ResNet
0.7370
0.8178
0.8347
0.8445
0.7739
0.7778
0.7792
+
+Table 3: Experimental results of baselines among three sub-tasks under both zero-shot and fine-tuned settings.
+
+- V2VTEL Encoders: We combine encoders of V2VEL and V2TEL to implement the V2VTEL. Specifically, we take a simple but effective strategy that uses one model to recall Top-K results and the other to re-rank. For example, ResNet + CLIP means recall with the ResNet first and re-rank Top-K results with CLIP again. We also test different combinations about the order of V2VEL encoders and V2TEL encoders, whose results are listed in Section 4.1.
+
+In the training step, the contrastive loss function of a single mention-entity sample is defined as:
+
+$$
+\begin{array}{l} \mathcal {L} \left(V ^ {m}, e _ {i}\right) = - \log \left[ \frac {\exp \left(\Phi \left(V ^ {m} , e _ {i} ^ {+}\right) / \tau\right)}{\sum^ {-} + \exp \left(\Phi \left(V ^ {m} , e _ {i} ^ {+}\right) / \tau\right)} \right] \\ \sum_ {k \neq i} ^ {-} = \sum_ {k \neq i} \exp \left(\Phi \left(V ^ {m}, e _ {k} ^ {-}\right) / \tau\right) \\ \end{array}
+$$
+
+where $e_i^+$ represents the ground truth positive entity of $V_m$ and $e_k^-$ denotes the $k^{th}$ candidate of $V^m$ in the batch, which is all negative samples. $\tau$ is the temperature coefficient that helps control the softmax's smoothness(Jang et al., 2016).
+
+# 4 Experiments
+
+During experiments, we split images in WIKIPerson into train, dev, and test set with the ratio of 6:2:2. Besides, to avoid the bias of popular entities affecting the evaluation, each named entity appears at most once in test set. For evaluation, we report two widely-used metrics of Top-k retrieve: Recall@K (K=1, 3, 5, 10) and Mean Reciprocal
+
+Rank (MRR@K, K=3, 5, 10).5
+
+# 4.1 Results
+
+All results are summarized in Table 3. Since all the encoders we adopt are pre-trained and can be directly applied in each task, we thus report both zero-shot and fine-tuned performances to show the effectiveness of all baselines.
+
+Zero-shot vs. fine-tune. In zero-shot, we directly use the embedding generated from the encoder as the feature. As we can see, ResNet has achieved a reasonable good performance for R@10 (i.e., 0.5076), which demonstrates the effectiveness of the pre-trained model. Moreover, we can see that the CLIP, which is pre-trained with about 400M image-caption pairs, has achieved better performances against ResNet with either CLIP_N or CLIP_N_D across all metrics. When combining ResNet with CLIP, we observe a distinct improvement for all combination, which demonstrate the effectiveness in combining both visual description and textual description in VNEL. While comparing the zero-shot with fine-tuned baselines, all models have obtained significant improvements, e.g., an average improvement of MRR@10 is 0.13. The improvements verify the quality of dataset and demonstrates that the WIKIPerson could significantly boost the ability of visual named entity linking.
+
+Sub-tasks of VNEL. We focus on the below part of table 3 where all models are fine-tuned on WIKIPerson.
+
+1) The V2VEL sub-task: As the most funda-
+
+
+Figure 7: The qualitative case studies of Top-3 predicted entities. The result with a green border is the ground truth entity of the input image.
+
+mental part concerning VNEL, the ResNet extracts features for both visual mentions and visual descriptions of entities, and matches them in visual feature space. However, it obtains generally low absolute numbers in different evaluation metrics, e.g., 0.4212 on R@1, which leaves a large room for improvement. A possible reason is that the image of an entity in KB are often earlier pictures which show very different state (e.g., age and occasion) with entities appeared in news articles.
+
+2) The V2TEL sub-task: CLIP obtains higher performance compared to ResNet by matching the visual mention with textual descriptions of the entity. Besides, these results show that the cross-modal matching between the image and the text is very powerful in linking images with entities. Moreover, by comparing the two different types of textual information about the entity, we can see that entity description could provide useful information in distinguishing disambiguate entities since CLIP_N_D outperforms CLIP_N over all metrics.
+3) The V2VTEL sub-task: By combining the textual information and visual information of each entity, the performance could be further boosted. For example, the relative improvement of ResNet+CLIP_N_D over ResNet and CLIP_N_D against R@1 is about $73\%$ and $23\%$ , respectively. These results verify that both textual and visual modality of the entity could complement each other
+
+in linking visual mentions with named entities. Moreover, as for different order of combination between ResNet and CLIP, we can see that each method could obtain a relatively close performance, which confirms the effectiveness of the strategy in combining the V2VEL method and the V2TEL method.
+
+# 4.2 Qualitative Analysis
+
+To better understand baseline methods among different sub-tasks, we show several cases in Figure 7. The input image is on the left, and the top 3 predicted results are partitioned into two rows corresponding to different baselines. The entity with a green border is the ground truth entity.
+
+The first case of Figure 7 is a picture of a famous American golfer named Tiger Woods. ResNet could identify the correct entity, and other returned results have a similar face to the input image. CLIP_N_D also returned the ground truth entity at the second position in the top-3 results, and all three candidates are professional golfers. This shows that only text descriptions may unable to disambiguate between the correct entity and irrelevant entities.
+
+Analogously, the second case is an image about Bill Clinton speaking at his foundation. ResNet links it to the entity "Andy Gill", which looks very similar to Clinton. While CLIP_N_D correctly predicts the ground truth entity in the first position,
+
+and all returned entities are related to Clinton. This verifies that CLIP_N_D can learn high-level association between image mention and entity meta-info.
+
+The last case is an image of a famous Chinese tennis sports player named Li Na. We can see that the image has complicated backgrounds, and both ResNet and CLIP_N_D cannot link the mention with the ground truth entity in top-3 returned results. This motivates the need for focused research on building effective VNEL models.
+
+From all the above cases, it is clearly presented that ResNet pays more attention to the pixel-level matching, and CLIP learns high-level semantic connection between mentions and entities. However, the dynamic nature of the input images highlights the difficulty of the task, especially for entities with outdated pictures. We believe this work could pave the way for better visual entity linking.
+
+# 5 Related Work
+
+Entity Linking. There is extensive research on EL, which serves as a classic NLP task. With the help of large-scale pre-train language models (Devlin et al., 2018; Liu et al., 2019), several recent deep learning methods (Mulang' et al., 2020; Yamada et al., 2019; De Cao et al., 2020) achieve $90\%+$ accuracy on AIDA (Hoffart et al., 2011), which is a commonly used high-quality robust EL dataset. However, as mentioned in (Cao et al., 2020), it seems that the current methods have already torched the task ceiling. As a result, many more challenging EL-related tasks are formulated. For example, zero-shot entity linking (Logeswaran et al., 2019; Wu et al., 2019), engaging other features like global coherence across all entities in a document, NIL prediction, joining MD and ED steps together, or providing completely end-to-end solutions to address emerging entities is rapidly evolving (Sevgili et al., 2022).
+
+Multi-modal Entity Linking. Recently, Multimodal Entity Linking(MEL) (Moon et al., 2018) task has also been proposed for consideration. Given a text with images attached, MEL uses both textual and visual information to map an ambiguous mention in the text to an entity in the KBs. (Moon et al., 2018) proves that image information helps identify the mention in social media for the fuzzy and short text. Furthermore, (Adjali et al., 2020) transfer the scene to Twitter and perform MEL on Twitter users. (Zhang et al., 2021a) proposes an attention-based structure to eliminate dis
+
+tracting information from irrelevant images and builds a multi-source Social Media multi-modal dataset. (Wang et al., 2022) builds a multi-modal Entity Linking Dataset with Diversified Contextual Topics and Entity Types. However, for all those works, the text input plays a vital part, and the visual input only serves as a complementary role to the text.
+
+Multi-modal Dataset. At the same time, our work is also related to the multi-modal image-text datasets, which is also a hot issue in recent years. Flicker30k (Young et al., 2014) annotates 30k image-caption pairs from Flicker with five descriptive sentences per image, such as "a man is wearing a tie." In addition, MSCOCO caption (Chen et al., 2015) scale up the size with over one and a half million captions describing over 330000 images. However, the caption in all these datasets is descriptive sentences and non-entity aware. As a result, some work has started to build a news-related dataset for entity-aware image caption tasks. For example, (Ramisa et al., 2017) focus on the news website and have crawled 100k image-caption pairs. (Biten et al., 2019; Liu et al., 2020) expand the size of the dataset. Nevertheless, the detailed entity information is neither annotated nor linked to the KBs.
+
+# 6 Conclusion and Future Work
+
+To tackle the limitation that previous visual entity linking either rely on textual data to complement a multi-modal linking or only link objects with general entities, we introduce a purely Visual-based Named Entity Linking task, where the input only contains the image. The goal of this task is to identify objects of interest in images and link them to corresponding named entities in KBs. Considering the rich multi-modal contexts of each entity in KBs, we propose three different sub-tasks, i.e. the V2VEL sub-task, the V2TEL sub-task, and the V2VTEL sub-task. Moreover, we build a high-quality human-annotated visual person linking dataset, named WIKIPerson, which aims at recognizing persons in images and linking them to Wikipedia. Based on WIKIPerson, we introduce several baseline algorithms for each sub-task. According to the experimental results, the WIKIPerson is a challenging dataset worth further explorations. In the future, we intend to build a larger scale VNEL dataset with diverse types and adopt more advanced models to achieve higher accuracy.
+
+# Limitations
+
+Low extensibility of the entity information. In the V2VEL sub-task, each entity in the KB can have more than one attached image. However, in our paper, only the first image is selected for convenience, which will inevitably omit additional information. At the same time, in the V2TEL sub-task, we only use the short descriptive sentences of the entity. How to integrate longer unstructured text information is also a problem worth exploring.
+
+# Ethics Statement
+
+We collected data based on open-source datasets and databases. These data have been strictly manually reviewed and do not contain any pictures that are sexual or violate politics. We are authorized by the relevant authority in our university to hire employees from the laboratory to build the platform and carry out the annotations. All employees are adults and ethical. On average, they were paid £5-£10/hour.
+
+# Acknowledgements
+
+This work was funded by the National Natural Science Foundation of China (NSFC) under Grants No. 61902381 and 62006218, the Youth Innovation Promotion Association CAS under Grants No. 2021100 and 20144310, the Young Elite Scientist Sponsorship Program by CAST under Grants No. YESS20200121, and the Lenovo-CAS Joint Lab Youth Scientist Project.
+
+# References
+
+Omar Adjali, Romaric Besançon, Olivier Ferret, Herve Le Borgne, and Brigitte Grau. 2020. Multimodal entity linking for tweets. In European Conference on Information Retrieval, pages 463-478. Springer.
+Soren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. Dbpedia: A nucleus for a web of open data. In The Semantic Web, 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11-15, 2007, volume 4825 of Lecture Notes in Computer Science, pages 722-735. Springer.
+A. F. Biten, L. Gomez, M. Rusinol, and D. Karatzas. 2019. Good news, everyone! context driven entity-aware captioning for news images. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
+
+N. D. Cao, G. Izacard, S. Riedel, and F. Petroni. 2020. Autoregressive entity retrieval.
+Qiong Cao, Li Shen, Weidi Xie, Omkar M Parkhi, and Andrew Zisserman. 2018. Vggface2: A dataset for recognising faces across pose and age. In 2018 13th IEEE international conference on automatic face & gesture recognition (FG 2018), pages 67-74. IEEE.
+Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dólar, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325.
+Ritendra Datta, Dhiraj Joshi, Jia Li, and James Z Wang. 2008. Image retrieval: Ideas, influences, and trends of the new age. ACM Computing Surveys (Csur), 40(2):1-60.
+Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2020. Autoregressive entity retrieval. arXiv preprint arXiv:2010.00904.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
+Shahi Dost, Luciano Serafini, Marco Rosponcher, Lamberto Ballan, and Alessandro Sperduti. 2020. Vt-linker: Visual-textual-knowledge entity linker. In ECAI 2020, pages 2897-2898. IOS Press.
+M Fabian, Kasneci Gjergji, WEIKUM Gerhard, et al. 2007. Yago: A core of semantic knowledge unifying wordnet and wikipedia. In 16th International world wide web conference, WWW, pages 697-706.
+Jingru Gan, Jinchang Luo, Haiwei Wang, Shuhui Wang, Wei He, and Qingming Huang. 2021. Multimodal entity linking: a new dataset and a baseline. In Proceedings of the 29th ACM International Conference on Multimedia, pages 993-1001.
+Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, and Yu Qiao. 2021. Clip-adapter: Better vision-language models with feature adapters. arXiv preprint arXiv:2110.04544.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778.
+Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the 2011 conference on empirical methods in natural language processing, pages 782-792.
+
+Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144.
+F. Liu, Y. Wang, T. Wang, and V. Ordonez. 2020. Visualnews: Benchmark and challenges in entity-aware image captioning.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
+Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, and Honglak Lee. 2019. Zero-shot entity linking by reading entity descriptions. arXiv preprint arXiv:1906.07348.
+Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
+Cédric Maigrot, Vincent Claveau, Ewa Kijak, and Ronan Sicre. 2016. Mediaeval 2016: A multimodal system for the verifying multimedia use task. In MediaEval 2016:"Verfying Multimedia Use" task.
+S. Moon, L. Neves, and V. Carvalho. 2018. Multimodal named entity disambiguation for noisy social media posts. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
+Isaiah Onando Mulang', Kuldeep Singh, Chaitali Prabhu, Abhishek Nadgeri, Johannes Hoffart, and Jens Lehmann. 2020. Evaluating the impact of knowledge graph context on entity disambiguation models. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 2157-2160.
+Eric Muller-Budack, Jonas Theiner, Sebastian Diering, Maximilian Idahl, Sherzod Hakimov, and Ralph Ewerth. 2021. Multimodal news analytics using measures of cross-modal entity and context consistency. International Journal of Multimedia Information Retrieval, 10(2):111-125.
+Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32.
+A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, and J. Clark. 2021. Learning transferable visual models from natural language supervision.
+A. Ramisa, Fei Yan, Francesc Moreno-Noguer, and K. Mikolajczyk. 2017. Breakingnews: Article annotation by image and text processing. IEEE Transactions on Pattern Analysis Machine Intelligence, PP(99):1-1.
+
+Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 815-823.
+Özge Sevgili, Artem Shelmanov, Mikhail Y. Arkhipov, Alexander Panchenko, and Chris Biemann. 2022. Neural entity linking: A survey of models based on deep learning. Semantic Web, 13(3):527-570.
+Avirup Sil, Gourab Kundu, Radu Florian, and Wael Hamza. 2018. Neural cross-lingual entity linking. In Thirty-Second AAAI Conference on Artificial Intelligence.
+Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. 2017. Inception-v4, inception-resnet and the impact of residual connections on learning. In Thirty-first AAAI conference on artificial intelligence.
+Amara Tariq and Hassan Foroosh. 2017. A context-driven extractive framework for generating realistic image descriptions. IEEE Trans. Image Process., 26(2):619-632.
+Neha Tilak, Sunil Gandhi, and Tim Oates. 2017. Visual entity linking. In 2017 International Joint Conference on Neural Networks, IJCNN 2017, Anchorage, AK, USA, May 14-19, 2017, pages 665-672. IEEE.
+Alasdair Tran, Alexander Mathews, and Lexing Xie. 2020. Transform and tell: Entity-aware news image captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13035-13045.
+Denny Vrandecic and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10):78-85.
+X. Wang, J. Tian, M. Gui, Z. Li, R. Wang, M. Yan, L. Chen, and Y. Xiao. 2022. Wikidiverse: A multimodal entity linking dataset with diversified contextual topics and entity types. arXiv e-prints.
+Rebecka Weegar, Linus Hammarlund, Agnes Tegen, Magnus Oskarsson, Kalle Åström, and Pierre Nugues. 2014. Visual entity linking: A preliminary study. In Workshops at the Twenty-Eighth AAAI Conference on Artificial Intelligence.
+Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2019. Scalable zero-shot entity linking with dense entity retrieval. arXiv preprint arXiv:1911.03814.
+Ikuya Yamada, Koki Washio, Hiroyuki Shindo, and Yuji Matsumoto. 2019. Global entity disambiguation with pretrained contextualized embeddings of words and entities. arXiv preprint arXiv:1909.00426.
+P. Young, A. Lai, M. Hodosh, and J. Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Nlp.cs.illinois.edu.
+
+Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, and Yu Qiao. 2016. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE signal processing letters, 23(10):1499-1503.
+L. Zhang, Z. Li, and Q. Yang. 2021a. Attention-Based Multimodal Entity Linking with High-Quality Images. Database Systems for Advanced Applications.
+Renrui Zhang, Rongyao Fang, Peng Gao, Wei Zhang, Kunchang Li, Jifeng Dai, Yu Qiao, and Hongsheng Li. 2021b. Tip-adapter: Training-free clip-adapter for better vision-language modeling. arXiv preprint arXiv:2111.03930.
+Qiushuo Zheng, Hao Wen, Meng Wang, and Guilin Qi. 2022. Visual entity linking via multi-modal learning. Data Intell., 4(1):1-19.
+
+# A Baselines Details
+
+Parameters Setting. In the architecture, we set the number of layers in the feed-forward as 2 and the dimensions are [512*1024, 1024*512] both for mention and entity in the two models. The initial learning rate is set to 2e-4 for ResNet and 2e-6 for CLIP. Images are all resized to $224 \times 224$ pixels according to the common size and textual information is truncated to 77 words. The batch sizes for ResNet and clip are both set to 64. All the methods are implemented in Pytorch (Paszke et al., 2019) and optimized by the AdamW (Loshchilov and Hutter, 2017) algorithm.
+
+Experimental setup. We train our models on two NVIDIA Tesla V100 GPU. We train each model with much to 20 epochs. For inference, we use Faiss6 to achieve fast recall in large-scale embedding space with about 500ms per instance.
+
+# B Evaluation Metrics
+
+All evaluation and empirical analysis are reported by two widely-used metrics of Top-k retrieve: Recall and Mean Reciprocal Rank (MRR). The final result is the average score among all the cases.
+
+$$
+\begin{array}{l} \mathrm {R e c a l l} @ \mathrm {K} = \frac {1}{Q} \sum_ {i = 1} ^ {Q} {\bf 1} _ {q k _ {i}} (g t _ {i}) \\ \mathrm {M R R} @ \mathrm {K} = \frac {1}{Q} \sum_ {i = 1} ^ {Q} \frac {1}{\mathrm {r a n k} _ {i}} \\ \end{array}
+$$
+
+where $\mathbf{1}_A(x)$ denotes a 0,1 valued indicator function. $qk_{i},gt_{i}$ are the Top-k result and the ground truth of query i. MRR is a measure to evaluate systems that "Where is the first relevant item". For a single query, the reciprocal rank is $\frac{1}{\mathrm{rank}}$ where rank is the position of the highest-ranked answer. If no correct answer was returned in the query, then the reciprocal rank is 0.
+
+# C More Examples from WIKIPerson
+
+To demonstrate more details of our dataset, we pick two examples from our dataset. (Figure 8, Figure 9).
+
+# D Detailed Analysis
+
+According to the experimental results, the reranking strategy improves performance to a certain degree. So we conduct a detailed analysis of the
+
+
+
+
+
+
+Figure 8: The images of Taylor Swift (Q26876, a famous American singer-songwriter) in WikIPerson.
+
+
+
+
+
+
+
+
+
+
+Figure 9: The images of Indra Nooyi (Q264913, Indian American business executive and former CEO of PepsiCo) in WIKIPerson.
+
+
+
+
+
+strategy to help understand the reason and provide some insights for future model designs.
+
+Firstly, we analyze the effect of re-ranking sequence length, which is the main factor affecting the result. Specifically, we conduct research on the re-ranking sequence length. Then we plot the Recall@1 for ResNet + CLIP_N_D and CLIP_N_D + ResNet in Figure 10. From the results, we can see that both two methods achieve high performance as the re-ranking length increases at the beginning. Then it starts to decrease slightly. It can be simply inferred that when the re-ranking length continues to grow to the size of the |E|, the re-ranking model can be equal to the single Reset or CLIP_N_D. Besides, these two models have different Inflection Points and speeds of the downtrend. CLIP_N_D
+
+
+Figure 10: The Recall@1 of the models with different re-rank size.
+
+
+Figure 11: Overlap of Top-k result between CLIP_N_D and ResNet in zero-shot and fine-tune.
+
++ ResNet reaches its peak at lower re-rank length and decent sharply while ResNet + CLIP_N_D increases until re-rank length equals 600 and decent slowly. The reason for the phenomenon is that CLIP_N_D outperforms ResNet. As a result, a larger re-rank size is necessary for ResNet to guarantee to recall the ground truth.
+
+Secondly, we notice that the Top-k results of CLIP_N_D and ResNet differ greatly. As a result, we plot the precise overlap between ResNet and CLIP_N_D's Top-k result in Figure 11.
+
+The origin and fine-tune model have the same trend: with the increase of the K, the overlap decreases first and increases later. When k nears 50, the overlap minimum. For fine-tune model, it has a higher overlap than the zero-shot. The overlap starts from $30.1\%$ , which means only the $30.1\%$ of entities are identical among the Top-1 results between the two models even though they have comparable performance. Then it drops to $15\%$ sharply. When k equals |E|, the overlap will reach
+
+100%. Smaller coverage with high and comparable model performance ensures that using one model to re-ranking based on the recall of the other model could improve performance significantly.
\ No newline at end of file
diff --git a/visualnamedentitylinkinganewdatasetandabaseline/images.zip b/visualnamedentitylinkinganewdatasetandabaseline/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1027cfcd5363e6bb743d66a8271cbc4e76f86b65
--- /dev/null
+++ b/visualnamedentitylinkinganewdatasetandabaseline/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9cdfaecd759fba7036add5c4764a1da4bf24cca9299e699af33fe68e1114f5ba
+size 751202
diff --git a/visualnamedentitylinkinganewdatasetandabaseline/layout.json b/visualnamedentitylinkinganewdatasetandabaseline/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6d3e3d4f8c2f1d5ec0a2dec7b990d06d5e1e1d83
--- /dev/null
+++ b/visualnamedentitylinkinganewdatasetandabaseline/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c768077c1d202e13d4bd4e2864accfe3a655b9fbbe2acd90d011f333615943aa
+size 384874
diff --git a/viterbidecodingofdirectedacyclictransformerfornonautoregressivemachinetranslation/ce6d8fd6-ddaa-48de-b4d1-d31c0ed3cae6_content_list.json b/viterbidecodingofdirectedacyclictransformerfornonautoregressivemachinetranslation/ce6d8fd6-ddaa-48de-b4d1-d31c0ed3cae6_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..daa87c2a7a6ff855b3518065a6c69c51c44fa974
--- /dev/null
+++ b/viterbidecodingofdirectedacyclictransformerfornonautoregressivemachinetranslation/ce6d8fd6-ddaa-48de-b4d1-d31c0ed3cae6_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6c1df37ef4a5f997b67c94e77d08f126f56f0d3cf2546f357bc78a8d7a8c5e57
+size 59037
diff --git a/viterbidecodingofdirectedacyclictransformerfornonautoregressivemachinetranslation/ce6d8fd6-ddaa-48de-b4d1-d31c0ed3cae6_model.json b/viterbidecodingofdirectedacyclictransformerfornonautoregressivemachinetranslation/ce6d8fd6-ddaa-48de-b4d1-d31c0ed3cae6_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..37c1af1026d87c46067c25174106f14adf3724ed
--- /dev/null
+++ b/viterbidecodingofdirectedacyclictransformerfornonautoregressivemachinetranslation/ce6d8fd6-ddaa-48de-b4d1-d31c0ed3cae6_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:410d00f43b0b732c84e4d3c3533c1138306abedbe7acccd46dc5b82e8d5d7428
+size 73923
diff --git a/viterbidecodingofdirectedacyclictransformerfornonautoregressivemachinetranslation/ce6d8fd6-ddaa-48de-b4d1-d31c0ed3cae6_origin.pdf b/viterbidecodingofdirectedacyclictransformerfornonautoregressivemachinetranslation/ce6d8fd6-ddaa-48de-b4d1-d31c0ed3cae6_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c2716ca953cf4d78cf7f522aedb4f8a145e9f23a
--- /dev/null
+++ b/viterbidecodingofdirectedacyclictransformerfornonautoregressivemachinetranslation/ce6d8fd6-ddaa-48de-b4d1-d31c0ed3cae6_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b5f9df9e40df6db27673d79562dd88a0afcda32ac2b0fa1afa2d3030d1cf2b60
+size 320085
diff --git a/viterbidecodingofdirectedacyclictransformerfornonautoregressivemachinetranslation/full.md b/viterbidecodingofdirectedacyclictransformerfornonautoregressivemachinetranslation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..754f0c556a594903fde3e0c04b350d607f822379
--- /dev/null
+++ b/viterbidecodingofdirectedacyclictransformerfornonautoregressivemachinetranslation/full.md
@@ -0,0 +1,282 @@
+# Viterbi Decoding of Directed Acyclic Transformer for Non-Autoregressive Machine Translation
+
+Chenze Shao $^{1,2}$ , Zhengrui Ma $^{1,2}$ , Yang Feng $^{1,2*}$
+
+1 Key Laboratory of Intelligent Information Processing Institute of Computing Technology, Chinese Academy of Sciences
+
+$^{2}$ University of Chinese Academy of Sciences
+
+{shaochenze18z, mazhengrui21b, fengyang}@ict.ac.cn
+
+# Abstract
+
+Non-autoregressive models achieve significant decoding speedup in neural machine translation but lack the ability to capture sequential dependency. Directed Acyclic Transformer (DA-Transformer) was recently proposed to model sequential dependency with a directed acyclic graph. Consequently, it has to apply a sequential decision process at inference time, which harms the global translation accuracy. In this paper, we present a Viterbi decoding framework for DA-Transformer, which guarantees to find the joint optimal solution for the translation and decoding path under any length constraint. Experimental results demonstrate that our approach consistently improves the performance of DA-Transformer while maintaining a similar decoding speedup.
+
+# 1 Introduction
+
+Non-autoregressive translation (Gu et al., 2018) models achieve a significant decoding speedup but suffer from performance degradation, which is mainly attributed to the multi-modality problem. Multi-modality refers to the scenario where the same source sentence may have multiple translations with a strong cross-correlation between target words. However, non-autoregressive models generally hold the conditional independence assumption on target words, which prevents them from capturing the multimodal target distribution.
+
+Recently, Directed Acyclic Transformer (Huang et al., 2022) was proposed to model sequential dependency with a directed acyclic graph consisting of different decoding paths that enable the model to capture multiple translation modalities. Although it has been proven effective, it cannot directly find the most probable translation with the argmax operation. Therefore, DA-Transformer has to apply a se
+
+quential decision process at inference time, which harms the global translation accuracy.
+
+In this paper, we propose a Viterbi decoding (Viterbi, 1967) framework for DA-Transformer to improve the decoding accuracy. Using the Markov property of decoding path, we can apply Viterbi decoding to find the most probable path, conditioned on which we can generate the translation with argmax decoding. Then, we further improve this decoding algorithm to perform a simultaneous search for decoding paths and translations, which guarantees to find the joint optimal solution under any length constraint. After Viterbi decoding, we obtain a set of translations with different lengths and rerank them to obtain the final translation. We apply a length penalty term in the reranking process, which prevents the generation of empty translation (Stahlberg and Byrne, 2019) and enables us to control the translation length flexibly.
+
+Experimental results on several machine translation benchmark tasks (WMT14 En $\leftrightarrow$ De, WMT17 Zh $\leftrightarrow$ En) show that our approach consistently improves the performance of DA-Transformer while maintaining a similar decoding speedup.
+
+# 2 Preliminaries: DA-Transformer
+
+# 2.1 Model Architecture
+
+DA-Transformer is formed by a Transformer encoder and a directed acyclic decoder. The encoder and layers of the decoder are the same as vanilla Transformer (Vaswani et al., 2017). On top of the decoder, the hidden states are organized as a directed acyclic graph, whose edges represent transition probabilities between hidden states.
+
+Given a source sentence $X = \{x_{1},\dots ,x_{N}\}$ and a target sentence $Y = \{y_{1},\dots ,y_{M}\}$ , the decoder length $L$ is set to $\lambda \cdot N$ , where $\lambda$ is a hyperparameter. The translation probability from $X$ to
+
+$Y$ is formulated as:
+
+$$
+P _ {\theta} (Y | X) = \sum_ {A \in \Gamma} P _ {\theta} (A | X) P _ {\theta} (Y | X, A), \tag {1}
+$$
+
+where $A = \{a_{1},\dots ,a_{M}\}$ is a translation path for the target sentence $Y$ and $a_i$ represents the position of word $y_{i}$ in the decoder. $\Gamma$ contains all possible translation paths with $1 = a_{1} < \dots < a_{M} = L$
+
+The probability of translation path $A$ is formulated based on the Markov hypothesis:
+
+$$
+P _ {\theta} (A | X) = \prod_ {i = 1} ^ {M - 1} P _ {\theta} \left(a _ {i + 1} \mid a _ {i}, X\right) = \prod_ {i = 1} ^ {M - 1} E _ {a _ {i}, a _ {i + 1}}, \tag {2}
+$$
+
+where $E \in \mathbb{R}^{L \times L}$ is the transition matrix obtained by self-attention, and $E_{a_i, a_{i+1}}$ represents the transition probability from position $a_i$ to position $a_{i+1}$ . $E$ is masked by a lower triangular matrix to ensure that the translation path is acyclic.
+
+Conditioned on $X$ and the translation path $A$ , the translation probability of $Y$ is formulated as:
+
+$$
+P _ {\theta} (Y \mid A, X) = \prod_ {i = 1} ^ {M} P _ {\theta} \left(y _ {i} \mid a _ {i}, X\right), \tag {3}
+$$
+
+where $P_{\theta}(y_i|a_i,X)$ represents the translation probability of word $y_{i}$ on the position $a_{i}$ of decoder.
+
+# 2.2 Training and Inference
+
+The training objective of DA-Transformer is to maximize the log-likelihood $\log P_{\theta}(Y|X)$ , which requires marginalizing all paths $A$ . Using the Markov property of translation path, DA-Transformer employs dynamic programming to calculate the translation probability. Besides, it applies glancing training (Qian et al., 2021) with a hyper-parameter $\tau$ to promote the learning.
+
+During inference, the objective is to find the most probable translation $\operatorname{argmax}_Y P_\theta(Y|X)$ . However, there is no known tractable decoding algorithm for this problem. Huang et al. (2022) proposed three approximate decoding strategies to find high-probability translations. The intuitive strategy is greedy decoding, which sequentially takes the most probable transition as the decoding path and generates a translation according to the conditional probabilities. Lookahead decoding improves greedy decoding by taking the most probable combination of transition and prediction as follows:
+
+$$
+y _ {i} ^ {*}, a _ {i} ^ {*} = \underset {y _ {i}, a _ {i}} {\operatorname {a r g m a x}} P _ {\theta} \left(y _ {i} \mid a _ {i}, X\right) P _ {\theta} \left(a _ {i} \mid a _ {i - 1}, X\right). \tag {4}
+$$
+
+Beam search decoding is a more accurate method that merges the paths of the same prefix, which approximates the real translation probability and better represents the model's preference. Beam search can be optionally combined with an n-gram language model to improve the performance further. However, the speed of beam search is much lower than greedy and lookahead decoding.
+
+# 3 Methodology
+
+This section presents a Viterbi decoding framework for DA-Transformer to improve decoding accuracy. We first develop a basic algorithm to find the optimal decoding path and then improve it to find the joint optimal solution of the translations and decoding paths. Finally, we introduce the technique to rerank the Viterbi decoding outputs.
+
+# 3.1 Optimal Decoding Path
+
+Recall that the greedy decoding strategy sequentially takes the most probable transition as the decoding path, which may not be optimal since the greedy strategy does not consider long-term profits. In response to this problem, we propose a Viterbi decoding framework for DA-Transformer that guarantees to find the optimal decoding path $\operatorname{argmax}_A P_\theta(A|X)$ under any length constraint.
+
+Specifically, we consider decoding paths of length $i$ that end in position $a_{i} = t$ , and use $\alpha(i,t)$ to represent the maximum probability of these paths. By definition, we set the initial state $\alpha(1,1) = 1$ and $\alpha(1,t > 1) = 0$ . The Markov property of decoding paths enables us to sequentially calculate $\alpha(i,\cdot)$ from its previous step $\alpha(i - 1,\cdot)$ :
+
+$$
+\alpha (i, t) = \max _ {t ^ {\prime}} \alpha (i - 1, t ^ {\prime}) \cdot E _ {t, t ^ {\prime}},
+$$
+
+$$
+\psi (i, t) = \underset {t ^ {\prime}} {\operatorname {a r g m a x}} \alpha (i - 1, t ^ {\prime}) \cdot E _ {t, t ^ {\prime}}, \tag {5}
+$$
+
+where $E$ is the transition matrix defined in Equation 2 and $\psi (i,t)$ is the backtracking index pointing to the previous position. After $L$ iterations, we obtain the score for every possible length, and then we can find the optimal length with the argmax function:
+
+$$
+M = \underset {i} {\operatorname {a r g m a x}} \alpha (i, L). \tag {6}
+$$
+
+After determining the length $M$ , we can trace the best decoding path along the backtracking index starting from $a_{M} = L$ :
+
+$$
+a _ {i} = \psi (i + 1, a _ {i + 1}). \tag {7}
+$$
+
+Finally, conditioning on the optimal path $A$ , we can generate the translation with argmax decoding:
+
+$$
+y _ {i} = \underset {y _ {i}} {\operatorname {a r g m a x}} P _ {\theta} \left(y _ {i} \mid a _ {i}, X\right). \tag {8}
+$$
+
+# 3.2 Joint Optimal Solution
+
+The decoding algorithm described above can be summarized as the following process:
+
+$$
+\begin{array}{l} A ^ {*} = \operatorname {a r g m a x} P _ {\theta} (A | X), \\ \begin{array}{l} A \\ = (1 0, 0, - 1) \end{array} \tag {9} \\ \end{array}
+$$
+
+$$
+Y ^ {*} = \underset {Y} {\operatorname {a r g m a x}} P _ {\theta} (Y | X, A ^ {*}).
+$$
+
+Even though the algorithm now finds the optimal decoding path, the translation on this path may have low confidence, resulting in a low joint probability $P_{\theta}(A,Y|X)$ . We further improve the decoding algorithm to search for both decoding paths and translations, which guarantees to find the joint optimal solution:
+
+$$
+A ^ {*}, Y ^ {*} = \underset {A, Y} {\operatorname {a r g m a x}} P _ {\theta} (A, Y | X). \tag {10}
+$$
+
+Notice that when the path $A$ is given, we can easily find the most probable translation $Y$ with $\operatorname{argmax}$ decoding. Let $Y^{A}$ denotes the $\arg\max$ decoding result under path $A$ , where $y_{i}^{a_{i}} = \operatorname{argmax}_{y_{i}} P_{\theta}(y_{i}|a_{i}, X)$ is the $i$ -th word of $Y^{A}$ . Then we can simplify our objective with $Y^{A}$ :
+
+$$
+\begin{array}{l} \max _ {A, Y} P _ {\theta} (A, Y | X) = \max _ {A, Y} P _ {\theta} (A | X) P _ {\theta} (Y | X, A) \\ = \max _ {A} \left(P _ {\theta} (A | X) \max _ {Y} P _ {\theta} (Y | X, A)\right) \\ = \max _ {A} P _ {\theta} (A | X) P _ {\theta} (Y ^ {A} | X, A) \\ = \max _ {A} P _ {\theta} \left(y _ {1} ^ {a _ {1}} \mid a _ {1}, X\right) \prod_ {i = 1} ^ {M - 1} E _ {a _ {i}, a _ {i + 1}} P _ {\theta} \left(y _ {i + 1} ^ {a _ {i + 1}} \mid a _ {i + 1}, X\right) \\ = \max _ {A} P _ {\theta} \left(y _ {1} ^ {A} \mid a _ {1}, X\right) \prod_ {i = 1} ^ {M - 1} E ^ {\prime} _ {a _ {i}, a _ {i + 1}}, \tag {11} \\ \end{array}
+$$
+
+where we introduce a new transition matrix $E^{\prime}$ with $E_{a_i,a_{i + 1}}' = E_{a_i,a_{i + 1}}P_\theta (y_{i + 1}^{a_i + 1}|a_{i + 1},X)$ . Compared to $\max_A P_\theta (A|X)$ , the major difference is the transition matrix $E^{\prime}$ , which considers both the transition probability and the prediction probability. Therefore, we can still apply the Viterbi decoding framework to find the optimal joint solution.
+
+We use 'Viterbi' to represent the Viterbi decoding algorithm proposed in section 3.1, and use 'Joint-Viterbi' to represent the improved algorithm in this section that finds the joint optimal solution. It is worth noting that Viterbi and
+
+Joint-Viterbi can be regarded as improvements to greedy decoding and lookahead decoding, respectively. Both greedy decoding and lookahead decoding consider the one-step probability and find the next token with $\operatorname{argmax}_{a_i} P_{\theta}(a_i | X, a_{i-1})$ and $\operatorname{argmax}_{y_i, a_i} P_{\theta}(y_i | a_i, X) P_{\theta}(a_i | a_{i-1}, X)$ , respectively. In comparison, Viterbi and Joint-Viterbi consider the whole decoding path and guarantee to find the global optimal solution $\operatorname{argmax}_A P_{\theta}(A | X)$ and $\operatorname{argmax}_{A, Y} P_{\theta}(A, Y | X)$ , respectively.
+
+# 3.3 Reranking with Length Penalty
+
+After Viterbi decoding, we have a set of translations of different lengths that can be ranked to obtain the most probable one. However, argmax decoding is biased toward short translations and may even degenerate to an empty translation, as also observed in Stahlberg and Byrne (2019).
+
+To solve this problem, we introduce the hyperparameter $\beta$ for length normalization in Wu et al. (2016) and modify Equation 6 to divide by the length penalty term:
+
+$$
+M = \underset {i} {\operatorname {a r g m a x}} \frac {\alpha (i , L)}{i ^ {\beta}}. \tag {12}
+$$
+
+By changing the length penalty $\beta$ to different values, we now have the flexibility to control the translation length with little additional overhead, which is another appealing feature of our approach.
+
+# 4 Experiments
+
+# 4.1 Settings
+
+We conduct experiments on WMT14 English $\leftrightarrow$ German (En $\leftrightarrow$ De, 4.5M pairs) and WMT17 Chinese $\leftrightarrow$ English (Zh $\leftrightarrow$ En, 20M pairs). These datasets are all encoded into subword units (Sennrich et al., 2016). We use the same preprocessed data and train/dev/test splits as Kasai et al. (2020). The translation quality is evaluated with sacreBLEU (Post, 2018) for WMT17 En-Zh and tokenized BLEU (Papineni et al., 2002) for other benchmarks. We use GeForce RTX 3090 to train models and measure translation latency. Our models are implemented based on the open-source toolkit of fairseq (Ott et al., 2019).
+
+We strictly follow the hyper-parameter settings of Huang et al. (2022) to implement DA-Transformer. We adopt Transformer-base (Vaswani et al., 2017) as the model architecture. We set dropout to 0.1, weight decay to 0.01, and label smoothing to 0.1 for regularization. We use $\lambda = 8$
+
+
Models
Iter
WMT14
WMT17
Average Gap
Speedup
En-De
De-En
En-Zh
Zh-En
Transformer
M
27.67
31.84
35.05
24.26
0
1.0×
DA-Transformer + Greedy
1
26.06
30.69
33.29
22.32
1.62
14.2×
DA-Transformer + Viterbi
1
26.43†
30.84
33.25
22.58†
1.43
13.3×
DA-Transformer + Lookahead
1
26.55
30.81
33.54
22.68
1.31
14.0×
DA-Transformer + Joint-Viterbi
1
26.89†
31.10†
33.65
23.24†
0.98
13.2×
+
+for the graph size and linearly anneal $\tau$ from 0.5 to 0.1 for the glancing training. For fair comparisons, we tune the length penalty in [0.95, 1.05] to obtain a similar translation length as lookahead. We train all models for 300K steps, where each batch contains approximately 64K source tokens. All models are optimized by Adam (Kingma and Ba, 2014) with $\beta = (0.9, 0.999)$ and $\epsilon = 10^{-8}$ . The learning rate warms up to $5 \cdot 10^{-4}$ and then begins to anneal it after 10K steps with the inverse square-root schedule. We calculate the validation BLEU scores every epoch and obtain the final model by taking an average of the best five checkpoints.
+
+# 4.2 Main Results
+
+As shown in Table 1, both Viterbi and Joint-Viterbi improve over their corresponding baseline. Joint-Viterbi achieves the best performance, which outperforms the previous lookahead strategy by 0.33 BLEU. Besides, it is worth noting that the Viterbi decoding process is highly parallelizable, which does not bring much overhead in the decoding and only reduces the speedup by less than $1\times$ .
+
+# 4.3 Results with Knowledge Distillation
+
+In this section, we evaluate the performance of our method with sequence-level knowledge distillation (Hinton et al., 2015; Kim and Rush, 2016), where the target side of the training set is replaced by the output of an autoregressive teacher model. Experimental results in Table 2 show that the differences between decoding strategies are relatively small.
+
+Intuitively, we attribute this phenomenon to the improvement of model confidence. As knowledge distillation reduces the multi-modality of the dataset (Zhou et al., 2020; Sun and Yang, 2020), the model may become more confident in predicting target sentences, which makes the greedy strategy more likely to reach the optima. To verify this,
+
+Table 1: Results on WMT14 En $\leftrightarrow$ De and WMT17 Zh $\leftrightarrow$ En. $M$ is the length of the target sentence. 'Iter' means the number of decoding iterations. The speedup is evaluated on WMT14 En-De test set with a batch size of 1. $\dagger$ means significantly better than the baseline model $(p < 0.05)$ . We use the statistical significance test with paired bootstrap resampling (Koehn, 2004).
+
+
Method
Greedy
Lookahead
Viterbi
Joint-Viterbi
BLEU
26.81
26.91
26.88
27.03
+
+Table 2: Results with knowledge distillation on WMT14 En-De test set.
+
+
Metric
T-Entropy
P-Entropy
Percentage
w/o kd
1.088
1.892
59.6%
w/ kd
0.998
0.601
70.1%
+
+Table 3: Statistics of DA-Transformer on WMT14 EnDe test set. 'kd' means knowledge distillation. 'T' means transition and 'P-' means prediction.
+
+we measure the average entropy of transition and prediction probabilities and evaluate the percentage of lookahead outputs that match the optima $\operatorname{argmax}_{A,Y} P(A,Y|X)$ under their length. As Table 3 shows, DA-Transformer with distillation has smaller entropies and a larger percentage of optimal translations, which confirms our intuition.
+
+# 4.4 Probability Analysis
+
+Recall that the decoding objective is to find the most probable translation $\operatorname{argmax}_Y P(Y|X)$ , while our approach finds the joint solution $\operatorname{argmax}_{A,Y} P(A,Y|X)$ . Although there is a gap between them, we argue that optimizing the joint probability helps us achieve higher translation probability. To prove it, we collect the outputs of lookahead decoding and Joint-Viterbi on WMT14 En-De test set and compute their probabilities $P(Y|X)$ by dynamic programming. We then calculate the average log probability of each decoding strategy, and also evaluate the percentage of translations that one strategy obtains a larger probability than another. As Table 4 shows, Joint-Viterbi outperforms lookahead decoding by a large margin, indicating that we can obtain a higher average translation probability
+
+
Method
Lookahead
Joint-Viterbi
Log-prob
-4.39
-4.14
Percentage
24.4%
41.6%
+
+Table 4: Probability analysis of on Lookahead and Joint-Viterbi decoding on WMT14 En-De test set.
+
+by optimizing the joint probability.
+
+# 4.5 Effect of Length Penalty
+
+Viterbi decoding is capable of flexibly controlling the output length with the length penalty $\beta$ . To show the effect of the length penalty, we change the value of $\beta$ in Joint-Viterbi to decode the WMT17 Zh-En test set and report the corresponding BLEU scores and average output lengths in Figure 1. It shows that the length penalty can almost linearly control the output length, which can help us obtain satisfactory translations. Generally, Viterbi decoding can obtain better performance when the output length is closer to the reference length. If there is no length penalty, only finding outputs with the maximum joint probability will break the translation quality with extremely small output lengths.
+
+
+Figure 1: The effect of length penalty $\beta$ measured on WMT17 Zh-En test set.
+
+# 5 Related Works
+
+Most non-autoregressive models can directly find the most probable output with argmax decoding, which is the fastest decoding algorithm. However, models of this type usually suffer from the multimodality problem (Gu et al., 2018), leading to severe performance degradation. A relatively more accurate method is noisy parallel decoding, which requires generating multiple translation candidates and greatly increases the amount of computation.
+
+Many efforts have been made to address the multi-modality problem, including latent models (Kaiser et al., 2018; Ma et al., 2019; Shu et al.,
+
+2020; Bao et al., 2021, 2022), alignment-based models (Gu et al., 2018; Ran et al., 2021; Song et al., 2021), and better training objectives (Shao et al., 2019, 2020; Shan et al., 2021; Ghazvininejad et al., 2020; Du et al., 2021; Shao et al., 2021). However, these techniques are still not powerful enough, which heavily rely on knowledge distillation (Kim and Rush, 2016).
+
+Some researchers seek iterative decoding approaches to improve translation quality. Work in this area includes semi-autoregressive decoding (Wang et al., 2018), iterative refinement (Lee et al., 2018), mask-predict decoding (Ghazvininejad et al., 2019), Levenshtein Transformer (Gu et al., 2019), multi-thread decoding (Ran et al., 2020), Imputer (Saharia et al., 2020), and rewriting (Geng et al., 2021). Although their translations are of better quality, they are criticized for being slow at inference time (Kasai et al., 2021).
+
+Recently, latent alignment models like CTC (Libovicky and Helcl, 2018; Sahara et al., 2020) and DA-Transformer (Huang et al., 2022) achieved impressive performance and received a lot of attention. Beam search is an useful decoding strategy for latent alignment models (Kasner et al., 2020; Gu and Kong, 2020; Zheng et al., 2021; Shao et al., 2022; Huang et al., 2022; Shao and Feng, 2022). It brings considerable improvements but also reduces the decoding speed.
+
+Viterbi decoding has also been used in nonautoregressive models. In CRF-based NAT models, Viterbi decoding is applied to find the most probable output (Sun et al., 2019; Sun and Yang, 2020).
+
+# 6 Conclusion
+
+The current decoding strategies of DA-Transformer need to apply a sequential decision process, which harms the global translation accuracy. In this paper, we propose a Viterbi decoding framework for DA-Transformer to find the joint optimal solution of the translation and decoding path and further demonstrate its effectiveness on multiple benchmarks.
+
+# 7 Acknowledgement
+
+We thank the anonymous reviewers for their insightful comments. We thank Fei Huang for helping us open source code.
+
+# Limitations
+
+The major limitation of our method is that it cannot find the most probable translation
+
+$\operatorname{argmax}_Y P(Y|X)$ but alternatively finds the joint optimal solution $\operatorname{argmax}_{A,Y} P(A,Y|X)$ . However, as we show in section 4.4, outputs with higher joint probability usually also have higher translation probability, suggesting that optimizing the joint probability is helpful.
+
+Another limitation is that the improvements of our method are smaller in the knowledge distillation setting. However, the main advantage of DA-Transformer is that it does not heavily rely on knowledge distillation and achieves superior performance on raw data, which makes the impact of this limitation small.
+
+# References
+
+Yu Bao, Shujian Huang, Tong Xiao, Dongqi Wang, Xinyu Dai, and Jiajun Chen. 2021. Nonautoregressive translation by learning target categorical codes. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5749-5759, Online. Association for Computational Linguistics.
+Yu Bao, Hao Zhou, Shujian Huang, Dongqi Wang, Lihua Qian, Xinyu Dai, Jiajun Chen, and Lei Li. 2022. "Latent-GLAT: Glancing at latent variables for parallel text generation." In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8398-8409, Dublin, Ireland. Association for Computational Linguistics.
+Cunxiao Du, Zhaopeng Tu, and Jing Jiang. 2021. Order-agnostic cross entropy for non-autoregressive machine translation. In ICML.
+Xinwei Geng, Xiaocheng Feng, and Bing Qin. 2021. Learning to rewrite for non-autoregressive neural machine translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3297-3308, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, and Omer Levy. 2020. Aligned cross entropy for non-autoregressive machine translation. In ICML.
+Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6112-6121.
+
+Jiatao Gu, James Bradbury, Caiming Xiong, Victor O.K. Li, and Richard Socher. 2018. Nonautoregressive neural machine translation. In International Conference on Learning Representations.
+Jiatao Gu and Xiang Kong. 2020. Fully non-autoregressive neural machine translation: Tricks of the trade.
+Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
+Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
+Fei Huang, Hao Zhou, Yang Liu, Hang Li, and Min-lie Huang. 2022. Directed acyclic transformer for non-autoregressive machine translation. In Proceedings of the 39th International Conference on Machine Learning, ICML 2022.
+Lukasz Kaiser, Samy Bengio, Aurko Roy, Ashish Vaswani, Niki Parmar, Jakob Uszkoreit, and Noam Shazeer. 2018. Fast decoding in sequence models using discrete latent variables. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2390-2399. PMLR.
+Jungo Kasai, James Cross, Marjan Ghazvininejad, and Jiatao Gu. 2020. Non-autoregressive machine translation with disentangled context transformer. In ICML.
+Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah A. Smith. 2021. Deep encoder, shallow decoder: Reevaluating non-autoregressive machine translation. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
+Zdeněk Kasner, Jindřich Libovický, and Jindřich Helcl. 2020. Improving fluency of non-autoregressive machine translation.
+Yoon Kim and Alexander M. Rush. 2016. Sequence-level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317-1327, Austin, Texas. Association for Computational Linguistics.
+Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.
+Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388-395, Barcelona, Spain. Association for Computational Linguistics.
+
+Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1173-1182, Brussels, Belgium. Association for Computational Linguistics.
+Jindrich Libovicky and Jindrich Helcl. 2018. End-to-end non-autoregressive neural machine translation with connectionist temporal classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3016-3021, Brussels, Belgium. Association for Computational Linguistics.
+Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neubig, and Eduard Hovy. 2019. FlowSeq: Non-autoregressive conditional sequence generation with generative flow. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4282-4292, Hong Kong, China. Association for Computational Linguistics.
+Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
+Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.
+Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Weinan Zhang, Yong Yu, and Lei Li. 2021. Glancing transformer for non-autoregressive neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1993-2003, Online. Association for Computational Linguistics.
+Qiu Ran, Yankai Lin, Peng Li, and Jie Zhou. 2020. Learning to recover from multi-modality errors for non-autoregressive neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3059-3069, Online. Association for Computational Linguistics.
+
+Qiu Ran, Yankai Lin, Peng Li, and Jie Zhou. 2021. Guiding non-autoregressive neural machine translation decoding with reordering information. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Virtual Event, February 2-9, 2021, pages 13727-13735. AAAI Press.
+Chitwan Sahara, William Chan, Saurabh Saxena, and Mohammad Norouzi. 2020. Non-autoregressive machine translation with latent alignments. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1098-1108, Online. Association for Computational Linguistics.
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
+Yong Shan, Yang Feng, and Chenze Shao. 2021. Modeling coverage for non-autoregressive neural machine translation. In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE.
+Chenze Shao and Yang Feng. 2022. Non-monotonic latent alignments for ctc-based non-autoregressive machine translation. In Proceedings of NeurIPS 2022.
+Chenze Shao, Yang Feng, Jinchao Zhang, Fandong Meng, Xilin Chen, and Jie Zhou. 2019. Retrieving sequential information for non-autoregressive neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3013-3024, Florence, Italy. Association for Computational Linguistics.
+Chenze Shao, Yang Feng, Jinchao Zhang, Fandong Meng, and Jie Zhou. 2021. Sequence-Level Training for Non-Autoregressive Neural Machine Translation. Computational Linguistics, 47(4):891-925.
+Chenze Shao, Xuanfu Wu, and Yang Feng. 2022. One reference is not enough: Diverse distillation with reference selection for non-autoregressive translation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3779-3791, Seattle, United States. Association for Computational Linguistics.
+Chenze Shao, Jinchao Zhang, Yang Feng, Fandong Meng, and Jie Zhou. 2020. Minimizing the bag-of-ngrams difference for non-autoregressive neural machine translation. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, New York, NY, USA, February 7-12, 2020, pages 198-205. AAAI Press.
+Raphael Shu, Jason Lee, Hideki Nakayama, and Kyunghyun Cho. 2020. Latent-variable non-autoregressive neural machine translation with deterministic inference using a delta posterior. In AAAI.
+
+Jongyoon Song, Sungwon Kim, and Sungroh Yoon. 2021. AligNART: Non-autoregressive neural machine translation by jointly learning to estimate alignment and translate. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1-14, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Felix Stahlberg and Bill Byrne. 2019. On NMT search errors and model errors: Cat got your tongue? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3356-3362, Hong Kong, China. Association for Computational Linguistics.
+Zhiqing Sun, Zhuohan Li, Haoqing Wang, Di He, Zi Lin, and Zhihong Deng. 2019. Fast structured decoding for sequence models. In Advances in Neural Information Processing Systems 32, pages 3016-3026.
+Zhiqing Sun and Yiming Yang. 2020. An EM approach to non-autoregressive conditional sequence generation. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 9249-9258. PMLR.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pages 6000-6010, Red Hook, NY, USA. Curran Associates Inc.
+A. Viterbi. 1967. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Transactions on Information Theory, 13(2):260-269.
+Chunqi Wang, Ji Zhang, and Haiqing Chen. 2018. Semi-autoregressive neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 479-488, Brussels, Belgium. Association for Computational Linguistics.
+Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
+Zaixiang Zheng, Hao Zhou, Shujian Huang, Jiajun Chen, Jingjing Xu, and Lei Li. 2021. Duplex sequence-to-sequence learning for reversible machine translation.
+
+Chunting Zhou, Jiatao Gu, and Graham Neubig. 2020. Understanding knowledge distillation in non-autoregressive machine translation. In International Conference on Learning Representations.
\ No newline at end of file
diff --git a/viterbidecodingofdirectedacyclictransformerfornonautoregressivemachinetranslation/images.zip b/viterbidecodingofdirectedacyclictransformerfornonautoregressivemachinetranslation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9ba12fa088ccc5e5508c47b1e51e673e4ecb16d3
--- /dev/null
+++ b/viterbidecodingofdirectedacyclictransformerfornonautoregressivemachinetranslation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ddcf02eae89a396a9b902da1a5c65a5010644af93ee0052687b1a82068db4ce
+size 198245
diff --git a/viterbidecodingofdirectedacyclictransformerfornonautoregressivemachinetranslation/layout.json b/viterbidecodingofdirectedacyclictransformerfornonautoregressivemachinetranslation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6e46b77f90e82965e75c8f29e4eda76e4e3daab9
--- /dev/null
+++ b/viterbidecodingofdirectedacyclictransformerfornonautoregressivemachinetranslation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f020efc6f27f594faec4d7ae6ba03a17426b742f9db5978a307d83ca43c8553f
+size 319105
diff --git a/waitinfopolicybalancingsourceandtargetatinformationlevelforsimultaneousmachinetranslation/54919471-a829-4c42-adf3-129c52b2a4ed_content_list.json b/waitinfopolicybalancingsourceandtargetatinformationlevelforsimultaneousmachinetranslation/54919471-a829-4c42-adf3-129c52b2a4ed_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..796b8ba38973c3a8f0e7ebad5ba7c3a526b348f7
--- /dev/null
+++ b/waitinfopolicybalancingsourceandtargetatinformationlevelforsimultaneousmachinetranslation/54919471-a829-4c42-adf3-129c52b2a4ed_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d553839f87c1700153b8117f50a10b00d6f8af3e45acec40fbf6464aafec30c6
+size 111612
diff --git a/waitinfopolicybalancingsourceandtargetatinformationlevelforsimultaneousmachinetranslation/54919471-a829-4c42-adf3-129c52b2a4ed_model.json b/waitinfopolicybalancingsourceandtargetatinformationlevelforsimultaneousmachinetranslation/54919471-a829-4c42-adf3-129c52b2a4ed_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d1fa0f1ffdebc899caea8e1717161b83e4333d6b
--- /dev/null
+++ b/waitinfopolicybalancingsourceandtargetatinformationlevelforsimultaneousmachinetranslation/54919471-a829-4c42-adf3-129c52b2a4ed_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f517a8b8bef775d64642e181324f852c7b18a5fab61a2fdd11fb2e10b462707d
+size 129570
diff --git a/waitinfopolicybalancingsourceandtargetatinformationlevelforsimultaneousmachinetranslation/54919471-a829-4c42-adf3-129c52b2a4ed_origin.pdf b/waitinfopolicybalancingsourceandtargetatinformationlevelforsimultaneousmachinetranslation/54919471-a829-4c42-adf3-129c52b2a4ed_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..20c9f491c33fe23dec24be5d2e940d1edfa34106
--- /dev/null
+++ b/waitinfopolicybalancingsourceandtargetatinformationlevelforsimultaneousmachinetranslation/54919471-a829-4c42-adf3-129c52b2a4ed_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7392a697013400161e5b2393e623f12d75f0953b68bbbf7e364bb332bb799606
+size 1017393
diff --git a/waitinfopolicybalancingsourceandtargetatinformationlevelforsimultaneousmachinetranslation/full.md b/waitinfopolicybalancingsourceandtargetatinformationlevelforsimultaneousmachinetranslation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..8b7b5f4c0f4bd90f8d4f3f805e50ff590166b15e
--- /dev/null
+++ b/waitinfopolicybalancingsourceandtargetatinformationlevelforsimultaneousmachinetranslation/full.md
@@ -0,0 +1,437 @@
+# Wait-info Policy: Balancing Source and Target at Information Level for Simultaneous Machine Translation
+
+Shaolei Zhang $^{1,2}$ , Shoutao Guo $^{1,2}$ , Yang Feng $^{1,2*}$
+
+1Key Laboratory of Intelligent Information Processing
+
+Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS)
+
+$^{2}$ University of Chinese Academy of Sciences, Beijing, China
+
+{zhangshaolei20z,guoshoutao22z,fengyang}@ict.ac.cn
+
+# Abstract
+
+Simultaneous machine translation (SiMT) outputs the translation while receiving the source inputs, and hence needs to balance the received source information and translated target information to make a reasonable decision between waiting for inputs or outputting translation. Previous methods always balance source and target information at the token level, either directly waiting for a fixed number of tokens or adjusting the waiting based on the current token. In this paper, we propose a Wait-info Policy to balance source and target at the information level. We first quantify the amount of information contained in each token, named info. Then during simultaneous translation, the decision of waiting or outputting is made based on the comparison results between the total info of previous target outputs and received source inputs. Experiments show that our method outperforms strong baselines under and achieves better balance via the proposed info1.
+
+# 1 Introduction
+
+Simultaneous machine translation (SiMT) (Cho and Esipova, 2016; Gu et al., 2017; Ma et al., 2019) outputs the translation while receiving the source sentence, aiming at the trade-off between translation quality and latency. Therefore, a policy is required for SiMT to decide between waiting for the source inputs (i.e., READ) or outputting translations (i.e., WRITE), the core of which is to wisely balance the received source information and the translated target information. When the source information is less, the model should wait for more inputs for a high-quality translation; conversely, when the translated target information is less, the model should output translations for a low latency.
+
+Existing SiMT policies, involving fixed and adaptive, always balance source and target at the
+
+
+(a) Wait-k policy: treats each token equally, and lags $k$ tokens.
+
+
+(b) Wait-info policy: quantifies the information in each token, named info (e.g., 0.5, 1.7, ...) and keeps the target information always less than the received source information $\mathcal{K}$ info.
+Figure 1: Schematic diagram of Wait-info v.s. Wait-k.
+
+token level, i.e., treating each source and target token equally when determining READ/WRITE. Fixed policies decide READ/WRITE based on the number of received source tokens (Ma et al., 2019; Zhang and Feng, 2021c), such as wait-k policy (Ma et al., 2019) simply considers each source token to be equivalent and lets the target outputs always lag the source inputs by $k$ tokens, as shown in Figure 1(a). Fixed policies are always limited by the fact that the policy cannot be adjusted according to complex inputs, making them difficult to get the best trade-off. Adaptive policies predict READ/WRITE according to the current source and target tokens (Arivazhagan et al., 2019; Ma et al., 2020) and thereby get a better trade-off, but they often ignore and under-utilize the difference between tokens when deciding READ/WRITE. Besides, existing adaptive policies always rely on complicated training (Ma et al., 2020; Miao et al., 2021) or additional labeled data (Zheng et al., 2019; Zhang et al., 2020; Alinejad et al., 2021), making them more computationally expensive than fixed policies.
+
+Treating each token equally when balancing source and target is not the optimal choice for SiMT
+
+policy. Many studies have shown that different words have significantly different functions in translation (Lin et al., 2018; Moradi et al., 2019; Chen et al., 2020), often divided into content words (i.e., noun, verb, ...) and function words (i.e., conjunction, preposition, ...), where the former express more important meaning and the latter is less informative. Accordingly, tokens with different amounts of information should also play different roles in the SiMT policy, where more informative tokens should play a more dominant role because they bring more information to SiMT model (Zhang and Feng, 2022a,b). Therefore, explicitly differentiating various tokens rather than treating them equally when determining READ/WRITE will be beneficial to developing a more precise SiMT policy.
+
+In this paper, we differentiate various source and target tokens based on the amount of information they contain, aiming to balance received source information and translated target information at the information level. To this end, we propose wait-info policy, a simple yet effective policy for SiMT. As shown in Figure 1(b), we first quantify the amount of information contained in each token through a scalar, named info, which is jointly learned with the attention mechanism in an unsupervised manner. During the simultaneous translation, READ/WRITE decisions are made by balancing the total info of translated target information and received source information. If the received source information is more than translated target information by $\mathcal{K}$ info or more, the model outputs translation, otherwise the model waits for the next input. Experiments and analyses show that our method outperforms strong baselines and effectively quantifies the information contained in each token.
+
+# 2 Related Work
+
+SiMT Policy Recent policies fall into fixed and adaptive. For fixed policy, Ma et al. (2019) proposed wait-k policy, which first READ $k$ source tokens and then READ/WRITE one token alternately. Elbayad et al. (2020) proposed an efficient multipath training for wait-k policy to randomly sample $k$ during training. Zhang et al. (2021) proposed future-guide training for wait-k policy, which introduces a full-sentence MT to guide training. Zhang and Feng (2021a) proposed a char-level wait-k policy. Zhang and Feng (2021c) proposed a mixture-of-experts wait-k policy to develop a universal SiMT model. For adaptive policy, Gu et al. (2017)
+
+trained an agent to decide READ/WRITE via reinforcement learning. Arivazhagan et al. (2019) proposed MILk, which predicts a Bernoulli variable to determine READ/WRITE. Ma et al. (2020) proposed MMA to implement MILk on Transformer. (Zhang and Feng, 2022c) proposed dual-path SiMT to enhance MMA with dual learning. Zheng et al. (2020) developed adaptive wait-k through heuristic ensemble of multiple wait-k models. Miao et al. (2021) proposed a generative framework to generate READ/WRITE decisions. Zhang and Feng (2022a) proposed Gaussian multi-head attention to decide READ/WRITE based on alignments.
+
+Previous policies always treat each token equally when determining READ/WRITE, ignoring the fact that tokens with different amounts of information often play different roles in SiMT policy. Our method aims to develop a more precise SiMT policy by differentiating the importance of various tokens when determining READ/WRITE.
+
+Information Modeling in NMT Linguistics divides words into content words and function words according to their information and functions in the sentence. Therefore, modeling the information contained in each word is often used to improve the NMT performance. Moradi et al. (2019) and Chen et al. (2020) used the word frequency to indicate how much information each word contains, and the words with lower frequencies contain more information. Liu et al. (2020) and Kobayashi et al. (2020) found that the norm of word embedding is related to the token information in NMT. Lin et al. (2018) and Zhang and Feng (2021b) argued that the attention mechanism for different types of word should be different, where the attention distribution of content word tends to be more concentrated.
+
+Our method explores the usefulness of modeling information for SiMT policy, and proposes an unsupervised method to quantify the information of tokens through the attention mechanism, achieving good explainability.
+
+# 3 Background
+
+Full-sentence MT For a translation task, we denote the source sentence as $\mathbf{x} = (x_{1},\dots ,x_{n})$ with source length $n$ and the target sentence as $\mathbf{y} = (y_1,\dots ,y_m)$ with target length $m$ . Transformer (Vaswani et al., 2017) is the most widely used architecture for full-sentence MT, consisting of an encoder and a decoder. Encoder maps $\mathbf{x}$ to source hidden states $\mathbf{z} = (z_{1},\dots ,z_{n})$ . Decoder
+
+maps $\mathbf{y}$ to target hidden states $\mathbf{s} = (s_1, \dots, s_m)$ , and then performs translating. Specifically, each encoder layer contains two sub-layers: self-attention and feed-forward network (FFN), while each decoder layer contains three sub-layers: self-attention, cross-attention and FFN. Both self-attention and cross-attention are implemented through the dot-product attention between query $\mathbf{Q}$ and key $\mathbf{K}$ , calculated as:
+
+$$
+e _ {i j} = \frac {Q _ {i} W ^ {Q} \left(K _ {j} W ^ {K}\right) ^ {\top}}{\sqrt {d _ {k}}}, \tag {1}
+$$
+
+$$
+\alpha_ {i j} = \mathrm {s o f t m a x} \left(e _ {i j}\right). \tag {2}
+$$
+
+where $e_{ij}$ is the similarity score between $Q_{i}$ and $K_{j}$ , and $\alpha_{ij}$ is the normalized attention weight. $d_{k}$ is the input dimension, $W^{Q}$ and $W^{K}$ are projection parameters. More specifically, self-attention extracts the monolingual representation of source or target tokens, so the query and key both come from the source hidden states $\mathbf{z}$ or target hidden states $\mathbf{s}$ . While cross-attention extracts the cross-lingual representation through measuring the correlation between target and source token, so query comes from the target hidden states $\mathbf{s}$ , and key comes from the source hidden states $\mathbf{z}$ .
+
+Wait-k Policy Simultaneous machine translation (SiMT) determines when to start translating each target token through a policy. Wait-k policy (Ma et al., 2019) is the most widely used policy for SiMT, which refers to first waiting for $k$ source tokens and then translating and waiting for one token alternately, i.e., the target outputs always lagging $k$ tokens behind the source inputs. Formally, when translating $y_{i}$ , wait-k policy forces the SiMT model to wait for $g_{k}(i)$ source tokens, where $g_{k}(i)$ is calculated as:
+
+$$
+g _ {k} (i) = \min \left\{k + i - 1, n \right\}. \qquad (3)
+$$
+
+# 4 Method
+
+To differentiate various tokens when determining READ/WRITE, we quantify the amount of information contained in each source and target token, named info. As shown in Figure 2, we propose info-aware Transformer to jointly learn the quantified info with the attention mechanism in an unsupervised manner. Then based on the quantified info, we propose wait-info policy to balance the received source information and translated target information. The details are as follows.
+
+
+Figure 2: Architecture of the proposed info-aware Transformer, where we omit residual connection and layer normalization in the figure for clarity.
+
+# 4.1 Info Quantification
+
+To quantify the amount of information in each token, we use a scalar to represent how much information each token contains, named info. We denote the info of the source tokens and the target tokens as $\mathbf{I}^{src} \in \mathbb{R}^{n \times 1}$ and $\mathbf{I}^{tgt} \in \mathbb{R}^{m \times 1}$ , respectively, where $I_{j}^{src}$ and $I_{i}^{tgt}$ represent the info of $x_{j}$ and $y_{i}$ , and the higher info means that the token has more information.
+
+To predict $\mathbf{I}^{src}$ and $\mathbf{I}^{tgt}$ , we introduce two Info Quantizers before the encoder and decoder to respectively quantify the information of each source and target token, as shown in Figure 2. Specifically, the info quantizer is implemented by a 3-layer feedforward network (FFN):
+
+$$
+\mathbf {I} ^ {s r c} = 2 \times \text {s i g m o i d (F F N} (\mathbf {x})) \tag {4}
+$$
+
+$$
+\mathbf {I} ^ {t g t} = 2 \times \operatorname {s i g m o i d} \left(\operatorname {F F N} (\mathbf {y})\right). \tag {5}
+$$
+
+For the formulation of the following wait-info policy, $2 \times \text{sigmoid}(\cdot)$ is used to restrict the quantified info $I_{j}^{src}, I_{i}^{tgt} \in (0, 2)$ .
+
+Further, in a translation task, source sentence and target sentence should be semantically equivalent (Finch et al., 2005; Guo et al., 2022), so the total information of source tokens should be equal to that of target tokens. To this end, we introduce an info-sum loss $\mathcal{L}_{sum}$ to constrain the total info of
+
+the source tokens and target tokens, calculated as:
+
+$$
+\mathcal {L} _ {s u m} = \left\| \sum_ {j = 1} ^ {n} I _ {j} ^ {s r c} - \zeta \right\| _ {2} + \left\| \sum_ {i = 1} ^ {m} I _ {i} ^ {t g t} - \zeta \right\| _ {2}, \tag {6}
+$$
+
+where $\zeta$ is a hyperparameter to represent the total info, and we set $\zeta = \frac{m + n}{2}$ (i.e., average length of source and target) to control the average info to be around 1. Therefore, the final loss $\mathcal{L}$ is:
+
+$$
+\mathcal {L} = \mathcal {L} _ {c e} + \lambda \mathcal {L} _ {s u m}, \tag {7}
+$$
+
+where $\mathcal{L}_{ce}$ is the original cross-entropy loss for the translation (Vaswani et al., 2017). $\lambda$ is a hyperparameter and we set $\lambda = 0.3$ in our experiments.
+
+# 4.2 Learning of Quantified Info
+
+The form of quantified info $\mathbf{I}^{src}$ and $\mathbf{I}^{tgt}$ has been constrained through Eq.(4-7), and then the key challenge is how to encourage the quantified info to accurately reflect the amount of information each token contains. Since the tokens with different amounts of information often show different preferences in the attention distribution (Lin et al., 2018), we propose an unsupervised method to learn the quantified info through the attention mechanism. As shown in Figure 2, we introduce an info-aware Transformer, consisting of info-aware self-attention and info-consistent cross-attention.
+
+Info-aware Self-attention Self-attention in both encoder and decoder are used to extract monolingual representations of tokens, where tokens with different amounts of information tend to exhibit different attention distributions (Lin et al., 2018; Zhang and Feng, 2021b). Specifically, tokens with much information, such as content words, tend to pay more attention to themselves. For the tokens with less information, since they have less meaning in themselves, they need more context information and thereby pay less attention to themselves. Therefore, we use the quantified info to bias the tokens' attention to themselves, thereby encouraging those tokens that tend to focus more on themselves to get higher info. Specifically, based on the original self-attention in Eq.(1,2), we add the quantified info $I_i^\tau$ , $\tau \in \{src, tgt\}$ (respectively used for encoder and decoder self-attention) on the token's similarity to itself $e_{ii}$ (Lin et al., 2018), and then normalize them with softmax $(\cdot)$ to get the
+
+info-aware self-attention $\beta_{ij}$ , calculated as:
+
+$$
+\widetilde {e} _ {i j} = \left\{ \begin{array}{l l} e _ {i j} + \left(I _ {i} ^ {\tau} - 1\right), & \text {i f} i = j \\ e _ {i j} & , \quad \text {o t h e r w i s e} \end{array} \right., \tag {8}
+$$
+
+$$
+\beta_ {i j} = \operatorname {s o f t m a x} \left(\widetilde {e} _ {i j}\right). \tag {9}
+$$
+
+If $I_i^\tau > 1$ (i.e., containing more information), the token will pay more attention to itself, otherwise the token will focus more on other tokens to extract context information. Therefore, the info can be learned from the attention distribution.
+
+Info-consistent Cross-attention In addition to modeling the token info in a monolingual context, the consistency of the token info between target and source is also crucial for the SiMT policy, which ensures that the received source information and the target information can be accurately balanced under the same criterion. For consistency, the target and source tokens with high similarity (i.e., those with high cross-attention scores) should have similar info. Therefore, we scale the cross-attention with the info consistency between target and source, where the info consistency is measured by $L_{1}$ distance between target and source info. Info-consistent cross-attention $\gamma_{ij}$ is calculated as:
+
+$$
+\widetilde {\gamma} _ {i j} = \alpha_ {i j} \times \left(2 - \left| I _ {i} ^ {t g t} - I _ {j} ^ {s r c} \right|\right), \tag {10}
+$$
+
+$$
+\gamma_ {i j} = \widetilde {\gamma} _ {i j} / \sum_ {j} \widetilde {\gamma} _ {i j}, \tag {11}
+$$
+
+where $\left(2 - \left|I_i^{tgt} - I_j^{src}\right|\right)\in (0,2]$ measures the info consistent between $y_{i}$ and $x_{j}$
+
+Overall, we apply the proposed info-aware self-attention $\beta_{ij}$ and info-consistent cross-attention $\gamma_{ij}$ to replace the original attention for the learning of the quantified info.
+
+# 4.3 Wait-info Policy
+
+Owing to the quantification and learning of info, we get $\mathbf{I}^{src}$ and $\mathbf{I}^{tgt}$ to reflect how much information that source and target tokens contain. Then, we develop wait-info policy for SiMT to balance source and target at the information level.
+
+Borrowing the idea from the wait-k policy that requires the target outputs to lag behind the source inputs by $k$ tokens (Ma et al., 2019), wait-info policy keeps that the target information is always less than the received source information $\mathcal{K}$ info, where $\mathcal{K}$ is the lagging info, a hyperparameter to control the latency. Formally, we denote the number of
+
+Algorithm 1: Wait-info Policy
+Input: source inputs x (incremental), lagging info $\kappa$ $\hat{y}_0 =$ BeginOfSequence
+Output: target outputs $\hat{\mathbf{y}}$
+Init: target idx $i = 1$ , source idx $j = 1$
+while $\hat{y}_{i - 1}\neq$ EndOfSequence do
+Calculate info $I_{j}^{src}$ and $I_{i}^{tgt}$ /\*1) Source info is more; or 2) Inputs is complete. \*/
+if $\sum_{l = 1}^{j}I_{l}^{src}\geq \sum_{l = 1}^{i}I_{l}^{tgt} + \kappa$ or
+ $x_{j} = =$ EndOfSequence
+then //WRITE
+Translate $\hat{y}_i$ with $(x_{1},\dots ,x_{j})$ .
+ $i\gets i + 1$
+else //READ
+Wait for next source input $x_{j + 1}$ .
+ $j\gets j + 1$
+return $\hat{\mathbf{y}}$
+
+source tokens that the SiMT model waits for before translating $y_{i}$ as $g_{\mathcal{K}}(i)$ , calculated as:
+
+$$
+g _ {\mathcal {K}} (i) = \underset {j} {\operatorname {a r g m i n}} \left(\sum_ {l = 1} ^ {j} I _ {l} ^ {s r c} \geq \sum_ {l = 1} ^ {i} I _ {l} ^ {t g t} + \mathcal {K}\right). \tag {12}
+$$
+
+The specific decoding process of wait-info policy is shown in Algorithm 1.
+
+During training, we mask out the source token $x_{j}$ that $j > g_{\mathcal{K}}(i)$ to simulate the incomplete source sentence. Besides, we apply multi-path training (Elbayad et al., 2020) to randomly sample different $\mathcal{K}$ in each batch to enhance the training efficiency.
+
+# 5 Experiment
+
+# 5.1 Datasets
+
+IWSLT15² English → Vietnamese (En→Vi) (133K pairs) We use TED tst2012 (1553 pairs) as the dev set and TED tst2013 (1268 pairs) as the test set. Following the previous setting (Ma et al., 2020), we replace tokens that frequency less than 5 by ⟨unk⟩, and the vocabulary sizes of English and Vietnamese are 17K and 7.7K respectively.
+
+WMT153 German → English (De→En) (4.5M pairs) We use newstest2013 (3000 pairs) as the dev set and newstest2015 (2169 pairs) as the test set.
+
+BPE (Sennrich et al., 2016) is applied with 32K merge operations and the vocabulary is shared.
+
+# 5.2 System Settings
+
+We conduct experiments on following systems.
+
+Full-sentence MT Standard Transformer model (Vaswani et al., 2017), which waits for the complete source sentence and then starts translating.
+
+Wait-k Wait-k policy (Ma et al., 2019), which first READ $k$ source tokens, and then alternately READ one token and WRITE one token.
+
+Efficient Wait-k An efficient multi-path training for wait-k (Elbayad et al., 2020), which randomly samples $k$ between batches during training.
+
+Adaptive Wait-k An adaptive policy via a heuristic composition of a set of wait-k models (e.g., $k$ from 1 to 13) (Zheng et al., 2020). Adaptive Wait-k uses the tokens number of target and source to select a wait-k model to generate a target token, and then decides whether to output or not according to the generating probability.
+
+MoE Wait- $\mathbf{k}^4$ Mixture-of-experts wait-k policy (Zhang and Feng, 2021c), which applies multiple experts to perform wait-k policy with various $k$ to consider the translation under multiple latency.
+
+$\mathbf{MMA}^{5}$ Monotonic multi-head attention (MMA) (Ma et al., 2020), which uses a Bernoulli variable $0 / 1$ to decide READ/WRITE and Bernoulli variable is jointly learning with multi-head attention.
+
+GSiMT Generative SiMT (Miao et al., 2021), which applies a generative framework to predict a Bernoulli variable to decide READ/WRITE, and uses the dynamic programming to train the policy.
+
+$\mathbf{G M A}^{6}$ Gaussian multi-head attention (GMA) (Zhang and Feng, 2022a), which uses a Gaussian prior to learn the alignments in attention, and then performs READ/WRITE based on the alignments.
+
+Wait-info The proposed method in Sec.4.
+
+The implementation of all systems are based on Transformer (Vaswani et al., 2017) and adapted from Fairseq Library (Ott et al., 2019). Following Ma et al. (2020), we apply Transformer-Small (4 heads) for $\mathrm{En} \rightarrow \mathrm{Vi}$ , Transformer-Base (8 heads) and Transformer-Big (16 heads) for $\mathrm{De} \rightarrow \mathrm{En}$ . Since GSiMT involves dynamic programming with expensive training costs, we only report GSiMT on $\mathrm{De} \rightarrow \mathrm{En}$ with Transformer-Base, the same as its original setting (Miao et al., 2021). For evaluation,
+
+4github.com/ictnlp/MoE-Waitk
+5github.com/pytorch/fairseq/tree/master/examples/simultaneous Translation
+6github.com/ictnlp/GMA
+
+
+(a) $\mathrm{En}\rightarrow \mathrm{Vi}$ ,Transformer-Small
+
+
+(b) $\mathrm{De}\rightarrow \mathrm{En}$ , Transformer-Base
+
+
+(c) $\mathrm{De}\rightarrow \mathrm{En}$ ,Transformer-Big
+
+we report BLEU (Papineni et al., 2002) for translation quality and Average Lagging (AL) (Ma et al., 2019) for latency. Average lagging evaluates the number of tokens lagging behind the ideal policy, calculated as:
+
+$$
+\mathrm {A L} = \frac {1}{\tau} \sum_ {i = 1} ^ {\tau} g (i) - \frac {i - 1}{m / n}, \tag {13}
+$$
+
+where $\tau = \operatorname{argmax}_i(g(i) = n)$ , and $g(i)$ is number of waited source tokens before translating $y_i$ .
+
+# 5.3 Main Results
+
+We compare the proposed wait-info policy with previous policies in Figure 3, where Wait-info outperforms the previous methods under all latency. Compared with Wait-k and Efficient Wait-k which directly wait for a fixed number of source tokens, Wait-info balances target outputs and source inputs at the information level, which provides a more flexibly SiMT trade-off and thereby brings significant improvements. MoE Wait-k uses multiple experts to fuse the translation under multiple latency to cope with complex inputs, while Wait-info dynamically adjusts READ/WRITE based on the info and thereby deals with the complex inputs in a more straightforward manner. Both Adaptive Wait-k and Wait-info are adaptive policies, but Adaptive Wait-k still decides which $k$ to use based on the token number of target outputs and received source inputs (Zheng et al., 2020), while Wait-info decides READ/WRITE based on more refined info and thus performs better. Besides, Adaptive Wait-k trains multiple wait-k models, which is computationally expensive, while Wait-info only trains one model to perform SiMT under different latency.
+
+Compared with the adaptive policies, Wait-info also achieves better performance. Previous adap
+
+
+Figure 3: Translation quality (BLEU) v.s. latency (Average Lagging, AL) of Wait-info and previous methods.
+(b) Effects of src and tgt info.
+Figure 4: Ablation Studies on wait-info policy.
+
+
+(a) Effects of two attention.
+
+tive policies often decide READ/WRITE based on the current source and target token (Ma et al., 2020; Zhang and Feng, 2022a), while Wait-info is based on the accumulated source and target info, which is more reasonable for the SiMT policy. More importantly, most adaptive policies rely on complicated and time-consuming training (Zheng et al., 2020) since involving dynamic programming (Ma et al., 2020; Miao et al., 2021). The training of Wait-info is simple as fixed policy, meanwhile the performance is better than adaptive policies.
+
+# 6 Analysis
+
+We conduct extensive analyses on wait-info policy. Unless otherwise specified, all results are reported on De $\rightarrow$ En with Transformer-Base.
+
+# 6.1 Ablation Study
+
+Info-aware Self-attention v.s. Info-consistent Cross-attention We propose two novel attention to learn the quantified info, so we analyze their roles in Figure 4(a). Without info-aware self-attention, the SiMT performance drops 0.7 BLEU on average, showing that info-aware self
+
+
+(a) Distribution of source info on different POS.
+
+
+(b) Distribution of target info on different POS.
+Figure 5: Info distribution on different parts of speech (POS), where POS marked in red is often the content word, POS marked in blue is often the function word.
+
+attention is beneficial to the learning of quantified info. When removing the info-consistent cross-attention, the latency becomes much higher, which is because some target info exceptionally becomes much larger than the source info. Info-consistent cross-attention ensures the info consistency between similar tokens and thus controls the latency in a suitable range. When removing both of them, the source or target info is unconstrained and becomes the same value. While the target info will be slightly larger than source info (due to $\mathcal{L}_{sum}$ ), which is beneficial for SiMT under low latency, we will analyze it in Sec.6.5.
+
+Source Info v.s. Target Info Wait-info policy quantifies the info of both source and target tokens, and we respectively fix the source info $\mathbf{I}^{src} = \mathbf{1}$ or the target info $\mathbf{I}^{tgt} = \mathbf{1}$ (i.e., degenerate into wait-k policy that treats each source or target token equally) to compare the effect of only quantifying the source or target info. As shown in Figure 4(b), quantifying the source or target info can both bring significant improvements, where the improvements brought by target info are even more significant.
+
+# 6.2 Improvements on Full-sentence MT
+
+Besides focusing on SiMT, the proposed info-aware Transformer can also improve full-sentence MT. As the full-sentence MT results shown in Table 1, info-aware Transformer improves 0.08 BLEU on $\mathrm{En}\rightarrow \mathrm{Vi}(\mathrm{Small})$ , 0.59 BLEU on $\mathrm{De}\rightarrow \mathrm{En}(\mathrm{Base})$ and 0.39 BLEU on $\mathrm{De}\rightarrow \mathrm{En}(\mathrm{Big})$ , showing that explicitly modeling token info is also beneficial for NMT.
+
+# 6.3 Comparison on Information Modeling
+
+To model the information amount contained in each token, we propose an unsupervised method to adaptively learn the info from the attention mechanism. Some previous methods apply heuristic methods to model the information, such as using the token
+
+
En→Vi (Small)
De→En (Base)
De→En (Big)
Transformer
28.90
31.60
32.84
Info-aware Transformer
28.98
32.19
33.23
+
+Table 1: Improvements on full-sentence MT.
+
+
+Figure 6: Comparison of different methods of information modeling in wait-info policy, including via attention, token frequency and embedding norm.
+
+frequency to indicate the amount of information (Moradi et al., 2019; Chen et al., 2020) or associating the norm of embedding with the token information (Liu et al., 2020; Kobayashi et al., 2020). We apply different methods of information modeling (i.e., via attention, via token frequency and via norm of token embedding) in the proposed waitinfo policy, and show the results in Figure 6.
+
+Using embedding norm to indicate token info is not suitable for the proposed wait-info policy, we argue that this is because the embedding norm is better at identifying specific tokens such as and punctuation (Kobayashi et al., 2020), but has limited ability to distinguish token information in more detail. Modeling the info via attention and
+
+
Length Train.
Ratio (src/tgt) Dev.
Test.
Info Ratio (tgt/src)
En→Vi
0.84
0.84
0.81
0.85
De→En
1.09
1.08
1.06
1.10
+
+frequency can both achieve improvements, where our proposed method of learning info from attention performs much better, since jointly learning the info with translation is more flexible than the fixed frequency (Zhang et al., 2022).
+
+# 6.4 Quality of Quantified Info
+
+We expect that the proposed info can reflect the amount of information contained in the token, thus providing reasonable evidence for the SiMT policy. To verify the quality of quantified info, we further explore whether the quantified info can distinguish different types of tokens, especially content words and function words as mentioned above. In response to this question, we categorize different tokens using the Universal Part-of-Speech (POS) Tagging tool7, and draw the info distribution of tokens with different POS8 via violin plot in Figure 5. Tokens with different parts of speech have obvious differences in info distribution, where content words (e.g., VERB, NOUN, AUX, ADJ, PROPN) generally get larger info, while function words (e.g., CCONJ, SCONJ, ADP, PART, DET) have smaller info, which is in line with our expectations (Xu et al., 2019). Therefore, info can successfully learn the amount of information contained in different tokens, so as to develop a reasonable SiMT policy.
+
+# 6.5 Flexibility on Length Difference
+
+Early-stop Caused by Length Difference The length difference between the two languages is a major challenge for SiMT, especially for wait-k policy. Wait-k policy is sensitive to the length ratio between source and target and sometimes may force the model to finish the target translation before
+
+Table 2: Length ratio (source/target) on $\mathrm{En} \rightarrow \mathrm{Vi}$ and $\mathrm{De} \rightarrow \mathrm{En}$ and the info ratio (target/source) in our wait-info policy. During training, the ratio between source and target info is successfully adjusted according to the length ratio, thereby ensuring that the total source info and total target info are equal.
+
+
Wait-k
Wait-info
k
De→En
En→Vi
De→En
En→Vi
1
29.88%
0.39%
0.00%
0.00%
3
22.68%
0.16%
0.00%
0.00%
5
13.09%
0.00%
0.00%
0.00%
7
6.78%
0.00%
0.00%
0.00%
9
3.23%
0.00%
0.00%
0.00%
+
+Table 3: Proportion of early-stop. Under low latency, Wait-k emerges much early-stop on De→En, while Wait-info completely avoids this situation (0.00%). Note that for Wait-info, we select the results under the similar latency with the Wait-k for comparison.
+
+
+Figure 7: Comparison of Wait-info and Catch-up.
+
+reading the complete source sentence (Ma et al., 2019; Zhang and Feng, 2022d), named early-stop, especially when the source sentence is longer than the target sentence. Formally, wait-k policy will early-stop translating when $g_{k}(m) < n$ , where $g_{k}(m) = k + m - 1$ defined in Eq.(3), $n$ and $m$ are source and target lengths.
+
+More importantly, the length difference is always language-specific (Ma et al., 2019), and Table 2 reports the length ratio between source and target on $\mathrm{En} \rightarrow \mathrm{Vi}$ and $\mathrm{De} \rightarrow \mathrm{En}$ datasets. As seen, the target sentence in $\mathrm{En} \rightarrow \mathrm{Vi}$ is generally longer than the source sentence, on the contrary, the source sentence in $\mathrm{De} \rightarrow \mathrm{En}$ is longer (i.e., $n > m$ ), which is more prone to the early-stop. To study the severity of early-stop, we calculate the proportion of early-stop in wait-k policy in Table 2, where over $20\%$ of $\mathrm{De} \rightarrow \mathrm{En}$ cases will early stop translating before receiving the complete source sentence under low latency. The essential reason for early-stop is that wait-k policy balances source and target at the token level, where the token-level balance is not the best choice because the number of tokens (i.e., length) is often language-specific.
+
+
Source:
Gra@@ ham Ab@@ bot@@ t unter@@ zog sich im März 2012 der operation .(Graham) (Abbott) (underwent) (himself) (in) (March) (2012) (the) (surgery) (.
Reference:
Gra@@ ham Ab@@ bot@@ t went in for surgery in March 2012 .
Wait-k
Inputs:
Gra@@ ham Ab@@ bot@@ t unter@@ zog sich im März 2012 der operation .<eos>
Outputs:
Gra@@ ham Ab@@ bot@@ t was educated in March 2012 .
Wait-info
Inputs:
Gra@@ ham Ab@@ bot@@ t unter@@ zog sich im März 2012 der operation .<eos>
Source Info:
0.98
0.96
0.94
0.90
0.95
0.80
0.99
1.00
0.88
1.04
1.03
0.64
1.18
1.00
0.91
Target Info:
1.00
1.24
1.09
1.13
1.16
0.9
1.28
1.12
1.44
1.00
1.27
1.14
1.00
0.92
Outputs:
Gra@@ ham Ab@@ bot@@ t under@@ went a surgery in March 2012 .<eos>
+
+Figure 8: Case study of No.1219 in De→En test set, showing Wait-k ( $k = 5$ ) and Wait-info ( $\mathcal{K} = 1$ ) under the similar latency ( $\mathrm{AL} \approx 3$ ). To show the process of SiMT more clearly, we correspond the outputs and inputs in the horizontal direction, indicating which source tokens are received when translating the target token. For source and target info, values that are larger than the average info (i.e., containing more information) are marked in red, values that are smaller than the average info (i.e., containing less information) are marked in blue.
+
+Wait-info Avoids Early-stop Owing to $\mathcal{L}_{sum}$ in Eq.(6) that constrains the total source info to be equal to total target info, the proposed wait-info policy can learn to adjust the ratio between source and target info according to the length ratio, thereby avoiding early-stop. As shown in Table 2, the average quantified info ratio (target info/source info) is basically the same as the length ratio (source length/target length), which shows that $\mathcal{L}_{sum}$ successfully constrains the equality between total source info and total target info. Therefore, as shown in Table 3, wait-info policy completely avoids the early-stop caused by length difference. Different from the wait-k policy, wait-info policy balances source and target at the info level, where the total info of target and source is the same and language-independent, thereby overcoming the length difference between two languages.
+
+Wait-info v.s. Catch-up To avoid early-stop, Ma et al. (2019) proposed a heuristic approach Catch-up for wait-k policy to compensate for the length difference between target and source. Catch-up requires the model to read one additional source token after every generating $c$ target tokens (i.e., try to read more source tokens to avoid early-stop), where $c$ is a hyperparameter. We compare the performance of 'Wait-k+Catch-up' and Wait-info in Figure 7, where Wait-info performs better since it balances the source and target more flexibly from the info level rather than reading more source tokens according to heuristic rules.
+
+# 7 Case Study
+
+To study the specific improvement of the proposed wait-info policy compared to the wait-k policy, we conduct a case study in Figure 8. In Wait-k, the model is forced to wait for a fixed 5 tokens be
+
+fore translating, which makes the model either too aggressive or too conservative in different cases (Zheng et al., 2020). As shown in this case, at the beginning of translation, when translating 'Grahams', 2 source tokens are enough to translate, but wait-k policy forces the model to wait for 5 tokens, resulting in unnecessary waiting. When translating the noun 'surgery', the model should have waited until receiving 'operation', but the model was forced to output in advance, resulting in the wrong translation 'educated' (marked in green).
+
+In Wait-info, this weakness is ameliorated by quantifying the information in each token rather than considering each token equally. First of all, we find the proposed info can effectively distinguish different tokens, where the content words often get larger info, such as 'sich', 'März' and 'operation' in German, and 'went', 'surgery' and 'March' in English, thereby being more important to the SiMT policy. Owing to the quantified info, when translating the 'surgery', the model recognized that the previous 'der' (i.e., determiner in German) does not contain enough info, so the model continues to wait for the 'operation' and thereby generates the correct translation 'surgery' (marked in red). Overall, in wait-info policy, tokens with larger info, such as verbs and nouns, play a more important role in the model's decision of READ/WRITE, making it easier to ensure that those content words are read before translating.
+
+# 8 Conclusion
+
+In this paper, we quantify the information in tokens and propose a wait-info policy accordingly. Experiments show the superiority of our method on SiMT tasks and good explainability of the quantified info.
+
+# Limitations
+
+In this work, we quantify the amount of information contained in each token via a scalar. Although quantifying information as a scalar is intuitive and friendly to SiMT policy, the expression space of a scalar may be limited for some particularly complex situations. Quantifying the information contained in each token through a low-dimensional vector may be able to further improve the performance of wait-info policy. However, how to balance the info in vector form between source and target is also a new challenge, and we will put it into our future work.
+
+# Acknowledgements
+
+We thank all the anonymous reviewers for their insightful and valuable comments.
+
+# References
+
+Ashkan Alinejad, Hassan S. Shavarani, and Anoop Sarkar. 2021. Translation-based supervision for policy generation in simultaneous neural machine translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1734-1744, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, and Colin Raffel. 2019. Monotonic Infinite Lookback Attention for Simultaneous Machine Translation. pages 1313-1323.
+Kehai Chen, Rui Wang, Masao Utiyama, and Eiichiro Sumita. 2020. Content word aware neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 358-364, Online. Association for Computational Linguistics.
+Kyunghyun Cho and Masha Esipova. 2016. Can neural machine translation do simultaneous translation?
+Maha Elbayad, Laurent Besacier, and Jakob Verbeek. 2020. Efficient Wait-k Models for Simultaneous Machine Translation.
+Andrew Finch, Young-Sook Hwang, and Eiichiro Sumita. 2005. Using machine translation evaluation techniques to determine sentence-level semantic equivalence. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).
+Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor O.K. Li. 2017. Learning to translate in real-time with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the
+
+Association for Computational Linguistics: Volume 1, Long Papers, pages 1053-1062, Valencia, Spain. Association for Computational Linguistics.
+Shoutao Guo, Shaolei Zhang, and Yang Feng. 2022. Turning fixed to adaptive: Integrating post-evaluation into simultaneous machine translation. In *Findings of the Association for Computational Linguistics: EMNLP* 2022, Online and Abu Dhabi. Association for Computational Linguistics.
+Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, and Kentaro Inui. 2020. Attention is not only a weight: Analyzing transformers with vector norms. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7057-7075, Online. Association for Computational Linguistics.
+Junyang Lin, Xu Sun, Xuancheng Ren, Muyu Li, and Qi Su. 2018. Learning when to concentrate or divert attention: Self-adaptive attention temperature for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2985-2990, Brussels, Belgium. Association for Computational Linguistics.
+Xuebo Liu, Houtim Lai, Derek F. Wong, and Lidia S. Chao. 2020. Norm-based curriculum learning for neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 427-436, Online. Association for Computational Linguistics.
+Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, and Haifeng Wang. 2019. STACL: Simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3025-3036, Florence, Italy. Association for Computational Linguistics.
+Xutai Ma, Juan Miguel Pino, James Cross, Liezl Puzon, and Jiatao Gu. 2020. Monotonic multihead attention. In International Conference on Learning Representations.
+Yishu Miao, Phil Blunsom, and Lucia Specia. 2021. A generative framework for simultaneous machine translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6697-6706, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Pooya Moradi, Nishant Kambhatla, and Anoop Sarkar. 2019. Interrogating the explanatory power of attention in neural machine translation. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 221-230, Hong Kong. Association for Computational Linguistics.
+
+Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. *fairoseq: A fast, extensible toolkit for sequence modeling*. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics* (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.
+Mingzhou Xu, Derek F. Wong, Baosong Yang, Yue Zhang, and Lidia S. Chao. 2019. Leveraging local and global patterns for self-attention networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3069–3075, Florence, Italy. Association for Computational Linguistics.
+Ruiqing Zhang, Chuanqiang Zhang, Zhongjun He, Hua Wu, and Haifeng Wang. 2020. Learning adaptive segmentation policy for simultaneous translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2280-2289, Online. Association for Computational Linguistics.
+Shaolei Zhang and Yang Feng. 2021a. ICT's system for AutoSimTrans 2021: Robust char-level simultaneous translation. In Proceedings of the Second Workshop on Automatic Simultaneous Translation, pages 1-11, Online. Association for Computational Linguistics.
+Shaolei Zhang and Yang Feng. 2021b. Modeling concentrated cross-attention for neural machine translation with Gaussian mixture model. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1401–1411, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Shaolei Zhang and Yang Feng. 2021c. Universal simultaneous machine translation with mixture-of-experts
+
+wait-k policy. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7306-7317, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Shaolei Zhang and Yang Feng. 2022a. Gaussian multihead attention for simultaneous machine translation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3019-3030, Dublin, Ireland. Association for Computational Linguistics.
+Shaolei Zhang and Yang Feng. 2022b. Information-transport-based policy for simultaneous translation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Online and Abu Dhabi. Association for Computational Linguistics.
+Shaolei Zhang and Yang Feng. 2022c. Modeling dual read/write paths for simultaneous machine translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2461-2477, Dublin, Ireland. Association for Computational Linguistics.
+Shaolei Zhang and Yang Feng. 2022d. Reducing position bias in simultaneous machine translation with length-aware framework. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6775-6788, Dublin, Ireland. Association for Computational Linguistics.
+Shaolei Zhang, Yang Feng, and Liangyou Li. 2021. Future-guided incremental transformer for simultaneous translation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16):14428-14436.
+Songming Zhang, Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jian Liu, and Jie Zhou. 2022. Conditional bilingual mutual information based adaptive training for neural machine translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2377-2389, Dublin, Ireland. Association for Computational Linguistics.
+Baigong Zheng, Kaibo Liu, Renjie Zheng, Mingbo Ma, Hairong Liu, and Liang Huang. 2020. Simultaneous translation policies: From fixed to adaptive. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2847-2853, Online. Association for Computational Linguistics.
+Baigong Zheng, Renjie Zheng, Mingbo Ma, and Liang Huang. 2019. Simpler and faster learning of adaptive policies for simultaneous translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1349-1354, Hong Kong, China. Association for Computational Linguistics.
+
+
+Figure 9: Comparison on different settings of total info $\zeta$ in Eq.(6), where $n$ is the length of source sentence and $m$ is the length of source sentence.
+
+# A Comparison on Settings of Total Info
+
+Based on the semantic equivalence between the source sentence and the target sentence, we introduce $\mathcal{L}_{sum}$ to constrain the total info of the source tokens and target tokens in Eq.(6). $\mathcal{L}_{sum}$ can not only ensure that the total info of the source and target is equal, but also constrain the average info to be around 1, which is friendly to wait-info policy. In our experiments, we set the total info $\zeta = \frac{m + n}{2}$ , where $n$ is the length of source sentence and $m$ is the length of source sentence. We compare the performance under different $\zeta$ settings in Figure 9, including $\zeta = \frac{m + n}{2}$ , $\zeta = m$ and $\zeta = n$ . Our method is not sensitive to the setting of $\zeta$ and achieves almost similar performance under different settings.
+
+# B Extended Analyses on Early-stop
+
+Severity of Early-stop As mentioned in Sec.6.5, wait-k policy may early-stop translating before receiving complete source inputs, especially under low latency. The reason for early-stop is $g_{k}(m) < n$ caused by the length difference between the source and target. To investigate how seriously early-stop affects translation quality, we calculate the BLEU scores of wait-k policy for early-stop or not-early-stop cases respectively in Figure 10. When the wait-k policy appears early-stop, the translation quality is 11 BLEU lower than those cases not-early-stop on average, indicating that early-stop seriously affects SiMT performance.
+
+Why Does Wait-info Avoid Early-stop? The wait-k policy will early-stop translating when
+
+
+Figure 10: We divide the $\mathrm{De} \rightarrow \mathrm{En}$ test set into two sets, early-stop and not-early-stop, based on whether the wait-k early-stop translating before receiving the complete source inputs. Then we calculate the BLEU scores of wait-k policy on each set.
+
+$g_{k}(m) < n$ . While for wait-info policy, $g_{\mathcal{K}}(m) =$ $\operatorname *{argmin}_{j}\left(\sum_{l = 1}^{j}I_{l}^{src}\geq \sum_{l = 1}^{m}I_{l}^{tgt} + \mathcal{K}\right)$ (defined in Eq.(12)) will almost always greater than $n$ , since we introduce an info-sum loss $\mathcal{L}_{sum}$ (defined in Eq.(6)) to constrain the $\begin{array}{r}\sum_{j = 1}^{n}I_{j}^{src} = \sum_{i = 1}^{m}I_{i}^{tgt} \end{array}$
+
+# C Numerical Results
+
+Besides Average Lagging (AL) (Ma et al., 2019), we also use Consecutive Wait (CW) (Gu et al., 2017), Average Proportion (AP) (Cho and Esipova, 2016) and Differentiable Average Lagging (DAL) (Arivazhagan et al., 2019) to evaluate the latency of the SiMT model. We use $g(i)$ to record the number of source tokens received when translating $y_{i}$ . The calculation of latency metrics are as follows.
+
+Consecutive Wait (CW) (Gu et al., 2017) evaluates the average number of source tokens waited between two target tokens, calculated as:
+
+$$
+\mathrm {C W} = \frac {\sum_ {i = 1} ^ {| \mathbf {y} |} (g (i) - g (i - 1))}{\sum_ {i = 1} ^ {| \mathbf {y} |} \mathbb {1} _ {g (i) - g (i - 1) > 0}}, \tag {14}
+$$
+
+where $\mathbb{1}_{g(i) - g(i - 1)} = 1$ counts the number of $g(i) - g(i - 1) > 0$ .
+
+Average Proportion (AP) (Cho and Esipova, 2016) measures the proportion of the received source tokens, calculated as:
+
+$$
+\mathrm {A P} = \frac {1}{| \mathbf {x} | | \mathbf {y} |} \sum_ {i = 1} ^ {| \mathbf {y} |} g (i). \tag {15}
+$$
+
+Differentiable Average Lagging (DAL) (Ari-vazhagan et al., 2019) is a differentiable version of
+
+average lagging, calculated as:
+
+$$
+g ^ {\prime} (i) = \left\{ \begin{array}{c c} g (i) & i = 1 \\ \max \left(g (i), g ^ {\prime} (i - 1) + \frac {| \mathbf {x} |}{| \mathbf {y} |}\right) & i > 1 \end{array} , \right. \tag {16}
+$$
+
+$$
+\mathrm {D A L} = \frac {1}{| \mathbf {y} |} \sum_ {i = 1} ^ {| \mathbf {y} |} g ^ {\prime} (i) - \frac {i - 1}{| \mathbf {x} | / | \mathbf {y} |}. \tag {17}
+$$
+
+Numerical Results Table 4, 5 and 6 report the numerical results of all systems in our experiments, evaluated with BLEU for translation quality and CW, AP, AL and DAL for latency.
+
+
IWSLT15 English→Vietnamese
Transformer-Small
Full-sentence MT(Vaswani et al., 2017)
CW
AP
AL
DAL
BLEU
22.08
1.00
22.08
22.08
28.91
Wait-k(Ma et al., 2019)
k
CW
AP
AL
DAL
BLEU
1
1.00
0.63
3.03
3.54
25.21
3
1.17
0.71
4.80
5.42
27.65
5
1.46
0.78
6.46
7.06
28.34
7
1.96
0.83
8.21
8.79
28.60
9
2.73
0.88
9.92
10.51
28.69
Efficient Wait-k(Elbayad et al., 2020)
k
CW
AP
AL
DAL
BLEU
1
1.01
0.63
3.06
3.61
26.23
3
1.17
0.71
4.66
5.20
28.21
5
1.46
0.78
6.38
6.94
28.56
7
1.96
1.96
8.13
8.69
28.62
9
2.73
0.87
9.80
10.34
28.52
Adaptive Wait-k(Zhang et al., 2020)
(ρ1, ρ9)(0.02, 0.00)
CW
AP
AL
DAL
BLEU
(0.04, 0.00)
1.05
0.63
2.98
3.64
25.69
(0.05, 0.00)
1.19
0.63
3.07
4.06
26.05
(0.10, 0.00)
1.27
1.27
3.14
4.30
26.33
(0.10, 0.05)
1.97
0.68
4.08
6.05
27.80
(0.10, 0.05)
2.36
0.71
4.77
7.11
28.46
(0.20, 0.00)
2.73
0.78
6.56
8.34
28.73
(0.30, 0.20)
3.39
0.86
9.42
10.42
28.80
MoE Wait-k(Zhang and Feng, 2021c)
k
CW
AP
AL
DAL
BLEU
1
1.00
0.63
3.19
3.76
26.56
3
1.17
0.71
4.70
5.42
28.43
5
1.46
0.78
6.43
7.14
28.73
7
1.97
0.83
8.19
8.88
28.81
9
2.73
0.87
9.86
10.39
28.88
MMA(Ma et al., 2020)
λ
CW
AP
AL
DAL
BLEU
0.4
1.03
0.58
2.68
3.46
27.73
0.3
1.09
0.59
2.98
3.81
27.90
0.2
1.15
0.63
3.57
4.44
28.47
0.1
1.31
0.67
4.63
5.65
28.42
0.04
1.64
0.70
5.44
6.57
28.33
0.02
2.01
0.76
7.09
8.29
28.28
GMA(Zhang and Feng, 2022a)
δ
CW
AP
AL
DAL
BLEU
0.9
1.20
0.65
3.05
4.08
27.95
1.0
1.27
0.68
4.01
4.77
28.20
2.0
1.49
0.74
5.47
6.37
28.44
2.2
1.60
0.77
6.04
6.96
28.56
2.5
1.74
0.78
6.55
7.55
28.72
Wait-info
K
CW
AP
AL
DAL
BLEU
1
1.10
0.67
3.76
4.33
28.37
2
1.19
0.69
4.10
4.71
28.45
3
1.34
0.71
4.60
5.28
28.54
4
1.46
0.74
5.28
5.97
28.59
5
1.63
0.77
6.01
6.71
28.70
6
1.86
0.80
6.80
7.51
28.78
7
2.16
0.82
7.61
8.33
28.80
8
2.51
0.84
8.39
9.11
28.82
+
+Table 4: Numerical results on $\mathrm{{En}} \rightarrow \mathrm{{Vi}}$ with Transformer-Small.
+
+
WMT15 German→English
Transformer-Base
Full-sentence MT(Vaswani et al., 2017)
CW
AP
AL
DAL
BLEU
27.77
1.00
27.77
27.77
31.60
Wait-k(Ma et al., 2019)
k
CW
AP
AL
DAL
BLEU
1
1.17
0.52
0.02
1.84
17.61
3
1.23
0.59
1.71
3.33
23.75
5
1.37
0.66
3.85
5.20
26.86
7
1.70
0.73
5.86
7.12
28.20
9
2.17
0.78
7.85
9.01
29.42
11
2.78
0.82
9.71
10.79
30.36
13
3.56
0.86
11.55
12.49
30.75
Efficient Wait-k(Elbayad et al., 2020)
k
CW
AP
AL
DAL
BLEU
1
1.27
0.50
-0.49
1.60
19.51
3
1.27
0.58
1.56
3.29
24.11
5
1.39
0.66
3.71
5.18
26.85
7
1.71
0.73
5.78
7.12
28.34
9
2.17
0.78
7.84
8.98
29.39
11
2.78
0.82
9.73
10.79
30.02
13
3.56
0.86
11.50
12.49
30.25
Adaptive Wait-k(Zhang et al., 2020)
(ρ1, ρ13)(0.02, 0.00)
CW
AP
AL
DAL
BLEU
(0.04, 0.00)
1.54
0.54
0.83
3.27
20.29
(0.05, 0.00)
2.07
0.56
1.40
4.59
22.34
(0.06, 0.00)
2.28
0.58
1.90
5.25
23.56
(0.07, 0.00)
2.58
0.60
2.43
5.99
24.59
(0.09, 0.00)
2.79
0.62
2.94
6.57
25.96
(0.10, 0.00)
3.25
0.66
4.10
7.78
27.44
(0.10, 0.01)
3.45
0.68
4.66
8.31
27.88
(0.10, 0.03)
3.68
0.70
5.11
8.84
28.29
(0.10, 0.05)
4.13
0.72
6.09
9.87
28.91
(0.20, 0.00)
4.48
0.75
7.21
10.72
29.73
(0.20, 0.05)
4.02
0.78
8.23
10.92
30.10
(0.20, 0.10)
4.75
0.82
10.12
12.35
30.76
(0.30, 0.20)
4.68
0.85
11.55
12.98
30.78
k
CW
AP
AL
DAL
BLEU
1
1.49
0.49
-0.32
1.69
21.43
3
1.26
0.59
1.79
3.30
25.81
5
1.37
0.66
3.88
5.18
28.34
7
1.69
0.73
5.94
7.12
29.71
9
2.17
0.78
7.86
8.99
30.61
11
2.78
0.82
9.73
10.78
30.89
13
3.56
0.86
11.53
12.48
31.08
MMA(Ma et al., 2020)
λ
CW
AP
AL
DAL
BLEU
0.4
2.35
0.68
4.97
7.51
28.66
0.3
2.64
0.72
6.00
9.30
29.11
0.25
3.35
0.78
8.03
12.28
28.92
0.2
4.03
0.83
9.98
14.86
28.18
0.1
14.88
0.97
13.25
19.48
27.47
GMA(Zhang and Feng, 2022a)
δ
CW
AP
AL
DAL
BLEU
0.9
1.33
0.64
3.87
4.61
28.12
1.0
1.49
0.67
4.66
5.56
28.50
2.0
1.85
0.72
5.79
7.75
28.71
2.2
2.01
0.73
6.13
8.43
29.23
2.4
5.89
0.96
14.05
25.76
31.31
GSiMT(Miao et al., 2021)
ζ
CW
AP
AL
DAL
BLEU
4
-
-
3.64
-
28.82
5
-
-
4.45
-
29.50
6
-
-
5.13
-
29.78
7
-
-
6.24
-
29.63
Wait-info
κ
CW
AP
AL
DAL
BLEU
1
1.29
0.61
3.00
3.77
27.55
2
1.36
0.64
3.78
4.56
28.89
3
1.44
0.67
4.68
5.46
29.66
4
1.53
0.71
5.71
6.43
30.12
5
1.68
0.74
6.66
7.37
30.59
6
1.86
0.77
7.62
8.33
31.13
7
2.10
0.79
8.57
9.26
31.28
8
2.38
0.81
9.48
10.18
31.39
9
2.66
0.83
10.41
11.11
31.55
10
3.01
0.85
11.31
11.97
31.68
11
3.38
0.87
12.16
12.82
31.66
12
3.81
0.88
12.99
13.64
31.69
13
4.25
0.89
13.79
14.43
31.88
14
4.73
0.90
14.56
15.19
31.94
15
5.20
0.91
15.32
15.92
32.05
+
+Table 5: Numerical results on De→En with Transformer-Base.
+
+
WMT15 German→English
Transformer-Big
Full-sentence MT(Vaswani et al., 2017)
CW27.77
AP1.00
AL27.77
DAL27.77
BLEU32.94
Wait-k(Ma et al., 2019)
k
CW
AP
AL
DAL
BLEU
1
1.16
0.52
0.25
1.82
19.13
3
1.20
0.60
2.23
3.41
25.45
5
1.36
0.67
4.00
5.23
28.67
7
1.70
0.73
5.97
7.17
30.12
9
2.17
0.78
7.95
9.03
31.46
11
2.79
0.82
9.75
10.82
31.83
13
3.56
0.86
11.59
12.51
32.08
Efficient Wait-k(Elbayad et al., 2020)
k
CW
AP
AL
DAL
BLEU
1
1.23
0.51
-0.19
1.79
20.56
3
1.26
0.59
1.73
3.36
25.45
5
1.39
0.66
3.82
5.24
28.58
7
1.71
0.73
5.89
7.16
30.13
9
2.17
0.78
7.88
9.02
31.23
11
2.78
0.82
9.77
10.81
31.52
13
3.56
0.86
11.58
12.51
32.02
Adaptive Wait-k(Zhang et al., 2020)
(ρ1, ρ13)(0.02, 0.00)
CW
AP
AL
DAL
BLEU
(0.04, 0.00)
1.42
0.54
0.99
3.00
20.50
(0.05, 0.00)
1.86
0.56
1.37
4.22
22.62
(0.06, 0.00)
2.10
0.57
1.69
4.81
23.77
(0.07, 0.00)
2.36
0.59
2.23
5.54
25.43
(0.08, 0.00)
2.58
0.61
2.70
6.14
27.06
(0.09, 0.00)
3.08
0.65
3.17
6.75
27.96
(0.10, 0.00)
3.28
0.67
4.28
7.33
28.92
(0.10, 0.03)
3.95
0.71
5.59
9.43
30.97
(0.10, 0.05)
4.36
0.74
6.70
10.41
31.30
(0.20, 0.00)
3.90
0.78
8.09
10.80
32.38
(0.20, 0.05)
4.78
0.82
10.00
12.35
32.46
(0.30, 0.20)
4.16
0.86
12.19
13.11
32.24
MoE Wait-k(Zhang and Feng, 2021c)
k
CW
AP
AL
DAL
BLEU
1
1.41
0.51
0.16
1.79
21.76
3
1.28
0.59
2.03
3.37
26.51
5
1.37
0.67
4.03
5.22
29.33
7
1.70
0.73
5.95
7.14
30.66
9
2.17
0.78
7.86
8.99
30.61
11
2.78
0.82
9.73
10.78
30.89
13
3.56
0.86
11.53
12.48
31.08
MMA(Ma et al., 2020)
λ
CW
AP
AL
DAL
BLEU
1
1.69
0.56
3.00
4.03
26.10
0.75
1.66
0.58
3.40
4.46
26.50
0.5
1.69
0.59
3.69
4.83
27.70
0.4
1.70
0.59
3.75
4.90
29.20
0.3
1.82
0.60
4.18
5.35
30.30
0.27
2.37
0.71
5.91
8.27
30.88
0.25
2.62
0.75
7.02
9.88
31.04
0.2
3.21
0.79
8.75
12.60
31.08
GMA(Zhang and Feng, 2022a)
δ
CW
AP
AL
DAL
BLEU
1.0
1.54
0.68
4.60
5.89
30.20
2.0
1.98
0.74
6.34
8.18
30.64
2.2
2.13
0.75
6.86
8.91
31.33
2.4
2.28
0.76
7.28
9.59
31.62
2.5
3.10
0.88
12.06
20.43
31.91
Wait-info
K
CW
AP
AL
DAL
BLEU
1
1.30
0.62
3.41
4.17
29.19
2
1.37
0.65
4.19
4.90
30.42
3
1.46
0.69
5.12
5.79
31.26
4
1.56
0.72
6.05
6.74
31.68
5
1.71
0.75
6.96
7.65
32.04
6
1.88
0.77
7.94
8.57
32.32
7
2.14
0.80
8.83
9.49
32.56
8
2.40
0.82
9.75
10.38
32.86
9
2.68
0.84
10.66
11.25
32.99
10
3.00
0.85
11.53
12.13
33.10
11
3.38
0.87
12.35
12.93
32.99
12
3.79
0.88
13.15
13.72
33.10
13
4.21
0.89
13.94
14.48
33.23
14
4.67
0.91
14.69
15.21
33.23
15
5.15
0.92
15.42
15.93
33.31
+
+Table 6: Numerical results on De→En with Transformer-Big.
\ No newline at end of file
diff --git a/waitinfopolicybalancingsourceandtargetatinformationlevelforsimultaneousmachinetranslation/images.zip b/waitinfopolicybalancingsourceandtargetatinformationlevelforsimultaneousmachinetranslation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b025ad179eedb337f8da23bc450190b08c9833ba
--- /dev/null
+++ b/waitinfopolicybalancingsourceandtargetatinformationlevelforsimultaneousmachinetranslation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:72ccf19866b71806ca49a0dcad47ca57d1d861b0aa5eb10bcf4459e64094944c
+size 1045832
diff --git a/waitinfopolicybalancingsourceandtargetatinformationlevelforsimultaneousmachinetranslation/layout.json b/waitinfopolicybalancingsourceandtargetatinformationlevelforsimultaneousmachinetranslation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..bc020fcdd18ff068c78bea37bb8542a4255a2a92
--- /dev/null
+++ b/waitinfopolicybalancingsourceandtargetatinformationlevelforsimultaneousmachinetranslation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aa5463d5aea93af48ab54239e1551caeadb5d362169bdbde53b95e47fffdc0cf
+size 528005
diff --git a/wanliworkerandaicollaborationfornaturallanguageinferencedatasetcreation/3ce257c6-1e2c-47fb-9912-79c64030ca44_content_list.json b/wanliworkerandaicollaborationfornaturallanguageinferencedatasetcreation/3ce257c6-1e2c-47fb-9912-79c64030ca44_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..2be8c6a82626e4a33463942f28f51500120ee85a
--- /dev/null
+++ b/wanliworkerandaicollaborationfornaturallanguageinferencedatasetcreation/3ce257c6-1e2c-47fb-9912-79c64030ca44_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f5a4f46120f30d58579e5b1335e722a2528e8c4273ff569140ea59b2d305dc10
+size 154550
diff --git a/wanliworkerandaicollaborationfornaturallanguageinferencedatasetcreation/3ce257c6-1e2c-47fb-9912-79c64030ca44_model.json b/wanliworkerandaicollaborationfornaturallanguageinferencedatasetcreation/3ce257c6-1e2c-47fb-9912-79c64030ca44_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..a3a870a166d4a925932eca6ac3faefbe00440966
--- /dev/null
+++ b/wanliworkerandaicollaborationfornaturallanguageinferencedatasetcreation/3ce257c6-1e2c-47fb-9912-79c64030ca44_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c937ee123b12e690eddaa01b4ba9f6f27983fd7db0c7791d94ecb9e965887f6a
+size 190782
diff --git a/wanliworkerandaicollaborationfornaturallanguageinferencedatasetcreation/3ce257c6-1e2c-47fb-9912-79c64030ca44_origin.pdf b/wanliworkerandaicollaborationfornaturallanguageinferencedatasetcreation/3ce257c6-1e2c-47fb-9912-79c64030ca44_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f906c02c6d01025f4951158c8a7607013c0e39ce
--- /dev/null
+++ b/wanliworkerandaicollaborationfornaturallanguageinferencedatasetcreation/3ce257c6-1e2c-47fb-9912-79c64030ca44_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0303c4d0612960ca794de8b67236aed2e4f429319a56bc8e47e161f5fde1549b
+size 2253176
diff --git a/wanliworkerandaicollaborationfornaturallanguageinferencedatasetcreation/full.md b/wanliworkerandaicollaborationfornaturallanguageinferencedatasetcreation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..d8970df391e76a5cd6a6325c8828630f89b2f992
--- /dev/null
+++ b/wanliworkerandaicollaborationfornaturallanguageinferencedatasetcreation/full.md
@@ -0,0 +1,620 @@
+# WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation
+
+Alisa Liu $^{\text{♥}}$ Swabha Swayamdipta $^{\text{♣}}$ Noah A. Smith $^{\text{♣}}$ Yejin Choi $^{\text{♣}}$ Paul G. Allen School of Computer Science & Engineering, University of Washington $\clubsuit$ Allen Institute for Artificial Intelligence $\diamond$ University of Southern California alisaliu@cs.washington.edu
+
+# Abstract
+
+A recurring challenge of crowdsourcing NLP datasets at scale is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. We introduce a novel approach for dataset creation based on worker and AI collaboration, which brings together the generative strength of language models and the evaluative strength of humans. Starting with an existing dataset, MultiNLI for natural language inference (NLI), our approach uses dataset cartography to automatically identify examples that demonstrate challenging reasoning patterns, and instructs GPT-3 to compose new examples with similar patterns. Machine generated examples are then automatically filtered, and finally revised and labeled by human crowdworkers. The resulting dataset, WANLI, consists of 107,885 NLI examples and presents unique empirical strengths over existing NLI datasets. Remarkably, training a model on WANLI improves performance on eight out-of-domain test sets we consider, including by $11\%$ on HANS and $9\%$ on Adversarial NLI, compared to training on the $4\times$ larger MultiNLI. Moreover, it continues to be more effective than MultiNLI augmented with other NLI datasets. Our results demonstrate the promise of leveraging natural language generation techniques and re-imagining the role of humans in the dataset creation process.
+
+# 1 Introduction
+
+As much as large-scale crowdsourced datasets have expedited progress on various NLP problems, a growing body of research has revealed fundamental limitations in existing datasets: they are often flooded with repetitive and spurious patterns, rather than covering the broad range of linguistic phenomena required by the task (Bowman and Dahl, 2021). This leads to models that seem to achieve human-level performance on in-domain test sets, yet are brittle when given out-of-domain or adversarial examples (Ribeiro et al., 2020; Glockner et al., 2018).
+
+
+Figure 1: An illustration of our pipeline for creating WANLI. Starting with a data map (Swayamdipta et al., 2020) of an existing dataset relative to a trained model, (1) we automatically identify pockets of data instances exemplifying challenging reasoning patterns. Next, (2) we use GPT-3 to generate new instances with the same pattern. These generated examples are then (3) automatically filtered via a metric we introduce inspired by data maps, and (4) given to human annotators to assign a gold label and optionally revise.
+
+We attribute this problem to an inherent challenge in the crowdsourcing design—the prevalent paradigm for creating large-scale NLP datasets—where a relatively small number of workers create a massive number of free text examples. While human annotators are generally reliable for writing correct examples, crafting diverse and creative examples at scale can be challenging. Thus, crowdworkers often resort to a limited set of writing strategies for speed, at the expense of diversity (Geva et al., 2019; Gururangan et al., 2018). When models overfit to such repetitive patterns, they fail
+
+to generalize to out-of-domain examples where these patterns no longer hold (Geirhos et al., 2020).
+
+On the other hand, there has been remarkable progress in open-ended text generation based on massive language models (Brown et al., 2020; Raffel et al., 2020, i.a.). Despite known deficiencies such as incoherence or repetition (Dou et al., 2021), these models often produce human-like text (Clark et al., 2021) and show potential for creative writing tasks (Lee et al., 2022). Importantly, these models are capable of replicating a pattern given just a few examples in context (Brown et al., 2020, GPT-3).
+
+In this paper, we introduce a novel approach for dataset creation which brings together the generative strength of language models and the evaluative strength of humans through human and machine collaboration (§2). The key insight of our approach is that language models can create new examples by replicating linguistic patterns that are valuable for training, without necessarily "understanding" the task itself. Illustrated in Figure 1, our pipeline starts with an existing dataset. We use dataset cartography from Swayamdipta et al. (2020) to automatically identify pockets of examples that demonstrate challenging reasoning patterns relative to a trained model. Using each group as a set of in-context examples, we leverage a pretrained language model to generate new examples likely to have the same pattern (see Table 1). We then propose a novel metric, building on dataset cartography, to automatically filter generations that are most likely to aid model learning. Finally, we validate the generated examples by subjecting them to human review, where crowdworkers assign a gold label and (optionally) revise for quality.
+
+We demonstrate the effectiveness of our approach on the task of natural language inference (NLI), which determines whether a premise entails (i.e., implies the truth of) a hypothesis, both expressed in natural language. Despite being one of the most resource-available tasks in NLP, analysis and challenge sets repeatedly demonstrate the limitations of existing datasets and the brittleness of NLI models trained on them (Gururangan et al., 2018; Poliak et al., 2018; Tsuchiya, 2018). Using MultiNLI (Williams et al., 2018) as our original dataset, we use our pipeline to create a dataset of 107,885 examples, which we call Worker-and-AI NLI (WANLI).1
+
+Remarkably, empirical results demonstrate that replacing MultiNLI supervision with WANLI (which is 4 times smaller) improves performance on eight different out-of-domain test sets, including datasets that are converted to the NLI format from downstream tasks such as question-answering and fact verification ( $\S 3$ ). This result holds even when augmenting MultiNLI with other NLI datasets and recently proposed augmentation sets. Moreover, including WANLI in the training data can help improve performance on certain in-domain test sets. We then analyze WANLI and show that it has fewer previously documented spurious correlations than MultiNLI ( $\S 4$ ), and provide insights into the collaborative framework ( $\S 5$ ).
+
+Our approach contrasts with previous instruction-based generation of dataset examples (Schick and Schütze, 2021; West et al., 2021), which require the model to understand the task from context, fundamentally limiting the complexity of generated output to what is accessible by the model. Moreover, our human-in-the-loop approach is collaborative, rather than adversarial (Dinan et al., 2019; Nie et al., 2020; Bartolo et al., 2020). Overall, we leverage the best of both worlds: a powerful model's ability to efficiently generate diverse examples, and humans' ability to improve and ensure the quality of generations.
+
+Our worker-AI collaborative approach is more scalable compared to the traditional crowdsourcing framework. Our approach is generalizable, allowing for rejuvenating datasets on many different classification tasks, especially when performance seems to stagnate due to overfitting to popular benchmarks (Recht et al., 2019). Our work shows the promise of leveraging language models in a controlled way to aid the dataset creation process, and we encourage the community to think of dataset curation as an AI challenge itself.
+
+# 2 Worker-AI Collaborative Dataset Creation for NLI
+
+We describe our four-stage approach for dataset creation based on worker and AI collaboration. In this work, we apply it to the task of natural language inference (NLI), which involves predicting whether a premise entails, contradicts or is neutral to a hypothesis. NLI has broad applicability in NLP: it has proven useful for pretraining (Clark et al., 2019; Phang et al., 2018), and can be applied to verify candidate answers in question-answering
+
+
Seed MultiNLI example
Generated WANLI Example
Label & Reasoning
P: 5 percent probability that each part will be defect free.H: Each part has a 95 percent chance of having a defect.
P: 1 percent of the seats were vacant.H: 99 percent of the seats were occupied.
EntailmentSet complements
P: The artisans, tradespeople, and providers of entertainment (reputable and not so reputable) lived downtown on the reclaimed marshlands north and east, in the area still known as Shitamachi.H: The only place where artisans, tradespeople and entertainers could live was in the marshlands to the north and east.
P: At the time of the Revolution, the old port of Marseille was a great center of shipbuilding and commerce.H: The only place where ships were built was in the old port of Marseille.
NeutralHypothesis introduces exclusivity
P: To the south, in the Sea of Marmara, lie the woods and beaches of the Princes' Islands.H: In the north is the Sea of Marmara where there are mountains to climb.
P: From the park's southern entrance, follow the avenue south to the Hotel de Ville.H: From the park's northern entrance, follow the avenue north to the Hotel de Ville.
ContradictionReversing cardinal directions
P: Democrats released documents indicating that Republicans sold big political donors meals with the party's leaders in federal buildings in 1995.H: It is illegal for a party to solicit products to donors.
P: In the late 1960s, students at a university in Wisconsin tried to organize a union.H: It was illegal for the students to organize a union.
NeutralIllegal things can happen
P: She ducked and parried the blow.H: She ducked to miss the blow.
P: She stepped on the brake and the car came to a stop.H: She stepped on the brake to stop the car.
EntailmentImplied intention
P: To build a worldclass finance organization and help achieve better business outcomes, each of the organizations we examined set an agenda for transforming the finance organization by defining a shared vision -i.e.H: The transformation was a disaster and the entire organization had to be scrapped.
P: In order to help improve customer service, I suggested that they send a representative to our office to discuss our concerns.H: The representative sent to our office did not solve our problems and we lost a lot of business.
NeutralIntended goals may not actualize
P: Salinger wrote similar letters to other young female writers.H: Other young female writers received similar letters from Salinger as well.
P: The three schools have a number of students who are from families with no history of financial difficulties.H: Families with no history of financial difficulties send their children to the three schools.
EntailmentSubstituting a verb with a different subcategorization frame
+
+Table 1: Seed MultiNLI examples, and corresponding WANLI examples generated by GPT-3. P stands for premise, H for hypothesis. The seed example is "ambiguous" according to the definitions of Swayamdipta et al. (2020), discussed in §2. The remaining in-context examples (shown in Appendix C.1) share the same pattern and are found using distance in [CLS] embeddings of a trained task model. The reasoning is a short description of the pattern we observe from the group, and which is successfully repeated in the generated example.
+
+(Chen et al., 2021) or factuality of generated summaries (Maynez et al., 2020).
+
+Our approach requires as prerequisites an initial dataset $\mathcal{D}_0$ and a strong task model $\mathcal{M}$ trained on $\mathcal{D}_0$ . We use MultiNLI (Williams et al., 2018), a large-scale multi-genre NLI dataset, as $\mathcal{D}_0$ . We finetune RoBERTa-large (Liu et al., 2019) on MultiNLI for our task model $\mathcal{M}$ (training details in Appendix B).
+
+As an overview, we first automatically collect groups of examples exemplifying challenging reasoning patterns in $\mathcal{D}_0$ relative to $\mathcal{M}$ , using data maps (Swayamdipta et al., 2020; Stage 1, see §2.1). Then we overgenerate similar examples by leveraging the pattern replication capabilities of GPT-3 (Brown et al., 2020) (Stage 2; §2.2). While GPT-3 can generate examples efficiently, it may not reliably replicate the desired pattern and its output quality will not be uniform. We address this by automatically filtering the generated examples using a metric derived from data maps (Stage 3; §2.3). We finally subject the collected data to human review, in which crowdworkers optionally revise
+
+examples and assign gold labels (Stage 4; §2.4).
+
+Dataset Cartography. A key component of our pipeline is inspired by data maps (Swayamdipta et al., 2020), which automatically reveal different regions in a dataset, w.r.t. the behavior of a classification model during training. These include easy-to-learn examples which the model consistently predicts correctly through training, hard-to-learn examples on which it is consistently incorrect, and ambiguous examples for which the model's confidence in the correct answer exhibits high variability across train epochs. Our pipeline focuses on ambiguous examples, which were shown to lead to more robust models. Additionally, ambiguous examples contain fewer spurious correlations (Gardner et al., 2021), suggesting that they capture underrepresented counterexamples to spurious correlations. Indeed, such counterexamples take more epochs of training to learn and are crucial for generalization (Tu et al., 2020), providing a potential explanation for why they appear ambiguous across early epochs and lead to more robust models.
+
+# 2.1 Stage 1: Collection of Exemplars
+
+In this stage, we automatically collect groups of examples from $\mathcal{D}_0$ which represent linguistic patterns we wish to include in the target dataset. We begin with a seed example $(x_i,y_i)\in \mathcal{D}_0$ belonging to the most ambiguous $p = 25\%$ relative to $\mathcal{M}$ .
+
+To generate a new example with the same reasoning pattern, we wish to leverage the ability of GPT-3 (Brown et al., 2020) for in-context learning; hence, we need to first collect examples that test a similar kind of reasoning to $x_{i}$ . To do this, we use the [CLS] token representation of each example relative to the task model $\mathcal{M}$ , and find the $k = 4$ nearest neighbors via cosine similarity to $x_{i}$ that have the same label. Detailed qualitative inspection shows that the nearest neighbors in this representation space tend to capture a human-interpretable similarity in the reasoning required to solve an example, rather than lexical or semantic similarity (examples in Table 1).
+
+Han and Tsvetkov (2021) give another interpretation for this approach: for examples with the same label, the similarity of [CLS] token embeddings actually represents the similarity of gradient updates in the row of the final projection layer corresponding to that label. Thus, two examples are close if training on them would "update" the final layer of the model similarly.
+
+By automatically identifying areas for augmentation, our method does not require any prior knowledge of challenging patterns and makes our method tractable for building on top of large-scale datasets. Nonetheless, exemplar collection could potentially be approached in different ways (e.g., through expert curation or category labels).
+
+# 2.2 Stage 2: Overgeneration
+
+Given an automatically extracted group of $k + 1$ examples from the original dataset $\mathcal{D}_0$ , we construct a natural language context (prompt) for a left-to-right language model; in this work, we use GPT-3 Curie (the second-largest GPT-3 model). The prompt template we use is shown in Figure 2, where we order the examples in increasing similarity to the seed example.
+
+Note that our method leverages GPT-3 in way
+
+
+Figure 2: Prompt template instructing GPT-3 to generate a new example, given a set of in-context examples. To separate the premise and hypothesis, the word "Implication" is used for entailment examples (shown here), "Possibility" for neutral examples, and "Contradiction" for contradiction examples.
+
+that is distinct from its typical usage in few-shot settings, where given examples demonstrating a task, GPT-3 performs the task on a new, unlabeled example. Here, we instead give GPT-3 examples representing a particular slice of the task, and ask GPT-3 to generate a new example in the same slice.
+
+For each context, we sample from GPT-3 to create $n = 5$ distinct examples. We use top- $p$ decoding (Holtzman et al., 2020) with $p = 0.5$ (additional details in Appendix C.2). Although generated examples at this stage could be assumed to share label of its $k + 1$ in-context examples, we instead consider the resulting dataset $\mathcal{D}_{\mathrm{gen}} = \{x_i\}_i$ at the end of Stage 1 to be unlabeled.
+
+# 2.3 Stage 3: Automatic Filtering
+
+In this step, we wish to filter generated examples from Stage 2 to retain those that are the most ambiguous with respect to $\mathcal{M}$ . However, computing ambiguity for an example requires that it be a part of the original training set, whereas we wish to estimate the ambiguity of an unlabeled example without additional training. Thus we introduce a new metric called estimated max variability, which measures the worst-case spread of predictions on an example $x_{i}$ across checkpoints of a trained model. Let $E$ be the total epochs in training, $\mathcal{V}$ the label set, and $p_{\theta^{(e)}}$ the probability assigned with parameters $\theta^e$ at the end of the $e$ -th epoch. We define the estimated max variability as:
+
+$$
+\sigma_ {i} = \max _ {y \in \mathcal {Y}} \sigma \left(\left\{p _ {\theta (e)} \left(y \mid x _ {i}\right) \right\} _ {e \in E}\right), \tag {1}
+$$
+
+where $\sigma$ is the standard deviation function.
+
+Concretely, we retroactively compute the prediction from each saved epoch of $\mathcal{M}$ on $x_{i}$ . The
+
+only assumption made is that the single example, if it had been a part of the training set, would have made a negligible difference on each model checkpoint (at least as observed through its posterior probabilities). In taking a maximum across labels, we consider $x_{i}$ to be ambiguous as long as $\mathcal{M}$ is undecided on any label $\in \mathcal{V}$ .
+
+We first employ simple heuristics to discard examples exhibiting observable failure cases of GPT-3. Specifically, we discard examples where 1) the premise and hypothesis are identical, modulo punctuation or casing, 2) the generated example is an exact copy of an in-context example, 3) the example contains some phrases from the instruction (e.g., "pair of sentences"), or 4) the premise or hypothesis is shorter than 5 characters. Then, we compute the estimated max variability for the remaining examples with respect to $\mathcal{M}$ , and retain an equal number of examples from each (intended) label class with the highest max variability, to create a dataset $\mathcal{D}_{\mathrm{filtered}}$ that is half the size of $\mathcal{D}_{\mathrm{gen}}$ .
+
+# 2.4 Stage 4: Human Review
+
+As the final stage of our pipeline, we recruit human annotators on Amazon Mechanical Turk to review each unlabeled example $x_{i} \in \mathcal{D}_{\text{filtered}}$ . (Details about crowdworkers and guidelines in Appendix D.) The annotator may optionally revise $x_{i}$ to create a higher-quality example $x_{i}'$ , or let $x_{i}' = x_{i}$ . Either way, they assign a label $y_{i}$ . When revising examples, we asked annotators to preserve the intended meaning as much as possible through minimal revisions. However, if an example would require a great deal of revision to fix or if it could be perceived as offensive, they should discard it. This results in the labeled dataset $\mathcal{D}_{\text{collab}} = \{(x_{i}', y_{i})\}_{i}$ .
+
+Crowdworkers annotate a total of 118,724 examples, with two distinct workers reviewing each example. For examples that both annotators labeled without revision, we achieved a Cohen's $\kappa$ of 0.60, indicating substantial agreement. To create the final dataset, we discard an example if either annotator chose to discard it, and we keep a revision only if both annotators revise an example (and choose a revision uniformly at random). When both annotators label the example as-is but choose different labels, we sample one of the two labels uniformly
+
+
Split
Size
Label distribution (E/N/C)
Train
102,885
38,511 / 48,977 / 15,397
Test
5,000
1,858 / 2,397 / 745
+
+Table 2: WANLI dataset statistics.
+
+at random. The rationale for this is discussed in Appendix D.4. This leads to a labeled dataset of 107,885 examples (90.87% of all annotated examples, with the remaining discarded). Of the labeled examples, 3.54% were revised.
+
+We randomly split the data into a train and test sets. Key dataset statistics are summarized in Table 2. Unlike MultiNLI, WANLI is not label-balanced; see §5.3 for a discussion.
+
+In general, we believe the role of revision depends on the quality of machine-generated examples. Indeed, we need to strike a balance between leveraging human capabilities and avoiding the re-emergence of annotation artifacts that may come with too much freedom in revision.
+
+# 3 Training NLI Models with WANLI
+
+We finetune different copies of RoBERTa-large (Liu et al., 2019) on different training sets, and evaluate each resulting model's performance on a large suite of NLI challenge sets. Given that the challenge sets were constructed independently of MultiNLI or WANLI, we consider them out-of-distribution (OOD) for both training datasets.
+
+# 3.1 NLI Test Suite
+
+The NLI challenge sets come from a wide array of domains, methodologies (e.g., crowdsourcing, expert curation, generation), and initial task formats (e.g., question-answering, fact verification).5
+
+NLI Diagnostics (Wang et al., 2018) is a manually-curated test set that evaluates a variety of linguistic phenomena using naturally-occurring sentences from several domains.
+
+HANS (McCoy et al., 2019) targets unreliable syntactic heuristics based on lexical overlap between the premise and hypothesis.
+
+QNLI was adapted from the Stanford Question-Answering Dataset (Rajpurkar et al., 2016) by the GLUE benchmark (Wang et al., 2018). Each exam
+
+
Data size
Test Set
Diagnostics
HANS*
QNLI*
WNLI*
NQ-NLI*
ANLI
FEVER-NLI
BIG-Bench*
WANLI
1104
30K
5266
706
4855
3200
20K
3324
5000
Training Set
MNLI
393K
68.47
78.08
52.69
56.09
62.34
32.37
68.29
64.68
64.62
MNLI + Tailor
485K
67.75
79.03
54.89
56.23
63.83
32.87
68.75
72.38
64.27
MNLI + Z-Aug
754K
66.39
80.52
57.72
55.52
62.30
33.37
68.73
66.12
64.78
MNLI ∘ ANLI
393K
67.75
79.90
68.74
60.48
62.49
54.59
72.30
72.32
65.96
MNLI + ANLI
556K
66.84
77.94
62.41
57.08
62.84
53.84
72.30
71.11
65.93
MNLI ∘ FEVER-NLI
393K
66.75
76.50
56.70
57.08
61.81
35.65
76.83
58.39
63.31
MNLI + FEVER-NLI
601K
67.57
76.05
52.90
54.95
63.02
35.37
76.93
64.65
64.53
MNLI + SNLI + ANLI
943K
68.75
78.65
63.38
58.49
62.94
54.21
72.02
71.05
65.10
MNLI ∘ WANLI
393K
71.01
83.10
77.00
61.89
62.94
36.46
71.14
76.17
75.49
MNLI + WANLI
496K
71.64
82.00
68.40
60.05
63.21
36.78
70.79
70.81
75.26
WANLI
103K
72.73
89.28
81.40
67.28
64.18
41.12
70.13
85.19
75.40
+
+Table 3: Empirical comparison of different training sets for RoBERTa-large, for generalization to out-of-distribution (OOD) challenge sets. Gray cells mark settings that do not represent an OOD challenge. Top: Training on MultiNLI alone. Middle: Comparison of combination schemes with MultiNLI. We consider two data combination strategies, augmentation (+), and random replacement $(\diamond)$ , where the resulting dataset size is unchanged. Bottom: Training sets that include WANLI. The highest accuracy on each test set (excluding gray cells) is bolded. Test sets with * contain two label classes: entailment and non-entailment.
+
+ple consists of a premise that is a sentence, and a hypothesis that is a question, which is entailed if the question is answered by the premise.
+
+Winograd NLI was adapted by the GLUE benchmark from the Winograd Schema Challenge (Levesque et al., 2011), which tests correct coreference via common sense. To convert this dataset to NLI, an entailed hypothesis is formed by substituting a correct referent and a non-entailed hypothesis is formed by substituting an incorrect referent.
+
+Adversarial NLI (ANLI; Nie et al., 2020) is an adversarially-constructed dataset where crowd-workers are instructed to write examples that stump existing models. Examples are collected in three rounds that progressively increase in difficulty, with model adversaries trained on MultiNLI, SNLI (Bowman et al., 2015), FEVER-NLI (discussed below), as well as ANLI sets from earlier rounds.
+
+Natural Questions NLI (NQ-NLI, Chen et al., 2021) is created from the Natural Questions QA dataset (Kwiatkowski et al., 2019). The premise is a decontextualized sentence from the original context; the hypothesis consists of a question and answer candidate converted into declarative form.
+
+FEVER NLI is adapted from the FEVER fact verification dataset (Thorne et al., 2018), and introduced along with ANLI. In each example, the premise is a short context from Wikipedia, and the hypothesis is a claim that is either supported (entailed), refuted (contradicted), or neither (neutral).
+
+BIG-Bench NLI is a combination of four datasets from BIG-Bench (Srivastava et al., 2022) about
+
+entailment: Analytic Entailment, Epistemic Reasoning, Disambiguation QA, Presuppositions NLI.
+
+# 3.2 Training Datasets
+
+In addition to stand-alone WANLI and MultiNLI, we also consider combining MultiNLI with other NLI datasets. We use the train sets of SNLI (Bowman et al., 2015), ANLI, and FEVER-NLI, as well as the augmentation set generated via TAILOR (Ross et al., 2022), which perturbed SNLI hypotheses to create examples with high lexical overlap between the premise and hypothesis, and the augmentation set Z-Aug (Wu et al., 2022), which was created by generating in-distribution examples and filtering them based on spurious correlations.
+
+We consider two schemes for combining datasets $\mathcal{A}$ and $\mathcal{B}$ : 1) augmentation $(\mathcal{A} + \mathcal{B})$ , in which the two datasets are concatenated, and 2) random replacement $(\mathcal{A} \diamond \mathcal{B})$ , where $|\mathcal{B}|$ examples from $\mathcal{A}$ are randomly swapped out and replaced with all examples from $\mathcal{B}$ .
+
+# 3.3 Results
+
+Results are shown in Table 3. When comparing MultiNLI (MNLI) and WANLI alone, training a model on WANLI instead of MultiNLI leads to better performance on every test set we consider, including by $4\%$ on Diagnostics, $11\%$ on HANS, and $9\%$ on Adversarial NLI. This is remarkable given WANLI is $4\times$ smaller than MultiNLI, and contains primarily machine-written examples.
+
+A WANLI-trained model continues to outperform baselines that combine MultiNLI with other
+
+
Test Set
Diagnostics
HANS*
ANLI
BIG-Bench*
WANLI
ANLI
65.67
80.58
55.21
77.10
63.85
ANLI + WANLI
72.82
88.58
56.59
84.89
75.84
+
+Table 4: Comparison of whether including WANLI in the training data of ANLI improves in-domain test performance, when finetuning RoBERTa-large.
+
+NLI datasets and augmentation sets, in every OOD setting. This includes when comparing to a model trained on $9 \times$ more data from three existing NLI datasets, MNLI + SNLI + ANLI. The consistent advantage of WANLI over datasets that include ANLI (e.g., MNLI + ANLI) is noteworthy, as ANLI's adversarial creation pipeline posed a much greater challenge for human workers, and used more existing resources to train model adversaries.
+
+Quite surprisingly, training on WANLI alone also outperforms combining WANLI with MultiNLI. This reinforces that more data might not necessarily be better, especially when the data predominantly consists of easy-to-learn examples.
+
+In addition to the OOD setting, we consider whether augmentation with WANLI can improve in-domain test performance for another dataset (Table 4). Indeed, augmenting ANLI's train set with WANLI improves test accuracy on ANLI by $1.4\%$ , while greatly aiding OOD test performance.
+
+# 4 Artifacts in WANLI
+
+We next investigate whether WANLI contains similar artifacts to MultiNLI. We find that while WANLI contains fewer previously known spurious correlations, it has a distinct set of lexical correlations that may reflect artifacts in GPT-3 output.
+
+# 4.1 Partial Input Models
+
+Given that the task requires reasoning with both the premise and the hypothesis, a model that sees only one of the two inputs should have no information about the correct label. We reproduce the methodology from Gururangan et al. (2018) and train fastText classifiers to predict the label using partial input. After first balancing WANLI, a model trained on just the hypotheses of WANLI achieves $41.6\%$ accuracy on the test set compared to $49.6\%$ for MultiNLI, when restricted to the same size. A
+
+
+
+
+Figure 3: Competency problem-style statistical correlation plot between individual words and particular class labels, where the $y$ -axis is the probability of label $y$ given the presence of the word $x_{i}$ , and the $x$ -axis is the number of times word $x_{i}$ appears in the data. All points representing (word, label) pairs above the blue line have detectable correlations (Gardner et al., 2021).
+
+premise-only model trained on WANLI achieves an accuracy of $42.9\%$ .7
+
+# 4.2 Lexical Correlations
+
+Gardner et al. (2021) posit that all correlations between single words and output labels are spurious. We plot the statistical correlation for every word and label in Figure 3, after balancing WANLI and downsampling MultiNLI. We observe that WANLI also contains words with detectable correlations, suggesting that GPT-3 may have some artifacts of its own due to the slightly different templates and different sets of in-context examples for each label. Interestingly, the correlations tend to be a different set of words than for MultiNLI (other than "not" and "no"), with less interpretable reasons for correlating with a certain label (e.g., "second", "was").
+
+# 4.3 Premise-Hypothesis Semantic Similarity
+
+We explore the semantic similarity between the premise and hypothesis within each label class using Sentence-BERT (Reimers and Gurevych, 2019); these distributions are shown in Figure 4. In both MultiNLI and WANLI, entailed hypotheses are naturally most semantically similar to the premise. In MultiNLI, this is followed by neutral
+
+
+Figure 4: Semantic similarity between the premise and hypothesis, computed based on SBERT embeddings (Reimers and Gurevych, 2019). The distributions for each label class are much more well-separated in MultiNLI than in WANLI.
+
+examples and then contradiction examples. In contrast, in WANLI there is much greater overlap in the three distributions, and those for neutral and contradiction examples are nearly indistinguishable. This suggests in WANLI, the semantic similarity between the premise and hypothesis provides less signal of the label.
+
+# 5 What does WANLI show about the human machine collaboration pipeline?
+
+We discuss observations from collecting WANLI that may shed insight for future work in the direction of collaborative dataset creation.
+
+# 5.1 What kinds of revisions do annotators tend to make?
+
+We find that revisions fall broadly into two categories: improving the fluency of the text, and improving the clarity of the relationship. The majority of revisions change the length only slightly, with $74\%$ of both premise revisions and hypothesis revisions changing the word count between $-1$ and $+2$ words. Fluency revisions often target well-documented issues with text generation, such as redundancy and self-contradiction. Clarity revisions often resolve ambiguities in the example that make the entailment relationship difficult (or impossible) to determine, such as ambiguous coreference or temporal references. We provide examples of revisions in Appendix D.3.
+
+# 5.2 What kinds of examples do annotators disagree on?
+
+We find that examples on which annotators disagree provide an extremely interesting test bed for how ambiguities surface in classification tasks. Upon inspecting the examples (some are shown in Table 5), we observe that they represent genuinely ambiguous cases rather than careless mislabels, echoing
+
+previous findings (Pavlick and Kwiatkowski, 2019). See further discussion in Appendix D.4.
+
+# 5.3 How reliably does GPT-3 reproduce the in-context pattern?
+
+One characteristic of WANLI is its imbalanced label distribution: even though the set of seed examples for generation was constructed to be balanced, after undergoing human labeling, only $15\%$ of examples are given the contradiction label. We observe that contradiction patterns in in-context examples are generally much more challenging for GPT-3 to copy, likely because it was trained on (mostly) coherent sequences of sentences. More broadly, we find that more abstract reasoning patterns are harder for GPT-3 to mimic than patterns that involve simpler transformations.
+
+Nonetheless, even when GPT-3 does not successfully copy the examples, the diverse set of in-context examples leads to a variety of creative output that may be challenging for human crowd-workers to achieve.
+
+# 6 Related Work
+
+Crowdsourcing The scalability and flexibility of crowdsourcing has enabled the creation of foundational NLP benchmarks across a wide range of subproblems, and made it the dominant paradigm for data collection (Mihaylov et al., 2018; Rajpurkar et al., 2016; Huang et al., 2019; Talmor et al., 2019, i.a.). Nonetheless, a growing body of research shows that resulting datasets may not isolate the key linguistic phenomena (Jia and Liang, 2017; Chen et al., 2016; Sugawara et al., 2020).
+
+For crowdsourcing NLI datasets, where the annotator is given a premise and asked to write a hypothesis of each label (Bowman et al., 2015; Williams et al., 2018), the presence of annotation artifacts is especially well-studied (Gururangan et al., 2018; McCoy et al., 2019; Glockner et al., 2018). Recent work attempted to remedy this through different data collection protocols but found negative results (Vania et al., 2020; Bowman et al., 2020), showing this is a hard problem requiring greater innovation.
+
+Adversarial data collection In this paradigm, annotators are asked to produce examples on which current systems fail (Kiela et al., 2021; Talmore et al., 2021; Zellers et al., 2019, i.a.). Beyond increasing annotator effort (Bartolo et al., 2020), adversarial methods have been challenged for not leading to better generalization on non-adversarial
+
+
Example
Labels
Ambiguity
P: According to the most recent statistics, the rate of violent crime in the United States has dropped by almost half since 1991.
+H: The rate of violent crime has not dropped by half since 1991.
Entailment
+Contradiction
Does “almost half” mean “not half” or “basically half”?
P: As a result of the disaster, the city was rebuilt and it is now one of the most beautiful cities in the world.
+H: A disaster made the city better.
Entailment
+Neutral
Do indirect consequences count?
P: It is a shame that the world has to suffer the pain of such unnecessary war.
+H: The world does not have to suffer such pain.
Entailment
+Contradiction
Is the scope of “has to” in the hypothesis given the war or not?
P: The original draft of the treaty included a clause that would have prohibited all weapons of mass destruction.
+H: The clause was removed in the final version of the treaty.
Entailment
+Neutral
Does the premise imply that the clause is no longer in the treaty?
P: If you can’t handle the heat, get out of the kitchen.
+H: If you can’t handle the pressure, get out of the situation.
Entailment
+Neutral
Is the premise to be interpreted literally or figuratively?
P: In a world of increasing uncertainty, the only certainty is that nothing is certain.
+H: There is no certainty in the world.
Entailment
+Contradiction
Self-contradictory but coherent premise
+
+Table 5: Examples where two annotators assigned different labels. We find that many examples represent genuinely ambiguous cases rather than careless mislabels, echoing previous findings (Pavlick and Kwiatkowski, 2019).
+
+test sets (Kaushik et al., 2021) and decreasing data diversity (Bowman and Dahl, 2021). Moreover, the resulting data has been shown to depend strongly on the adversaries, inhibiting a fair evaluation (Phang et al., 2021). Finally, these approaches may produce examples beyond the scope of the task. For example, in Adversarial NLI (Nie et al., 2020), an estimated $58\%$ of examples required "reasoning from outside knowledge or additional facts," which is arguably separate from the underlying problem of understanding semantic entailments. We argue that we can better leverage the strengths of machines and humans by having them collaborate rather than act as adversaries.
+
+Dataset generation Another recent approach leverages language models toward fully automatic dataset creation (Schick and Schütze, 2021; Wu et al., 2022; West et al., 2021; Bartolo et al., 2021a, i.a.). Removing human input may fundamentally limit the complexity of examples to phenomena already accessible by the model, when our goal is precisely to teach models more diverse phenomena. The most similarly-motivated work to ours, Lee et al. (2021), trains a data generator on "data-rich slices" of an existing dataset, and applies it to under-represented slices. However, they use labels or metadata to represent slices, leaving automatic methods of identifying slices to future work.
+
+Human-machine collaboration In terms of human-machine collaboration, Tekiroğlu et al. (2020) and Yuan et al. (2021) employ a language model to generate counter-narratives to hate speech and biographies, respectively, which are validated and revised by humans. This was for a generative task, and we complement their findings by showing that human-machine collaboration can also be useful for generating labeled datasets for robust classification models. Contemporary work (Bartolo et al., 2021b) finetunes a generative annotation assistant to produce question-answer pairs that humans can revise for extractive QA.
+
+# 7 Conclusion
+
+At the heart of dataset creation is distilling human linguistic competence into data that models can learn from. The traditional crowdsourcing paradigm takes the view that the best approach for this is to solicit people to write free-form examples expressing their capabilities. In this work, we present a worker-and-AI collaborative approach and apply it to create WANLI, whose empirical utility suggests that a better way of eliciting human intelligence at scale is to ask workers to revise and evaluate content. To this end, we hope to encourage more work in developing generative algorithms to aid the dataset creation process, and therefore re-imagining the role of human annotation.
+
+# Acknowledgments
+
+We thank members of UW NLP, AI2, and Mila NLP for valuable feedback and discussion, and especially Jena Hwang for help in designing the AMT template, Julian Michael for countless discussions of NLI examples, and Alexander Fang for feedback during writing. We thank OpenAI for offering access to the GPT-3 API and the anonymous reviewers for valuable feedback.
+
+This work was funded in part by the DARPA MCS program through NIWC Pacific (N66001-19-2-4031). The first author is supported by the National Science Foundation Graduate Research Fellowship Program.
+
+# 8 Ethics Statement
+
+We acknowledge that text generated from large pretrained language models is susceptible to perpetuating social harms and containing toxic language (Sheng et al., 2019; Gehman et al., 2020). To partially remedy this, we ask annotators to discard any examples that may be perceived as offensive. Nonetheless, it is possible that harmful examples (especially if they contain subtle biases) may have been missed by annotators and included in the final dataset. Specifically due to the above harms, we additionally caution readers and practitioners against fully automating any data creation pipeline.
+
+In addition, we are cognizant of the asymmetrical relationship between requesters and workers in crowdsourcing. We took great care to pay fair wages, and were responsive to feedback and questions throughout the data collection process (see Appendix D for details). The only personal information we collect is the worker IDs from Amazon Mechanical Turk, which we will not release. The annotation effort received an IRB exemption.
+
+# 9 Limitations
+
+In this paper, we apply our collaborative dataset creation pipeline to a single language and task, English natural language inference, and leave application of the pipeline more broadly to future work.
+
+It is possible (if not likely) that datasets partially authored by language models will have artifacts of their own, especially those reflecting social biases that may not be captured by our accuracy-based evaluation setup. For investigation of a specific generation artifact observed by Yuan et al. (2021) in their own collaborative dataset, namely the over
+
+representation of Western entities, please see Appendix C.4.
+
+We are not able to perform ablations on different parts of the pipeline to understand the effectiveness of each component, e.g., by comparing different means of collecting exemplar groups or different templates for prompting GPT-3. Unfortunately, such variations would be prohibitively expensive as they each require collecting a dataset of sufficient scale (along with the necessary human annotation).
+
+Finally, although we uncover examples where annotators disagree for valid reasons (see Table 5), we only use one label per example for training and evaluation. This is because to show the effectiveness of WANLI, we need to compare WANLI to existing (singly-labeled) training datasets via performance on established (singly-labeled) benchmarks. We encourage future work to understand the limitations of forcing inherently ambiguous instances into the $n$ -way classification scheme, or otherwise discarding these potentially valuable examples of linguistic reasoning as noise.
+
+# References
+
+Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. FLAIR: An easy-to-use framework for state-of-the-art NLP. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54-59, Minneapolis, Minnesota. Association for Computational Linguistics.
+Max Bartolo, Alastair Roberts, Johannes Welbl, Sebastian Riedel, and Pontus Stenetorp. 2020. Beat the AI: Investigating adversarial human annotation for reading comprehension. Transactions of the Association for Computational Linguistics, 8:662-678.
+Max Bartolo, Tristan Thrush, Robin Jia, Sebastian Riedel, Pontus Stenetorp, and Douwe Kiela. 2021a. Improving question answering model robustness with synthetic adversarial data generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8830-8848, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Max Bartolo, Tristan Thrush, Sebastian Riedel, Pontus Stenetorp, Robin Jia, and Douwe Kiela. 2021b. Models in the loop: Aiding crowdworkers with generative annotation assistants. ArXiv.
+Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages
+
+632-642, Lisbon, Portugal. Association for Computational Linguistics.
+Samuel R. Bowman and George Dahl. 2021. What will it take to fix benchmarking in natural language understanding? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4843-4855, Online. Association for Computational Linguistics.
+Samuel R. Bowman, Jennimaria Palomaki, Livio Baldini Soares, and Emily Pitler. 2020. New protocols and negative results for textual entailment data collection. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8203-8214, Online. Association for Computational Linguistics.
+T. Brown, B. Mann, Nick Ryder, Melanie Subbiah, J. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, G. Kruger, T. Henighan, R. Child, Aditya Ramesh, D. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, E. Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, J. Clark, Christopher Berner, Sam McCandlish, A. Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS).
+Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2358-2367, Berlin, Germany. Association for Computational Linguistics.
+Jifan Chen, Eunsol Choi, and Greg Durrett. 2021. Can NLI models verify QA systems' predictions? In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3841-3854, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. *BoolQ: Exploring the surprising difficulty of natural yes/no questions.* In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, Volume 1 (Long and Short Papers), pages 2924–2936, Minneapolis, Minnesota. Association for Computational Linguistics.
+Elizabeth Clark, Tal August, Sofia Serrano, Nikita Hahuong, Suchin Gururangan, and Noah A. Smith. 2021. All that's 'human' is not gold: Evaluating human evaluation of generated text. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7282-7296, Online. Association for Computational Linguistics.
+
+Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it fix it for dialogue safety: Robustness from adversarial human attack. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4537-4546, Hong Kong, China. Association for Computational Linguistics.
+Yao Dou, Maxwell Forbes, Rik Koncel-Kedziorski, Noah A. Smith, and Yejin Choi. 2021. Scarecrow: A framework for scrutinizing machine text. arXiv.
+Jacob Eisenstein. 2022. Informativeness and invariance: Two perspectives on spurious correlations in natural language. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4326-4331, Seattle, United States. Association for Computational Linguistics.
+Matt Gardner, William Merrill, Jesse Dodge, Matthew Peters, Alexis Ross, Sameer Singh, and Noah A. Smith. 2021. Competency problems: On finding and removing artifacts in language data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1801-1813, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3356-3369, Online. Association for Computational Linguistics.
+Robert Geirhos, Jörn-Henrik Jacobsen, Richard Zemel Claudio Michaelis, Wieland Brendel, Matthias Bethge, and Felix A. Wichmann. 2020. Shortcut learning in deep neural networks.
+Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1161-1166, Hong Kong, China. Association for Computational Linguistics.
+Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that require simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650-655, Melbourne, Australia. Association for Computational Linguistics.
+Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of
+
+the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112, New Orleans, Louisiana. Association for Computational Linguistics.
+Xiaochuang Han and Yulia Tsvetkov. 2021. Influence tuning: Demoting spurious correlations via instance attribution and instance-driven updates. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 4398-4409, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations.
+Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos QA: Machine reading comprehension with contextual commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2391-2401, Hong Kong, China. Association for Computational Linguistics.
+Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021-2031, Copenhagen, Denmark. Association for Computational Linguistics.
+Divyansh Kaushik, Douwe Kiela, Zachary C. Lipton, and Wen-tau Yih. 2021. On the efficacy of adversarial data collection for question answering: Results from a large-scale randomized study. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6618-6633, Online. Association for Computational Linguistics.
+Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, and Adina Williams. 2021. Dynabench: Rethinking benchmarking in NLP. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4110-4124, Online. Association for Computational Linguistics.
+Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob
+
+Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452-466.
+Ronan Le Bras, Swabha Swayamdipta, Chandra Bhagavatula, Rowan Zellers, Matthew E. Peters, Ashish Sabharwa, and Yejin Choi. 2020. Adversarial filters of dataset biases. In 37th International Conference on Machine Learning.
+Kenton Lee, Kelvin Guu, Luheng He, Tim Dozat, and Hyung Won Chung. 2021. Neural data augmentation via example extrapolation. arXiv.
+Mina Lee, Percy Liang, and Qian Yang. 2022. Coauthor: Designing a human-ai collaborative writing dataset for exploring language model capabilities. In CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA.
+Hector J Levesque, Ernest Davis, and Leora Morgenstern. 2011. The winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. ArXiv.
+Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906-1919, Online. Association for Computational Linguistics.
+Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, Florence, Italy. Association for Computational Linguistics.
+Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2381-2391, Brussels, Belgium. Association for Computational Linguistics.
+Nikita Nangia, Saku Sugawara, Harsh Trivedi, Alex Warstadt, Clara Vania, and Samuel R. Bowman. 2021. What ingredients make for an effective crowdsourcing protocol for difficult NLU data collection tasks? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1221-1235, Online. Association for Computational Linguistics.
+
+Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4885-4901, Online. Association for Computational Linguistics.
+Ellie Pavlick and Tom Kwiatkowski. 2019. Inherent disagreements in human textual inferences. Transactions of the Association for Computational Linguistics, 7:677-694.
+Jason Phang, Angelica Chen, William Huang, and Samuel R. Bowman. 2021. Adversarily constructed evaluation sets are more challenging, but may not be fair. ArXiv.
+Jason Phang, Thibault Févry, and Samuel R. Bowman. 2018. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. ArXiv.
+Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180-191, New Orleans, Louisiana. Association for Computational Linguistics.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
+Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: $100,000+$ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
+Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. 2019. DoImagenet classifiers generalize toImagenet? In International Conference on Machine Learning, pages 5389-5400. PMLR.
+Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.
+Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902-4912, Online. Association for Computational Linguistics.
+
+Alexis Ross, Tongshuang Wu, Hao Peng, Matthew Peters, and Matt Gardner. 2022. Tailor: Generating and perturbing text with semantic controls. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3194-3213, Dublin, Ireland. Association for Computational Linguistics.
+Timo Schick and Hinrich Schütze. 2021. Generating datasets with pretrained language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6943-6951, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407-3412, Hong Kong, China. Association for Computational Linguistics.
+Neha Srikanth and Rachel Rudinger. 2022. Partial-input baselines show that NLI models can ignore context, but they don't. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4753-4763, Seattle, United States. Association for Computational Linguistics.
+Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, and Adrià Garriga-Alonso et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv.
+Saku Sugawara, Pontus Stenetorp, Kentaro Inui, and Akiko Aizawa. 2020. Assessing the benchmarking capacity of machine reading comprehension datasets. In AAAI Conference on Artificial Intelligence, pages 8918-8927.
+Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dynamics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9275-9293, Online. Association for Computational Linguistics.
+Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149-4158, Minneapolis, Minnesota. Association for Computational Linguistics.
+
+Alon Talmor, Ori Yoran, Ronan Le Bras, Chandra Bhagavatula, Yoav Goldberg, Yejin Choi, and Jonathan Berant. 2021. CommonsenseQA 2.0: Exposing the limits of AI through gamification. In 35th Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1).
+Serra Sinem Tekiroğlu, Yi-Ling Chung, and Marco Guerini. 2020. Generating counter narratives against online hate speech: Data and strategies. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1177-1190, Online. Association for Computational Linguistics.
+James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERIFICATION. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics.
+Masatoshi Tsuchiya. 2018. Performance impact caused by hidden bias of training data for recognizing textual entailment. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
+Lifu Tu, Garima Lalwani, Spandana Gella, and He He. 2020. An empirical study on robustness to spurious correlations using pre-trained language models. Transactions of the Association for Computational Linguistics, 8:621-633.
+Clara Vania, Ruijie Chen, and Samuel R. Bowman. 2020. Asking Crowdworkers to Write Entailment Examples: The Best of Bad options. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 672-686, Suzhou, China. Association for Computational Linguistics.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.
+Peter West, Chandra Bhagavatula, Jack Hessel, Jena D. Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, and Yejin Choi. 2021. Symbolic knowledge distillation: from general language models to commonsense models. arXiv.
+Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American
+
+Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+Yuxiang Wu, Matt Gardner, Pontus Stenetorp, and Pradeep Dasigi. 2022. Generating data to mitigate spurious correlations in natural language inference datasets. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2660-2676, Dublin, Ireland. Association for Computational Linguistics.
+Ann Yuan, Daphne Ippolito, Vitaly Nikolaev, Chris Callison-Burch, Andy Coenen, and Sebastian Gehrmann. 2021. Synthbio: A case study in humanai collaborative curation of text datasets. In Neural Information Processing Systems Track on Datasets and Benchmarks.
+Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791-4800, Florence, Italy. Association for Computational Linguistics.
+
+# A Estimated Max Variability
+
+In order to test the correlation between variability and estimated max variability on a dataset $\mathcal{D}$ , we would have to repeatedly hold out a single example $x$ , train a model on $\mathcal{D} \setminus \{x\}$ , and evaluate how well the estimated max variability from the model trained on $\mathcal{D} \setminus \{x\}$ correlates with the true variability from the model trained on $\mathcal{D}$ , which saw $x$ during training.
+
+Unfortunately, this would be a very expensive experiment. Instead, we split the MNLI train set into $99\%$ for training and $1\%$ (3928 examples) for evaluation. For each of the held-out examples, we calculate the variability under $\mathcal{M}_{\mathrm{MNLI}}$ and estimated max variability under $\mathcal{M}_{\mathrm{MNLI}99\%}$ . The correlation is shown in Figure 5, and has a Pearson's correlation coefficient of 0.527 with a $p$ -value of $7\times 10^{-281}$ .
+
+
+Figure 5: Correlation between variability of examples on a model that trains on the full MNLI dataset, and estimated max variability of the same examples when they are held out of the training set.
+
+# B Modeling Details
+
+All model training is implemented with the HuggingFace (Wolf et al., 2020) library and uses the original hyperparameters from the RoBERTa paper for finetuning on GLUE (Liu et al., 2019). We train the model for five epochs and evaluate the final model. We choose not to use an early stopping scheme in order to isolate the training data as the object of study and control for training length as a confounding factor. This is important since Tu et al. (2020) showed that counter-examples can be learned better with longer training.
+
+All training was performed on a single Nvidia Quadro RTX 6000 GPU. The duration of training varied depending on the size of the training data, from 3 hours for WANLI to 14 hours for MultiNLI + WANLI.
+
+
Hyperparameter
Assignment
Model
RoBERTa-large
Number of parameters
345M
Number of epochs
5
Learning rate
10-5
Batch size
32
Weight decay
0.1
Learning rate decay
linear
Warmup ratio
0.06
+
+# C WANLI Details and Discussion
+
+# C.1 Example GPT-3 Context
+
+We include some examples of full GPT-3 contexts in Table 12, 13, 14, 15.
+
+# C.2 GPT-3 Generation Hyperparameters
+
+We queried the GPT-3 Curie model available through the OpenAI $\mathsf{API}^8$ on the dates November 3 to November 5, 2021. In total, the generation cost $677.89. Hyperparameters for generation are shown in Table 7.
+
+Table 6: Training hyperparameters for RoBERTa-large.
+
+
Hyperparameter
Assignment
Top p
0.5
Temperature
1
Max tokens
120
Stop string
\n\n
Presence penalty
0.0
Frequency penalty
0.0
+
+Table 7: Hyperparameters for generation from GPT-3.
+
+# C.3 Dataset sizes at each stage
+
+In Stage 1, we collect the top $25\%$ most ambiguous examples from each label class in MultiNLI as our set of seed examples. This leads to 98,176 seed examples, where each seed example corresponds to a unique context for GPT-3. We generate $n = 5$ examples per seed example, and skip examples that are not properly formatted with a distinct premise and hypothesis following the context template (Figure 2). At the end of Stage 2, the size of $\mathcal{D}_{\mathrm{gen}}$ is 372,404. After applying the filtering heuristics described in §2.3 on $\mathcal{D}_{\mathrm{gen}}$ , the remaining dataset size is 287,241. Of the examples discarded, 79,278 generated examples had identical premise and hypothesis (sans punctuation and casing), and 4,732 examples had copied an in-context example. Next, we keep the half with the highest estimated max variability by sourcing an equal number of examples from each (intended) label class for a balanced dataset, resulting in $\mathcal{D}_{\mathrm{filtered}}$ with size 143,619. However, we do not actually recruit human review on all of $\mathcal{D}_{\mathrm{filtered}}$ , and instead annotate a total of 118,724 examples. Since some of these examples are discarded, the final WANLI dataset
+
+contains 107,885 examples. These correspond to 57,825 seed examples from MultiNLI.
+
+# C.4 Investigation of Western entities in WANLI versus MNLI
+
+While we investigated known artifacts of crowd-sourced datasets in §4, generated datasets may have distinct kinds of artifacts. Indeed, recent related work qualitatively observed an over-representation of Western entities in generated biographies (Yuan et al., 2021). To investigate whether this is also characteristic of WANLI, we use flair (Akbik et al., 2019) to perform named entity recognition on MultiNLI and WANLI. Due to the challenges and ethical risks of automatically determining the origin of names and organizations, we focus on the diversity of locations mentioned. We use geopy10 to map all locations (e.g., cities, provinces, landmarks, as well as countries) to a country.
+
+We find that $79\%$ of location mentions in WANLI are in Europe or North America, compared to $71\%$ in MultiNLI. In particular, the United States is massively over-represented, accounting for $46\%$ of mentions in WANLI and $26\%$ in MultiNLI. However, both datasets feature a diversity of location names: WANLI mentions locations in 210 countries across 22K location entities, and MultiNLI mentions locations in 227 countries across 163K location entities. We conclude that over-representation of Western entities is indeed a concern for generated datasets, and encourage future work to consider this.
+
+# D Human Review
+
+Screenshot of the instructions, guidelines, and annotation interface are shown in Tables 6, 7, and 8. The guidelines take inspiration from the design of the NLI Diagnostics dataset (Wang et al., 2018). To collect a pool of qualified workers, we designed a qualification task with examples testing each of these categories. NLI is a challenging task, and many generated examples are especially challenging by design. Therefore, instructing annotators in how to think about the task and resolve common issues is key to collecting high-quality, label-consistent data.
+
+# D.1 The Annotators
+
+Annotators were required to have a HIT approval rate of $98\%$ , a total of 10,000 approved HITs, and
+
+be located in the United States.
+
+300 Turkers took our qualification test, of which 69 passed. Turkers who were later found to produce extremely careless annotations were removed from the qualification list (and oftentimes, their annotations were discarded, though they were still paid for their work). The number of workers who contributed to the final dataset is 62.
+
+Throughout the data collection process, the authors would review annotations and write individualized emails to Turkers with feedback, as well as group emails to clarify common challenging cases of NLI (such as examples involving questions). This follows the recommended crowdsourcing protocol from Nangia et al. (2021).
+
+# D.2 Compensation
+
+In designing the task, we aimed for a pay rate of at least $15 per hour. Workers were paid$ 0.12 for each example that they annotate. At the end of data collection, we aggregate the earning and time spent from each crowdworker, and find that the median hourly rate was $22.72, with 85% of workers being paid over the $15/hour target.
+
+# D.3 Revision Analysis
+
+We provide examples of revisions in Table 9. We find that revisions are generally targeted yet effective. The majority of revisions change the length only slightly, with $74\%$ of both premise revisions and hypothesis revisions changing the word count between $-1$ and $+2$ words. A very large proportion, $11.6\%$ of premise revisions and $20.6\%$ of hypothesis revisions, changed the set of pronouns present in the text, often to clarify coreference.
+
+We instructed annotators to revise examples only when it would make the example more "interesting" in some sense, or more clear without removing what's interesting. Nonetheless, we still observed a large number of revisions that greatly simplified the example, oftentimes re-introducing the same artifacts that have been documented in prior work. Therefore, we ultimately chose to include revisions only when both annotators revised the example, indicating that the revision was necessary to improve the quality of the example.
+
+# D.4 Disagreement Analysis
+
+In order to investigate the utility of collecting a third annotation, we randomly sampled 80 examples where the two annotators disagreed on the label (and neither revised nor discarded), and two of
+
+the authors separately annotated each one. Shockingly, the two authors agreed on the label only $49\%$ of the time. Furthermore, in $12\%$ of cases, all three labels were present among the four annotations. This suggests that disagreement is often due to true ambiguity rather than careless mislabeling, and a third annotation would be unlikely to have high payoff in terms of "correcting" the label. As a result, we choose not to collect a third annotation in this work. Instead, we believe that the doubly-annotated examples in WANLI have flagged many interesting cases of ambiguity in NLI, and we encourage future work to design richer annotation frameworks to uncover the source(s) of ambiguity.
+
+We choose to keep examples with disagreement in the WANLI dataset because we believe that finetuning with one of multiple reasonable labels still provides valuable training signal.
+
+
MNLI Dev. Set
Matched
Mismatched
Train Set
MNLI
90.30
90.10
MNLI◇WANLI
89.63
88.95
MNLI+WANLI
89.90
89.32
WANLI
80.17
80.46
+
+Table 8: Results on MultiNLI's development set.
+
+# E Additional Experiments
+
+# E.1 Additional baselines
+
+We additionally perform comparisons with several subsets of MultiNLI which are the same size as WANLI: MultiNLI filtered with the AFLite algorithm (MultiNLI with AFLite; Le Bras et al., 2020), the most ambiguous examples of MultiNLI (MultiNLI ambiguous; Swayamdipta et al., 2020), and a random subset of MultiNLI (MultiNLI downsampled). Results in Table 10 show that a WANLitrained model outperforms these baselines on every test set.
+
+# E.2 Evaluation on MultiNLI
+
+We report the results on MultiNLI's development set in Table 8. We find that mixing WANLI into the MultiNLI training data (either through swapping or augmentation) maintains in-domain accuracy within $\sim 1\%$ . Training on WANLI alone drops performance on MultiNLI's development set by $\sim 10\%$ ; however, the higher performance on other out-of-domain test sets suggests that evaluation
+
+through MultiNLI may not be a definitive signal of model ability.
+
+# E.3 Finetuning T5
+
+We demonstrate that the robustness improvements from training on WANLI generalize to another model architecture, T5-base (Raffel et al., 2020), which was never used in the data curation pipeline. Shown in Table 11, training T5-base on WANLI also outperforms training on MultiNLI on every test set, including by $4\%$ of NLI Diagnostics, $10\%$ on HANS, and $8\%$ on Adversarial NLI (similar margins compared to finetuning RoBERTa-large).
+
+# F Data Map of WANLI
+
+In Figure 9, we show a data map of MultiNLI relative to RoBERTa-large trained on MNLI, and of WANLI relative to RoBERTa-large trained on WANLI.
+
+You will create high-quality examples that illustrate the relationship between two short pieces of text. Each example consists of a premise, a hypothesis, and the relationship between them. You will be given a premise and hypothesis, and your task is to 1) optionally revise them to improve the quality of the example, then 2) determine the relationship between them. The types of relationships are as follows.
+
+# Entailment
+
+Given the premise, the hypothesis is definitely correct. The premise fully implies the hypothesis. For example, the premise Pebbles the cat sat on the mat entails the hypothesis Pebbles sat.
+
+# Contradiction
+
+Given the premise, the hypothesis is definitely incorrect. The premise and hypothesis cannot both be true. For example, the premise Pebbles the cat sat on the mat contradicts the hypothesis Pebbles is not on the mat.
+
+# Neutral
+
+Given the premise, the hypothesis may or may not be correct. The hypothesis is plausible but not entailed by the hypothesis. For example, the premise Pebbles the cat sat on the mat is neutral to the hypothesis Pebbles purred.
+
+# Discard
+
+This example is low-quality or offensive in nature, and would require a great deal of revision in order to fix. In this case, there is no need to revise any text.
+
+Before assigning a label, you may optionally revise the example in order to improve its quality. In these cases, you should preserve the intended meaning of the example as much as possible by making minimal revisions. Do not insert words that drastically change the meaning of the sentence, or delete entire spans of text unless they affect the fluency of the example. The goal is to ensure that the relationship is well-defined but not trivially easy; imagine you are writing challenging but unambiguous examples that could potentially be used in a classroom setting to teach or test understanding of the task.
+
+Figure 6: Instructions provided to crowdworkers on Amazon Mechanical Turk.
+
+Here are some guidelines to help you with determining the relationship between the premise and hypothesis. Remember to consult these when you are unsure.
+
+Presuppositions: X knows that Y, X recognizes that Y, X shows that Y, or X reveals that Y all entail Y, since Y is a presupposition in the premise. However, X thinks that Y or X said that Y is neutral with respect to Y, since X can be wrong. For example, I said I would be on time does not imply I was on time.
+However, you can assume that X said that Y is an honest reflection of what X thinks. For example, She said that all apples are red entails She believes that all apples are red, and is neutral with respect to All apples are red.
+Conditionals: If X, then Y is neutral with respect to both X and Y. For example, if the water level is low, then the engine will not start does not imply The water level is low or The engine will not start, since the premise does not say anything about whether the water level is actually high or low!
+- Background knowledge: A minimal amount of background knowledge is okay. For example, I visited Mt. Fuji entails I visited Japan, and I am watching an NFL game contradicts I am watching basketball. There may be some ambiguous cases here, and you will have to use your best judgment.
+Common sense: We should use a common sense interpretation of the text, when it strongly dominates a conflicting literal interpretation. For example, we can take When I was young, I was obsessed with the supernatural to entail I am not obsessed with the supernatural anymore, because it is the only commonsense way of reading the premise.
+- Coreference: We can assume that expressions in the premise and hypothesis are referring to the same entity when there is a reasonable amount of corroborating information. For example, The music building has 55 rooms entails The building has 55 rooms and contradicts The building has only one room, by assuming "the building" in the hypothesis is referring to "the music building" in the premise. However, The couple is talking to each other is neutral with respect to The redheads are talking to each other, even though the couple and redheads might be the same two people, because there is not enough information to suggest this.
+- Questions: As a rule of thumb, if the premise or hypothesis is a question (or both), consider whether saying the premise and hypothesis in sequence would add any information (entailment) or be contradictory (contradiction). For example, saying "Jane is coming at 6. When is Jane coming?" is nonsensical because the question does not need to be asked (it is already entailed). On the other hand, saying "Jane is coming at 6. Why isn't Jane coming?" is clearly contradictory. More precisely:
+
+If the premise is a question and the hypothesis is a statement, we take the premise to entail its presuppositions (i.e., what is assumed in asking the question). For example, When is Jane coming? presupposes and therefore entails Jane is coming, and also contradicts Jane is not coming.
+If the premise is a statement and the hypothesis is a question, it is an entailment if the premise answers the hypothesis, and a contradiction if the premise contradicts the presupposition of the hypothesis. For example, Jane is coming at 6 entails When is Jane coming?, and contradicts Why isn't Jane coming?.
+- When the premise and hypothesis are both questions, it is an entailment if an answer to the premise also answers the hypothesis, and a contradiction if they make contradictory presuppositions. For example, When is Jane coming? entails (but is not entailed by) Will Jane come before 6?, and contradicts Why isn't Jane coming? (since the premise assumes Jane is coming, and the hypothesis assumes she isn't).
+
+- Point of view: The premise and hypothesis should be read from the same point of view. When there is a shift in perspective that makes it seem like the premise and hypothesis are about different people, it is preferable to revise this when possible to keep the perspective consistent. For example, given the premise I don't know if I'll ever be able to do that and hypothesis You can do it, it would be preferable to revise the hypothesis to become I can do it. This way, the premise and hypothesis are both about I.
+
+Figure 7: Guidelines provided to crowdworkers in the human review stage.
+
+1) Premise: He claimed that he had been pressured into giving a false confession.
+
+Hypothesis: He had been pressured into giving a false confession.
+
+(Optional) Revise the example below.
+
+Premise:
+
+He claimed that he had been pressured into giving a false confession.
+
+Hypothesis:
+
+He had been pressured into giving a false confession.
+
+Given the premise, the hypothesis is...
+
+Definitely correct.
+
+Entailment
+
+Maybe correct, maybe not
+
+Neutral
+
+Definitely incorrect
+
+Discard
+
+Contradiction
+
+
+Figure 8: The interface on Amazon Mechanical Turk used for collecting human annotations. Annotators are given free text boxes that are pre-populated with the original premise and hypothesis, to ease the work of revision. Then, they either select an entailment class or discard the example.
+
+
+Figure 9: Left: Data map for MultiNLI train set, based on a RoBERTa-large classifier trained on MultiNLI. Right: Data map for WANLI train set, based on a RoBERTa-large classifier trained on WANLI. A comparison of the distribution in variability (which determines example ambiguity) is remarkable – we see that MultiNLI is overwhelmingly dominated by easy-to-learn examples with variability close to 0. In contrast, the distribution in variability is much more spread out in WANLI, suggesting that the dataset contains more valuable examples overall.
+
+
Example
Label
Purpose of Revision
P: The power plant It is the only source of continuous electric power for the city.
Entailment
Coreference resolution
H: The power plant is very important for the city.
P: It was a well-known fact that it was a well-known fact that the solution was well-known.
Entailment
Redundancy
H: The solution was well-known.
P: This will be the first time the king has met the queen in person.
Contradiction
Clarity
H: The king has met the queen in person before.
P: She walked with a light step, as if she were floating on air.
Contradiction
Coherence
H: She was floating on air, as if she were walking on air.
P: There is a slight possibility that, if the same temperature data are used, the temperature of the Earth's surface in 1998 will be lower than the temperature of the Earth's surface in 1998 now.
Neutral
Self-contradiction
H: The Earth's surface in 1998 was lower than the Earth's surface in 1998 now.
P: She had to go to the library to find out what the name of the street was.
Contradiction
Ambiguous temporal reference
H: She already knew the name of the street.
P: A number of theories have been proposed to explain the decline of violence in modern society.
Entailment
Consistent tense
H: Violence will decline has declined in modern society.
+
+Table 9: Some examples of revisions that were done by annotators on examples generated by GPT-3.
+
+
Data size
Test Set
Diagnostics
HANS*
QNLI*
WNLI*
NQ-NLI*
ANLI
FEVER-NLI
BIG-Bench*
WANLI
1104
30K
5266
706
4855
3200
20K
3324
5000
Training Set
MNLI
393K
68.47
78.08
52.69
56.09
62.34
32.37
68.29
64.68
64.62
MNLI (AFLite)
103K
60.50
73.73
53.91
56.37
64.28
33.12
68.04
70.75
62.19
MNLI (ambiguous)
103K
65.03
74.93
54.42
62.32
62.14
32.68
67.42
68.77
61.15
MNLI (downsampled)
103K
64.67
71.15
59.15
52.97
62.14
28.99
69.08
56.76
62.84
WANLI
103K
72.55
89.40
76.81
65.15
64.03
41.12
70.63
75.40
75.49
+
+Table 10: Additional baselines that finetune RoBERTa-large on different subsets of MultiNLI, filtered via existing debiasing methods.
+
+
Test Set
Diagnostics
HANS*
QNLI*
WNLI*
NQ-NLI*
ANLI
FEVER-NLI
BIG-Bench*
WANLI
1104
30K
5266
706
4855
3200
20K
3324
5000
Training Set
MNLI
393K
60.87
76.40
65.49
50.56
61.33
30.56
66.94
58.87
61.72
MNLI + Tailor
485K
61.14
74.34
63.33
50.70
62.05
31.06
67.15
68.95
61.28
MNLI + Z-Aug
754K
60.05
76.73
63.46
50.14
60.53
32.50
67.10
54.81
61.38
MNLI ▽ ANLI
393K
61.23
73.55
69.80
52.26
61.64
49.91
70.82
68.80
61.66
WANLI
103K
64.58
86.25
74.66
51.13
63.66
38.22
68.27
76.17
72.56
+
+Table 11: Empirical comparison of different training datasets for T5-base. For brevity, we include MNLI, WANLI, and the strongest baselines from the results based on RoBERTa-large from Table 3.
+
+Write a pair of sentences that have the same relationship as the previous examples. Examples:
+
+1. In six states, the federal investment represents almost the entire contribution for providing civil legal services to low-income individuals.
+Implication: In 44 states, the federal investment does not represent the entire contribution for providing civil legal services for people of low income levels.
+2. But if it's at all possible, plan your visit for the spring, autumn, or even the winter, when the big sightseeing destinations are far less crowded.
+Implication: This destination is most crowded in the summer.
+3. 5 percent of the routes operating at a loss.
+Implication: 95 percent of routes are operating at either profit or break-even.
+4. 30 About 10 percent of households did not
+
+Implication: Roughly ninety percent of households did this thing.
+5. 5 percent probability that each part will be defect free.
+
+Implication: Each part has a 95 percent chance of having a defect.
+
+6.
+
+Table 12: Context corresponding to row 1 in Table 1, which contains Entailment examples from MultiNLI found via nearest neighbors in [CLS] token embedding space. All examples require reasoning about set complements, including from the universe of 100 percent, the 50 U.S. states, as well as the four seasons.
+
+Write a pair of sentences that have the same relationship as the previous examples. Examples:
+
+1. Small holdings abound, and traditional houses sit low on the treeless hillsides.
+Possibility: The hills were the only place suitable to build traditional houses.
+2. The inner courtyard has a lovely green and blue mosaic of Neptune with his wife Amphitrite.
+Possibility: The only colors used in the mosaic of Neptune and Amphitrite are green and blue.
+3. Nathan Road, Central, and the hotel malls are places to look.
+
+Possibility: The only places to look are Nathan Road, Central and hotel malls.
+
+4. Make your way westward to the Pont Saint-Martin for a first view of the city's most enchanting quarter, the old tannery district known as Petitie France.
+
+Possibility: The only place to the west of Pont Saint-Martin is the old tannery district.
+
+5. The artisans, tradespeople, and providers of entertainment (reputable and not so reputable) lived downtown on the reclaimed marshlands north and east, in the area still known as Shitamachi.
+
+Possibility: The only place where artisans, tradespeople and entertainers could live was in the marshlands to the north and east.
+
+6.
+
+Table 13: Context corresponding to row 2 in Table 1, which contains Neutral examples where the hypothesis introduces an exclusivity that is not implied by the premise.
+
+Write a pair of sentences that have the same relationship as the previous examples. Examples:
+
+1. Dun Laoghaire is the major port on the south coast.
+Contradiction: Dun Laoghaire is the major port on the north coast.
+2. Leave the city by its eastern Nikanor Gate for a five-minute walk to Hof Argaman (Purple Beach), one of Israel's finest beaches.
+
+Contradiction: Leave the city by its western Nikanor Gate for a fifty five minute walk to Hof Argaman.
+
+3. Southwest of the Invalides is the Ecole Militaire, where officers have trained since the middle of the 18th century.
+Contradiction: North of the Invalides is the Ecole Militaire, where officers have slept since the early 16th century.
+4. Across the courtyard on the right-hand side is the chateau's most distinctive feature, the splendid Francois I wing.
+
+Contradiction: The Francois I wing can be seen across the courtyard on the left-hand side.
+
+5. To the south, in the Sea of Marmara, lie the woods and beaches of the Princes' Islands.
+
+Contradiction: In the north is the Sea of Marmara where there are mountains to climb.
+
+6.
+
+Table 14: Context corresponding to row 3 in Table 1, which contains Contradiction examples that flip cardinal directions between the premise and hypothesis.
+
+Write a pair of sentences that have the same relationship as the previous examples. Examples:
+
+1. Vendors and hair braiders are sure to approach you.
+Implication: You're likely to be solicited by vendors or hair braiders.
+2. The Carre d'Art, an ultramodern building opposite the Maison Carre, exhibits modern art.
+
+Implication: Pieces of modern art can be found in the Carre d'Art, a structure which stands across from the Maison Carre.
+3. But they also take pains not to dismiss the trauma the Holocaust visited and continues to visit upon Jews.
+
+Implication: The Holocaust visited trauma upon Jews, and they are careful not to dismiss this.
+4. One fortunate result of this community's influence has been the proliferation of good restaurants and interesting bars from which to choose.
+
+Implication: The influence of this community has led to an increase in the number of intriguing bars and good dining establishments.
+
+5. Salinger wrote similar letters to other young female writers.
+
+Implication: Other young female writers received similar letters from Salinger as well.
+
+6.
+
+Table 15: Context corresponding to row 7 in Table 1, which contains Entailment examples that substitute a verb in the premise with one in the hypothesis that has a different subcategorization frame. Note that the third in-context example does not share quite the same pattern, but GPT-3 is still able to replicate the pattern present in other examples.
\ No newline at end of file
diff --git a/wanliworkerandaicollaborationfornaturallanguageinferencedatasetcreation/images.zip b/wanliworkerandaicollaborationfornaturallanguageinferencedatasetcreation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..772ba2019e6e0a1ae86980727ee7c9747177c7d1
--- /dev/null
+++ b/wanliworkerandaicollaborationfornaturallanguageinferencedatasetcreation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a8da0540226a2dc2882485da298eeaface5d28c5346994094c5f1f9131fd5ac
+size 982885
diff --git a/wanliworkerandaicollaborationfornaturallanguageinferencedatasetcreation/layout.json b/wanliworkerandaicollaborationfornaturallanguageinferencedatasetcreation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..55e6484a8fb7a733c11328d14580034f5fb88e8e
--- /dev/null
+++ b/wanliworkerandaicollaborationfornaturallanguageinferencedatasetcreation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b87f891360090f8cbc53d37b35e499d18dcc313bc4b266d21a52eaa3929f113
+size 698657
diff --git a/weaklysupervisedheadlinedependencyparsing/90ef52be-25fc-45fe-a199-b85728df30db_content_list.json b/weaklysupervisedheadlinedependencyparsing/90ef52be-25fc-45fe-a199-b85728df30db_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..01743047f8bf7d2fc0fc1b2e3cf1a6daaade5508
--- /dev/null
+++ b/weaklysupervisedheadlinedependencyparsing/90ef52be-25fc-45fe-a199-b85728df30db_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b338969a64624a45cfb5b23a128213fc90f932c6930f4ca41d3f30d5330a50fc
+size 99325
diff --git a/weaklysupervisedheadlinedependencyparsing/90ef52be-25fc-45fe-a199-b85728df30db_model.json b/weaklysupervisedheadlinedependencyparsing/90ef52be-25fc-45fe-a199-b85728df30db_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..2afdb3efe16f80e8625542dd2d5b1538dbf4955b
--- /dev/null
+++ b/weaklysupervisedheadlinedependencyparsing/90ef52be-25fc-45fe-a199-b85728df30db_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b179aac538a08f054f106b7f9d1bfbd686c41510f2de057b5392b4c4608bd764
+size 122033
diff --git a/weaklysupervisedheadlinedependencyparsing/90ef52be-25fc-45fe-a199-b85728df30db_origin.pdf b/weaklysupervisedheadlinedependencyparsing/90ef52be-25fc-45fe-a199-b85728df30db_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..133a6d88080b0d1e134f563ac23deb915a9a377b
--- /dev/null
+++ b/weaklysupervisedheadlinedependencyparsing/90ef52be-25fc-45fe-a199-b85728df30db_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fb325f663413c30fc23f22c64425df1eb7b4e3fba86d45d64f7f5b1d4044eafe
+size 428977
diff --git a/weaklysupervisedheadlinedependencyparsing/full.md b/weaklysupervisedheadlinedependencyparsing/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..57deb5543a1338e8351aaba7cb423d78271afb3c
--- /dev/null
+++ b/weaklysupervisedheadlinedependencyparsing/full.md
@@ -0,0 +1,458 @@
+# Weakly Supervised Headline Dependency Parsing
+
+Adrian Benton*
+
+Bloomberg
+
+Tianze Shi*
+
+Cornell University
+
+{adbenton,tianze}@google.com
+
+OzanIrsoy
+
+Bloomberg
+
+{oirsoy,imalioutov}@bloomberg.net
+
+# Abstract
+
+English news headlines form a register with unique syntactic properties that have been documented in linguistics literature since the 1930s. However, headlines have received surprisingly little attention from the NLP syntactic parsing community. We aim to bridge this gap by providing the first news headline corpus of Universal Dependencies annotated syntactic dependency trees, which enables us to evaluate existing state-of-the-art dependency parsers on news headlines. To improve English news headline parsing accuracies, we develop a projection method to bootstrap silver training data from unlabeled news headlinearticle lead sentence pairs. Models trained on silver headline parses demonstrate significant improvements in performance over models trained solely on gold-annotated long-form texts. Ultimately, we find that, although projected silver training data improves parser performance across different news outlets, the improvement is moderated by constructions idiosyncratic to outlet.
+
+# 1 Introduction
+
+English news headlines are written to convey the most salient piece of information in an article in as little space as possible. This makes them an attractive target for information extraction systems, and other NLP applications that operate on the most salient information in a news article. Headlines have been the target for many NLP tasks including semantic clustering (Wities et al., 2017; Laban et al., 2021), multi-document summarization (Bambrick et al., 2020), and sentiment/stance classification (Strapparava and Mihalcea, 2007; Ferreira and Vlachos, 2016).
+
+However, while headlines often present the most salient information from an article, brevity introduces its own obstacles. English news headlines are
+
+
+Figure 1: Example English news headlines with parses and POS tags generated by Stanza (Qi et al., 2020). Mispredicted relations and labels in red. The text shown is after truecasing, before being fed to Stanza.
+
+written in a unique register known as headlines. The structure of this register is determined primarily by typographical constraints along with the various functions that headlines serve, including summarization and eliciting reader interest (Mårdh, 1980). Headlinese syntax deviates from long-form news body through such features as a preference for atypical word senses and terms, frequent omission of determiners and auxiliaries, the acceptability of nominal and adverbial phrases, as well as multiple independent phrases in a single headline (decks). Figure 1 presents a sample of English headlines exhibiting some of these properties, and parse errors made by a strong English dependency parser, Stanza (Qi et al., 2020).
+
+While the syntax of English headlines deviates significantly from article body text, there has been little work in evaluating and developing classical NLP pipeline models for this register. News headline-related NLP tasks, such as headline gener
+
+ation (Rush et al., 2015; Takase et al., 2016; Takase and Okazaki, 2019) and classification (Kozareva et al., 2007; Oberlander et al., 2020), do not rely on syntactic annotations like POS tags, syntactic or semantic parses. This is by design, as the oftentimes poor performance of existing syntactic parsers on headlines has impeded their application to tasks such as sentence compression (Filippova and Altun, 2013).
+
+In this work, we take a step towards improving headline dependency parsers by releasing the first English news headline treebank annotated according to universal dependency (UD) typology. We present the first quantitative evaluation of existing dependency parsers on English headlines, and we propose a method for generating weak supervision for headline dependency parsers inspired by cross-lingual annotation projection (Yarowsky et al., 2001).
+
+# Contributions
+
+1. We release the first English headline treebank of 1,055 manually annotated and adjudicated universal dependency (UD) syntactic dependency trees, the English Headline Treebank (EHT), to encourage research in improving NLP pipelines for English headlines.
+2. We establish baselines on the EHT evaluation set with existing state-of-the-art parsers. Our experiments confirm prior observations that existing syntactic parsers perform poorly on headlines (Filippova and Altun, 2013).
+3. We propose a tree projection method to generate weak supervision for training more accurate headline parsers, and demonstrate that training on silver-annotated trees can significantly reduce parsing errors. Most strikingly, we show that after finetuning on weak supervision, we are able to reduce root prediction relative error rate by $92.8\%$ within domain, and by $21.3\%$ for an out-of-domain wire. We further show that these gains translate to downstream improvements in the quality of tuples extracted by an open domain information extraction system.
+
+This paper is structured as follows: Section 2 presents prior work on linguistic analyses of head-linese and their treatment in the NLP community;
+
+Section 3 describes the EHT annotation process and descriptive statistics; Section 4 describes our tree projection algorithm for generating silver headline dependency trees; Section 5 describes our experiment set up; Sections 6 and 7 respectively present intrinsic parser performance on the EHT and extrinsic performance on an open information extraction (OpenIE) task; and Section 8 presents related work on headline and low-resource syntactic processing.
+
+# 2 Background
+
+Linguistic Analysis of Headlines English news headlines are known for their compressed telegraphic style, constituting a unique register known as headlined (Garst and Bernstein, 1933; Straumann, 1935). Through a manual corpus analysis of over 1,800 headlines from two British newspapers, Mårdh (1980) finds that headlined shares some syntactic features with "ordinary" English language, but there also exist a number of features peculiar to headlined. These include the validity of nominal and adverbial headlines, lack of determiners, omission of auxiliaries and copulas, and use of the present tense to denote urgency of the event. Nevertheless, these syntactic hallmarks of headlined vary across country of publication (Ehineni, 2014), news outlet (Mårdh, 1980; Siegal and Connolly, 1999), and time period (Vanderbergen, 1981; Schneider, 2000; Afful, 2014), making development of a strong, general English headline parser particularly challenging.
+
+Headline NLP In spite of the clear evidence that headlinedes differs significantly from standard written English syntax, there has been scant work on building traditional NLP pipeline components for headlines. This has limited the linguistic features that NLP researchers can extract from headlines, and subsequently limited the analyses that can be performed on them. For instance, Filippova and Altun (2013) reports that poor parsing accuracy for headlines impedes their use of headline parses in alignment with a body sentence.
+
+This is not to say that headlines have been ignored as an object of study by the community. Tasks such as headline generation, compression, and news summarization are all well-studied problems, partly because they circumvent the need for annotation of linguistic structure (Filippova and Altun, 2013; Rush et al., 2015; Tan et al., 2017; Takase and Okazaki, 2019; Ao et al., 2021). Other studied tasks include emotion identification/semi-
+
+
Dataset
Headlines
Tokens
EHT
GSC
600
5,017
NYT
455
3,986
Silver
Projection
48,633
395,237
+
+Table 1: Statistics of our gold (EHT) data and silver (projected GSC) data.
+
+ment analysis (Kozareva et al., 2007; Oberlander et al., 2020), stance identification (Ferreira and Vlachos, 2016), framing or bias detection (Gangula et al., 2019; Liu et al., 2019), and headline clustering (Laban et al., 2021).
+
+# 3 The English Headline Treebank
+
+Here we describe the compilation and characteristics of our evaluation set, the English Headline Treebank (EHT).
+
+# 3.1 Data Sources and Pre-processing
+
+We sample English news headlines from two sources to build the EHT: the Google sentence compression corpus (GSC; Filippova and Altun, 2013), and the New York Times Annotated Corpus $^{2}$ (NYT). We sample from the GSC as it contains hundreds of thousands of news headlines across tens of thousands of domains, and as is described in Section 4, we leverage it as a rich source of silver-annotated training data. We sample 600 headlines from GSC in total.
+
+In addition, we sample from NYT to form an out-of-domain evaluation set, which was not subjected to the same preprocessing decisions used to build the GSC. We sample 500 headlines uniformly at random from the NYT, under the constraint that they are 4 to 12 tokens long (up to the $95^{\text{th}}$ percentile). We impose this length constraint to avoid trivial parses, as well as noise in the data.3 Of these 500 headlines, we removed 45 headlines that were templated death notices and obituaries.4
+
+All headlines are tokenized using the Stanford Penn Treebank tokenization algorithm with default settings. We use Stanza (Qi et al., 2020) to bootstrap our expert annotators with predicted UD-style part-of-speech tags and parse trees. To reduce the discrepancy between training data of Stanza and
+
+our news headline data, we truecased headlines using an n-gram truecaser model trained on English news body text, and inserted a period at the end of the headline before running inference with Stanza. Table 1 shows the number of headlines and tokens in our headline datasets.
+
+# 3.2 Annotation and Adjudication
+
+Our annotation follows the UD guidelines whenever possible. In Appendix A, we provide an addendum for consistent treatment of syntactic constructions that frequently occur in headlines, but are underspecified in the original UD guidelines.
+
+In the first stage of annotation, each headline is independently annotated for POS tag sequence and dependency parse by two expert annotators $^{5}$ using the UD Annotatrix interface (Tyers et al., 2017). Any discrepancies between annotators are resolved in the second, adjudication, stage. Two adjudicators independently examine the annotations and pick the one that conforms to UD guidelines, or construct their own parse if they disagree with both candidate parses from the first stage. $^{6}$ The third and last stage is group discussion, where all four annotators and adjudicators discuss and resolve any remaining disagreements.
+
+First-stage annotation takes roughly one minute per instance per annotator, and the combination of second and third-stage adjudication takes another minute per instance per adjudicator. On a sample of 50 headlines held out to compute inter-annotator agreement, we find that $56\%$ of headlines were parsed or POS-tagged incorrectly by Stanza. $48\%$ contain some attachment error and $50\%$ have an incorrect relation label. Annotators achieved a $72\%$ headline-level agreement rate on this sample in the first stage, with individual POS tag agreement of $98.8\%$ and labeled dependency attachment agreement rate of $94.8\%$ . Many annotator discrepancies arose from parsing the internal structure of named entities, which were resolved during the subsequent adjudication phases. These along with other common issues are listed in Appendix A.
+
+# 3.3 Characterizing Headline Data
+
+Figure 2 presents the distributions of relation labels in the EHT compared to the UD 2.8 English
+
+
+Figure 2: Distributions of dependency relation labels across the EHT, compared with UD 2.8 EWT and GUM corpora. We exclude punct and root relations when calculating the distributions, and omit low-frequency labels (below $2\%$ across all datasets) in this chart.
+
+Web Treebank (EWT; Silveira et al., 2014) and the Georgetown University Multilayer corpus (GUM; Zeldes, 2017). The EWT includes data from web media (weblogs, newsgroups, emails, reviews, and Yahoo! answers), and the texts in GUM corpus are drawn from a range of domains including news, fiction, academic writings, as well as dialogue such as transcribed interviews.
+
+Compared with texts from other domains, English news headlines use fewer determiners, auxiliaries, and copulas, which is consistent with prior linguistic characterization of headlines (Mårdh, 1980). News headlines have higher proportions of compound and flat relations, due to frequent mentions of named entities, and we also observe larger percentages of nsubj and obj relations, as a consequence of headline brevity and focus on core argument structure.
+
+# 4 Generating Silver Data by Projecting from Lead Sentences
+
+While EHT is suitable for evaluating parser performance on English news headlines, 1,055 headlines is much less data than is typically used for training a syntactic parser. For comparison, the EWT contains more than 15 times as many tokens as the EHT.
+
+On the other hand, it may be data-inefficient to manually annotate a training set of tens of thousands of headlines, since English news headlines constitute a different register of written English, not a different language. Although certain constructions are idiosyncratic to headlines, one can often expand a headline to a well-formed sentence in the news body register, as words are frequently omitted to produce a headline (Straumann, 1935; Mårdh, 1980). This section describes an algorithm
+
+Algorithm 1 Algorithm for projecting a parse tree from a news article lead sentence $\mathsf{s}$ to a headline $\mathsf{h}$ , which is a subsequence of $\mathsf{s}$ .
+
+# Definitions
+
+$\mathsf{N}(\mathsf{s})$ : set of nodes in tree s, each corresponding to a word in the sentence or the dummy root $\mathsf{N}(\mathsf{h})\subseteq \mathsf{N}(\mathsf{s})$ : set of nodes in tree h
+
+# function EXTRACTSUBTREE(s,h)
+
+$\mathsf{N}^{\prime}\leftarrow \emptyset$ $\triangleright$ Subset of nodes to be returned
+
+# for all $n \in \mathsf{N}(h)$ do
+
+$\mathsf{P}\gets$ nodes on path from root of $\mathfrak{s}$ to n
+
+$$
+N ^ {\prime} \leftarrow N ^ {\prime} \cup P
+$$
+
+# end for
+
+$\mathsf{R}^{\prime}\gets$ relations in tree s.s.t. both the head and tail of the relation are in $\mathsf{N}'$
+
+# return N', R'
+
+# end function
+
+# function PROJECT(s, h)
+
+N', R' ← EXTRACTSUBTREE(s, h)
+
+while $|\mathbf{N}^{\prime}| > |\mathbf{N}(\mathsf{h})|$ do
+
+n←closest to root s.t. n∈N',n∉N(h)
+
+$\mathsf{c}\gets$ leftmost child of n s.t. $\mathsf{c}\in \mathsf{N}^{\prime}$
+
+$\mathsf{p}\gets$ parent of n according to $\mathsf{R}^{\prime}$
+
+Update $\mathsf{R}^{\prime}$ : attach all siblings of c to c
+
+and attach c to p
+
+$\mathsf{N}^{\prime}\gets \mathsf{N}^{\prime}\setminus \{\mathsf{n}\}$
+
+# end while
+
+return A tree formed by N', R'
+
+# end function
+
+for automatically assigning dependency trees to unannotated headlines to create silver training data for training a headline dependency parser.
+
+Our approach is based on the key observation
+
+that headlines convey similar semantic content as the bodies and they typically share many local substructures. Lead sentences, often the first sentence in an article, serve a similar function as news headlines in grabbing reader attention and stating essential facts about news events; lead sentences are sometimes direct expansions of the headlines. Consequently, the pairs of lead sentence and headline have been used to automatically construct examples for sentence compression (Filippova and Altun, 2013).
+
+Algorithm 1 projects the dependency tree annotation from a news article lead sentence to a headline, where the headline is a (possibly non-contiguous) subsequence of the lead sentence. The main idea of this algorithm is to prune the lead sentence's dependency tree until it only contains those tokens in the headline. When a token from the lead sentence is missing in the headline, but it has children appearing in both strings, we promote its first child to preserve connectivity. For example, the following sentence snippet contains an extra "promised" than the corresponding headline:
+
+
+
+Researchers ... promised to release data ...
+
+and our algorithm promotes "release" to be the new root of the tree for the headline:
+
+
+
+Researchers to release data
+
+We use Algorithm 1 to construct a silver-annotated corpus of headline dependency trees from headline-lead sentence pairs in the GSC corpus. Our silver corpus contains 48,633 headlines that satisfied the subsequence constraint, the same magnitude as the EWT and significantly larger than our manually-annotated EHT. Of these, 8,633 were held out as a development set, with the remaining 40,000 used for training.
+
+# 5 Training Headline Parsers
+
+We vary two main dimensions during parser training: training data selection and data combination methods. We consider the EWT, the projected GSC headline data described in Section 3, and a combination of both as different training sets. We experiment with three different ways of combining training sets: a) simply concatenate the two cor
+
+pora; b) use a multi-domain model with shared feature extractors, but independent parameters for the parsing modules in each domain (Benton et al., 2021); and c) first train on the gold-standard EWT corpus, and subsequently finetune on the silver-annotated GSC headline corpus.
+
+Model Our model architecture follows the deep biaffine parser (Dozat and Manning, 2017), using a pre-trained BERT (Devlin et al., 2019) as a feature extractor. This architecture underlies many state-of-the-art dependency parsers (e.g., Kondratyuk and Straka, 2019) and the winning solutions in recent runs of IWPT shared tasks (Bouma et al., 2020, 2021). Model and implementation details are provided in Appendix C.
+
+Combining EWT and Projected GSC In our experiments, we consider three different data combination methods:
+
+1. Concat: We simply concatenate the two corpora and train a dependency parser based on the joint dataset. This strategy does not require any modification to the model architecture or the training procedures.
+2. MultiDom: Inspired by the multi-domain POS tagging architecture in Benton et al. (2021), we experiment with a multi-domain parser. In this parsing architecture, we have one parser for EWT and another for headlines, sharing the same underlying BERT-based feature extractor. In other words, each parser has its own trainable projection and bioaffine attention layers. In each training step, we sample a batch of examples from the concatenated corpora and jointly update the domain-specific parameters and shared feature extractor.
+3. Finetune: Finally, we also experiment with a two-step training strategy where we finetune on the projected GSC headline data based on a trained parser on EWT. Stymne et al. (2018) find this strategy to be one of the most effective ways to learn from multiple treebanks in the same language. $^{8}$
+
+Ensembling We train each parser under each setting with five random restarts and report means and standard deviations in Section 6. To reduce variations in our manual analysis in Section 7, we analyze the ensembled parse trees using the reparsing technique of Sagae and Lavie (2006).
+
+# 6 Intrinsic Parser Performance
+
+Intrinsic parser performance is shown in Table 2. The discrepancy in baseline performance between NYT and GSC (85.49% vs. 60.60% LAS) can be attributed to the fact that NYT headlines exhibit a closer distribution of relation types to EWT than GSC headlines (Figure 2). Many NYT headlines already constitute a well-formed body sentence, albeit without final punctuation. This is further supported by the fact that only training on projected GSC parse trees significantly improves performance on GSC (89.09% LAS) while actually hurting NYT performance, with a slight drop to 84.75% LAS.
+
+However, training on both EWT and projected GSC improves parser performance across both domains. We found that training a multi-domain model performed about as well as concatenating EWT and projected GSC training data. Ultimately, we found that a pipelined finetuning scheme – first training on EWT, then silver projected GSC headlines – yielded a strong parser across both domains (LAS of $87.13\%$ on NYT and $90.08\%$ on GSC).
+
+# 6.1 Error Analysis
+
+Although finetuning on GSC headlines with projected dependency parses improves parser performance on both the GSC and NYT evaluation sets, we see more marked improvements on the GSC corpus. Figure 3 displays the $\%$ relative error reduction in F1 of the GSC-finetuned ensemble against the EWT baseline ensemble broken by relation type. See Appendix D for absolute F1 for each relation type and domain. We compute model performance for all models using the eval07.p1 evaluation script released as part of the First Shared Task on Parsing Morphologically-Rich Languages. $^{9}$
+
+It is clear from the relation-level error analysis that most of the gains on GSC come from correct identification of the headline root, arguably the most important relation in the headline parse. In fact, the finetuned parser achieves $98.2\%$ recall in
+
+identifying the root, whereas the baseline parser only achieves $74.6\%$ recall. Headlines using the "to VERB" construction, indicating future tense or an expected event, are particularly susceptible to root misprediction by the baseline parser (example given in Figure 4). Performance on the nsubj relation also improves as a side effect of correctly identifying the root.
+
+Gains on the NYT evaluation set are consistent across most relations, but with smaller improvements. This is encouraging in that no NYT headline training data was used to train the model, silver or otherwise. parataxis benefits from finetuning on projected GSC headlines. This relation occurs frequently in NYT headlines due to a preference for headlines with multiple decks, independent syntactic components: e.g., "Essay; B.C.C.I.: Justice Delayed". The fact that this deck structure occurs more frequently in headlines results in a parser with a stronger prior for predicting parataxis.
+
+It is also important to note that the finetuned parser can identify passive constructions much more accurately than the baseline. $\%$ F1 performance for identifying nsubj: pass improves from $11.1\%$ to $90.5\%$ on GSC and from $60.0\%$ to $86.8\%$ on NYT.
+
+# 7 Extrinsic Evaluation
+
+In addition to intrinsic evaluation of parsers, we also evaluate these models downstream. We perform an extrinsic evaluation using the state-of-the-art syntax-based PredPatt OpenIE System (White et al., 2016), and evaluate extracted tuples using the protocol and error typology taken from Benton et al. (2021). As PredPatt relies solely on a UD parse and POS tag sequence to extract candidate tuples, this constitutes a direct downstream evaluation of more accurate headline parses.
+
+OpenIE Evaluation Protocol Two annotators independently annotated 200 extracted tuples manually. These tuples were randomly sampled from the GSC and NYT headlines, such that the PredPatt extracted different OpenIE tuples from the baseline ensemble parse compared to finetuned ensemble parse. Each tuple was judged as either Correct, or annotated with its most salient error type: Malformed Predicate, Bad Sub-predicate, Missing Core Argument, Argument Misattachment, or Incomplete Argument.
+
+
Training data / regime
NYT
GSC
UAS
LAS
UEM
LEM
UAS
LAS
UEM
LEM
EWT
88.82±0.22
85.49±0.28
57.89±0.65
48.57±1.03
83.01±0.36
80.60±0.28
50.80±1.14
42.90±0.68
Proj
88.27±0.30
84.75±0.33
56.88±1.17
48.22±0.63
90.99±0.23
89.09±0.30
68.33±0.59
59.93±0.80
Concat
89.97±0.65
87.05±0.56
60.70±1.35
53.14±1.66
91.23±0.19
89.32±0.23
68.67±0.67
61.23±0.83
MultiDom
89.58±0.27
86.29±0.36
60.00±0.87
50.29±0.90
91.16±0.11
89.31±0.21
68.63±0.77
61.07±1.00
Finetune
89.93±0.14
87.13±0.18
61.45±0.65
53.71±0.75
91.88±0.06
90.08±0.11
71.07±0.25
63.37±0.27
+
+Table 2: Parsing accuracies on NYT and GSC headlines from the EHT, comparing models trained on EWT, silver headline projection data (Proj), and different methods for combining these two training data sources: concatenating (Concat), training with a multi-domain model (MultiDom), and finetuning on silver GSC headline trees (Finetune). UAS and LAS correspond to (un)labeled attachment score, and UEM/LEM to (un)labeled exact match score (at the sentence level).
+
+
+Figure 3: % relative error reduction in F1 score across dependency relations for both the GSC and NYT evaluation sets, from the ensembled Both (finetuning) model to EWT(baseline). Relations are sorted by descending frequency and only relations that occurred at least 20 times in the evaluation set are shown. Support for each class is indicated by the line plot.
+
+To control for potential annotation bias, tuples were shuffled and identity of the parser and example domain were hidden from annotators. After independent annotation, the two annotators adjudicated conflicting annotations and converged on a single label for each tuple. Prior to adjudication, annotators achieved an agreement rate of $62\%$ for annotating salient error type, with a Cohen's $\kappa$ of 0.430. Many discrepancies in the first annotation round resulted from confusion between Malformed Predicate and Argument Misattachment or Bad Subpredicate. Often, several error types were present in the incorrect extractions, but deciding which error type was most salient was resolved during adjudication. Examples of each error type and annotation conventions are given in Appendix E.
+
+OpenIE Results Results from a typological error analysis of 200 tuples are shown in Table 3. As expected, Malformed Predicates were the predominant source of error for the baseline EWT model, followed by Missing Core Argument errors. This agrees with the finding that headline root identification exhibited marked regressions in the baseline.
+
+In our experiments, the domain-specific model was able to drastically reduce errors for both of these error types. We registered a statistically significant improvement (26% absolute) in valid tuple extraction performance when using the output of the model finetuned on EWT+GSC data when compared to the EWT-only baseline. For NYT, the improvement was not statistically significant. We hypothesize that this is due to the fact that the wire exhibits more structural similarities to long-form
+
+
+Figure 4: Example parses given by EWT (baseline) (bottom) and Both (finetuned) (top) on an example headline from the GSC. Differing edges are highlighted in green and red for finetuned and baseline, respectively.
+
+
Domain
Model
Malformed Predicate
Bad Sub-predicate
Missing Core Argument
Argument Mis-attachment
Incomplete Argument
Correct
GSC
EWT
20
4
14
4
2
56
Finetune
\( 4^{\dagger } \)
6
\( 2^{*} \)
6
0
\( 82^{\dagger } \)
NYT
EWT
10
8
6
12
0
64
Finetune
12
6
2
4
2
74
+
+Table 3: % error type for OpenIE tuples. Statistically significantly better performance within domain, according to a two population proportion test is indicated by * at the $p = 0.05$ level and † at the $p = 0.01$ level. Sample size of 50 tuples for each (domain, model) pair. Best performing model per (error type, domain) in bold.
+
+text, as evidenced by the frequency of relation types (Figure 2).
+
+# 8 Further Related Work
+
+Headline Syntactic Processing Perhaps the two most relevant works are the recently published POSH (Benton et al., 2021) and GoodNewsEveryone corpora (Oberländer et al., 2020). POSH is a dataset of POS-tagged English news headlines, without gold dependency parse annotations. GoodNewsEveryone, on the other hand, contains thousands of emotion-bearing headlines labeled for semantic roles (SRL). In GoodNewsEveryone, the relationships between identified actor, target, and predicate are solely determined by their roles. Collecting dependency parse annotations is much more involved than either POS tagging or SRL, as dependency parses require identifying deep relationships between individual words that are not solely derived from their types. That said, the release of both of these corpora underscores the importance of headlines as an object of study in NLP, and the desire for richer linguistic annotations.
+
+Low Resource Syntactic Processing Low resource syntactic parsing is typically motivated by the need to develop a parser for languages with scant gold supervision (Vania et al., 2019). Agić
+
+et al. (2016) employ a similar, yet more involved method of annotation projection to project parser predictions from a high-resource language to a low-resource language. As we are not projecting across languages, and we restrict our parallel text to cases where a headline is a subsequence of the lead sentence, we rely on heuristics to repair the projected dependency parse.
+
+Dependency parsers and treebanks for tweets are similar in spirit to the current work (Owoputi et al., 2013; Kong et al., 2014; Liu et al., 2018). Unlike Tweebank, we chose not to develop our own annotation scheme, but rather annotate under the UD schema. UD is sufficiently expressive for annotating headlines, and allows us to leverage multiple domains for training a parser.
+
+# 9 Conclusion
+
+In this work, we describe the first gold-annotated evaluation set for English headline UD dependency parsing, the EHT. We hope this data will encourage further research in improving dependency parsers for overlooked registers of English. In addition, we hope that the development of accurate headline dependency parsers will result in stronger performance at existing headline understanding and processing tasks, and enable more subtle linguistic
+
+analysis, such as identification of "crash blossom" news headlines.
+
+# Limitations
+
+Variation across news outlets Figure 3 demonstrates out-of-domain generalization to NYT headlines on several structurally important relations such as root and nsubj:pass by training on silver projected trees. However, some relations that can be more accurately predicted in GSC headlines do not generalize to NYT. These include adjunct relations such as nmod, nummod, and advmod. Even within a register as niche as English headlines, there is significant variation in convention between news outlets.
+
+General news headline distributions The GSC corpus is originally collected by Filippova and Altun (2013) and contains crawled news headlines and lead sentences from a wide variety of news outlets. The headline-lead-sentence pairs are filtered to include only grammatical and informative headlines (see Section 4 of Filippova and Altun (2013)) and thus the resulting GSC corpus may not be representative of all English news headlines. The NYT corpus contains samples from a single outlet and is also not representative of the general news headline distribution.
+
+Different news headline categories Depending on the types of news articles (e.g., front page, editorials, op-eds, etc.), their corresponding headlines may exhibit distinctive structural properties. Our work is agnostic to the different categories of news articles and their headlines.
+
+Multilinguality In this work, we demonstrate that training on silver parse trees projected onto English news headlines results in more accurate English headline parsers. For other languages, headlines may or may not exhibit significant grammatical differences from English headlines, and when they do, the types of headline constructions are language- and culture-dependent. We expect the benefits of training on projected trees to be mediated by the discrepancy between the "conventional" and headline grammar within a given language. In addition, for languages with richer morphology, morphological analysis may be required to align dependency relation annotations from a body sentence to its headline. As we only consider English headlines in this work, further exploration is required before determining whether the projection
+
+algorithm, Algorithm 1, can be adapted to morphologically rich languages.
+
+# Acknowledgements
+
+We thank the anonymous reviewers for their insightful reviews. Tianze Shi acknowledges support from Bloomberg's Data Science Ph.D. Fellowship.
+
+# References
+
+Isaac Afful. 2014. A diachronic study of the NP structure in Ghanaian newspaper editorials. Journal of Advances in Linguistics, 5:555-565.
+Zeljko Agic, Anders Johannsen, Barbara Plank, Héctor Martínez Alonso, Natalie Schluter, and Anders Søgaard. 2016. Multilingual projection for parsing truly low-resource languages. Transactions of the Association for Computational Linguistics, 4:301-312.
+Xiang Ao, Xiting Wang, Ling Luo, Ying Qiao, Qing He, and Xing Xie. 2021. PENS: A dataset and generic framework for personalized news headline generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 82-92, Online. Association for Computational Linguistics.
+Joshua Bambrick, Minjie Xu, Andy Almonte, Igor Malioutov, Guim Perarnau, Vittorio Selo, and Iat Chong Chan. 2020. LSTM: Real-time query-driven news overview composition at Bloomberg. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 350-361, Online. Association for Computational Linguistics.
+Adrian Benton, Hanyang Li, and Igor Malioutov. 2021. Cross-register projection for headline part of speech tagging. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6475-6490, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Gosse Bouma, Djamé Seddah, and Daniel Zeman. 2020. Overview of the IWPT 2020 shared task on parsing into enhanced Universal Dependencies. In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 151-161, Online. Association for Computational Linguistics.
+Gosse Bouma, Djamé Seddah, and Daniel Zeman. 2021. From raw text to enhanced Universal Dependencies: The parsing shared task at IWPT 2021. In Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared
+
+Task on Parsing into Enhanced Universal Dependencies (IWPT 2021), pages 146-157, Online. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In Proceedings of the 5th International Conference on Learning Representations, pages 1-8, Toulouse, France. OpenReview.net.
+Taiwo Oluwaseun Ehineni. 2014. A syntactic analysis of lexical and functional heads in nigerian english newspaper headlines. International Journal of Linguistics, 6(5):9.
+William Ferreira and Andreas Vlachos. 2016. Emergent: a novel data-set for stance classification. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: Human language technologies, pages 1163-1168.
+Katja Filippova and Yasemin Altun. 2013. Overcoming the lack of parallel data in sentence compression. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1481-1491.
+Rama Rohit Reddy Gangula, Suma Reddy Duggenpudi, and Radhika Mamidi. 2019. Detecting political bias in news articles using headline attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 77-84.
+Robert E Garst and Theodore M Bernstein. 1933. Headlines and deadlines. Columbia University Press.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, California, USA.
+Dan Kondratyuk and Milan Straka. 2019. 75 languages, 1 model: Parsing Universal Dependencies universally. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2779-2795, Hong Kong, China. Association for Computational Linguistics.
+Lingpeng Kong, Nathan Schneider, Swabha Swayamdipta, Archna Bhatia, Chris Dyer, and Noah A Smith. 2014. A dependency parser for
+
+tweets. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1001-1012.
+Zornitsa Kozareva, Borja Navarro, Sonia Vázquez, and Andrés Montoyo. 2007. Ua-zbsa: a headline emotion classification through web information. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 334-337.
+Philippe Laban, Lucas Bandarkar, and Marti A Hearst. 2021. News headline grouping as a challenging nlu task. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3186-3198.
+Siyi Liu, Lei Guo, Kate Mays, Margrit Betke, and Derry Tanti Wijaya. 2019. Detecting frames in news headlines and its application to analyzing news framing trends surrounding us gun violence. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 504-514.
+Yijia Liu, Yi Zhu, Wanxiang Che, Bing Qin, Nathan Schneider, and Noah A Smith. 2018. Parsing tweets into universal dependencies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 965-975.
+Ingrid Mårdh. 1980. Headlines: On the grammar of English front page headlines, volume 58. Liberäromedel/Gleerup.
+Laura Ana Maria Oberländer, Evgeny Kim, and Roman Klinger. 2020. Goodnewseveryone: A corpus of news headlines annotated with emotions, semantic roles, and reader perception. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 1554-1566.
+Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In Proceedings of the 2013 conference of the North American chapter of the association for computational linguistics: human language technologies, pages 380-390.
+Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. *Stanza: A Python natural language processing toolkit for many human languages*. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations*, pages 101–108, Online. Association for Computational Linguistics.
+Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015
+
+Conference on Empirical Methods in Natural Language Processing, pages 379-389.
+Kenji Sagae and Alon Lavie. 2006. Parser combinatio by reparsing. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 129-132, New York City, USA. Association for Computational Linguistics.
+Kristina Schneider. 2000. The emergence and development of headlines in British newspapers. English Media Texts, Past and Present: Language and Textual Structure, 80:45.
+Allan M Siegal and William G Connolly. 1999. The New York Times manual of style and usage. Three Rivers Press (CA).
+Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Christopher D. Manning. 2014. A gold standard dependency corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014).
+Carlo Strapparava and Rada Mihalcea. 2007. Semeval-2007 task 14: Affective text. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 70-74.
+Heinrich Straumann. 1935. Newspaper headlines: A study in linguistic method. London, Allen.
+Sara Stymne, Miryam de Lhoneux, Aaron Smith, and Joakim Nivre. 2018. Parser training with heterogeneous treebanks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 619-625, Melbourne, Australia. Association for Computational Linguistics.
+Sho Takase and Naoaki Okazaki. 2019. Positional encoding to control output sequence length. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3999-4004.
+Sho Takase, Jun Suzuki, Naoaki Okazaki, Tsutomu Hirao, and Masaaki Nagata. 2016. Neural headline generation on abstract meaning representation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1054-1059.
+Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. From neural sentence summarization to headline generation: a coarse-to-fine approach. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 4109-4115.
+Francis M. Tyers, Mariya Sheyanova, and Jonathan North Washington. 2017. UD Annotatrix: An annotation tool for Universal Dependencies. In
+
+Proceedings of the 16th International Workshop on Treebanks and Linguistic Theories, pages 10-17, Prague, Czech Republic.
+AM Simon Vanderbergen. 1981. The Grammar of Headlines in the Times: 1870-1970, volume 95. AWLSK.
+Clara Vania, Yova Kementchedjhieva, Anders Søgaard, and Adam Lopez. 2019. A systematic comparison of methods for low-resource dependency parsing on genuinely low-resource languages. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1105-1116.
+Aaron Steven White, Drew Reisinger, Keisuke Sakaguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2016. Universal Decompositional Semantics on Universal Dependencies. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1713-1723, Austin, Texas. Association for Computational Linguistics.
+Rachel Wities, Vered Shwartz, Gabriel Stanovsky, Meni Adler, Ori Shapira, Shyam Upadhyay, Dan Roth, Eugenio Martínez-Cármara, Iryna Gurevych, and Ido Dagan. 2017. A consolidated open knowledge representation for multiple texts. In Proceedings of the 2nd workshop on linking models of lexical, sentential and discourse-level semantics, pages 12-24.
+David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the First International Conference on Human Language Technology Research.
+Amir Zeldes. 2017. The GUM corpus: Creating multilayer resources in the classroom. Language Resources and Evaluation, 51(3):581-612.
+
+# A Syntactic Annotation Guidelines
+
+# General principles
+
+Please refer to the UD annotation guidelines (https://universaldependencies.org/guidelines.html) for general rules of syntactic dependency annotation. This document serves as an addendum to the UD guidelines, in order to detail how to annotate certain frequently occurring and/or headline-specific constructions.
+
+# Frequent headline constructions
+
+# Headlines with multiple decks/components
+
+Use parataxis to connect multiple components. For example:
+
+(1) Paid Notice: Deaths BROOKS, JOHN N.
+
+includes three independent components: "Paid Notice", "Deaths", and "BROOKS, JOHN N.", with the latter two attached to the first through parataxis. There can be nested parataxis if necessary to reflect hierarchical structures within the headline components.
+
+Headlines with omitted auxiliaries For frequent constructions including "NP $\mathrm{VP_{ed}}$ ", "NP $\mathrm{VP_{ing}}$ ", and "NP $\mathrm{VP_{to}}$ ", where the finite auxiliary "be" verbs are omitted, we still treat the headlines as verbal headlines and mark the main verbs as the root/head of the headlines.
+
+Reported speech Refer to the UD guidelines. Typically, a ccomp or parataxis relation is used.
+
+flat and compound First, refer to UD guidelines on flat and compound. These are typically annotated as flat:
+
+- (Person) Names
+- Company/team/organization/... names without internal (compositional) structures. (e.g., "Rolling Stones" is not compositional and should be analyzed as flat.)
+Foreign phrases
+- Dates without explicit internal structures (excluding "the 1st of May")
+- Titles/honorifics
+
+# Dates
+
+(2) Thursday, December 7, 2000
+
+Refer to English web treebank example email-enronsent06_01-0005
+
+
+
+# Currency
+
+(3) $ 200 million
+
+Refer to English web treebank example: newsgroup-groups.google.com_FOOLED_ 1bf9cdc5a4c2ac48_ENG_ 20050904_130400-0022
+
+
+
+# Game Scores
+
+(4) 4-0
+
+Refer to English web treebank example: newsgroup-groups.google.com_hiddensnook_1fd8f731ae7ffaa0_ENG_20050214_192900-0006
+
+
+
+# Special cases
+
+Named Entities Locations should be annotated similarly to people names, with flat. Therefore "Lake Erie" should be parsed using flat rather than compound. Although this conflicts with how locations are annotated in EWT, the judgments in EWT are occasionally inconsistent or conflict with the UD annotation guidelines, as evidenced by UD issue 777 - https://github.com/UniversalDependencies/docs/issues/777. flat should also be used for names of racing horses, where although there is often compositional structure in these names, they are treated as a single
+
+unit as there are loose syntactic constraints on what constitutes a valid race horse name.
+
+Company names with typical suffixes like "Inc." or "Co." should be analyzed with that word as the head, with a compound relation to the idiosyncratic part of the company name. Arbitrary names of companies should be analyzed with flat. Names of creative works: visual art, books, movies, video games should be annotated such that internal structure is preserved, e.g., "Lord of the Rings" is not parsed with flat.
+
+Legitimate PP Attachment Ambiguity In certain cases there may be inherent ambiguity in where a prepositional phrase should attach, but the syntactic ambiguity has little effect on the meaning of the headline (nmod on an oblique/object argument vs. obl attaching to the matrix verb). In these cases, we chose to attach the PP as an nmod to the argument, out of convention.
+
+Hyphenated Words In the case of hyphenated words, we analyze the internal structure of the hy-phenated words and attach as the entire hyphen- ated word functions in the headline. For example, "shake-up" is parsed as:
+
+
+
+even if the entire word functions as a noun.
+
+Typos In the case of typographical errors or issues with data processing, we assume the intended word during annotation. So, for "Baby game changer for to", "to" is labeled as "NUM", assuming "two" was the intended word. These typographical errors will be remedied by their corrected lemma in the future.
+
+# B Implementation of the Projection Algorithm
+
+Figure 5 provides a detailed python implementation of Algorithm 1 for reproducibility.
+
+# C Parser and Implementation Details
+
+Our parser architecture combines the deep biaffine parser (Dozat and Manning, 2017) with the pretrained contextual BERT feature extractor (Devlin et al., 2019). For words with multiple subword tokens, we adopt BERT representations on the final
+
+subword tokens. For the deep biaffine parser, the attachment and labeling probabilities are determined by biaffine attention scores between pairs of head-dependent words, which in turn are linearly projected from BERT embeddings and then followed by a non-linear leaky ReLU activation function. We used a dimension of 400 for the attachment biaffine scorer, and 100 for the label scorer. For the BERT feature extractor, the weights are initialized from the public bert-base-uncased model,[11] consisting of roughly 110 million parameters, and fine-tuned during training. Each model was trained on a single Nvidia GTX 2080 Ti GPU, and took up two hours to train depending on when training was halted.
+
+We selected learning rate on the baseline EWT model, $^{12}$ and used the same hyperparameter settings when training all other parsers. We used a maximum learning rate of $10^{-5}$ , a batch size of 8, and a learning rate schedule that tenth the base learning rate every 5 iterations without increase in validation accuracy, up to two times maximum. Learning rate was warmed up according to a linear schedule during the first 320 iterations. Gradients are clipped to a maximum norm of 5.0. We used Adam optimizer (Kingma and Ba, 2015) with $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ for all training runs. For finetuning, we used a maximum learning rate of $10^{-6}$ with an identical learning rate schedule. Dropout rates of 0.3 are applied to all non-linear activations in the parsing modules.
+
+# D Intrinsic Performance by Relation
+
+Per-relation absolute F1 is displayed Figure 6 for the ensembled baseline EWT-trained parser vs. additionally finetuning on projected GSC trees.
+
+# E OpenIE Annotation Details
+
+Table 4 contains a handful of examples for each of the salient OpenIE error types annotated in Section 7. Please refer to Benton et al. (2021) for descriptions of each of these error types. In addition to that protocol, we adopted the following annotation conventions in order to consistently annotate corner cases. In general, tuples were labeled as incorrect only if there were clear mistakes in the definition of arguments or predicate:
+
+- Tuples where the substructure of the reported
+
+```python
+def project(heads: List[int], rels: List[str], subset: List[int]):
+ """ Finds the subtree spanned by the nodes in `subset`.
+Arguments:
+ heads: Each element `heads[i]` corresponds to the head of node `i`; node 0 is root, and `heads[0] = -1`. rels: Each element `rels[i]` is the dependency relation label between `heads[i]` and node `i`. subset: A list of nodes that are found in the subtree.
+Returns:
+ A tuple of heads and relations in the same format as `heads` and `rels`, the length of each is equal to `len(subset)`.
+ """ heads = deepcopy(heads)
+ rels = deepcopy(rels)
+# Collect all involved nodes (ExtractSubtree).
+included = set()
+for i in subset: cur = i while cur != -1: included.add(cur) cur = heads[cur]
+# Cache the children of each node.
+children = [[] for x in heads]
+for i in sorted(include): if heads[i] != -1: children[heads[i]].append(i)
+while len(include) != len(subset):
+ # Find the top-most node that is not currently in the subset. queue = [-1] while len(queue): cur = queue.pop() if cur not in subset and cur != -1: node_to Collapse = cur break queue.append(children[cur])
+# Find the local structure and collapse. children_nodes = children[node_to Collapse] leftmost = children_nodes[0] for c in children_nodes: heads[c] = leftmost heads(leftmost] = heads(node_to Collapse] rels(leftmost] = rels(node_to Collapse] included.discard(node_to Collapse)
+# Update cache. children = [[] for x in heads] for i in sorted(include): if heads[i] != -1: children[heads[i]].append(i)
+# Extract the subgraph. mapping = {n: i for i, n in enumerate(subset)}
+subset_heads = [mapping.get(heading[x], -1) for x in subset]
+subset_rels = [rels[x] for x in subset]
+return subset_heads, subset_rels
+```
+
+Figure 5: The python implementation of Algorithm 1.
+
+
+
+
+Figure 6: % F1 score across dependency relations for both the GSC (top) and NYT (bottom) evaluation sets for the ensembled EWT-only model against the finetuned EWT+projected GSC predictions. Relations are sorted by descending frequency and only relations that occurred at least 20 times in the evaluation set are shown.
+
+phrase is decomposed as additional arguments in reporting structures ("Prime Minister says...") are judged "Correct".
+
+- A complicated predicate was labeled as valid, even when an object could have been treated as a separate argument.
+- Tuples of independent decks, related by parataxis, or appositives are judged "Correct".
+- Sub-predicates that are entailed by the headline are judge "Correct". For example, "X engaged to wed" $\rightarrow$ (wed, X); (engaged, X); (engaged to wed, X) are all valid.
+- Relative pronouns should not be included as separate arguments in the relative clause, as they are redundant with the nominal head.
+
+
Malformed Predicate
[A1 Torrid heatwave sweeps] [P Punjab]
+[P Kenya acrobat falls during] [A1 circus show in Moscow]
+[P Will Wright to leave] [A1 Electronic Arts]
Missing Core Argument
Toyota to [P revise] [A1 dollar forecast to 80 yen]
+Several paths available to [P extend] [A1 the litigation]
Bad Sub-Predicate
[A1 Sanofi] to [P take] [A2 control of Shantha Biotechnics]
Argument Misattachment
[A1 J.] [A2 W. Kirby] [P wed to] [A2 miss McCabe]
+[A1 Bishop] [A2 who] [P had denied] [A2 Holocaust] apologizes
Incomplete Argument
A raft of [A1 plans] [A2 that] [P try to dispel] [A2 math anxieties]
+
+Table 4: Example salient error types for OpenIE tuples extracted from the baseline EWT-only ensemble parse. Extracted tuples encoded as $\langle [P\underline{\mathbf{Pred}}\underline{\mathbf{c}}\underline{\mathbf{d}}\underline{\mathbf{c}}]$ , [A1 Argument 1], [A2 Argument 2]].
\ No newline at end of file
diff --git a/weaklysupervisedheadlinedependencyparsing/images.zip b/weaklysupervisedheadlinedependencyparsing/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..0fc7ac68af7bf9c1520a2ba54e148d29085fbc66
--- /dev/null
+++ b/weaklysupervisedheadlinedependencyparsing/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ff2e2e6e94d04359297c23de426a477558ed02ff4da1baf7e600ab4ec4a2bd0f
+size 488686
diff --git a/weaklysupervisedheadlinedependencyparsing/layout.json b/weaklysupervisedheadlinedependencyparsing/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..54de172017a323776e4c1c41343e8bc93a13cdd3
--- /dev/null
+++ b/weaklysupervisedheadlinedependencyparsing/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b1ca1850eb960b3b91eef1ddc8edf584fb7a016654e6a6c148a32044c844dd38
+size 450531
diff --git a/weightperturbationasdefenseagainstadversarialwordsubstitutions/4c9566ce-5e6b-4a4d-8cd1-d4c67a13eb4e_content_list.json b/weightperturbationasdefenseagainstadversarialwordsubstitutions/4c9566ce-5e6b-4a4d-8cd1-d4c67a13eb4e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..abf72684cd043df809a0577da464abee3e15d45c
--- /dev/null
+++ b/weightperturbationasdefenseagainstadversarialwordsubstitutions/4c9566ce-5e6b-4a4d-8cd1-d4c67a13eb4e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:40febac5e9288f68cdd4c9ecc73d30031dacf623043db5cf99b44b34870018bd
+size 78060
diff --git a/weightperturbationasdefenseagainstadversarialwordsubstitutions/4c9566ce-5e6b-4a4d-8cd1-d4c67a13eb4e_model.json b/weightperturbationasdefenseagainstadversarialwordsubstitutions/4c9566ce-5e6b-4a4d-8cd1-d4c67a13eb4e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7741ad8855c45a3d714b74b3902be987e8633166
--- /dev/null
+++ b/weightperturbationasdefenseagainstadversarialwordsubstitutions/4c9566ce-5e6b-4a4d-8cd1-d4c67a13eb4e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1b79a644a307062e7a738e55b97116b15bca69bed55ea51031b7adea4331f796
+size 92102
diff --git a/weightperturbationasdefenseagainstadversarialwordsubstitutions/4c9566ce-5e6b-4a4d-8cd1-d4c67a13eb4e_origin.pdf b/weightperturbationasdefenseagainstadversarialwordsubstitutions/4c9566ce-5e6b-4a4d-8cd1-d4c67a13eb4e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..79e3250f05e07d53d6d968dbcab67e63b7e378b7
--- /dev/null
+++ b/weightperturbationasdefenseagainstadversarialwordsubstitutions/4c9566ce-5e6b-4a4d-8cd1-d4c67a13eb4e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:96e219923069f373a8872ccda811a0e6086d4129d65e27ee67bff36913518d34
+size 1838171
diff --git a/weightperturbationasdefenseagainstadversarialwordsubstitutions/full.md b/weightperturbationasdefenseagainstadversarialwordsubstitutions/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..476baf2e7c438028fbfbf033a75c0a67074f1832
--- /dev/null
+++ b/weightperturbationasdefenseagainstadversarialwordsubstitutions/full.md
@@ -0,0 +1,329 @@
+# Weight Perturbation as Defense against Adversarial Word Substitutions
+
+Jianhan Xu $^{1,2}$ , Linyang Li $^{1,2}$ , Jiping Zhang $^{1,2}$ , Xiaqing Zheng $^{1,2}$ , Kai-Wei Chang $^{3}$ , Cho-Jui Hsieh $^{3}$ , Xuanjing Huang $^{1,2}$
+
+$^{1}$ School of Computer Science, Fudan University, Shanghai, China
+
+$^{2}$ Shanghai Key Laboratory of Intelligent Information Processing
+
+$^{3}$ Department of Computer Science, University of California, Los Angeles, USA
+
+{jianhanxu20,zhengxq}@fudan.edu.cn
+
+{kwchang,chohsieh}@cs.ucla.edu
+
+# Abstract
+
+The existence and pervasiveness of textual adversarial examples have raised serious concerns to security-critical applications. Many methods have been developed to defend against adversarial attacks for neural natural language processing (NLP) models. Adversarial training is one of the most successful defense methods by adding some random or intentional perturbations to the original input texts and making the models robust to the perturbed examples. In this study, we explore the feasibility of improving the adversarial robustness of NLP models by performing perturbations in the parameter space rather than the input feature space. The weight perturbation helps to find a better solution (i.e., the values of weights) that minimizes the adversarial loss among other feasible solutions. We found that the weight perturbation can significantly improve the robustness of NLP models when it is combined with the perturbation in the input embedding space, yielding the highest accuracy on both clean and adversarial examples across different datasets.
+
+# 1 Introduction
+
+Deep neural networks (DNNs) have achieved impressive results in a wide range of domains, but they were found to be vulnerable to adversarial examples maliciously crafted by adding a small perturbation to original examples (Szegedy et al., 2014). Many studies have demonstrated the vulnerability of DNNs on various natural language processing (NLP) tasks, including machine translation (Zhao et al., 2018; Cheng et al., 2020), dialogue systems (Cheng et al., 2019) and text classification (Liang et al., 2018; Zhao et al., 2018; Gao et al., 2018; Ren et al., 2019; Jin et al., 2020). These methods attack an NLP model by replacing, scrambling, and erasing characters or words under certain semantic and syntactic constraints.
+
+The existence and pervasiveness of textual adversarial examples have raised serious concerns, espe
+
+cially when NLP models are deployed to security-sensitive applications. Many methods have been proposed to defend against adversarial attacks for neural NLP models, including adversarial data augmentation (Zheng et al., 2020; Si et al., 2021), adversarial training (Madry et al., 2018; Zhu et al., 2020) and certified defense (Jia et al., 2019; Huang et al., 2019; Ye et al., 2020). Most of them improve the adversarial robustness of NLP models by applying some perturbations on input data and making the models robust to these perturbations in the input space. For example, one of the most effective methods is adversarial training that applies a min-max optimization into the training process by adding (usually gradient-guided) perturbations to the input embeddings (Miyato et al., 2017; Sato et al., 2018; Zhu et al., 2020). By augmenting these perturbed examples with the original training data, the models are robust to such perturbations. However, it is infeasible to enumerate and explore all possible inputs that would be fed to models by adversaries. In this study, we want to explore the feasibility of enhancing the robustness of neural NLP models by performing weight perturbations in the parameter space. The weight perturbation is useful to find a better solution in the parameter space (i.e., the weights) that minimizes the adversarial loss among other feasible solutions.
+
+Adversarial weight perturbation (Wu et al., 2020; Foret et al., 2021) has been investigated in the image domain, but our preliminary experiments show that their methods to implement weight perturbation cannot be trivially applied to NLP models. The existing weight perturbation results in inferior robustness and requires a long training time due to the discrete nature of texts. We found that the weight perturbation works better for NLP models when it is combined with the perturbation in the input feature space. Based on this finding, we propose a mixed adversarial training method with accumulated weight perturbation, named MAWP. The
+
+mixed adversarial training was designed to boost the model's robustness by combining the weight perturbation with the traditional adversarial training (i.e., perturbation in the input embedding space like FreeLB (Zhu et al., 2020)). In this way, the resulting models can benefit more from the weight perturbation by exposing them to the input perturbation during the training process. The accumulated weight perturbation was mainly introduced to accelerate the training process while the model's robustness can be further improved. The accumulated perturbation takes the smoothed form of a weighted sum of gradient descents calculated in the previously-performed weight perturbations, which carries the global gradient information and gives a clear signal in which direction the parameters should move to aggressively if the successive gradients point in a similar direction. Through extensive experiments, we demonstrate that our method can boost the robustness of NLP models to a great extent while suffering no or little performance drop on the clean data across three different datasets.
+
+# 2 Related Work
+
+# 2.1 Textual Adversarial Defense
+
+The goal of adversarial defenses is to learn a model capable of achieving high accuracy on both clean and adversarial examples. Recently, many defense methods have been developed to defend against textual adversarial attacks, which can roughly be divided into two categories: empirical (Miyato et al., 2017; Sato et al., 2018; Zhou et al., 2021; Dong et al., 2021) and certified (Jia et al., 2019; Huang et al., 2019; Ye et al., 2020) methods.
+
+Adversarial data augmentation is one of the most effective empirical defenses (Ren et al., 2019; Jin et al., 2020; Li et al., 2020) for NLP models. During the training time, they replace a word with one of its synonyms to create adversarial examples, and the models are trained on the dataset augmented with these adversarial examples. By augmenting these adversarial examples with the original training data, the model is robust to such perturbations. Zhou et al. (2021) and Dong et al. (2021) relax a set of discrete points (a word and its synonyms) to a convex hull spanned by the word embeddings of all these points, and use a convex hull formed by a word and its synonyms to capture word substitutions. Adversarial training (Miyato et al., 2017; Zhu et al., 2020) is another one of the most successful empirical defense methods by adding norm
+
+bounded adversarial perturbations to word embeddings and minimizes the resultant adversarial loss.
+
+The downside of existing empirical methods is that failure to discover an adversarial example does not mean that another more sophisticated attack could not find one. To address this problem, some certified defenses (Jia et al., 2019; Huang et al., 2019; Ye et al., 2020) have been introduced to guarantee the robustness to certain specific types of attacks. However, the existing certified defense methods make an unrealistic assumption that the defenders can access the synonyms used by the adversaries. They would be easily broken by more sophisticated attacks by using synonym sets with large sizes (Jin et al., 2020) or generating synonyms dynamically with BERT (Li et al., 2020).
+
+Most of the existing defense methods improve the robustness by making the models adapt to the training set augmented with the adversarial examples crafted by adding adversarial perturbations to discrete tokens or distributed embeddings. However, it is infeasible to enumerate all possible inputs that would be fed to the models by adversaries. In contrast, we have full control over the values of the model's parameters. Therefore, we propose to improve the robustness of neural NLP models by performing weight perturbations in the parameter space rather than in the input space.
+
+# 2.2 Weight Perturbation
+
+Weight perturbation has been explored to improve the generalization of models in the image domain. Graves (2011) first investigated the method to apply the perturbations on the weights of neural networks and introduced a stochastic variational method to improve the generalization. Following this direction, Foret et al. (2021) proposed an optimization method, named Sharpness-Aware Minimization (SAM), to seek the values of parameters that yield a uniformly low loss in their neighborhood.
+
+Recently, researchers from the computer vision community have also tried to improve the model's robustness by weight perturbation. He et al. (2019) presented a Parametric Noise Injection method that intentionally injects trainable noises on the activations and weights of neural networks. Wu et al. (2020) showed the correlation of model's performance with the direction and scale of weight perturbation by investigating the weight loss landscapes of multiple adversarial training techniques. They also proposed an Adversarial Weight Perturbation
+
+(AWP) method, which can be incorporated into existing adversarial training methods to narrow down the gap in robustness between training and test sets.
+
+However, the existing weight perturbation methods cannot trivially be applied to NLP models due to the discrete nature of texts. To address the problem we faced when implementing the weight perturbation to train NLP models, we propose a mixed adversarial training method to further improve the adversarial robustness of neural NLP models and introduce an accumulated weight perturbation to speed up the training process. This study is among the first ones to study how to apply the adversarial weight perturbations in the text domain.
+
+# 3 Preliminary
+
+In the following, we first introduce the traditional adversarial training that performs perturbation in the input feature space, and then give a brief review of the adversarial weight perturbation. Before diving into the details, we need to set up some notations. A training dataset $\mathcal{D} = \{(x_i,y_i)\}_{i = 1}^n$ with $n$ instances consists of a set of the feature vector representation $\pmb {x}\in \mathbb{R}^{d}$ of an input text $x$ and its corresponding label $y\in \{1,\dots,C\}$ pairs, where $d$ is the size of feature vectors and $C$ is the number of classes. Given a neural text classifier with a set of trainable weights $\pmb{w}$ and a loss function of $L(\pmb {w},\pmb {x},y)$ , the regular training aims to find the values of weights $\pmb{w}$ that minimizes the empirical risk of $\mathbb{E}_{(\pmb {x},y)\sim \mathcal{D}}[L(\pmb {w},\pmb {x},y)]$ . In adversarial training, we denote the adversarial perturbation to input feature vectors $\pmb{x}$ as $\delta$ , the weight perturbation to model's weights $\pmb{w}$ as $\epsilon_{\pmb{w}}$ , and the number of ascent steps as $k\in \{1,\ldots ,K\}$ .
+
+# 3.1 Traditional Adversarial Training
+
+Traditional adversarial training can be formulated as a min-max optimization problem (Madry et al., 2018) as follows:
+
+$$
+\min _ {\boldsymbol {w}} \mathbb {E} _ {(\boldsymbol {x}, y) \sim \mathcal {D}} \left[ \max _ {\| \delta \| _ {F} \leq \epsilon} L (\boldsymbol {w}, \boldsymbol {x} + \boldsymbol {\delta}, y) \right], \tag {1}
+$$
+
+where $\delta$ is constrained in a Frobenius norm ball with a radius $\epsilon$ . As pointed out by Zhu et al. (2020), the outer minimization can be achieved by Stochastic Gradient Descent (SGD) method, and the inner maximization can be accomplished by Projected Gradient Descent (PGD)-based attack algorithm. Specifically, PGD-based algorithms take the following step (with a step size $\alpha$ ) at $k$ -th iteration
+
+under the constraint of Frobenius norm:
+
+$$
+\boldsymbol {\delta} _ {k + 1} = \prod_ {\| \boldsymbol {\delta} \| _ {F} \leq \epsilon} \left(\boldsymbol {\delta} _ {k} + \frac {\alpha g \left(\boldsymbol {\delta} _ {k}\right)}{\| g \left(\boldsymbol {\delta} _ {k}\right) \| _ {F}}\right), \tag {2}
+$$
+
+where $g(\delta_k) = \nabla_{\delta_k}L(\boldsymbol {w},\boldsymbol {x} + \delta_k,y)$ denotes the loss gradient with respect to $\delta_{k}$ , and $\prod_{\| \pmb {\delta}\| _F\leq \epsilon}$ is the projection of input perturbation $\delta$ within the Frobenius norm ball with the radius $\epsilon$ .
+
+# 3.2 Adversarial Weight Perturbation
+
+We here give a brief introduction of adversarial weight perturbation (Wu et al., 2020; Foret et al., 2021) in the image domain. Given an adversarial example $\pmb{x}^{\prime}$ of a clean one $\pmb{x}$ , the adversarial weight perturbation seeks the values of parameters $\pmb{w}$ that have the lowest training loss within the surrounding neighborhood, which can be formulated as follows:
+
+$$
+\min _ {\boldsymbol {w}} \mathbb {E} _ {(\boldsymbol {x}, y) \sim \mathcal {D}} \left[ \max _ {\| \boldsymbol {\epsilon} _ {\boldsymbol {w}} \| _ {2} \leq \rho} L \left(\boldsymbol {w} + \boldsymbol {\epsilon} _ {\boldsymbol {w}}, \boldsymbol {x} ^ {\prime}, y\right) \right], \tag {3}
+$$
+
+where $\rho$ is the radius of weight perturbation under the $l_{2}$ -norm. Since it is hard to directly maximize the values of $\epsilon_{\pmb{w}}$ , the optimal values can be approximated via the first-order Taylor expansion of $L(\pmb{w} + \epsilon_{\pmb{w}}, \pmb{x}', y)$ as follows:
+
+$$
+\boldsymbol {\epsilon} _ {\boldsymbol {w}} ^ {*} \approx \underset {\| \boldsymbol {\epsilon} _ {\boldsymbol {w}} \| _ {2} \leq \rho} {\arg \max } \boldsymbol {\epsilon} _ {\boldsymbol {w}} ^ {T} \nabla_ {\boldsymbol {w}} L (\boldsymbol {w}, \boldsymbol {x} ^ {\prime}, y). \tag {4}
+$$
+
+By this approximation, the value of $\epsilon_{w}$ can be estimated as $\rho \nabla_{\boldsymbol{w}}L(\boldsymbol{w},\boldsymbol{x}',y) / \| \nabla_{\boldsymbol{w}}L(\boldsymbol{w},\boldsymbol{x}',y)\| _2$ . Specifically, Wu et al. (2020) performs the adversarial weight perturbations at layer level by setting $\epsilon_{w}$ to $\eta \| \pmb {w}\| _2\cdot \nabla_wL(\pmb {w},\pmb {x}',y) / \| \nabla_wL(\pmb {w},\pmb {x}',y)\| _2$ . Once the values of $\epsilon_{w}$ are obtained, they update $\pmb{w}$ based on the gradient of $\nabla_wL(\pmb {w},\pmb {x}',y)|_{\pmb {w} + \epsilon_w}$ in order to make the models generalize well on the adversarial sample $\pmb{x}'$
+
+# 4 Method
+
+In the following, we first introduce our accumulated weight perturbation method that is designed to accelerate the training process when fine-tuning a pre-trained language model. After then, we discuss how to combine it with adversarial training to further improve the robustness of NLP models.
+
+# 4.1 Accumulated Weight Perturbation
+
+Adversarial weight perturbation works by searching for the worst case within the neighborhood of current weights with respect to the training loss and finding a better solution in the neighborhood
+
+Algorithm 1 A mixed adversarial training algorithm with accumulated weight perturbation
+Input: $K$ : the number of ascent steps; $\mathcal{D}$ .. a training dataset $(\pmb {x}_i,y_i)_i = 1$ . $\alpha$ .. the size of ascent steps; $H$ .. the demensionality of the hidden layers; $\pmb{w}$ .. the weights of a model; $\tau$ .. a learning rate; $\eta$ .. the size of weight perturbation $\epsilon_{\pmb{w}}$ . $\eta_{2}$ .. the size of accumulated weight perturbation $\epsilon_{g}$ . $U$ .. a uniform distribution with the bound of $\sigma$
+Output: the resulting weights $\pmb{w}$
+1: Initialize $\pmb{w}$
+2: for epoch $= 1,\dots ,N$ do
+3: $\epsilon_{g}\gets 0$
+4: for minibatch $B\subset \mathcal{D}$ do
+5: $\pmb{w}_{0}\leftarrow \pmb{w},g_{0}\leftarrow 0$
+6: $\delta_0\gets \frac{1}{\sqrt{H}} U(-\sigma ,\sigma)$
+7: for $k = 0\dots (K - 1)$ do
+8: // Calculate weight perturbation $\epsilon_{wk}$
+9: $\pmb{d_{wk}}\gets \nabla_{\pmb{w}_k}L(\pmb {w}_k,\pmb {x} + \pmb {\delta}_k,y)$
+10: $\epsilon_{w_k}\gets \eta \| w_k\| _2\cdot \frac{d_{w_k}}{\|d_{w_k}\|_2}$
+11: $\pmb{w}_{k + 1}\gets \pmb{w}_k + \pmb{\epsilon}_{\pmb{w}_k}$
+12: // Accumulate gradient $g_{k}$ for $w_{k + 1}$
+13: $g_{k + 1}\gets g_k + \frac{1}{K}\mathbb{E}_{(x,y)\in B}\nabla_{\pmb{w}_{k + 1}}L(\pmb{w}_{k + 1},\pmb {x}+$ $\delta_{k},y)$
+14: // Update $\delta_{k}$ via input perturbation
+15: $g_{adv}\gets \nabla_{\delta_k}L(w_{k + 1},x + \delta_k,y)$
+16: $\delta_{k + 1}\gets \prod_{\| \delta \| _2\leq \epsilon}(\delta_k + \alpha \frac{g_{adv}}{\|g_{adv}\|_2})$
+17: end for
+18: // Calculate accumulated perturbation $\epsilon_{g}$
+19: $\epsilon_s\gets \sum_{k = 0}^{K - 1}\epsilon_{w_k}$
+20: $\epsilon_g\gets \prod_{\| \epsilon_g\| _2\leq \eta_2}(\epsilon_s + \frac{\epsilon_g\| \epsilon_s\|_2}{\| \epsilon_g\|_2 + \epsilon_0})$
+21: $w\gets w - \tau g_K - \epsilon_g$
+22: end for
+23: end for
+24: return w
+
+through minimizing the adversarial loss. Note that the gradient calculated to find the worse case by the weight perturbation can be reused to optimize the value of the current weight, which would be updated in the reverse direction of the gradient already computed. For example, starting from a point of weight we can calculate its gradient with respect to the loss function to locate the worse case around this point to perform the weight perturbation. The normalized version of this gradient also can be used to update the model's parameters by the gradient descent. By reusing such a gradient calculated at each step of weight perturbation, we can improve both the generalization and robustness with less computational cost.
+
+We also found experimentally that the robustness of models can be further improved by introducing a global term that takes the form of the accumulated gradients obtained at previous perturbation steps. The accumulated gradient carries the global
+
+information and gives a clear signal in which direction the parameters should move to aggressively if the gradients obtained at different steps point in a similar direction.
+
+At each step $k$ , we can perform the weight perturbation $\epsilon_{\boldsymbol{w}_k}$ that finds the worse case in the neighborhoods of weight $\boldsymbol{w}_k$ . Meanwhile, an accumulated weight perturbation is also calculated which takes the smoothed form of a weighted sum of gradient descents calculated in the previously-performed weight perturbations. Specifically, for each minibatch $B$ , the values of weight perturbation $\epsilon_s$ are obtained by summing up all the perturbations as $\epsilon_s = \sum_{k=0}^{K-1} \epsilon_{\boldsymbol{w}_k}$ . Then, we can calculate the accumulated weight perturbation $\epsilon_g$ as follows:
+
+$$
+\epsilon_ {g} = \prod_ {\| \epsilon_ {g} \| _ {2} \leq \eta_ {2}} \left(\epsilon_ {s} + \frac {\epsilon_ {g} \cdot \| \epsilon_ {s} \| _ {2}}{\| \epsilon_ {g} \| _ {2} + \epsilon_ {0}}\right), \tag {5}
+$$
+
+where $\eta_{2}$ is the radius of the accumulated weight perturbation, $\epsilon_0$ a constant introduced for numerical stability in the computation, and $\prod_{\| \epsilon_g\| _2\leq \eta_2}$ the projection function.
+
+# 4.2 Mixed Adversarial Training Method
+
+We here describe our adversarial weight perturbation and how to combine it with FreeLB (Free Large-Batch) (Zhu et al., 2020), a popular adversarial training algorithm. The algorithm of FreeLB adds adversarial perturbations to word embeddings and minimizes the resultant adversarial loss inside different regions around input samples. It also adds norm-bounded adversarial perturbations to the input sentences' embeddings using a gradient-based method and enlarges the batch size with diversified adversarial samples under such norm constraints. By the mixed adversarial training, NLP models can benefit more from the adversarial weight perturbation by exposing the models to the input perturbation during the training process.
+
+Specifically, at the $k$ -th ascent step, we calculate the weight perturbation $\epsilon_{\boldsymbol{w}_k}$ based on the input perturbations $\delta_k$ and the weights $\boldsymbol{w}_k$ as follows:
+
+$$
+\boldsymbol {\epsilon} _ {\boldsymbol {w} _ {k}} = \eta \| \boldsymbol {w} _ {k} \| _ {2} \cdot \frac {\nabla_ {\boldsymbol {w} _ {k}} L (\boldsymbol {w} _ {k} , \boldsymbol {x} + \boldsymbol {\delta} _ {k} , y)}{\| \nabla_ {\boldsymbol {w} _ {k}} L (\boldsymbol {w} _ {k} , \boldsymbol {x} + \boldsymbol {\delta} _ {k} , y) \| _ {2}}. \tag {6}
+$$
+
+The input perturbation $\delta_0$ is initialized from a uniform distribution, and $\delta_0 \sim \frac{1}{\sqrt{H}} U(-\sigma, \sigma)$ . After calculating the weight perturbation $\epsilon_{\boldsymbol{w}_k}$ , the perturbed weights of $\boldsymbol{w}_{k+1}$ can be updated by $\boldsymbol{w}_k + \epsilon_{\boldsymbol{w}_k}$ . Following the gradient accumulating operation in FreeLB, we compute the gradients of
+
+
Datasets
Methods
Clean%
TextFooler
BERT-Attack
TextBugger
TextFooler*
BERT-Attack*
TextBugger*
Aua%
#Query
Aua%
#Query
Aua%
#Query
Aua%
#Query
Aua%
#Query
Aua%
#Query
SST-2
Base
92.24
11.77
101.22
11.10
128.23
28.00
52.44
19.93
102.81
18.97
107.94
21.63
99.99
PGD
90.78
10.87
114.16
8.67
131.56
33.73
49.87
29.37
106.73
22.53
106.29
29.60
103.88
FreeLB++
92.35
13.33
112.86
11.40
136.90
33.07
51.10
33.83
110.43
27.63
111.22
33.93
109.16
TA-VAT
93.08
13.89
117.06
11.56
133.47
32.89
53.45
28.00
106.11
22.33
106.84
27.33
106.64
InfoBERT
93.14
10.56
111.95
10.33
138.74
33.33
53.09
26.67
105.50
22.45
111.87
28.89
106.15
PGD-AWP
91.14
11.87
113.53
9.93
129.50
32.33
50.27
29.17
106.68
22.50
106.33
27.90
104.66
FreeLB-AWP
92.35
14.80
118.97
13.67
142.30
35.63
53.89
35.37
121.32
29.00
113.61
36.73
129.45
MAWP
91.96
31.30
146.01
24.57
184.02
43.07
88.02
41.47
174.01
32.60
167.06
39.23
187.00
AGNEWS
Base
94.50
14.20
320.72
21.87
433.90
38.23
178.74
23.90
345.58
36.73
380.99
28.10
371.26
PGD
94.56
30.43
413.93
25.80
456.81
57.03
169.74
50.03
416.89
42.60
389.03
56.67
416.02
FreeLB++
95.33
28.20
410.36
29.83
494.79
55.17
185.76
47.60
421.38
47.10
415.16
53.63
437.54
TA-VAT
94.97
28.44
404.81
28.00
470.20
52.00
185.56
43.11
405.67
44.22
403.80
49.44
422.24
InfoBERT
95.04
17.11
351.46
23.33
448.84
46.00
187.95
30.22
367.01
37.67
389.81
34.78
391.28
PGD-AWP
94.38
28.53
407.12
23.77
446.69
57.60
165.08
48.40
410.16
39.83
380.61
55.70
405.38
FreeLB-AWP
94.39
32.03
425.13
29.03
481.79
58.57
178.73
51.57
425.95
46.07
406.96
57.43
435.43
MAWP
95.23
31.37
423.78
29.97
481.29
58.40
178.99
50.70
424.20
46.23
407.66
57.33
436.20
IMDB
Base
92.09
8.53
866.73
6.10
878.57
18.27
576.04
26.90
672.57
26.60
526.46
26.80
691.31
PGD
92.70
10.33
1059.83
7.33
862.28
17.41
590.15
40.73
788.64
24.93
562.40
34.60
769.36
FreeLB++
93.31
15.80
1298.17
10.33
1149.16
26.08
757.12
48.20
937.28
37.53
641.63
44.67
1005.66
TA-VAT
92.84
12.60
1270.00
9.77
1105.95
28.11
830.73
45.77
885.73
35.53
638.21
41.40
1008.66
InfoBERT
92.60
10.67
882.74
8.44
906.67
16.33
596.63
28.67
713.80
25.67
567.49
26.89
774.45
PGD-AWP
93.18
12.83
1258.87
8.77
1068.43
22.22
733.80
47.47
879.47
33.80
628.32
42.60
915.04
FreeLB-AWP
93.25
18.80
1353.93
13.47
1184.38
28.58
778.54
49.23
897.46
37.53
636.53
44.93
948.74
MAWP
93.24
35.97
1594.49
15.90
1501.68
39.90
1033.27
58.40
2522.56
43.20
1795.51
56.07
3167.35
+
+Table 1: The experimental results of different defense methods on SST-2, AGNEWS, and IMDB datasets. The best performance is highlighted in bold fonts. The symbol * indicates the attack algorithms on which we impose some constraints for fair comparison by ensuring the quality of adversarial examples (see Section 5.2 for details).
+
+$g_{k + 1}$ with respect to the perturbed weights $\pmb{w}_{k + 1}$ , the perturbed inputs $\pmb{x} + \pmb{\delta}_k$ and the accumulated gradient $g_{k}$ in $k - 1$ -th step:
+
+$$
+g _ {k + 1} = g _ {k} + \frac {1}{K} \nabla_ {\boldsymbol {w} _ {k + 1}} L (\boldsymbol {w} _ {k + 1}, \boldsymbol {x} + \boldsymbol {\delta} _ {k}, y) \tag {7}
+$$
+
+where $g_{0}$ is initialized to 0. When we calculate the accumulated gradient $g_{k + 1}$ , it is free for us to calculate the input perturbation without additional cost. Therefore, we can calculate the input perturbation $\delta_{k + 1}$ based on the perturbed weights $\pmb{w}_{k + 1}$ and the former input perturbation $\delta_{k}$ as follows:
+
+$$
+\boldsymbol {\delta} _ {k + 1} = \prod_ {\| \boldsymbol {\delta} \| _ {2} \leq \epsilon} \left(\boldsymbol {\delta} _ {k} + \alpha \frac {\nabla_ {\boldsymbol {\delta} _ {k}} L (\boldsymbol {w} _ {k + 1} , \boldsymbol {x} + \boldsymbol {\delta} _ {k} , y)}{\| \nabla_ {\boldsymbol {\delta} _ {k}} L (\boldsymbol {w} _ {k + 1} , \boldsymbol {x} + \boldsymbol {\delta} _ {k} , y) \| _ {2}}\right). \tag {8}
+$$
+
+We list the proposed MAWP in Algorithm 1. After the weights $w_{0}$ and the input perturbations $\delta_{0}$ are initialized, we calculate the weight perturbations $\epsilon_{w_k}$ and add them to the model's weight $w_{k}$ at each ascent step. We then calculate the gradient $g_{k}$ (Line 14) and the next input perturbations $\delta_{k + 1}$ (Line 16). Finally, the accumulated weight perturbations $\epsilon_{g}$ are calculated to update the model's weights with the adversarial loss.
+
+# 5 Experiments
+
+We conducted three sets of experiments. The first one is to evaluate our MAWP in both clean accuracy and adversarial robustness on several datasets
+
+under three representative attack algorithms, compared to seven baseline methods. The goal of the second one is to investigate the impact of the number of ascent steps and training epochs on the performance. In the third experiment, we would like to better understand the interpretability of adversarial training via visualizations.
+
+Three widely-used text classification datasets were used for evaluation: Stanford Sentiment Treebank (SST-2) (Socher et al., 2013), AG-News corpus (AGNEWS) (Zhang et al., 2015) and Internet Movie Database (IMDB) (Maas et al., 2011). SST-2 has about 67,000 sentences for binary categories, and IMDB consists of about 50,000 movie reviews for positive and negative sentiment classification, and AGNEWS is a text classification dataset pertaining to four categories containing about 30,000 news articles. For fair comparison, we used a BERT-based model (Devlin et al., 2019) as the base model for all the defense methods.
+
+# 5.1 Implementation Details
+
+We implemented MAWP based on Huggingface Transformers1. We chose to use PGD (Madry et al., 2018), FreeLB++ (Li et al., 2021), TA-VAT (Li and Qiu, 2020), and InfoBERT (Wang et al., 2020) as baselines. They are widely-used defense methods against textual adversarial attacks or have been
+
+proposed most recently. We also developed two strong baselines. One is the combination of AWP (Wu et al., 2020) (a weight perturbation method proposed for computer vision tasks) and PGD, denoted as PGD-AWP. Another is the combination of AWP with FreeLB, denoted by FreeLB-AWP. These two baselines were also designed for the ablation study and used to compare our mixed adversarial training method. For fair comparison, we also implemented PGD-AWP and FreeLB-AWP as describe in Algorithm 1 except using AWP instead of the proposed weight perturbation.
+
+The size of the weight perturbation $\eta$ was set to $1\times 10^{-5}$ for all the training methods compared, and that of the accumulated perturbation $\eta_{2}$ was set to $2\times 10^{-5}$ . All the experimental results are obtained over three runs with different random initialization.
+
+# 5.2 Attack Algorithms and Evaluation Metrics
+
+Three representative attack algorithms were used to evaluate the adversarial robustness of models: TextFooler (Jin et al., 2020), BERT-Attack (Li et al., 2020) and TextBugger (Li et al., 2019). We use these attack algorithms reimplemented by TextAttack toolkit ${}^{2}$ (Morris et al.,2020). TextFooler and BERT-Attack adversarially perturb text inputs by synonym-based substitutions, whereas TextBugger can perform adversarial perturbation to inputs at both character and word levels. TextFooler generates synonyms using 50 nearest neighbors in the word embedding space, while BERT-Attack uses BERT to generate synonyms dynamically, and thus no defender can know in advance the synonyms used to replace original words by BERT-Attack.
+
+Following Li et al. (2021), we used three metrics to evaluate our methods and other competitors: clean accuracy (the accuracy of models on clean examples) is denoted as $\text{Clean\%}$ , accuracy under attack (the accuracy of models on adversarial examples under a certain attack) denoted as $Aua\%$ ; and number of queries (the average number of queries an attacker needs to perform successful attacks) denoted as $\#Query$ .
+
+The clean accuracy is calculated on the entire test set while the other two metrics are evaluated on the 1,000 examples randomly sampled from the test set. In Table 1, we also report the experimental results under the attack algorithms imposed with some constraints suggested by Li et al. (2021) to
+
+ensure the quality of adversarial examples generated. For all the attack algorithms, the maximum percentage of words that are allowed to be modified is set to 0.2 on SST-2, 0.3 on AGNEWS, and 0.1 on IMDB. We set the minimum semantic similarity between original sample and adversary to 0.84, set the maximum number of one word's synonyms to 50 and set the maximum number of queries to the victim model for each sample text to $50L$ , where $L$ serves as the length of sample text. For a fair comparison, the number of ascent steps $K$ was set to 10 for all adversarial training methods considered.
+
+# 5.3 Experimental Results
+
+From the numbers reported in Table 1, a handful of trends are readily apparent: (1) MAWP consistently outperforms all the competitors by a significant margin on the adversarial data across three different attacks on SST-2 and IMDB datasets, and it also achieves comparable results on the predication accuracy of clean examples; (2) Although the gap in the accuracy under attack between MAWP and other baseline methods is slightly reduced under the attack algorithms imposed by some constraints as recommended by (Li et al., 2021), the adversaries require much more queries to find adversarial examples for the models trained with MAWP. Note that the greater the average number of queries required by the adversaries is, the more difficult it is for the defense model to be compromised; (3) MAWP outperforms most of the competitors and achieves a relatively high clean accuracy of 95.23 on the test set of AGNEWS.
+
+# 5.4 Impact of Different Training Epochs
+
+The accumulated weight perturbation was introduced to accelerate the training process and further improve the adversarial robustness of models. To evaluate its effectiveness, we implemented a variant of MAWP, denoted as "MAWP w/o Accum", in which the accumulation operation is not used (i.e., reset the value of $\epsilon_{g}$ to 0 at every minibatch).
+
+As shown in Figure 1, we found that: (1) Both MAWP and its variant outperform FreeLB++ and FreeLB-AWP in the accuracy under attack especially after 10 epochs; (2) MAWP requires fewer training epochs than its variant of "MAWP w/o Accum" in order to achieve the same or similar Aua%; (3) The robustness of the model trained with FreeLB-AWP still can be improved even after 35 epochs while the Aua% of the model trained
+
+
+Figure 1: Accuracy under attack versus the number of training epochs. The experiments were conducted on SST-2 under the attack algorithm of TextFooler.
+
+with FreeLB++ drops obviously at the end of the training process.
+
+# 5.5 Impact of the Number of Ascent Steps
+
+We would like to understand how the choice of the number of ascent steps $K$ impacts the adversarial robustness of models. MAWP was compared to FreeLB++ with different numbers of ascent steps, and the results on SST-2 dataset are reported in Table 2.
+
+
Methods
Step K
Clean%
Text,Follower*
Bert-Attack*
TextBugger*
FreeLB++
3
92.53
24.67
22.00
26.07
5
93.10
27.53
24.87
28.90
10
92.17
33.83
27.63
33.93
15
91.14
36.20
28.27
37.40
MAWP
3
92.53
30.27
27.27
29.47
5
92.53
36.80
30.27
35.03
10
92.07
42.60
34.23
40.00
15
91.21
38.43
29.10
36.80
+
+Table 2: The impact of different ascent steps in clean accuracy and adversarial robustness on the validation set of SST-2.
+
+MAWP outperforms FreeLB++ in all the cases of $K$ under three different attack algorithms. As the number of ascent steps grows, the clean accuracy of the two models drops to 91.14 and 91.21 respectively, and the performance in adversarial robustness also drops slightly. It shows that too much perturbation does harm the model's clean accuracy and adversarial robustness. Therefore, we chose to set $K = 10$ in all the experiments except those reported in this subsection.
+
+# 5.6 Input Loss Landscape
+
+To give a reasonable explanation of the effect of an enlarged number of ascent steps, we visualized the input loss landscapes produced by the models trained with FreeLB++ and MAWP with the step $K \in \{5, 10, 15\}$ . The visualization was provided
+
+
+(a)
+
+
+(b)
+
+
+(c)
+
+
+(d)
+
+
+(e)
+Figure 2: The input loss landscapes produced by the models trained with FreeLB++ and MAWP. Sub-figures (a), (c), and (e) are the loss landscapes produced by the models trained with FreeLB++-5, FreeLB++-10, and FreeLB++-15 respectively, while sub-figure (b), (d), (f) are the loss landscapes produced by those trained with MAWP-5, MAWP-10, and MAWP-15 respectively.
+
+
+(f)
+
+by the models trained on SST-2 dataset.
+
+Specifically, we perturb the original input embedding $\pmb{x}$ to $\pmb{x} + \alpha \delta_{1} + \beta \delta_{2}$ , where $\delta_{1}$ and $\delta_{2}$ denote two random Gaussian direction vectors with normalization, and $\alpha$ and $\beta$ are two scalar parameters. Then, the corresponding input loss landscapes can be obtained and are shown in Figure 2. As we can see from Figure 2-(a), (c) and (e), the input loss landscapes will gradually become more flattened as the number of ascent steps $K$ grows when the BERT-based model is trained by FreeLB++ (an enhanced adversarial training method of FreeLB). The similar trend also can be observed from Figure 2-(b), 2(d) and 2(f) when MAWP is used to train the models. As the number of ascent steps $K$ grows, the region with lower loss will become larger, leading to more robust models. However, too strong perturbations without the norm-bounded constraints will distort the decision boundary of the models, which reduces the models' generalization and robustness.
+
+To sum up, a larger number of ascent steps will make the input landscape flatter and enable the
+
+model less impacted by the adversarial perturbations. However, the model with a too flat landscape (i.e., an over-smooth model) will result in poor generalization and robustness.
+
+# 5.7 The Features Captured with and without Weight Perturbation
+
+We want to gain a deeper understanding of why the weight perturbation will lead to more robust models. We examined the differences in the feature vectors produced by the resulting models between original and perturbed examples. Given a neural model $f$ , for an original example $x$ we obtain a perturbed example $\hat{x}$ that stays very close to the decision boundary of the model $f$ and $f(x) = f(\hat{x})$ . Note that the smaller the difference, the harder a model can be compromised by the adversaries, and the more robust the model will be generally.
+
+We chose to use TextFooler* as the attack algorithm when generating such perturbed examples. Note that TextFooler* was imposed by some constraints highly recommended by (Li et al., 2021). We used this variant of TextFooler because we want the results yielded by different defense methods can be compared in a more controlled manner. We define the following distance function $\mathcal{G}(x, \hat{x})$ to measure the difference between the feature vectors of $x$ and $\hat{x}$ :
+
+$$
+\mathcal {G} (x, \hat {x}) = \frac {\| h ^ {j} (x) - h ^ {j} (\hat {x}) \| _ {2}}{\| h ^ {j} (x) \| _ {2}} \tag {9}
+$$
+
+where $h^{j}(x)$ denotes the averaged feature vector extracted from the $j$ -th layer of the BERT-based model for an input $x$ . We chose to use $l_{2}$ -norm for calculating the distances since the size of perturbation in the embedding space is usually measured based on $l_{2}$ -norm for almost all the textual attack algorithms.
+
+We first investigated a set of an input $x$ and its quasi-adversarial example $\hat{x}$ (stays very close to the decision boundary but does not make the model change its prediction yet) produced by different defense methods where $\hat{x}$ was generated against the BERT-based model. The reported differences are averaged over all the examples in SST-2 test set. As shown in Figure 3-(a), we found that: (1) The distances between the features of $x$ and $\hat{x}$ generated at the 0-th layer (i.e., embeddings) are quite similar for almost all the models except InfoBERT, which aggressively compresses the embeddings; (2) At the deeper layers, the differences in this distance gradually increase among the models. More
+
+
+
+
+Figure 3: The differences in the feature vector representations between original and perturbed examples produced at each BERT layer on SST-2, where the number "0" denotes the word embedding layer, "13" the output layer, and the numbers in between the hidden layers. (a) The perturbed examples were generated against the BERT-based model trained without any adversarial training method; (b) The perturbed examples were generated against respective models under the text-time attack.
+
+adversarily robust models have a lower distance between the features of $x$ and $\hat{x}$ generated at every layer, especially for the last few layers.
+
+We also examined the differences between the feature vectors of original examples and perturbed ones generated against respective models (i.e., under test-time attacks). As shown in Figure 3-(b), the model trained with MAWP archived a relatively low distance in the generated feature space compared to other adversarial training methods, indicating that MAWP can keep such a distance smaller in the feature space produced at every network layer, which makes the model trained with MAWP more resistant to the perturbations imposed in the inputs.
+
+# 6 Conclusion
+
+This study is among the first ones to explore the feasibility of improving the adversarial robustness of neural NLP models by performing the perturbations in the parameter space (i.e., weights) rather than the input feature space (i.e., word embeddings). We experimentally demonstrate that the weight perturbation can be used to find a better solution in the parameter space by minimizing the adversarial loss with a multi-step, gradient-guided optimization method. We also show that the proposed method is complementary to existing adversarial training methods, and the combination of our
+
+method and FreeLB achieved state-of-the-art accuracy on both clean and adversarial examples on multiple benchmark datasets.
+
+# Limitations
+
+We evaluated the proposed mixed adversarial training method with accumulated weight perturbation (MAWP) under the word substitution-based attacks only in this study. We are aware that there are a wide range of textual adversarial attacks, including adding, deleting or modifying characters or words or other language units under certain semantics-preserving constraints. In the future, we would like to investigate how well the MAWP can defend against other types of adversarial attacks. In the current implementation of the MAWP, the weight perturbation needs to be calculated and applied at each adversarial training step, which requires a relatively longer training time. It is also planned to implement the weight perturbations for training NLP models in a more efficient way.
+
+# Acknowledgements
+
+The authors would like to thank the anonymous reviewers for their valuable comments. This work was supported by National Science Foundation of China (No. 62076068), Shanghai Municipal Science and Technology Major Project (No. 2021SHZDZX0103), and Zhangjiang Lab. Chang is supported in part by Cisco and Sloan fellowship. Hsieh is supported in part by NSF IIS-2008173 and IIS-2048280.
+
+# References
+
+Minhao Cheng, Wei Wei, and Cho-Jui Hsieh. 2019. Evaluating and enhancing the robustness of dialogue systems: A case study on a negotiation agent. In *NAACL*.
+Minhao Cheng, Jinfeng Yi, Pin-Yu Chen, Huan Zhang, and Cho-Jui Hsieh. 2020. Seq2sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT*.
+Xinshuai Dong, Anh Tuan Luu, Rongrong Ji, and Hong Liu. 2021. Towards robustness against natural language word substitutions. In ICLR.
+
+Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. 2021. Sharpness-aware minimization for efficiently improving generalization. In ICLR.
+Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In 2018 IEEE Security and Privacy Workshops.
+Alex Graves. 2011. Practical variational inference for neural networks. In NIPS.
+Zhezhi He, Adnan Siraj Rakin, and Deliang Fan. 2019. Parametric noise injection: Trainable randomness to improve deep neural network robustness against adversarial attack. In 2019 CVPR.
+Po-Sen Huang, Robert Stanforth, Johannes Welbl, Chris Dyer, Dani Yogatama, Sven Gowal, Krishnamurthy Dvijotham, and Pushmeet Kohli. 2019. Achieving verified robustness to symbol substitutions via interval bound propagation. In EMNLP, Hong Kong, China.
+Robin Jia, Aditi Raghunathan, Kerem Göksel, and Percy Liang. 2019. Certified robustness to adversarial word substitutions. In EMNLP.
+Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT really robust? a strong baseline for natural language attack on text classification and entailment. AAAI.
+Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2019. Textbugger: Generating adversarial text against real-world applications. Proceedings 2019 Network and Distributed System Security Symposium.
+Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020. BERT-ATTACK: Adversarial attack against BERT using BERT. In EMNLP, Online.
+Linyang Li and Xipeng Qiu. 2020. Tavat: Token-aware virtual adversarial training for language understanding. arXiv preprint arXiv:2004.14543.
+Zongyi Li, Jianhan Xu, Jiehang Zeng, Linyang Li, Xiaoqing Zheng, Qi Zhang, Kai-Wei Chang, and Cho-Jui Hsieh. 2021. Searching for an effective defender: Benchmarking defense against adversarial word substitution.
+Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2018. Deep text classification can be fooled. In IJCAI'18. AAAI Press.
+Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In ACL, Portland, Oregon, USA.
+Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In ICLR. OpenReview.net.
+
+Takeru Miyato, Andrew M. Dai, and Ian Goodfellow. 2017. Adversarial training methods for semi-supervised text classification.
+John Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in nlp. In EMNLP.
+Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In ACL.
+Motoki Sato, Jun Suzuki, Hiroyuki Shindo, and Yuji Matsumoto. 2018. Interpretable adversarial perturbation in input embedding space for text.
+Chenglei Si, Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, and Maosong Sun. 2021. Better robustness by more coverage: Adversarial training with mixup augmentation for robust fine-tuning.
+Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP.
+Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks.
+Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, and Jingjing Liu. 2020. InfoBERT: Improving robustness of language models from an information theoretic perspective. arXiv preprint arXiv:2010.02329.
+Dongxian Wu, Shu-Tao Xia, and Yisen Wang. 2020. Adversarial weight perturbation helps robust generalization. In NeurIPS.
+Mao Ye, Chengyue Gong, and Qiang Liu. 2020. SAFER: A structure-free approach for certified robustness to adversarial word substitutions. In ACL.
+Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NIPS.
+Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2018. Generating natural adversarial examples. In ICLR.
+Xiaoqing Zheng, Jiehang Zeng, Yi Zhou, Cho-Jui Hsieh, Minhao Cheng, and Xuanjing Huang. 2020. Evaluating and enhancing the robustness of neural network-based dependency parsing models with adversarial examples. In ACL.
+Yi Zhou, Xiaqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, and Xuanjing Huang. 2021. Defense against synonym substitution-based adversarial attacks via Dirichlet neighborhood ensemble. In ACL.
+
+Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. 2020. FreeLB: Enhanced adversarial training for natural language understanding. In ICLR.
\ No newline at end of file
diff --git a/weightperturbationasdefenseagainstadversarialwordsubstitutions/images.zip b/weightperturbationasdefenseagainstadversarialwordsubstitutions/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a6e26bd73ee0ef4c6608588dbfdd9927fb8e83ff
--- /dev/null
+++ b/weightperturbationasdefenseagainstadversarialwordsubstitutions/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3ab8484c93df0d84192bfd587e40f86ab6674bd35b8eded946ab8fea37615ae5
+size 366776
diff --git a/weightperturbationasdefenseagainstadversarialwordsubstitutions/layout.json b/weightperturbationasdefenseagainstadversarialwordsubstitutions/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..5b07a3825893a9159e258ca10c19d155facbbf43
--- /dev/null
+++ b/weightperturbationasdefenseagainstadversarialwordsubstitutions/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4e2a0324b0c82acb52d0aaaeab8b133b87c8fc9df2cf013cf5ce23702fee2f86
+size 422641
diff --git a/whatdocompressedmultilingualmachinetranslationmodelsforget/ccbe30dc-e167-46ec-8a9e-023a29bb405f_content_list.json b/whatdocompressedmultilingualmachinetranslationmodelsforget/ccbe30dc-e167-46ec-8a9e-023a29bb405f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8b38b9253953fdfbd13b7c284daf7ad06ec422a4
--- /dev/null
+++ b/whatdocompressedmultilingualmachinetranslationmodelsforget/ccbe30dc-e167-46ec-8a9e-023a29bb405f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:857cc640d4631161fe1fade4e4f4e7f26732186fd53e9f9d9152f01830c54e8d
+size 119554
diff --git a/whatdocompressedmultilingualmachinetranslationmodelsforget/ccbe30dc-e167-46ec-8a9e-023a29bb405f_model.json b/whatdocompressedmultilingualmachinetranslationmodelsforget/ccbe30dc-e167-46ec-8a9e-023a29bb405f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..4ca629722a2a33f239413e22ec91f6c694933188
--- /dev/null
+++ b/whatdocompressedmultilingualmachinetranslationmodelsforget/ccbe30dc-e167-46ec-8a9e-023a29bb405f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d35f0d8707c621ced95f747ab33e16404caba892b1ef1e97c28f5213e13e903
+size 150332
diff --git a/whatdocompressedmultilingualmachinetranslationmodelsforget/ccbe30dc-e167-46ec-8a9e-023a29bb405f_origin.pdf b/whatdocompressedmultilingualmachinetranslationmodelsforget/ccbe30dc-e167-46ec-8a9e-023a29bb405f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a031f9b826bf72f7ac09cfdc349ece7d812b176b
--- /dev/null
+++ b/whatdocompressedmultilingualmachinetranslationmodelsforget/ccbe30dc-e167-46ec-8a9e-023a29bb405f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ca9a5e93cf081ac7211b554427761d8f44f1fd60adeb86dd6661a00a6dd80d47
+size 3477567
diff --git a/whatdocompressedmultilingualmachinetranslationmodelsforget/full.md b/whatdocompressedmultilingualmachinetranslationmodelsforget/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b58bed8190cc66547ee462aa65ea0f9bb5170294
--- /dev/null
+++ b/whatdocompressedmultilingualmachinetranslationmodelsforget/full.md
@@ -0,0 +1,510 @@
+# What Do Compressed Multilingual Machine Translation Models Forget?
+
+Alireza Mohammadshahi\*1,2,3 Vassilina Nikoulina $^{1}$ Alexandre Berard $^{1}$ Caroline Brun $^{1}$ James Henderson $^{2}$ Laurent Besacier $^{1}$
+
+$^{1}$ Naver Labs Europe $^{2}$ IDIAP Research Institute $^{3}$ EPFL
+
+{first.last}@naverlabs.com
+
+{alireza.mohammadshahi, james.henderson}@idiap.ch
+
+# Abstract
+
+Recently, very large pre-trained models achieve state-of-the-art results in various natural language processing (NLP) tasks, but their size makes it more challenging to apply them in resource-constrained environments. Compression techniques allow to drastically reduce the size of the models and therefore their inference time with negligible impact on top-tier metrics. However, the general performance averaged across multiple tasks and/or languages may hide a drastic performance drop on under-represented features, which could result in the amplification of biases encoded by the models. In this work, we assess the impact of compression methods on Multilingual Neural Machine Translation models (MNMT) for various language groups, gender, and semantic biases by extensive analysis of compressed models on different machine translation benchmarks, i.e. FLORES-101, MT-Gender, and DiBiMT. We show that the performance of under-represented languages drops significantly, while the average BLEU metric only slightly decreases. Interestingly, the removal of noisy memorization with compression leads to a significant improvement for some medium-resource languages. Finally, we demonstrate that compression amplifies intrinsic gender and semantic biases, even in high-resource languages. $^{1}$
+
+# 1 Introduction
+
+Over the recent years, pre-trained Transformer (Vaswani et al., 2017) models have reached a substantial improvement in a variety of Natural Language Processing (NLP) tasks. This improvement mostly comes from increasing their parameter size (Devlin et al., 2019; Fan et al., 2020; Brown et al., 2020; Zhang et al., 2022) which escalates the cost of training (Yang et al., 2019; Strubell et al., 2019; Patterson et al., 2021), and hurts the
+
+memory footprint and latency at inference (Dai et al., 2019; Fan et al., 2020; Wang et al., 2022). Specially in Neural Machine Translation (NMT) task, massively MNMT models (Aharoni et al., 2019; Fan et al., 2020; Tang et al., 2020; Zhang et al., 2020) demonstrated promising results. They have been shown particularly interesting for low-resource languages which benefit a lot from knowledge transfer. On the other hand, it has also been observed that the curse of multilinguality may hurt the performance in high-resource languages. The strategy employed to overcome this problem (Aharoni et al., 2019; Fan et al., 2020; Goyal et al., 2021a) is to scale up the number of parameters, thus attaining state-of-the-art performance in both high and low-resource languages.
+
+Consequently, efficient inference with these very large models has become a crucial problem. This challenge can be overcome through model compression, e.g. knowledge distillation (Kim and Rush, 2016; Sanh et al., 2019; Li et al., 2020; Wang et al., 2021), pruning (Michael H. Zhu, 2018; Frankle and Carbin, 2019; Behnke and Heafield, 2020; Zhang et al., 2021), and quantization (Xu et al., 2018; Wu et al., 2020; Bondarenko et al., 2021; Kim et al., 2021a; Tao et al., 2022; Yang et al., 2022; Yao et al., 2022). These methods can be applied with a little loss in top-line metrics, while reducing the memory-footprint, and enhancing inference time. However, recent work (Hooker et al., 2020; Ahia et al., 2021; Xu et al., 2021; Du et al., 2021; Renduchintala et al., 2021) has demonstrated that under-represented features can suffer from a drastic decrease in performance which is not necessarily reflected by global (aggregated) metrics. In multilingual NMT, the overall metrics are often reported as an average across all the language pairs, where the performance between individual language pairs can vary a lot. Therefore it is even more critical to understand what would be the exact impact of compression on multilingual
+
+NMT models, beyond the aggregated metrics.
+
+In this work, we illustrate the impacts of applying compression methods to massively multilingual NMT models, that are pre-trained in a great number of languages in several domains. To the best of our knowledge, this is the first attempt to analyze how compression impacts massively multilingual models. We hope it could be a starting point to bringing a comprehensive understanding between fairness and compression in multilingual NMT models. In this study, we concentrate on light compression techniques, specifically post-training quantization and magnitude pruning without any further fin-tuning. We exploit the recent and largest MNMT model, M2M-100 (Fan et al., 2020) that covers 100 languages and contains nearly 12B parameters and analyze the impact of compression on different language pairs evaluated on FLORES-101 benchmark (Goyal et al., 2021b) (covering 101 languages). We also consider MT-Gender (Stanovsky et al., 2019) and DiBiMT (Campolungo et al., 2022) benchmarks allowing us to assess different types of biases that could be present in the data and MNMT model. To sum up, our contributions are as follows:
+
+- We conduct extensive analysis on the effects of light compression methods for massively multilingual NMT models.
+- On FLORES-101 (Goyal et al., 2021b), we discover that while the overall performance is barely impacted by the compression, a subset of language pairs corresponding to under-represented languages during training suffers an extreme drop in performance.
+- Also, we observe an important improvement for some language pairs after the compression. We hypothesize that this is due to the removal of noisy memorization.
+- We show that the compression amplifies gender and semantic biases, hidden in MNMT models across several high-resource languages by evaluating on MT-Gender, and DiBiMT benchmarks.
+
+In section 2, we describe light compression methods we rely on, and MNMT model. Section 3 presents our experimental setup and evaluation benchmarks. Section 4 shows the analysis of the impact of the compression for NMT benchmarks.
+
+# 2 Model and Compression Techniques
+
+# 2.1 M2M-100 Model
+
+We assume that potential biases, discovered after the compression are mostly related to the training data, than the model architecture, as previous work (Hooker et al., 2020) demonstrated for the image classification task.
+
+So, we use M2M-100 (Fan et al., 2020), as it is the best performing and the largest massively multilingual MT model, which covers more than 10K language directions, including a great number of low and medium-resource language pairs. Other previous work (Aharoni et al., 2019; Tang et al., 2020) cover fewer languages, especially from low and medium-resource languages, and have worse results compared to M2M-100.
+
+M2M-100 is trained on large-scale multilingual corpora (El-Kishky et al., 2020; Schwenk et al., 2021) with a novel data mining procedure, that uses language similarities. The biggest model introduced consists of 24 encoder, and 24 decoder Transformer (Vaswani et al., 2017) layers. Using several scaling techniques, it is trained with nearly 12B parameters. We refer to Fan et al. (2020) for more details. In all our experiments, we exploit the largest M2M-100 model.
+
+# 2.2 Light Compression Techniques
+
+Compression techniques without any further fine-tuning are defined as light compression methods. We do not fine-tune the compressed models due to the massive computation cost, as we have to fine-tune the model for all language pairs to provide a fair comparison.3 We discuss our methods in the following paragraphs.
+
+Magnitude Pruning is a popular technique for both memory footprint reduction and inference speed-up. It reduces the model size by removing redundant nodes that do not contribute to the resulting performance. It usually achieves comparable results with state-of-the-art models with further fine-tuning (Michael H. Zhu, 2018; Gale et al., 2019; Menghani, 2021; Ahia et al., 2021). In this work, we apply post-training magnitude pruning for each layer of Transformer (including Embedding layers). Given $\Theta_{l}$ as the parameters of Transformer layer $l$ and $p$ as the sparsity ratio,
+
+
+(a) MT-Gender example: for a correct translation, system will have to link English pronoun 'her' to 'doctor'.
+
+
+(b) DiBiMT Example. German instance contains wrong word senses, while Spanish one is correct.
+Figure 1: Samples of MT-Gender (Stanovsky et al., 2019) and DiBiMT (Campolungo et al., 2022) benchmarks.
+
+the output of the pruning function is $\Theta_l^{\prime}$ where $p$ percentage of weights sets to zero.
+
+Post-Training Quantization Recent work applies post-training, and training-aware quantization to pre-trained machine translation and language models (Wu et al., 2020; Menghani, 2021; Liang et al., 2021; Bondarenko et al., 2021; Wei et al., 2022), and achieves promising results while lowering the inference latency, and the model size. In this work, we exploit the post-training quantization method proposed by Wu et al. (2020), converting all weights and activations from 32-bit floating-point values to an 8-bit fixed-point integer. Specifically, it quantizes linear layers input and weights, matrix multiplications, and the residual summations for Transformer (Vaswani et al., 2017).
+
+# 3 Experimental Setup
+
+# 3.1 Evaluation Benchmarks
+
+We analyze our compressed models on three different NMT benchmarks. We exploit FLORES-101 (Goyal et al., 2021b) to study the model behavior based on the amount of available resources for each language. MT-Gender (Stanovsky et al., 2019) is used to study the impact of compression on gender bias. Finally, we evaluate on DiBiMT (Campolungo et al., 2022) to illustrate the compression effect on semantic biases.
+
+FLORES-101 is a many-to-many NMT evaluation benchmark, including sentences extracted from English Wikipedia. It is translated into 101 languages by human translators, enabling 10,100 language directions to be evaluated. In this paper, we evaluate our models on devtest subset of the FLORES-101 (Goyal et al., 2021b) benchmark. This benchmark provides test sets comparable across all the language pairs, and thus allows us to assess to what extent each language pair gets impacted by the compression techniques.
+
+MT-Gender (Stanovsky et al., 2019) is an English-centric multilingual NMT benchmark for evaluating gender bias in multiple target languages: Arabic, Ukrainian, Hebrew, Russian, Italian, French, Spanish, and German. The method relies on automatic alignment and morphological analysis, without the need for gold translations. An example is shown in Figure 1a. Later, Kocmi et al. (2020) extends the benchmark by adding Czech and Polish languages. We choose MT-Gender as it covers more languages compared to other existing MT gender bias benchmarks (Bentivogli et al., 2020; Renduchintala et al., 2021; Savoldi et al., 2022).
+
+DiBiMT is the first fully manually-crafted NMT benchmark for evaluating word sense disambiguation on five high-resource languages: Chinese, German, Italian, Russian, and Spanish (Campolungo et al., 2022), where the source language is English. Besides, they propose several bias evaluation metrics to compare different models (defined in Section 4.3). As shown in Figure 1b, given English source sentence, specific word $(w_{i})$ with associated synset $(\sigma)$ , and language $L$ , set of GOOD, and BAD translation candidates include sentences that do and do not contain set of correct translation of $\sigma$ in language $L$ , respectively. More details can be found in Campolungo et al. (2022).
+
+# 3.2 Implementation Details
+
+We use pre-trained M2M-100 12B model. For quantization, we use Mean Squared Error (MSE) calibration. For weights, we use default per-channel calibration. In FLORES-101,
+
+
Resource Type
Criterion
No. Languages
Very-Low
|L|≤100k
16
Low
100k < |L| ≤ 1M
40
Medium
1M < |L| ≤ 100M
38
High
100M < |L|
7
+
+
+Figure 2: Average spBLEU score for different sparsity ratios on 9 FLORES-101 language pairs, selected from all pairwise combinations of "low", "medium", and "high" language resource categories.
+
+we use SentencePiece BLEU (spBLEU) score for the evaluation, as it is shown to be fair for the multilingual comparison (Goyal et al., 2021b). Additionally, we use character $n$ -gram F-score (ChrF) (Popovic, 2015) metric to compare compressed models with M2M-100 model. We evaluate our compressed models on language pairs in which M2M-100 12B model (Fan et al., 2020) has reasonable performance. This leaves us with 3,763 language directions. All experiments are computed on 2 NVIDIA A100-40GB GPUs.
+
+# 4 Results and Discussion
+
+# 4.1 Compression Impact Across Languages
+
+Language Resource Type. The true amount of available training data for a language is difficult to estimate, as it relies both on the quality and quantity of the data. Inspired by (Goyal et al., 2021b), we classify languages into four categories, based on the amount of available data to/from English. The distribution of language resource types is illustrated in Table 1.
+
+Table 1: Distribution of lang. in FLORES-101 based on amount of available data to/from English $(|L|)$
+
+
Model
Memory size
Avg spBLEU
drop(%)
M2M-100
1×
22.44
-
Pruned 30% M2M-100
0.7×
20.95
6.6
Pruned 45% M2M-100
0.55×
15.12
32.6
Quantized M2M-100
0.25×
22.31
0.6
+
+Table 2: Memory size and average spBLEU score of M2M-100, and compressed models on FLORES-101.
+
+Magnitude pruning: Sparsity Ratio $(p)$ Selection. Figure 2 shows the average spBLEU score of different sparsity ratios for a subset of language pairs. $^{10}$ Based on this preliminary analysis, we decide to analyze the model behavior for two sparsity ratios, $30\%$ which is the maximum sparsity ratio for which the compressed model mostly keeps the performance, and $45\%$ for which the performance starts to drop drastically. Therefore, we evaluate the pruned models on sparsity ratios of $30\%$ , and $45\%$ for further experiments.
+
+# 4.1.1 Main Results
+
+Table 211 illustrates memory footprint and spBLEU scores on FLORES-101 dataset averaged over 3.7k language pairs retained for analysis.12 Pruned $30\%$ model suffers from a slight drop in performance, while quantization mostly preserves the same average spBLEU score. Both quantized and pruned $30\%$ models reduce the memory footprint by $75\%$ and $30\%$ , respectively. The performance of $45\%$ pruned model drops significantly. In what follows, we check the behavior of each language pair after compression along different criteria.
+
+Amount of Bitext Data. Figure 3 shows the relative spBLEU performance of compressed models for each language pair $(x,y)$ compared to the M2M-100. The X-axis corresponds to the amount of bitext data with English defined as $\rho_{x,y} = \min (\rho_x,\rho_y)$ where $\rho_{x}$ is the amount of Bi-text data with English for language $x$ . For pruned $30\%$ model, while the average spBLEU score drops by $6.63\%$ (shown in Table 2), there is a subset of language pairs that drops drastically (shown as "+" ). Interestingly, there is a subset of language pairs that get significantly improved after compression (shown as " $\times "$ ). For pruned $45\%$ model, there is also a subset of languages with more than $50\%$
+
+
+(a) Pruned $30\%$ Model
+
+
+(b) Pruned $45\%$ Model
+
+
+(c) Quantized Model
+
+
+Figure 3: Relative spBLEU difference $(\%)$ between the compressed models and M2M-100 model based on the amount of available Bitext data with English $(\rho_{x,y})$ . Green points $(^{\prime \prime}\times^{\prime \prime})$ are language pairs with significant improvement. Red points $(^{n + n})$ correspond to language pairs with a drastic performance drop.
+Figure 4: Relative spBLEU difference (\%) between the compressed models and M2M-100 model grouped by the resource type of language pairs.
+
+drop in performance, while the average spBLEU degradation is $32.62\%$ . For the quantized model which preserves almost the same average spBLEU, we see that there is also a set of languages suffering from a significant drop, and others being significantly improved. The behavior of compressed models in these specific language pairs is further studied in Section 4.1.2 and 4.1.3, respectively.
+
+Resource Type. We study the performance of the compressed models based on the resource category of language pairs, which is defined as the category of $\rho_{x,y}$ for a pair $x\to y$ . Figure 4
+
+demonstrates the relative spBLEU drop for each category of the compressed models. For pruning $30\%$ , the relative spBLEU drop is inversely proportional to the amount of training data for different categories, which confirms that pruning disproportionately impacts the performance of under-represented language pairs, while the average performance is near to the base M2M-100 model (as shown in Table 2). For quantization, we see a much smaller decrease in all language categories. Furthermore, we show that the resource type of the target language is more crucial than the source language,[13] meaning that the performance of language pairs with "low" and "very-low" target languages drops drastically after the compression.
+
+ChrF Difference. For more fine-grained analysis, we perform sentence-level ChrF (Popovic, 2015) $^{14}$ evaluation. We define $\Delta = \mathrm{ChrF}_{\mathrm{comp}} - \mathrm{ChrF}_{\mathrm{base}}$ where $\mathrm{ChrF}_{\mathrm{comp}}$ and $\mathrm{ChrF}_{\mathrm{base}}$ correspond to ChrF of compressed and baseline models, respectively. Sentences with $\Delta$ close to zero are less impacted by compression, while those further away from zero are the most impacted (either positively or negatively) by compression. We define Losing Pairs as a set of instances where $\Delta < -0.5$ , and Winning Pairs as a set of instances where $\Delta > 0.5$ . Thus, identified samples could be seen as
+
+
Model
Off-T(%) base
Off-T(%) comp
Total No.
Pruned 30%
5.9
13.7(+7.8)
1,521
Pruned 45%
6.4
30.3(+23.9)
10,314
Quantized
5.2
17.5(+12.3)
268
+
+
+Figure 5: Absolute number of sentences in each language pair category for different $\Delta$ bins.
+Figure 6: Cross-attention matrices of an on-target losing sentence for the M2M-100 model, and pruned $30\%$ model. Output translations show the hallucination for the compressed model. Source language is Asturian.
+
+an adaptation of Compression-Identified Exemplars introduced by (Hooker et al., 2019) for the case of translation. Figure $5^{15}$ plots the distribution of sentences from different language pair groups along with the different $\Delta$ bins for these two subsets. $^{16}$
+
+In the following, we comprehensively analyze the behavior of the model for Losing Pairs, and Winning Pairs.[17]
+
+# 4.1.2 Analysis of Losing Pairs
+
+As shown in Figure 5 (left side), losing pairs belong to very-low, low, and medium-resource languages, that are mostly under-represented
+
+
+(a) M2M-100 Model
+
+
+(b) Compressed Model
+
+Table 3: Percentage of off-target translations for M2M-100 (base), and compressed models (comp). Last column is the total number of losing sentences (both on-and off-targets) for each compressed model.
+
+
Reference
To better represent traffic flow, relationships have been established between the three main characteristics: (1) flow, (2) density, and (3) velocity.
M2M-100
To better represent the flow of traffic, relationships have been established between three main characteristics: (1) flow, (2) density, and (3) speed.
Compressed
It is believed to have been one of the earliest inhabitants of this place, and it is believed to be one of the oldest inhabitants of this place.
+
+(c) Reference and output translations of M2M-100, and compressed models.
+
+subsets during training. $^{18}$ We manually inspected some of the translations from the losing pairs sets and we have identified 2 main reasons for the drop in performance which are off-target translations (translation in the wrong target language) and hallucinations. In what follows we attempt to quantify these two phenomena.
+
+Off-Target. We use FastText language identifier (Joulin et al., 2016a,b) to predict the languages of reference and the translated sentences. Table 3 shows the total number of losing sentences and percentage of off-target translations for both baseline and compressed models. $^{19}$ As the sparsity increases, the compressed model predicts more off-target translations (7.8% and 23.9% increase from baseline). Quantization also increases the percentage of off-target translation by 12.3%.
+
+Hallucinations. It refers to the case, in which a model generates an output unrelated to the source sentence. Lee et al. (2018) have shown
+
+
Model
λ
No. On-Target sents
Pruned 30%
2.95
1,312
Pruned 45%
3.01
7,192
Quantized
1.96
221
+
+Table 4: Total number of on-target (excluding off-target translations) sentences and relative alignment $(\lambda)$ metric on losing pair subset.
+
+
Model
λ
Total No.
Pruned 30% M2M-100
0.42
863
Pruned 45% M2M-100
0.15
1,455
Quantized M2M-100
0.52
308
+
+that the cases of hallucinations have different cross-attention matrices. Figure 6 shows an example of cross-attention matrices for a losing sentence, where the translation of the compressed model is considered as a hallucination. As expected, translated tokens ignore the alignment with the source sequence. To quantitatively analyze the hallucination effect on all on-target losing sentences (excluding off-target translations), we define the relative alignment metric as:
+
+$$
+\lambda = \frac {\operatorname {v a r} _ {\text {c o m p}}}{\operatorname {v a r} _ {\text {b a s e}}} \tag {1}
+$$
+
+where var is defined as:
+
+$$
+\left\{ \begin{array}{l} \operatorname {v a r} = \frac {1}{| I | . | J |} \sum_ {i \in I} \sum_ {j \in J} \alpha_ {i, j} (\mu_ {i} - j) ^ {2} \\ \mu_ {i} = \sum_ {j \in J} j. \alpha_ {i, j} \end{array} \right. \tag {2}
+$$
+
+where $I$ and $J$ correspond to sequences of source and target languages, respectively; $\alpha_{i,j}$ is the attention weight, where we use the average attention over all layers and all attention heads. Inspired by Vig and Belinkov (2019); Kim et al. (2021b), the variance (var) is high for cases where the target sequence pays attention to a very small subset of source tokens (hallucination), while it is low when the cross-attention matrix is near to the diagonal matrix (approximation of perfect alignment matrix). Table 4 displays the relative alignment $(\lambda)$ metric for different compressed models. As the metric is higher than "1" for compressed models, it confirms that target translations of compressed models contain more hallucinated sentences. Lastly, we provide a list of the most affected language pairs in Appendix H for further studies.
+
+
+(a) M2M-100 Model
+Figure 7: Cross-attention matrices of a winning sentence for the M2M-100 model, and pruned $30\%$ model. Output translations show the hallucination for M2M-100 model. Source language is Afrikaans.
+
+
+(b) Compressed Model
+
+Table 5: The relative alignment $(\lambda)$ metric for different compressed models on winning pairs subset.
+
+
Reference
Crossties were introduced fairly early to hold the tracks in place. Gradually, however, it was realised that tracks would be more efficient if they had a stip of iron on the top.
M2M-100
Cucumbers Zucchini Summer Squash Carrots Kale Radishes Broccoli Rosemary Basil Pole Beans Peas Arugula Bibb Lettuce Cutting Lettuces Potatoes
Compressed
Crossbars were inserted fairly early in order to keep the tracks in place. Gradually, however, it was realized that the tracks would be more effective if there were an iron strip at the top.
+
+(c) Reference and output translations of M2M-100, and compressed models.
+
+# 4.1.3 Analysis of Winning Pairs
+
+When manually inspecting some examples from the translation of winning pairs, we realize that a lot of them are matching cases where the baseline model generates hallucinations, while the compressed model generates acceptable translations, as shown in Figure 7. We recall that in Figure 5, most of the winning pairs (right side) belong to medium-resource languages $^{20}$ , which include a moderate amount of training instances, and could contain some poorly aligned parallel sentences. Raunak et al. (2021) connects the phenomenon of hallucination to the corpus-level noise and suggests that it could also be amplified by back-translation (used for data augmentation to training M2M-100 model). Therefore, the compression seems to remove the memorization of noisy samples, which is more important for medium-resource languages, thus fixing some of
+
+
+Figure 8: Number of sentences in winning pairs, added to each language category after increasing the sparsity from $30\%$ to $45\%$ .
+
+the cases of hallucination. In Table 5, we compute the total number of winning sentences, and the relative alignment metric $(\lambda)$ for compressed models and M2M-100 model. As $\lambda$ is lower than "1", it confirms that the compression removes the noisy memorization of medium-resource languages, and benefits the generalization of the model. Ahia et al. (2021) made a similar observation in the case of bilingual MT models. Interestingly, the number of winning sentences increases as the model gets sparser (1,455 vs. 863). Figure 8 shows that new sentences mostly belong to medium-resource languages. Finally, a list of most winning language pairs is provided in Appendix H.
+
+# 4.2 Gender Bias Analysis
+
+We evaluate M2M-100 and our compressed models on MT-Gender benchmark (Stanovsky et al., 2019; Kocmi et al., 2020). Inspired by Boito et al. (2022), we use a fairness metric to compare the behavior of compressed models on male and female subsets:
+
+$$
+\psi = \frac {f _ {m} - f _ {f}}{f _ {m} + f _ {f}} \tag {3}
+$$
+
+where $f_{m}$ , and $f_{f}$ refer to F1 scores of male and female, respectively. If $\psi$ is near zero, then the model is not biased toward any gender, however, $\psi$ values of +1 or -1 mean that the model is highly biased toward male or female, respectively. We extend the fairness metric to pro
+
+
Model
ψ (%)
ψ* (%)
Original M2M-100
17.36
16.51
Pruned 30% M2M-100
21.65 (+24.7)
19.52 (+18.25)
Pruned 45% M2M-100
29.03 (+67.2)
20.8 (+25.9)
Quantized M2M-100
18.24 (+5.1)
15.53 (-5.8)
+
+Table 6: Average fairness metrics over languages of MT-Gender (Stanovsky et al., 2019). Numbers in parentheses are the relative score differences between a specific compressed model and M2M-100 model.
+
+
Model
SFII
SPDI
MFS
MFS+
AVG
Baseline
77.6
71.6
52.8
87.6
72.4
Pruned 30%
76.4
72.2
52.9
87.8
72.4
Pruned 45%
80.2
74.8
53.4
87.8
74.1
Quantized
79.5
74
53.7
88.8
74
+
+Table 7: The average semantic bias metrics over languages of DiBiMT (Campolungo et al., 2022). Last column is the average score of bias metrics for each model.
+
+and anti-stereotypical subsets as follows:21:
+
+$$
+\psi^ {*} = \left| \psi_ {\text {a n t i}} - \psi_ {\text {p r o}} \right| \tag {4}
+$$
+
+where $\psi_{pro}$ , and $\psi_{anti}$ belong to the fairness metric of pro- and anti-stereotypical sections. Intuitively, if the model has different behaviors in pro- and anti-stereotypical subsets, then it results in increasing the absolute difference of $\psi_{anti}$ and $\psi_{pro}$ .22
+
+Average fairness metrics over 10 languages are illustrated in Table 6. Increasing the sparsity ratio results in a more biased model as both $\psi$ and $\psi^{*}$ relatively increase $+67.2\%$ , and $+25.9\%$ . Quantization has less effect on the gender bias as both $\psi$ and $\psi^{*}$ negligibly change after applying it. Detailed results for each language are provided in Appendix J. Interestingly, pruning $30\%$ highly increases the gender bias even for high-resource languages e.g. French and German, while spBLEU is almost the same after the compression (according to Appendix D).
+
+# 4.3 Word Sense Disambiguation Benchmark
+
+In this section, we analyze the impact of the compression on semantic biases by evaluating our models on a multilingual word sense disambiguation benchmark. We first detail metrics used in Campolungo et al. (2022) to measure semantic biases.
+
+Notation. Given a specific word $(w_{i})$ , $l_{w_i}$ is defined as (lemmatization, Part-of-Speech tag) pair. $\Pi_L(l_{w_i}) = \{\sigma_1,\dots,\sigma_n\}$ is the ordered list of synsets according to WordNet's sense frequency (Miller et al., 1990) in language $L$ . For instance, it is built as {the act of firing, photograph, drink, ...} for noun shot in English. $C_{l_{w_i}}(\sigma)$ is the index of synset $(\sigma)$ in $\Pi_L(l_{w_i})$ .
+
+SFII is calculated as the error rate averaged over $C_{l_{w_i}}(\sigma)$ for different positions and words $w_i$ . Intuitively, it measures the sensitivity of the model when predicting a sense concerning its corresponding index in $\Pi_L(l_{w_i})$ .
+
+SPDI is computed as the average error rate based on polysemy degrees of synsets.
+
+MFS measures how often the model chooses more frequent senses than the correct one. Given $C_{l_{w_i}}(\sigma)$ for a synset, it is increased once the model predicts a synset $(\sigma')$ with $C_{l_{w_i}}(\sigma') < C_{l_{w_i}}(\sigma)$ .
+
+$\mathbf{MFS}^{+}$ . It is similar to the MFS metric, but it increases when $C_{l_{w_i}}(\sigma')$ equals to 1.
+
+Since metrics are based on the error rate, the lower values show that the model is less biased.
+
+Table 7 demonstrates the semantic bias scores, averaged over all languages in DiBiMT (Campolungo et al., 2022). The last column is the average of semantic bias metrics for each model. According to the average bias score, quantized and pruned $45\%$ models amplify the bias metric by 1.6, and 1.7 points on average, compared to M2M-100, respectively. It confirms that the compression amplifies the semantic bias while keeping almost the same BLEU performance, especially for the quantization (average BLEU scores are shown in Table 2).
+
+# 5 Related Work
+
+The first connection between compression and bias amplification has been made by (Hooker et al., 2019, 2020) in the case of image classification. The same authors proposed an approach to find a subset of the dataset which contains samples that have disproportionately high errors after the compression. There is also recent work that analyzes the effect of compression on pre-trained language models (Xuet al., 2021; Lauscher et al., 2021; Du et al., 2021). Notably, de Vassimon Manela et al. (2021) demonstrated a higher gender bias in compressed pre-trained language models. Concerning NMT,
+
+while Renduchintala et al. (2021) demonstrated that optimization of inference speed up may result in gender bias amplification. To the best of our knowledge, this work is the first in-depth study of the impact of compression on massively multilingual models. We hope our findings would encourage further research on this topic.
+
+# 6 Conclusion
+
+We demonstrate the impacts of applying compression methods to the massively Multilingual Machine Translation models by evaluating compressed models on FLORES-101 (Goyal et al., 2021b), gender bias benchmark (Stanovsky et al., 2019), and word sense disambiguation benchmark (Campolungo et al., 2022). We show that while average BLEU drops negligibly, the performance of under-represented language pairs drops drastically. Interestingly, sparsity improves the performance of some medium-resource language pairs by removing the noisy memorization. By evaluating our compressed models on gender bias and word sense disambiguation benchmarks, we show that the compression amplifies the intrinsic gender and semantic biases, even in high-resource language pairs. We hope our findings could be a starting point to consider the fairness aspects when compressing multilingual models.
+
+# Limitations
+
+Our compression techniques are limited to post-training quantization, and magnitude pruning without additional fine-tuning due to the huge cost of fine-tuning these massively multilingual models, but future research could extend our analysis to compression methods with additional fine-tuning, e.g. knowledge distillation (Kim and Rush, 2016), training-aware pruning and quantization (Behnke and Heafield, 2020; Zhang et al., 2021; Yao et al., 2022). We analyze our compressed models based on the amount of available training data for each language pair, gender bias, and word sense disambiguation bias. Future research could apply our analysis to other linguistic biases in the machine translation task.
+
+# Acknowledgement
+
+The work is done during the research internship at Naver LABS Europe. Alireza Mohammadshahi is supported by the Swiss National Science Foundation (grant number CRSII5-180320) and EU UTTER project (grant #101070631)
+
+# References
+
+Roee Aharoni, Melvin Johnson, and Orhan First. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3874-3884, Minneapolis, Minnesota. Association for Computational Linguistics.
+Orevaoghene Ahia, Julia Kreutzer, and Sara Hooker. 2021. The low-resource double bind: An empirical study of pruning for low-resource machine translation. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 3316-3333, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Maximiliana Behnke and Kenneth Heafield. 2020. Losing heads in the lottery: Pruning transformer attention in neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2664-2674, Online. Association for Computational Linguistics.
+Luisa Bentivogli, Beatrice Savoldi, Matteo Negri, Mattia A. Di Gangi, Roldano Cattoni, and Marco Turchi. 2020. Gender in danger? evaluating speech translation technology on the MuST-SHE corpus. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6923-6933, Online. Association for Computational Linguistics.
+Marcely Zanon Boito, Laurent Besacier, Natalia Tomashenko, and Yannick Esteve. 2022. A study of gender impact in self-supervised models for speech-to-text systems.
+Yelysei Bondarenko, Markus Nagel, and Tijmen Blankevoort. 2021. Understanding and overcoming the challenges of efficient transformer quantization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7947-7969, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.
+Niccolò Campolungo, Federico Martelli, Francesco Saina, and Roberto Navigli. 2022. DiBiMT: A novel benchmark for measuring Word Sense Disambiguation biases in Machine Translation. In Proceedings
+
+of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4331-4352, Dublin, Ireland. Association for Computational Linguistics.
+Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978-2988, Florence, Italy. Association for Computational Linguistics.
+Daniel de Vassimon Manela, David Errington, Thomas Fisher, Boris van Breugel, and Pasquale Minervini. 2021. Stereotype and skew: Quantifying gender bias in pre-trained and fine-tuned language models. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2232-2242, Online. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Mengnan Du, Subhabrata Mukherjee, Yu Cheng, Milad Shokouhi, Xia Hu, and Ahmed Hassan Awadallah. 2021. What do compressed large language models forget? robustness challenges in model compression.
+Ahmed El-Kishky, Vishrav Chaudhary, Francisco Guzmán, and Philipp Koehn. 2020. CCAligned: A massive collection of cross-lingual web-document pairs. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5960–5969, Online. Association for Computational Linguistics.
+Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin. 2020. Beyond english-centric multilingual machine translation.
+Jonathan Frankle and Michael Carbin. 2019. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations.
+Trevor Gale, Erich Elsen, and Sara Hooker. 2019. The state of sparsity in deep neural networks.
+Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, and Alexis Conneau. 2021a. Larger-scale transformers for multilingual masked language modeling.
+
+Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzman, and Angela Fan. 2021b. The flores-101 evaluation benchmark for low-resource and multilingual machine translation.
+Sara Hooker, Aaron Courville, Gregory Clark, Yann Dauphin, and Andrea Frome. 2019. What do compressed deep neural networks forget?
+Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy Bengio, and Emily Denton. 2020. Characterising bias in compressed models.
+Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Herve Jégou, and Tomas Mikolov. 2016a. Fasttext.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651.
+Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016b. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.
+Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, and Kurt Keutzer. 2021a. I-bert: Integer-only bert quantization. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 5506-5518. PMLR.
+Yoon Kim and Alexander M. Rush. 2016. Sequence-level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317-1327, Austin, Texas. Association for Computational Linguistics.
+Zae Myung Kim, Laurent Besacier, Vassilina Nikoulina, and Didier Schwab. 2021b. Do multilingual neural machine translation models contain language pair specific attention heads? In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2832-2841, Online. Association for Computational Linguistics.
+Tom Kocmi, Tomasz Limisiewicz, and Gabriel Stanovsky. 2020. Gender coreference and bias evaluation at WMT 2020. In Proceedings of the Fifth Conference on Machine Translation, pages 357-364, Online. Association for Computational Linguistics.
+Anne Lauscher, Tobias Lueken, and Goran Glavaš. 2021. Sustainable modular debiasing of language models. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 4782-4797, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Chanhee Lee, Young-Bum Kim, Dongyub Lee, and Heuiseok Lim. 2018. Character-level feature extraction with densely connected networks. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3228-3239, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
+
+Bei Li, Ziyang Wang, Hui Liu, Quan Du, Tong Xiao, Chunliang Zhang, and Jingbo Zhu. 2020 Learning light-weight translation models from deep transformer.
+Tailin Liang, John Glossner, Lei Wang, Shaobo Shi, and Xiaotong Zhang. 2021. Pruning and quantization for deep neural network acceleration: A survey.
+Gaurav Menghani. 2021. Efficient deep learning: A survey on making deep learning models smaller, faster, and better.
+Suyog Gupta Michael H. Zhu. 2018. To prune, or not to prune: Exploring the efficacy of pruning for model compression.
+George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller 1990. Introduction to WordNet: An On-line Lexical Database*. International Journal of Lexicography, 3(4):235-244.
+David Patterson, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David So, Maud Texier, and Jeff Dean. 2021. Carbon emissions and large neural network training.
+Maja Popovic. 2015. *chrF: character n-gram f-score for automatic MT evaluation*. In *Proceedings of the Tenth Workshop on Statistical Machine Translation*, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics.
+Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Belgium, Brussels. Association for Computational Linguistics.
+Vikas Raunak, Arul Menezes, and Marcin Junczys-Dowmunt. 2021. The curious case of hallucinations in neural machine translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1172-1183, Online. Association for Computational Linguistics.
+Adithya Renduchintala, Denise Diaz, Kenneth Heafield, Xian Li, and Mona Diab. 2021. Gender bias amplification during speed-quality optimization in neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 99-109, Online Association for Computational Linguistics.
+Victor Sanh, Lysandre Debut, Julien Chaumont, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter.
+Beatrice Savoldi, Marco Gaido, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2022. Under the morphosyntactic lens: A multifaceted evaluation of gender bias in speech translation. In Proceedings
+
+of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1807-1824, Dublin, Ireland. Association for Computational Linguistics.
+Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, Armand Joulin, and Angela Fan. 2021. CCMatrix: Mining billions of high-quality parallel sentences on the web. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6490-6500, Online. Association for Computational Linguistics.
+Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679-1684, Florence, Italy. Association for Computational Linguistics.
+Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645-3650, Florence, Italy. Association for Computational Linguistics.
+Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning.
+Chaofan Tao, Lu Hou, Wei Zhang, Lifeng Shang, Xin Jiang, Qun Liu, Ping Luo, and Ngai Wong. 2022. Compression of generative pre-trained language models via quantization.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
+Jesse Vig and Yonatan Belinkov. 2019. Analyzing the structure of attention in a transformer language model. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 63-76, Florence, Italy. Association for Computational Linguistics.
+Fusheng Wang, Jianhao Yan, Fandong Meng, and Jie Zhou. 2021. Selective knowledge distillation for neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6456-6466, Online. Association for Computational Linguistics.
+Hongyu Wang, Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, and Furu Wei. 2022. Deepnet: Scaling transformers to 1,000 layers.
+
+Xiuying Wei, Ruihao Gong, Yuhang Li, Xianglong Liu, and Fengwei Yu. 2022. Qdrop: Randomly dropping quantization for extremely low-bit post-training quantization.
+Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev, and Paulius Micikevicius. 2020. Integer quantization for deep learning inference: Principles and empirical evaluation.
+Canwen Xu, Wangchunshu Zhou, Tao Ge, Ke Xu, Julian McAuley, and Furu Wei. 2021. Beyond preserved accuracy: Evaluating loyalty and robustness of BERT compression. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10653-10659, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Chen Xu, Jianqiang Yao, Zhouchen Lin, Wenwu Ou, Yuanbin Cao, Zhirong Wang, and Hongbin Zha. 2018. Alternating multi-bit quantization for recurrent neural networks.
+Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
+Ziqing Yang, Yiming Cui, and Zhigang Chen. 2022. Textpruner: A model pruning toolkit for pre-trained language models.
+Zhewei Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, and Yuxiong He. 2022. Zeroquant: Efficient and affordable post-training quantization for large-scale transformers.
+Biao Zhang, Philip Williams, Ivan Titov, and Rico Senrich. 2020. Improving massively multilingual neural machine translation and zero-shot translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1628-1639, Online. Association for Computational Linguistics.
+Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. Opt: Open pre-trained transformer language models.
+Tianfu Zhang, Heyan Huang, Chong Feng, and Longbing Cao. 2021. Enlivening redundant heads in multi-head self-attention for machine translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3238-3248, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+
+# Appendix A Magnitude Pruning Strategy
+
+Figure 9 shows the performance of pruned models with different pruning strategies. Results illustrate that pruning based on Transformer-layer is slightly better than pruning based on each module of the model, and separate pruning for self-attention and feed-forward Transformer layers.
+
+
+Figure 9: Average spBLEU score of different magnitude pruning strategies on 9 FLORES-101 language pairs, defined in Appendix C.
+
+# Appendix B Selection of Language Pairs in FLORES-101
+
+Figure 10 shows the distribution of different language pair categories (defined in Table 1) based on spBLEU score of M2M-100 12B model (Fan et al., 2020). We use 12 spBLEU as the threshold, which is approximately the average score over the median of different language pair categories.
+
+Table 8 illustrates the number of language pairs in each category after the filtering.
+
+
Target Source
Very-Low
Low
Medium
High
Very-Low
10
51
157
33
Low
58
164
643
143
Medium
108
440
1,277
257
High
23
103
252
39
+
+Table 8: Number of language pairs in each category after the filtering.
+
+
+Figure 10: Histogram of number of language pairs based on spBLEU score for different language pair categories.
+
+# Appendix C Language Pairs for Selection of Sparsity Ratio
+
+
Language Pair
Resource-Type
M2M-100 spBLEU
Bosnian-Afrikaans
low-to-low
29.9
Afrikaans-Bulgarian
low-to-medium
37.3
Afrikaans-French
low-to-high
41.5
Catalan-Asturian
medium-to-low
29.7
Danish-Bulgarian
medium-to-medium
37.8
Swedish-Spanish
medium-to-high
27.5
French-Afrikaans
high-to-low
30.9
Spanish-Swedish
high-to-medium
27.5
English-French
high-to-high
51.3
+
+Table 9: Subset of language pairs used to compute average spBLEU score of Figure 2. M2M-100 model achieves reasonable performance for all selected pairs as shown in the last column.
+
+# Appendix D FLORES-101 spBLEU Scores
+
+For compressed models, spBLEU score is calculated for language pairs for which M2M-100 12B model has spBLEU higher than 12 (shown as green in Table 10).
+
+# D.A M2M-100 12B
+
+
+Table 10: spBLEU score of M2M-100 12B model (Fan et al., 2020) on all language pairs of FLORES-101.
+
+# D.B Pruned $30\%$ M2M-100 12B
+
+
+Table 11: spBLEU score of pruned $30\%$ M2M-100 12B model (Fan et al., 2020) on selected language pairs of FLORES-101.
+
+# D.C Pruned $45\%$ M2M-100 12B
+
+
+Table 12: spBLEU score of pruned $45\%$ M2M-100 12B model (Fan et al., 2020) on selected language pairs of FLORES-101.
+
+# D.D Quantized M2M-100
+
+
+Table 13: spBLEU score of quantized M2M-100 12B model (Fan et al., 2020) on selected language pairs of FLORES-101.
+
+
+(a) Source Resource Type
+
+
+(b) Target Resource Type
+Figure 11: Relative spBLEU difference (\%) between compressed models and M2M-100 model grouped by the resource type of source or target languages.
+
+# F.A Pruned $30\%$ Model
+
+
+(a) Absolute number of sentences.
+
+
+(b) Normalized distribution of sentences.
+
+
+(c) Normalized distribution of sentences in each bin for different categories.
+Figure 12: ChrF analysis of pruned $30\%$ M2M-100 model.
+
+
+(a) Absolute number of sentences.
+
+
+(b) Normalized distribution of sentences.
+
+
+(c) Normalized distribution of sentences in each bin for different categories.
+Figure 13: ChrF analysis of pruned $45\%$ M2M-100 model.
+
+
+(a) Absolute number of sentences.
+
+
+(b) Normalized distribution of sentences.
+
+
+(c) Normalized distribution of sentences in each bin for different categories.
+Figure 14: ChrF analysis of quantized M2M-100 model.
+
+# Appendix G Languages with Two Scripts in M2M-100 Training
+
+
ISO
Language
sr
Serbian
cy
Welsh
az
Azerbaijani
uz
Uzbek
ja
Japanese
bn
Bengali
lo
Lao
zh
Chinese
+
+# Appendix H Most Affected Language Pairs After Compression
+
+Language pairs are selected, if both quantization and pruning have significant effect on them (based on spBLEU performance shown in Figure 3).
+
+Table 14: Languages for which M2M-100 training data contains two scripts, while FLORES-101 provides one script for the evaluation.
+
+
Source
Target
Catalan
Cebuano
Latvian
Igbo
Arabic
Igbo
Danish
Xhosa
French
Zulu
+
+(a) Most losing language pairs
+
+
Source
Target
Latvian
Vietnamese
Bulgarian
Latvian
Arabic
Urdu
Thai
Vietnamese
Latvian
Italian
+
+(b) Most winning language pairs
+
+Table 15: Most affected language pairs after the compression.
+
+# Appendix I Proposed Metrics for MT-Gender Benchmark
+
+Equation 3 considers the range of F1 scores for female and male subsets, while the simple difference between F1 scores does not reflect the range of F1 scores. The range is crucial since a model with the same F1 score difference but higher individual F1 scores should have a lower fairness score, as lied in Equation 3. We also believe equation 4 is a better metric than the simple difference between accuracies of the model in pro-stereotypical and anti-stereotypical subsets since it again considers the range of scores, and ignores missed translations and wrongly aligned genders. Additionally, it exactly reflects the difference in the behavior of the model in these two subsets. If the compressed model has a contrary performance in pro- and anti-stereotypical subsets, e.g. amplifying the bias in the anti-stereotypical subset more than the pro-stereotypical one or decreasing the bias more in one subset, then $\psi *$ becomes higher. We suggest using Equation 3 and Equation 4 for comparing models on MT-Gender benchmark (Stanovsky et al., 2019; Kocmi et al., 2020).
+
+# Appendix J MT-Gender Results per Language
+
+
Model
ψ (%)
ψ* (%)
Original M2M-100
21.01
15.09
Pruned 30% M2M-100
20.71
16.87
Pruned 45% M2M-100
28.58
17.33
Quantized M2M-100
18.07
12.55
+
+(a) Arabic
+
+
Model
ψ (%)
ψ* (%)
Original M2M-100
39.02
11.39
Pruned 30% M2M-100
45.19
7.15
Pruned 45% M2M-100
45.56
18.54
Quantized M2M-100
40.93
2.54
+
+(b) Ukrainian
+
+
Model
ψ (%)
ψ* (%)
Original M2M-100
7.98
20.09
Pruned 30% M2M-100
10.38
16.30
Pruned 45% M2M-100
8.89
2.75
Quantized M2M-100
10.39
21.26
+
+(c) Hebrew
+
+
Model
ψ (%)
ψ* (%)
Original M2M-100
29.06
3.93
Pruned 30% M2M-100
29.10
2.30
Pruned 45% M2M-100
30.28
8.08
Quantized M2M-100
32.65
8.74
+
+(d) Russian
+
+
Model
ψ (%)
ψ* (%)
Original M2M-100
22.46
2.03
Pruned 30% M2M-100
30.17
13.81
Pruned 45% M2M-100
48.59
4.61
Quantized M2M-100
24.71
2.6
+
+(e) Italian
+
+
Model
ψ (%)
ψ* (%)
Original M2M-100
13.86
28.71
Pruned 30% M2M-100
29.03
40.20
Pruned 45% M2M-100
38.44
32.83
Quantized M2M-100
15.43
25.86
+
+(f) French
+
+
Model
ψ (%)
ψ* (%)
Original M2M-100
5.77
15.72
Pruned 30% M2M-100
4.89
14.62
Pruned 45% M2M-100
22.53
34.01
Quantized M2M-100
6.01
15.11
+
+(g) Spanish
+
+
Model
ψ (%)
ψ* (%)
Original M2M-100
6.48
16.93
Pruned 30% M2M-100
13.16
26.83
Pruned 45% M2M-100
22.14
18.12
Quantized M2M-100
6.23
14.96
+
+(h) German
+
+
Model
ψ (%)
ψ* (%)
Original M2M-100
18.20
39.01
Pruned 30% M2M-100
21.82
42.60
Pruned 45% M2M-100
25.95
45.01
Quantized M2M-100
18.24
38.42
+
+(i) Polish
+
+
Model
ψ (%)
ψ* (%)
Original M2M-100
7.91
12.14
Pruned 30% M2M-100
11.65
14.43
Pruned 45% M2M-100
19.31
27.23
Quantized M2M-100
9.78
13.26
+
+# Appendix K Detailed DiBiMT Results
+
+(j)Czech
+Table 16: MT-Gender (Stanovsky et al., 2019; Kocmi et al., 2020) results for M2M-100 12B (Fan et al., 2020), and compressed models.
+
+
Model
SFII
SPDI
MFS
MFS+
Avg
Original M2M-100
89.14
80.59
41.8
92.59
76.03
Pruned 30% M2M-100
87.32
80.56
39.55
93.04
75.11
Pruned 45% M2M-100
86.78
82.9
39.93
92.41
75.50
Quantized M2M-100
88.86
81.26
43.32
92.51
76.48
+
+(a) Chinese
+
+
Model
SFII
SPDI
MFS
MFS+
Avg
Original M2M-100
80
71.61
60.63
89.76
75.5
Pruned 30% M2M-100
78.96
73.79
61.44
88.56
75.68
Pruned 45% M2M-100
81.28
77.05
62.5
91.67
78.12
Quantized M2M-100
82.32
74.42
61.07
91.22
77.25
+
+(b) German
+
+
Model
SFII
SPDI
MFS
MFS+
Avg
Original M2M-100
75.99
70.53
61.23
88.41
74.04
Pruned 30% M2M-100
75.91
71.86
60.92
87.74
74.10
Pruned 45% M2M-100
83.38
75.08
62.22
86.67
76.83
Quantized M2M-100
81.73
75.81
63.33
88.33
77.3
+
+(c) Italian
+
+
Model
SFII
SPDI
MFS
MFS+
Avg
Original M2M-100
68.16
66.42
47.06
83.82
66.36
Pruned 30% M2M-100
68.2
64.73
48.21
87.18
67.08
Pruned 45% M2M-100
70.92
66.41
50
85.29
68.15
Quantized M2M-100
68.16
69.03
44.19
86.51
66.97
+
+(d) Russian
+
+
Model
SFII
SPDI
MFS
MFS+
Avg
Original M2M-100
75.08
68.92
53.44
83.61
70.26
Pruned 30% M2M-100
71.58
70.26
54.58
82.71
69.78
Pruned 45% M2M-100
78.39
72.46
52.33
83.15
71.58
Quantized M2M-100
76.45
69.72
56.88
85.63
72.17
+
+(e) Spanish
+
+Table 17: DiBiMT (Campolungo et al., 2022) evaluation for M2M-100 12B (Fan et al., 2020), and compressed models.
\ No newline at end of file
diff --git a/whatdocompressedmultilingualmachinetranslationmodelsforget/images.zip b/whatdocompressedmultilingualmachinetranslationmodelsforget/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..25074cbf0fc9a59286844533b2072feb382676d7
--- /dev/null
+++ b/whatdocompressedmultilingualmachinetranslationmodelsforget/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2e49ea275531b963d92db02c42b792a5dd73a0db458b6bfa7fec4c6a48948c4b
+size 2454650
diff --git a/whatdocompressedmultilingualmachinetranslationmodelsforget/layout.json b/whatdocompressedmultilingualmachinetranslationmodelsforget/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b1aa19ea308f1b0a62e42eeab7e9e3060663036c
--- /dev/null
+++ b/whatdocompressedmultilingualmachinetranslationmodelsforget/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ba3672892260cd753d91758e46b2accd89ad57ba4b7e7f22741f52143bbdb8c6
+size 649835
diff --git a/whatdolargelanguagemodelslearnbeyondlanguage/971abc60-1f49-4017-8b1c-a9647a5b506c_content_list.json b/whatdolargelanguagemodelslearnbeyondlanguage/971abc60-1f49-4017-8b1c-a9647a5b506c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..0f329cb09e8f4c0f7e3a961ea87a93ba769881e7
--- /dev/null
+++ b/whatdolargelanguagemodelslearnbeyondlanguage/971abc60-1f49-4017-8b1c-a9647a5b506c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec8894fa94ff53191cc886b177fc11d5f4bb966839ad1eacf96c879843c88009
+size 89747
diff --git a/whatdolargelanguagemodelslearnbeyondlanguage/971abc60-1f49-4017-8b1c-a9647a5b506c_model.json b/whatdolargelanguagemodelslearnbeyondlanguage/971abc60-1f49-4017-8b1c-a9647a5b506c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8f1c4a84e4df8b8a839ab8bae5bc059565887927
--- /dev/null
+++ b/whatdolargelanguagemodelslearnbeyondlanguage/971abc60-1f49-4017-8b1c-a9647a5b506c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f2bbcf0659e7d9ac1322ddfc40caca39d609bf04e7ef2ec6498f6083cf8ff1a4
+size 109339
diff --git a/whatdolargelanguagemodelslearnbeyondlanguage/971abc60-1f49-4017-8b1c-a9647a5b506c_origin.pdf b/whatdolargelanguagemodelslearnbeyondlanguage/971abc60-1f49-4017-8b1c-a9647a5b506c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..1eeb5e69976a1dba1a81a9ea007ca547f71b50e0
--- /dev/null
+++ b/whatdolargelanguagemodelslearnbeyondlanguage/971abc60-1f49-4017-8b1c-a9647a5b506c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:01da7bbe9bdc7cd4e8ddfe0314cc37ca1a7b2727d688349db770fc8f4f4ca1d9
+size 932035
diff --git a/whatdolargelanguagemodelslearnbeyondlanguage/full.md b/whatdolargelanguagemodelslearnbeyondlanguage/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..8ed78bef62900820753895ad9856ba5377285d76
--- /dev/null
+++ b/whatdolargelanguagemodelslearnbeyondlanguage/full.md
@@ -0,0 +1,376 @@
+# What do Large Language Models Learn beyond Language?
+
+Avinash Madasu Shashank Srivastava
+
+UNC Chapel Hill
+
+{avinashm,ssrivastava}@cs.unc.edu
+
+# Abstract
+
+Large language models (LMs) have rapidly become a mainstay in Natural Language Processing. These models are known to acquire rich linguistic knowledge from training on large amounts of text. In this paper, we investigate if pre-training on text also confers these models with helpful 'inductive biases' for non-linguistic reasoning. On a set of 19 diverse non-linguistic tasks involving quantitative computations, recognizing regular expressions and reasoning over strings. We find that pretrained models significantly outperform comparable non-pretrained neural models. This remains true also in experiments with training non-pretrained models with fewer parameters to account for model regularization effects. We further explore the effect of text domain on LMs by pretraining models from text from different domains and provenances. Our experiments surprisingly reveal that the positive effects of pre-training persist even when pretraining on multi-lingual text or computer code, and even for text generated from synthetic languages. Our findings suggest a hithertho unexplored deep connection between pre-training and inductive learning abilities of language models1.
+
+# 1 Introduction
+
+Pretrained Language Models (LMs) have shown singular success on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well (Warstadt et al., 2019; Zhao et al., 2020). In particular, studies have shown that the pretrained LMs like BERT capture linguistic knowledge about syntax (Lin et al., 2019; Wu et al., 2020), semantics (Vulić et al., 2020b,a) and morphology (Hofmann et al., 2020, 2021). In fact, Tenney et al. (2019) demonstrated that learned representations
+
+
+Figure 1: We investigate the effect of pretraining of languages models on learning non-linguistic tasks using three task paradigms involving symbolic reasoning.
+
+in pretrained LMs even internally reflect the classical NLP pipeline. Since most NLP benchmarks such as SuperGLUE (Wang et al., 2019) naturally are focused on tasks such as textual entailment and reading comprehension that require linguistic knowledge and reasoning, it is unsurprising that LMs have achieved strong results on these tasks. On the other hand, little work so far has explored the abilities of pretrained LMs for learning non-linguistic tasks.
+
+In this paper, we explore whether pretraining on text is inherently about learning language, or if pretraining also imbues LMs with skills for symbolic manipulation and non-linguistic reasoning (for example, performing quantitative computation such as finding the median of a set of numbers, recognizing regular expressions, or identifying whether a string is a palindrome, as shown in Figure 1). In other words, we investigate whether and how pretraining develops helpful inductive biases for non-linguistic reasoning. For this analysis, we create a set of 19 tasks from three categories of task paradigms: quantitative computation ( $\S 3.1$ ), recognizing regular expressions ( $\S 3.2$ ), and string reasoning ( $\S 3.3$ ). Figure 1 shows an example for each category, and the full list of tasks is described in the table 1. We experiment with transformer and RNN based LMs ( $\S 4$ ) for learning these tasks, and per
+
+
Task
Input Eg.
Output Eg.
Classes
Input range
Odd classification
4210
0
0 - 1
[1, 20000]
Even classification
4210
1
0 - 1
[1, 20000]
Odd even classification
4210 even
1
0 - 1
[1, 20000]
Decimal operation
872 / 436
2
0 - 9
[1, 10000]
Decimal & word operation
four / 2
2
0 - 9
[1, 10000]
Mean
15,-8,15,-5,-14,-3 ?
0
0 - 9
[-15, 15]
Median
3,6,5,15,2,3,-6,-2,9,-3,-9,-5,-14 ?
2
0 - 9
[-15, 15]
Mode
5,9,7,0,2,5,3,3,3,0 ?
3
0 - 9
[0, 9]
Recognize {0, 1, 2} *02*
01202102222
1
0 - 1
[0, 2]
Recognize AA*BB*CC*DD*EE*
a a a a a a a b b b b c c c c d d e
1
0 - 1
[a, e]
Palindrome classification
a W X X W a
1
0 - 1
[a-z], [A-Z]
Anagram classification
r G r P J h k - k h G r P J r
1
0 - 1
[a-z],[A-Z]
Isogram classification
v F J o S j
1
0 - 1
[a-z], [A-Z]
Tautonym classification
s t P v g - t P v g a
1
0 - 1
[a-z], [A-Z]
Length of a string
t e e o
4
0 - 9
[a-z]
Count of unique characters
d e i i e e d i i d
3
0 - 9
[a-j]
Parity check
011101001110
0
0 - 1
[0, 1]
Vowels classification
i i v x c m o u o
0
0 - 9
[a-z]
Maximum frequent character
j j j c j j
9 (j)
0 - 9
[a-j]
+
+Table 1: Description of the non-linguistic tasks with input and output examples. Classes are the class labels for each task. Input range denotes the range of the input operands in each task.
+
+form a comparative analysis with (non-pretrained) neural model variants from the perspective of learning metrics such as accuracy and sample efficiency.
+
+Our experiments (§5) reveal that pretrained models overall perform substantially better and are more sample efficient on most tasks. However, there are significant differences and patterns in performance between task types, as well as variance between different LM architectures. Since non-pretrained models do not have the benefit of regularization that comes from pretraining, a plausible reason for the discrepancy between them and pretrained LMs might be underfitting of the non-pretrained models when trained on comparatively small dataset sizes. To account for this, we also comprehensively explore the effect of model size (§6) of non-pretrained models for both transformer and RNN architectures. We find that the discrepancy in performance remains even for smaller neural models, indicating that the differences are not simply due to a mismatch in model and data sizes.
+
+Finally, we investigate the role that pretraining data plays in influencing task performance on non-linguistic tasks (§7). We experiment with pretraining on different domains of text, pretraining on perturbed representations of natural language text (such as shuffled word order), pretraining on text of computer programs (no linguistic properties of natural languages), pretraining on multi-lingual and non-English text, and pretraining with synthetic text (data sampled from synthetic distributions).
+
+Our analysis reveals that the advantages of pretraining surprisingly persist with various degrees across these variations, suggesting hithertho unexplored connections between pretraining and the learning abilities of language models. Our contributions are:
+
+- We compare a range of pretrained LMs and non-pretrained models on a carefully designed suite of 19 classifications tasks that require non-linguistic reasoning.
+- We comprehensively explore the role of the pretraining data by experimenting with models pretrained from texts with different provenances.
+- We establish that the positive effects of pretraining are not simply due to better model regularization by experimenting with neural models with different complexities and architectures.
+
+# 2 Related Work
+
+A body of work has investigated contextual word embeddings to determine whether they capture aspects of mathematical meaning for numbers (Naik et al., 2019). Wallace et al. (2019) probed numerical supremacy on token embeddings of contextual language models such as ELMO and BERT. (Thawani et al., 2021) surveyed numerical understanding in NLP models using 7 sub-tasks such as measurement estimation and word problems. Our work diverges from these in exploring a richer set of tasks including harder tasks such as set operations. Further, previous methods explore mathematical reasoning tasks posed as language problems, which
+
+conflates the problems of language and mathematical learning and also makes the datasets susceptible to biases due to data collection. Our analysis circumvents both these issues by design.
+
+Some previous works have explored the ability of RNN and Transformer architectures for learning regular languages (Weiss et al., 2018; Sennhauser and Berwick, 2018; Suzgun et al., 2019b; Bhattachamishra et al., 2020), closing brackets (Skachkova et al., 2018), and dynamic counting (Suzgun et al., 2019a). However, they focus on the learnability of these tasks with specific architectures, and do not look at pretrained LMs, which are our focus here.
+
+Finally, in our discussion, we conceptually stretch the notion of inductive bias. The idea of inductive bias is usually associated with specific model types (McCoy et al., 2020; Kharitonov and Chaabouni, 2021), architectures (Xu et al., 2021; Brutzkus and Globerson, 2021) and regularization approaches (Helmbold and Long, 2015). We believe that extending this to refer to learning tasks with pretrained LMs is both reasonable and useful.
+
+# 3 NILM
+
+In this section, we describe the tasks used for our analysis, which we refer to as NILM (measuring Non-linguistic Inductive bias in Language Models). The tasks correspond to three task paradigms: (1) quantitative computation, (2) regular expressions, and (3) string reasoning. Each task in NILM is posed as a classification task. The descriptions for all the tasks with input and output examples, class labels and the input range are shown in Table 1. Each task has a synthetically generated dataset with train/dev/test splits2. To avoid biases in the datasets, relevant numbers and strings in individual examples are uniformly sampled from the appropriate ranges.
+
+# 3.1 Quantitative computation
+
+This task paradigm focuses on tasks involving arithmetic and set statistics.
+
+Odd classification. Classify if a number is odd.
+
+Even classification. Classify if a number is even.
+
+Odd even classification. For a given number $N$ and a string "even" or "odd", classify if the number satisfies the string condition.
+
+Decimal operation. Subtract or divide two numbers. Operands are represented in decimal notation.
+
+Decimal & word operation. Subtract or divide two numbers. Operands are represented in decimal or word notation.
+
+Mean. Given a set of numbers, output the mean.
+
+Median. Given a set, output the median.
+
+Mode. Given a set of numbers, output the mode.
+
+# 3.2 Recognizing regular expressions
+
+This task paradigm focuses on recognizing regular expressions. The training data consists of positive and negative examples of strings matching a regular expression (Bhattamishra et al., 2020).
+
+Recognize $\{0,1,2\}^{*}02^{*}$ . Recognize if a pattern matches $\{0,1,2\}^{*}02^{*}$ . The maximum length of the patterns is 20.
+
+Recognize AA*BB*CC*DD*EE*. Recognize if a pattern matches AA*BB*CC*DD*EE*. The maximum length of the patterns is 30.
+
+# 3.3 String reasoning
+
+This task paradigm focuses on reasoning tasks over individual strings or pairs of strings.
+
+Palindrome classification. A string is a palindrome if it reads the same forward and backward. The task is to classify whether a given string is a palindrome. The string length ranges from 1 to 15.
+
+Anagram classification. Two strings are anagrams if one is formed by rearranging letters from the other. The task is to classify if a pair of strings are anagrams. The string length ranges from 2 to 15.
+
+Isogram classification. A string is an isogram if it has no repeating characters. The task is to classify whether a given string is an isogram. The string length ranges from 1 to 52.
+
+Tautonym classification. A tautonym is a word which can be broken down into two identical parts, with the same spelling. The task is to classify whether a given string is a tautonym. The string length ranges from 1 to 10.
+
+Length of a string. Output the length of a given string. The string length ranges from 1 to 10.
+
+Count of unique characters. Given a string, count the number of unique characters in it. The string lengths ranges from 10 to 30.
+
+Parity check. Given a binary string, output if the counts of ones and zeros are the same. The maximum length of the binary string is 20.
+
+Vowels classification. Given a string, classify if the string contains only vowel characters. The string length ranges from 3 to 10.
+
+Maximum frequent character. Given a string, output the character with the maximum frequency.
+
+
+(a) BERT small
+
+
+(b) ELMO
+
+
+Figure 2: Performance comparison of pretrained and non-pretrained models of BERT small, and ELMO on four quantitative computation tasks (odd classification, even classification, odd even classification and decimal operation).
+(a) BERT small
+
+
+(b) ELMO
+Figure 3: Performance comparison of pretrained and non-pretrained models of BERT small, and ELMO on four quantitative computation tasks (mean, median, mode and decimal & word operation tasks).
+
+The string length ranges from 5 to 30.
+
+# 4 Models & variants
+
+Next, we describe the LMs and their variants used in NILM. We experiment with four language models, based on both Transformer and RNN architectures.
+
+BERT small. This is the bert-base-uncased model with 12 transformer encoder layers and the dimension of the representations is 768. BERT tokenizer is based on the WordPiece model (Wu et al., 2016).
+
+BERT large. This is the bert-large-uncased model which has 24 transformer encoders and representations have 1024 dimensions.
+
+DeBERTa. This is a transformer based language model and its tokenizer is built using Byte Pair Encoding (Sennrich et al., 2016). We consider the DeBERTa base model. It has 12 transformer encoder layers and representations have 768 dimensions.
+
+ELMO. This is an LSTM based language model (Peters et al., 2018). It has 3 layers and the output representations have 1024 dimensions.
+
+Our experiments are based on pretrained and non-pretrained variants of these architectures. For pretrained variants, the weights are initialized with the pretrained weights. The tokenization on the
+
+training data is performed using the pre-built vocabulary. For the non-pretrained neural models, the weights are initialized randomly and updated during training. The tokenizer used is the same as in the pretrained variant.
+
+All the models are trained with varying training data of sizes 10, 20, 40, 80, 160, 320, 640, 1280, 2560, 5120, 6000, 7000, 8000, 9000 and 10000. For training set sizes of less than 1000 samples, we report the average of 10 runs. For training set sizes greater than 1000, all reported numbers are averages of 5 runs. In the next section, we present a comparative analysis of pretrained and non-pretrained models.
+
+# 5 Comparative Evaluation
+
+Next, we compare the performance of pretrained and non-pretrained models on tasks in NILM3.
+
+Quantitative computation: Figure 2 shows results on odd classification, even classification, odd even classification and decimal operation tasks. We find that pretrained LMs outperformed non-pretrained model for all of these tasks. Further, Transformer
+
+
+(a) BERT small
+
+
+(b) ELMO
+Figure 4: Performance comparison of pretrained and non-pretrained models of BERT small, and ELMO on regular expression tasks (AA*BB*CC*DD*EE* and recognize $\{0,1,2\}^{*}02^{*}$ ).
+
+based LMs outperformed the RNN-based ELMO models in all the tasks $^4$ . We note that for the relatively easy tasks such as odd and even classifications, the pretrained LMs show more stable training. However, for harder tasks such as Decimal operations (where the baseline performance is around $10\%$ ), no models are able to learn the task well even with 10K labeled examples.
+
+Figure 3 shows results on median, mean, mode and decimal & word operation tasks. The median task requires complex reasoning (sorting numbers and computing the middle element), and shows significantly lower performance than the mean and mode tasks for the non-pretrained models even with the maximum training set size. The pretrained LM models show little eventual difference in performance between these three tasks. On the other hand, for the easiest of these tasks (mode), non-pretrained models actually show higher performance than pretrained LMs in the low data regime.
+
+Recognizing regular expressions: Figure 4 shows the comparative performance of pretrained LMs on non-pretrained models on the two tasks involving recognizing regular expressions. For both tasks, we note that the pretrained LMs can perfectly learn the tasks with many fewer labeled examples compared to the non-pretrained models. In both cases, the non-pretrained Transformer-based models eventually reach optimal performance as well. However, curiously the ELMO based non-pretrained models struggle with learning both tasks.
+
+String reasoning: Figures 6 show the results on Palindrome, Anagram, Isogram and Tautonym classification. These tasks require character comparison within the string or with another string. Again,
+
+the pretrained variants consistently outperformed non-pretrained models variants in all of these tasks. In particular, the non-pretrained models completely fail to learn the Anagram and Palindrome tasks even for the largest training set size. Again, Transformer based LMs outperform LSTM based LMs.
+
+Figure 7 shows the results on vowels classification, maximum frequent character, length of a string and parity check tasks. These tasks don't require intra-string comparisons. We see that most Transformer-based variants eventually achieve optimal performance. For these simpler tasks, we again observe several instances where the Transformer-based non-pretrained models actually outperform pretrained LMs in the low data regime.
+
+# 6 Effect of model size
+
+
+Figure 5: Effect of model size on non-pretrained models. NP denotes a non-pretrained model and PT denotes the pretrained model. Mid-sized non-pretrained models outperform bigger and smaller variants, but still perform significantly lower than pretrained LM models. Results are the average of six representative tasks: palindrome classification, anagram classification, isogram classification, tautonym classification, mean and median.
+
+architecture relative to the sizes of the training data might be leading to under-fitting. To test this, we experiment with smaller Transformer-based models with varying numbers of parameters.
+
+Figure 5 illustrates the effect of model sizes of non-pretrained model. The original 110 million parameter model has 12 encoder layers, 12 attention heads, and 768 dimensional representations. The 42 million parameter model has 8 encoder layers, 8 attention heads and 512 dimensional representations. The 29 million parameter model has 4 encoder layers, 8 attention heads and 512 dimensional representations. The 11 million parameter model has 4 encoder layers, 4 attention heads and 256 dimensional representations. The smallest 4 million parameter model has 2 encoder layers, 2 attention heads and 128 dimensional representations.
+
+As seen in the figure, reducing the model size significantly improves the average performance of the non-pretrained models over 6 representative tasks. However, the smallest models show a performance drop. Most significantly, even the best performing intermediate-sized architectures are significantly worse than the pretrained LM models. This strongly suggests that the discrepancy between pre-trained and non-pretrained models is not simply due to a mismatch between model and data sizes.
+
+# 7 Effects of Pretraining Data
+
+We observe that pretrained LMs consistently performed better than non-pretrained models. This leads to the natural question of what role the text data used for pretraining plays in the process. Next, we investigate this in depth by experimenting with language models pretrained on different types of text. For this, we pretrain models using the BERTsmall and DeBERTa architectures and an MLM objective on different text datasets, and evaluate the performance of these models on NILM tasks.
+
+# 7.1 Variance with text domain
+
+We first explore models pretrained on three different domains of text.
+
+SNLI. We pretrained BERT small from scratch on SNLI data (Bowman et al., 2015). It has 1000k sentences (570k pairs of text and hypothesis).
+
+Amazon reviews. We selected 500k movies and tv reviews from the larger Amazon reviews dataset (He and McAuley, 2016) and used for pretraining. Since reviews are in a free-text format, and their collection was not tailored with a NLP task in mind,
+
+they might be more representative of the complexity of real-world language use than SNLI.
+
+ROC. ROC is a corpora of 100K children stories, each made up of five sentences (Mostafazadeh et al., 2017). The language in ROC is relatively simple in both vocabulary and sentence structure.
+
+Tables 2 and 3 shows the average accuracy of six non-linguistic tasks (palindrome classification, isogram classification, tautonym classification, odd even classification, decimal operation and median) fine-tuned using different BERT and DeBERTA representations respectively. We note that the models pretrained on all three domains outperformed the non-pretrained model (NP). This suggests that the results of experiments in Section 5 generalize to new text corpora for pretraining, and do not rely on having access to text on specific topics during pretraining. This is a non-trivial result, since it suggests for example, that the higher performance of pretrained models on tasks such as palindrome and anagram classification is not due to the pretrained models having seen information about such concepts during pretraining. This is especially so since the results even generalize to ROC stories, which contain no information on such technical concepts.
+
+# 7.2 Perturbed text
+
+Next, we experiment with perturbing the text used for pretraining by changing the order of words in the text. We explore the following models:
+
+SNLI sort. The words in the sentences of SNLI dataset are sorted based on alphabetical order.
+
+SNLI shuffle. We randomly shuffle words in sentences in the SNLI dataset.
+
+Amazon reviews sort. Similar to SNLI sort, the words in sentences are alphabetically sorted.
+
+Amazon reviews shuffle. We randomly shuffle words in sentences in the Amazon reviews dataset. We observe that models pretrained with perturbed text also significantly outperformed non-pretrained models, and perform comparably to the original pretrained LMs. For the SNLI dataset, there is $3\%$ drop in best performance when pretrained on SNLI sort and $2\%$ drop in performance when pretrained on SNLI shuffle for BERT (Table 2). In fact, for DeBERTa, SNLI shuffle outperformed the standard SNLI by $2\%$ (Table 3). Similarly, the Amazon sort and Amazon shuffle versions outperformed or achieved similar performance as the standard Amazon data version. A likely explanation for this is that, even though syntactic word order is disturbed
+
+
+(a) BERT small
+
+
+(b) ELMO
+
+
+Figure 6: Performance comparison of pretrained and non-pretrained models of BERT small, and ELMO on four string reasoning tasks (palindrome, anagram, isogram and tautonym classification).
+(a) BERT small
+
+
+(b) ELMO
+Figure 7: Performance comparison of pretrained and non-pretrained models of BERT small, and ELMO on five string reasoning tasks (length of a string, maximum frequent character, vowels classification, parity check and count of unique character).
+
+
Sample size
SNLI
SNLI sort
SNLI shuffle
Amz
Amz sort
Amz shuffle
ROC
X-ling BERT
Chinese BERT
Code BERT
Zipf
Unif
Syn Voc
NP
10
37
39
38
36
36
36
36
38
38
37
38
36
36
37
20
37
37
37
36
38
38
38
37
37
38
37
37
37
37
40
37
38
36
37
36
36
36
42
42
37
42
36
37
37
80
38
40
40
37
38
38
38
55
55
47
55
36
36
38
160
38
40
37
37
40
40
40
56
56
37
56
37
37
39
320
40
49
41
38
41
41
41
64
64
61
64
39
37
41
640
44
60
47
43
52
52
52
75
75
69
75
42
39
44
1280
60
71
63
55
69
69
69
80
80
92
80
52
41
50
2560
76
84
75
75
79
79
79
81
81
89
81
59
48
50
5120
82
87
82
83
89
89
89
94
94
97
94
71
58
58
6000
83
87
83
85
90
90
90
94
94
96
94
73
60
59
7000
88
89
88
89
91
91
91
94
94
97
94
78
62
64
8000
89
89
88
90
92
92
92
94
94
97
94
81
63
59
9000
90
90
89
91
92
92
92
94
94
97
94
84
64
59
10000
91
88
89
91
92
92
92
94
94
97
94
85
64
64
+
+Table 2: Average accuracy scores of different pretrained BERT representations on six representative non-linguistic tasks: palindrome, anagram, isogram, tautonym, mean, and median. The results are rounded to the nearest percentage point. All models except Synthetic Vocabulary (Syn Voc). show statistically significant improvements $(p < 0.05)$ over the non-pretrained models.
+
+by shuffling, distributional information over sentence contexts is still preserved in the perturbed data. We describe experiments with text data having no distributional information in later sections.
+
+# 7.3 Non-English and Computer Languages
+
+A possible rationale for explaining the beneficial effect of pretraining for non-linguistic tasks is that irrespective of whether the tasks require non
+
+linguistic reasoning, their format is in language, and hence language models should be able to learn these tasks with fewer examples. To test this hypothesis, we also experiment with models pretrained on text from languages different from English, as well as models pretrained on computer code. These include the following models:
+
+Multilingual BERT. Multilingual BERT is pretrained on text from 102 different languages. About
+
+
Sample size
SNLI
SNLI sort
SNLI shuffle
Amz
Amz sort
Amz shuffle
ROC
X-ling DeBERTa
Zipf
Unif
Syn Voc
NP
10
36
36
37
36
35
36
37
36
37
36
36
37
20
37
36
36
36
35
35
37
39
36
37
37
37
40
37
36
36
36
36
35
37
38
37
36
37
37
80
38
37
39
37
37
36
37
38
37
36
36
37
160
37
38
37
36
38
37
37
40
38
37
37
38
320
39
39
39
37
42
39
41
58
40
39
37
38
640
44
44
45
42
52
46
48
71
47
42
39
47
1280
54
51
54
50
72
58
52
80
61
52
41
60
2560
70
70
69
65
81
72
65
90
75
59
48
72
5120
79
78
80
79
87
83
83
93
83
71
58
73
6000
79
82
80
81
88
84
82
91
84
73
60
74
7000
84
86
87
85
89
87
84
93
84
78
62
74
8000
85
87
87
86
89
88
85
93
87
81
63
76
9000
86
87
88
86
91
90
85
93
88
84
64
77
10000
87
87
89
86
91
90
85
93
87
85
64
78
+
+Table 3: Average accuracy scores of different pretrained DeBERTA representations on six representative non-linguistic tasks: palindrome, anagram isogram, tautonym, mean, and median. The results are rounded to the nearest percentage point. All models except Synthetic Vocabulary (Syn Voc). show statistically significant improvements $(p < 0.05)$ over the non-pretrained models.
+
+$21\%$ of the pretraining text is English.
+
+Chinese BERT. Chinese BERT is a BERT model pretrained on Chinese text.
+
+Code BERT. CodeBERT (Feng et al., 2020) is pretrained on code from six programming languages.
+
+In Table 2, we note that all three non-English pretrained LMs significantly outperformed non-pretrained models, with the best performance being comparable or marginally lower than English versions. In fact, Code-BERT surprisingly surpasses ROC by $5\%$ . These findings strongly indicate that the advantages from pretraining have little to do with the format of the tasks, since they persist for scenarios with little shared linguistic structure.
+
+# 7.4 Synthetic languages
+
+Finally, to investigate what happens if we weaken the distributional properties that hold even in the perturbed text versions from Section 6.2, we experiment with pretraining models on synthetic text sampled from simple probability distributions:
+
+Zipf distribution. We select 30k words (types) from the Amazon reviews dataset. Words are picked with a unigram probability that follows Zipf's word frequency law, which all natural languages empirically follow (Piantadosi, 2014). For the Zipf distribution, we chose $\alpha = 1$ and $\beta = 2.7$ , to match the parameters of most natural languages. The text does not follow any word order.
+
+Uniform distribution. In this dataset, words are sampled from the same vocabulary as in 'Zipf distribution', but with a uniform unigram probability. The text does not follow any word order.
+
+Synthetic Vocabulary. Words are selected with uniform distribution from a vocabulary to form
+
+sentences. However, instead of a vocabulary of English words, the words in the vocabulary are also synthetically generated (3 letter combinations of lower-case alphabets). In this text, the words do not possess morphology in addition to no syntax.
+
+In Tables 2 and 3, we note that surprisingly, even models pretrained on Zipfian and uniform distribution text continue to outperform the non-pretrained models. In fact, the Zipf version's best accuracy is $3\%$ higher than the standard Amazon data version and $2\%$ compared to perturbed Amazon shuffled data version in case of BERT. Zipf outperforms standard amazon data by $1\%$ and lags behind amazon shuffle by $3\%$ for DeBERTA. The Uniform distribution version lags behind Zipf by $9\%$ and $2\%$ for BERT and DeBERTa respectively. We note that the Zipf and Uniform versions still use the prebuilt vocabulary from the Amazon data, and hence this text maintains morphological structure. However, the gains finally disappear for the Synthetic vocabulary model, which cannot leverage morphological structure in the text, and its performance is similar to the non-pretrained models.
+
+# 8 Conclusion
+
+We explore the non-linguistic inductive biases of pretrained LMs. While the general trend (that pretraining helps) is unsurprising, our analysis with models pretrained on different text corpora shows that this is not due to the model seeing related topics during pretraining. We find that these gains persist even in absence of any shared linguistic structure (in cross-lingual settings). Our observation that this behavior is seen even when pretraining on synthetically generated languages is intriguing
+
+and can be explored further by future work.
+
+# Acknowledgements
+
+This work was supported in part by NSF grant DRL2112635. We are also thankful to the anonymous reviewers for their thoughtful suggestions.
+
+# Ethics and Broader Impact
+
+Our synthetic datasets contain no linguistic or social information, and hence cannot introduce any type of social, gender and cultural biases in our analyses. The datasets used in the section 7 are publicly available, and should contribute towards the goal of reproducible research. In terms of broader impact, our results suggest that LMs accrue helpful inductive biases for non-linguistic reasoning during pretraining. This suggests that LMs can potentially be explored for a broader range of downstream applications rather than language-related tasks, which is the current predominant focus of these models. In the long run, making such foundational models available for learning a broad range of tasks from limited data can make predictive AI technologies more accessible than in the current day.
+
+# Limitations
+
+In terms of findings, we find strong evidence of pretraining on text providing advantageous inductive biases for non-linguistic tasks. Our analysis in Section 6 suggests that this is not simply a regularization effect. However, it does not definitively rule out this possibility since direct comparisons between pretrained and non-pretrained networks (even of different sizes) are difficult. Also, the scope of our analysis here is limited to small to mid-sized language models (with tens of millions of parameters), rather than massive language models such as GPT3 (with tens of billions of parameters). Finally, we note that all tasks chosen for this analysis are formulated as classification, where the number of classes is not high. Hence, learning some of the tasks might easier than possible more general formulations. e.g., quantitative computation.
+
+# References
+
+Satwik Bhattachamishra, Kabir Ahuja, and Navin Goyal. 2020. On the Ability and Limitations of Transformers to Recognize Formal Languages. In Proceedings of the 2020 Conference on Empirical Methods
+
+in Natural Language Processing (EMNLP), pages 7096-7116, Online. Association for Computational Linguistics.
+Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics.
+Alon Brutzkus and Amir Globerson. 2021. On the inductive bias of a {cnn} for distributions with orthogonal patterns.
+Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. CodeBERT: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1536-1547, Online. Association for Computational Linguistics.
+Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In *proceedings of the 25th international conference on world wide web*, pages 507-517.
+David P Helmbold and Philip M Long. 2015. On the inductive bias of dropout. The Journal of Machine Learning Research, 16(1):3403-3454.
+Valentin Hofmann, Janet Pierrehumbert, and Hinrich Schütze. 2021. Superbizarre is not superb: Derivational morphology improves BERT's interpretation of complex words. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3594-3608, Online. Association for Computational Linguistics.
+Valentin Hofmann, Janet B Pierrehumbert, and Hinrich Schütze. 2020. Dagobert: Generating derivational morphology with a pretrained language model. arXiv preprint arXiv:2005.00672.
+Eugene Kharitonov and Rahma Chaabouni. 2021. What they do when in doubt: a study of inductive biases in seq2seq learners. In International Conference on Learning Representations.
+Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside bert's linguistic knowledge. arXiv preprint arXiv:1906.01698.
+R. Thomas McCoy, Robert Frank, and Tal Linzen. 2020. Does syntax need to grow on trees? sources of hierarchical inductive bias in sequence-to-sequence networks. Transactions of the Association for Computational Linguistics, 8:125-140.
+
+Nasrin Mostafazadeh, Michael Roth, Annie Louis, Nathanael Chambers, and James Allen. 2017. LSDSem 2017 shared task: The story cloze test. In Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics, pages 46-51, Valencia, Spain. Association for Computational Linguistics.
+Aakanksha Naik, Abhilasha Ravichander, Carolyn Rose, and Eduard Hovy. 2019. Exploring numeracy in word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3374-3380, Florence, Italy. Association for Computational Linguistics.
+Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
+Steven T Piantadosi. 2014. Zipf's word frequency law in natural language: A critical review and future directions. Psychonomic bulletin & review, 21(5):1112-1130.
+Luzi Sennhauser and Robert Berwick. 2018. Evaluating the ability of LSTMs to learn context-free grammars. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 115-124, Brussels, Belgium. Association for Computational Linguistics.
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
+Natalia Skachkova, Thomas Trost, and Dietrich Klakow. 2018. Closing brackets with recurrent neural networks. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 232-239, Brussels, Belgium. Association for Computational Linguistics.
+Mirac Suzgun, Yonatan Belinkov, Stuart Shieber, and Sebastian Gehrmann. 2019a. LSTM networks can perform dynamic counting. In Proceedings of the Workshop on Deep Learning and Formal Languages: Building Bridges, pages 44-54, Florence. Association for Computational Linguistics.
+Mirac Suzgun, Yonatan Belinkov, and Stuart M. Shieber. 2019b. On evaluating the generalization of LSTM models in formal languages. In Proceedings of the Society for Computation in Linguistics (SCiL) 2019, pages 277-286.
+
+Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601, Florence, Italy. Association for Computational Linguistics.
+Avijit Thawani, Jay Pujara, Filip Ilievski, and Pedro Szekely. 2021. Representing numbers in NLP: a survey and a vision. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-656, Online. Association for Computational Linguistics.
+Ivan Vulic, Simon Baker, Edoardo Maria Ponti, Ulla Petti, Ira Leviant, Kelly Wing, Olga Majewska, Eden Bar, Matt Malone, Thierry Poibean, Roi Reichart, and Anna Korhonen. 2020a. Multi-SimLex: A largescale evaluation of multilingual and crosslingual lexical semantic similarity. Computational Linguistics, 46(4):847-897.
+Ivan Vulic, Edoardo Maria Ponti, Robert Litschko, Goran Glavaš, and Anna Korhonen. 2020b. Probing pretrained language models for lexical semantics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7222-7240, Online. Association for Computational Linguistics.
+Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do NLP models know numbers? probing numeracy in embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5307-5315, Hong Kong, China. Association for Computational Linguistics.
+Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. arXiv preprint arXiv:1905.00537.
+Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretic, and Samuel R. Bowman. 2019. Investigating BERT's knowledge of language: Five analysis methods with NPIs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2877-2887, Hong Kong, China. Association for Computational Linguistics.
+Gail Weiss, Yoav Goldberg, and Eran Yahav. 2018. On the practical computational power of finite precision RNNs for language recognition. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),
+
+pages 740-745, Melbourne, Australia. Association for Computational Linguistics.
+Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
+Zhiyong Wu, Yun Chen, Ben Kao, and Qun Liu. 2020. Perturbed masking: Parameter-free probing for analyzing and interpreting BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4166-4176, Online. Association for Computational Linguistics.
+Rui Xu, Xintao Wang, Kai Chen, Bolei Zhou, and Chen Change Loy. 2021. Positional encoding as spatial inductive bias in gans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13569-13578.
+Mengjie Zhao, Philipp Dufter, Yadollah Yaghoobzadeh, and Hinrich Schütze. 2020. Quantifying the contextualization of word representations with semantic class probing. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 1219-1234, Online. Association for Computational Linguistics.
+
+# A Appendix
+
+
Baseline
p-value
SNLI
5.45 × 10-5
SNLI sort
3.33 × 10-4
SNLI shuffle
5.5 × 10-4
Amazon
7.48 × 10-5
Amazon sort
7.2 × 10-5
Amazon shuffle
4.5 × 10-5
Multilingual BERT
9.07 × 10-4
Chinese BERT
8.9 × 10-5
Code BERT
8.1 × 10-5
ROC
2.64 × 10-5
Zipf distribution
7.45 × 10-5
Uniform distribution
4.61 × 10-4
Synthetic vocabulary
1.2 × 10-1
+
+Table 4: Statistical significance values (paired t-test) between non-pretrained model and other baseline BERT models trained on different datasets.
+
+
Baseline
p-value
SNLI
2.45 × 10-5
SNLI sort
1.33 × 10-4
SNLI shuffle
4.3 × 10-5
Amazon
6.32 × 10-4
Amazon sort
8.7 × 10-5
Amazon shuffle
7.3 × 10-5
Multilingual BERT
9.07 × 10-5
ROC
2.14 × 10-3
Zipf distribution
3.1 × 10-3
Uniform distribution
4.61 × 10-4
Synthetic vocabulary
1.3 × 10-1
+
+Table 5: Statistical significance values (paired t-test) between non-pretrained model and other baseline DeBERTA models trained on different datasets.
+
+# A.1 Implementation details
+
+For transformer LMs, we add a fully connected classification layer on the top of final encoder layer. The pooled representations from the final encoder layer are then passed onto fully connected layer. We train these models in an end-to-end manner. For the RNN LMs, we first pretrain LM onto the task. The final word representations are the weighted sum of three layers. Max-pooling operation is applied on the time step dimension for these weighted representations. A final classification layer is trained with the pooled representations.
+
+# A.2 Computational requirements
+
+All the models are run using PyTorch framework on 4 geforce gtx 1080 gpus. Each of the fine-tuning experiments takes about 5gpu hours and pre-training takes about 10gpu hours.
+
+# A.3 Statistical significance
+
+We perform a paired t-test between pretrained and non-pretrained models of the LMs on all the tasks. The statistical significance values are shown in the table 6. We also calculated the paired t-value between non-pretrained model and BERT and DeBERTA pretrained on different datasets. The paired t-values are shown in the table 4 and 5.
+
+
+(a) DeBERTa
+
+
+(b) BERT large
+
+
+Figure A.1: Performance comparison of pretrained and non-pretrained models of DeBERTa and BERT large on four quantitative computation tasks (odd classification, even classification, odd even classification and decimal operation).
+(a) DeBERTa
+
+
+(b) BERT large
+
+
+Figure A.2: Performance comparison of pretrained and non-pretrained models of DeBERTa and BERT large on four quantitative tasks (mean, median, mode, decimal & word operation).
+(a) BERT small
+
+
+(b) ELMO
+
+
+Figure A.3: Performance comparison of pretrained and non-pretrained models of DeBERTa, and BERT large on regular expression tasks (AA*BB*CC*DD*EE* and recognize $\{0,1,2\} *02^*$ ).
+(a) DeBERTa
+Figure A.4: Performance comparison of pretrained and non-pretrained models of DeBERTa and BERT large on four string reasoning (palindrome, anagram, isogram and tautonym classification).
+
+
+(b) BERT large
+
+
+(a) DeBERTa
+
+
+(b) BERT large
+Figure A.5: Performance comparison of pretrained and non-pretrained models of DeBERTa and BERT large on five string reasoning tasks (length of a string, maximum frequent character, vowels classification, parity check and count of unique character).
+
+
Task
BERT small
DeBERTa
BERT large
ELMO
Odd classification
10.4 × 10-2
8.8 × 10-1
2.9 × 10-3
7.35 × 10-7
Even classification
8.1 × 10-2
8.7 × 10-2
5.25 × 10-3
7.35 × 10-7
Odd even classification
2.2 × 10-1
6.96 × 10-7
6.46 × 10-4
7.35 × 10-7
Decimal operation
4.1 × 10-4
7.07 × 10-1
1.35 × 10-5
3.49 × 10-7
Decimal & word operation
6.85 × 10-8
6.43 × 10-7
4.34 × 10-8
5.39 × 10-7
Mean
9.5 × 10-2
7.56 × 10-1
7.8 × 10-6
2.2 × 10-7
Median
9.28 × 10-6
8.04 × 10-1
5.68 × 10-7
1.99 × 10-7
Mode
9.2 × 10-2
2.27 × 10-1
9.2 × 10-1
3.35 × 10-7
Recognize {0,1,2} *02*
1.31 × 10-1
8.4 × 10-1
4.34 × 10-1
5.48 × 10-5
Recognize AA*BB*CC*DD*EE*
4.06 × 10-1
6.97 × 10-1
4.02 × 10-1
2.39 × 10-6
Palindrome classification
4.34 × 10-7
2.1 × 10-3
1.85 × 10-7
1.97 × 10-6
Anagram classification
5.1 × 10-6
1.44 × 10-6
3.45 × 10-7
7.46 × 10-6
Isogram classification
1.28 × 10-7
4.77 × 10-3
3.47 × 10-4
2.18 × 10-6
Tautonym classification
1.92 × 10-7
1.29 × 10-5
1.69 × 10-8
4.39 × 10-6
Length of a string
2.7 × 10-1
1.27 × 10-4
3.39 × 10-4
7.07 × 10-4
Count of unique characters
1.79 × 10-4
2.7 × 10-2
1.23 × 10-7
3.18 × 10-6
Parity check
2.68 × 10-4
4.66 × 10-4
4.34 × 10-7
6.05 × 10-6
Vowels classification
4.26 × 10-1
9.5 × 10-1
7.22 × 10-1
5.11 × 10-2
Maximum frequent character
5.02 × 10-1
5.65 × 10-1
6.07 × 10-1
6.47 × 10-1
+
+Table 6: Statistical significance values (paired t-test) between pretrained and non-pretrained model on all the tasks.
\ No newline at end of file
diff --git a/whatdolargelanguagemodelslearnbeyondlanguage/images.zip b/whatdolargelanguagemodelslearnbeyondlanguage/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..c93f1be13c0f129a828764b64f8e1ea7d54d58a1
--- /dev/null
+++ b/whatdolargelanguagemodelslearnbeyondlanguage/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee15f899f261f5c5f9fd2103995e3c6f6e1c1a3648b37c3276fb18140aef278c
+size 1082309
diff --git a/whatdolargelanguagemodelslearnbeyondlanguage/layout.json b/whatdolargelanguagemodelslearnbeyondlanguage/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c6fae1e580b1da576b5a42af27a545c09ab7c60e
--- /dev/null
+++ b/whatdolargelanguagemodelslearnbeyondlanguage/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dad268443c05ff0d4ce065a388ebc4e9a490b637a9c6e08b0d8beefb43bf15c5
+size 389921
diff --git a/whathasbeenenhancedinmyknowledgeenhancedlanguagemodel/0a0a64d2-4b53-430a-bdf5-7e43834496cb_content_list.json b/whathasbeenenhancedinmyknowledgeenhancedlanguagemodel/0a0a64d2-4b53-430a-bdf5-7e43834496cb_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..db0304518878a524299b67f949efbc09dab32598
--- /dev/null
+++ b/whathasbeenenhancedinmyknowledgeenhancedlanguagemodel/0a0a64d2-4b53-430a-bdf5-7e43834496cb_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d3e6513788be898b7cfcb15f4d0de2be6dae57e40979134f2e6efb47358343de
+size 158306
diff --git a/whathasbeenenhancedinmyknowledgeenhancedlanguagemodel/0a0a64d2-4b53-430a-bdf5-7e43834496cb_model.json b/whathasbeenenhancedinmyknowledgeenhancedlanguagemodel/0a0a64d2-4b53-430a-bdf5-7e43834496cb_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..6087712c3cf73f7f2c9d83b32848ca87afd4b135
--- /dev/null
+++ b/whathasbeenenhancedinmyknowledgeenhancedlanguagemodel/0a0a64d2-4b53-430a-bdf5-7e43834496cb_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f907374ef99e5bb3bcde9df088393f83ec2afdbe66fca3b7053ce4d6d962d529
+size 185969
diff --git a/whathasbeenenhancedinmyknowledgeenhancedlanguagemodel/0a0a64d2-4b53-430a-bdf5-7e43834496cb_origin.pdf b/whathasbeenenhancedinmyknowledgeenhancedlanguagemodel/0a0a64d2-4b53-430a-bdf5-7e43834496cb_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c4a5755c0f71b3b58d7e8e5c5dd0f1d4573ce2b6
--- /dev/null
+++ b/whathasbeenenhancedinmyknowledgeenhancedlanguagemodel/0a0a64d2-4b53-430a-bdf5-7e43834496cb_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c612c681795530954bae0d6ffc27879097dade5f811ac0ff0ac36546408e50f8
+size 1107744
diff --git a/whathasbeenenhancedinmyknowledgeenhancedlanguagemodel/full.md b/whathasbeenenhancedinmyknowledgeenhancedlanguagemodel/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e4abee35bf4b272d4787637435f78a29d5831e0c
--- /dev/null
+++ b/whathasbeenenhancedinmyknowledgeenhancedlanguagemodel/full.md
@@ -0,0 +1,661 @@
+# What Has Been Enhanced in my Knowledge-Enhanced Language Model?
+
+Yifan Hou $^{1}$ , Guoji Fu $^{2}$ , Mrinmaya Sachan $^{1}$
+
+$^{1}$ ETH Zürich, $^{2}$ Southern University of Science and Technology
+
+$^{1}\{yifan.hou, mrinmaya.sachan\} @inf.ethz.ch, \quad^{2}11749236@mail.sustech.edu.cn$
+
+# Abstract
+
+A number of knowledge integration (KI) methods have recently been proposed to incorporate external knowledge into pretrained language models (LMs). Even though knowledge-enhanced LMs outperform base LMs on knowledge-intensive tasks, the inner-workings of these KI methods are not well-understood. For instance, it is unclear which knowledge is effectively integrated into knowledge-enhanced LMs and which is not; and if such integration leads to catastrophic forgetting of already learned knowledge. We show that existing model interpretation methods such as linear probes and prompts have some key limitations in answering these questions. We revisit KI from an information-theoretic view and propose a new theoretically sound probe called Graph Convolution Simulator (GCS) for KI interpretation. GCS uses graph attention on the corresponding knowledge graph for interpretation. In our experiments we verify that GCS can provide reasonable interpretation results for two well-known knowledge-enhanced LMs: ERNIE and K-Adapter. We also find that only a marginal amount of knowledge is successfully integrated in these models, and simply increasing the size of the KI corpus may not lead to better knowledge-enhanced LMs.1
+
+# 1 Introduction
+
+Pretrained language models (LMs) have become the backbone of NLP. Recent work has shown that linguistic knowledge is captured quite well in LMs (Liu et al., 2019a; Jawahar et al., 2019). However, LMs are much worse at capturing factual knowledge about the world (Petroni et al., 2019; Wang et al., 2021b). This has led to the development of a number of knowledge integration (KI) methods which integrate external knowledge from knowledge graphs (KGs) into LMs, leading to knowledge-
+
+enhanced language models such as ERNIE (Zhang et al., 2019) and $K$ -Adapters (Wang et al., 2021a).
+
+Even though enhanced LMs perform better on knowledge-intensive tasks, there is little understanding of where this improvement comes from. Which factual knowledge is successfully integrated into the LM, and which kind of knowledge is not, is not well-understood. As new knowledge is integrated in LMs, old knowledge could be catastrophically forgotten (Kirkpatrick et al., 2016). KI could also lead to a situation called catastrophic remembering (Kaushik et al., 2021), where the old knowledge could prevent the integration of new knowledge. Our understanding about these issues is also limited.
+
+An intuitive way to understand the KI process could be to probe for factual knowledge in the base LM and the enhanced LM respectively, and compare their results for interpretation. However, we find that because of the high variance of classifier-based probes for KG prediction (Menon and Elkan, 2011; Li et al., 2016) and the significant human effort required to verbalize knowledge graph facts into templates (Petroni et al., 2019; Shin et al., 2020), we cannot easily extend existing probe methods to interpret knowledge-integration in LMs ( $\S 3$ ).
+
+In this paper, we revisit the KI process and formulate it with an information-theoretic view (§4). We measure the factual knowledge encoded in the LM by the mutual information (MI) between the model representations and the KG (Hou and Sachan, 2021). Now, the integration and forgetting of factual knowledge can be measured using the difference in MI between the KG and the base LM and the MI between the KG and the enhanced LM. Based on this idea, we theoretically derive a transformation composed of graph convolutions to simulate and interpret this change in MI, leading to our proposed probe, Graph Convolution Simulator (GCS) (§5). The interpretation mechanism in GCS is quite different from existing probe methods as
+
+
+Figure 1: Adaptations of two existing probe methods and GCS to interpret knowledge integration in language models. Probes from left to right are: linear probe, prompt-based probe, and GCS. We can find that only GCS does not have the compare operation, which can avoid introducing extra noise of interpretation results.
+
+
+
+
+
+shown in Figure 1. GCS uses a graph attention layer on the KG to simulate the MI change. Then, it interprets KI from the information flow on the KG using graph attention coefficients.
+
+In our experiments (§6), we verify that GCS can provide reasonable KI interpretation results. We show that: (a) results of GCS have significantly smaller variance compared to the linear probe, (b) dropping knowledge identified as non-integrated by GCS does not affect enhanced LMs' performance, and (c) enhanced LMs perform better on samples that include knowledge identified as integrated by GCS. In particular, we use GCS to understand two well-known knowledge-enhanced LMs: ERNIE (Zhang et al., 2019) and $K$ -Adapter (Wang et al., 2021a). Our findings are listed as follows.
+
+- Both of them only integrate little new knowledge (i.e., less than $30\%$ knowledge triples are successfully integrated). ERNIE is better at integrating triples with high degree in KGs, while K-Adapter integrates triples with low-degree entities well.
+- In our qualitative study, we find that enhanced LMs do not integrate numbers and temporal knowledge well, where only less than $0.01\%$ triples are successfully integrated.
+- Finally, we find that there is no positive relationship between KI corpus size and KI quality. This suggests that merely building larger corpus would not be enough, highlighting the need for more fundamental advances in KI.
+
+# 2 Preliminaries
+
+KI methods. There are several approaches for KI. KI in LMs can be implemented by aligning phrases in text to entities (Peters et al., 2019; Zhang et al.,
+
+2019) or triples (Liu et al., 2020; Wang et al., 2021a) and incorporating corresponding entity or the triple embeddings in the LM. KI methods also include modifications to the Transformer architecture (Peters et al., 2019; Liu et al., 2020), verbalizing knowledge triples and using data augmentation for finetuning (Agarwal et al., 2021), and designing objective functions that predict the factual knowledge (Yao et al., 2019; Wang et al., 2021a).
+
+Knowledge graphs. We assume that factual knowledge for integration can be formulated as a KG $\mathcal{G} = (\mathcal{V},\mathcal{E})$ , where nodes $v_{i}\in \mathcal{V}$ represent entities, and edges in $\mathcal{E}$ represent relations between them. Let $\mathcal{N}_{v_i}$ denote the set of neighbors of node $v_{i}$ , and $t_i$ denote the entity label corresponding to the node $v_{i}$ . Further, let $\pmb {x}_i = \mathrm{LM}(t_i)$ denote the entity (label) representations given by a $\mathrm{LM^2}$ , and $\pmb {X}\in \mathbb{R}^{|\mathcal{V}|\times d}$ denote a matrix formed by stacking all entity representations $\pmb {x}_i\in \mathbb{R}^d$ . In this paper, we only consider nodes and edges in the KG and ignore other side information such as relation weights, directions and labels3.
+
+# 3 Unsuitability of Existing Probes for KI
+
+Classifier probes (Alain and Bengio, 2017; Hewitt and Liang, 2019) and prompt-based probes (e.g. the LAMA probe) (Petroni et al., 2019; Shin et al., 2020) are typically used to test for various kinds of knowledge in LMs. Classifier probes train simple (usually linear) classifiers to predict the linguistic property of interest and the probe accuracy is used for interpretation (Ribeiro et al., 2016; Hewitt and Manning, 2019). However, simple classifiers are
+
+not powerful enough to make reliable predictions about large KGs (Menon and Elkan, 2011; Li et al., 2016). Moreover, linear probes are also unable to provide reasonable insights for LMs as they suffer from high variance. If we use them to probe two LMs and compare their results for KI interpretation, variance of interpretation would further increase. We provide empirical evidence later in §6.1.1.
+
+Prompting is another popular way to understand what factual knowledge LMs know. Prompts are designed to let LMs solve text infilling problems, and the prompt output is then used for interpretation (Petroni et al., 2019). However, people have to manually design templates for factual knowledge to be probed5, and the quality of templates is vital in the overall prompt accuracy (Jiang et al., 2021; Li et al., 2022). As KI methods often use large KGs for integration, it would be infeasible to write a large number of templates for all triples in KGs. To address these issues, we introduce the GCS model and its theoretical motivation behind.
+
+# 4 Knowledge Integration Understanding
+
+First, we revisit KI in LMs by formulating it in an information-theoretic way. Then, we construct transformations to simulate and interpret the KI process. Notations are summarized in Appendix A.
+
+# 4.1 Knowledge Integration Definition
+
+We measure knowledge in LMs using mutual information (MI) between the knowledge and the LM representations (Hou and Sachan, 2021). We assume that the local graph $\mathcal{G}(v_i)$ contains all factual knowledge regarding on $v_{i}$ , and successfully integrated knowledge should be reflected in the entity representations. Let $\mathbf{x}$ be a random variable that takes values ranging over all possible entity representations of a $\mathrm{LM^6}$ , and $\mathbf{g}$ be a random variable that ranges over all possible corresponding local structures $\mathcal{G}(v_i)$ . Mutual information MI(x; g) can be used to measure the amount of information in $\mathbf{g}$ contained in $\mathbf{x}$ as
+
+$$
+\operatorname {M I} (\mathbf {x}; \mathbf {g}) = D _ {K L} \left(\mathbb {P} _ {\mathbf {x g}} | | \mathbb {P} _ {\mathbf {x}} \otimes \mathbb {P} _ {\mathbf {g}}\right),
+$$
+
+which is equivalent to the Kullback-Leibler (KL) divergence between the joint distribution $\mathbb{P}_{\mathbf{xg}}$ and the product of the marginal distributions $\mathbb{P}_{\mathbf{x}}\otimes \mathbb{P}_{\mathbf{g}}$ . Next, we present a formal definition of KI.
+
+Definition 4.1 (Knowledge Integration). Let $\mathbf{x}$ , $\mathbf{h}$ denote random variables for entity representations in the base LM and the enhanced LM, respectively. The KI process can be formulated as a function $f$ such that $\mathbf{h} = f(\mathbf{x},\mathbf{g})$ . Consequently, we assume that the knowledge change during KI can be measured using MI by: $\mathrm{MI}(\mathbf{x};\mathbf{g})\rightarrow \mathrm{MI}(\mathbf{h};\mathbf{g})$ .
+
+Definition 4.1 can be intuitively visualized by Figure 2. Ideally, if most knowledge is successfully integrated without much forgetting of the old knowledge, regions 2
+
+
+Figure 2: Venn diagram of KI. $\mathbf{x}$ and $\mathbf{h}$ are random variables for entity representations of base LM and enhanced LM. $\mathbf{g}$ is the random variable for the local graph structure.
+
+and 4 are large. We have $\mathrm{MI}(\mathbf{h};\mathbf{x})\approx \mathrm{MI}(\mathbf{x};\mathbf{x})$ and $\mathrm{MI}(\mathbf{h};\mathbf{g})\approx \mathrm{MI}(\mathbf{g};\mathbf{g})$ . If little new knowledge has been integrated, i.e., catastrophic remembering happens, region 4 is small, and we have $\mathrm{MI}(\mathbf{h};\mathbf{g})\approx \mathrm{MI}(\mathbf{x};\mathbf{g})$ . Similarly, if catastrophic forgetting happens, region 2 is small and we have $\mathrm{MI}(\mathbf{h};\mathbf{x})\approx \mathrm{MI}(\mathbf{x};\mathbf{g})$ .
+
+# 4.2 Knowledge Integration Simulation
+
+Note that $f$ in Definition 4.1 shows how KI happens and is an unknown ground-truth transformation that depends on many factors such as the base LM, KI corpus, and KI methods. Thus, we propose an approximated transformation $f'$ to fit $f$ such that it can simulate and approximate $f$ with high accuracy (§4.2.1). However, high accuracy cannot promise interpretation. To interpret KI in a fine-grained way, we propose another interpretable transformation $f''$ such that it can be as close to $f$ as $f'$ under the MI measurement (§4.2.2). Figure 3 briefly illustrates the idea. The shown black dashed transformation (i.e., $f'$ ) can simulate KI with arbitrary accuracy. The solid red lines represent the interpretable transformation (i.e., $f''$ ) using graph convolutions, which could promise accuracy with the MI measurement and it is interpretable. Below we introduce details of the two transformations.
+
+# 4.2.1 Approximated Transformation
+
+Note that samples of $\mathbf{g}$ are non-Euclidean (local graph structures) while samples of $\mathbf{x}$ and $\mathbf{h}$ are
+
+
+Figure 3: Illustration of the KI simulation. The part above the horizontal blue dashed line represents graph spectral domain (i.e., KG space). GFT and RGFT are graph Fourier transformation and its inverse transformation. Black dashed arrows show the approximated transformation (i.e., $f'$ ), which can promise the approximation accuracy. Red arrows show the interpretable transformation with graph convolutions (i.e., $f''$ ), which can promise the accuracy under the MI measurement and interpretability.
+
+vectors. To understand how $\mathbf{x}$ is transformed to $\mathbf{h}$ by integrating $\mathbf{g}$ , we first map $\mathbf{x}$ and $\mathbf{h}$ to a new space related to $\mathbf{g}$ . Graph Fourier transform (GFT) (Sandryhaila and Moura, 2014) can be used to transform the entity representation $\mathbf{x}$ from the spatial domain to the graph spectral domain (KG space). We denote the transformation of $\mathbf{x}$ to the KG space as $\mathrm{GFT}(\mathbf{x})$ , and its inverse transformation as $\mathrm{RGFT}(\mathrm{GFT}(\mathbf{x})) = \mathbf{x}$ . Formal definition can be found in Appendix C. Using GFT, we will look at the change from $\mathbf{x}$ to $\mathbf{h}$ in the KG space, and construct an approximated transformation to simulate the KI process there.
+
+Theorem 4.2 (Approximation). Given a base $LM$ and its enhanced $LM$ , suppose that $\mathrm{MI}(\mathbf{x};\mathbf{g}) < \mathrm{MI}(\mathbf{h};\mathbf{g})$ . Then, for any $\epsilon >0$ , there exists an $n$ -layer neural network $\mathrm{NN}^n (\cdot)$ such that
+
+$$
+| f (\mathbf {x}, \mathbf {g}) - \mathrm {R G F T} (\mathrm {N N} ^ {n} (\mathrm {G F T} (\mathbf {x}))) | < \epsilon .
+$$
+
+The proof can be found in Appendix D. Theorem 4.2 shows that there exists an approximated transformation composed of GFT and a neural network that can simulate $f$ with arbitrary accuracy. However, the transformation $f'$ cannot provide specific insights about spatial samples. For example, it cannot show which set of knowledge triples contribute to KI. Thus, to interpret KI, we develop a new transformation which can promise both accuracy and interpretability.
+
+# 4.2.2 Interpretable Transformation
+
+In order to achieve an interpretable transformation with high accuracy, we make use of the invariance property of MI (Kraskov et al., 2004), i.e., the property that bijective functions would not change the MI. If we can change the metric in Theorem 4.2 from $L1$ norm to MI, and replace operations in
+
+$f^{\prime}(\mathbf{x},\mathbf{g})$ that change MI by other equivalent and interpretable operations, we can obtain an interpretable transformation to simulate KI.
+
+Graph convolutions (Defferrard et al., 2016) are often used to model the relational information in KGs by filters (i.e., kernels), where entities aggregate information from their neighbors and pass the information based on the KG structure. Let $\mathrm{GC}(\cdot)$ denote the convolution operation on $\mathcal{G}$ . Formal definition of GC can be found in Appendix C. If we run graph convolutions with attention (Velickovic et al., 2018), the information flow on graphs can be indicated by attention coefficients (Zheng et al., 2020; Fu et al., 2020), which can be used for interpretation. Thus, we propose an interpretable transformation to simulate KI process using graph convolutions with attention mechanism.
+
+Theorem 4.3 (Interpretation). Given a base $LM$ and its enhanced $LM$ , let $\mathrm{MI}(\mathbf{x};\mathbf{g}) < \mathrm{MI}(\mathbf{h};\mathbf{g})$ . Denote that $f^{\prime}(\mathbf{x},\mathbf{g}) = \mathrm{RGFT}(\mathrm{NN}^{n}(\mathrm{GFT}(\mathbf{x})))$ . Let $\mathrm{MLP}_b(\cdot)$ denote the bijective MLP layer and $\mathrm{GC}(\cdot)$ denote the graph convolution on $KG\mathcal{G}$ . There exists $f^{\prime \prime}(\mathbf{x},\mathbf{g}) = \mathrm{MLP}_b(\mathrm{GC}^n (\mathrm{MLP}_b(\mathbf{x})))$ where $\mathrm{GC}^n$ is composed of $n$ repeat components as $\mathrm{MLP}_b(\mathrm{GC}(\cdot))$ , such that
+
+$$
+\operatorname {M I} (f ^ {\prime \prime} (\mathbf {x}, \mathbf {g}); f (\mathbf {x}, \mathbf {g})) = \operatorname {M I} (f ^ {\prime} (\mathbf {x}, \mathbf {g}); f (\mathbf {x}, \mathbf {g})).
+$$
+
+The proof can be found in Appendix E. Note that the equality above becomes approximate equality when graph filters are approximated filters. For example, GCN (Kipf and Welling, 2017) as well as its variants (e.g., GAT (Velickovic et al., 2018)) runs graph convolutions using localized first-order approximated filters. Theorem 4.3 shows that we can use graph convolution operations to gain interpretability without the loss of MI. Assuming that MI can be used to measure knowledge in LMs as
+
+defined before, the interpretable transformation in Theorem 4.3 can promise accuracy as well.
+
+# 4.3 Knowledge Integration Interpretation
+
+As shown in Theorem 4.3, $f''$ is composed of $n$ graph convolution operations and $n + 2$ bijective functions (bijective MLPs). Since MI does not change with bijective functions, the information change (i.e., $\mathrm{MI}(\mathbf{x};\mathbf{g}) \to \mathrm{MI}(\mathbf{h};\mathbf{g})$ ) can only happen in graph convolutions in $f''(\mathbf{x},\mathbf{g})$ . We then use graph attention coefficients in the graph convolutions to interpret how the information flows in a KG. Based on the information flow, we can interpret the integration of knowledge triples.
+
+
+Figure 4: Information flow on knowledge graph with respect to the entity $v_{i}$ . Information from $n$ -hop neighbors is aggregated with $n$ -th graph convolution. Here, the aggregation number $n$ corresponds to the number of layers of the neural network $\mathrm{NN}^n(\cdot)$ in Theorem 4.3.
+
+Figure 4 illustrates the information flow on KG with respect to $v_{i}$ . Given a stacking of $n$ graph convolutions, the $i$ -th graph convolution aggregates information from the $i$ -hop neighbors. After the transformation, $v_{i}$ keeps $a_{i,i}^{1}$ ( $0 < a_{i,i}^{1} < 1$ ) of its original information, and aggregates $1 - a_{i,i}^{1}$ of information from its $n$ -hop neighbor entities. For example, if $n = 2$ , the 2-hop neighbor $v_{k}$ contributes $a_{i,j}^{1} \cdot a_{j,k}^{2}$ proportion of information to $v_{i}$ .
+
+We now use the information flow for interpretation, where attention coefficients on self-loops can be used to show if catastrophic remembering and catastrophic forgetting happened. For example, given entity $v_{i}$ , if $a_{i,i}^{1} \approx 0$ , it means entity $v_{i}$ did not keep any information of itself from the base LM in the enhanced LM, which means that catastrophic forgetting happens to $v_{i}$ . Similarly, if $a_{i,i}^{1} \approx 1$ , it means that catastrophic remembering happened to $v_{i}$ . Attention coefficients on the triples can be used to show if they are captured or not during KI. For example in Figure 4, if $a_{i,j}^{1} \approx 1$ , it means that from the base LM to the enhanced LM, entity $v_{i}$ became more closely associated with $v_{j}$ . This indicates that the knowledge triple $(v_{i}, r, v_{j})$ is newly integrated during KI. In our experiment, we regard knowledge triple $(v_{i}, r, v_{j})$ as a single entity.
+
+edge triples with $a_{i,j}^{1} > 0.1$ as integrated triples, and others as non-integrated ones. Entities with $a_{i,i}^{1} < 0.1$ imply that catastrophic forgetting happened, and correspondingly, $a_{i,i}^{1} > 0.9$ implies that catastrophic remembering happened. $^{7}$
+
+# 5 Graph Convolution Simulator
+
+So far, we have shown that there exists an interpretable transformation that can simulate KI theoretically. Next, we describe how to optimize the transformation (e.g., the MLP layers, and graph convolution layers) and introduce a practical implementation of our final GCS probe model.
+
+To implement the probe in practice, we make three approximations in GCS: two in the model design and one in optimization. We design experiments in the next section to make sure that GCS works well empirically. The transformation as described in Theorem 4.3 is implemented by bijective MLP layers and graph convolutional layers with attention. We show that if the weight matrix in MLP is a square matrix, the function is bijective. Formal description and proof are in Appendix F. For graph convolutions with attention, we use approximated graph attention filters similar to (Thekumparamil et al., 2018; Velickovic et al., 2018) (Approximation 1). Then, we assume that the knowledge being integrated in the LM can be expressed as triples. In other words, we do not consider multi-hop knowledge triples (e.g., the 2-hop knowledge triple $(v_{i},r,v_{k})$ in Figure 4) in KI. Thus, we design GCS on the simple condition, where there is only one graph convolution operation (Approximation 2). Therefore, GCS is designed as two bijective MLP layers and one graph attention layer in between as:
+
+$$
+\operatorname {G C S} _ {\theta_ {1}} (\cdot) = \operatorname {M L P} (\operatorname {G C} (\operatorname {M L P} (\cdot))),
+$$
+
+where $\mathrm{MLP}(\cdot)$ is the bijective MLP layer and $\mathrm{GC}(\cdot)$ is the graph convolutional layer with attention on $\mathcal{G}$ . Given an entity $v_{i}$ and its neighbors $\mathcal{N}_{v_i}$ , we can write the graph convolutional layer as:
+
+$$
+\begin{array}{l} \operatorname {G C} \left(\boldsymbol {x} _ {i}\right) = \sigma \left(\sum_ {v _ {j} \in \mathcal {N} _ {v _ {i}} \cup \{v _ {i} \}} a _ {i, j} \boldsymbol {W} ^ {V} \boldsymbol {x} _ {j}\right), \\ a _ {i, j} = \operatorname {s o f t m a x} \left(\frac {\left(\boldsymbol {W} ^ {Q} \boldsymbol {x} _ {i}\right) \cdot \left(\boldsymbol {W} ^ {K} \boldsymbol {x} _ {j}\right)}{t}\right). \\ \end{array}
+$$
+
+Here, $\pmb{x}_i$ is the entity representation of $v_i$ from the base LM. The activation function $\sigma(\cdot)$ is $\mathrm{ELU}(\cdot)$ , and $W^V$ is a weight matrix. $a_{i,j}$ is the attention coefficient on the relation that connects $v_i$ and $v_j$ . $W^Q$ and $W^K$ are two parameter matrices in the graph attention. $\mathrm{softmax}(\cdot)$ is the edge-wise softmax function with respect to node $v_i$ . The temperature $t$ is a hyperparameter that controls the attention distribution to be hard or soft.
+
+We optimize GCS using the MI between its outputs and the entity representations from the enhanced LM:
+
+$$
+\mathcal {L} = - \operatorname {M I} \left(\operatorname {G C S} _ {\theta_ {1}} (\mathbf {x}); \mathbf {h}\right). \tag {1}
+$$
+
+In practice, we maximize the compression lemma lower bound of MI instead as introduced in Belghazi et al. (2018). More details can be found in Appendix G.1. In practice, there may be a gap between the ground-truth MI and the compression lemma lower bound, and a stochastic optimization (e.g., Adam (Kingma and Ba, 2015)) may not converge to the optimal point. Thus, GCS may not fit $f^{\prime \prime}(\mathbf{x},\mathbf{g})$ perfectly (Approximation 3).
+
+# 6 Experiments
+
+We begin by reviewing ERNIE (Zhang et al., 2019) and K-Adapter (Wang et al., 2021a). Knowledge is integrated in entity-wise manner in ERNIE, and triple-wise manner in K-Adapter.
+
+ERNIE. ERNIE integrates knowledge into BERT (Devlin et al., 2019) using a Wikipedia corpus and Wikidata KG. As no alignment is provided between sentences in the Wikipedia corpus and entities in the Wikipedia KG, ERNIE uses TAGME (Ferragina and Scaiella, 2010) to extract entity mentions in sentences and aligns them with corresponding entities in KGs. A new objective is designed for KI in addition to the standard MLM and NSP objectives: alignments in the input text are randomly masked, and the model is asked to select aligned entities from KGs. When ERNIE finds the aligned entity, its KG embedding obtained from Bordes et al. (2013) is integrated into the hidden representations of BERT.
+
+K-Adapter. K-Adapter takes RoBERTa (Liu et al., 2019b) as the base LM and inserts three new layers into RoBERTa to integrate factual knowledge. The final output is concatenated with the output of RoBERTa. During the integration, parame
+
+ters of RoBERTa are frozen, only parameters in the adapter are updated. K-Adapter uses the T-RExrc (Elsahar et al., 2018) dataset for KI, which has an alignment of sentences with knowledge triples in Wikidata. For the KI objective, K-Adapter decides whether certain relations exist or not, and classifies relation labels given the aligned sentence.
+
+# 6.1 GCS Verification
+
+Next, we design a set of experiments to show that GCS can provide reasonable interpretations for KI. We first compare GCS with the linear probe with respect to variance of the interpretation results (§6.1.1). We show that the linear probe does not work in interpreting KI, but GCS can provide stable interpretations. Then, we verify GCS based on the KG used in KI (§6.1.2). We drop knowledge triples that are identified as non-integrated by GCS during KI and show that it would not affect the performance of enhanced LMs. Third, we verify GCS based on the downstream tasks (§6.1.3). We show that enhanced LMs perform well on the test samples that contain integrated knowledge triples that are identified by GCS, and vice versa.
+
+# 6.1.1 Variance of interpretation results
+
+As alluded to earlier, linear probes do not work for large-scale factual knowledge interpretation. We support this claim by evaluating the variance of interpretation results. We use the entropy of probe results to test if the interpretation is stable.
+
+Setting. We run a linear probe and GCS to detect if a knowledge triple is integrated or non-integrated 100 times with different random seeds. $^{10}$ We calculate the entropy of these results to estimate the probe variance. If a triple is classified as "unlearned" in the base LM but "learned" in the enhanced LM, we regard it as integrated and if it is classified as "unlearned" in both base LM and enhanced LM, we say the triple is non-integrated. For GCS, as introduced in §4.3, if the attention coefficient for the triple is larger than 0.1, it is deemed as integrated, else not.
+
+Results. A histogram of the entropy of all the entities is shown in Figure 5. For a random guess strategy, the entropy of most knowledge triples is around 1, which means the results are highly unstable. For the linear probe, we find that only $10\%$ of
+
+
+Figure 5: The histogram of entropy for all entities (K-Adapter), where $x$ -axis shows the entropy value and $y$ -axis shows the empirical probability (i.e., frequency) of entities. The random guess strategy is also included for comparison. We can find that the KI interpretation results using linear probes have fairly large variance, which are similar to the results of random guessing. But our GCS can provide stable interpretations.
+
+
+
+
+
+knowledge triples are interpreted in a stable manner. The other $90\%$ triples have large entropy (their interpretation is similar to random guessing). On the other hand, for GCS, most knowledge triples have stable interpretation results. This shows that GCS can indeed provide more reliable interpretations.
+
+# 6.1.2 Verification via the KI corpus
+
+As our second verification experiment, we only use the factual knowledge that is identified as integrated by GCS to enhance BERT and RoBERTa to get ERNIE (drop-UE) and K-Adapter (drop-UE). Then we judge if the GCS interpretation was reasonable using two downstream tasks, where enhanced LMs outperform base LMs most significantly. If GCS can interpret the KI process well, the performance of the drop-UE versions should be roughly the same as that of ERNIE/K-Adapter.
+
+Setting. This experiment comprises of three steps. First, we use GCS to interpret the KI process in ERNIE and K-Adapter, and identify triples or entities that are integrated successfully. Second, we re-enhance BERT/RoBERTa to get ERNIE (drop-UE) / K-Adapter (drop-UE) only using the entities/triples that are identified as integrated. Third, we finetune ERNIE/K-Adapter and their drop-UE versions on two downstream tasks.
+
+After we get our interpretation results, we only keep the KI corpus aligned with the integrated knowledge.11 We finetune the models on two downstream tasks about entity typing: OpenEntity (Choi et al., 2018) and FIGER (Ling et al., 2015). Implementation details of GCS, ERNIE, and K-Adapter can be found in Appendix G.1 and Appendix G.2.
+
+Results. From Table 1, we find that even if we drop a large amount of KI data in this way, the
+
+Table 1: Performance of ERNIE, K-Adapter and their dropUE versions on the entity typing downstream tasks. We can find that dropping a large amount of non-integrated knowledge would not affect enhanced LMs' performance much on knowledge-intensive downstream tasks.
+
+
Model
OpenEntity
FIGER
P
R
F1-Micro
P
R
F1-Micro
ERNIE
78.24
68.75
73.19
77.39
65.81
71.13
ERNIE (drop-UE)
78.11
71.43
74.62 ↑
77.38
64.90
70.60 ↓
K-Adapter
76.63
75.26
75.94
67.50
88.79
76.69
K-Adapter (drop-UE)
75.95
75.95
75.95 ↑
67.29
88.88
76.59 ↓
+
+
+Figure 6: Performance of BERT, RoBERTa, ERNIE, K-Adapter, and their dropped versions on the FIGER dataset. We can find that even if there are negative effects on the performance, they are marginal that can be ignored.
+
+
+
+performance of drop-UE versions on entity typing task is roughly the same as original versions. For the OpenEntity dataset, better performance is achieved. For the FIGER dataset, the performance of drop-UE versions is slightly worse. We also report the performance of BERT, RoBERTa, and two drop-IE $^{12}$ versions in Figure 6. We find that compared to dropping KI corpus aligned with integrated knowledge, dropping KI corpus aligned with non-integrated knowledge achieves much better performance. Thus, we verify that GCS provides reasonable interpretations.
+
+# 6.1.3 Verification via the downstream task
+
+We use the performance of ERNIE and K-Adapter on downstream tasks to verify GCS. If GCS can reasonably interpret KI, enhanced LMs should perform better on the test set with samples aligned with the integrated knowledge, and vice versa.
+
+Setting. We first align entities in the KI corpus and the OpenEntity dataset based on their Wikidata Q identifier. $^{13}$ For the entity typing task (OpenEntity dataset), we drop samples in the finetuning test set that aligns with the integrated knowledge and non-integrated entities (called w/o-IE test set and w/o-UE test set), and test ERNIE and K-Adapter on the two dropped test sets. Detailed statistics can be found in the Table 7 in Appendix H.
+
+
+
+
+Figure 7: Performance of BERT, RoBERTa, ERNIE, K-Adapter on the OpenEntity dataset for different test sets (original versions and dropped versions). We can find that enhanced LMs perform better on test samples that contain successfully integrated knowledge.
+
+Results. As shown in Figure 7, we find that for ERNIE, the difference is significant. The performance on the test set (w/o-IE) is more than 20 F1 points worse than that on the complete test set (all). For K-Adapter, there is a drop in F1 on the w/o-IE set and increase in F1 on the w/o-UE set (albeit small). We hypothesize that this may be because of the differences in the finetuning objective and the KI objective $^{14}$ , and because the knowledge integrated in K-Adapter may change during finetuning. These results show that GCS can reasonably interpret which set of knowledge is integrated.
+
+# 6.2 GCS Findings
+
+After verifying GCS with three set of experiments, we analyze the interpretation results. We find that both ERNIE and K-Adapter integrate only few knowledge triples $(\approx 20 - 30\%)$ . Detailed results can be found in Figure 11 in Appendix H. Next, we classify the factual knowledge based on its relation types (in terms of their topology type and Wiki data type) and analyze how ERNIE and K-Adapter integrate knowledge with certain relation types.
+
+# 6.2.1 Analysis via relation topology
+
+We classify relations into three types based on their topology features. Following previous work (Bordes et al., 2013; Tang et al., 2020), we denote rela
+
+
+
+
+Figure 8: Analysis of KI interpretation results in terms of different relation topology. We can find that the degree of knowledge (types) integration is different for enhanced LMs using different KI methods.
+
+
+
+
+
+tions that connect two leaf nodes (entities) in the KG as $1 - 1$ relations, and relations that connect two center nodes (entities) in the KG as $N - M$ relations. Others are denoted as $N - 1$ relations. An example can be found in Figure 12 in Appendix H. We perform an analysis in terms of different types of relations and report the percentage of successfully integrated entities and triples for ERNIE and K-Adapter. For each relation type, we also present the percentage of connected entities that are catastrophically remembered or forgotten (CR and CF).
+
+Figure 8 presents the results, and detailed statistics can be found in Table 8 in Appendix H. We find that for ERNIE, entities connected with complex relations (i.e., $N - M$ relations) are captured well. However, K-Adapter shows different behaviors. It captures triples with simple relations (i.e., $1 - 1$ and $N - 1$ relations) well. Note that ERNIE uses graph embeddings provided by the specially designed model (Bordes et al., 2013) for KI, while K-Adapter integrates knowledge into dedicated feedforward networks called adapters. This implies that structures are not well encoded into classic adapter modules, and we may need a better approach to integrate knowledge with complex structures into neural modules. Besides, we find that for both ERNIE and K-Adapter, CR happens more often to entities in simple structures (i.e., $N - 1$ relations), while CF is more common for entities in complex structures (i.e., connected to $N - M$ relations).
+
+# 6.2.2 Analysis via relation's Wiki features
+
+For further analysis, we select six relations aligned with roughly the same number of sentences in the
+
+Table 2: Analysis of KI interpretations via different relation labels for the T-REx-rc dataset (KI corpus used by K-Adapter). We can find that temporal knowledge is hard to get integrated.
+
+
Relation label
T-REx-rc
Wiki Count
Wiki data type
Integrated triple
place of birth (LF)
2,850,424
Wikibase item
10.95%
part of (LF)
4,164,470
Wikibase item
17.25%
date of death (TR)
2,637,358
Time
<0.01%
date of birth (TR)
5,294,649
Time
<0.01%
located in the administrative territorial (HF)
10,776,120
Wikibase item
6.13%
country (HF)
14,174,811
Wikibase item
0.12%
Total
-
-
10.09%
+
+T-REx-rc dataset15 and categorize them into three groups based on the Wiki Count and Wiki data type16: low-frequency (LF) relations, time-related (TR) relations, and high-frequency (HF) relations. From Table 2, we find that even if LF relations have roughly the same Wiki Count as TR relations, the temporal knowledge still cannot be integrated by K-Adapter. We speculate that this is because Transformer encoders do not capture information about time well (Dhingra et al., 2021; Zhou et al., 2021). When comparing LF relations and HF relations, we find that if relations have small Wiki Count, knowledge triples are easily captured.
+
+Table 3: Examples of triples in the T-REx-rc dataset (KI corpus used by K-Adapter) with attention coefficients.
+
+
Knowledge triple
Attention coefficient
(Adam Smith, place of birth, Kirkcaldy)
1.079 × 10-1
(Lake Huron, part of, Great Lakes)
1.742 × 10-1
(Jean-Jacques Rousseau, date of death, 02 July 1778)
1.729 × 10-25
(Barack Obama, date of birth, 04 August 1961)
6.827 × 10-31
(Mauna Kea Observatory, located in the administrative territorial, Hawaii)
6.044 × 10-2
(China, country, Mahalangur Himal)
1.250 × 10-3
+
+Randomly picked examples of knowledge triples are given in Table 3. We can find that for TR relations, they connect entities composed of numbers. The poor performance of language models in handling numbers (Wallace et al., 2019) provides an alternative explanation for the observation that K-Adapter does not integrate triples with TR relations. For triples with LF and HF relations, we find that some entities connected to HF relations are very common (e.g., entity "China" is in the complex KG structure) compared to those connected to LF relations. These results are consistent with our findings in Figure 8 that knowledge of popular entities is not integrated.
+
+# 6.2.3 Can we further improve the KI quality?
+
+Finally, we attempt to answer the question: can we simply improve the quality of $KI$ by increasing the amount of our aligned training corpora? Intuitively, repeatedly learning a knowledge triple with several aligned sentences could increase the possibility of successful integration of this knowledge.
+
+
+Figure 9: The correlation between the attention coefficient of the knowledge triple and its aligned sentence number in T-REx-rc dataset. We can find that there is no correlation between them, which means simply increasing the KI corpus could not be helpful for better KI quality.
+
+We answer this question by calculating correlation between the attention coefficients (i.e., success ratio of integration) for K-Adapter and number of aligned sentences (i.e., size of KI corpus) for knowledge triples in the T-REx-rc dataset. Surprisingly, we find that this Pearson correlation is $-0.0055$ (Figure 9). This shows that there is no apparent positive relationship between the KI quality and the size of the KI dataset. It suggests that simply increasing the size of the aligned dataset alone may not improve KI and we might need more fundamental advances to push the state-of-the-art in KI.
+
+# 7 Conclusion
+
+In this paper, through a series of theoretical results, we derived an information-theoretic probe that uses attention over knowledge graphs to interpret the knowledge integration process in LMs. In our experiments, we verified our probe model and used it to understand what knowledge has been integrated in two existing knowledge-enhanced language models, leading to some new findings about these models. We hope that our probe model would aid in better understanding and informed design of knowledge integration approaches for LMs. We have published the code and the demo to help users easily implement GCS for their own knowledge-enhanced LM interpretation.
+
+# Limitations
+
+There are some limitations of our work. We simplify the KG without considering relation information such as labels, since existing graph neural networks (e.g., R-GCN (Schlichtkrull et al., 2018)) still cannot handle such large number of imbalanced distributed relations. These can be considered by future works. Besides, GCS only provides a way to interpret the knowledge integration. Once we have an understanding of the it, improving the integration quality still remains challenging.
+
+# Reproducibility Statement
+
+We have published the code, the demo, and the interpretation results. The design of GCS can be found in §5. The implementation details about the knowledge integration for K-Adapter and ERNIE can be found in Appendix G.2, and details about GCS can be found in Appendix G.1.
+
+# Ethics Statement
+
+While our probe models are not tuned for any specific real-world application, our methods could be used in sensitive contexts such as legal or healthcare settings; and it is essential that any work that builds on our approaches undertake extensive quality-assurance and robustness testing before using it in their setting.
+
+# Acknowledgment
+
+We are grateful to the anonymous reviewers for their insightful comments and suggestions, which helped us significantly improve the paper. We also owe many thanks to Shehzaad Dhuliawala and Nico Daheim for their constructive advice on this paper. Yifan Hou is supported by the Swiss Data Science Center PhD Grant (P22-05). We also acknowledge support from an ETH Zurich Research grant (ETH-19 21-1) and a grant from the Swiss National Science Foundation (project #201009) for this work.
+
+# References
+
+Oshin Agarwal, Heming Ge, Siamak Shakeri, and Rami Al-Rfou. 2021. Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3554-3565, Online. Association for Computational Linguistics.
+
+Guillaume Alain and Yoshua Bengio. 2017. Understanding intermediate layers using linear classifier probes. In 5th International Conference on Learning Representations, ICLR 2017, Toulouse, France, April 24-26, 2017, Workshop Track Proceedings. OpenReview.net.
+Arindam Banerjee. 2006. On bayesian bounds. In ICML, volume 148 of ACM International Conference Proceeding Series, pages 81-88. ACM.
+Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeswar, Sherjil Ozair, Yoshua Bengio, R. Devon Hjelm, and Aaron C. Courville. 2018. Mutual information neural estimation. In ICML, volume 80 of Proceedings of Machine Learning Research, pages 530-539. PMLR.
+Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Neural Information Processing Systems, pages 2787-2795.
+Ronald Newbold Bracewell and Ronald N Bracewell. 1986. The Fourier transform and its applications, volume 31999. McGraw-Hill New York.
+Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. 2014. Spectral networks and locally connected networks on graphs. In ICLR.
+Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun, Lingyong Yan, Meng Liao, Tong Xue, and Jin Xu. 2021. Knowledgeable or educated guess? revisiting language models as knowledge bases. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume I: Long Papers), pages 1860-1874, Online. Association for Computational Linguistics.
+Zhengdao Chen, Soledad Villar, Lei Chen, and Joan Bruna. 2019. On the equivalence between graph isomorphism testing and function approximation with gnns. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 15868-15876.
+Eunsol Choi, Omer Levy, Yejin Choi, and Luke Zettlemoyer. 2018. Ultra-fine entity typing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 87-96, Melbourne, Australia. Association for Computational Linguistics.
+George Cybenko. 1992. Approximation by superpositions of a sigmoidal function. Math. Control. Signals Syst., 5(4):455.
+Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In Neural Information Processing Systems, pages 3837-3845.
+
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Bhuwan Dhingra, Jeremy R. Cole, Julian Martin Eisenschlos, Daniel Gillick, Jacob Eisenstein, and William W. Cohen. 2021. Time-aware language models as temporal knowledge bases. CoRR, abs/2106.15110.
+Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon Hare, Frederique Laforest, and Elena Simperl. 2018. T-REx: A large scale alignment of natural language with knowledge base triples. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
+Paolo Ferragina and Ugo Scaiella. 2010. TAGME: on-the-fly annotation of short text fragments (by wikipedia entities). In CIKM, pages 1625-1628. ACM.
+Guoji Fu, Yifan Hou, Jian Zhang, Kaili Ma, Barakeel Fanseu Kamhoua, and James Cheng. 2020. Understanding graph neural networks from graph signal denoising perspectives. CoRR, abs/2006.04386.
+John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China. Association for Computational Linguistics.
+John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota. Association for Computational Linguistics.
+Yifan Hou and Mrinmaya Sachan. 2021. Bird's eye: Probing for linguistic graph structures with a simple information-theoretic approach. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1844-1859, Online. Association for Computational Linguistics.
+Ganesh Jawahar, Benoit Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,
+
+pages 3651-3657, Florence, Italy. Association for Computational Linguistics.
+Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. 2021. How can we know when language models know? on the calibration of language models for question answering. Transactions of the Association for Computational Linguistics, 9:962-977.
+Prakhar Kaushik, Alex Gain, Adam Kortylewski, and Alan L. Yuille. 2021. Understanding catastrophic forgetting and remembering in continual learning with optimal relevance mapping. CoRR, abs/2102.11343.
+Nicolas Keriven and Gabriel Peyre. 2019. Universal invariant and equivariant graph neural networks. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 7090-7099.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In ICLR. OpenReview.net.
+James Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. 2016. Overcoming catastrophic forgetting in neural networks. CoRR, abs/1612.00796.
+Alexander Kraskov, Harald Stögbauer, and Peter Grassberger. 2004. Estimating mutual information. *Physical review E*, 69(6):066138.
+Bopeng Li, Sougata Chaudhuri, and Ambuj Tewari. 2016. Handling class imbalance in link prediction using learning to rank techniques. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 4226-4227. AAAI Press.
+Jiaoda Li, Ryan Cotterell, and Mrinmaya Sachan. 2022. Probing via prompting. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1144-1157, Seattle, United States. Association for Computational Linguistics.
+Xiao Ling, Sameer Singh, and Daniel S. Weld. 2015. Design challenges for entity linking. Transactions of the Association for Computational Linguistics, 3:315-328.
+Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Linguistic knowledge and transferability of contextual
+
+representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073-1094, Minneapolis, Minnesota. Association for Computational Linguistics.
+Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2020. K-BERT: enabling language representation with knowledge graph. In AAAI, pages 2901-2908. AAAI Press.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
+Aditya Krishna Menon and Charles Elkan. 2011. Link prediction via matrix factorization. In Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2011, Athens, Greece, September 5-9, 2011, Proceedings, Part II, volume 6912 of Lecture Notes in Computer Science, pages 437-452. Springer.
+Ilsang Ohn and Yongdai Kim. 2019. Smooth function approximation by deep neural networks with general activation functions. Entropy, 21(7):627.
+Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 43-54, Hong Kong, China. Association for Computational Linguistics.
+Fabio Petroni, Tim Rocttäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473, Hong Kong, China. Association for Computational Linguistics.
+Tiago Pimentel, Josef Valvoda, Rowan Hall Maudslay, Ran Zmigrod, Adina Williams, and Ryan Cotterell. 2020. Information-theoretic probing for linguistic structure. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4609-4622, Online. Association for Computational Linguistics.
+Marco Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 97-101, San Diego, California. Association for Computational Linguistics.
+
+Aliaksei Sandryhaila and José M. F. Moura. 2014. Discrete signal processing on graphs: Frequency analysis. IEEE Trans. Signal Process., 62(12):3042-3054.
+Michael Sejr Schlichtkrull, Nicola De Cao, and Ivan Titov. 2020. Interpreting graph neural networks for NLP with differentiable edge masking. CoRR, abs/2010.00577.
+Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In *The Semantic Web - 15th International Conference, ESWC* 2018, Heraklion, Crete, Greece, June 3-7, 2018, Proceedings, volume 10843 of Lecture Notes in Computer Science, pages 593-607. Springer.
+Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4222-4235, Online. Association for Computational Linguistics.
+Yun Tang, Jing Huang, Guangtao Wang, Xiaodong He, and Bowen Zhou. 2020. Orthogonal relation transforms with graph context modeling for knowledge graph embedding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2713-2722, Online. Association for Computational Linguistics.
+Kiran Koshy Thekumparampil, Chong Wang, Sewoong Oh, and Li-Jia Li. 2018. Attention-based graph neural network for semi-supervised learning. CoRR, abs/1803.03735.
+Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph attention networks. In ICLR. OpenReview.net.
+Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do NLP models know numbers? probing numeracy in embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5307-5315, Hong Kong, China. Association for Computational Linguistics.
+Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2021a. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1405-1418, Online. Association for Computational Linguistics.
+Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhengyan Zhang, Zhiyuan Liu, Juanzi Li, and Jian Tang. 2021b.
+
+KEPLER: A unified model for knowledge embedding and pre-trained language representation. Transactions of the Association for Computational Linguistics, 9:176-194.
+Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. KG-BERT: BERT for knowledge graph completion. CoRR, abs/1909.03193.
+Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced language representation with informative entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1441-1451, Florence, Italy. Association for Computational Linguistics.
+Bo Zheng, Haoyang Wen, Yaobo Liang, Nan Duan, Wanxiang Che, Daxin Jiang, Ming Zhou, and Ting Liu. 2020. Document modeling with graph attention networks for multi-grained machine reading comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6708-6718, Online. Association for Computational Linguistics.
+Zexuan Zhong, Dan Friedman, and Danqi Chen. 2021. Factual probing is [MASK]: Learning vs. learning to recall. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5017-5033, Online. Association for Computational Linguistics.
+Ben Zhou, Kyle Richardson, Qiang Ning, Tushar Khot, Ashish Sabharwal, and Dan Roth. 2021. Temporal reasoning on implicit events from distant supervision. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1361-1371, Online. Association for Computational Linguistics.
+
+# A Notations
+
+The notations can be found in the below table.
+
+# B Relation Distribution
+
+We present the features of relation label distribution using two real KGs used for integration in ERNIE and K-Adapter, and we briefly illustrate that existing probe methods cannot support the relation label well.
+
+
+
+
+Figure 10: The distribution of relations with respect to number in two KGs used in KI for ERNIE and K-Adapter.
+
+- The number of relations is large. The KG used in ERNIE has 594 distinct relations and the KG used in K-Adapter has 410 distinct relations. As for probes, if we differentiate them but use shared parameters such as the linear probe, the probe task is to predict whether the relation exist and if yes, which relation label it is. There is no wonder that using simple linear model cannot handle such difficult task with that large number of labels for classification. Regarding prompt probe, for each KG, the template for each relation should be manually created by human, and it is fairly costly.
+
+If we use different sets of parameters for different relations like in RGCN (Schlichtkrull et al., 2018). It is hard to implement hundreds
+
+or thousands of sets of parameters to analyze KI. For example, in our GCS model, we have to include attention mechanism for interpretation. The number of parameters for attention mechanism is hard to be scaled to $400 - 600$ times. Even if we can address this technical issue, using such as complex probe model for analysis is also problematic.
+
+- The distribution of relation number is highly imbalanced. As shown in Figure 10, we can see that the distribution (i.e., PDF) of relations is very imbalanced. For ERNIE, $10\%$ relations account for $93\%$ edges, and 5 relations (around $1\%$ ) account for $50\%$ edges. For K-Adapter, $10\%$ relations account for $78\%$ edges, and 5 relations (around $1\%$ ) account for $29\%$ edges. Simply treating relations in different ways in interpretation could also provide problematic results. For example, in the linear probe, the simple linear model could not handle such highly imbalanced labels for classification.
+
+# C Formal Definitions of GFT and Graph Convolutions
+
+Formal Definition of GFT. Specifically, given a KG denoted as $\mathcal{G}$ , let $\pmb{A} \in \mathbb{R}^{|\mathcal{V}| \times |\mathcal{V}|}$ be its symmetric adjacency matrix corresponding to $\mathcal{G}$ . Let $\pmb{L}_n = \pmb{I} - D^{-1/2} \pmb{A} \pmb{D}^{-1/2}$ denote the normalized Laplacian matrix for $\mathcal{G}$ , where $\pmb{D}$ denotes the degree matrix. We do the eigendecomposition for $\pmb{L}_n$ as $\pmb{L}_n = \pmb{U} \pmb{\Lambda} \pmb{U}^T$ , where $\pmb{U}$ is the matrix of eigenvectors ordered by eigenvalues and $\pmb{\Lambda} = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_N)$ is the diagonal matrix of eigenvalues. Consider the node feature matrix as $\pmb{X}$ . The GFT can be written as $\mathrm{GFT}(\pmb{X}) = \pmb{U}^T \pmb{X}$ , and the inverse GFT can be written as $\mathrm{RGFT}(\mathrm{GFT}(\pmb{X})) = \pmb{U} \mathrm{GFT}(\pmb{X}) = \pmb{U} \pmb{U}^T \pmb{X} = \pmb{X}$ .
+
+Formal Definition of Graph Convolutions. Graph convolutions (Bruna et al., 2014) can be implemented by filters (i.e., kernels) as $g_{\Theta}$ in the graph spectral domain (i.e., KG space). As the GFT of the convolution of $g_{\Theta}$ and $X$ is the pointwise product of their GFT (Bracewell and Bracewell, 1986), the convolution can be written as
+
+$$
+\operatorname {G C} (\boldsymbol {X}) = g _ {\Theta} \star \boldsymbol {X} = \operatorname {R G F T} (g _ {\Theta} \cdot \operatorname {G F T} (\boldsymbol {X})).
+$$
+
+Regarding graph filters, Velickovic et al. (2018) and Thekumparampil et al. (2018) introduce the attention mechanism to them, where the contribution of
+
+Table 4: Notations and their descriptions
+
+
Notation
Description
G
The knowledge graph for KI
V
The set of entities/nodes of KG
E
The set of edges of KG
vi
The entity/node indexed as i in the KG
ti
The entity label attached on vi
LM(·)
The language model, where the input is entity text, and the output is its representation
Nvi
The set of neighbors (entities/nodes) connected to vi
G(vi)
The local graph structure in terms of vi
x
The random variable of the entity representation
xi
The entity representations of vi
g
The random variable of the local graph structure
MI(·;·)
The mutual information between two random variables
A
The adjacency matrix of KG
|V|
The number of entities/nodes in KG
R
The set of real numbers
I
The identity matrix
D
The degree matrix of KG
Ln
The normalized Laplacian matrix
diag(·)
The diagonalization operation
U
The matrix of eigenvectors
Λ
The diagonal matrix of eigenvalues
λi
The i-th eigenvalue
X
The set of entity representations in terms of V
C or d
The dimension of entity representations; The number of channels
GFT(·)
The graph Fourier transformation
RGFT(·)
The inverse graph Fourier transformation
gΘ
The graph filter parameterized by parameter Θ
H
The entity representations given by a knowledge-enhanced LM
h
The random variable of the entity representation given by a knowledge-enhanced LM
f(·;·)
The mapping that can transform x to h with g
ε
The error of the approximation
sigmoid(x)
The Sigmoid function sigmoid(·) = 1/(1+e-α)
n
The number of layers of the neural network for approximation
W
The weight matrix
x
The input vector
b
The bias (in the weight matrix)
λ0'
The minimum eigenvalue of the weight matrix W
MLPb(·)
The bijective MLP function
GC(·)
The graph convolution function with respect to KG G
GCSθ1
The GCS model parameterized by θ1
L
The loss function (objective) of the optimization
Z
The output of GCS, i.e., set of output entity representations
z
The random variable of the output of GCS
sup
The supremum value
T
A class of functions
F
Any class of functions
Ω
The domain of a function
Tθ2
A class of functions parameterized by θ2, i.e., neural networks
P
The probability distribution
p|V|
The empirical distribution with |V| samples
NNσ(·|θ')
The neural network with activation function σ(·) and parameterized by θ'
|U|
The norm of matrix U
An
The normalized adjacency matrix
X̂
The ground-truth entity representations/node features
X*
The variable matrix
Tr(·)
The trace of a matrix
ε1, ε2
The error bound of entity representations/node features and adjacency matrix
γ
The Lagrangian multiplier
p(t)
The characteristic polynomial for weight matrix W
det(·)
The determinant of a matrix
+
+each edge to the convolution can be shown explicitly. Graph attention makes filters more powerful and convolutions more interpretable (Velickovic et al., 2018; Thekumparampil et al., 2018; Fu et al., 2020; Zheng et al., 2020).
+
+# D Proof of Theorem 4.2
+
+Proof. As aforementioned, the graph Fourier transformation $\mathrm{GFT}(\cdot)$ and its inverse transformation $\mathrm{RGFT}(\cdot)$ in terms of the KG $\mathcal{G}$ can be written as
+
+$$
+\begin{array}{l} \operatorname {G F T} (\boldsymbol {X}) = \boldsymbol {U} ^ {T} \boldsymbol {X} \\ \operatorname {R G F T} (\operatorname {G F T} (\boldsymbol {X})) = \boldsymbol {U} \operatorname {G F T} (\boldsymbol {X}) = \boldsymbol {U} \boldsymbol {U} ^ {T} \boldsymbol {X} = \boldsymbol {X}. \\ \end{array}
+$$
+
+The second equation can be derived since $\pmb{U}$ is the set of eigenvectors of the normalized Laplacian matrix in terms of $\mathcal{G}$ , which is orthogonal.
+
+According to the universal approximation theorem (Cybenko, 1992), in general, we can use one-layer neural networks (arbitrary width) with the sigmoid activation function to fit any functions. Ohn and Kim (2019) bound the approximation with both the width and depth, and supports more activation functions. Based on the conclusion of Ohn and Kim (2019), we know that given a mapping $g^{\prime}(\cdot)$ , for any $\epsilon' > 0$ , there exists a neural network parameterized by $\theta'$ s.t.
+
+$$
+\left| g ^ {\prime} (\cdot) - \mathrm {N N} _ {\sigma} (\cdot | \theta^ {\prime}) \right| < \epsilon^ {\prime}.
+$$
+
+Note that there are some constraints about the input and the model architecture, i.e., layer width. We leave out those details for simplicity since we only focus on the existence.
+
+Since $\mathbf{h}$ is obtained by integrating $\mathbf{g}$ into $\mathbf{x}$ , we can simplify the mapping in the graph spectral space by researching on the transformation from $\mathrm{GFT}(\mathbf{x})$ to $\mathrm{GFT}(\mathbf{h})$ . Assume the mapping satisfies $g^{\prime}(\mathrm{GFT}(\mathbf{x})) = \mathrm{GFT}(\mathbf{h})$ . Then we have
+
+$$
+\left| g ^ {\prime} (\operatorname {G F T} (\mathbf {x})) - \operatorname {N N} _ {\sigma} (\operatorname {G F T} (\mathbf {x}) | \theta^ {\prime}) \right| < \epsilon^ {\prime}.
+$$
+
+Consider that we have $f(\mathbf{x},\mathbf{g}) = \mathbf{h} =$ RGFT $(g^{\prime}(\mathrm{GFT}(\mathbf{x})))$ . If we assign $\epsilon^{\prime} = \frac{\epsilon}{|\pmb{U}|} >0$ we have
+
+$$
+| \boldsymbol {U} | \cdot | g ^ {\prime} (\operatorname {G F T} (\mathbf {x})) - \operatorname {N N} _ {\sigma} (\operatorname {G F T} (\mathbf {x}) | \theta^ {\prime}) | < \epsilon .
+$$
+
+Since we know that
+
+$$
+\boldsymbol {U} \cdot g ^ {\prime} (\operatorname {G F T} (\mathbf {x})) = \operatorname {R G F T} (g ^ {\prime} (\operatorname {G F T} (\mathbf {x}))) = \mathbf {h} = f (\mathbf {x}, \mathbf {g}),
+$$
+
+we have
+
+$$
+\begin{array}{l} | f (\mathbf {x}, \mathbf {g}) - \operatorname {R G F T} (\mathrm {N N} (\operatorname {G F T} (\mathbf {x}))) | \\ < | \boldsymbol {U} | \cdot | g ^ {\prime} (\operatorname {G F T} (\mathbf {x})) - \operatorname {N N} _ {\sigma} (\operatorname {G F T} (\mathbf {x}) | \theta^ {\prime}) | < \epsilon , \\ \end{array}
+$$
+
+where $\mathrm{NN}(\cdot)$ is parameterized by $\theta^{\prime}$ with activation function $\sigma$ as $\mathrm{NN}_{\sigma}(\cdot |\theta^{\prime})$ . And without loss of generality, we assume it is composed of $n$ layers.
+
+# E Proof of Theorem 4.3
+
+According to the invariance property of MI (Kriskov et al., 2004), the introduction of bijective functions does not introduce any new information - MI remains unchanged upon the introduction of bijective functions. We know that GFT and RGFT are both bijective (Appendix E.1). We show that nonlinear activation functions in a neural network (e.g., sigmoid( $\cdot$ ) are bijective as well (Appendix E.1). Thus, the MI change in the KI process can only happen in the linear function (Appendix E.2). Based on the convolution theorem (Bracewell and Bracewell, 1986), linear functions in graph spectral domain are graph convolution operations (Sandryhaila and Moura, 2014; Bruna et al., 2014; Kipf and Welling, 2017) (Appendix E.3). Consider that the graph attention can show how the information flow on graphs during the convolution (Zheng et al., 2020; Fu et al., 2020) (Appendix E.4). Thus, we can use graph convolutions in the transformation to interpret the KI process.
+
+Proof. We present the proof with 4 steps below.
+
+# E.1 Step 1
+
+$\mathrm{GFT}(\cdot), \mathrm{RGFT}(\cdot),$ and $\operatorname{sigmoid}(\cdot)$ are bijective. Given two entity representations $\pmb{x}_i, \pmb{x}_j$ and the matrix of eigenvectors of the KG as $\pmb{U}$ , suppose that $\mathrm{GFT}(\pmb{x}_i) = \mathrm{GFT}(\pmb{x}_j)$ . Then, we have
+
+$$
+\boldsymbol {U} ^ {T} \boldsymbol {x} _ {i} = \boldsymbol {U} ^ {T} \boldsymbol {x} _ {j}.
+$$
+
+Since $\pmb{U}^T$ are set of eigenvectors and are by definition nonzero, we have
+
+$$
+\boldsymbol {x} _ {i} = \boldsymbol {x} _ {j}.
+$$
+
+If $\pmb{x}_i = \pmb{x}_j$ , it is easy to get $\mathrm{GFT}(\pmb{x}_i) = \mathrm{GFT}(\pmb{x}_j)$ . Thus, graph Fourier transformation is bijective.
+
+As for the nonlinear activation function, since we consider neural networks composed of MLP layers, the activation function is sigmoid $(\cdot)$ function. It is easy to find that its inverse function is $f(\pmb{y}) = \ln(1 - \frac{1}{\pmb{y}})$ . Similarly, we can prove that it is bijective as well.
+
+# E.2 Step 2
+
+Information gain and loss can only happen in the linear function in the graph spectral domain. Without the loss of generality, we set the anchor random variable as g. Same result can be derived using any other random variables. Based on the invariance of MI (Kriskov et al., 2004), we have
+
+$$
+\operatorname {M I} (\mathbf {x}; \mathbf {g}) = \operatorname {M I} (\operatorname {G F T} (\mathbf {x}); \mathbf {g}),
+$$
+
+$$
+\operatorname {M I} (\mathbf {x}; \mathbf {g}) = \operatorname {M I} (\operatorname {R G F T} (\mathbf {x}); \mathbf {g}), \tag {2}
+$$
+
+$$
+\operatorname {M I} (\mathbf {x}; \mathbf {g}) = \operatorname {M I} (\text {s i g m o i d} (\mathbf {x}); \mathbf {g}).
+$$
+
+Since we know that
+
+$$
+\operatorname {M I} (\mathbf {h}; \mathbf {g}) - \operatorname {M I} (\mathbf {x}; \mathbf {g}) > 0,
+$$
+
+and the neural network can well approximate the mapping, we have
+
+$$
+\begin{array}{l} \operatorname {M I} (\mathbf {h}; \mathbf {g}) - \operatorname {M I} (\mathbf {x}; \mathbf {g}) \\ \cong \operatorname {M I} (\operatorname {R G F T} (\operatorname {N N} (\operatorname {G F T} (\mathbf {x})); \mathbf {g}) - \operatorname {M I} (\mathbf {x}; \mathbf {g}) \\ = \mathrm {M I} (\mathrm {N N} (\mathrm {G F T} (\mathbf {x})); \mathbf {g}) - \mathrm {M I} (\mathrm {G F T} (\mathbf {x}); \mathbf {g}) > 0. \\ \end{array}
+$$
+
+If we write $\mathrm{NN}(\cdot)$ with $n$ MLP layers as $n\times \sigma (\mathrm{Linear}(\cdot))$ , we have
+
+$$
+\begin{array}{l} \operatorname {M I} (\operatorname {N N} (\operatorname {G F T} (\mathbf {x})); \mathbf {g}) - \operatorname {M I} (\operatorname {G F T} (\mathbf {x}); \mathbf {g}) \\ = \operatorname {M I} (n \times \sigma (\operatorname {L i n e a r} (\operatorname {G F T} (\mathbf {x}); \mathbf {g}) - \operatorname {M I} (\operatorname {G F T} (\mathbf {x}); \mathbf {g}). \\ \end{array}
+$$
+
+Recursively with equations 2, it is easy to get that MI only changes in the Linear $(\cdot)$ functions. And if we can show that linear function in the graph spectral domain is the graph convolution operation, we can then easily get that
+
+$$
+\begin{array}{l} \operatorname {M I} (n \times \sigma (\operatorname {L i n e a r} (\operatorname {G F T} (\mathbf {x})); \mathbf {g}) \\ = \operatorname {M I} (n \times \operatorname {M L P} _ {b} (\operatorname {G C} (\operatorname {M L P} _ {b} (\mathbf {x}); \mathbf {g})). \\ \end{array}
+$$
+
+# E.3 Step 3
+
+The linear function in the graph spectral domain is the graph convolution operation. Even if many existing works (Sandryhaila and Moura, 2014; Bruna et al., 2014; Kipf and Welling, 2017) have provided clear descriptions, we simply re-illustrate it under the multi-channel setting. Consider the graph filter in Bruna et al. (2014) as an exemplar.
+
+For a linear function $f(\pmb{x}) = \pmb{W} \times \pmb{x}$ , its weight matrix $\pmb{W} \in \mathbb{R}^{F \times C}$ is parameterized by $\Theta \in \mathbb{R}^{F \times C}$ . If the parameters are not shared for all nodes, the input $\pmb{X} \in \mathbb{R}^{|\mathcal{V}| \times C}$ can be rescaled in $\mathbb{R}^{|\mathcal{V}| \times C \times 1}$ , and the weight matrix is $\pmb{W} \in \mathbb{R}^{|\mathcal{V}| \times F \times C}$ parameterized by $\Theta \in \mathbb{R}^{F \times C \times |\mathcal{V}|}$ . The output of this linear function is mapped in $\mathbb{R}^{|\mathcal{V}| \times F}$ .
+
+Consider the signal in graph convolution, i.e., all $\pmb{x}$ in $\mathbf{X} \in \mathbb{R}^{|\mathcal{V}| \times C}$ . Since parameters are not shared (Bruna et al., 2014), for one graph filter, the parameters in $g_{\Theta}$ is in $\mathbb{R}^{C \times |\mathcal{V}| \times |\mathcal{V}|}$ that is parameterized by $\Theta \in \mathbb{R}^{C \times |\mathcal{V}|}$ with simple diagonalization. If we have $F$ different graph filters for the convolution, $g_{\Theta}$ is in $\mathbb{R}^{F \times C \times |\mathcal{V}| \times |\mathcal{V}|}$ that is parameterized by $\Theta \in \mathbb{R}^{F \times C \times |\mathcal{V}|}$ . Here, the graph Fourier transformation of $\mathbf{X}$ is $\mathrm{GFT}(\mathbf{X}) \in \mathbb{R}^{|\mathcal{V}| \times C}$ , which can be rescaled in $\mathbb{R}^{1 \times |\mathcal{V}| \times C \times 1}$ with simple diagonalization. The output is in $\mathbb{R}^{F \times |\mathcal{V}| \times |\mathcal{V}| \times 1}$ . Note that since the parameters in the graph filter is diagonalized, we can rescale the output in $\mathbb{R}^{|\mathcal{V}| \times F}$ .
+
+If we regard the weight matrix $\mathbf{W}$ as the parameters in the graph filter $g_{\Theta}$ , the input matrix $\mathbf{X}$ as the signal, obviously, the linear function in the graph spectral space is the graph convolution operation.
+
+# E.4 *Step 4
+
+Graph attention can show how information flows on the graph. Graph attention works as denoising information from neighbors, since it can adaptively learn the optimal weights (i.e., attention coefficients) for different neighbors. If the node features of a neighbor contain much useless information for the center node, the learned weight should be small to denoise that information. It can show how information (i.e., node features) flows among nodes on the KG structure.
+
+Consider a graph signal denoising problem that we aim to extract the ground-truth node features $\hat{\pmb{X}}$ and edge weights $\hat{A}_n$ from a graph $\mathcal{G} = (\mathcal{V},\mathcal{E},A_n)$ with noise in both node features $\pmb{X}$ and edge weights $A_{n}$ . Here, $A_{n}$ is the normalized adjacency matrix $A_{n} = D^{-1 / 2}AD^{-1 / 2}$ . To this end, we formulate the optimization problem under the assumption that the ground-truth node features $\hat{\pmb{X}}$ are smooth w.r.t the ground-truth adjacency matrix $\hat{A}_n$ and the noise in the graph can be upper-bounded:
+
+$$
+\begin{array}{l} \hat {\boldsymbol {X}} ^ {*}, \hat {\boldsymbol {A}} _ {n} ^ {*} = \operatorname * {a r g m i n} _ {\hat {\boldsymbol {X}}, \hat {\boldsymbol {A}} _ {n}} \operatorname {T r} \left(\hat {\boldsymbol {X}} \hat {\boldsymbol {L}} _ {n} ^ {T} \hat {\boldsymbol {X}}\right) \\ \text {s . t .} \| \hat {\boldsymbol {X}} - \boldsymbol {X} \| _ {2} ^ {2} \leq \epsilon_ {1}, \tag {3} \\ \left\| \hat {\boldsymbol {A}} _ {n} - \boldsymbol {A} _ {n} \right\| _ {2} ^ {2} \leq \epsilon_ {2}, \\ \end{array}
+$$
+
+where $\hat{\pmb{L}} = \pmb {I} - \hat{\pmb{A}},\epsilon_1,\epsilon_2\in \mathbb{R},$ are the level of noise in node features and edge weights, respectively. $\mathrm{Tr}(\cdot)$ indicates the trace of a matrix. By Lagrange multipliers methods, we can obtain the solution as following:
+
+$$
+\hat {\boldsymbol {X}} ^ {*} = \frac {\gamma}{1 + \gamma} \left(\boldsymbol {I} - \frac {1}{1 + \gamma} \hat {\boldsymbol {A}} _ {n} ^ {*}\right), \tag {4}
+$$
+
+$$
+\hat {A} _ {n} ^ {*} = A _ {n} + \sqrt {\epsilon_ {2}} \frac {\hat {X} ^ {*} \hat {X} ^ {* \top}}{\| \hat {X} \| _ {2} ^ {2}}, \tag {5}
+$$
+
+where $\gamma > 0$ is the Lagrangian multiplier. Note that the attention coefficients of GAT (Velickovic et al., 2018) and AGNN (Thekumparampil et al., 2018) are obtained by (without less of generality, we show the results in the first-layer) equation 6 and equation 7, respectively:
+
+$$
+a _ {i, j} = \operatorname {s o f t m a x} \left(\mathrm {L R e L U} \left(\mathbf {a} ^ {\top} \left[ \boldsymbol {W} \boldsymbol {X} _ {i} \| \boldsymbol {W} \boldsymbol {X} _ {j} \right]\right) _ {j \in \mathcal {N} _ {i} \cup \{i \}}\right), \tag {6}
+$$
+
+$$
+a _ {i, j} = \operatorname {s o f t m a x} \left(\left[ \beta \frac {\boldsymbol {H} _ {i} ^ {\top} \boldsymbol {H} _ {j}}{\| \boldsymbol {H} _ {i} \| \| \boldsymbol {H} _ {j} \|} \right] _ {j \in \mathcal {N} _ {i} \cup \{i \}}\right), \tag {7}
+$$
+
+where LReLU is the leakyReLU; $H = \operatorname{ReLU}(XW)$ , a, $W$ in equation 6, and $\beta$ , $W$ in equation 7 are learnable parameters. The attention coefficients of GAT and AGNN are then used as the weights of aggregating the neighborhood information of nodes. As we can see that equation 5, equation 6, and equation 7 are in a form of measuring the similarity between paired node features. Similar to the denoised edge weights obtained in equation 5, the attention coefficients (i.e. the aggregation weights) between a node and its neighborhoods are proportional to the similarity of their node embeddings. Therefore, the attention coefficients of GAT and AGNN can be regarded as the results of denoised weights on the existing edges in a graph, i.e., the graph attentions are implicitly denoising the edge weights.
+
+In general case, graph attention functions as denoising edge weights. The input is noisy representations and the output is the groundtruth. Attention coefficients show how much distortion is corrected during the convolution operation. For example, if the input representations are also groundtruth, there is no need to fetch information from neighbors to get output. And edge weights will be reduced to 0, i.e., attention coefficients on edges are calculated as 0. If the input representations are very noisy, i.e., much noise are removed, attention coefficients on edges should be large to restore the groundtruth signal. Therefore, in the KI scenario, we can use attention coefficients in graph attention in graph convolution layer to interpret the KI process based on the information flow. As for the CR and CF, equally, we can use the attention coefficients on the self-loop edges for interpretation, such as how much original information is remembered/forgotten.
+
+
+
+# F Bijective MLP
+
+Theorem F.1. Give an MLP layer denoted as $\mathrm{MLP}(\pmb{x}) = \mathrm{sigmoid}(\pmb{W}\pmb{x} + \pmb{b})$ . If $\pmb{W}$ is a square matrix, there exist a constant $\lambda_0' > 0$ that for any $0 < \epsilon < \lambda_0'$ , the function below is bijective:
+
+$$
+\operatorname {M L P} _ {n} (\boldsymbol {x}) = \text {s i g m o i d} ((\boldsymbol {W} - \epsilon \boldsymbol {I}) \boldsymbol {x} + \boldsymbol {b}). \tag {8}
+$$
+
+Proof. We first prove that two bijective function compositions are still bijective. Then, we prove that adding a small noise on MLP weight matrix can make it bijective.
+
+Give two MLP function $f_{1}(\cdot)$ and $f_{2}(\cdot)$ . Suppose they are injective and suppose $f_{1}(f_{2}(\pmb{x})) = f_{1}(f_{2}(\pmb{y}))$ . Since we know that $f_{1}(\cdot)$ is injective, we have $f_{2}(\pmb{x}) = f_{2}(\pmb{y})$ . Similarly, since $f_{2}(\cdot)$ is injective, we have $\pmb{x} = \pmb{y}$ . Thus $f_{1}(f_{2}(\cdot))$ is injective. Suppose $f_{1}(\cdot)$ and $f_{2}(\cdot)$ are surjective and $z \in C$ . Since we know that $f_{1}(\cdot)$ is surjective, there exists a set of $\pmb{y} \in B$ with $f_{1}(\pmb{y}) = z$ . Similarly, since $f_{2}(\cdot)$ is surjective, there exists a set of $\pmb{x} \in A$ with $f_{2}(\pmb{x}) = \pmb{y}$ . Then, we have $z = f_{1}(f_{2}(\pmb{x}))$ and so $z$ is onto $f_{1}(f_{2}(\cdot))$ . Thus, $f_{1}(f_{2}(\cdot))$ is surjective. Therefore, if $f_{1}(\cdot)$ and $f_{2}(\cdot)$ are bijective, $f_{1}(f_{2}(\cdot))$ is also bijective.
+
+To prove that the special MLP is bijective, consider an MLP function as
+
+$$
+\operatorname {M L P} (\boldsymbol {x}) = \sigma (\boldsymbol {W} \boldsymbol {x} + \boldsymbol {b}),
+$$
+
+where $\mathbf{W} \in \mathbb{R}^{C \times C}$ is the weight matrix and $\mathbf{b} \in \mathbb{R}^C$ is the bias. Let
+
+$$
+p (t) = \prod_ {i = 1} ^ {C} \left(\lambda_ {i} ^ {\prime} - t\right)
+$$
+
+be the characteristic polynomial for weight matrix $\mathbf{W}$ . Here $\lambda_{i}^{\prime}$ are eigenvalues of matrix $\mathbf{W}$ . Without loss of generality, let $|\lambda_0^{\prime}| = \min_i|\lambda_i^{\prime}|$ . Then, we know that for any constant $0 < \epsilon < |\lambda_0^{\prime}|$ , we have
+
+$$
+\det (\boldsymbol {W} - \epsilon \boldsymbol {I}) = p (\epsilon) \neq 0.
+$$
+
+Thus, if the perturbation $\epsilon$ is small enough, the perturbed matrix $W^{\prime} = W - \epsilon I$ is nonsingular. Consider the fact that the nonlinear activation function $\sigma (\cdot)$ is sigmoid $(\cdot)$ function, which is bijective. Therefore, the special MLP function $\mathrm{MLP}_n(\cdot)$ is bijective.
+
+Note that in practice, we use the floating-point arithmetic. Consider the float accuracy. Small errors from the float approximation can be regarded
+
+as the constant $\epsilon$ , and in most cases, it satisfies the assumption $0 < \epsilon < |\lambda_0^{\prime}|$ . Thus, we can regard MLPs with square weight matrices in practice as bijective functions.
+
+
+
+# G Implementation Details
+
+# G.1 GCS
+
+In practice, GCS is composed of 3 layers: bijective MLP layer, graph convolutional layer, and another bijective MLP layer. As for bijective MLP layers, since weight matrices in them are square matrices, the dimension would remain unchanged: 768 for ERNIE and 1024 for K-Adapter. The nonlinear activation functions are set as $\mathrm{ELU}(\cdot)$ function, which is also bijective. The learning rate is set as $1e^{-3}$ , and the dropout rate of the two MLP layers is 0.2.
+
+Regarding the graph attention, to make sure interpretation results are stable, we apply multi-head attention mechanism, where the number of attention head is set as 8. Entity representations are first embedded into a space with the dimension as 64. Then, the embedded representations are used to calculate the attention coefficients. Note that since the purpose is to simulate and interpret the KI process, we do not split datasets for KI. Considering that GCS model is very simple for large KGs, overfitting is unlikely to happen. Thus, we optimize GCS for the whole datasets. Specifically, for K-Adapter, the whole KG is used for optimization, and results are used for interpretation. And for ERNIE, since the KG is very large, we sample a small subgraph with 1,344,393 entities and 3,240,272 triples for optimization (see Table 5), and then implement the optimized GCS on the whole KG for interpretation.
+
+The objective function of optimizing GCS can be reconstruction loss minimization or MI maximization. In this paper, we all select MI maximization as the objective. Note that users can use other objectives such as the reconstruction loss minimization. Regarding the MI maximization, we optimize MI (equation 1) by maximizing the compression lemma lower bound (Banerjee, 2006) as in Belghazi et al. (2018). The inputs of GCS are $X$ , and let the output be denoted by $Z$ . We can regard $Z$ and $H$ as empirical samples of random variables $z$ and $h$ . Thus, we have:
+
+$$
+\operatorname {M I} (\mathbf {z}; \mathbf {h}) \geq \sup _ {T \in \mathcal {F}} \mathbb {E} _ {\mathbb {P} _ {\mathbf {z h}}} [ T ] - \log \left(\mathbb {E} _ {\mathbb {P} _ {\mathbf {z}} \otimes \mathbb {P} _ {\mathbf {h}}} \left[ e ^ {T} \right]\right). \tag {9}
+$$
+
+Here, $\mathcal{F}$ can be any class of functions $T:\Omega \to \mathbb{R}$
+
+satisfying certain integrability constraints (Belghazi et al., 2018). $\mathbb{P}_{\mathbf{zh}}$ represents the joint distribution of $\mathbf{z}$ and $\mathbf{h}$ , and $\mathbb{P}_{\mathbf{z}} \otimes \mathbb{P}_{\mathbf{h}}$ represents the product of their marginal distributions. In practice, we let $\mathcal{F} = \{T_{\theta_2}\}$ be the set of functions parameterized by a neural network, and optimize it by stochastic gradient descent. Then, the objective function can be rephrased as
+
+$$
+\max _ {\theta_ {1}, \theta_ {2}} \left(\mathbb {E} _ {\mathbb {P} _ {\mathbf {z}, \mathbf {h}} ^ {| \nu |} [ T _ {\theta_ {2}} ]} - \log \left(\mathbb {E} _ {\mathbb {P} _ {\mathbf {z}} ^ {| \nu |} \otimes \mathbb {P} _ {\mathbf {h}} ^ {| \nu |} [ e ^ {T _ {\theta_ {2}}} ]}\right)\right), \tag {10}
+$$
+
+$$
+w h e r e \mathbf {z} = \operatorname {G C S} _ {\theta_ {1}} (\mathbf {x}).
+$$
+
+In equation 10, $\mathbb{P}_{\mathbf{z}}^{|\mathcal{V}|}$ represents the empirical distribution of $\mathbf{z}$ , i.e., $Z$ . If the KG is very large, we can optimize the network by sampling a small subgraph of the KG. In practice, we simply add two extra MLPs layers to GCS for MI maximization as (Belghazi et al., 2018). The added two MLP layers may not be bijective, where the dimension would be first reduced to 64, then to 1 for MI maximization. The nonlinear activation functions are all set as $\mathrm{ELU}(\cdot)$ function, which is also bijective.
+
+For interpretation, we use the attention coefficients on edges and self-loops to analyze the KI in terms of triples and entities. Different from Schlichtkrull et al. (2020) that specially designs a discrete function to mask edges that are not important, we simply introduce a temperature hyperparameter $t$ and set it as $t = 0.1$ to make the attention coefficient distribution hard. $^{18}$ Thus, knowledge can be well clustered into learned and unlearned.
+
+# G.2 ERNIE and K-Adapter
+
+KI. To ensure that the experiment settings are fair, we set hyperparameters as the default values. For K-Adapter, the code and hyperparameters for KI that we use are from the official projects19 published by the authors (Wang et al., 2021a). The only two differences are that: we use PyTorch float 32 instead of float 16 since BERT and RoBERTa that we use are float32, and we use 4 NVIDIA Tesla V100 GPUs for KI training. For ERNIE, settings are the same. All hyperparameters for KI are set as their default values.20 Similarly, float 16 of PyTorch is changed to float 32, and we do the integration with 4 NVIDIA Tesla V100 GPUs.
+
+Note that the dataset that ERNIE used for KI is Wikipedia, since the code is to fetch latest version of it, the data that we use could be slightly different. Therefore, for both ERNIE and K-Adapter, to ensure the fairness, we reproduce their KI, and report the results of reproduced models instead of results provided in their papers.
+
+Finetuning. As for the downstream tasks, all the hyperparameters are consistent with the official project: either they are given in the project or in the README. In the same way, float 32 and 4 NVIDIA Tesla V100 GPUs are chosen to make sure that the comparison is fair. Note that for K-Adapter and ERNIE, the best performance for different datasets is achieved in different settings. For example, the best performance for K-Adapter on the OpenEntity dataset is achieved with single GPU, but on the FIGER dataset is achieved with four GPUs. Since we focus on the relative performance and the fairness of the comparison, we run finetuning on 4 NVIDIA Tesla V100 GPUs for all downstream tasks and all LMs (as well as BERT and RoBERTa).
+
+# H Additional Statistics
+
+Table 5: Statistics of T-REx-rc and Wikidata. The datasets that K-Adapter and ERNIE use are T-REx-rc and Wikidata.
+
+
Statistics Datasets
# of entities
# of triples
# of aligned sentences
# of entities (optimization)
# of triples (optimization)
T-REx-rc
781,275
1,282,504
5,565,478
-
-
Wikidata
3,275,534
12,849,311
-
1,344,393
3,240,272
+
+Table 6: Drop statistics for the Integration Experiment.
+
+
Statistics Datasets
Percentage of integrated entities
Percentage of integrated triples
# of aligned sentences/entity embeddings (integrated knowledge)
T-REx-rc
-
28.86%
561,687 out of 5,565,478
Wikidata
61.72%
-
2,240,260 out of 3,275,534
+
+Table 7: Performance change of K-Adapter and ERNIE on the OpenEntity dataset with different test sets.
+
+
Model (Test set)
OpenEntity
Left test set
P
R
ΔF1-Micro
K-Adapter (w/o IE)
37.44%
-0.33
-0.37
-0.35
K-Adapter (w/o UE)
64.46%
-0.18
+1.12
+0.47
ERNIE (w/o IE)
27.28%
-18.20
-25.14
-22.67
ERNIE (w/o UE)
66.87%
-0.31
+3.08
+1.57
+
+
+
+
+
+
+Figure 11: The attention coefficient distributions of edges and self-loops for K-Adapter and ERNIE. The histogram shows the empirical distributions (i.e., frequency), and the blue curves are the Gaussian kernel density estimate. The black dashed vertical lines indicate the average values.
+
+
+
+
+Figure 12: An example of different relations. The black solid lines are relations. We can find that nodes connect to multiple neighbors (e.g., "Logic") are more common than those leaf node (e.g., "OR gate").
+
+
+
+Table 8: Analysis of KI interpretation results for K-Adapter and ERNIE in terms of different types of relations (topology feature). The percentages of integrated entities/triples, as well as of CR and CF entities for each type of relations are presented.
+
+
Model
+Statistics
K-Adapter on T-REx-rc
1 - 1 relation
N - 1 relation
N - M relation
Total
# of triples
21,690
813,674
1,729,644
2,565,008
Integrated triple percentage
58.89%
38.39%
24.00%
28.86%
# of connected entities
21,690
406,837
352,748
781,275
CR entity percentage
41.11%
31.72%
26.02%
29.41%
CF entity percentage
26.40%
30.29%
40.89%
34.97%
Model
+Statistics
ERNIE on Wikidata
1 - 1 relation
N - 1 relation
N - M relation
Total
# of connected entities
1,799
529,186
2,744,549
3,275,534
Integrated entity percentage
70.65%
42.86%
73.33%
68.39%
CR entity percentage
29.41%
56.07%
26.67%
38.28%
CF entity percentage
23.18%
8.65%
37.10%
32.49%
+
+Table 9: The number of aligned sentences for relations.
+
+
Statistics
+Relation label
# of triples
Place of birth
134,976
Part of
134,999
Date of death
135,190
Date of birth
135,169
Located in the administrative territorial entity
135,055
Country
135,147
Total
5,565,478
\ No newline at end of file
diff --git a/whathasbeenenhancedinmyknowledgeenhancedlanguagemodel/images.zip b/whathasbeenenhancedinmyknowledgeenhancedlanguagemodel/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4c55ff192c657e44ae04105a2ca7cdae3e9b1947
--- /dev/null
+++ b/whathasbeenenhancedinmyknowledgeenhancedlanguagemodel/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1b1ca67119f89b42cdd08fa3ba93ac25a48c2f0a689ebeda3f4b091062cb827e
+size 1052050
diff --git a/whathasbeenenhancedinmyknowledgeenhancedlanguagemodel/layout.json b/whathasbeenenhancedinmyknowledgeenhancedlanguagemodel/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ee0e33a4967b1b6434fdb2f5e71d53593ef7f382
--- /dev/null
+++ b/whathasbeenenhancedinmyknowledgeenhancedlanguagemodel/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:809949fa3a079e6d5d1a8a52d78e75058f383e93763040ee4895cc551b8da351
+size 875293
diff --git a/whatlanguagemodeltotrainifyouhaveonemilliongpuhours/7c9f2891-7d86-4f40-a134-24ddd710cba9_content_list.json b/whatlanguagemodeltotrainifyouhaveonemilliongpuhours/7c9f2891-7d86-4f40-a134-24ddd710cba9_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..2b4ae3cf436a39cb58267f6cd0cafb60a6caa0f9
--- /dev/null
+++ b/whatlanguagemodeltotrainifyouhaveonemilliongpuhours/7c9f2891-7d86-4f40-a134-24ddd710cba9_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:70d2287715429ae5ef62582b0986787adae124d14cded233200e13a8fa3440c1
+size 111162
diff --git a/whatlanguagemodeltotrainifyouhaveonemilliongpuhours/7c9f2891-7d86-4f40-a134-24ddd710cba9_model.json b/whatlanguagemodeltotrainifyouhaveonemilliongpuhours/7c9f2891-7d86-4f40-a134-24ddd710cba9_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..dba29f303a0f8eed881d5aec7b6701e13f0a162b
--- /dev/null
+++ b/whatlanguagemodeltotrainifyouhaveonemilliongpuhours/7c9f2891-7d86-4f40-a134-24ddd710cba9_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3f97dece4374adc94eb95d5901a4c22ad1ce45d5ef441c8080fd6abcd0ffec71
+size 136910
diff --git a/whatlanguagemodeltotrainifyouhaveonemilliongpuhours/7c9f2891-7d86-4f40-a134-24ddd710cba9_origin.pdf b/whatlanguagemodeltotrainifyouhaveonemilliongpuhours/7c9f2891-7d86-4f40-a134-24ddd710cba9_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c626aca6a95a2882e658682cdce61a7ee999bcaf
--- /dev/null
+++ b/whatlanguagemodeltotrainifyouhaveonemilliongpuhours/7c9f2891-7d86-4f40-a134-24ddd710cba9_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e81b7d56452fffb632ea69c599883e0e850cc09bdcc45aface9dfd43d7b5b726
+size 614555
diff --git a/whatlanguagemodeltotrainifyouhaveonemilliongpuhours/full.md b/whatlanguagemodeltotrainifyouhaveonemilliongpuhours/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b4da4c2e10e081e2e2320af223c52bb1cfde482f
--- /dev/null
+++ b/whatlanguagemodeltotrainifyouhaveonemilliongpuhours/full.md
@@ -0,0 +1,353 @@
+# What Language Model to Train if You Have One Million GPU Hours?
+
+# The BigScience Architecture & Scaling Group
+
+Teven Le Scao $^{1*}$ Thomas Wang $^{1*}$ Daniel Hesslow $^{2*}$ Lucile Saulnier $^{1*}$ Stas Bekman $^{1*}$
+
+M Saiful Bari3 Stella Biderman4,5 Hady Elsahar6 Niklas Muennighoff1 Jason Phang5 Ofir Press8
+
+Colin Raffel1 Victor Sanh1 Sheng Shen9 Lintang Sutawika10 Jaesung Tae1 Zheng Xin Yong11
+
+Julien Launay $^{2,12\dagger}$ Iz Beltagy $^{13\dagger}$
+
+$^{1}$ Hugging Face $^{2}$ LightOn $^{3}$ NTU, Singapore $^{4}$ Booz Allen $^{5}$ EleutherAI $^{6}$ Naver Labs Europe $^{7}$ New York University
+
+$^{8}$ University of Washington $^{9}$ Berkeley University $^{10}$ Big Science $^{11}$ Brown University $^{12}$ LPENS $^{13}$ Allen Institute for AI
+
+# Abstract
+
+The crystallization of modeling methods around the Transformer architecture has been a boon for practitioners. Simple, well-motivated architectural variations can transfer across tasks and scale, increasing the impact of modeling research. However, with the emergence of state-of-the-art $100\mathrm{B}+$ parameters models, large language models are increasingly expensive to accurately design and train. Notably, it can be difficult to evaluate how modeling decisions may impact emergent capabilities, given that these capabilities arise mainly from sheer scale alone. In the process of building BLOOM—the Big Science Large Open-science Open-access Multilingual language model—our goal is to identify an architecture and training setup that makes the best use of our 1,000,000 A100-GPU-hours budget. Specifically, we perform an ablation study at the billion-parameter scale comparing different modeling practices and their impact on zero-shot generalization. In addition, we study the impact of various popular pretraining corpora on zero-shot generalization. We also study the performance of a multilingual model and how it compares to the English-only one. Finally, we consider the scaling behaviour of Transformers to choose the target model size, shape, and training setup. All our models and code are open-sourced at https://huggingface.co/bigscience.
+
+# 1 Introduction
+
+Recent years have seen the advent of large language models characterized by emergent capabilities (e.g., zero-shot generalization) arising from sheer scale alone (Radford et al., 2019; Brown et al., 2020). Scaling LLMs results in a predictable increase in performance: simple scaling laws connect the number of parameters, pretraining dataset size, and compute budget (Kaplan et al., 2020; Ganguli et al., 2022; Hoffmann et al., 2022), providing a clear
+
+
+Figure 1: Smooth scaling of language modeling loss as compute budget and model size increase. We observe a power-law coefficient $\alpha_{C} \sim 0.046$ , in-line with Kaplan et al. (2020). We use this fit to estimate the optimal size and number of tokens to train on for the final model given the available budget.
+
+path towards more capable models. This paradigm shift has been fueled by the wide adoption of the Transformer (Vaswani et al., 2017), providing a scalable basis for practitioners to build upon.
+
+In this paper, we design an architecture and training setup for a multilingual 100B+ parameters model (BLOOM, BigScience Workshop (2022)), seeking to best use a fixed 1,000,000 A100-hours budget. Because of the costs involved with training large language models, we cannot exhaustively explore the landscape of possible models. Instead, we position ourselves as practitioners exploring "off-the-shelf" solutions. We thus test promising additions to the Transformer to attempt to reproduce their findings in a controlled, large-scale setting.
+
+Although our main goal was to prepare the architecture and training setup of BLOOM, our findings are also valuable for practitioners building models in the 1-10B range, as they equally improve the performance of such smaller models. At variance with major works on large language models, we also make a significant effort towards reproducibility
+
+and openness: all of our pretrained models, code, and notes from our weekly meetings are made available. See Appendix A for the relevant links.
+
+Contributions. We first study the impact of pretraining corpora, positional embeddings, activation functions, and embedding norm on zero-shot generalization. We base our study on the popular GPT-2 architecture (Radford et al., 2019), with experiments at the 1.3B parameters scale. We then consider the impact of massive multilinguality, showing language-specific scaling laws in a multilingual setting for the first time. Finally, we describe our approach to drafting an architecture for the final 176B parameters BLOOM model.
+
+# 2 Methods
+
+We first justify our choice to base our model on the popular recipe of combining a decoder-only model with an autoregressive language modeling objective, and introduce our experimental setup. We then discuss our evaluation benchmarks, and motivate our choice of zero-shot generalization as our key metric. Finally, we introduce the baselines we compare to throughout the paper.
+
+# 2.1 Architecture and Pretraining Objective
+
+In this paper, we base all models on a decoder-only Transformer pretrained with an autoregressive language modeling objective. This is a popular choice for large language models (Brown et al., 2020; Rae et al., 2021; Thoppilan et al., 2022), possibly because it lends itself to zero-shot application to many downstream tasks (Radford et al., 2019). Alternatives include encoder-decoder models trained with a span-corruption objective (e.g., T5 Raffel et al. (2019)), as well as non-causal decoders models with visibility over a prefix (so-called Prefix LMs, Liu et al. (2018); Dong et al. (2019)).
+
+Our decision is motivated by the findings of Wang et al. (2022), which showed that decoder-only models combined with an autoregressive language modeling objective provide the best zero-shot generalization abilities immediately after pretraining. Although multitask finetuning (Sanh et al., 2021; Wei et al., 2021) will instead favor an encoder-decoder with span corruption for best zero-shot generalization, Wang et al. (2022) found a compromise between these two practices. Following autoregressive pretraining, decoder-only models can be efficiently adapted into non-causal decoders, simply by extending pretraining with
+
+span corruption. This adaptation produces a second model, which can provide excellent zero-shot generalization after multitask finetuning. Accordingly, we follow their recommendation, and train an autoregressive decoder-only model first which we will later consider adapting and finetuning.
+
+# 2.2 Experimental Setup
+
+We follow the architectures GPT-2 (Radford et al., 2019) and the hyperparameters of GPT-3 (Brown et al., 2020). For learning rate, we use a maximum value of $2 \times 10^{-4}$ , with a linear warm-up over 375M tokens, followed by cosine decay to a minimum value of $1 \times 10^{-5}$ . We use a 1M tokens batch size, with linear ramp-up over the first 4B tokens, and a sequence length of 2,048. We use the Adam optimizer (Kingma and Ba, 2014), with $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ , $\epsilon = 1 \times 10^{-8}$ , weight decay 0.1, and gradient clipping to 1.0. We also tie the word embedding and softmax matrix (Press and Wolf, 2017). Unless noted otherwise, we conduct our experiments with 1.3B parameters models, pretraining on 112B tokens.
+
+We picked this size and dataset size as a compromise between compute cost and the likelihood that our conclusions would transfer to the target 100B+ model. Notably, we needed to be able to reliably measure zero-shot generalization above random chance. We note that training for 112B tokens 1.3B parameters models bring them significantly above the optimality threshold of Kaplan et al. (2020), and of Hoffmann et al. (2022).
+
+The main architectural difference with GPT-3 is that all our layers use full attention, while GPT-3 uses alternating sparse attention layers (Child et al., 2019). The main value of sparse attention layers is to save compute with long sequence lengths. However, at the $100\mathrm{B}+$ scale, sparse attention layers provide negligible compute savings, as the vast majority of the compute is spent on the large feedforward layers. Kaplan et al. (2020) estimated the amount of compute per token to be:
+
+$$
+C _ {\text {f o r w a r d}} = 2 \times \left(1 2 n _ {\text {l a y e r}} d ^ {2} + n _ {\text {l a y e r}} n _ {\text {c t x}} d\right),
+$$
+
+where $C_{\mathrm{forward}}$ is the cost for the forward pass, $n_{\mathrm{layer}}$ is the number of layers, $d$ is the hidden dimension, and $n_{\mathrm{ctx}}$ is the sequence length. This means if $12d >> n_{\mathrm{ctx}}$ , the second $n_{\mathrm{layer}}n_{\mathrm{ctx}}d$ term is negligible, which is the case for our final model where $d > 10,000$ and $n_{\mathrm{ctx}} = 2048$ .
+
+
Model
Parameters
Pretraining tokens
Dataset
112B
250B
300B
OpenAI — Curie
6.7B
49.28
OpenAI — Babbage
1.3B
45.30
EleutherAI — GPT-Neo
1.3B
The Pile
42.94
Ours
13B
OSCAR v1
47.09
Ours
1.3B
The Pile
42.79
43.12
43.46
1.3B
C4
42.77
1.3B
OSCAR v1
41.72
+
+Table 1: Pretraining datasets with diverse cross-domain high-quality data improves zero-shot generalization. Average accuracy on EAI harness (higher is better) using different pretraining corpora and comparison with baseline models. Bold is best 1.3B model for amount of tokens seen, underline is best overall.
+
+What is a FLOP exactly? We report throughput per GPU in FLOPS and total budgets in PF-days (i.e. one PFLOPS sustained for a day). It is important to highlight that FLOPS are never directly measured, but always estimated, with widely different practices across papers. We refer to model FLOP the estimates based on the $C = 6ND$ formula from Kaplan et al. (2020), where $C$ is the total compute, $N$ the model size, and $D$ the number of tokens processed. These are the FLOP actually used to train the model, and which are used for scaling laws. We refer to hardware FLOP the estimates reported by our codebase, using the formula from Narayanan et al. (2021). This notably includes gradient checkpointing, which trades additional computations for reduced memory needs, and a more thorough accounting of operations.
+
+# 2.3 Evaluation Benchmarks
+
+We measure upstream performance using the language modeling loss on an held out sample of the pretraining dataset. However, it is not always possible to compare losses across objectives and tokenizers. Moreover, as upstream performance is not always aligned with task performance (Tay et al., 2021), we must also measure downstream performance explicitly. We could use zero/few-shot generalization, with or without specific finetuning.
+
+Specifically, we choose to measure zero-shot generalization on a diverse set of tasks. Few-shot and zero-shot results are strongly correlated: we found a Pearson correlation coefficient of 0.93 between zero-shot and few-shot performance across model sizes in Brown et al. (2020). We do not rely on finetuning as it is not how the main final model
+
+is likely to be used, given its size and the challenges associated with finetuning at the $100\mathrm{B}+$ scale.
+
+We use the popular EleutherAI Language Model Evaluation Harness (EAI harness, Gao et al. (2021)), evaluating models across 27 diverse tasks that are similar to those used in Brown et al. (2020) (see Appendix C for a list of tasks). Overall, the random baseline on our benchmark sits at $33.3\%$ .
+
+# 2.4Baselines
+
+We use GPT-Neo (Black et al., 2021), a 1.3B decoder-only autoregressive language model trained on the Pile (Gao et al., 2020), and GPT-3 (Brown et al., 2020), accessed via the OpenAI API. We evaluate two models, Babbage and Curie1. Based on Gao (2021) and our own analysis, we assume Babbage is 1.3B while Curie is 6.7B based on how close our computed results are to those reported in the original paper. However, as details of the OpenAI API are kept secret, there is no way to make sure that the models are actually the ones described in Brown et al. (2020) – the number of pretraining tokens reported in Table 1 is thus to be taken cautiously.
+
+# 3 Impact of Pretraining Data
+
+We first study the impact of pretraining data on zero-shot generalization. More diverse pretraining data, ideally curated from a cross-domain collection of high-quality datasets, has been suggested to help with downstream task performance and zero-shot generalization (Rosset, 2020; Gao et al., 2020).
+
+# 3.1 Corpora
+
+We evaluate three possible corpora, all commonly used to train large language models:
+
+- OSCAR v1 (Ortiz Suárez et al., 2019) $^{2}$ , a multilingual, filtered version of Common Crawl;
+- C4 (Raffel et al., 2019), specifically its replication by AllenAI, a processed and filtered version of Common Crawl;
+- The Pile (Gao et al., 2020), a diverse pretraining corpus that contains webscrapes from Common Crawl in addition to high-quality data from cross-domain sources such as academic texts and source code.
+
+For each pretraining corpus, we train a 1.3B parameter model for 112B tokens. For the Pile specifically, motivated by good early results at 112B tokens, we train up to 300B tokens, to compare with GPT-3 models and validate against GPT-Neo.
+
+# 3.2 Results
+
+Evaluation results are outlined in Table 1. We find that training on the Pile produces models that are better at zero-shot generalization, with C4 a close second, and OSCAR significantly behind.
+
+Importantly, this finding transfers to larger scales: as part of engineering test runs, a 13B model was trained on OSCAR for 300B tokens. We found this 13B model to underperform the 6.7B model from OpenAI API which we attribute to the low quality of the English data in OSCAR.
+
+We also note that our model trained on The Pile outperforms the 1.3B GPT-Neo trained on the same dataset. Finally, our 1.3B model still underperforms the 1.3B model from the OpenAI API by $1.6\%$ . It seems most likely that the difference is that of data, but we cannot investigate this further as the GPT-3 training dataset is neither publicly available nor reproducible.
+
+Finding 1. Diverse cross-domain pretraining data combining web crawls with curated high-quality sources improves zero-shot generalization over pretraining datasets constructed from Common crawl only.
+
+# 4 Architecture Ablations
+
+We now consider ablation studies to better identify the best positional embedding, activation function, and embedding normalization placement.
+
+# 4.1 Positional Embeddings
+
+Background Originally, both static sinusoidal position embeddings and learned position embeddings were proposed to capture positional information; the latter are popular in large language models (Brown et al., 2020). Su et al. (2021) proposed rotary embeddings, where the query and key representations inside the self-attention mechanism are modified such that the attention captures relative distances between them. Recently, Press et al. (2022) introduced a method which does not use embeddings, instead directly attenuating the attention scores based on how far away the keys/queries are.
+
+Results We compare learned, rotary, and ALiBi position embeddings, and include a baseline without position embeddings. Our results are presented in Table 2. Although learned positional embeddings outperform rotary embeddings, ALiBi yields significantly better results than all alternatives. We also confirm the findings of Biderman (2021): a baseline with no positional information exhibits competitive performance. While bidirectional models require positional embeddings to determine the location of tokens, we find autoregressive models can simply leverage the causal attention mask. We also confirm the ability of ALiBi to extrapolate to longer sequences than trained on in Figure 2. Note that results in Table 2 do not use any extrapolation: ALiBi embeddings are a better choice even without taking into account their ability to extrapolate.
+
+Finding 2. ALiBi positional embeddings significantly outperforms other embeddings for zero-shot generalization.
+
+
Positional Embedding
Average EAI Results
None
41.23
Learned
41.71
Rotary
41.46
ALiBi
43.70
+
+Table 2: ALiBi significantly outperforms other embeddings for zero-shot generalization. All models are trained on the OSCAR dataset for 112 billion tokens.
+
+
+Figure 2: ALiBi embeddings can effectively extrapolate past the sequence length on which the model was trained, while rotary embeddings can not. This is in line with the findings of Press et al. (2022).
+
+# 4.2 Activation Functions
+
+Background. Large language models by and large still mostly use the GELU activation (Hendrycks and Gimpel, 2016). We evaluate a recently proposed alternative, SwiGLU (Shazeer, 2020), which combines both Gated Linear Units (Dauphin et al., 2016) with the Swish activation function (Ramachandran et al., 2017).
+
+SwiGLU uses $50\%$ extra parameters in the feedforward layers. As suggested in Shazeer (2020), we compensate for this by reducing the hidden size of the feed-forward layer.
+
+Results. We present our results in Table 3. SwiGLU produces slightly better results than GELU. For our final model, we adopted GELU, as we initially observed a lower throughput for SwiGLU. However, further benchmarking identified that this overhead was primarily associated with the change in the hidden size of the feedforward network. Indeed, this new size, 5,456, is divisible by neither the warp size of the GPU (Lashgar et al., 2013) nor the number of streaming multiprocessors, resulting in both tile and wave quantization. We accordingly recommend using SwiGLU for future models.
+
+
Activation function
Average EAI Results
GELU
42.79
SwiGLU
42.95
+
+# 4.3 Embedding Norm
+
+Dettmers et al. (2021) suggests that greater stability of training can be achieved by including an extra layer normalization (Ba et al., 2016) after the embedding layer. We evaluate the performance impact of such a modification in Table 4. We note that this incurs a significant reduction in the performance of the model. However, models above 100 billion parameters are notoriously unstable and require considerable engineering efforts in order to be kept stable. If this addition provides increased stability when training, it may be valuable.
+
+Finding 3. Adding layer normalization after the embedding layer incurs a significant penalty on zero-shot generalization.
+
+# 5 Multilinguality
+
+The majority of $100\mathrm{B}+$ language models have been trained in English, with notable exceptions in Chinese (Zeng et al., 2021; Wu et al., 2021) and Korean (Kim et al., 2021) models. Smaller massively multilingual models have seen wider adoption (Xue et al., 2020), but these models are not suitable for zero-shot. Recent results on large GPT-like multilingual models show that English-only performance is usually disappointing (Lin et al., 2021).
+
+Training data. We train a multilingual model to evaluate the effectiveness and potential impacts of this practice. We use the OSCAR dataset (Ortiz Suárez et al., 2019), but here we include multiple languages, not only English as in the earlier experiments. The languages we include are Arabic, Basque, Bengali, Chinese, Catalan, English, French, Hindi, Indonesian, Portuguese, Spanish, Urdu, and Vietnamese. We sample each language with a different probability that downsamples the most frequent languages and upsamples the least frequent ones, so that all languages are represented. We estimate the sampling probabilities similar to Xue et al. (2021).
+
+Table 3: SwiGLU slightly outperforms GELU for zero-shot generalization. Models trained on The Pile for 112 billion tokens.
+
+
Embedding Norm
Average EAI Results
No
43.46
Yes
42.24
+
+Table 4: Layer normalization after the embedding layer diminishes performance significantly. Models trained on The Pile for 300 billion tokens.
+
+
Model
Size
EN
ZH
ES
FR
VI
AR
HI
UR
Average
XGLM (Lin et al.)
7.5B
54.5
45
38.2
50.7
47.5
47.5
43.4
42.7
46.19
XGLM (reprod.)
7.5B
53.85
45.21
41.7
49.82
47.35
46.37
43.19
42.3
46.22
XGLM
1.7B
49.68
44.63
37.39
47.94
42.75
45.65
44.35
43.19
44.45
Ours
1.3B
49.9
44.53
36.77
46.51
45.75
43.41
45.95
42.91
44.47
+
+English-only evaluation. We first evaluate our multilingual model on the same set of English benchmarks we have used previously, in Table 6. Multilinguality significantly lowers accuracy on the English benchmark, which is in line with the results from Lin et al. (2021).
+
+Multilingual evaluation. Zero-shot multilingual evaluation is more challenging to setup because it requires writing new prompts for each new language. Therefore, instead of manually writing prompts for each language, we follow the strategy proposed by Lin et al. (2021), using English prompts for non-English examples—this can be viewed as cross-lingual zero-shot generalization. They validated this strategy by demonstrating its ability to achieve zero-shot performance on par with (and sometimes even better than) human-written language-specific prompts. This strategy also demonstrates cross-lingual abilities.
+
+We evaluate on XNLI (Conneau et al., 2018), a multilingual NLI dataset that covers 8 of the languages we use for training. Our evaluation is different from the zero-shot evaluation of the XTREME benchmark (Hu et al., 2020). XTREME first finetunes the model on the English training data of each downstream task, then evaluates it on the non-English dataset, attempting cross-lingual generalization. Our evaluation avoids any finetuning, and instead relies entirely on zero-shot generalization.
+
+Table 5: Our multilingual 1.3B model achieves accuracy on zero-shot XNLI in line with XGLM Lin et al. (2021). First row is the reported XGLM results, and the second is our reproduction of their results to validate our multilingual evaluation setup. Last two rows show that our multilingual model matches the XGLM results.
+
+
Pretraining
Average EAI Results
English-only
41.72
Multilingual
38.55
+
+Table 6: Multilingual pretraining very significantly diminishes English zero-shot generalization. Both models trained on OSCAR for 112B tokens.
+
+Results. Table 5 shows the XNLI results of our multilingual model and how it compares to XGLM (Lin et al., 2021). We were able to reproduce the results of XGLM-7.5B which validates our evaluation setup. Furthermore, the table shows that the performance of our 1.3B is in line with the XNLI 1.7B model, validating that our multilingual setup achieves competitive results. It is worth noting that our 1.3B model is trained on only 112B tokens from 13 languages while XGLM is trained on 500B tokens from 30 languages. As far as we are aware, this is the first independent replication of the main results of Lin et al. (2021).
+
+Language-specific scaling laws. To explore how scale influences multilinguality, we train a wider range of models (i.e. 0.3-6B parameters) on a larger corpus of more than 300B tokens of text drawn from a variety of languages (Laurençon et al., 2022). In Figure 3, we show scaling laws for Arabic, Catalan, Code, English, Spanish, Basque, French, Indonesian, Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Odia, Punjabi, Tamil, Telugu, Urdu, aggregated Niger-Congo languages, Portuguese, Vietnamese, Simplified and Traditional Chinese.
+
+Smaller models struggle more with underrepresented languages such as those in the Indic and Niger-Congo family. For example, the loss of the sub-1 billion models goes up at the end of training for Malayalam, Odia, and Telugu. As data is not repeated, it is unlikely that this effect is due to overfitting; we interpret this as insufficient capacity in the model to handle many language representations, with data in the dominant language sets causing catastrophic forgetting of less represented languages. In contrast, the largest model sees its loss decrease smoothly for every language: larger models handle multilinguality more easily. Overall, scaling laws coefficients are consistent across well-represented languages, only differing in offsets.
+
+
+Figure 3: Scaling laws across languages for the smaller BLOOM models. Black line is Pareto frontier of optimality (best loss at a given compute), dashed line is best fit. Fit coefficients are detailed in Appendix B. All sufficiently represented languages exhibit similar scaling behaviour, with mostly differences in loss offsets.
+
+# 6 Scaling to 176B parameters
+
+We now detail how our previous findings influence our architecture and scaling decisions for the final 176B BLOOM model.
+
+Compute allocation. We have been allocated 18 weeks of dedicated use of partition with 52 nodes of $8 \times 80\mathrm{GB}$ A100 GPUs on the Jean Zay supercomputer. We set four nodes aside as spare, so that our compute budget amounts to 1,161,216 A100-hours in total. Assuming a throughput of 100 model TFLOPS, approximately corresponding to state-of-the-art hardware FLOPS of 150 (Narayanan et al., 2021), we have a compute budget of 4,838 PF-days for the model training. We round this down to 4,500 PF-days, this $\sim 10\%$ safety margin accounting for potential downtime and inefficiencies (e.g., batch size ramp-up) during training. To put this number in perspective, this is $\sim 23\%$ more than the training budget of GPT-3. Given this compute budget, our English-only scaling laws in 1 predict an optimal allocation for training a 392B parameter model for 165B tokens. We will use these as an upper bound in size: the largest model we can afford is 392B parameters, and the minimum number of tokens to train on is 165B tokens.
+
+Model shape. Kaplan et al. (2020) studied the dependence of the loss with model shape, and found only a limited impact within a wide range of feed-forward ratios $d_{ff} / d_{model}$ , aspect ratios $d_{model} / n_{layer}$ , and attention head dimensions.
+
+Levine et al. (2020) proposed a theoretically motivated and empirically backed law describing the optimal compromise between width and depth. They predict that $100\mathrm{B}+$ parameters models such as GPT-3 are too deep, while models in the 10B or smaller range are usually too shallow. For a GPT-3-sized model with 175B parameters, they predict an ideal depth of 80 layers.
+
+# 6.1 Final Model Architecture
+
+We set three main guidelines for our final model:
+
+- 300-400B tokens. We want to guarantee our model will train on around 300-400B tokens of data. This is in the upper range for models in the size range we are pursuing, ensuring that low-resource languages will not be allocated too few tokens. Using the $C = 6ND$ approximation (Kaplan et al., 2020), with $C = 4$ , 500 PF-days and $D = 300 - 400$ B tokens, this constrains the model size to be around 160-200B parameters.
+
+
Model
Size [Bparams.]
Pretraining [Btokens]
Budget [PF-days]
Layers
Hidden dim.
Attention heads
num.
dim.
LaMDA (Thoppilan et al., 2022)
137
432
4,106
64
8,192
128
64
GPT-3 (Brown et al., 2020)
175
300
3,646
96
12,288
96
128
J1-Jumbo (Lieber et al., 2021)
178
300
3,708
76
13,824
96
144
PanGu-α (Zeng et al., 2021)
207
42
604
64
16,384
128
128
Yuan (Wu et al., 2021)
245
180
3,063
76
16,384
Gopher (Rae et al., 2021)
280
300
4,313
80
16,384
128
128
MT-530B (Smith et al., 2022)
530
270
9,938
105
20,480
128
160
+
+Table 7: State-of-the-art 100B+ models with publicly available details. Compute budget is expressed in model PF-days required for training the models, from the $C = 6ND$ approximation of Kaplan et al. (2020). Number of tokens for LaMDA is inferred from reported compute budget and size. Yuan did not report attention head details.
+
+
Model
Size [params.]
Layers
Hidden dim.
Attention heads
Memory [GB]
Performance [sec/iter.]
[TFLOPs]
num.
dim.
(1)
178
82
13,312
64
208
63
104
152
(2)
178
82
128
104
60
109
146
(3)
176
70
14,336
112
128
59
105
150
+
+Table 8: We choose configuration (3) as the final configuration for our 176B model. (1) was rejected because of high attention heads dimension, and (3) was favored over (2) because of higher throughput. Appendix D details all 20 final configurations benchmarked, only the best three are displayed here.
+
+- 70-80 layers. From Levine et al. (2020) and the size constraint above, we estimate that our model should have between 70 and 80 layers.
+- Maximum throughput. Finally, we want the final architecture to have as high of a throughput per GPU as possible, as more compute will translate directly into longer pretraining and thus a better model. Engineering constraints also come into light here: wide shallow models are typically easier to parallelize across nodes, up to a point where excessive tensor parallelism becomes necessary due to memory constraints.
+
+We detail in Table 7 the architectures of current state-of-the-art $100\mathrm{B + }$ models. From these guidelines, we benchmark 20 model configurations, detailed in Appendix D. Among these configurations, we select three of particular interest, outlined in Table 8. They best fit our guidelines above, and offer high throughput, maximizing our training budget.
+
+We discard configuration (1), as its attention heads are much larger than other models in the literature. Configuration (3) is shallower than recommended by Levine et al. (2020), but delivers $3\%$ higher throughput compared to (2). Thus, we choose configuration (3) and its better throughput, and because a shallower model is easier to deal with at inference time by introducing less latency.
+
+# 7 Limitations
+
+Optimal scaling. Concurrent to this work, Hoffmann et al. (2022) identified more optimal scaling laws. For our compute budget, they would suggest a 50B parameters model trained for a trillion tokens. Interestingly, even in hindsight, it would have been difficult to follow this recommendation as we would have been limited by the limited availability of high-quality multilingual data and by the size of the BigScience training dataset, ROOTS (Laurençon et al., 2022). Note that our Figure 1 reproduces Kaplan et al. (2020) as we did not account for the learning rate schedule as suggested by Hoffmann et al. (2022).
+
+Other hyperparameters. In this work we have focused on a subset of the available hyperparameter space of large language models. We have investigated architecture decisions around positional embeddings, activation functions and the embedding norm. Alternative attention mechanisms (Tay et al., 2020) or optimizers are examples of other dimensions that could be investigated, potentially leading to improved models.
+
+Efficient fine-tuning. Our study is focused on zero-shot use and does not consider efficient finetuning (Lester et al., 2021; Zaken et al., 2021), which is quite relevant for large language models, and which may lead to different conclusions.
+
+# 8 Conclusion
+
+Seeking to establish the best possible model architecture that can be accommodated within a fixed 1,000,000 GPU-hours compute budget, we have presented an extensive study on principled modeling decisions for large language models.
+
+First, we have found that complimenting Common Crawl data with high-quality cross-domain curated data can boost zero-shot generalization, validating previous suggestions (Rosset, 2020; Gao et al., 2020). Through an ablation study, we have identified ALiBi as the position embedding of choice, confirmed the potential of SwiGLU, and highlighted that stabilizing techniques such as embedding normalization sometimes come at the expense of zero-shot generalization. Exploring multilinguality, we have found that multilingual models significantly underperform their monolingual counterparts on English zero-shot benchmarks, but that they can learn under-resourced languages along with larger ones if given enough scale. Finally, we identified a candidate architecture for BLOOM 176B, outlining the full reasoning behind every architectural parameter, including model shape.
+
+At variance with previous $100\mathrm{B}+$ models, such as GPT-3 (Brown et al., 2020) or Gopher (Rae et al., 2021), this project was conducted in the open, and resulted in a number of open-access artefacts. Notable similar projects conducted in parallel to this one include OPT (Zhang et al., 2022) and GLM (Zeng et al., 2022), although they lacked the collaborative and massively multilingual components of this project.
+
+We hope our work can help practitioners better understand modeling decisions, leading to better language models, and that this transparency will accelerate future similar work.
+
+# Acknowledgements
+
+This work was granted access to the HPC resources of Institut du développement et des ressources en informatique scientifique (IDRIS) du Centre national de la recherche scientifique (CNRS) under the allocation 2021-A0101012475 made by Grand équipement national de calcul intensif (GENCI). In particular, all the trainings ran on the Jean-Zay cluster of IDRIS, and we want to thank the IDRIS team for responsive support throughout the project, in particular Rémi Lacroix. Evaluations of GPT-3 models were provided in part by the Allen Institute for Artificial Intelligence. We thank Leo Gao
+
+for his expertise and advice on language model evaluation.
+
+# References
+
+Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2357-2367, Minneapolis, Minnesota. Association for Computational Linguistics.
+Stéphane Aroca-Ouellette, Cory Paik, Alessandro Roncone, and Katharina Kann. 2021. PROST: Physical reasoning about objects through space and time. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4597-4608, Online. Association for Computational Linguistics.
+Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.
+Qiang Ning Ben Zhou, Daniel Khashabi and Dan Roth. 2019. "going on a vacation" takes longer than "going for a walk": A study of temporal commonsense understanding. In *EMNLP*.
+Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1533-1544.
+[@BlancheMinerva] Stella Biderman. 2021. You: Gee stella, #eleutherai sure hypes rotary embeddings a lot. are you sure that they're that good? me:. Twitter.
+BigScience Workshop. 2022. Bloom (revision 4ab0472).
+Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 2020. Piqa: Reasoning about physical commonsense in natural language. In Thirty-Fourth AAAI Conference on Artificial Intelligence.
+Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. 2021. GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow. If you use this software, please cite it using these metadata.
+Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack
+
+Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901.
+Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers. URL https://openai.com/blog/sparse-transformers.
+Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. Boolq: Exploring the surprising difficulty of natural yes/no questions. In NAACL.
+Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the AI2 reasoning challenge. CoRR, abs/1803.05457.
+Alexis Conneau, Rudy Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating crosslingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
+Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In *Machine Learning Challenges Workshop*, pages 177-190. Springer.
+Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. 2016. Language modeling with gated convolutional networks. CoRR, abs/1612.08083.
+Tim Dettmers, Mike Lewis, Sam Shleifer, and Luke Zettlemoyer. 2021. 8-bit optimizers via block-wise quantization.
+William B Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).
+Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. Advances in Neural Information Processing Systems, 32.
+Deep Ganguli, Danny Hernandez, Liane Lovitt, Nova DasSarma, Tom Henighan, Andy Jones, Nicholas Joseph, Jackson Kernion, Ben Mann, Amanda Askell, et al. 2022. Predictability and surprise in large generative models. arXiv preprint arXiv:2202.07785.
+Leo Gao. 2021. On the sizes of openai api models.
+Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn
+
+Presser, and Connor Leahy. 2020. The Pile: an 800GB dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027.
+Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2021. A framework for few-shot language model evaluation.
+Andrew Gordon, Zornitsa Kozareva, and Melissa Roemmele. 2012. SemEval-2012 task 7: Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 394–398, Montréal, Canada. Association for Computational Linguistics.
+Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415.
+Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556.
+Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan First, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. ArXiv, abs/2003.11080.
+Shankar Iyer, Nikhil Dandekar, and Kornel Csernai. 2017. First quora dataset release: Question pairs.
+Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. 2019. Pubmedqa: A dataset for biomedical research question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2567-2577.
+Matt Gardner Johannes Welbl, Nelson F. Liu. 2017. Crowdsourcing multiple choice science questions.
+Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, Canada. Association for Computational Linguistics.
+Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.
+
+Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: a challenge set for reading comprehension over multiple sentences. In Proceedings of North American Chapter of the Association for Computational Linguistics (NAACL).
+Boseop Kim, Hyoungseok Kim, Sang-Woo Lee, Gichang Lee, Dongyun Kwak, Dong Hyeon Jeon, Sunghyun Park, Sungju Kim, Seonhoon Kim, Dong Hyung Seo, Heungsub Lee, Minyoung Jeong, Sungjae Lee, Minsub Kim, SukHyun Ko, Seokhun Kim, Taeyong Park, Jinuk Kim, Soyoung Kang, NaHyeon Ryu, Kang Min Yoo, Minsuk Chang, Soobin Suh, Sookyo In, Jinseong Park, Kyungduk Kim, Hium Kim, Jisu Jeong, Yong Goo Yeo, Dong hyun Ham, Do-Hyoung Park, Min Young Lee, Jaewoo Kang, Inho Kang, Jung-Woo Ha, Woo Chul Park, and Nako Sung. 2021. What changes can large-scale language models bring? intensive study on hyperclova: Billions-scale korean generative pretrained transformers. ArXiv, abs/2109.04650.
+Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
+Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683.
+Ahmad Lashgar, Amirali Baniasadi, and Ahmad Khonsari. 2013. Warp size impact in gpus: large or small? In GPGPU@ASPLOS.
+Hugo Laurençon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, Teven Le Scao, Leandro Von Werra, Chenghao Mou, Eduardo González Ponferrada, Huu Nguyen, Jörg Frohberg, Mario Šaško, Quentin Lhoest, Angelina McMillan-Major, Gérard Dupont, Stella Biderman, Anna Rogers, Loubna Ben allal, Francesco De Toni, Giada Pistilli, Olivier Nguyen, Somaieh Nikpoor, Maraim Masoud, Pierre Colombo, Javier de la Rosa, Paulo Villegas, Tristan Thrush, Shayne Longpre, Sebastian Nagel, Leon Weber, Manuel Romero Muñoz, Jian Zhu, Daniel Van Strien, Zaid Alyafeai, Khalid Almubarak, Vu Minh Chien, Itziar Gonzalez-Dios, Aitor Soroa, Kyle Lo, Manan Dey, Pedro Ortiz Suarez, Aaron Gokaslan, Shamik Bose, David Ifeoluwa Adelani, Long Phan, Hieu Tran, Ian Yu, Suhas Pai, Jenny Chim, Violette Lepercq, Suzana Ilic, Margaret Mitchell, Sasha Luccioni, and Yacine Jernite. 2022. The bigscience ROOTS corpus: A 1.6TB composite multilingual dataset. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
+Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691.
+Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Thir
+
+teenth International Conference on the Principles of Knowledge Representation and Reasoning.
+Yoav Levine, Noam Wies, Or Sharir, Hofit Bata, and Amnon Shashua. 2020. Limits to depth efficiencies of self-attention. Advances in Neural Information Processing Systems, 33:22640-22651.
+Opher Lieber, Or Sharir, Barak Lenz, and Yoav Shoham. 2021. Jurassic-1: Technical details and evaluation. Technical report, AI21 Labs.
+Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Ves Stoyanov, and Xian Li. 2021. Few-shot learning with multilingual language models. ArXiv, abs/2112.10668.
+Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. 2020. Logiqa: A challenge dataset for machine reading comprehension with logical reasoning. CoRR, abs/2007.08124.
+Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. In International Conference on Learning Representations.
+Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In EMNLP.
+Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, et al. 2021. Efficient large-scale language model training ongpu clusters using megatron-lm. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1-15.
+Pedro Javier Ortiz Suárez, Benoit Sagot, and Laurent Romary. 2019. Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures. Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019, pages 9 - 16, Mannheim. Leibniz-Institut für Deutsche Sprache.
+Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. 2016. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1525-1534, Berlin, Germany. Association for Computational Linguistics.
+
+Mohammad Taher Pilehvar and os'e Camacho-Collados. 2018. Wic: 10, 000 example pairs for evaluating context-sensitive representations. CoRR, abs/1808.09121.
+Ofir Press, Noah Smith, and Mike Lewis. 2022. Train short, test long: Attention with linear biases enables input length extrapolation. In International Conference on Learning Representations.
+Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 157-163, Valencia, Spain. Association for Computational Linguistics.
+Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
+Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. CoRR, abs/1910.10683.
+Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: $100,000+$ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.
+Prajit Ramachandran, Barret Zoph, and Quoc V Le. 2017. Searching for activation functions. arXiv preprint arXiv:1710.05941.
+Corby Rosset. 2020. Turing-nlg: A 17-billion-parameter language model by microsoft.
+Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Winogrande: An adversarial winograd schema challenge at scale. arXiv preprint arXiv:1907.10641.
+Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang A. Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M SAIFUL BARI, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Stella Rose Biderman, Leo Gao, T. G. Owe Bers, Thomas Wolf, and Alexander M. Rush. 2021. Multitask prompted training enables zero-shot task generalization. ArXiv, abs/2110.08207.
+
+Noam Shazeer. 2020. Glu variants improve transformer.
+Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, et al. 2022. Using deep-speed and megatron to train megatron-turing nlg 530b, a large-scale generative language model. arXiv preprint arXiv:2201.11990.
+Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642.
+Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen, and Yunfeng Liu. 2021. Roformer: Enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864.
+Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Donald Metzler. 2020. Long range arena: A benchmark for efficient transformers. arXiv preprint arXiv:2011.04006.
+Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, and Donald Metzler. 2021. Scale efficiently: Insights from pre-training and fine-tuning transformers. ArXiv, abs/2109.10686.
+Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In the Proceedings of ICLR.
+Thomas Wang, Adam Roberts, Daniel Hesslow, Teven Le Scao, Hyung Won Chung, Iz Beltagy, Julien Launay, and Colin Raffel. 2022. What language model architecture and pretraining objective work best for zero-shot generalization? arXiv preprint arXiv:2204.05832.
+Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.
+
+Shaohua Wu, Xudong Zhao, Tong Yu, Rongguo Zhang, Chong Shen, Hongli Liu, Feng Li, Hong Zhu, Jiangang Luo, Liang Xu, et al. 2021. Yuan 1.0: Large-scale pre-trained language model in zero-shot and few-shot learning. arXiv preprint arXiv:2110.04725.
+Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mt5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934.
+Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mt5: A massively multilingual pre-trained text-to-text transformer. In NAACL.
+Elad Ben Zaken, Shauli Ravfogel, and Yoav Goldberg. 2021. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. arXiv preprint arXiv:2106.10199.
+Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791-4800, Florence, Italy. Association for Computational Linguistics.
+Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. 2022. Glm-130b: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414.
+Wei Zeng, Xiaozhe Ren, Teng Su, Hui Wang, Yi Liao, Zhiwei Wang, Xin Jiang, ZhenZhang Yang, Kaisheng Wang, Xiaoda Zhang, et al. 2021. Pangu- $\alpha$ : Large-scale autoregressive pretrained chinese language models with auto-parallel computation. arXiv preprint arXiv:2104.12369.
+Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068.
+
+# A Open artefacts: models, code, and logs
+
+We make public all artefacts produced as part of this work:
+
+- Models. All trained models are centralized at https://huggingface.co/bigscience;
+- Code. All code is available at https://github.com/bigscience-workshop/Megatron-DeepSpeed/tree/main/megatron;
+- Discussions and logbook. The notes from the weekly meetings of our working group are made available at https://docs.google.com/document/d/1qbIkhd6bvbOsJOWXL7SfKQ0 jey3MWQYQb_SshqH1LII/.
+
+# B Multilingual scaling laws
+
+
Language
Proportion [%]
αc
Cm
Arabic
4.6
0.057
1.16
Catalan
1.1
0.057
1.11
Code
10.8
0.054
0.94
English
30.0
0.051
1.08
Spanish
10.8
0.050
1.01
Basque
0.15
0.069
1.28
French
12.9
0.047
1.06
Indonesian
1.2
0.051
1.14
Assamese
0.01
0.051
1.31
Bengali
0.5
0.037
1.15
Gujarati
0.04
0.051
1.30
Hindi
0.7
0.045
1.14
Kannada
0.06
0.046
1.26
Malayalam
0.1
0.044
1.17
Marathi
0.05
0.046
1.23
Nepali
0.07
0.055
1.25
Odia
0.04
0.044
1.25
Punjabi
0.05
0.043
1.20
Tamil
0.2
0.030
1.14
Telugu
0.09
0.056
1.31
Urdu
0.1
0.068
1.31
Niger-Congo (family)
0.03
0.039
1.22
Portuguese
4.9
0.049
1.05
Vietnamese
2.7
0.053
1.08
Chinese (simplified)
16.2
0.052
1.09
Chinese (traditional)
0.05
0.050
1.15
+
+Table 9: Best scaling law fit per language. We fit $\mathcal{L}(C) = C_m C^{-\alpha_c}$ to the runs reported in Figure 3. But for a handful of languages which are poorly represented in the overall mixture (Basque, most of the Indic family, and Niger-Congo languages), scaling mostly different in offset $C_m$ , not in exponent $\alpha_c$ .
+
+
Task
Type
Random baseline
ARC (Clark et al., 2018)
Challenge
Natural Language Inference
25.0
Easy
25.0
GLUE
MRPC (Dolan and Brockett, 2005)
Paraphrase Identification
50.0
QQP (Iyer et al., 2017)
Paraphrase Identification
50.0
HellaSwag (Zellers et al., 2019)
Sentence Completion
25.0
LAMBADA (Paperno et al., 2016)
Sentence Completion
0.0
LogiQA (Liu et al., 2020)
Multiple-Choice Question Answering
25.0
MathQA (Amini et al., 2019)
Multiple-Choice Question Answering
20.1
MC-TACO (Ben Zhou and Roth, 2019)
Multiple-Choice Question Answering
36.2
OpenBookQA (Mihaylov et al., 2018)
Multiple-Choice Question Answering
25.0
PIQA (Bisk et al., 2020)
Multiple-Choice Question Answering
50.0
PROST (Aroca-Ouellette et al., 2021)
Multiple-Choice Question Answering
25.0
PudMedQA (Jin et al., 2019)
Multiple-Choice Question Answering
33.3
QNLI (Rajpurkar et al., 2016; Wang et al., 2019)
Sentence Completion
50.0
Race (Lai et al., 2017)
Closed-Book Question Answering
25.0
SciQ (Johannes Welbl, 2017)
Multiple-Choice Question Answering
25.0
SST (Socher et al., 2013)
Sentiment
50.0
SuperGLUE
Boolq (Clark et al., 2019)
Multiple-Choice Question Answering
50.0
COPA (Gordon et al., 2012)
Sentence Completion
50.0
MultiRC (Khashabi et al., 2018)
Multiple-Choice Question Answering
5.8
RTE (Dagan et al., 2005)
Natural Language Inference
50.0
WIC (Pilehvar and os'e Camacho-Collados, 2018)
Word Sense Disambiguation
50.0
WSC (Levesque et al., 2012)
Word Sense Disambiguation
50.0
TriviaQA (Joshi et al., 2017)
Closed-Book Question Answering
0.0
WebQuestions (Berant et al., 2013)
Closed-Book Question Answering
0.0
Winogrande (Sakaguchi et al., 2019)
Coreference resolution
50.0
WNLI (Sakaguchi et al., 2019)
Natural Language Inference
50.0
EAI harness
33.3
+
+Table 10: Evaluation tasks considered in the EAI harness and random baselines.
+
+
ARCHITECTURE
PARALLELISM
PERFORMANCE
Size [Bparams.]
Hidden dim.
Layers
Attention heads num.
dim.
Data
Tensor
Pipeline
MBS
Memory [GB]
Throughput [s/iter.]
[TFLOPs]
206
14,336
82
128
112
8
4
12
2
OOM
203
13,312
94
128
104
8
4
12
2
67
124,1
146,1
195
12,288
106
128
96
2
67
121,4
143,7
96
128
8
4
12
4
79
120,3
145,0
128
128
2
65
118,8
146,9
64
192
2
67
116,5
149,8
184
12,288
100
64
192
16
4
2
OOM
8
8
6
1
OOM
4
72
121,0
136,2
2
61
140,0
117,9
178
13,312
82
128
104
2
60
108,8
145,7
104
128
8
4
2
62
123,7
128,1
64
208
4
8
12
4
74
104,8
151,2
8
4
2
52
111,8
141,8
2
63
104,5
151,7
176
14,336
70
128
112
2
60
105,9
148,1
112
128
8
4
12
4
59
104,5
150,1
64
224
2
73
102,3
153,3
4
8
12
2
59
102,0
153,7
2
40
121,6
128,9
+
+Table 11: Throughput and memory usage of considered models sizes. Note that pipeline parallelism here considers equal "slots" for embeddings and Transformer layers. This is important to optimize pipeline use, as our multilingual embeddings are quite large (250k).
+
+# E All Results
+
+
Ablation
Dataset
Embedding
Activation
Embedding Norm
Parameters
112GT
250GT
300GT
Embeddings
OSCAR
Learned
GELU
No
1.3B
41.71
Embeddings
OSCAR
None
GELU
No
1.3B
41.23
Embeddings
OSCAR
Rotary
GELU
No
1.3B
41.46
Embeddings
OSCAR
ALiBi
GELU
No
1.3B
43.70
Dataset
The Pile
Learned
GELU
No
1.3B
42.79
43.12
43.46
Dataset
C4
Learned
GELU
No
1.3B
42.77
Dataset
OSCAR
Learned
GELU
No
1.3B
42.79
Activation
The Pile
Learned
GELU
No
1.3B
42.79
Activation
The Pile
Learned
SwiGLU
No
1.3B
42.95
Embedding Norm
The Pile
Learned
GELU
No
1.3B
42.79
43.12
43.46
Embedding Norm
The Pile
Learned
GELU
Yes
1.3B
42.24
Multilinguality
OSCAR-ML
Learned
GELU
No
1.3B
38.55
Multilinguality
OSCAR
Learned
GELU
No
1.3B
41.72
Scale
OSCAR
Learned
GELU
No
1.3B
41.72
Scale
OSCAR
Learned
GELU
No
13B
47.09
+
+Table 12: Summary of all results obtained in this study. The final three columns indicate the average EAI Harness results at across different billion tokens trained. Some rows are duplicated for ease of reading.
+
+
Public Name
OpenAI: babbage
Openai: curie
gpt-neo 1.3B
Dataset
C4
OSCAR
The Pile
The Pile
The Pile
The Pile
The Pile
OSCAR
The Pile
OSCAR
OSCAR
OSCAR
OSCAR-ML
Embeddings
Learned GELU
Learned GELU
Learned GELU
Learned GELU
Learned GELU
Learned GELU
Learned GELU
Learned GELU
Learned GELU
Rotary GELU
ALiBi GELU
None GELU
Learned GELU
Activation
No
No
No
No
No
No
No
No
No
No
No
No
No
Embedding Norm
Parameters in billion
1.3
6.7
1.3
1.3
1.3
1.3
1.3
1.3
1.3
1.3
1.3
1.3
1.3
1.3
1.3
Tokens trained in billion
300
300
300
112
112
112
250
300
300
330
300
112
112
112
112
task
metric
arc_challenge
acc
arc_challengeacc
0.276
0.334
0.243
0.249
0.258
0.264
0.260
0.242
0.250
0.322
0.247
0.236
0.252
0.249
arc_challenge
acc_norm
arc_challengeacc_norm
0.295
0.375
0.259
0.274
0.275
0.277
0.286
0.277
0.290
0.342
0.268
0.270
0.276
0.260
arc_easyacc
acc
arc_easyacc
0.597
0.685
0.562
0.561
0.560
0.569
0.601
0.568
0.582
0.681
0.557
0.554
0.575
0.537
arc_easy
acc_norm
arc_easyacc_norm
0.555
0.633
0.502
0.478
0.506
0.518
0.528
0.516
0.515
0.600
0.502
0.476
0.491
0.461
boolq
acc
boolqacc
0.629
0.666
0.620
0.546
0.566
0.520
0.551
0.606
0.558
0.567
0.540
0.584
0.563
0.526
copa
acc
copaacc
0.810
0.850
0.690
0.700
0.720
0.710
0.710
0.730
0.690
0.690
0.880
0.690
0.780
0.680
hellaswag
acc
hellaswagacc
0.429
0.504
0.387
0.422
0.404
0.374
0.385
0.405
0.378
0.380
0.542
0.379
0.410
0.395
hellaswag
acc_norm
hellaswagacc_norm
0.545
0.664
0.489
0.551
0.515
0.464
0.486
0.521
0.477
0.476
0.716
0.475
0.524
0.495
lambda
acc
lambadaacc
0.625
0.694
0.572
0.469
0.481
0.569
0.575
0.609
0.581
0.580
0.634
0.574
0.496
0.501
logiqa
acc
logiqaacc
0.201
0.215
0.197
0.206
0.237
0.210
0.218
0.203
0.217
0.223
0.232
0.215
0.210
0.237
logiqa
acc_norm
logiqaacc_norm
0.269
0.292
0.273
0.267
0.270
0.275
0.286
0.269
0.281
0.280
0.275
0.272
0.254
0.293
mathqa
acc
mathqaacc
0.244
0.251
0.241
0.233
0.222
0.249
0.248
0.263
0.246
0.235
0.238
0.234
0.237
0.215
mathqa
acc_norm
mathqaacc_norm
0.242
0.247
0.237
0.228
0.228
0.246
0.245
0.259
0.242
0.242
0.235
0.234
0.229
0.238
mc_taco
fl
mc_tacofl
0.458
0.484
0.493
0.361
0.293
0.485
0.488
0.494
0.487
0.489
0.497
0.493
0.461
0.377
mrpc
acc
mrpcacc
0.578
0.684
0.684
0.684
0.588
0.684
0.684
0.684
0.679
0.679
0.684
0.684
0.684
0.679
mrpc
fl
mrpcf1
0.718
0.812
0.812
0.702
0.812
0.812
0.812
0.812
0.808
0.808
0.812
0.812
0.812
0.808
multirc
acc
multircacc
0.018
0.018
0.018
0.026
0.023
0.024
0.023
0.025
0.008
0.018
0.026
0.009
0.011
0.016
openbookka
acc
openbookkaacc
0.224
0.290
0.216
0.220
0.200
0.190
0.196
0.222
0.194
0.208
0.294
0.214
0.224
0.210
openbookka
acc_norm
openbookkaacc_norm
0.336
0.386
0.336
0.336
0.328
0.316
0.334
0.302
0.312
0.412
0.320
0.344
0.340
0.332
piqa
acc
piqaacc
0.745
0.763
0.711
0.732
0.716
0.693
0.704
0.716
0.698
0.706
0.777
0.693
0.720
0.711
piqa
acc_norm
piqaacc_norm
0.746
0.772
0.711
0.730
0.721
0.705
0.705
0.717
0.698
0.701
0.788
0.689
0.721
0.731
prost
acc
prostacc
0.270
0.288
0.238
0.243
0.237
0.249
0.229
0.204
0.219
0.226
0.281
0.244
0.287
0.240
prost
acc_norm
prostacc_norm
0.260
0.295
0.308
0.293
0.303
0.268
0.271
0.268
0.292
0.305
0.283
0.276
0.296
0.332
pubmedqa
acc
pubmedqaacc
0.611
0.622
0.544
0.573
0.438
0.563
0.589
0.662
0.612
0.612
0.615
0.589
0.507
0.514
qnli
acc
qnliacc
0.512
0.529
0.499
0.476
0.507
0.505
0.506
0.505
0.499
0.499
0.517
0.498
0.493
0.481
qqp
acc
qqpacc
0.372
0.441
0.382
0.396
0.384
0.381
0.370
0.375
0.371
0.369
0.368
0.435
0.370
0.370
qqp
fl
qqpfl
0.534
0.515
0.522
0.530
0.519
0.534
0.537
0.538
0.538
0.533
0.495
0.539
0.475
0.537
race
acc
raceacc
0.356
0.386
0.341
0.330
0.323
0.334
0.342
0.321
0.323
0.374
0.374
0.317
0.344
0.332
rte
acc
rteacc
0.585
0.552
0.603
0.502
0.534
0.563
0.549
0.578
0.563
0.549
0.524
0.527
0.545
0.524
sciq
acc
sciqacc
0.867
0.919
0.860
0.825
0.810
0.838
0.853
0.868
0.867
0.895
0.849
0.818
0.828
0.816
sciq
acc_norm
sciqacc_norm
0.809
0.896
0.770
0.747
0.717
0.755
0.762
0.792
0.803
0.815
0.770
0.718
0.728
0.698
sst
acc
sstacc
0.732
0.666
0.656
0.676
0.560
0.753
0.721
0.501
0.528
0.710
0.754
0.760
0.493
0.588
triviaqa
acc
triviaqaacc
0.115
0.195
0.052
0.027
0.025
0.056
0.065
0.058
0.047
0.133
0.050
0.031
0.039
0.028
webqs
acc
webqsacc
0.048
0.065
0.017
0.012
0.004
0.023
0.026
0.023
0.020
0.021
0.012
0.006
0.004
0.015
wic
acc
wicacc
0.495
0.500
0.500
0.495
0.508
0.495
0.500
0.498
0.500
0.498
0.500
0.498
0.492
0.500
winogrande
acc
winograndeacc
0.595
0.648
0.551
0.564
0.565
0.536
0.552
0.563
0.543
0.647
0.538
0.564
0.583
0.543
wsc
acc
wscacc
0.394
0.558
0.365
0.539
0.567
0.365
0.365
0.365
0.414
0.385
0.500
0.365
0.394
0.635
Avg acc
45.30%
49.28%
42.94%
42.77%
41.72%
42.79%
43.12%
43.46%
43.46%
43.08%
47.09%
42.95%
41.45%
43.70%
\ No newline at end of file
diff --git a/whatlanguagemodeltotrainifyouhaveonemilliongpuhours/images.zip b/whatlanguagemodeltotrainifyouhaveonemilliongpuhours/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..7986563bacf4f90f53a2b5f19ca657213892e44c
--- /dev/null
+++ b/whatlanguagemodeltotrainifyouhaveonemilliongpuhours/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7ad8ef80ca19af926b2b65ffcea33ea8d5a78c7864f45580f42339a7aef6cdc0
+size 1003423
diff --git a/whatlanguagemodeltotrainifyouhaveonemilliongpuhours/layout.json b/whatlanguagemodeltotrainifyouhaveonemilliongpuhours/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..089e447ac2bc11923b8684199bb1c0af8fcb1562
--- /dev/null
+++ b/whatlanguagemodeltotrainifyouhaveonemilliongpuhours/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5d597fb61bf4c7183c848ccf633df02985c8d2ba652fd39eee309f162c99490c
+size 453034
diff --git a/whenlanguagemodelmeetsprivatelibrary/2718e6c8-2e0f-4416-9b6e-7510245b05fd_content_list.json b/whenlanguagemodelmeetsprivatelibrary/2718e6c8-2e0f-4416-9b6e-7510245b05fd_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a0dd3f370d214fc8a633c2cce8db14e5f5243982
--- /dev/null
+++ b/whenlanguagemodelmeetsprivatelibrary/2718e6c8-2e0f-4416-9b6e-7510245b05fd_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e814a9a2ee7fb332a9b89f9c6a9075eb9366100af1d1faa1e285c4bda41c5c88
+size 89374
diff --git a/whenlanguagemodelmeetsprivatelibrary/2718e6c8-2e0f-4416-9b6e-7510245b05fd_model.json b/whenlanguagemodelmeetsprivatelibrary/2718e6c8-2e0f-4416-9b6e-7510245b05fd_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8700f1498322450d7207dc65d90201eae570538f
--- /dev/null
+++ b/whenlanguagemodelmeetsprivatelibrary/2718e6c8-2e0f-4416-9b6e-7510245b05fd_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6eb8079d8426d0846d98888c9ee28b419476e409261b0f81b7f0e5f32375eca4
+size 103911
diff --git a/whenlanguagemodelmeetsprivatelibrary/2718e6c8-2e0f-4416-9b6e-7510245b05fd_origin.pdf b/whenlanguagemodelmeetsprivatelibrary/2718e6c8-2e0f-4416-9b6e-7510245b05fd_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d61b980024b4346d15679bd1e9507601e8f2e668
--- /dev/null
+++ b/whenlanguagemodelmeetsprivatelibrary/2718e6c8-2e0f-4416-9b6e-7510245b05fd_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9df118aa1f1b893bb4260266f33402906c4ff82d74ddc34b360e31e2958afabd
+size 682592
diff --git a/whenlanguagemodelmeetsprivatelibrary/full.md b/whenlanguagemodelmeetsprivatelibrary/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e31ab74de443784f3fea83f223538bbc666fd7ec
--- /dev/null
+++ b/whenlanguagemodelmeetsprivatelibrary/full.md
@@ -0,0 +1,319 @@
+# When Language Model Meets Private Library
+
+Daoguang $\mathsf{Zan}^{1,2*}$ Bei Chen $^3$ , Zeqi Lin $^3$ , Bei Guan $^{2,4}$ , Yongji Wang $^{2,4,5}$ , Jian-Guang Lou $^3$
+
+$^{1}$ Cooperative Innovation Center, Institute of Software, Chinese Academy of Sciences
+
+$^{2}$ University of Chinese Academy of Sciences; $^{3}$ Microsoft Research Asia
+
+4 Integrative Innovation Center, Institute of Software, Chinese Academy of Sciences
+
+$^{5}$ State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences
+
+{daoguang@, guanbei@, ywang@itechs.}iscas.ac.cn
+
+{beichen, zeqi.lin, jlou}@microsoft.com
+
+# Abstract
+
+With the rapid development of pre-training techniques, a number of language models have been pre-trained on large-scale code corpora and perform well in code generation. In this paper, we investigate how to equip pre-trained language models with the ability of code generation for private libraries. In practice, it is common for programmers to write code using private libraries. However, this is a challenge for language models since they have never seen private APIs during training. Motivated by the fact that private libraries usually come with elaborate API documentation, we propose a novel framework with two modules: the APIRetriever finds useful APIs, and then the APICoder generates code using these APIs. For APIRetriever, we present a dense retrieval system and also design a friendly interaction to involve uses. For APICoder, we can directly use off-the-shelf language models, or continually pre-train the base model on a code corpus containing API information. Both modules are trained with data from public libraries and can be generalized to private ones. Furthermore, we craft three benchmarks for private libraries, named TorchDataEval, MonkeyEval, and BeatNumEval. Experimental results demonstrate the impressive performance of our framework.1.
+
+# 1 Introduction
+
+Code generation, automatically generating code snippets based on user descriptions, is one of the long-standing challenges in the software engineering and artificial intelligence communities. With the rapid development of pre-training techniques, a number of language models are pre-trained on large-scale code corpora and able to generate decent code snippets, for example, Codex (Chen et al., 2021), AlphaCode (Li et al., 2022), CODEGEN (Nijkamp et al., 2022), and InCoder (Fried et al., 2022).
+
+# PandasEval
+
+
+Figure 1: A practical example of converting PandasEval (public) to MonkeyEval (private). The changed parts are highlighted in yellow. The performance of Codex 12B and CODEGEN-MONO 350M is shown at the bottom.
+
+Codex 12B
+CODEGEN-Mono 350M
+
+
pass@k
k=1
k=10
k=100
k=1
k=10
k=100
PandasEval
18.88%
43.05%
64.37%
14.24%
30.71%
46.04%
MonkeyEval
1.47%
3.53%
7.31%
0.95%
4.90%
8.89%
+
+They bring fresh energy to code generation and improve coding efficiency (Vaithilingam et al., 2022). Although making remarkable progress, these models may be biased towards generating code that is similar to the training distribution (Chen et al., 2021). What if one wants to generate code beyond the training distribution? A real-world scenario for programmers is to write code using a private library, which is very common in practice. For example, for security and functionality reasons, companies often build private libraries for internal use only. Private libraries provide a number of private APIs that have not been seen by the language models and are also not publicly available on any code hosting platform like GitHub. Therefore, it is worth exploring whether and how pre-trained language models can generate code using private libraries.
+
+It is challenging for existing language models to generate code that uses a private library directly. A practical evidence is shown in Figure 1. We built a pseudo private library named Monkey based on a
+
+public one named Pandas. PandasEval (Zan et al., 2022) is a benchmark consisting of 101 Pandas programming problems. We convert all Pandas-related keywords in PandasEval into the new version and construct MonkeyEval (details in Section 4). As seen in Figure 1, Codex 12B and CODEGEN-MONO 350M show a significant drop in performance on the private MonkeyEval compared to their performance on the public PandasEval. For example, Codex 12B drops from $18.88\%$ to $1.47\%$ on pass@1, showing the inadequacy of the language models in code generation for private libraries.
+
+To meet the challenge, we propose a framework to equip pre-trained language models with the ability to generate code that uses private libraries. As is known, private libraries usually come with elaborate API documentation, which motivates our main idea to mimic the process of a programmer learning to write code using private libraries. This process is also known as API practices in the software engineering field (Snodgrass and Winnie, 2019): first learning the private API documentation and then invoking the APIs to implement the needed functionalities. Analogically, there are two modules in our framework: an APIRetriever first retrieves the useful APIs based on the programming problem and the API documentation, and then an APICoder uses these APIs to generate code. For APIRetriever, we train a dense retriever and also design a friendly interaction to involve users in the loop optionally. For APICoder, we can directly use existing language models of code generation, such as CODEGEN, to invoke the private APIs; furthermore, to better teach a language model how to invoke APIs, we also continually pre-train the base model on a code corpus containing API information from public libraries, and obtain our reinforced model called CODEGENAPI. Since we only have access to the data of public libraries during training, we expect that APIRetriever and APICoder can be generalized to private libraries via learning.
+
+To evaluate the code generation for private libraries, we craft three benchmarks, named TorchDataEval, MonkeyEval, and BeatNumEval. TorchDataEval includes 50 programming problems using the TorchData library. The last two are adapted from PandasEval and NumpyEval (Zan et al., 2022), respectively, each consisting of 101 programming problems. Extensive experiments on the three benchmarks have revealed that our framework effectively improves the performance of pre-trained
+
+
+Figure 2: The overview of our proposed framework.
+
+language models on code generation for private libraries. We also provide a thorough analysis to facilitate progress in this direction.
+
+# 2 Framework
+
+First, we would like to define the task of code generation formally. Given context, the task aims to generate target code. In Figure 1, context and target code are shown in white and grey backgrounds, respectively. Context consists of a comment, which is a natural language description of the programming problem, and a code snippet including import statements, function header, etc. Target code solves the programming problem in context. We denote the context by $\mathbf{x}$ . Code generation model $\mathcal{M}$ outputs target code $\mathbf{y}$ based on $\mathbf{x}$ . For the task of code generation for private library, the context $\mathbf{x}$ contains the instruction for using a private library, such as an import statement. The target code $\mathbf{y}$ contains the calls of the corresponding private library APIs.
+
+As mentioned in Section 1, private libraries are usually equipped with elaborate API documentation. As a technical reference manual outlining how to use the library, API documentation typically includes a quick start guide, tutorials, and an instruction for each API (e.g., API name, signature, description, parameters, and examples). To take advantage of the API documentation, we propose to mimic the generic process of a programmer coding with private APIs, and design a framework to generate code that can invoke private APIs. The framework consists of APIRetriever and APICoder with the overview shown in Figure 2. Given the context, APIRetriever $\mathcal{M}_{\mathrm{R}}$ aims to retrieve possible used APIs from the API documentation; and APICoder $\mathcal{M}_{\mathrm{C}}$ is dedicated to generating code using the retrieved APIs. The process can be formalized as $\mathcal{A} = \mathcal{M}_{\mathrm{R}}(\mathbf{x})$ and $\mathbf{y} = \mathcal{M}_{\mathrm{C}}(\mathcal{A};\mathbf{x})$ , where $\mathcal{A}$ represents the set of information of all proper APIs, and each $\mathbf{a} \in \mathcal{A}$ is the information of an API. In our implementation, we design the API information to include the API name, signature and description. Note that we only use the first sentence of the API
+
+
+Figure 3: The training process of APIRetriever and CODEGENAPI.
+
+description since it is sufficient to summarize.
+
+# 3 Methodology
+
+We have introduced our framework that provides pre-trained models a fantastic way to deal with private libraries. In this section, we present the data collection, followed by the detailed design of our APIRetriever and APICoder.
+
+# 3.1 Data Collection
+
+We collect API information and code files of public libraries due to the fact that we can only access data from public libraries. Then we train the models based on the public data with the expectation that the model can be generalized to private libraries. For API information, we consider the 31 most popular public libraries in Python (e.g., Pandas, NumPy, and scikit-learn) according to the popularity ranking on StackOverFlow2. For each of the libraries, we crawled its API documentation and extracted detailed information about each API, including the API name, signature, description, parameters, usage examples, and so on. Please refer to Appendix A for the details of the 31 public libraries. For code files, we first collected a 330GB corpus from GitHub containing 60.6M python files and then extracted those files that involved one or more API calls from the 31 public libraries. After a bunch of pre-processing strategies, for example, de-duplicating, cleaning, and formatting, we finally obtained 4.54M python files, denoted by $\mathcal{D}$ .
+
+# 3.2 APIRetriever
+
+APIRetriever aims to find the proper APIs based on the description of a programming problem. We
+
+regard it as a dense retrieval task (Qu et al., 2020; Xiong et al., 2020; Santhanam et al., 2021; Formal et al., 2022) and design a simple dual-encoder model (Karpukhin et al., 2020) to retrieve the possible used APIs for each programming problem. To further boost the retrieval performance, a friendly interaction approach is designed to involve users.
+
+Training. To train APIRetriever, we need a large amount of pairwise data, natural language description and API information. We first segment each python file $\mathbf{d} \in \mathcal{D}$ into $K$ code blocks $(d_{1}, d_{2}, \dots, d_{K})$ using the pip-tools, i.e., redbaron, autopep8, and docformatter, where each code block is a relatively well-rounded code fragment, such as a function or a class. For each code block $d_{i}$ , we extract all API names and obtain the corresponding API signatures and descriptions by searching our collected 31 API documentations3. The information of an API includes its name, signature and description, denoted by $\mathbf{a} \in \mathcal{A}$ . Each $\mathbf{a}$ and the natural language description $\mathbf{p}$ extracted from the same code block $d_{i}$ are regarded as a positive training sample. For the negative training sample, we randomly sample an API $\hat{\mathbf{a}}$ that is unrelated to $d_{i}$ from the same library. In total, we obtained 40.3M ( $\mathbf{p}$ , $\mathbf{a}$ , $\hat{\mathbf{a}}_{1}$ , $\hat{\mathbf{a}}_{2}$ , ...) sets as training samples. As in Figure 3, the left part shows the training process of APIRetriever. Our APIRetriever is a dual-encoder model. The two dense encoder, $E_{\mathbf{p}}(.)$ and $E_{\mathbf{a}}(.)$ , map $\mathbf{p}$ and $\mathbf{a}$ to $z$ -dimensional vectors, respectively. Then, we use the dot product of their vectors to calculate the similarity score formalized as $E_{\mathbf{p}}(\mathbf{p})^{\top} E_{\mathbf{a}}(\mathbf{a})$ , where $E_{\mathbf{p}}(.)$ and $E_{\mathbf{a}}(.)$ are implemented by two independent BERT (Devlin et al.,
+
+
+
+Programming Problem
+
+```python
+from torchdata.datapipes_iter import IteratorWrapper
+datapipe = IteratorWrapper([1,2,3])
+# How to augment the datapipe by repeating it six times.
+new_datapipe =
+```
+
+
+
+Chioces: ([choice]: API Name: API Description)
+
+```txt
+[1]: flatmap: Applies a function over each item from ...
+[2]: cycle: Cycles the specified input in perpetuity by default, or for the specified number of times.
+[3]: mux: Yields one element at a time from each of ...
+[4]: header: Yields elements from the source DataPipe ...
+[5]: concat: Concatenates multiple iterable DataPipes ...
+[6]: None of the above.
+[7]: Not sure.
+```
+
+
+Figure 4: Friendly interaction interface for users.
+
+Which APIs would you like to use?
+
+Your Choices: [2]
+
+2019) with base-uncased version. We use BERT instead of CodeBERT (Feng et al., 2020) as most tokens in $\mathbf{p}$ and a are natural language rather than programming language.
+
+Inference. After the training phase, we can use APIRetriever to retrieve private APIs for each programming problem description. In detail, we apply $E_{\mathbf{a}}$ to all the APIs and index them by FAISS (Johnson et al., 2019) offline. Given a new programming problem description $\mathbf{p}$ at run-time, we only need to produce its embedding $v_{\mathbf{p}} = E_{\mathbf{p}}(\mathbf{p})$ and recall the top- $k$ APIs with the embeddings closest to $v_{\mathbf{p}}$ .
+
+Human Interaction with APIRetriever. In order to further increase the accuracy of API retrieval, we provide a friendly interaction interface to allow humans in the loop with APIRetriever, as shown in Figure 4. In the interaction interface, we give the programming problem and the top-5 APIs retrieved by APIRetriever, and let users choose one or more APIs that may be used in the target code. Note that we only provide API names and descriptions to users, as we find in our empirical experiments that providing API signatures has a negative effect on making the correct choice.
+
+# 3.3 APICoder
+
+APIRetriever finds useful APIs for a programming problem, and then APICoder aims to generate code that solves the problem with these APIs. We make use of the most straightforward way for APICoder: prompting API information set $\mathcal{A}$ in front of the context $\mathbf{x}$ . Formally, the APICoder can be written as $\mathbf{y} = \mathcal{M}_{\mathrm{C}}(\operatorname{Concat}(\mathcal{A}, \mathbf{x}))$ , where $\operatorname{Concat}(\mathcal{A}, \mathbf{x})$ means to concatenate the API information set and the context. Examples can be found
+
+in Figure 3. Each API information is in the form of “name(sigature):description”. This is to mimic programmers learning the APIs properly before writing code using them.
+
+Technically speaking, the off-the-shelf code generation models, such as CodeT5, CodeGPT, CodeClippy, CodeParrot, CODEGEN, and Codex, can be applied directly to land APICoder. Although these base models can achieve gains in correctly invoking APIs, they have not learned how to use them as an explicit training task. To better use the APIs, we devised a fantastic idea of continually pre-training the base models using code files with API information inserted.
+
+In practice, we use CODEGEN-MONO 350M (Nijkamp et al., 2022) as our base model, based on which we continually pre-train and obtain our reinforced model called CODEGENAPI. CODEGEN is a GPT-based model skilled at generating code. We choose it because it is by far the most popular and publicly available model. As for the training corpus, we use the collected python files $\mathcal{D}$ mentioned in Section 3.1. Firstly, as done for APIRetriever, each file $\mathbf{d} \in \mathcal{D}$ is split into $K$ code blocks $(d_{1}, d_{2}, \dots, d_{K})$ . For each code block $d_{i}$ , we obtain the set of API information $\mathcal{A}_{i}$ . Then, the $K$ code blocks and sets of API information are cross-merged to output a new file $\hat{\mathbf{d}} = (\mathcal{A}_{1}, d_{1}, \mathcal{A}_{2}, d_{2}, \dots, \mathcal{A}_{K}, d_{K})$ . This mimics the process of API information as a prompt for each block. Then, we continually pre-train the base model on the new code files, teaching the model to write code based on the prompted APIs. In addition, as shown in Figure 3, to make APICoder more robust, we shuffle the APIs in each set $\mathcal{A}_{i}$ and also add noise APIs, since APIRetriever does not know the order of APIs in the target code and often finds incorrect APIs.
+
+During the training phase of CODEGENAPI, unlike the previous settings that force all files to have the same priority, we design a resampling strategy to enable high-quality python files to appear more frequently and vice versa. The strategy considers the star number of the repository, the unit test function rate of the code file, and the API rate of the code file. More details can be found in Appendix B.
+
+# 4 Benchmark Construction
+
+Private libraries are commonly used in practice, but few attempts have been made to evaluate the performance of generating code invoking private APIs.
+
+To fill this gap, we craft three benchmarks, called TorchDataEval, MonkeyEval, and BeatNumEval. Each programming problem consists of context, target code, and the corresponding test cases.
+
+To create a realistic benchmark for evaluating code generation for private library, we use TorchData, a Python library released just recently. We carefully learnt the official API documentation of TorchData and made sure we were proficient in all APIs. Then, we manually created 50 programming problems based on the API usage examples in the documentation. Two volunteers with extensive experience in Python were invited to check the correctness of each problem. We control the difficulty of the programming problems by the number of APIs in the target code. The percentage of programming problems containing 1 API, 2 APIs, and more APIs is set to 6:3:1.
+
+We also construct two benchmarks using pseudo private libraries, named MonkeyEval and BeatNumEval, each containing 101 programming problems. They are modified from PandasEval and NumpyEval, which were proposed for the public libraries Pandas and Numpy (Zan et al., 2022). In detail, we manually modified all library-related keywords in PandasEval and NumpyEval. For example, as shown in Figure 1, pandas is converted to monkey, dataframe is converted to knowledgeframe, and the API name isin is converted to iscontain. For more details on keyword conversion, please refer to Appendix C. To craft the API documentation for Monkey and BeatNum, we manually paraphrased the descriptions of all the new APIs to ensure that the pre-trained language models have never seen them.
+
+# 5 Experiments
+
+In this section, we conduct experiments to illustrate the superiority of our proposed framework.
+
+# 5.1 Experimental Setup
+
+API Information. As shown in the second column (APIs) in Table 1, there are four settings for prompting API information before the context:
+
+- No API: there is nothing to be prompted;
+- Perfect: the information of golden APIs in the target code is prompted;
+
+- Top- $N$ : the information of top $N$ APIs retrieved by APIRetriever is prompted, where $N \in \{1,2,3,5\}$ ;
+- Human: the information of the APIs chosen by users is prompted. In our experiments, we invited three volunteers who are programmers familiar with Python but without any background in our three benchmarks. As in Figure 4, they interacted with the APIRetriever and provided their choices for all programming problems. The final APIs are determined by voting on their choices.
+
+Baselines. Our contributions can be reviewed in terms of both APIRetriever and APICoder. For APIRetriever, all models in the No API setting are our baseline, while we propose the Perfect, Top $N$ , and Human settings. For APICoder, the main baseline is our base model, CODEGEN-MONO 350M (Nijkamp et al., 2022), in the same API information setting. We use CODEGEN for short in the following. In addition, we include advanced pre-trained code generation models that are comparable in parameter size: CodeT5 (Wang et al., 2021), CodeGPT (Lu et al., 2021), CodeClippy and CodeParrot. Codex 12B (Chen et al., 2021) is also used to show the performance of giant models.
+
+Evaluation Metrics. Followed by Chen et al. (2021), we regard $\text{pass} @ k$ as our metric. For each programming problem, we sample $n \geq k$ code snippets, and then count the number of correct ones $c$ , where passing all test cases is considered as correct. If $n - c < k$ , then $\text{pass} @ k$ equals 1; otherwise, equals $1 - \prod_{i=n-c+1}^{n} \left(1 - \frac{k}{i}\right)$ . In our experiments, $k$ is set to one of [1, 10, 100] and $n$ is set to 200.
+
+Implementation Details. We implement our approach based on PyTorch (Paszke et al., 2019) and Huggingface's transformers (Wolf et al., 2019). We use a dense retrieval toolkit7 to train APIRetriever by setting the batch size to 10 per device, the learning rate to 1e-5, the ratio of positive vs. negative samples to 1:8, and the vector dimensions $z$ of $\mathbf{p}$ and $\mathbf{a}$ to 768. The model uses cross-entropy as the loss function and Adam (Kingma and Ba, 2014) as the parameters optimizer. It is trained for 100K steps on a cluster of 8 NVIDIA V100 GPUs with 32GB memory. The training time is about 3 days.
+
+
APICoder
APIs
TorchDataEval
MonkeyEval
BeatNumEval
pass@1
pass@10
pass@100
pass@1
pass@10
pass@100
pass@1
pass@10
pass@100
CodeT5 220M
Top-2
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
CodeGPT 124M
Top-2
0.67
2.78
7.72
0.82
0.99
1.73
0.52
1.88
4.70
CodeClippy 125M
Top-2
0.04
0.39
2.75
0.10
0.76
1.86
0.03
0.33
2.11
CodeParrot 110M
No API
4.04
7.11
13.26
0.54
2.04
7.38
2.67
7.66
18.86
Perfect
4.86+0.82
8.88+1.77
17.25+3.99
2.39+1.85
3.33+1.29
9.99+2.61
5.01+2.34
11.30+3.64
26.36+7.5
Top-1
4.02-0.02
8.35+1.24
18.17+4.91
2.54+2.00
3.43+1.39
11.39+4.01
4.32+1.65
9.39+1.73
19.91+1.05
Top-2
4.64+0.60
8.96+1.85
17.48+4.22
1.52+0.98
2.96+0.92
9.32+1.94
2.77+0.10
8.57+0.91
19.74+0.88
Top-3
4.00-0.04
7.51+0.40
15.13+1.87
1.32+0.78
3.16+1.12
10.59+3.21
1.69-0.98
9.01+1.35
19.90+1.04
Top-5
4.22+0.18
7.51+0.40
15.43+2.17
0.99+0.45
2.78+0.74
11.76+4.38
1.74-0.93
8.11+0.45
17.54+1.32
Human
4.01-0.03
7.60+0.49
14.47+1.21
2.44+1.90
3.62+1.58
9.83+2.45
5.23+2.56
11.78+4.12
22.81+3.95
CODEGEN 350M
No API
6.72
15.71
22.00
0.95
4.90
8.89
5.15
11.96
18.79
Perfect
9.84+3.12
22.62+6.91
34.00+12.00
2.14+1.19
6.41+1.51
11.86+2.97
9.47+4.32
17.05+5.09
28.67+9.88
Top-1
8.72+2.00
19.22+3.51
27.97+5.97
2.22+1.27
7.20+2.30
12.85+3.96
7.52+2.37
15.25+3.29
24.71+5.92
Top-2
7.52+0.80
16.36+0.65
26.00+4.00
2.46+1.51
6.35+1.45
9.89+1.00
6.65+1.50
13.68+1.72
22.74+3.95
Top-3
7.92+1.20
18.65+2.94
28.00+6.00
2.02+1.07
5.26+0.36
8.89+0.00
6.26+1.11
16.12+4.16
24.72+5.93
Top-5
6.08-0.64
17.48+1.77
25.95+3.95
1.58+0.63
5.45+0.55
9.88+0.99
6.34+1.19
15.05+3.09
21.76+2.97
Human
8.08+1.36
19.85+4.14
31.95+9.95
2.14+1.19
6.14+1.24
11.86+2.97
9.47+4.32
17.12+5.06
28.67+9.88
CODEGENAPI 350M
No API
7.19
16.93
23.97
1.19
4.68
7.91
4.44
8.24
13.83
Perfect
20.23+13.04
33.37+16.44
41.97+18.00
4.59+3.40
9.14+4.46
13.85+5.94
9.62+5.18
16.51+8.27
22.75+8.92
Top-1
12.89+5.70
24.26+7.33
31.97+8.00
2.89+1.70
8.28+3.60
12.86+4.94
6.61+2.17
12.62+4.38
17.80+3.97
Top-2
10.41+3.22
23.50+6.57
31.98+8.01
3.41+2.22
8.33+3.65
11.87+8.90
5.90+1.46
11.79+3.55
15.83+2.00
Top-3
10.49+3.30
25.45+8.52
35.98+12.01
3.17+1.98
7.51+2.83
10.88+2.97
5.11+0.67
11.40+3.16
15.82+1.99
Top-5
10.34+3.15
23.04+6.11
27.99+4.02
1.94+0.75
4.75+0.07
7.91+0.00
5.07+0.63
9.64+1.40
13.84+0.01
Human
15.57+8.38
27.76+10.83
33.97+10.00
3.76+2.57
8.32+3.64
12.86+4.95
9.39+4.95
16.40+8.16
23.74+9.91
Codex 12B
No API
7.16
14.46
23.75
1.47
3.53
7.31
6.95
17.54
25.57
Perfect
25.03+17.87
51.26+36.80
56.75+33.00
3.58+2.11
7.48+3.95
12.61+5.30
8.59+1.64
23.75+6.21
36.99+11.42
Top-2
17.98+10.82
32.75+18.29
41.51+17.76
1.92+0.45
5.91+2.38
11.08+3.77
9.54+2.59
21.77+4.23
32.45+6.88
+
+Table 1: Pass@k(%) results on the three benchmarks. The blue background means no API as extra prompt; the yellow background means perfect APIs as extra prompt; the write background means top-1, 2, 3, or 5 APIs retrieved by APIRetriever as extra prompt; and the purple background means the APIs chosen by human from top-5 of APIRetriever as extra prompt. Numbers in red and green indicate the absolute changes over no API setting.
+
+For pre-training CODEGENAPI, we set the code block size to 1,024, the batch size to 4, the learning rate to 5e-4, the gradient accumulation steps to 4, the weight decay to 0.1, and the warm up steps to 1,000. Noise APIs are added at a rate of 0.05. It is trained for 100K steps about 1.6 days on 8 32GB NVIDIA V100 GPUs. In all of our training phases, we use mixed precision FP16 to speed up. When generating code snippets using pre-trained models, we conduct various temperatures ranging from 0.1 to 1.0 with the interval of 0.1. All results are reported with the best values across these hyper-parameters.
+
+# 5.2 Main Results
+
+Table 1 summarizes the performance of our framework and all baselines on TorchDataEval, MonkeyEval, and BeatNumEval. Based on numerous experimental results, we derived plausible observations and valuable insights to answer the following research questions.
+
+"Is API information useful for private library oriented code generation?" As we can see in Table 1, all models without prompting any APIs (the No API setting) achieve relatively poor per
+
+formance on all benchmarks. Especially, Codex 12B, a powerful code generation model with large parameters, can only achieve similar performance to CODEGEN and CODEGENAPI 350M in the No API setting. This indicates that even with gigantic models, the task of code generation with private libraries is extremely challenging. Encouragingly, with prompted API information (the Perfect, Top-N, Human settings), both the off-the-shelf models (e.g., CodeParrot, CODEGEN, and Codex) and our continually pre-trained CodeGenAPI achieve consistent performance gains compared to those in the No API setting. Moreover, the more powerful the model itself in code generation (i.e., Codex 12B > CODEGEN 350M > CodeParrot 110M), the more benefits that API information can bring. For example, on TorchDataEval in the Perfect setting, Codex 12B brings pass@10 an absolute improvement of $36.89\%$ , while CodeParrot 110M only improves $1.77\%$ . This observation also suggests that prompting API information can unleash the potential of gigantic models towards invoking private APIs. All the above results prove the usefulness of API information for code generation for private libraries.
+
+"Is the APIRetriever effective in finding useful
+
+
+Figure 5: The recall rates of retrieved APIs.
+
+API information?" All models in the Top- $N$ setting outperform the same models in the No API setting, suggesting that APIRetriever is able to find useful APIs. For a certain model, we observe that the Top-1/Top-2 settings usually perform better than the Top-3/Top-5 settings due to the fact that the latter introduces more noise APIs to the APICoder. In addition, involving humans (the Human setting) in the selection of APIs can further improve performance, suggesting the effectiveness of the human interaction we designed. Note that the Top- $N$ and Human settings are occasionally superior to the Perfect setting, which is reasonable because the noise APIs exist when training the model.
+
+"Is the APICoder effective in invoking private APIs?" As shown in Table 1, off-the-shelf models like CODEGEN are capable of handling private library invocations. To seek more extraordinary performance, we continually pre-train CODEGEN and obtain a new model CODEGENAPI. We can observe that CODEGENAPI consistently outperforms its base model CODEGEN on TorchDataEval and MonkeyEval, which proves the effectiveness of CODEGENAPI. However, on BeatNumEval, CODEGENAPI is inferior to CODEGEN. After careful troubleshooting, we reveal that the process of continual pre-training aims to essentially learn how to invoke the correct APIs with maximum likelihood, while the key obstacle to using BeatNum modified from Numpy lies in the numerical calculation like 'a[:,None]+b*2', instead of invoking the correct APIs. Therefore, CODEGENAPI fails to yield benefits for BeatNumEval. Overall, APICoder has the capability to invoke private APIs.
+
+# 5.3 Closer Analysis
+
+We have demonstrated the effectiveness of our framework. In this subsection, we provide several closer analyses to inspire future work in this
+
+
+Figure 6: Accuracy of retrieved APIs.
+
+
+Figure 7: Accuracy of CODEGENAPI and CODEGEN with respect to the number of APIs. The problem is solved if one of 200 samples passes all test cases.
+
+direction.
+
+Quality of Retrieved APIs. Retrieving the correct APIs as prompts can enhance the code generation performance for private libraries, so we would like to evaluate the effectiveness of APIRetriever. Figure 5 shows the recall rate of APIRetriever on five benchmarks. We can see that the recall rates of top-5 are already high, demonstrating that it is reasonable to provide 5 API candidates for users to choose from. Furthermore, as shown in Figure 6, we analyze the accuracy of APIs chosen by users. We observe that it dramatically exceeds the accuracy of top 1, 2 or 3 APIs retrieved by APIRetriever. This suggests that it is feasible to involve humans in the retrieval of APIs.
+
+Different Difficulty. We would like to explore the performance of CODEGENAPI on varying difficulty problems. So we calculate its accuracy across various numbers of APIs in target code y. Each benchmark is divided into 3 parts, according to the number of APIs. Figure 7 shows that CODEGENAPI outperforms CODEGEN by a large margin on the problems containing only one API. The trend still holds as the number of APIs increases. It demonstrates CODEGENAPI can boost the performance of generating code snippets using private library on varying difficulty.
+
+
APICoder
TorchDataEval
MonkeyEval
BeatNumEval
pass@1
pass@10
pass@100
pass@1
pass@10
pass@100
pass@1
pass@10
pass@100
CODEGENAPI
10.41
23.50
31.98
3.41
8.33
11.87
5.90
11.79
15.83
-w/ noise rate 0%
9.41
22.88
31.08
2.69
8.03
11.18
5.77
11.01
14.52
-w/ noise rate 10%
9.19
22.87
30.98
3.04
7.67
11.10
4.99
10.80
15.18
-w/ noise rate 20%
8.92
23.04
30.57
2.00
7.39
10.64
4.48
10.97
13.41
-w/o resampling
8.65
21.00
29.71
2.47
7.96
10.13
5.21
8.68
14.75
+
+Noise Rate. A well-chosen noise rate can improve the robustness of CODEGENAPI against a variety of APIs. If we set the noise rate too large, it may change the original distribution of the code corpus, while too small will lose the capability to deal with noise APIs. The default noise rate is $5\%$ , and we also try $0\%$ , $10\%$ , and $20\%$ . As shown in Table 2, both too large and too small noise rates can degrade the performance.
+
+Resampling Strategy. Making high-quality python files high-priority, and vice versa, is in line with our intuition. To demonstrate it, we remove the resampling strategy as mentioned in Section 3.3. As shown in Table 2, we observe a steady decline in performance on the three benchmarks. Such an observation demonstrates the effectiveness of the sampling strategy.
+
+CODEGENAPI for Public Library. Technically speaking, CODEGENAPI also can be employed for generating code for public libraries. So, we do experiments on PandasEval and NumpyEval and show the results in Table 3. We find that the performance improvement of CODEGENAPI over the base model on public libraries is not as significant as on private libraries. One major reason is that the models have seen the public libraries during pre-training, so prompting API information yields limited benefit. We can see CODEGENAPI excels over CODEGEN when prompting perfect APIs. But when prompting top-2 APIs, the advantages of CODEGENAPI are not exhibited. This means that CODEGENAPI can also work on third-party public libraries, but it depends heavily on the performance of APIRetriever.
+
+# 6 Related Work
+
+# 6.1 Code Generation
+
+Thanks to the recent development of pre-training techniques, a lot of pre-trained language models have been proposed for code-related tasks.
+
+Table 2: Ablation studies for CODEGENAPI in the Top-2 setting (top 2 APIs provided by APIRetriever are prompted). The default setting of CODEGENAPI is to use the resampling strategy and a noise rate of $5\%$ .
+
+
APICoder
APIs
PandasEval
pass@1
pass@10
pass@100
CODEGEN
No API
14.24
30.71
46.04
Perfect
11.21
33.59
48.47
Top-2
9.54
29.02
40.56
CODEGENAPI
No API
13.58
34.95
46.51
Perfect
19.96
42.36
53.43
Top-2
11.25
28.61
39.48
NumpyEval
CODEGEN
No API
19.31
40.89
60.58
Perfect
21.41
41.08
56.38
Top-2
18.30
35.12
48.46
CODEGENAPI
No API
16.55
29.48
42.52
Perfect
24.83
41.47
54.41
Top-2
12.67
27.32
35.62
+
+Table 3: Results of CODEGEN and CODEGENAPI on PandasEval and NumpyEval.
+
+For example, CuBERT (Kanade et al., 2020), CodeBERT (Feng et al., 2020), GraphCodeBERT (Guo et al., 2020), CodeT5 (Wang et al., 2021), CodeGPT (Lu et al., 2021), PLBART (Ahmad et al., 2021), PyCodeGPT (Zan et al., 2022), CODEGEN (Nijkamp et al., 2022), Codex (Chen et al., 2021), AlphaCode (Li et al., 2022), and InCoder (Fried et al., 2022). Almost all of them focus on standalone code, while JigSaw (Jain et al., 2021) and CERT (Zan et al., 2022) are presented for generating code using public libraries. In this paper, we aim to generate code invoking private APIs, which is a common scenario in practice. It is more challenging because pre-trained language models have never seen any information about private libraries. As for benchmarks, HumanEval (Chen et al., 2021), APPs (Hendrycks et al., 2021), P3 (Schuster et al., 2021), MBPP (Austin et al., 2021), BIG-bench (Srivastava et al., 2022), and CodeContests (Li et al., 2022) were proposed to evaluate the performance of generating standalone code. GSM8K-Python (Cobbe et al., 2021) and MathQA-Python (Austin et al., 2021) were engaged in evaluating the capability of solving mathematical problems. PandasEval and NumpyEval (Zan et al., 2022) were released to evaluate the code generation for public library. We propose
+
+three benchmarks, called TorchDataEval, MonkeyEval, and BeatNumEval, aiming to evaluate the performance of code generation for private library.
+
+# 6.2 Retrieval-Based Generation
+
+In the natural language field, retrieval-based generation is a hot topic. A lot of works (Izacard and Grave, 2020; Karpukhin et al., 2020; Qu et al., 2020; Xiong et al., 2020; Santhanam et al., 2021; Formal et al., 2022) have emerged under this topic. Therefore, we refer to the above methods and design our APIRetriever for private API retrieval. In the programming language field, there are also several attempts to use retrieval techniques, such as DEEPAPI (Gu et al., 2016), REDCODER (Parvez et al., 2021), ReACC (Lu et al., 2022), and DocCoder (Zhou et al., 2022). Our work is fundamentally different from them. They all aim to retrieve public code snippets or other resources on GitHub/StackOverFlow based on the user query, while our goal is to retrieve APIs from the API documentation of private library based on code comments. Besides, we design retrieval because we focus on private APIs, which have not been seen by the pre-trained generative language models.
+
+# 7 Conclusion
+
+In this paper, we propose a novel framework for code generation for private library. There are two modules: for a specific programming problem, APIRetriever first finds out the useful private APIs from API documentation, and then APICoder leverages these APIs to generate the code. We craft three benchmarks, including TorchDataEval, MonkeyEval, and BeatNumEval, for better evaluating private library oriented code generation. The experimental results and thorough analysis demonstrate the reasonableness and effectiveness of our framework. In future work, we would like to explore how to make better use of API documentation for code generation and improve the approach for real use when programming with private libraries.
+
+# Limitations
+
+While our proposed approach exhibits many advantages, it also has a few limitations. (1) As stated in Section 5.2, our approach that prompts APIs for programming problem relies heavily upon the code generation capacity of the language model itself. The more powerful the model itself, the more benefits the prompting APIs bring. Likewise,
+
+we also find that if a model itself shows very poor performance, prompting APIs will not bring any benefit to it or even bring negative effects. (2) As the first navigator to explore code generation with private library, we have built three private libraries, but they all include a relatively small number of APIs (<200). With these APIs, our APIRetriever can exhibit decent performance. But we surmise that it may become more challenging for APIRetriever as the number of APIs increases. (3) It is extremely challenging to find a real private library and craft a benchmark like TorchDataEval. To evaluate our idea quickly and cost-effectively, besides TorchDataEval, we also crafted two pseudo private libraries that are modified from the existing public ones as mentioned in Section 4. Although we have done our best to preserve the two pseudo private libraries in line with the real private library, it may still pose some threats to the fair evaluation of code generation for private library. (4) We can see from Table 1 that most models with the Top-N setting fall behind the same model with the Perfect setting. Such observation demonstrates that APIRetriever we designed has a big room for improvement. (5) Our experiments show that our framework can enhance the quality of private library oriented code generation on Python. Limitations may exist when we generalize it to other programming languages such as Java, C, and $\mathrm{C + + }$ since the characteristics of libraries for different programming languages are slightly different.
+
+# References
+
+Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Unified pre-training for program understanding and generation. In North American Chapter of the Association for Computational Linguistics, pages 2655-2668.
+Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732.
+Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.
+Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher
+
+Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *NAACL*, pages 4171–4186.
+Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. 2020. CodeBERT: A pre-trained model for programming and natural languages. In Findings of the Conference on Empirical Methods in Natural Language Processing, pages 1536-1547.
+Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stephane Clinchant. 2022. From distillation to hard negative sampling: Making sparse neural ir models more effective. arXiv preprint arXiv:2205.04733.
+Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. 2022. InCoder: A generative model for code infilling and synthesis. arXiv preprint arXiv:2204.05999.
+Xiaodong Gu, Hongyu Zhang, Dongmei Zhang, and Sunghun Kim. 2016. Deep api learning. In ACM SIGSOFT International Symposium on Foundations of Software Engineering, pages 631-642.
+Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, et al. 2020. GraphCodeBERT: Pre-training code representations with data flow. In International Conference on Learning Representations.
+Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, et al. 2021. Measuring coding challenge competence with apps. In Neural Information Processing Systems Datasets and Benchmarks Track.
+Gautier Izacard and Edouard Grave. 2020. Leveraging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282.
+Naman Jain, Skanda Vaidyanath, Arun Iyer, Nagarajan Natarajan, Suresh Parthasarathy, Sriram Rajamani, and Rahul Sharma. 2021. Jigsaw: Large language models meet program synthesis. arXiv preprint arXiv:2112.02969.
+Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with gpus. IEEE Transactions on Big Data, 7(3):535-547.
+Aditya Kanade, Petros Maniatis, Gogul Balakrishnan, and Kensen Shi. 2020. Learning and evaluating contextual embedding of source code. In International Conference on Machine Learning, pages 5110-5121.
+
+Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906.
+Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
+Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Remi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. 2022. Competition-level code generation with alphabet. arXiv preprint arXiv:2203.07814.
+Shuai Lu, Nan Duan, Hojae Han, Daya Guo, Seung-won Hwang, and Alexey Svyatkovskiy. 2022. ReACC: A retrieval-augmented code completion framework. In Association for Computational Linguistics, pages 6227-6240.
+Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, et al. 2021. CodeXGLUE: A machine learning benchmark dataset for code understanding and generation arXiv preprint arXiv:2102.04664.
+Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, et al. 2022. A conversational paradigm for program synthesis. arXiv preprint arXiv:2203.13474.
+Md Rizwan Parvez, Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Retrieval augmented code generation and summarization. In Findings of EMNLP, pages 2719-2734.
+Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, et al. 2019. PyTorch: An imperative style, high-performance deep learning library. Neural Information Processing Systems.
+Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2020. RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2010.08191.
+Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. 2021. Colbertv2: Effective and efficient retrieval via lightweight late interaction. arXiv preprint arXiv:2112.01488.
+Tal Schuster, Ashwin Kalyan, Oleksandr Polozov, and Adam Tauman Kalai. 2021. Programming puzzles arXiv preprint arXiv:2106.05784.
+Eric Snodgrass and Soon Winnie. 2019. Api practices and paradigms: Exploring the protocological parameters of apis as key facilitators of sociotechnical forms of exchange. First Monday, 24(2).
+
+Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615.
+
+Priyan Vaithilingam, Tianyi Zhang, and Elena L Glassman. 2022. Expectation vs. experience: Evaluating the usability of code generation tools powered by large language models. In *CHI Conference on Human Factors in Computing Systems Extended Abstracts*, pages 1-7.
+
+Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH Hoi. 2021. CodeT5: Identifier-aware unified pretrained encoder-decoder models for code understanding and generation. In Empirical Methods in Natural Language Processing, pages 8696-8708.
+
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
+
+Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. arXiv preprint arXiv:2007.00808.
+
+Daoguang Zan, Bei Chen, Dejian Yang, Zeqi Lin, Minsu Kim, Bei Guan, Yongji Wang, Weizhu Chen, and Jian-Guang Lou. 2022. CERT: Continual pretraining on sketches for library-oriented code generation. In The 2022 International Joint Conference on Artificial Intelligence, pages 3653-3660.
+
+Shuyan Zhou, Uri Alon, Frank F Xu, Zhengbao Jiang, and Graham Neubig. 2022. DocCoder: Generating code by retrieving and reading docs. arXiv preprint arXiv:2207.05987.
+
+# A Collection of API Documentation
+
+We aim to use data from public libraries for training and generalize the models to private libraries. Thus, we crawled the API documentation of the 31 most popular public libraries in Python. Table 4 summarizes the number of APIs we extracted for each library.
+
+# B Resampling Strategy
+
+The resampling strategy allows high-quality python files to be more frequently sampled, and vice versa. So the resampling weight $(w)$ of each python file is defined in the following aspects: the star number of the corresponding repository $(N_{\mathrm{star}})$ , the unit test function rate $(R_{\mathrm{ut}})$ that is the number of unit test functions divided by the number of all functions, the number of API name $(N_{\mathrm{api}})$ in the file, and the
+
+number of APIs $(M_{\mathrm{api}})$ considering one API name may match multiple APIs. Formally, the strategy can be formulated as follows:
+
+$$
+w _ {\text {s t a r}} = 1. 0 + \log \left(N _ {\text {s t a r}} + 1\right). \operatorname {c l i p} \left(_ {0} ^ {5}\right) \times 0. 2,
+$$
+
+$$
+w _ {\mathrm {u t}} = (0. 5 + (1 - R _ {\mathrm {u t}})). \operatorname {c l i p} \left( \begin{array}{l} 1 \\ 0 \end{array} \right),
+$$
+
+$$
+w _ {\mathrm {a p i}} = 5. 0 - \log \left(\frac {M _ {\mathrm {a p i}}}{N _ {\mathrm {a p i}}}\right). \operatorname {c l i p} \left(_ {0} ^ {5}\right) \times 0. 2, \tag {1}
+$$
+
+$$
+w = w _ {\mathrm {s t a r}} \times w _ {\mathrm {u t}} \times w _ {\mathrm {a p i}},
+$$
+
+where $\operatorname{clip}(\frac{y}{x})$ limits the value to $[x, y]$ .
+
+# C Keywords Conversion from Public Library to Private Library.
+
+As mentioned in Section 4, we convert the public library benchmarks (PandasEval and NumpyEval) to the private library benchmarks (MonkeyEval and BeatNumEval) by manually modifying all public library-related keywords. In Table 5, we list all the keywords before and after the conversion.
+
+
Pandas
NumPy
sklearn
PyTorch
TensorFlow
Django
selenium
Matplotlib
Flask
SciPy
Seaborn
7,094
12,085
53,166
124,902
32,116
24,375
4,842
439,913
31,867
153,359
161,477
NLTK
BeatifulSoup
pygame
PIL
jieba
Gensim
spaCy
transformers
fairseq
SQLAlchemy
Scrapy
206,816
22,519
70,396
127,212
26,620
37,331
239,945
652,913
158,721
54,765
3,537
AllenNLP
datasets
tokenizerizers
MXNet
imageio
pytest
MetPy
ansible
requests
276,088
136,843
195
142,070
175,878
1,047
27,429
40,839
39,333
+
+Table 4: The number of APIs in the 31 public libraries we crawled.
+
+
PandasEval-MonkeyEval
isnull
mean
pandas
dataframe
df
isin
pd
ifnull
average
monkey
knowledgeframe
kf
incontain
mk
tolist
apply
to_numeric
dropna
append
tail
copy
convert_list
employ
to_num
signa
adding
lastTAIL
clone
innull
astype
select_dtypes
iterrows
min
max
map
isnone
totype
choose_dtypes
traversal
get_min
get_max
mapping
last
shift
merge
value_counts
Rename_axis
reset_index
sample
final_item
shifting
unitioner
counts_value_num
renaming_axis
resetting_index
sample_by_num
concat
to_dict
cumsum
sort_index
to_string
drop Duplicate
duplicated
concating
convert_dict
cumulative_sum
sorting_index
convert_string
remove Duplicate
duplicated_values
round
format
to_pydatetime
div
ceil
assign
intersection
value_round
formatting
convert_pydatetime
division
ceiling
allocate
interst
drop
Series
ravel
any
fillna
all
Pandas
sip
Collections
flat_underlying
whatever
fillnone
total_all
Monkey
reindex
get
std
Rename
sum
unique
to Datetime
reindexing
getting
standard
renaming
total_sum
distinctive
convert_datetime
applymap
sort_values
DataFrame
groupby
nlargest
replace
len
conduct_map
sort_the_values
KnowledgeFrame
grouper
nbiggest
replacing
length
head
series
isna
header_num
collections
ifna
NumpyEval-BeatNumEval
to_numpy
ndarray
array
transpose
numpy
Numpy
np
to_beatnum
ndnumset
numset
switching_places
beatnum
Beatnum
bn
column_stack
concatenate
slice
sum
imag
abs
real
stack_col
connect
piece
total_count
imaginary
absolute
reality
fill(diagonal)
all
fromstring
in1d
mean
where
std
pad(diagonal)
total
come_from_str
intersection1dim
average
filter_condition
standard_op
add
histogram
fromarrays
reshape
filled
stack
cumsum
add_concat
hist operation
come_from_arrays
change_shape_to
masked_fill
pile operation
cumulative_sum
astype
arange
setxorld
compressed
argmin
argmax
convert_type
arr_range
setting_exclusive_or_one_dim
remove_masked_data
get_argmin_value
get_argmax
vstack
squeeze
hstack
asarray
repeat
vectorize
split
vertical_stack
sqz
horizontal_stack
asnumset
Duplicate
vectorisation
sep_split
diff
unique
unravel_index
flatten
norm
delete
ones
difference
uniq
convert_index_or.arr
convert Into one dim
normlization
remove operation
create.ones
append
any
logical_and
bincount
isnan
argpartition
ravel
apd
any_condition
logic_and_element_wise
binoccurrence
ifnan
perform_partition
asview
array_split
inv
insert
searchsorted
min
max
full
split_array
inverse
stick
find Sorted
get_min
get_max
full_value FUNC
+
+Table 5: The keywords of converting PandasEval to MonkeyEval, and NumpyEval to BeatNumEval. The grey background means the original keywords, and the white background means the converted ones.
\ No newline at end of file
diff --git a/whenlanguagemodelmeetsprivatelibrary/images.zip b/whenlanguagemodelmeetsprivatelibrary/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d6e72d01fb0e6586e384335d09712f395ee15f8f
--- /dev/null
+++ b/whenlanguagemodelmeetsprivatelibrary/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7c79e22099be670ea71af9db2cce22d1e24c9e2fbafea302d4f4fec89ba10eda
+size 858390
diff --git a/whenlanguagemodelmeetsprivatelibrary/layout.json b/whenlanguagemodelmeetsprivatelibrary/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..69c565ee9857e056e7cd5bd29fd0c855fec307bb
--- /dev/null
+++ b/whenlanguagemodelmeetsprivatelibrary/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec337c95226099efe06e5d8d730307d7c118226d78329e59988ba618a16fee50
+size 385229
diff --git a/wishicanfeelwhatyoufeelaneuralapproachforempatheticresponsegeneration/a36e41cd-e9d8-40c3-89df-bc73a203dc2d_content_list.json b/wishicanfeelwhatyoufeelaneuralapproachforempatheticresponsegeneration/a36e41cd-e9d8-40c3-89df-bc73a203dc2d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..5fe48775579fe9d117ddd8e4850fefb1233e8d76
--- /dev/null
+++ b/wishicanfeelwhatyoufeelaneuralapproachforempatheticresponsegeneration/a36e41cd-e9d8-40c3-89df-bc73a203dc2d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:19b7d4b9624fa5b81469cfa7f61cdf2a0d873cfeebd631e56f43dd725a7d5546
+size 81955
diff --git a/wishicanfeelwhatyoufeelaneuralapproachforempatheticresponsegeneration/a36e41cd-e9d8-40c3-89df-bc73a203dc2d_model.json b/wishicanfeelwhatyoufeelaneuralapproachforempatheticresponsegeneration/a36e41cd-e9d8-40c3-89df-bc73a203dc2d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7affe3bb6e93ca1c5c4d4b6a54db2966d98d192f
--- /dev/null
+++ b/wishicanfeelwhatyoufeelaneuralapproachforempatheticresponsegeneration/a36e41cd-e9d8-40c3-89df-bc73a203dc2d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a562d063d5c081dd9a1af30bbd86fd008ea6c77cb0b9d1e49ff68c4faad99718
+size 99487
diff --git a/wishicanfeelwhatyoufeelaneuralapproachforempatheticresponsegeneration/a36e41cd-e9d8-40c3-89df-bc73a203dc2d_origin.pdf b/wishicanfeelwhatyoufeelaneuralapproachforempatheticresponsegeneration/a36e41cd-e9d8-40c3-89df-bc73a203dc2d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2fa1089b6619a16cadec2985977f579075d9c3fa
--- /dev/null
+++ b/wishicanfeelwhatyoufeelaneuralapproachforempatheticresponsegeneration/a36e41cd-e9d8-40c3-89df-bc73a203dc2d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b2c19ef697396d58366801ae5a335c7d4fc0505fa1266388b4931a2659c31b1
+size 6363979
diff --git a/wishicanfeelwhatyoufeelaneuralapproachforempatheticresponsegeneration/full.md b/wishicanfeelwhatyoufeelaneuralapproachforempatheticresponsegeneration/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..fe11d53d35ca596211650868cae03d75dc9dff73
--- /dev/null
+++ b/wishicanfeelwhatyoufeelaneuralapproachforempatheticresponsegeneration/full.md
@@ -0,0 +1,412 @@
+# Wish I Can Feel What You Feel: A Neural Approach for Empathetic Response Generation
+
+Yangbin Chen and Chunfeng Liang
+
+Suzhou Fubian Medical Technology Co., Ltd., China
+
+{dongyiwu92, cfliang666}@gmail.com
+
+# Abstract
+
+Expressing empathy is important in everyday conversations, and exploring how empathy arises is crucial in automatic response generation. Most previous approaches consider only a single factor that affects empathy. However, in practice, empathy generation and expression is a very complex and dynamic psychological process. A listener needs to find out events which cause a speaker's emotions (emotion cause extraction), project the events into some experience (knowledge extension), and express empathy in the most appropriate way (communication mechanism). To this end, we propose a novel approach, which integrates the three components - emotion cause, knowledge graph, and communication mechanism for empathetic response generation. Experimental results on the benchmark dataset demonstrate the effectiveness of our method and show that incorporating the key components generates more informative and empathetic responses.
+
+# 1 Introduction
+
+According to Hoffman (2000), empathy is an affective response more appropriate to another's situation than one's own, which is the spark of human concern for others and the glue that makes social life possible. It is a complex human trait and dynamic psychological process related to emotion and cognition, where emotional empathy refers to vicarious sharing of emotion and cognitive empathy refers to mental perspective taking (Smith, 2006). Since 1990s, the study of empathy has been widely applied to mental health support (Bohart and Greenberg, 1997; Fitzpatrick et al., 2017), quality of care improvement (Mercer and Reynolds, 2002), and intelligent virtual assistants (Shin et al., 2019).
+
+Expressing empathy becomes more important in today's dialogue systems. However, there are challenges in developing an empathetic model, such as preparing a proper training corpus, learning to get a comprehensive understanding of the dialogue
+
+
+Figure 1: An example of empathetic response from EMPATHETICDIALOGUES dataset. In the teal box are emotion and causes detected from the dialogue context. In the orange box is extended knowledge via COMET. The colored texts in the final reply show two types of communication mechanisms.
+
+context, and designing an appropriate empathy expression strategy.
+
+Recently, there has been some work to address these issues. A standard benchmark containing large-scale empathetic conversations was proposed, laying the cornerstone of empathetic dialogue research (Rashkin et al., 2018). Some researchers try to gain a deeper understanding of contextual information. For example, Gao et al. (2021) applied an emotion cause extractor to conversations and used the extracted causes to guide the response generation process. Li et al. (2022) incorporated external commonsense information to enrich the context. During the language generation process, some researchers focus on controlling emotions of generated responses using emotional blending to imitate the speakers' emotions (Majumder et al., 2020; Lin et al., 2019).
+
+All the above work considers only a single aspect that affects empathy. However, in practice, empathy generation and expression is a very complex and dynamic process. According to research work in the field of psychological science, we believe
+
+that three different but related factors matter in empathy: emotion (the automatic proclivity to share emotions with others), cognition (the intersubjectivity to interpret others' intentions and feelings while keeping separate self and other perspectives), and behavioral outcome (the actions to express empathy) (Decety and Meyer, 2008; Heyes, 2018). Consequently, we divide the entire empathy process into five functional modules: emotion perception, cause extraction, experience projection, dialogue reaction, and verbal expression. Specifically, emotion perception aims to sense emotions from others. Cause extraction is to determine detailed events corresponding to the emotions. Experience projection enriches the contextual information through knowledge extension from the emotion causes. Dialogue reaction decides the response strategies by learning from the contexts. Verbal expression is the final step in a dialogue system to generate responses in terms of languages.
+
+Towards this end, we propose a novel approach IMAGINE, a.k.a. Integrating eMotion cAuses, knowledgeGe, and communlcatioN mEchanisms for empathetic dialogue generation. Using these components improves cognitive understanding of contexts and enhances empathy expression in the generated responses. Our framework involves three stages - emotion cause extraction, knowledge-enriched communication, and response generation. We evaluate our approach on the EMPATHETICDI-ALOGUES dataset. Extensive experimental results demonstrate the effectiveness of IMAGINE in automatic and human evaluations, showing that our approach generates more informative and empathetic responses (An example is shown in Figure 1).
+
+Our contributions can be summarized as follows:
+
+1) We propose a new approach IMAGINE which integrates emotion causes, knowledge, and communication mechanisms into a dialogue system, demonstrating that they are significant factors in the generation and expression of empathy.
+2) We divide relationships within a knowledge graph into several categories, including Affect, Behaviour, Physical, and Events. Meanwhile, we design a three-stage process of emotion cause extraction, knowledge-enriched communication, and response generation based on the dialogue history.
+3) Experimental results show that our proposed approach significantly outperforms other comparison methods, with more informative and empa
+
+thetic responses.
+
+# 2 Related Work
+
+# 2.1 Empathetic dialogue generation
+
+Empathetic response generation is a sub-task of emotion-aware response generation. Rashkin et al. (2018) first proposed a standard benchmark containing large-scale empathetic conversations. Some researchers focus on understanding the dialogue context. Li et al. (2021) and Gao et al. (2021) identified the emotion causes of the conversation to understand the context related to emotions better. Sabour et al. (2021) and Li et al. (2022) leveraged external knowledge, including commonsense knowledge and emotional lexical knowledge, to explicitly understand and express emotions. Some researchers focus on the language generation process, for example, controlling emotions of generated responses through mixture model (Lin et al., 2019), adversarial framework (Li et al., 2019), and mimicking the emotions of the speaker (Majumder et al., 2020). Sharma et al. (2020) and Zheng et al. (2021) explore the expressive factors that elicit empathy. Moreover, as big models are popular today, Lin et al. (2020) adapted GPT2 (Radford et al., 2019) to produce empathetic responses via transfer learning, active learning, and negative training.
+
+# 2.2 What affects empathy?
+
+Emotion Cause The emotion cause (also called antecedents, triggers, or stimuli) (Ellsworth and Scherer, 2003) is a stimulus for human emotions. Recognizing the emotion cause helps understand human emotions better to generate more empathetic responses. The cause could also be a speaker's counterpart reacting towards an event cared for by the speaker(inter-personal emotional influence). For example, understanding the sentence, "I like summer as it is a great time to surf," is not only to detect the positive emotion, HAPPY, but also to find its cause - "it is a great time to surf." The emotion cause recognition method (Poria et al., 2021) is used in our work.
+
+External Knowledge A major part of cognitive empathy is understanding the situations and feelings of others. Conversations are limited in time and content. Therefore, using our experience (e.g., external knowledge) is important to connect what is explicitly mentioned and what is associated with it. In this work, we use the ATOMIC-2020 dataset (Hwang et al., 2020) as our commonsense
+
+
+Figure 2: An overall framework of IMAGINE.
+
+
+
+knowledge base, which is a collection of commonsense reasoning inferences about everyday if-then contexts. Detailed information about ATOMIC is covered in Appendix A.
+
+Communication Mechanism (CM) For empathy generation, both conveying cognitive understanding (Truax and Carkhuff, 1967) and expressing stimulated emotions (Davis et al., 1980) are essential. Sharma et al. (2020) presented a computational approach to understanding empathy expressed in textual, asynchronous conversations and addressing both emotional and cognitive aspects of empathy. They developed components of an empathetic expression, consisting of three communication mechanisms - Emotional Reaction (expressing emotions such as warmth, compassion, and concern), Interpretation (conveying an understanding of feelings and experiences), and Exploration (improving understanding of the seeker by exploring the feelings and experiences).
+
+# 2.3 Task Formulation
+
+We formulate the task of empathetic response generation as follows. Given dialogue transcripts $\mathbf{S} = \{\mathbf{s}_0, \mathbf{s}_1, \dots, \mathbf{s}_k\}$ with $k$ utterances, we firstly detect the emotion and extract emotion causes $\mathbf{C} = \{\mathbf{c}_0, \mathbf{c}_1, \dots, \mathbf{c}_u\}$ which are a subset of $\mathbf{S}$ . Each utterance $\mathbf{c}_i = \{\mathbf{c}_{i,1}, \mathbf{c}_{i,2}, \dots, \mathbf{c}_{i,l_i}\}$ is a sequence of tokens, where $l_i$ denotes the length. Then, our goal is to generate an empathetic response $\mathbf{Y} = \{\mathbf{y}_1, \mathbf{y}_2, \dots, \mathbf{y}_n\}$ given the sequence $\mathbf{C}$ , with the assistance of external knowledge and communication mechanisms.
+
+# 3 Approach
+
+Our proposed model, IMAGINE, is built upon the standard Transformer (Vaswani et al., 2017) and its overview is illustrated in Figure 2. It has three stages consisting of five functional modules: emotion cause extraction (emotion perception, cause extraction), knowledge-enriched communication (dialogue reaction, experience projection), and response generation (verbal expression). Emotion perception predicts emotions of the input. Cause extraction extracts causes related to the emotions from the input. Experience projection acquires knowledge based on the causes mentioned above. Dialogue reaction decides the response strategies by learning from the contexts. Verbal expression integrates the information obtained from the above four modules and generates appropriate responses.
+
+# 3.1 Emotion Cause Extraction
+
+Given a dialogue context consisting of $k$ utterances with the context emotion, the goal of emotion cause extraction is to identify which utterances in the dialogue context contain the emotion cause. We leverage an existing model which is trained on an open-domain emotional dialogue dataset named RECCON, for identifying emotion causes at utterance level in conversations (Poria et al., 2021). Gao et al. (2021) has verified the model's validity, and we follow the method in the first stage of our work.
+
+# 3.1.1 Emotion Perception
+
+It is a classification problem aiming at predicting the emotion $\varepsilon$ within the dialogue context. Given the dialogue context $\mathbf{S} = \{\mathbf{s}_0,\mathbf{s}_1,\dots,\mathbf{s}_k\}$ as the input, the tokens are then fed into a transformer-based encoder to obtain a sequence of contextualized rep
+
+resentations $\mathbf{H}_S$ . Hence, we pass $\mathbf{H}_S$ through a linear layer followed by a softmax operation to produce the emotion category distribution:
+
+$$
+\hat {\mathbf {e}} _ {e m o} = \mathbf {W} _ {e} \mathbf {H} _ {S} [ 0 ] + \mathbf {b} _ {e}, \qquad (1)
+$$
+
+$$
+\hat {\mathbf {P}} (\varepsilon | \mathbf {S}) = \operatorname {s o f t m a x} \left(\hat {\mathbf {e}} _ {e m o}\right), \tag {2}
+$$
+
+where $\mathbf{W}_e$ and $\mathbf{b}_e$ are trainable parameters. During training, we employ negative log-likelihood as the emotion perception loss:
+
+$$
+\mathbf {L} _ {e m o} = - \log (\hat {\mathbf {P}} (\boldsymbol {\varepsilon} = \mathbf {e} ^ {*} | \mathbf {S})), \tag {3}
+$$
+
+where $\mathbf{e}^*$ denotes the emotion label, and $\varepsilon$ denotes the predicted output. Emotional vectors $\hat{\mathbf{e}}_{\mathrm{emo}}$ will be fed into the decoder as a crucial emotional signal to guide the empathetic response generation.
+
+# 3.1.2 Cause Extraction
+
+Given the dialogue context $\mathbf{S}$ and its emotion $\varepsilon$ , we extract emotion causes $\mathbf{C} = \{\mathbf{c}_0, \mathbf{c}_1, \dots, \mathbf{c}_u\}$ according to the approach in Poria et al. (2021). The causes $\mathbf{C}$ are a subset of $\mathbf{S}$ and will be used as the input of the next two stages. Following previous work (Lin et al., 2019; Majumder et al., 2020; Sabour et al., 2021), we concatenate the utterances indicating emotion causes and pretend a special token [CLS] to obtain the cause input $\mathbf{C} = [CLS] + \mathbf{c}_0 + \mathbf{c}_1 + \dots + \mathbf{c}_u$ . Each utterance $\mathbf{c}_i$ contains a sequence of tokens: $\mathbf{c}_i = \{\mathbf{c}_{i,1}, \mathbf{c}_{i,2}, \dots, \mathbf{c}_{i,l_i}\}$ , where $l_i$ is the length of $\mathbf{c}_i$ .
+
+Each token is represented from three aspects: its semantic meaning, its position in the sequence, and who said it. Suppose that the token ID and the position ID of $\mathbf{c}_{i,j}$ are $w_{\mathbf{c}_{i,j}} \in [0,|\mathbf{V}|)$ ( $\mathbf{V}$ is the vocabulary) and $p_{\mathbf{c}_{i,j}}$ , respectively. Additionally, in multi-turn dialogue settings, distinguishing a listener from a speaker is helpful. So we incorporate the dialogue state embedding into our input sequence. Specifically, each utterance $\mathbf{c}_i$ is labeled with its corresponding role $s_{\mathbf{c}_i} \in \{0,1\}$ (0 for speaker and 1 for listener).
+
+The token $\mathbf{c}_{i,j}$ is represented by summing up the word embedding, positional embedding, and dialogue state embedding:
+
+$$
+\mathbf {E} _ {\boldsymbol {c} _ {i, j}} = \mathbf {E} _ {W} [ w _ {\boldsymbol {c} _ {i, j}} ] + \mathbf {E} _ {P} [ p _ {\boldsymbol {c} _ {i, j}} ] + \mathbf {E} _ {S} [ s _ {\boldsymbol {c} _ {i}} ], (4)
+$$
+
+where $\mathbf{E}_W\in \mathbb{R}^{|V|\times d}$ , $\mathbf{E}_P\in \mathbb{R}^{1024\times d}$ , $\mathbf{E}_S\in \mathbb{R}^{2\times d}$ denote the embedding matrices of word, position, and state. $[\cdot ]$ denotes the indexing operation, and $d$ is the dimensionality of embeddings. We feed
+
+the entire sequence of token embeddings $\mathbf{E}_C$ organized by $\mathbf{E}_{c_{i,j}}$ to a cause encoder to produce the contextual representation:
+
+$$
+\mathbf {H} _ {C} = \text {C a u s e - E n c o d e r} (\mathbf {E} _ {C}), \tag {5}
+$$
+
+where $\mathbf{H}_C\in \mathbb{R}^{|L|\times d}$ , $L$ is the length of the sequence, and $d$ is the hidden size of the cause encoder.
+
+Next, we use the hidden state at the $[CLS]$ of the cause encoder, $\mathbf{h}_c = \mathbf{H}_C[0]$ , to predict CM strategies in the following stage.
+
+# 3.2 Knowledge - Enriched Communication
+
+# 3.2.1 Dialogue Reaction
+
+CM Prediction While no empathetic conversation corpora provide annotations of diverse empathy factors, there are abundant publicly available resources that make automatic annotation feasible. We use two corpora annotated with CM provided by Sharma et al. (2020). There are three communication factors named Emotion Reaction (ER), Interpretation (IP), and Exploration (EX). Each mechanism has different degrees. In our work, we merge "weak" and "strong" into "yes" and differentiate each mechanism's degree into two types: "no" and "yes".
+
+We pass $\mathbf{h}_c$ through a linear layer followed by a softmax operation to produce the CM category distribution:
+
+$$
+\mathbf {e} _ {c m i} = \mathbf {W} _ {c m i} \mathbf {h} _ {c} + \mathbf {b} _ {c m i}, \quad c m i \in \{e r, i p, e x \} \tag {6}
+$$
+
+$$
+\hat {\mathbf {P}} _ {c m i} = \operatorname {s o f t m a x} (\mathbf {e} _ {\mathbf {i}}), \tag {7}
+$$
+
+The negative log-likelihood loss is calculated:
+
+$$
+\mathbf {L} _ {c m} = \sum_ {c m i \in \{e r, i p, e x \}} - \log \left(\hat {\mathbf {P}} _ {c m i}\right), \tag {8}
+$$
+
+Finally, $\mathbf{e}_{er},\mathbf{e}_{ip},\mathbf{e}_{ex}$ are summed up, weighted by their predicted degree, as a crucial CM signal:
+
+$$
+\hat {\mathbf {e}} _ {c m} = \hat {\mathbf {P}} _ {e r} \cdot \mathbf {e} _ {e r} + \hat {\mathbf {P}} _ {i p} \cdot \mathbf {e} _ {i p} + \hat {\mathbf {P}} _ {e x} \cdot \mathbf {e} _ {e x}, (9)
+$$
+
+# 3.2.2 Experience Projection
+
+Knowledge Acquisition We extend the contexts by selecting from the knowledge graph those that are speaker-centered and contribute positively to the speaker. Finally, we split ATOMIC-2020 (Hwang et al., 2020) into four types: Affect, Behaviour, Physical, and Events, containing 11 relations $[r_1, r_2, \dots, r_{11}]$ in total (See Figure 3). In Affect, we select one
+
+relation: ([XReact]). In Behaviour, we select five relations: ([XIntent], [XNeed], [XWant], [XEffect], [XAttr]). In Physical, we select three relations: ([HasProperty], [CapableOf], [Desires]). In Events, we select two relations: ([Causes], [XReason]). For an input sequence C, we use COMET (Lewis et al., 2019) to generate five commonsense-inferred entities $[s_1^{r_i}, s_2^{r_i}, s_3^{r_i}, s_4^{r_i}, s_5^{r_i}]$ for each relation $r_i$ . Then we concatenate all entities generated from relations belonging to the same relation type. Through this way, we obtain four commonsense sequences for each input sequence: $S_{Affect}$ , $S_{Behav}$ , $S_{Phys}$ , and $S_{Events}$ . For example, $S_{Events} = [s_1^{[Causes]}, \dots, s_5^{[Causes]}, s_1^{[XReasons]}, \dots, s_5^{[XReasons]}.$ Weptides $[CLS]$ to $S_{Behav}$ , $S_{Phys}$ , and $S_{Events}$ . $S_{Affect}$ does not change because the entities for $Affect$ are usually independent emotion words (e.g., happy, surprise, sad) rather than semantically coherent sequences. The commonsense sequences are fed to the knowledge encoder:
+
+$$
+\mathbf {H} _ {K _ {A B P E}} = \text {K n o w l e d g e - E n c o d e r} \left(S _ {K _ {A B P E}}\right), \tag {10}
+$$
+
+where $K_{ABPE}\in$
+
+$\{A f f e c t, B e h a v, P h y s, E v e n t s\}$
+
+$\mathbf{H}_{K_{ABPE}} \in \mathbb{R}^{|L_{K_{ABPE}}| \times d}$ , with $|L_{K_{ABPE}}|$ being lengths of the commonsense entity sequences.
+
+Next, we use hidden representations of the first position to represent sequences $S_{\text{Behav}}, S_{\text{Phys}}$ , and $S_{\text{Events}}$ , respectively:
+
+$$
+\mathbf {h} _ {K _ {B P E}} = \mathbf {H} _ {K _ {B P E}} [ 0 ] \tag {11}
+$$
+
+where $K_{BPE} \in \{Behav, Phys, Events\}$ .
+
+Moreover, we use the mean of hidden representations to represent $S_{Affect}$ :
+
+$$
+\mathbf {h} _ {\text {A f f e c t}} = \operatorname {A v e r a g e} \left(\mathbf {H} _ {\text {A f f e c t}}\right) | _ {\text {a x i s} = 0}, \tag {12}
+$$
+
+Knowledge Refinement In order to refine the emotion causes by knowledge information, we concatenate each commonsense relation representation $\mathbf{h}_{K_{ABPE}}$ to the cause representation $\mathbf{H}_C$ at the token level. In contrast to sequence-level concatenation, token-level concatenation enables us to fuse knowledge within each word in the cause sequence:
+
+$$
+\mathbf {U} _ {K _ {A B P E}} = \mathbf {H} _ {C} \oplus \mathbf {h} _ {K _ {A B P E}}, \tag {13}
+$$
+
+where $\mathbf{U}_{Affect},\mathbf{U}_{Behav},\mathbf{U}_{Phys},\mathbf{U}_{Events}\in \mathbb{R}^{|L|\times 2d}$
+
+
+Figure 3: The four modules of the Knowledge Graph.
+
+Accordingly, we encode the fused representations and obtain knowledge-refined cause representations for each relation type:
+
+$$
+\mathbf {H} _ {K _ {A B P E}} ^ {\text {r e f}} = \operatorname {R e f i n e - E n c o d e r} \left(\mathbf {U} _ {K _ {A B P E}}\right), \tag {14}
+$$
+
+where $\mathbf{H}_{K_{Affect}}^{ref},\mathbf{H}_{K_{Behav}}^{ref},\mathbf{H}_{K_{Phys}}^{ref},\mathbf{H}_{K_{Events}}^{ref}$ in $\mathbb{R}^{|L|\times d}$ .
+
+We believe that relations of the Affect type matter to emotional empathy, meanwhile relations of Behavior, Physical, and Events types matter to cognitive empathy. Hence, we re-represent the knowledge-refined cause representations as below:
+
+$$
+\tilde {\mathbf {H}} _ {K _ {B P E}} = \mathbf {H} _ {K _ {B P E}} ^ {r e f} \oplus \mathbf {H} _ {A f f e c t} ^ {r e f}, \tag {15}
+$$
+
+where $\tilde{\mathbf{H}}_{\textit{Behav}},\tilde{\mathbf{H}}_{\textit{Phys}},\tilde{\mathbf{H}}_{\textit{Events}}\in \mathbb{R}^{|L|\times 2d}$
+
+Next, to highlight important features within the knowledge-refined cause representation, we assign importance scores to $\tilde{\mathbf{H}}_{K_{BPE}}$ , followed by a Multi-Layer Perception (MLP) layer with ReLU:
+
+$$
+\hat {\mathbf {H}} _ {K _ {B P E}} = \mathbf {M} \mathbf {L} \mathbf {P} (\sigma (\tilde {\mathbf {H}} _ {K _ {B P E}}) \cdot \tilde {\mathbf {H}} _ {K _ {B P E}}) \quad (1 6)
+$$
+
+where $\hat{\mathbf{H}}_{\textit{Behav}},\hat{\mathbf{H}}_{\textit{Phys}},\hat{\mathbf{H}}_{\textit{Events}}\in \mathbb{R}^{|L|\times d}$ , and $\cdot$ denotes element-wise multiplication.
+
+Finally, $\hat{\mathbf{H}}_{\text {Behav }}, \hat{\mathbf{H}}_{\text {Phys }}, \hat{\mathbf{H}}_{\text {Events }}$ and $\hat{\mathbf{e}}_{\text {cm }}$ (Equation 9), are fed into the decoder:
+
+$$
+\hat {\mathbf {H}} _ {C} = \hat {\mathbf {H}} _ {\text {B e h a v}} \oplus \hat {\mathbf {H}} _ {\text {P h y s}} \oplus \hat {\mathbf {H}} _ {\text {E v e n t s}} \oplus \hat {\mathbf {e}} _ {c m} \tag {17}
+$$
+
+where $\hat{\mathbf{H}}_C\in \mathbb{R}^{|L|\times 4d}$
+
+# 3.3 Response Generation
+
+Verbal Expression To acquire emotion dependencies, we concatenate the intermediate emotional signal $\hat{\mathbf{e}}_{emo}$ with word embeddings of the expected response and get $[\mathbf{y}_0^*,\mathbf{y}_1^*,\mathbf{y}_2^*,\dots,\mathbf{y}_n^*]$ . Here $\mathbf{y}_0^*$ is $\hat{\mathbf{e}}_{emo}$ . We then feed the embeddings into the
+
+response decoder. Our decoder is built based on Transformer layers:
+
+$$
+\mathbf {P} \left(y _ {t} \mid y _ {< t}, \mathbf {C}\right) = \operatorname {D e c o d e r} \left(\mathbf {E} _ {y _ {< t}}, \hat {\mathbf {H}} _ {C}\right), \tag {18}
+$$
+
+where $\mathbf{E}_{y < t}$ denotes embeddings of tokens that have been generated. Note that the cross attention to the encoder outputs is modified to the knowledge-refined cause representation $\hat{\mathbf{H}}_C$ , which has fused the information from both the cause and the commonsense-inferred entities.
+
+# 3.4 Model Training
+
+We use negative log-likelihood of the ground-truth words $\mathbf{y}_t^*$ as the generation loss function:
+
+$$
+\boldsymbol {L} _ {\text {g e n}} = - \sum_ {t = 1} ^ {n} \log \mathbf {P} \left(\mathbf {y} _ {t} = \mathbf {y} _ {t} ^ {*} \mid \mathbf {y} _ {0}, \dots , \mathbf {y} _ {t - 1}, \mathbf {C}\right) \tag {19}
+$$
+
+Dialogue generation models sometimes generate repetitive phrases or generic responses, such as "That is a good idea" and "Oh, it is bad." To solve this problem, we apply the Response Diversity Loss in our model, implementing Frequency-Aware Cross-Entropy (FACE) (Jiang et al., 2019) as an additional loss to penalize high-frequency tokens using a weighting scheme. Hence, during training, prior to receiving a new batch of samples, we derive the frequency-based weight $\mathbf{w}_i$ for each vocabulary token $\mathbf{v}_i$ in the training corpus:
+
+$$
+\mathbf {w} _ {i} = \mathbf {a} \times \mathbf {F Q} _ {i} + 1, \tag {20}
+$$
+
+$$
+\mathbf {F Q} _ {i} = \frac {\mathbf {f r e q} (\mathbf {v} _ {i})}{\sum_ {j = 1} ^ {V} \mathbf {f r e q} (\mathbf {v} _ {j})}, \tag {21}
+$$
+
+where $V$ denotes the vocabulary size, $\mathbf{a} = -(\max_{0 < j < V}(\mathbf{FQ}_j))^{-1}$ is the frequency slope and 1 is added as the bias so that $\mathbf{w}_i$ falls into [0,1]. Lastly, we normalize $\mathbf{w}_i$ to have mean of 1, as done by (Jiang et al., 2019). The diversity loss would then be calculated as below:
+
+$$
+\boldsymbol {L} _ {d i v} = - \sum_ {t} ^ {n} \sum_ {i} ^ {V} \mathbf {w} _ {i} \delta \left(\mathbf {v} _ {i} = \mathbf {y} _ {t} ^ {*}\right) \log \mathbf {P} \left(\mathbf {v} _ {i} \mid \mathbf {y} _ {< t}, \mathbf {C}\right) \tag {22}
+$$
+
+where $\mathbf{v}_i$ is a candidate token in the vocabulary and $\delta$ is the indicator function, which equals to 1 if and only if $\mathbf{v}_i = \mathbf{y}_t^*$ and 0 otherwise. All parameters of our proposed model are trained and optimized based on the weighted sum of four losses:
+
+$$
+\boldsymbol {L} = \lambda_ {1} \boldsymbol {L} _ {\text {g e n}} + \lambda_ {2} \boldsymbol {L} _ {\text {e m o}} + \lambda_ {3} \boldsymbol {L} _ {\text {c m}} + \lambda_ {4} \boldsymbol {L} _ {\text {d i v}}, \tag {23}
+$$
+
+where $\lambda_1, \lambda_2, \lambda_3$ and $\lambda_4$ are hyper-parameters that we use to control the influence of the four losses. Loss weights $\lambda_1, \lambda_2, \lambda_3$ and $\lambda_4$ are set to 1, 1, 1, and 1.5, respectively.
+
+# 4 Experimental Settings
+
+# 4.1 Dataset
+
+We conduct our experiments on the EMPATHETIC-DIALOGUES dataset (Rashkin et al., 2018). It is a large-scale multi-turn empathetic dialogue dataset containing $25\mathrm{k}$ dialogue sessions, each having 3-5 rounds of dialogue. There are 32 different distributions of emotion labels. Following the original dataset definitions, we use the 8:1:1 train/valid/test subset split.
+
+# 4.2 Comparison Methods
+
+The following models are selected as baselines: 1) Transformer (Vaswani et al., 2017): A Transformer based encoder-decoder model.
+2) Multi-TRS (Rashkin et al., 2018): An extension of the Transformer model that has an additional unit for emotion prediction.
+3) MoEL (Lin et al., 2019): Another extension of Transformer model which softly combines the response representations from different decoders.
+4) MIME (Majumder et al., 2020): Another extension of transformer model which considers emotion clustering and emotional mimicry. Besides, it also introduces sampling stochasticity during training.
+5) EMPDG (Li et al., 2019): A multi-resolution empathetic adversarial chatbot which exploits multi-resolution emotions and user feedback.
+6) CEM (Sabour et al., 2021): A Transformer encoder-decoder model that integrates affection and cognition into commonsense knowledge.
+7) KEMP (Li et al., 2022): A contextual-enhanced empathetic dialogue generator that leverages multi-type external knowledge and emotional signal distilling for response generation.
+
+More implementation details of our IMAGINE model is covered in Appendix B.
+
+# 4.3 Evaluation metrics
+
+Automatic Evaluations Four automatic metrics are applied for evaluation:
+
+1) PPL (Serban et al., 2015): The perplexity (PPL) represents the model's confidence in its set of candidate responses. A low PPL value means high confidence. PPL can be used to evaluate the general quality of the generated responses.
+
+
Models
PPL
BLEU-2
Dinstinct-1
Distinct-2
ACC
Fluency
Relevance
Empathy
Transformer
37.62
1.32
0.45
2.02
-
3.04
2.49
2.50
Multi-TRS
37.75
1.31
0.41
1.67
33.57
2.99
2.51
2.59
MoEL
36.93
1.32
0.44
2.10
30.62
3.28
2.57
2.63
MIME
37.09
1.34
0.47
1.90
31.36
3.14
2.52
2.59
EmpDG
37.29
1.30
0.46
2.02
30.41
3.07
2.69
2.72
CEM
36.11
1.35
0.66
2.99
39.11
3.40
2.96
2.94
KEMP
36.89
1.34
0.55
2.29
39.31
3.27
2.68
2.68
IMAGINE
35.10
1.37
0.76
3.40
39.60
3.58
3.09
3.09
+
+Table 1: Results of automatic and human evaluations.
+
+
Models
PPL
BLEU-2
Dinstinct-1
Distinct-2
ACC
IMAGINE
35.10
1.37
0.76
3.40
39.60
W/O cause
35.43
1.35
0.64
2.57
38.60
W/O cm
35.58
1.34
0.63
2.84
38.88
W/O know
35.0
1.36
0.64
2.92
38.50
W/O DIV
34.50
1.37
0.68
2.94
39.10
+
+Table 2: Ablation study.
+
+
Models
Win%
Lose%
Tie%
Ours VS Transformer
49.18
16.83
33.99
Ours VS Multi-TRS
42.34
17.66
40.00
Ours VS MOEL
45.49
27.42
27.09
Ours VS MIME
47.34
19.33
33.33
Ours VS EmpDG
47.18
19.60
33.22
Ours VS CEM
42.96
25.80
31.24
Ours VS KEMP
41.90
23.98
34.12
+
+Table 3: Results of human A/B test.
+
+2) BLEU-2 (Papineni et al., 2002): It calculates the co-occurrence frequency of n-grams between candidates and references.
+
+3) Distinct-1 and Distinct-2 (Li et al., 2015): It is the proportion of the distinct unigrams/bigrams in all the generated results to indicate the diversity.
+4) ACC: To evaluate the model at the emotional level, we adopt Emotion Accuracy (ACC) as the agreement between the ground truth emotion labels and the predicted emotion labels.
+
+Human Ratings Evaluating open-domain dialogue systems is challenging due to the lack of reliable automatic evaluation metrics (Gao et al., 2021b). Thus, human judgments are necessary. We randomly sample 100 dialogues and generate corresponding responses from different models. Five well-educated native English speakers who work in
+
+literary writing, psychology, and teaching are hired to give each response a rating score from three aspects – Fluency, Relevance, and Empathy. Each aspect is on a scale from 1 to 5, where 1, 2, 3, 4, and 5 indicate unacceptable, not good, moderate, good, and excellent performance, respectively. In order to keep the anonymization of compared methods, the order of responses in each dialogue is shuffled.
+
+Human A/B Test In the human A/B test, to make sure fairness, we re-sample another 700 dialogues (100 for each comparison between our model and a baseline model) and form them into A-vs-B types, where A is our model and B is a baseline model. Another three annotators are asked to choose a better response. They can also choose a Tie if they think both are good or bad. All human evaluation tasks are conducted on https://www.fanhantech.com.
+
+# 5 Experimental Results
+
+# 5.1 Automatic Evaluation Results
+
+Table 1 reports the evaluation results on automatic metrics. Our model IMAGINE achieves the lowest perplexity, indicating that the overall quality of our generated responses is higher than the baselines. Moreover, the results of Distinct-1 and Distinct-2 show that our model generates much more diverse responses than baselines. As for the emotion accuracy, we can see that our model is
+
+
+Figure 4: Case study of the generated responses by IMAGINE and the baselines.
+
+
+
+valid for recognizing emotions.
+
+# 5.2 Human Evaluation Results
+
+Table 1 illustrates that IMAGINE obtains the best performance on Fluency, Relevance, and Empathy scores. It proves that integrating emotion causes, knowledge, and communication mechanisms can generate more informative and empathetic responses. In addition, from the results of the human A/B test in Table 3, we see that responses from IMAGINE are more often preferable to humans than the responses from other baseline models, which strongly supports the advantages of our approach.
+
+# 5.3 Ablation Analysis
+
+We conducted ablation studies to verify the effectiveness of each component in our model. Table 2 reports the results.
+
+1) W/O cause: Looking at Table 2, we can see that removing the emotion cause extraction part leads to a significant performance decrease of both models in terms of response generation and emotion recognition. The original dialogue history may contain emotion-irrelevant information, which results in a shift of focus. The result indicates that emotion cause extraction plays an important role in strengthening the understanding of users' emotions, which improves the generation of empathetic responses.
+2) W/O CM: By removing the communication
+
+mechanism from the response generation module, as shown in Table 2, we can see that our model is less empathetic and also has a tendency to decline in emotion prediction. The communication mechanism is a state of understanding how people feel; without it, our model will have fewer communication skills.
+
+3) W/O know: When we remove the knowledge module, as shown in Table 2, we can see that the quality and diversity of the model's responses are declined, as a lack of knowledge leads to weaker ability to enrich emotion causes. It also affects the closeness and relevance of the generated responses to the context.
+4) W/O DIV: If the diversity loss is removed, we can see from Table 2 that Distinct-1 is reduced from 0.76 to 0.68, and Distinct-2 is reduced from 3.4 to 2.94. It indicates the effectiveness of this loss in generating more diverse responses.
+
+# 5.4 Case Study
+
+We also present some examples of responses generated by our models and baseline models in Figure 4. Compared with baseline models, our model generates responses closer to the "gold" responses. As shown in the first example, our model can reason deeply about the emotion cause and get a good result in terms of knowledge acquisition. In the second example, from the dialogue context, we learn that the user "studied hard and got good grades." Through the knowledge base, we infer richer in
+
+formation like "prepared, successful, and pass the exam." Finally, our model congratulates and praises the user and poses an unasked question to him/her.
+
+# 6 Conclusion
+
+This paper presents a novel framework that integrates emotion causes, knowledge graphs, and communication mechanisms for empathetic response generation. The emotion cause detection allows us to determine what events stimulate a user's emotion. We can understand the events with the knowledge graph, enriching the contextual information. Furthermore, the communication mechanisms enhance our ability to let users feel that we are trying to feel what they feel. Automatic and human evaluations show that our proposed approach can generate more informative and empathetic responses.
+
+# Limitations
+
+The first challenge is a common problem current chatbots face, e.g., traceability of models and reasoning ability. Second, for mental health support chatbots, each person is analyzed on a case-by-case basis. Each person with a mental health impairment needs a personalized approach to communication, which is not overly generalized. Finally, the shortcomings of the knowledge graph - size, breadth, diversity, and rationality - directly determine the quality of the causes' associative expansion and also affect the closeness and relevance of the generated responses to the context.
+
+# Ethics Statement
+
+The empathetic-dialogues dataset (Rashkin et al., 2018) used in our paper protects the privacy of real users. Furthermore, we make sure anonymization in the human evaluation process. We believe our research work meets the ethics of EMNLP.
+
+# References
+
+Arthur C Bohart and Leslie S Greenberg. 1997. Empathy reconsidered: New directions in psychotherapy. American Psychological Association.
+Mark H Davis et al. 1980. A multidimensional approach to individual differences in empathy.
+Jean Decety and Meghan Meyer. 2008. From emotion resonance to empathic understanding: A social developmental neuroscience account. Development and psychopathology, 20(4):1053-1080.
+
+Phoebe C Ellsworth and Klaus R Scherer. 2003. Appraisal processes in emotion. Oxford University Press.
+Kathleen Kara Fitzpatrick, Alison Darcy, and Molly Vierhile. 2017. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (woebot): a randomized controlled trial. *JMIR mental health*, 4(2):e7785.
+Jun Gao, Wei Bi, Ruifeng Xu, and Shuming Shi. 2021b. Ream: An enhancement approach to reference-based evaluation metrics for open-domain dialog generation: An enhancement approach to reference-based evaluation metrics for open-domain dialog generation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2487-2500.
+Jun Gao, Yuhan Liu, Haolin Deng, Wei Wang, Yu Cao, Jiachen Du, and Ruifeng Xu. 2021. Improving empathetic response generation by recognizing emotion cause in conversations. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 807-819.
+Cecilia Heyes. 2018. Empathy is not in our genes. Neuroscience & Biobehavioral Reviews, 95:499-507.
+Martin L. Hoffman. 2000. Empathy and moral development: implications for caring and justice. Contemporary Sociology, 30:487.
+Jena D Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, and Yejin Choi. 2020. Comet-atomic 2020: On symbolic and neural commonsense knowledge graphs. arXiv preprint arXiv:2010.05953.
+Shaojie Jiang, Pengjie Ren, Christof Monz, and Maarten de Rijke. 2019. Improving neural response diversity with frequency-aware cross-entropy loss. In *The World Wide Web Conference*, pages 2879–2885.
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
+Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055.
+Qintong Li, Hongshen Chen, Zhaochun Ren, Pengjie Ren, Zhaopeng Tu, and Zhumin Chen. 2019. Empdg: Multiresolution interactive empathetic dialogue generation. arXiv preprint arXiv:1911.08698.
+Qintong Li, Piji Li, Zhaochun Ren, Pengjie Ren, and Zhumin Chen. 2022. Knowledge bridging for empathetic dialogue generation.
+
+Yanran Li, Ke Li, Hongke Ning, Xiaogiang Xia, Yalong Guo, Chen Wei, Jianwei Cui, and Bin Wang. 2021. Towards an online empathetic chatbot with emotion causes. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2041-2045.
+Zhaojiang Lin, Andrea Madotto, Jamin Shin, Peng Xu, and Pascale Fung. 2019. Moel: Mixture of empathetic listeners. arXiv preprint arXiv:1908.07687.
+Zhaojiang Lin, Peng Xu, Genta Indra Winata, Farhad Bin Siddique, Zihan Liu, Jamin Shin, and Pascale Fung. 2020. Caire: An end-to-end empathetic chatbot. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 13622-13623.
+Navonil Majumder, Pengfei Hong, Shanshan Peng, Jiankun Lu, Deepanway Ghosal, Alexander Gelbukh, Rada Mihalcea, and Soujanya Poria. 2020. Mime: Mimicking emotions for empathetic response generation. arXiv preprint arXiv:2010.01454.
+Stewart W Mercer and William J Reynolds. 2002. Empathy and quality of care. British Journal of General Practice, 52(Suppl):S9-12.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318.
+Soujanya Poria, Navonil Majumder, Devamanyu Hazarika, Deepanway Ghosal, Rishabh Bhardwaj, Samson Yu Bai Jian, Pengfei Hong, Romila Ghosh, Abhinaba Roy, Niyati Chhaya, et al. 2021. Recognizing emotion cause in conversations. Cognitive Computation, 13(5):1317-1332.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
+Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2018. Towards empathetic open-domain conversation models: A new benchmark and dataset. arXiv preprint arXiv:1811.00207.
+Sahand Sabour, Chujie Zheng, and Minlie Huang. 2021. Cem: Commonsense-aware empathetic response generation. arXiv preprint arXiv:2109.05739.
+Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2015. Hierarchical neural network generative models for movie dialogues. arXiv preprint arXiv:1507.04808, 7(8):434-441.
+Ashish Sharma, Adam S Miner, David C Atkins, and Tim Althoff. 2020. A computational approach to understanding empathy expressed in text-based mental health support. arXiv preprint arXiv:2009.08441.
+
+Jamin Shin, Peng Xu, Andrea Madotto, and Pascale Fung. 2019. Happybot: Generating empathetic dialogue responses by improving user experience look-ahead. arXiv preprint arXiv:1906.08487.
+Adam Smith. 2006. Cognitive empathy and emotional empathy in human behavior and evolution. The Psychological Record, 56(1):3-21.
+Charles B Truax and Robert Carkhuff. 1967. Toward effective counseling and psychotherapy: Training and practice. Transaction Publishers.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
+Chujie Zheng, Yong Liu, Wei Chen, Yongcai Leng, and Minlie Huang. 2021. Comae: a multi-factor hierarchical framework for empathetic response generation. arXiv preprint arXiv:2105.08316.
+
+# A knowledge Graph
+
+In this work, we use the ATOMIC-2020 dataset (Hwang et al., 2020) as our commonsense knowledge base, which is a collection of commonsense reasoning inferences about everyday if-then contexts. They fall into three natural categories based on their meaning: physical-entity, social-interaction, and event-centered commonsense, which are 22 relationships under three categories ( e.g., XReact, XWant, XReason, CapableOf) (See Fig 5). Based on the given contexts, we select those that are speaker-centered and contribute positively to the speaker. We neglect (oReact,oEffect,oWant, Etc.) in our work. Finally, We have extracted 11 important relationships from ATOMIC. These relationships are divided into four modules, which are Physical (CapableOf, HasProperty, Desires), Affect (XReact), Behaviour (XEffect, XNeed, XWant, XIntent, XAttr), Events (Causes, XReason). As shown in Fig 6.
+
+# B Implementation Details
+
+Our models are implemented using Pytorch, a modularized, versatile, and extensible toolkit for machine learning and text generation tasks. We used 300-dimensional word embedding and 300-dimensional hidden size everywhere in our experiments. The word embedding is initialized using pre-trained Glove vectors. We initialize the transformer encoder with one layer and two attention heads for the task. We train our models using Adam optimization with a learning rate of 0.0001. Early
+
+
Categories
Head
Relations
COMET (BART)
Physical-entity
Common sense
ObjectUse
Make a good decision
bird lover
CapableOf
Look at birds
ice cream
MadeUpOf
milk and warter
mouse
HasProPerty
Long tail
doctors
Desires
cure patient
doctors
NotDesires
malpractice suit
gambler
AtLocation
casino
Social-Interaction
X accepts Y's apology
XIntent
To be forgiving
X gives Y gifts
XReact
good about [one]self
X gives Y gifts
ORect
appreciated
X steals a car
XAttr
evil
X get hurts
XEffect
x is hospitalized
X gives Y gifts
XNeed
buy the presents.
X gives Y gifts
XWant
to hug [Y]
X gives Y gifts
OEffect
blush
X gives Y gifts
OWant
open the gift
event-centered commonsense.
accident
Causes
hurt
X accepts Y's apology
HinderedBy
X is too angry
why one has to "walk"
XReason
car has broken down
X does yard work
isAfter
X gets a job as a gardener
X does yard work
isBefore
X takes a shower
Move car
HasSubevent
Get out of car
+
+Figure 5: Example generations of models on relations from ATOMIC-2020 dataset (Hwang et al., 2020).
+
+
Categories
Head
Relations
COMET (BART)
Physical
bird lover
CapableOf
Look at birds
mouse
HasProPerty
Long tail
doctors
Desires
cure patient
Affect
X gives Y gifts
XReact
good about [one]self
Behaviour
X get hurts
XEffect
x is hospitalized
X gives Y gifts
XNeed
buy the presents.
X gives Y gifts
XWant
to hug [Y]
X accepts Y's apology
XIntent
To be forgiving
X steals a car
XAttr
evil
Events
accident
Causes
hurt
why one has to “walk”
XReason
car has broken down
+
+Figure 6: Our knowledge graph, which uses 11 relationships and is inspired by psychology, is divided into four modules: Physical, Affect, Behaviour, Events
+
+stopping is applied during training. We use a batch size of 1 and a maximum of 30 decoding steps during testing and inference.
\ No newline at end of file
diff --git a/wishicanfeelwhatyoufeelaneuralapproachforempatheticresponsegeneration/images.zip b/wishicanfeelwhatyoufeelaneuralapproachforempatheticresponsegeneration/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..0d3d7eafeb5230157b0a30d0149b45d516c12106
--- /dev/null
+++ b/wishicanfeelwhatyoufeelaneuralapproachforempatheticresponsegeneration/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cb16c57e6cb6f12b865ed619a1eebe2d2418308b283661a9b1b398e227dbee9e
+size 784827
diff --git a/wishicanfeelwhatyoufeelaneuralapproachforempatheticresponsegeneration/layout.json b/wishicanfeelwhatyoufeelaneuralapproachforempatheticresponsegeneration/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..964007e68953417c5ed62a7209e93e3ceb48bdc2
--- /dev/null
+++ b/wishicanfeelwhatyoufeelaneuralapproachforempatheticresponsegeneration/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:15664aa17c064acf9cf446a4c9e7b672804d3ea2cbfa63609bd6e7de2b74d1f9
+size 413997
diff --git a/wordtiesmeasuringwordassociationsinlanguagemodelsviaconstrainedsampling/eb11a814-d158-404c-a2ef-ebb36658d51f_content_list.json b/wordtiesmeasuringwordassociationsinlanguagemodelsviaconstrainedsampling/eb11a814-d158-404c-a2ef-ebb36658d51f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..eb8f9baaf50270dc38d8c487caf5357901fd6f8b
--- /dev/null
+++ b/wordtiesmeasuringwordassociationsinlanguagemodelsviaconstrainedsampling/eb11a814-d158-404c-a2ef-ebb36658d51f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:30fddead8e0f575d2f32909c81734bb99d63c39ed672b11ef661df702a299db8
+size 80965
diff --git a/wordtiesmeasuringwordassociationsinlanguagemodelsviaconstrainedsampling/eb11a814-d158-404c-a2ef-ebb36658d51f_model.json b/wordtiesmeasuringwordassociationsinlanguagemodelsviaconstrainedsampling/eb11a814-d158-404c-a2ef-ebb36658d51f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c9fd93485ef4352ce406d7e947af2536c8b90688
--- /dev/null
+++ b/wordtiesmeasuringwordassociationsinlanguagemodelsviaconstrainedsampling/eb11a814-d158-404c-a2ef-ebb36658d51f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a241bae706ba84162c0af68f9093c65fd708f18fbf9e411d36c8447f8975b64
+size 96747
diff --git a/wordtiesmeasuringwordassociationsinlanguagemodelsviaconstrainedsampling/eb11a814-d158-404c-a2ef-ebb36658d51f_origin.pdf b/wordtiesmeasuringwordassociationsinlanguagemodelsviaconstrainedsampling/eb11a814-d158-404c-a2ef-ebb36658d51f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a5fa9816973596df82a1c9e9c6803957124c0acf
--- /dev/null
+++ b/wordtiesmeasuringwordassociationsinlanguagemodelsviaconstrainedsampling/eb11a814-d158-404c-a2ef-ebb36658d51f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:438485a5f9dbc93f6cd1354891459359aa66675cd99ba30daa408114feead60f
+size 392599
diff --git a/wordtiesmeasuringwordassociationsinlanguagemodelsviaconstrainedsampling/full.md b/wordtiesmeasuringwordassociationsinlanguagemodelsviaconstrainedsampling/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..8615735d255248a40eba6e002b146ce554f00876
--- /dev/null
+++ b/wordtiesmeasuringwordassociationsinlanguagemodelsviaconstrainedsampling/full.md
@@ -0,0 +1,323 @@
+# WordTies: Measuring Word Associations in Language Models via Constrained Sampling
+
+Peiran Yao and Tobias Renwick and Denilson Barbosa
+
+Department of Computing Science
+
+University of Alberta
+
+{peiran, renwick, denilson}@ualberta.ca
+
+# Abstract
+
+Word associations are widely used in psychology to provide insights on how humans perceive and understand concepts. Comparing word associations in language models (LMs) to those generated by human subjects can serve as a proxy to uncover embedded lexical and commonsense knowledge in language models. While much helpful work has been done applying direct metrics, such as cosine similarity, to help understand latent spaces, these metrics are symmetric, while human word associativity is asymmetric. We propose WordTies, an algorithm based on constrained sampling from LMs, which allows an asymmetric measurement of associated words, given a cue word as the input. Comparing to existing methods, word associations found by this method share more overlap with associations provided by humans, and observe the asymmetric property of human associations. To examine possible reasons behind associations, we analyze the knowledge and reasoning behind the word pairings as they are linked to lexical and commonsense knowledge graphs. When the knowledge about the nature of the word pairings is combined with a probability that the LM has learned that information, we have a new way to examine what information is captured in LMs.
+
+# 1 Introduction
+
+What do you think of when you see a word? Word association is a task where a human participant is shown a cue word, and is asked to quickly list words (formally responses) that come to the mind without thinking (Nelson et al., 2004; De Deyne et al., 2019). These associations provide a way to measure human representations of semantic knowledge (Rodriguez and Merlo, 2020). Similarly, researchers have been mirroring the human word association task on pretrained language models (LMs), as a method for intrinsic evaluations of
+
+
+Figure 1: Overview of the workflow of our word association probing algorithm. The network plotted shows example word associations where the word language is a cue or response. The associations are probed from BERT (Devlin et al., 2019) using the proposed algorithm. The radius of a word circle represents the average frequency of the word being a response for one of the cues. The length of connections represents the relative associative strength between words. Note that the lengths might not be the same for the two directions between the same pair of words.
+
+word embeddings (Thawani et al., 2019) and for measuring and mitigating social biases in language models (Kaneko and Bollegala, 2021; Bommasani et al., 2020). Word associations could be used as a proxy for measuring linguistic and commonsense knowledge in language models.
+
+Existing approaches that probe word associations in language models (Rodriguez and Merlo, 2020; Kaneko and Bollegala, 2021; Bommasani et al., 2020; May et al., 2019) investigate the word embedding spaces of LMs. Word embeddings are contextualized in LMs, and they are converted to static embeddings for analyses with the help of external corpora or templates, which introduces confounding biases. In the meantime, associativity is often measured by the cosine similarity between the embeddings of the cue word and the response. A major problem here is that cosine similarity is symmetric, while human word associations are not (Rodriguez and Merlo, 2020).
+
+Instead of investigating embedding spaces, we propose to perform association rule mining on discrete word sequences sampled from LMs with constraints. To the best of our knowledge, this is the first application of association rule mining on the investigation of word associations in distributional semantic models. This novel approach more closely imitates human word association, and allows us to probe language models as a whole and without the use of external inputs. Our algorithm, named WordTies, samples sentences from language models with the constraint that the cue word must appear in the sentence, and uses the conditional probability that a word co-occurs with the cue word in the sample as the associativity score. The workflow of the WordTies algorithm is illustrated in Figure 1. We validate our probing method by measuring the overlap between associations found in LMs by our algorithm and human associations, and testing if distance properties of human associations, like asymmetry, are preserved by our algorithm.
+
+In another part of this work, we attempt to uncover what linguistic and commonsense knowledge and reasoning are involved in the word association process, for both humans and language models. In order to reach a reasonable cause for a given cue to response association, we link the two words simultaneously to a lexical knowledge graph (WordNet; Miller, 1995) and a commonsense knowledge graph (ASCENT++; Nguyen et al., 2021), which leads to new discoveries about word associations.
+
+# 2 Human Word Associations
+
+Human word associations exhibit certain intriguing properties, such as stability, asymmetry and intransitivity (Rodriguez and Merlo, 2020). Stability is the property that different people usually come up with similar associations, which correlates with one definition of commonsense knowledge that they are shared among most human beings (Sap et al., 2020). This suggests that word associations could potentially be used as a signal for inferring commonsense knowledge. Secondly, some associations are not symmetric, as demonstrated by Rodriguez and Merlo's (2020) example that participants indicate that North Korea is more closely associated with China than vice versa. Finally, intransitivity means the associations do not follow the triangular inequality. For example, iPhone is associated with apple and apple is associated with sour, but iPhone is not associated with sour. These two
+
+geometric properties indicate that traditional tools for interpreting language models, such as vector norms for word embeddings, will not be sufficient to discover word associations as humans do.
+
+It was previously shown that humans often associate words based on similarity, contrast, and contiguity (Thawani et al., 2019). We further investigated what specific types of semantic knowledge and reasoning, including lexical and commonsense knowledge and reasoning, are involved in human word associations, by breaking down the relations between the cue and response word pairs.
+
+# 2.1 Association Norms
+
+Collections of human word associations are called word association norms. We use the data from the English Small World of Words (SWOW; De Deyne et al., 2019) project as the word association norms. In SWOW, up to 100 responses were each collected for 12,292 cues, along with an association strength computed from the frequency with which a word appears as a top-3 response. This serves as the ground truth when evaluating word associations generated from language models.
+
+Compared to other popular word association norms, for example the University of South Florida norms (USF; Nelson et al., 2004) and the Edinburgh Associative Thesaurus (EAT; Kiss et al., 1973), SWOW is more contemporary, heterogeneous, and includes a much larger number of cues and responses (De Deyne et al., 2019). In the USF study, participants were instructed to list words that are "meaningfully related or strongly associated" to the cue word, while in both SWOW and our analogy for LMs, no such constraints are imposed.
+
+# 2.2 Semantic Knowledge
+
+The two research questions we would like to answer here are: what semantic knowledge do humans rely on to produce word associations, and what kind of reasoning is built on that knowledge for word associations?
+
+We attempt to answer the two questions by finding a possible "reasoning path" for each of the cue-response pairs in the SWOW dataset. Based on the observations of human word associations discussed at the beginning of §2, such as stability and the reliance on similarity, contrast and contiguity, it is natural to assume that there exists a certain lexical relation between the pair, or they are related by some commonsense knowledge. For
+
+
Path
Interpretation
Frequency
Source
HasProp-HasProp-1
Share the same property
87,415
ASCENT++
HasProp
response is a property of cue
20,202
ASCENT++
HasProp-1-HasProp-HasProp-1
-
19,830
ASCENT++
ReceivesAction-ReceivesAction-1
Receives the same action
19,524
ASCENT++
∅
Synonym
14,089
WordNet
Hypernymy-Hyponymy
In the same category
10,055
WordNet
Hypernymy
Hypernym
8,815
WordNet
+
+example, we associate dark with light out of contrast (the antonymy lexical relation), and apple with sour, because by commonsense being sour is a property of apples. Therefore, the reasoning paths are determined by first linking the cue word and the response to nodes of two knowledge graphs respectively: a lexical knowledge graph and a commonsense knowledge graph. The shortest path between the two nodes is regarded as the reasoning path for the cue-response pair. When calculating the shortest paths, we treat the knowledge graphs as undirected graphs by adding inverse relations for all the edges.
+
+Knowledge Graphs WordNet (Miller, 1995) is used as the lexical knowledge graph. It provides relations between senses of English words, such as hypernymy / hyponymy and antonymy. Specifically, the version we choose is English WordNet 2020 (McCrae et al., 2020), which is a fork of the original Princeton WordNet (Miller, 1995) that accommodates emerging phenomena in the English language, and is openly available.
+
+For the commonsense knowledge graph, we use ASCENT++ (Nguyen et al., 2021), which contains over 2 million commonsense relationships for 10,000 concepts collected from a large web corpus. At the time of writing, this is the state-of-the-art commonsense knowledge graph in terms of precision and recall. Relations in ASCENT++ are related to properties of general concepts, such as CapableOf and UsedFor.
+
+Breakdown of Knowledge Types If the reasoning path is shorter in the lexical graph, then the cue-response pair is assumed to be more likely to involve lexical knowledge. Otherwise it is assumed to be related to commonsense knowledge. Table 2 provides a breakdown of knowledge types involved
+
+Table 1: Most frequent reasoning paths for the cue-response pairs in the SWOW dataset, with a potential interpretation for the path. HasProperty is shortened as HasProp. $-1$ denotes an inverse relation. For example, $A\text{HasProp}^{-1}B$ means $A$ is a property of $B$ . "-" indicates that there is not a concise interpretation for the path.
+
+
Type
Count
Frequency
Lexical
346, 690
36.1%
Commonsense
417, 144
43.5%
Unknown
196, 066
20.4%
+
+Table 2: Number of cue-response pairs in the SWOW dataset with reasoning paths in the lexical and commonsense knowledge graphs.
+
+in the SWOW dataset. The majority of pairs in the dataset can be linked to the two knowledge graphs, and the shortest reasoning paths are almost evenly split between lexical and commonsense knowledge. About $20\%$ of the pairs in SWOW have no connection in either of the knowledge graphs (categorized as Unknown in Table 2).
+
+Observations of Reasoning Paths The reasoning path provides an explanation of the reasoning process behind a word association. The most frequent reasoning paths are provided in Table 1.
+
+The majority of responses can be reached within 3 hops from the cue word, as illustrated in Figure 2. We found that the length of reasoning paths only has a slightly negative correlation with the relative order with which a response comes up in SWOW (reflected by the association strengths in the SWOW dataset), with a Spearman correlation coefficient of $-0.083$ ( $p < 0.01$ ).
+
+# 3 The WordTies Algorithm
+
+# 3.1 Word Association Mining
+
+The proposed WordTies algorithm finds word associations in a language model by sampling discrete sentences from the language model, with the constraint that the sampled sentence must contain the given cue word (see §3.2 for details). It then applies association rule mining to the sampled sentences, and picks the words that most frequently appear in
+
+
+Figure 2: Distribution of reasoning path lengths in the SWOW dataset. The maximum path in SWOW is 14.
+
+the sampled sentences as the response words1. Intuitively, the language model is asked to "write sentences" with the given cue word. The more likely that the LM uses a word to write such sentences, the higher chance that this word is associated with the cue word by the LM.
+
+More formally, a language model, parametrized by $\Theta$ , is a probability distribution $P(\cdot; \Theta)$ that assigns a probability $P(\mathbf{x}; \Theta)$ to any given word sequence $\mathbf{x} = x_1 x_2 \dots x_n$ . Such probability is commonly factorized by prefixes of the sequence, for example in this form:
+
+$$
+P (\mathbf {x}; \Theta) = P (x _ {1}; \Theta) \cdot \prod_ {i = 2} ^ {n} P (x _ {i} | x _ {1: i - 1}; \Theta). \tag {1}
+$$
+
+Each word pair $w_{1}, w_{2}$ is assigned a score $score(w_{1} \rightarrow w_{2})$ that indicates the associative strength with which the response word $w_{2}$ is associated with the cue word $w_{1}$ . Suppose $\mathbf{x}$ is a random sequence drawn from the distribution defined by the LM, then we would like to use the following conditional probability as the score for word association:
+
+$$
+\begin{array}{l} \operatorname {s c o r e} \left(w _ {1} \rightarrow w _ {2}\right) \\ \triangleq P (\exists i, x _ {i} = w _ {2} | \exists j, x _ {j} = w _ {1}) \tag {2} \\ \end{array}
+$$
+
+which is the conditional probability that given the cue word $w_{1}$ is in the sentence sampled from the LM, the response word $w_{2}$ is also in the sampled sentence.
+
+In practice, the association score is calculated by estimating the expectation:
+
+$$
+\begin{array}{l} s c o r e \left(w _ {1} \rightarrow w _ {2}\right) \\ = \underset {\mathbf {x} \sim P (\cdot ; \Theta)} {\mathbb {E}} \frac {\mathbb {1} \left(\exists i , j x _ {i} = w _ {1} \wedge x _ {j} = w _ {2}\right)}{\mathbb {1} \left(\exists i x _ {i} = w _ {1}\right)} \tag {3} \\ \end{array}
+$$
+
+which is done by sampling from the LM with the hard constraint that the cue word $w_{1}$ is in the sentence, and counting the words that co-occur with $w_{1}$ in the sampled sentences. It is computationally infeasible to estimate the score from unconstrained samples, i.e. sampling sentences directly from the LM and discarding the sentences without the appearance of the cue word. Word frequencies of common corpora, from which LMs are trained, follow Zipf's law and have a long-tail distribution (Zhao and Marcus, 2012), which means exponentially more samples are needed for rarer cue words.
+
+For each cue word, we pick the words with the highest association scores as the response words, while filtering out stop words. In practice, we use the spaCy (Honnibal et al., 2020) tokenizer from its en_core_web_sm model to tokenize the sampled sentences, and only keep words that exist in WordNet to reduce noise. Readers can refer to Table 6-8 in the appendix for samples of the mined word associations. In the terms of association rule mining literature (e.g. Piatetsky-Shapiro's (1991)), the association score we define is the confidence, and the filtering of stop words is equivalent to setting a threshold on the lift.
+
+# 3.2 Constrained Sampling
+
+From Masked LMs Masked LMs (Devlin et al., 2019; Liu et al., 2019), or MLMs in short, are not trained with the traditional language modeling objective of minimizing the negative log likelihood of training sequences. Instead, they are autoencoder based de-noising models trained to predict what the masked tokens should be as a distribution $P_{MLM}(x_m | \mathbf{x}_{\backslash m})$ over the vocabulary, given an input sequence $\mathbf{x}_{\backslash m}$ where the token at position $m$ is replaced with a mask. Wang and Cho (2019) proved mathematically that a masked LM trained with this different objective still conforms to the definition of a language model described in §3.1, in the sense that it provides a probability for each sequence as a Markov random field. In the Markov random field defined by a masked LM, tokens of a sequence form a fully-connected graph, and the probability of a sequence is the normalized potential of that graph (the largest clique):
+
+$$
+P _ {M L M} (\mathbf {x}; \Theta) = \frac {1}{Z} \prod_ {i = 1} ^ {n} P _ {M L M} \left(x _ {i} | \mathbf {x} \backslash i\right). \tag {4}
+$$
+
+Although the exact value of the normalizing factor $Z$ cannot be tractably computed, it is still possible to sample from the distribution with Markov
+
+chain Monte Carlo methods. For example, Wang and Cho (2019) provide a Gibbs sampling algorithm for masked LMs. Starting from a randomly initialized sequence, at each step we choose a random position $i$ , sample a token from the distribution $P_{MLM}(x_i | \mathbf{x}_{\backslash i})$ , and replace the token at position $i$ with the sampled token. We modified it to impose the hard constraint that the cue word is in the sequence while sampling by keeping certain tokens fixed as the cue, as shown in Algorithm 1.
+
+Algorithm 1 Sampling from a masked LM with a hard constraint that the cue word must be in the sequence. $L_{min}$ and $L_{max}$ control the length of the sampled sequence, and $S$ is the number of MCMC steps.
+```txt
+sample $L\sim$ Uniform(\{Lmin··Lmax})
+sample pos $\sim$ Uniform(\{0··L})
+s← [MASK]··[MASK] L
+ $s_{pos}\gets cue$
+for step $\in \{1\dots S\}$ do ▷ Gibbs sampling
+modified from Wang and Cho's (2019). sample $i\sim$ Uniform(\{0··L} \{\{pos\}) $s_i\gets [\mathrm{MASH}]$ sample $w\sim P_{MLM}(s_i|\mathbf{s};\Theta)$ $s_i\gets w$
+end for
+return s
+```
+
+From Causal LMs Causal LMs factor the probability of a sequence in an autoregressive way as described in Eq. 1. Usually, sampling or decoding from a causal LM is also done in an autoregressive fashion, for example generating one token at a time from left to right. However, the conditional probability $P(\mathbf{x}|c)$ of a sequence $c$ with the constraint $c$ will no longer have the nice linear structure, and this poses a major obstacle for sampling.
+
+Recent practice utilizes the fact that $P(\mathbf{x}|c) \propto P(\mathbf{x};\Theta) \cdot P(c|\mathbf{x})$ where $P(c|\mathbf{x})$ is a differentiable classifier for the constraint, and samples from the unnormalized distribution defined by the product of the two distribution functions with variations of Hamiltonian Monte Carlo (Neal, 2011), such as Langevin Monte Carlo (Kumar et al., 2022; Qin et al., 2022). As a Markov Chain Monte Carlo process, randomly initialized text sample is updated with enough steps by gradient descent with added Gaussian noise.
+
+In our case, we could define the constraint classi-
+
+fier $P(c|\mathbf{x})$ to be based on the distance (measured in embedding or simplex space) between the cue word and a token in the sequence, as suggested by Kumar et al. (2022). This Langevin Dynamics-based method provides a theoretically plausible way to apply WordTies to causal LMs such as GPT-2 (Radford et al., 2019). However, we are yet unable to produce good samples with the hyperparameters provided and some tuning from Kumar et al.'s (2022) algorithm. We leave it as future work to continue on this direction.
+
+# 3.3 Evaluation
+
+We evaluate the performance of WordTies as a word association mining algorithm, by calculating the alignment with human associations and the precision of finding asymmetric associations, and comparing to methods from previous work.
+
+# 3.3.1 Setting
+
+Dataset We execute the experiments on a subset of 3,000 cues in SWOW. The subset of cues is chosen by uniformly sampling without replacement from the set of cues, and is available in the supplement materials. For the filtering of responses, we use English WordNet 2020 (McCrae et al., 2020) and the stop word list from NLTK (Bird et al., 2009).
+
+Pre-trained Models The LMs we use for evaluation are BERT (base-uncased; Devlin et al., 2019), RoBERTa (base; Liu et al., 2019), and DistilBERT (base-uncased; Sanh et al., 2019), all implemented by Wolf et al. (2020).
+
+Hyper-parameters In Algorithm 1, the range of sequence length is set to $L_{min} = 5$ and $L_{max} = 16$ , and the number of MCMC steps $S = 100$ as suggested by Wang and Cho (2019).
+
+# 3.3.2 Baselines
+
+Contextualized2Static Bommasani et al. (2020) evaluated a scheme for averaging contextualized embeddings of a word in various contexts to a static embedding. The obtained static embeddings were then used by Kaneko and Bollegala (2021) to find associated words via cosine similarity. We replicate the static embeddings in Bommasani et al.'s (2020) work by using the best hyper-parameters they found and WikiText-103 (Merit et al., 2017). For every word in the vocabulary of the WikiText-103 corpus, we sample at most 1,000 context sentences containing that word, and average the embeddings from
+
+
Model
Method
Precision@k
Spearman's ρ
1
3
5
10
15
30
BERT
C2S
0.171
0.215
0.219
0.196
0.148
0.132
0.098
Vocab
0.247
0.250
0.222
0.158
0.119
0.073
0.063
WordTies
0.368
0.352
0.327
0.281
0.250
0.195
0.213
RoBERTa
C2S
0.149
0.132
0.119
0.093
0.076
0.053
-0.086
Vocab
0.158
0.139
0.117
0.094
0.081
0.063
0.051
WordTies
0.255
0.320
0.212
0.181
0.161
0.127
0.163
DistilBERT
C2S
0.177
0.222
0.197
0.200
0.191
0.152
0.091
Vocab
0.254
0.256
0.207
0.167
0.132
0.085
0.050
WordTies
0.263
0.245
0.223
0.189
0.168
0.133
0.151
Corpus
0.543
0.452
0.399
0.325
0.287
0.224
0.228
+
+Table 3: Evaluation results for the alignment between human word associations and LM word associations. C2S is short for the Contextualized2Static baseline, VOCab is short for the VOCab Embedding baseline, and Corpus is short for the corpus-only baseline. All reported Spearman's $\rho$ s are statistically significant ( $p < 0.01$ ). The best results for each metric and model combination are marked in bold.
+
+the first layer of the model for each substring of the word and each context.
+
+Vocab Embedding In Rodriguez and Merlo's (2020) recent analysis of word associations in LMs, the authors directly measured the cosine similarity between embeddings in the vocabulary layer without contextualization.
+
+Corpus Only We directly apply the same algorithm and score (2) as in WordTies to the same corpora, English Wikipedia and BookCorpus (Zhu et al., 2015), that were used to train BERT.
+
+# 3.3.3 Statistical Tests
+
+Since the WordTies algorithm involves sampling, we introduce statistical tests to make sure that an irrelevant word will not be chosen as a response by chance. Words are sampled from the multinomial distribution defined in Eq. 2, and to say that a response is not a noisy word that ends up in the top 50 most probable words by chance, the following null hypothesis needs to be rejected: there exist at least $N - 50$ words whose probability as defined in Eq. 2 is significantly lower than the chosen word where $N$ is the size of the vocabulary. And for each pair of words, we test the null hypothesis that the probability of the first word is significantly higher than the second by a binomial test. In our experiments, most of the words in the top-10 response list are statistically significant ( $p < 0.1$ ). Responses that passed the tests are highlighted in Table 6-8 in the appendix. Such tests provide a guideline for choosing the number of samples to generate.
+
+# 3.3.4 Alignment
+
+We measure how good the word associations produced from LMs by the algorithms align with human associations. The alignment is measured by both precision@k, which reflects the overlap, and Spearman's correlation coefficient $(\rho)$ between the scores in the algorithms and the strengths in SWOW, which provides an indication of whether LM and human produce word associations in the same order. The results are shown in Table 3. Our method achieves much better precision@k than the baselines on both BERT and RoBERTa, and results at the same level for DistilBERT. It also achieves higher $\rho$ on all three models. This means associations obtained with WordTies share more similarity with human associations in terms of both word choices and strengths.
+
+# 3.3.5 Asymmetry
+
+We test if the association scores produced by WordTies can be used to find asymmetries in word associations, an important feature of human word associations that previous methods fail to accommodate. The level of asymmetry is measured by the ratio between scores of both directions of association:
+
+$$
+a s y m m e t r y (w _ {1}, w _ {2}) =
+$$
+
+$$
+\underline {{\operatorname* {m a x} \left(\operatorname {s c o r e} \left(w _ {1} \rightarrow w _ {2}\right) , \operatorname {s c o r e} \left(w _ {2} \rightarrow w _ {1}\right)\right)}} \tag {5}
+$$
+
+$$
+\min \left(\operatorname {s c o r e} \left(w _ {1} \rightarrow w _ {2}\right), \operatorname {s c o r e} \left(w _ {2} \rightarrow w _ {1}\right)\right)
+$$
+
+We evaluate the precision by whether the found asymmetric pair has the correct direction as in the SWOW dataset. It is meaningless to measure recall,
+
+
Model
Precision
Spearman's ρ
BERT
98.5%
0.138
RoBERTa
99.6%
0.755
DistilBERT
97.8%
0.166
+
+Table 4: Precision and Spearman's $\rho$ of WordTies for finding asymmetric association pairs. All $\rho$ s are statistically significant $(p < 0.01)$ .
+
+because virtually every pair of words is asymmetric in human associations. For the same reason, precision is only calculated on the overlap between SWOW and the output of WordTies. Additionally, we measure the Spearman's $\rho$ of the asymmetric measure between WordTies and human word associations to see if LM and human perceive similar level of asymmetry.
+
+See Table 4 for the results. The baseline methods are unable to find asymmetric word associations because cosine similarity is symmetric, and therefore they are not listed for comparison. Meanwhile, WordTies is able to find asymmetric word associations that have the same direction as in SWOW, and there is a positive correlation for the level of asymmetry.
+
+# 3.3.6 Discussion
+
+Running Time On average it takes around 12s to generate 1,000 samples from BERT or RoBERTa with Algorithm 1, and 6s for DistilBERT. Time is measured on a single NVIDIA A40 GPU with a batch size of 2048. In our experiments we generated at least 3,000 samples per cue. For other models, the statistical tests described in §3.3.3 provide a framework for estimating the number of samples needed and hence the running time.
+
+Comparison of Methods We have already discussed how the symmetrical nature of cosine similarity used in previous methods do not fit well with word association. Adding to that, we suspect there are 2 other reasons behind the inferior performance of baseline methods: First, previous methods try to obtain a unified embedding for each word from contextualized models by either averaging embeddings in different contexts, or simply using the layers before contextualization. Such conversions defeat the purpose of building contextualized models and incur information loss. For example, contextualized BERT embeddings for a polysemous word are distinct enough for accurate word sense disambiguation (Hadiwinoto et al., 2019) while averaging elim
+
+inates the distinctions. Second, embeddings from a contextualized model must be computed with a context sentence, which is sampled from an external corpus in the Contextualized2Static method. The choice of corpus or context affects embeddings, which introduces confounding biases to word association measurements. Conversely, a “pseudo corpus” is generated from the LM in WordTies, similar to a training data extraction attack (Carlini et al., 2021) on the LM. No external factors are involved so it is certain that we are only examining the LM itself. When we apply the same score as in WordTies to the real corpus used to train the LM, we observe an overlap with human word associations that is larger than any of the LMs evaluated. This observation hints that, no matter a LM can overcome reporting bias (Shwartz and Choi, 2020) and extrapolate beyond the corpus or not, it still has a gap to reach the upper bound of word associations.
+
+Comparison of Models BERT was trained on English Wikipedia and BookCorpus (Zhu et al., 2015), and achieves the best overlap with human word associations. RoBERTa is a replica of BERT with more carefully selected hyper-parameters and a larger training corpus, which additionally incorporates news, stories, and web content. However, with better training settings it performs worse than BERT on the word association task. In this sense, world knowledge in RoBERTa is not as similar to that of humans, and we suspect it is because of the relevance and quality of the additional training corpus. The sampling process in WordTies is a reflection of the corpus (Carlini et al., 2021), and we observed more URLs and email addresses in the samples from RoBERTa, which are irrelevant to the knowledge involved in word association. DistilBERT is a smaller model trained on the same corpus as BERT and with BERT as the teacher. Embedding-based baselines perform on par with sampling-based method for DistilBERT, and we conjecture the reason to be that DistilBERT is not as good an MLM in the first place. Sanh et al. (2019) only reported that the model performs equally well as BERT on downstream tasks but not the MLM objective, and few studies used DistilBERT in MLM-based zero-shot tasks.
+
+# 4 Language Model Word Associations
+
+WordTies, as a more suitable method for probing word associations in LMs, enables us to scrutinize
+
+
+Figure 3: Precision@k for cue-response pairs involving commonsense (upper) and lexical (lower, hatched) knowledge. In each group of bars, the bars from the left to the right are for WordTies, Contextualized2Static and VOCab Embedding respectively.
+
+
Model
Spearman's ρ
BERT
-0.248
RoBERTa
-0.243
DistilBERT
-0.239
+
+Table 5: Correlation between precision@50 and reasoning path length for different models. All $\rho$ s are statistically significant ( $p < 0.01$ ).
+
+the properties of those associations.
+
+Semantic Knowledge We observed that LMs are slightly better at associating words by commonsense knowledge than lexically, judging by the precision@k for cue-response pairs broken down by the type of knowledge (Figure 3). This is consistent with the finding that humans use commonsense knowledge more for associations (Table 2).
+
+Reasoning Path Length LMs' ability to find human-like associations is negatively associated with the length of the reasoning path to the response. In other words, the more hops to get from the cue to the response in the KGs, the harder for LMs to associate the cue with the response. See Table 5 for the correlation coefficients. Meanwhile, longer reasoning paths only slightly degrade human association strength (§2.2).
+
+# 5 Related Work
+
+The study of Rodriguez and Merlo (2020) is mostly similar to ours, where they concluded that properties of human word associations, discovered in the 1970s (Tversky, 1977; Tversky and Gati, 1978; Tversky and Hutchinson, 1986), still hold in language models. They probed associations by ranking words by the cosine similarity of embeddings
+
+in the vocabulary layer, and measured asymmetry by handcrafted templates. Evert and Lapesa (2021) also tested word associations with word embeddings, but they held the same view as us that it is self-contradictory to obtain decontextualized embeddings from contextualized LM, and therefore did not extend their study on LMs. Measuring and mitigating social biases in pre-trained LMs, often formulated as measuring associations to a certain set of words, is a more popular task. Associations to the words related to social aspects are often measured by the cosine similarity of embeddings aggregated from context sentences (May et al., 2019; Bommasani et al., 2020; Kaneko and Bollegala, 2021). As we have been arguing, cosine similarity is not compatible with the asymmetry of word associations, while our algorithm takes asymmetry into consideration. In some work, biases are also measured via constrained generation, where the constraints (often prompts or templates) are collected from the web (Dhamala et al., 2021) or by crowdsourcing (Nangia et al., 2020). In comparison, our method relies on no external resources, and no confounder is introduced consequently.
+
+Constrained text generation is used to evaluate the commonsense reasoning ability of LMs through other tasks. CommonGen (Lin et al., 2020) is a task where, instead of only one cue word as in our study, multiple words pertaining to commonsense concepts are required to be present in the generated text, as a way to measure how well LMs can link concepts together with commonsense knowledge. In abductive commonsense reasoning (Bhagavatula et al., 2020), LMs are used to complete text when the beginning and ending are given, to test their ability to reason about pre- and post-conditions.
+
+It is considered non-trivial to impose constraints on left-to-right generations for causal LMs. Mostly, recent work (Qin et al., 2022; Dathathri et al., 2020) focus on constrained (also known as controlled) decoding, a related problem of finding the sequence that maximizes the likelihood, by modifying the original distribution. Prior to the Langevin Dynamics algorithm by Qin et al. (2022) and Kumar et al. (2022), Miao et al. (2019) proposed CGMH, a constrained sampling algorithm in discrete space based on Metropolis-Hasting sampling, but it uses a bidirectional causal LM to reduce computation (i.e. the LM also predicts the previous word based on suffixes). More recent causal LMs, such as GPT (Radford et al., 2019; Brown et al., 2020), are uni-
+
+directional, and it is therefore not very meaningful to apply CGHM in our work.
+
+In a broader context, it has been an interesting idea to try to explain the behavior of neural networks by optimizing over the input. Sampling from a language model, as in our WordTies algorithm and in Carlini et al.'s (2021), can be seen as optimizing over the discrete input text sequence to minimize the negative log-likelihood with noise, and it provides a way to uncover how LMs associate words or properties of the training corpus. Bauerle and Wexler (2020) optimized the activation of certain neurons in BERT over the input sequence, as an attempt to find the responsibilities of individual neurons, and Goh et al. (2021) applied similar thoughts on vision-language models.
+
+# 6 Conclusion
+
+In this study, we verified the proposition that examining discrete sequence samples from LMs is a better approach than inspecting embedding spaces for word associations. We also explored properties related to semantic knowledge and reasoning in both human and LM word associations. These revealed the high potential of using word associations as a proxy for probing, and as a signal for finetuning language models.
+
+# Limitations
+
+We have yet to apply the WordTies algorithm to popular LMs such as GPT-2 that are causal, despite having provided a theoretically sound method to do so in §3.2. Due to the constraint of computation resources, we only evaluated our algorithm on the base version of popular pre-trained LMs. Models with a larger number of parameters, such as bert-large-cased and roberta-large, are yet to be evaluated. For the same reason, we were only able to run the experiments on a subset of SWOW. Our method is notably slower than simply running k-nearest-neighbor search on embedding spaces, although the running time is still acceptable and we have a method for estimating the running time required (§3.3.6). Potential downstream use cases of word associations, such as measuring social biases in language models, are not evaluated in this paper.
+
+# Ethics Statement
+
+As discussed in §5, measuring and mitigating social biases have been a prominent and motivating application of word associations. The algorithm
+
+we proposed contributes a practical way to measure associations to words related to social aspects (such as profession, gender, race, and other aspects) in language models with higher precisions and fewer confounders. These associations, in addition to being a measure of biases, could potentially serve as a signal for fine-tuning LMs, and lead to language models with less biases.
+
+# Acknowledgements
+
+We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), 5010405. This work is also supported in part by a gift from Scotiabank. Robot icon in Figure 1 was designed by OpenMoji, under CC BY-SA 4.0 license.
+
+# References
+
+Alex Bäuerle and James Wexler. 2020. What does BERT dream of? In 3rd Workshop on Visualization for AI Explainability (VISxAI) at IEEE VIS, Online, October 26, 2020.
+Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. 2020. Abductive commonsense reasoning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Steven Bird, Edward Loper, and Ewan Klein. 2009. Natural Language Processing with Python. O'Reilly Media Inc.
+Rishi Bommasani, Kelly Davis, and Claire Cardie. 2020. Interpreting pretrained contextualized representations via reductions to static embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4758-4781, Online. Association for Computational Linguistics.
+Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.
+Nicholas Carlini, Florian Tramér, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine
+
+Lee, Adam Roberts, Tom B. Brown, Dawn Song, Ülfar Erlingsson, Alina Oprea, and Colin Raffel. 2021. Extracting training data from large language models. In 30th USENIX Security Symposium, USENIX Security 2021, August 11-13, 2021, pages 2633-2650. USENIX Association.
+Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language models: A simple approach to controlled text generation. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Simon De Deyne, Danielle J. Navarro, Amy Perfors, Marc Brysbaert, and Gert Storms. 2019. The "Small World of Words" English word association norms for over 12,000 cue words. Behavior Research Methods, 51(3):987-1006.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.
+Jwala Dhamala, Tony Sun, Varun Kumar, Satyapriya Krishna, Yada Pruksachatkun, Kai-Wei Chang, and Rahul Gupta. 2021. BOLD: dataset and metrics for measuring biases in open-ended language generation. In *FAccT'21: 2021 ACM Conference on Fairness, Accountability, and Transparency*, Virtual Event / Toronto, Canada, March 3-10, 2021, pages 862-872. ACM.
+Stefan Evert and Gabriella Lapesa. 2021. FAST: A carefully sampled and cognitively motivated dataset for distributional semantic evaluation. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 588-595, Online. Association for Computational Linguistics.
+Gabriel Goh, Chelsea Voss, Daniela Amodei, Shan Carter, Michael Petrov, Justin Jay Wang, Nick Cammarata, and Chris Olah. 2021. Multimodal neurons in artificial neural networks. OpenAI blog.
+Christian Hadiwinoto, Hwee Tou Ng, and Wee Chung Gan. 2019. Improved word sense disambiguation using pre-trained contextualized word representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5297-5306, Hong Kong, China. Association for Computational Linguistics.
+Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python.
+
+Masahiro Kaneko and Danushka Bollegala. 2021. Debiasing pre-trained contextualised embeddings. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 1256-1266. Association for Computational Linguistics.
+George R Kiss, Christine Armstrong, Robert Milroy, and James Piper. 1973. An associative thesaurus of English and its computer analysis. The computer and literary studies, 153.
+Sachin Kumar, Biswajit Paria, and Yulia Tsvetkov. 2022. Constrained sampling from language models via Langevin dynamics in embedding spaces. CoRR, abs/2205.12558.
+Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xi-ang Ren. 2020. Commongen: A constrained text generation challenge for generative commonsense reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 1823-1840. Association for Computational Linguistics.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
+Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 622-628. Association for Computational Linguistics.
+John Philip McCrae, Alexandre Rademaker, Ewa Rudnicka, and Francis Bond. 2020. English WordNet 2020: Improving and extending a WordNet for English using an open-source methodology. In Proceedings of the LREC 2020 Workshop on Multimodal Wordnets (MMW2020), pages 14-19, Marseille, France. The European Language Resources Association (ELRA).
+Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+Ning Miao, Hao Zhou, Lili Mou, Rui Yan, and Lei Li. 2019. CGMH: constrained sentence generation by metropolis-hastings sampling. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6834-6842. AAAI Press.
+
+George A. Miller. 1995. WordNet: A lexical database for english. Communications of the ACM, 38(11):39-41.
+Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. Crows-pairs: A challenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 1953-1967. Association for Computational Linguistics.
+Radford M. Neal. 2011. MCMC using Hamiltonian dynamics. In Steve Brooks, Andrew Gelman, Galin Jones, and Xiao-Li Meng, editors, Handbook of Markov Chain Monte Carlo, chapter 5. Chapman and Hall/CRC, Boca Raton, FL, USA.
+Douglas L Nelson, Cathy L McEvoy, and Thomas A Schreiber. 2004. The university of south florida free association, rhyme, and word fragment norms. Behavior Research Methods, Instruments, & Computers, 36(3):402-407.
+Tuan-Phong Nguyen, Simon Razniewski, Julien Romero, and Gerhard Weikum. 2021. Refined commonsense knowledge from large-scale web contents. CoRR, abs/2112.04596.
+Gregory Piatetsky-Shapiro. 1991. Discovery, analysis, and presentation of strong rules. In Gregory Piatetsky-Shapiro and William J. Frawley, editors, Knowledge Discovery in Databases, pages 229-248. AAAI/MIT Press.
+Lianhui Qin, Sean Welleck, Daniel Khashabi, and Yejin Choi. 2022. COLD decoding: Energy-based constrained text generation with Langevin dynamics. CoRR, abs/2202.11705.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
+Maria A. Rodriguez and Paola Merlo. 2020. Word associations and the distance properties of context-aware word embeddings. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 376-385, Online. Association for Computational Linguistics.
+Victor Sanh, Lysandre Debut, Julien Chaumont, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR, abs/1910.01108.
+Maarten Sap, Vered Shwartz, Antoine Bosselut, Yejin Choi, and Dan Roth. 2020. Commonsense reasoning for natural language processing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 27-33, Online. Association for Computational Linguistics.
+
+Vered Shwartz and Yejin Choi. 2020. Do neural language models overcome reporting bias? In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 6863-6870. International Committee on Computational Linguistics.
+Avijit Thawani, Biplav Srivastava, and Anil Singh. 2019. SWOW-8500: Word association task for intrinsic evaluation of word embeddings. In Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP, pages 43-51, Minneapolis, USA. Association for Computational Linguistics.
+Amos Tversky. 1977. Features of similarity. Psychological review, 84(4):327-352.
+Amos Tversky and Itamar Gati. 1978. Studies of similarity. Cognition and categorization, pages 79-98.
+Amos Tversky and J Hutchinson. 1986. Nearest neighbor analysis of psychological spaces. Psychological review, 93(1):3-22.
+Alex Wang and Kyunghyun Cho. 2019. BERT has a mouth, and it must speak: BERT as a Markov random field language model. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 30-36, Minneapolis, Minnesota. Association for Computational Linguistics.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 - Demos, Online, November 16-20, 2020, pages 38-45. Association for Computational Linguistics.
+Qiuye Zhao and Mitch Marcus. 2012. Long-tail distributions and unsupervised learning of morphology. In Proceedings of COLING 2012, pages 3121-3136, Mumbai, India. The COLING 2012 Organizing Committee.
+Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 19-27. IEEE Computer Society.
+
+# A Example Associations
+
+The following tables provide examples of word associations found by WordTies. Human associations from SWOW are also included for reference. Words that did not pass the statistical tests are in italics.
+
+
Model
Top-10 Responses
BERT
web, search, site, page, www, on-line, news, map, available, internet
calculate, computer, add, figure, understand, think, math, data, does not, figure out
+
+Table 8: Example associations with compute as the cue.
\ No newline at end of file
diff --git a/wordtiesmeasuringwordassociationsinlanguagemodelsviaconstrainedsampling/images.zip b/wordtiesmeasuringwordassociationsinlanguagemodelsviaconstrainedsampling/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..26f08728bedb27e1f329c225757a553cf3aaab97
--- /dev/null
+++ b/wordtiesmeasuringwordassociationsinlanguagemodelsviaconstrainedsampling/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9373aca78ea5151985fa9d2cff1e371de09916b53a2776e53db07d9d9f53a8a8
+size 446073
diff --git a/wordtiesmeasuringwordassociationsinlanguagemodelsviaconstrainedsampling/layout.json b/wordtiesmeasuringwordassociationsinlanguagemodelsviaconstrainedsampling/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..9b0d8009a7c3b1b4428c2932c3c72bcbb48f0faa
--- /dev/null
+++ b/wordtiesmeasuringwordassociationsinlanguagemodelsviaconstrainedsampling/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cf72717438451244febbc88b52043df1d8d009a5640d1492730cdf34d5d1ae3c
+size 346637
diff --git a/wspellerrobustwordsegmentationforenhancingchinesespellingcheck/f5c3ed13-9786-46fb-bd08-41453656c705_content_list.json b/wspellerrobustwordsegmentationforenhancingchinesespellingcheck/f5c3ed13-9786-46fb-bd08-41453656c705_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..4f0d0b5bf1c6fd385be8d12c9c1e349c8560922e
--- /dev/null
+++ b/wspellerrobustwordsegmentationforenhancingchinesespellingcheck/f5c3ed13-9786-46fb-bd08-41453656c705_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8b4566337f488caf20b88d2df382eb899d43dca851c921d37ac185619ba64e7f
+size 77626
diff --git a/wspellerrobustwordsegmentationforenhancingchinesespellingcheck/f5c3ed13-9786-46fb-bd08-41453656c705_model.json b/wspellerrobustwordsegmentationforenhancingchinesespellingcheck/f5c3ed13-9786-46fb-bd08-41453656c705_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..4c7e6f7b5120b0ab81d157dc88f2213b933fd890
--- /dev/null
+++ b/wspellerrobustwordsegmentationforenhancingchinesespellingcheck/f5c3ed13-9786-46fb-bd08-41453656c705_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7d8500d3c8d80cc1fef1d5e334d7ef8f56914cbe773613b491fb77439462e76e
+size 92002
diff --git a/wspellerrobustwordsegmentationforenhancingchinesespellingcheck/f5c3ed13-9786-46fb-bd08-41453656c705_origin.pdf b/wspellerrobustwordsegmentationforenhancingchinesespellingcheck/f5c3ed13-9786-46fb-bd08-41453656c705_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b012155bf3989382ae62da6b58c1de82a2d2c6fd
--- /dev/null
+++ b/wspellerrobustwordsegmentationforenhancingchinesespellingcheck/f5c3ed13-9786-46fb-bd08-41453656c705_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2758760a1ec1acd682dd93749a3e2d1ca4985ecd245d7dbf46bdbf0cabf8cdac
+size 580845
diff --git a/wspellerrobustwordsegmentationforenhancingchinesespellingcheck/full.md b/wspellerrobustwordsegmentationforenhancingchinesespellingcheck/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..ecf2fc7f2b507a7c480dc25fbd078eb67d574c7c
--- /dev/null
+++ b/wspellerrobustwordsegmentationforenhancingchinesespellingcheck/full.md
@@ -0,0 +1,360 @@
+# WSpeller: Robust Word Segmentation for Enhancing Chinese Spelling Check
+
+Fangfang Li $^{1}$ , Youran Shan $^{1}$ , Junwen Duan $^{1,*}$ , Xingliang Mao $^{2}$ , MinLie Huang $^{3}$
+
+$^{1}$ School of Computer Science and Engineering, Central South University
+
+$^{2}$ Institute of Big Data And Internet Innovation, Hunan University of Technology and Business $^{3}$ Beijing National Research Center for Information Science and Technology, Tsinghua University
+
+{lifangfang, shanyouran, jwduan}@csu.edu.cn
+
+xingliangmao0929@163.com
+
+aihuang@tsinghua.edu.cn
+
+# Abstract
+
+Chinese spelling check (CSC) detects and corrects spelling errors in Chinese texts. Previous approaches have combined character-level phonetic and graphic information, ignoring the importance of segment-level information. According to our pilot study, spelling errors are always associated with incorrect word segmentation. When appropriate word boundaries are provided, CSC performance is greatly enhanced. Based on these findings, we present WSpeller, a CSC model that takes into account word segmentation. A fundamental component of WSpeller is a W-MLM, which is trained by predicting visually and phonetically similar words. Through modification of the embedding layer's input, word segmentation information can be incorporated. Additionally, a robust module is trained to assist the W-MLM-based correction module by predicting the correct word segmentations from sentences containing spelling errors. We evaluate WSpeller on the widely used benchmark datasets SIGHAN13, SIGHAN14, and SIGHAN15. Our model is superior to state-of-the-art baselines on SIGHAN13 and SIGHAN15 and maintains equal performance on SIGHAN14.
+
+# 1 Introduction
+
+Chinese Spelling Check (CSC) aims to detect and correct spelling errors in Chinese text, and can be used in many other natural language processing (NLP) applications, including search optimization (Martins and Silva, 2004; Gao et al., 2010), location extraction (Middleton et al., 2018), optical character recognition (OCR) (Afli et al., 2016), Automatic Speech Recognition (ASR) (Chao and Chang, 2020) and text classification (Xu et al., 2021).
+
+Typos in the text are mostly phonetic or graphic of the correct character (Liu et al., 2010). Previous
+
+
State
Text
Correct
比起 前段时间 已经 很 轻松 了
Wrong
比 七天段时间 已经 很 轻 送 了
Translation
It's easier than it's been for a while
+
+Table 1: An example of Chinese spelling error. Typos and the right characters are highlighted in red, while the spaces within the text show the word boundaries.
+
+work focus on how to use the features of pronunciation and shape (Cheng et al., 2020; Ji et al., 2021). However, little attention has been devoted to the incorrect word segmentation induced by typos. As shown in Table 1, several typos not only affect semantic but also affect word segmentation. In this case, we perform a preliminary experiment detailed in Sec 3 to determine the impact of word segmentation on CSC. Experimental results indicate that if the appropriate word boundaries are provided, CSC performance will improve dramatically. However, it is also challenging to achieve robust word segmentations in the presence of typos.
+
+To cope with the problem, we propose WSpeller (Word Speller), which consists of two submodules, namely the word segmentation module (WS-Module) and the correction module (C-Module). The WS-Module predicts the right word boundaries based on the original text (may have typos). Following that, the segment-level information is provided to C-Module for improving correction. The two submodules adopt different text encoding modules, and the hidden state of the WS-Module is passed to the C-Module, where the two submodules interact and are trained in a multi-task manner for mutual benefit. W-MLM (Word-Masked Language Model) is proposed as the text encoding module of C-Module because it adapts BERT (Devlin et al., 2019) with different embedding information and special replacement strategies and pre-trained on a large Chinese Corpus. WSpeller
+
+is evaluated against three widely used datasets: SIGHAN13 (Wu et al., 2013), SIGHAN14 (Yu et al., 2014), SIGHAN15 (Tseng et al., 2015). The experimental results show that WSpeller improves the performance of CSC. WSpeller increased the F1 scores by $3\%$ and $2.1\%$ in SIGHAN13 and SIGHAN15 and achieved almost the same performance as the state-of-the-art on SIGHAN14. Ablation studies reveal that both the word segmentation and the W-MLM contribute considerably to the performance. The contribution of this paper is summarized as follows:
+
+- We are among the first to use the robust word segmentation in CSC, and our findings confirm its efficacy.
+- We propose W-MLM to improve task adaptability, in which the embedding and replacement strategy of the masked language model is tailored to the specific needs of the CSC task.
+- Experiments on the SIGHAN benchmark datasets demonstrate that our method outperformed strong competitors.
+
+# 2 Related Work
+
+In recent years, CSC has received widespread attention. Early works focus on rules and confusion sets to deal with CSC, follows pipelines, including detection, candidate generation and selection (Xie et al., 2015; Xin et al., 2014; Zhang et al., 2015; Chen et al., 2013). These methods are limited by manually set rules and fixed confusion sets.
+
+With the development of deep learning, most methods focus on how to integrate the phonetic and graphic features of characters into the model. Hong et al. (Hong et al., 2019) proposed FASPell, which exploited the phonetic and graphic information to exclude candidates with low similarity. Cheng et al. (Cheng et al., 2020) proposed SpellGCN, which employed GCN to combine the phonetic and graphic information as the output layer. Ji et al. (Ji et al., 2021) proposed SpellBERT, which used GCN to model the Pinyin and radical of characters as the embedding layer. Wang et al. (Wang et al., 2021) proposed DCN, which added phonetic embedding when generating candidates, used attention mechanism to calculate the fraction between adjacent characters, and obtained the best path. Bao et al. (Bao et al., 2020) proposed to use semantic
+
+比七天段时间已经很轻送了
+
+0 0 0 0 0 0 0 0 0 0 0 0
+
+01234567891011
+
+比七天段时间已经很轻送了
+
+001010101101
+
+01234567891011
+
+Embedding of Original BERT
+
+Token
+
+embedding
+
+Segment
+
+embeddin
+
+Position
+
+embedding
+
+of preliminary experiment
+
+Token
+
+embeddir
+
+Word
+
+embedding
+
+Position
+
+embedding
+
+Figure 1: Change of the embedding layer. The sequence corresponding to word embedding represents the word segmentation of the text.
+
+candidates to expand the confusion set and use a block-based structure to correct errors.
+
+Some methods focus on training skills and the positive impact of detection on correction. Zhang et al. (Zhang et al., 2020) adopted GRU as detection network and BERT as correction network, and propose a soft masking strategy. Gan et al. (Gan et al., 2021) propose to judge the difficulty of learning samples through loss and apply self-supervised curriculum learning to CSC.
+
+In recent years, the methods focus on integrating the similarity between characters into the model. However, they ignore the importance of word segmentation to CSC. In this paper, the word segmentation information of text is integrated into the model through pre-training and fine-tuning. Only a simple model structure can achieve a good correction effect.
+
+# 3 Preliminary Experiment
+
+Our primary premise is that the presence of typos hurts word segmentation and the ability correct would improve if better word boundaries were available. To validate this, we conduct a preliminary experiment in which we inject the precise word segmentation into BERT. As shown in Figure 1, the boundaries of words are represented by 0s and 1s, where 1 indicates segmentation is required before the current character. The word segmentation is obtained by LAC1. The result is shown in Table 2. We find that even a simple BERT can improve the F1 score of detection and correction by $6.2\%$ and $7.4\%$ , with precise word boundaries, implying that word boundaries play a significant role in the CSC.
+
+
Method
Detection Level
Correction Level
Pre
Rec
F1
Pre
Rec
F1
BERT
75.8
74.8
75.2
73.4
72.4
72.9
Seg
79.9
79.9
79.9
78.3
78.3
78.3
+
+Table 2: Results of preliminary experiments. BERT means the conventional bert-based method. Seg means the method to change the input information of the embedding layer.
+
+We, therefore, propose WSpeller, which includes word segmentation information for CSC. However, obtaining correct word boundaries for sentences containing typos is a challenge. WSpeller has been further enhanced with a robust WS-Module so that it can predict the word boundaries correctly.
+
+# 4 Problem Definition
+
+CSC aims to detect and correct typos in a given text $X = (x_{1}, x_{2}, \dots, x_{n})$ , and finally get the detection sequence $D = (d_{1}, d_{2}, \dots, d_{n})$ representing whether corresponding $x_{i}$ is a typo and the correction $Y = (y_{1}, y_{2}, \dots, y_{n})$ without spelling error.
+
+# 5 The Proposed Model
+
+In this section, we present the WSpeller, whose overall architecture is illustrated in Figure 2. WSpeller consists of a WS-Module (5.1) and a C-Module (5.3). While the WS-Module aims to predict the correct word boundaries in spite of spelling errors, the C-Module aims to correct the typos in conjunction with the WS-Module. To fully exploit the word segmentation information, the C-Module uses a pre-trained language model W-MLM (5.2).
+
+# 5.1 Word Segmentation Module
+
+In this section, WS-Module which aims to predict the word segmentations of corrected text from source text is introduced in detail. Given a source text $X$ , we can easily get the hidden state $\pmb{H}$ by BERT. Then we can get the predicted word segmentation result $S = (s_{1}, s_{2}, \dots, s_{n})$ by word segmentation network. The method is as follows:
+
+$$
+\boldsymbol {H} = \operatorname {B E R T} (X) \tag {1}
+$$
+
+$$
+S = \operatorname {S o f t m a x} \left(W _ {1} H + b _ {1}\right) \tag {2}
+$$
+
+where $W_{1}$ and $b_{1}$ are learnable matrices. Each item $s_i \in \{0, 1\}$ in $S$ indicates the word boundaries, and $s_i = 1$ means that word segmentation is required
+
+
+Figure 2: Overview of the WSpeller. WS-Module aims to predict the word segmentation of the target text based on the source text. After that, the predicted word segmentation result is used in C-Module, and finally obtain the target text.
+
+before the current character $x_{i}$ . In the training phase, we use LAC to segment the target text $Y$ as the label of the WS-Module.
+
+# 5.2 Pre-training W-MLM
+
+The text encoding module of the C-Module is WMLM, whose overall structure is shown in Figure 3. The skeleton of W-MLM is BERT. Since the text is always single sentence input in CSC, the W-MLM ignores the next sentence prediction (NSP).
+
+The key points of W-MLM lie in three parts, including replacement strategy selection, replacement character generation and word segmentation information integration. Replacement strategy selection and replacement character generation are used to generate training sets.
+
+# 5.2.1 Replacement Strategy Selection
+
+In order for W-MLM to learn the knowledge of continuous errors, misuse of correct words and similarity between characters, different from the conventional replacement, we introduce the re
+
+
+Figure 3: Overview of W-MLM.
+
+
Strategy
Prob
Example
Origin&Same
10%
跟我一起(qi)去爬山吗
[MASK]
20%
跟我一[MASK]去爬山吗
Random
10%
跟我一里(li)去爬山吗
Graphics
10%
跟我一超(chao)去爬山吗
Pronunciation
30%
跟我一气(qi)去爬山吗
Continuous
10%
跟我仪器(yi qi)去爬山吗
10%
跟我已棋(yi qi)去爬山吗
+
+Table 3: Probability of each replacement strategy being selected. The replaced characters are marked in red, and the pronunciation of it is in parentheses. Origin&Same means the original text and the same character replacement. [MASK] means [MASK] replacement. Random means random character replacement. Graphics means graphically similar character replacement. Pronunciation means phonetically similar character replacement. Continuous means continuous similar character replacement and correct word replacement.
+
+placement of continuous similar characters, correct words and similar characters.
+
+In accordance with BERT, we randomly select $15\%$ of the characters in the text for replacement. Based on empirical evidence, we establish the probability of a certain replacement strategy being used for a particular character. Table 3 shows the probability of each strategy being chosen. As shown in Figure 3, W-MLM is finally trained to predict its original text based on the partially replaced text.
+
+# 5.2.2 Replacement Character Generation
+
+We are prone to mistype characters as similar and commonly used characters when using input methods. Inspired by this, we comprehensively consider similarity and commonality to generate candidates.
+
+For a character $ch$ to be replaced, its initial can-
+
+didate list is the vocabulary. Firstly, we obtain the phonetic and graphic similarity between each candidate and $ch$ by calculating the edit distance of ideographic description sequence (IDS) and Pinyin sequence (Hong et al., 2019). At the same time, the frequency of candidates is counted in SogouCA $^2$ .
+
+Then, we initialize the phonetic candidate list $\text{Candidate}_p = (cp_1, cp_2, \dots, cp_k)$ and graphic candidate list $\text{Candidate}_g$ for $ch$ according to the rule of preferentially taking the characters with higher similarity and higher frequency. We empirically set $k$ to 30, and update the frequency list $\text{Count}^p = (\text{count}_1^p, \text{count}_2^p, \dots, \text{count}_k^p)$ and $\text{Count}^g$ . After that, we comprehensively consider similarity and commonality to obtain the phonetic score $\text{Score}P = (sp_1, sp_2, \dots, sp_m)$ and graphic score $\text{Score}G$ . $\text{Score}G$ is calculated in the same way as $\text{Score}P$ . Take $sp_i$ as an example:
+
+$$
+\operatorname {s c o r e} _ {i} ^ {c} = \frac {\operatorname {c o u n t} _ {i} ^ {p} - \operatorname {m i n} \left(\operatorname {C o u n t} ^ {p}\right)}{\operatorname {m a x} \left(\operatorname {C o u n t} ^ {p}\right) - \operatorname {m i n} \left(\operatorname {C o u n t} ^ {p}\right)} \tag {3}
+$$
+
+$$
+\operatorname {s c o r e} _ {i} ^ {s} = \text {P r o n u n c i a t i o n} \left(\operatorname {c p} _ {i}, \operatorname {c h}\right) \tag {4}
+$$
+
+$$
+s p _ {i} = w _ {c} \times s c o r e _ {i} ^ {s} + w _ {s} \times s c o r e _ {i} ^ {s} \tag {5}
+$$
+
+where $\max(\text{Count}^p)$ and $\min(\text{Count}^p)$ is the Max and Min in $\text{Count}^p$ . $\text{score}_i^c$ , $\text{score}_i^s$ is the frequency and similarity score. Pronunciation is to calculate the pronunciation similarity between $c p_i$ and $c h$ . $w_c$ and $w_s$ is the weights of $\text{score}_i^c$ , $\text{sim}_j^p$ . We empirically set them to 0.3 and 0.7.
+
+Finally, the higher the score $sp_i$ of $cp_i$ , the more likely $cp_i$ is to be used to replace $ch$ in phonetically similar character replacement, as are other replacement strategies.
+
+# 5.2.3 Word Segmentation Information Integration
+
+In WSpeller, the text is input as a single sentence. Therefore, the segment sequence of the original BERT for the sentence number is actually all zero. Obviously, it does not contain any information useful for correction. In this regard, we integrate word segmentation information into W-MLM by replacing segmentation embedding with word embedding. The word segmentation results $S$ of the text before random replacement is obtained by LAC. Thus, the model can learn the changes of embedding layer in the pre-training stage, so that the model can perform better in the fine-tuning stage.
+
+# 5.3 Correction Module
+
+As shown in Figure 2, the C-Module is composed of W-MLM and correction network. Different from the WS-Module, we have obtained the predicted word segmentation result $S$ at this stage. We assume that $S$ is the word segmentation of the target text $Y$ , and this information will play a great role in correction.
+
+Hidden state $\tilde{H}$ can be obtained by token sequence $T$ , word segmentation $S$ , and position sequence $P$ . In addition, we believe that C-Module should have a positive impact on WS-Module. Therefore, we add the $H$ of the WS-Module and the $\tilde{H}$ of the C-Module as the final hidden state $\tilde{H}'$ . After that, the target text $Y$ is further obtained:
+
+$$
+\tilde {\boldsymbol {H}} = W - M L M (T, S, P) \tag {6}
+$$
+
+$$
+\tilde {\boldsymbol {H}} ^ {\prime} = \boldsymbol {H} + \tilde {\boldsymbol {H}} \tag {7}
+$$
+
+$$
+\tilde {\boldsymbol {H}} ^ {\prime \prime} = \text {L a y e r N o r m} \left(G E L U \left(W _ {2} \tilde {H} ^ {\prime} + b _ {2}\right)\right) \tag {8}
+$$
+
+$$
+Y = \operatorname {S o f t m a x} \left(W _ {3} \tilde {H} ^ {\prime \prime} + b _ {3}\right) \tag {9}
+$$
+
+where $W_{2}, W_{3}, b_{2}, b_{3}$ are learnable parameters, LayerNorm and GELU represent the layer normalization and activation functions in BERT, respectively.
+
+# 5.4 Learning
+
+WS-Module and C-Module are jointly trained. The learning process is driven by optimizing correction and word segmentation respectively.
+
+$$
+\mathcal {L} _ {s} = - \sum_ {i = 1} ^ {n} \log P _ {s} \left(s _ {i} = \operatorname {t r u t h} _ {s i} | X\right) \tag {10}
+$$
+
+$$
+\mathcal {L} _ {c} = - \sum_ {i = 1} ^ {n} \log P _ {c} \left(y _ {i} = \operatorname {t r u t h} _ {y i} | X\right) \tag {11}
+$$
+
+$$
+\mathcal {L} = \lambda \times \mathcal {L} _ {s} + (1 - \lambda) \times \mathcal {L} _ {c} \tag {12}
+$$
+
+
Training Data
Sentences
ASL
Typos
Hybrid
271,329
42.6
381,962
SIGHAN13
700
41.8
343
SIGHAN14
3,437
49.6
5,122
SIGHAN15
2,338
31.3
3,037
Total
277,804
42.6
390,464
Testing Data
Sentences
ASL
Typos
SIGHAN13
1,000
74.3
1,224
SIGHAN14
1,062
50.0
771
SIGHAN15
1,100
30.6
703
Total
3,162
50.9
2,698
+
+Table 4: Statistics of the datasets. ASL is the average length of sentences
+
+where $\mathcal{L}_s$ is the loss of WS-Module, $truth_{si}$ is the correct result of $s_i$ . $\mathcal{L}_c$ is the loss of C-Module, $truth_{yi}$ is the correct result of $y_i$ . $\lambda \in [0,1]$ is the weight of $\mathcal{L}_s$ , which represents the focus of model training. When $\lambda$ is closer to 1, it means that more attention is paid to the WS-Module.
+
+# 6 Experiments
+
+# 6.1 Dataset and Metrics
+
+In the pre-training stage, we used many preprocessed data from Wikipedia used in TaCL (Su et al., 2021). In the fine-training, the data used for training includes all the data in Hybrid (Wang et al., 2018) and the training sets in SIGHAN13 (Wu et al., 2013), SIGHAN14 (Yu et al., 2014), and SIGHAN15 (Tseng et al., 2015). To evaluate, we use the test sets of SIGHAN13, SIGHAN14, and SIGHAN15. Since the SIGHAN is traditional, we use the processed data3 which follow the previous work (Cheng et al., 2020) and convert them to Simplified Chinese using the OpenCC tool4. The statistics for the datasets are shown in Table 4.
+
+We report Precision, Recall, and F1 scores at sentence level in detection and correction, which are commonly used in the CSC. Sentence level means that the current sentence can only be judged to be correct if all typos have been detected and corrected.
+
+# 6.2 Baseline
+
+We compare WSpeller with five typical baselines.
+
+
Dataset
Method
Detection Level
Correction Level
Pre
Rec
F1
Pre
Rec
F1
SIGHAN13
FASpell (Hong et al., 2019)
76.2
63.2
69.1
73.1
60.5
66.2
SpellGCN (Cheng et al., 2020)
80.1
74.4
77.2
78.3
72.7
75.4
SpellGCN* (Wang et al., 2021)
85.2
77.7
81.2
83.4
76.1
79.6
DCN* (Wang et al., 2021)
86.8
79.6
83.0
86.7
77.7
81.0
BERT*
81.7
86.0
83.8
79.7
83.9
81.8
WSpeller*
82.3
86.9
84.6
81.2
85.7
83.4
SIGHAN14
FASpell (Hong et al., 2019)
61.0
53.5
57.0
59.4
52.0
55.4
SpellGCN (Cheng et al., 2020)
65.1
69.5
67.2
63.1
67.2
65.3
DCN (Wang et al., 2021)
67.4
70.4
68.9
65.8
68.7
67.2
BERT
66.9
63.9
65.4
64.6
61.7
63.1
WSpeller
70.4
66.3
68.3
69.0
65.0
67.0
SIGHAN15
FASpell (Hong et al., 2019)
67.6
60.0
63.5
66.6
59.1
62.6
Soft-Masked BERT (Zhang et al., 2020)
73.7
73.2
73.5
66.7
66.2
66.4
SpellGCN (Cheng et al., 2020)
74.8
80.7
77.7
72.1
77.7
75.9
DCN (Wang et al., 2021)
77.1
80.9
79.0
74.5
78.2
76.3
BERT
78.7
74.5
76.5
75.8
71.7
73.7
WSpeller
81.9
78.0
79.9
79.9
76.1
77.9
+
+Table 5: Results of Precision, Recall, and F1 scores(%).* indicates that "的", "得", "地" are ignored when calculating results on SIGHAN13, and the results of DCN* and SpellGCN* are reported by DCN (Wang et al., 2021).
+
+- FASPell (Hong et al., 2019) measures the similarity between characters and filters characters with low similarity.
+SpellGCN (Cheng et al., 2020) uses GCN to model the similarity between characters and integrates the knowledge into the output layer.
+- Soft-Masked BERT (Zhang et al., 2020) proposes a Soft-Mask layer that adjusts the embedding according to the probability of typos.
+SpellBERT (Ji et al., 2021) uses GCN to model the Pinyin and radical of characters as the embedding layer.
+- DCN (Wang et al., 2021) adds pronunciation information when generating candidates, and models the correlation between adjacent characters to select the best candidates.
+
+# 6.3 Experiment Setting
+
+We use one Tesla V100 for pre-training and two GeForce RTX 2080Ti for follow-up experiments. In the fine-tuning stage, maximum sequence length is set to 160, batch size is set to 22, learning rate is set to 5e-5, training epoch is set to 20, warmup is set to 0.1, $\lambda$ is set to 0.2, optimization method
+
+is Adam. In pre-training stage, batch size is set to 32, learning rate is set to 1e-4, training epoch is set to 1, optimization method is Adam. In addition, there are lots of labeling errors about "的", "得", "地" in SIGHAN13, which affect the evaluation, so we follow the previous work (Wang et al., 2021) to ignore all corrections about "的", "得", "地".
+
+Furthermore, since the ability of W-Module is poor at the beginning of training, it will affect the training of C-Module. Therefore, we use scheduled sampling to provide a high proportion of correct word segmentation in the initial phase of training, and then slowly decrease the proportion until it reaches zero. In the first epoch, the probability of providing the correct word segmentation results is set to 0.9 and decreases to 0 in the 10th epoch.
+
+# 6.4 Main Results
+
+Table 5 illustrates the detection and correction performance on SIGHAN13, SIGHAN14, and SIGHAN15 of the proposed method and baseline models. As shown in Table 5, WSpeller significantly outperforms the other methods and the results show the effectiveness of our method and the enhancement of word segmentation for CSC.
+
+Soft-Masked BERT uses a Soft-Masked layer
+
+
Method
Detection Level
Correction Level
Acc
Pre
Rec
F1
Acc
Pre
Rec
F1
SpellGCN (Cheng et al., 2020)
83.7
85.9
80.6
83.1
82.2
85.4
77.6
81.3
SpellBERT (Ji et al., 2021)
-
87.5
73.6
80.0
-
87.1
71.5
78.5
DCN (Wang et al., 2021)
84.6
88.0
80.2
83.9
83.2
87.6
77.3
82.1
BERT
82.4
85.1
77.8
81.3
80.9
84.6
74.9
79.4
WSpeller
84.7
87.1
81.0
83.9
83.6
86.8
78.7
82.6
+
+to smooth the hidden state with typo probability. FASPell, SpellGCN, and DCN use different methods to integrate the similarity between characters to achieve better results. Different from these methods, WSpeller focuses on the occurrence of typos, which often leads to word segmentation errors. Compared with DCN, the F1 score of correction improves by $2.4\%$ and $1.6\%$ on SIGHAN13 and SIGHAN15. Meanwhile, competitive results are obtained on SIGHAN14. The further case study is given in Sec 6.6. At the same time, compared with BERT, the F1 scores improve significantly on SIGHAN13, SIGHAN14, and SIGHAN15, suggesting that the integration of word segmentation information has improved WSpeller over the basic model.
+
+Further, we follow the previous works (Cheng et al., 2020; Wang et al., 2021) and use the official tool to evaluate the performance of WSpeller on SIGHAN15. The results are shown in Table 6. SpellBERT uses GCN to model the features of characters as the embedding layer. Compared with it, the F1 scores of WSpeller in detection and correction are increased by $3.9\%$ and $4.1\%$ . At the same time, WSpeller achieves the best result among all methods, which further indicates the effectiveness of WSpeller.
+
+# 6.5 Ablation Study
+
+Ablation study is presented to understand how the components of WSpeller and the hyperparameter influence the performance. The metrics reported in this section are averaged over SIGHAN13, SIGHAN14, and SIGHAN15.
+
+# 6.5.1 Effect of Each Module
+
+We remove the following components from WSpeller to study their contributions to the overall
+
+Table 6: Accuracy, Precision, Recall and F1 scores(%) at sentence level in detection and correction evaluated by SIGHAN15 official tools.
+
+
Method
Detection Level
Pre
Rec
F1
BERT
75.8
74.8
75.2
WSpeller
78.2
77.1
77.6
-W-MLM
76.4
75.9
76.1
-Different encoding
76.8
75.6
76.1
-Schedule sampling
77.2
76.5
76.8
LAC
77.9
77.1
77.4
Method
Correction Level
Pre
Rec
F1
BERT
73.4
72.4
72.9
WSpeller
76.7
75.6
76.1
-W-MLM
74.5
74.1
74.2
-Different encoding
75.1
74.0
74.5
-Schedule sampling
76.2
75.5
75.8
LAC
75.9
75.1
75.5
+
+Table 7: Ablation experiment results of Precision, Recall and F1 scores (\%). BERT is BERT-based method. WSpeller is our proposed method. -W-MLM is to replace the W-MLM module with BERT. -Different encoding is to use the same text encoding module in WS-Module and C-Module. -Schedule sampling removes the training strategy. LAC integrates word segmentation information from the source text.
+
+performance, and the results are shown in Table 7.
+
+As shown in Table 7, W-MLM contributes the most to WSpeller's correction ability. If W-MLM is removed, the F1 score of detection and correction will decrease by $1.5\%$ and $1.9\%$ , respectively. This suggests that learning the similarity between characters in the pre-training stage and adapting to changes in the embedding layer information has a positive effect on CSC.
+
+LAC introduces the word segmentation information of the source text on the basis of BERT. Since
+
+
+Figure 4: Experimental results on hyperparameter. The sentence level Precision, Recall, and F1 scores $(\%)$ are reported on the detection and correction. The abscissa is the value of $\lambda$ , that is, the weight of WS-Module loss. Solid lines represent the detection subtask, dashed lines represent the correction subtask.
+
+the integrated word segmentation information contains some errors, its correction ability is worse than that of WSpeller, but higher than that of BERT. This indicates that the integration of word segmentation information is effective, and the accuracy of word segmentation information will affect the final correction effect, and WSpeller has stronger word segmentation ability in texts with typos.
+
+# 6.5.2 Effect of Hyperparameter
+
+In the joint training, $\lambda \in [0,1]$ represents the weight of WS-Module loss. When $\lambda$ is large, the model will pay more attention to the training of WS-Module. Empirically, word segmentation is simpler than correction, so the weight of the WS-Module should be less than that of the C-Module in joint training.
+
+We set $\lambda$ to nine different values. When $\lambda$ is set to a different value, the effect of WSpeller is shown in Figure 4. As shown in Figure 4, the Precision, Recall, and F1 score of the correction and detection of WSpeller reached the best when $\lambda = 0.2$ . On the whole, when the loss of WS-Module accounts for small weight, WSpeller has better performance.
+
+# 6.6 Case Study
+
+Three representative examples of WSpeller are shown in Table 8. In the first example, the typo led to the wrong word segmentation. WSpeller not only correct the typo, but also get the correct word boundaries. In the second example, the typo led to the wrong word segmentation, but it is not
+
+
No.
mean
sentences
1
source
应为他学得很好...
target
因为他学得很好...
predict
因为他学得很好...
translate
Because he learned it well...
2
source
...我要跟旁东说一声...
target
...我要跟旁东说一声...
predict
...我要跟房东说一声...
translate
...I have to tell the landlord...
3
source
...有一个刻板观念...
target
...有一个刻版观念...
predict
...有一个刻板观念...
translate
...There is a stereotype...
+
+Table 8: Example of WSpeller's corrected results. Parts of text are omitted by ellipsis because the text is too long. Typos and their correct characters are marked in red and green. Spaces in the text represent the word boundaries. The word boundaries of source and target come from LAC, and the word boundaries of predict come from WSpeller.
+
+labeled in the dataset. WSpeller correct the typo and get the correct word segmentation result. In the third example, "板" is correct, but is incorrectly labeled as "版". In the last two examples, the prediction results of WSpeller are correct. However, due to mislabeling, these correct prediction would not improve the F1 scores, but reduce it. For further analysis, we manually count the proportion of text containing label errors according to the badcases of WSpeller.
+
+In SIGHAN13, SIGHAN14, and SIGHAN15, $28.2\%$ , $35.3\%$ , and $30.6\%$ of badcases contain label errors, which greatly affects the evaluation. The proportion of label errors in SIGHAN14 is the highest, which may be one of the reasons why the effect of WSpeller in SIGHAN14 is not satisfactory. Several examples show that WSpeller has better word segmentation ability when typos exist. Meanwhile, integrating word segmentation information contributes to the correction ability.
+
+Further, according to our statistics, of the typos in three datasets, about $52.5\%$ , $42.2\%$ , and $39.5\%$ , respectively, resulted in word segmentation errors. WSpeller can correct $84.3\%$ , $66.6\%$ , and $63.6\%$ error word boundaries only based on source text respectively. This indicates that a considerable part of typos will affect word segmentation, and WSpeller has good word segmentation ability even
+
+if there are typos in the text.
+
+# 7 Conclusion
+
+In this paper, we have proposed WSpeller, a novel end-to-end framework for CSC. Unlike most of the previous methods, which included the phonetic and graphic information of characters in the model through various methods, we emphasize the importance of robust word segmentation in the CSC task. We added an additional WS-Module to predict the correct word boundaries for sentences with typos. The experimental results also suggest that adding word segmentation information to the model can improve the performance.
+
+In the future, we will investigate other methods for providing word segmentation information to the model for correction. We will also explore how to use WSpeller for Chinese semantic corrections.
+
+# 8 Limitations
+
+In our study, we discovered that correct word segmentation would improve the ability of correction, but achieving accurate word segmentation from source texts that contain typos remains a challenge. However, although WSpeller is capable of segmenting words with high accuracy, there is still a lot of room for improvement. By having more accurate word boundaries available, the model's correction capabilities can be further enhanced, as the upper bound in the preliminary experimental results in Section 3. As WSpeller encodes text twice, its inference speed is limited. A future attempt will be made to improve WS-Module to reduce its complexity.
+
+# Acknowledgements
+
+This research is supported by National Natural Science Foundation of China [62172449, 62006251], Hunan Provincial Natural Science Foundation of China [2021JJ30870,2022JJ30211], Changsha Municipal Natural Science Foundation [kq2202300]. And this research was carried out in part using computing resources at the High Performance Computing Center of Central South University.
+
+# References
+
+Haithem Afli, Zhengwei Qiu, Andy Way, and Páraic Sheridan. 2016. Using SMT for OCR error correction of historical texts. In Proceedings of the Tenth International Conference on Language Resources
+
+and Evaluation LREC 2016, Porto Roz, Slovenia, May 23-28, 2016. European Language Resources Association (ELRA).
+Zuyi Bao, Chen Li, and Rui Wang. 2020. Chunk-based chinese spelling check with global optimization. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 2031-2040. Association for Computational Linguistics.
+Yu-Chieh Chao and Chia-Hui Chang. 2020. Automatic spelling correction for ASR corpus in traditional chinese language using seq2seq models. In International Computer Symposium, ICS 2020, Tainan, Taiwan, December 17-19, 2020, pages 553-558. IEEE.
+Kuan-Yu Chen, Hung-Shin Lee, Chung-Han Lee, HsinMin Wang, and Hsin-Hsi Chen. 2013. A study of language modeling for chinese spelling check. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, SIGHAN@IJCNLP 2013, Nagoya, Japan, October 14-18, 2013, pages 79-83. Asian Federation of Natural Language Processing.
+Xingyi Cheng, Weidi Xu, Kunlong Chen, Shaohua Jiang, Feng Wang, Taifeng Wang, Wei Chu, and Yuan Qi. 2020. Spellgen: Incorporating phonological and visual similarities into language models for Chinese spelling check. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 871-881. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.
+Zifa Gan, Hongfei Xu, and Hongying Zan. 2021. Self-supervised curriculum learning for spelling error correction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November; 2021, pages 3487-3494. Association for Computational Linguistics.
+Jianfeng Gao, Xiaolong Li, Daniel Micol, Chris Quirk, and Xu Sun. 2010. A large scale ranker-based system for search query spelling correction. In COLING 2010, 23rd International Conference on Computational Linguistics, Proceedings of the Conference, 23-27 August 2010, Beijing, China, pages 358-366. Tsinghua University Press.
+Yuzhong Hong, Xianguo Yu, Neng He, Nan Liu, and Junhui Liu. 2019. Faspell: A fast, adaptable, simple,
+
+powerful chinese spell checker based on dae-decoder paradigm. In Proceedings of the 5th Workshop on Noisy User-generated Text, W-NUT@EMNLP 2019, Hong Kong, China, November 4, 2019, pages 160-169. Association for Computational Linguistics.
+Tuo Ji, Hang Yan, and Xipeng Qiu. 2021. Spellbert: A lightweight pretrained model for chinese spelling check. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November; 2021, pages 3544-3551. Association for Computational Linguistics.
+Chao-Lin Liu, Min-Hua Lai, Yi-Hsuan Chuang, and Chia-Ying Lee. 2010. Visually and phonologically similar characters in incorrect simplified chinese words. In COLING 2010, 23rd International Conference on Computational Linguistics, Posters Volume, 23-27 August 2010, Beijing, China, pages 739-747. Chinese Information Processing Society of China.
+Bruno Martins and Mário J. Silva. 2004. Spelling correction for search engine queries. In Advances in Natural Language Processing, 4th International Conference, EsTAL 2004, Alicante, Spain, October 20-22, 2004, Proceedings, volume 3230 of Lecture Notes in Computer Science, pages 372-383. Springer.
+Stuart E. Middleton, Giorgos Kordopatis-Zilos, Symeon Papadopoulos, and Yiannis Kompatsiaris. 2018. Location extraction from social media: Geoparsing, location disambiguation, and geotagging. ACM Trans. Inf. Syst., 36(4).
+Yixuan Su, Fangyu Liu, Zaiqiao Meng, Lei Shu, Ehsan Shareghi, and Nigel Collier. 2021. Tacl: Improving BERT pre-training with token-aware contrastive learning. CoRR, abs/2111.04198.
+Yuen-Hsien Tseng, Lung-Hao Lee, Li-Ping Chang, and Hsin-Hsi Chen. 2015. Introduction to SIGHAN 2015 bake-off for chinese spelling check. In Proceedings of the Eighth SIGHAN Workshop on Chinese Language Processing, SIGHAN@IJCNLP 2015, Beijing, China, July 30-31, 2015, pages 32-37. Association for Computational Linguistics.
+Baoxin Wang, Wanxiang Che, Dayong Wu, Shijin Wang, Guoping Hu, and Ting Liu. 2021. Dynamic connected networks for chinese spelling check. In Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of Findings of ACL, pages 2437-2446. Association for Computational Linguistics.
+Dingmin Wang, Yan Song, Jing Li, Jialong Han, and Haisong Zhang. 2018. A hybrid approach to automatic corpus generation for chinese spelling check. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2517-2527. Association for Computational Linguistics.
+
+Shih-Hung Wu, Chao-Lin Liu, and Lung-Hao Lee. 2013. Chinese spelling check evaluation at SIGHAN bake-off 2013. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, SIGHAN@IJCNLP 2013, Nagoya, Japan, October 14-18, 2013, pages 35-42. Asian Federation of Natural Language Processing.
+Weijian Xie, Peijie Huang, Xinrui Zhang, Kaiduo Hong, Qiang Huang, Bingzhou Chen, and Lei Huang. 2015. Chinese spelling check system based on n-gram model. In Proceedings of the Eighth SIGHAN Workshop on Chinese Language Processing, SIGHAN@IJCNLP 2015, Beijing, China, July 30-31, 2015, pages 128-136. Association for Computational Linguistics.
+Yang Xin, Hai Zhao, Yuzhu Wang, and Zhongye Jia. 2014. An improved graph model for chinese spell checking. In Proceedings of The Third CIPS-SIGHAN Joint Conference on Chinese Language Processing, Wuhan, China, October 20-21, 2014, pages 157-166. Association for Computational Linguistics.
+JunLi Xu, JiaHui Hao, XiMo Bian, and XiaoMei Wang. 2021. Multi-task fine-tuning on bert using spelling errors correction for Chinese text classification robustness. In 2021 IEEE 4th International Conference on Big Data and Artificial Intelligence (BDAI), pages 110-114.
+Liang-Chih Yu, Lung-Hao Lee, Yuen-Hsien Tseng, and Hsin-Hsi Chen. 2014. Overview of SIGHAN 2014 bake-off for chinese spelling check. In Proceedings of The Third CIPS-SIGHAN Joint Conference on Chinese Language Processing, Wuhan, China, October 20-21, 2014, pages 126-132. Association for Computational Linguistics.
+Shaohua Zhang, Haoran Huang, Jicong Liu, and Hang Li. 2020. Spelling error correction with soft-masked BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 882-890. Association for Computational Linguistics.
+Shuiyuan Zhang, Jinhua Xiong, Jianpeng Hou, Qiao Zhang, and Xueqi Cheng. 2015. Hanspeller++: A unified framework for Chinese spelling correction. In Proceedings of the Eighth SIGHAN Workshop on Chinese Language Processing, SIGHAN@IJCNLP 2015, Beijing, China, July 30-31, 2015, pages 38-45. Association for Computational Linguistics.
\ No newline at end of file
diff --git a/wspellerrobustwordsegmentationforenhancingchinesespellingcheck/images.zip b/wspellerrobustwordsegmentationforenhancingchinesespellingcheck/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..8200fee0bfc1686906b65824136e34585c9f0ebc
--- /dev/null
+++ b/wspellerrobustwordsegmentationforenhancingchinesespellingcheck/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:153cec450c58348baca26f7744271846d5279fbca35d1bceb5a8abbaa5a1f47c
+size 603247
diff --git a/wspellerrobustwordsegmentationforenhancingchinesespellingcheck/layout.json b/wspellerrobustwordsegmentationforenhancingchinesespellingcheck/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..40d351212b3b69297e7b90fbcf042c321c2b1594
--- /dev/null
+++ b/wspellerrobustwordsegmentationforenhancingchinesespellingcheck/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ed856209bf869e08369afb7327b14ee1dfc4453cf30642ca55edf882814f881c
+size 382678