SlowGuess committed on
Commit
8327dcc
·
verified ·
1 Parent(s): 8998a4a

Add Batch 5c14c675-4bea-4978-b9e3-02c6227772ea

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. taskcompassscalingmultitaskpretrainingwithtaskprefix/68a17a92-d0f4-45da-a460-ec836d419fed_content_list.json +3 -0
  2. taskcompassscalingmultitaskpretrainingwithtaskprefix/68a17a92-d0f4-45da-a460-ec836d419fed_model.json +3 -0
  3. taskcompassscalingmultitaskpretrainingwithtaskprefix/68a17a92-d0f4-45da-a460-ec836d419fed_origin.pdf +3 -0
  4. taskcompassscalingmultitaskpretrainingwithtaskprefix/full.md +354 -0
  5. taskcompassscalingmultitaskpretrainingwithtaskprefix/images.zip +3 -0
  6. taskcompassscalingmultitaskpretrainingwithtaskprefix/layout.json +3 -0
  7. texteditingasimitationgame/24b4bbd1-c35f-4895-99bd-c96bd69e407b_content_list.json +3 -0
  8. texteditingasimitationgame/24b4bbd1-c35f-4895-99bd-c96bd69e407b_model.json +3 -0
  9. texteditingasimitationgame/24b4bbd1-c35f-4895-99bd-c96bd69e407b_origin.pdf +3 -0
  10. texteditingasimitationgame/full.md +347 -0
  11. texteditingasimitationgame/images.zip +3 -0
  12. texteditingasimitationgame/layout.json +3 -0
  13. texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/503598f3-dab7-4382-abf2-7075200db809_content_list.json +3 -0
  14. texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/503598f3-dab7-4382-abf2-7075200db809_model.json +3 -0
  15. texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/503598f3-dab7-4382-abf2-7075200db809_origin.pdf +3 -0
  16. texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/full.md +439 -0
  17. texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/images.zip +3 -0
  18. texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/layout.json +3 -0
  19. textonlytrainingforimagecaptioningusingnoiseinjectedclip/140abd8e-8322-4276-88e8-1cd1f845c2ae_content_list.json +3 -0
  20. textonlytrainingforimagecaptioningusingnoiseinjectedclip/140abd8e-8322-4276-88e8-1cd1f845c2ae_model.json +3 -0
  21. textonlytrainingforimagecaptioningusingnoiseinjectedclip/140abd8e-8322-4276-88e8-1cd1f845c2ae_origin.pdf +3 -0
  22. textonlytrainingforimagecaptioningusingnoiseinjectedclip/full.md +230 -0
  23. textonlytrainingforimagecaptioningusingnoiseinjectedclip/images.zip +3 -0
  24. textonlytrainingforimagecaptioningusingnoiseinjectedclip/layout.json +3 -0
  25. textualenhancedcontrastivelearningforsolvingmathwordproblems/8686ea23-2acd-4e21-bd8d-e449d28ae474_content_list.json +3 -0
  26. textualenhancedcontrastivelearningforsolvingmathwordproblems/8686ea23-2acd-4e21-bd8d-e449d28ae474_model.json +3 -0
  27. textualenhancedcontrastivelearningforsolvingmathwordproblems/8686ea23-2acd-4e21-bd8d-e449d28ae474_origin.pdf +3 -0
  28. textualenhancedcontrastivelearningforsolvingmathwordproblems/full.md +318 -0
  29. textualenhancedcontrastivelearningforsolvingmathwordproblems/images.zip +3 -0
  30. textualenhancedcontrastivelearningforsolvingmathwordproblems/layout.json +3 -0
  31. thechallengesoftemporalalignmentontwitterduringcrises/9ebccff9-b096-4e5a-b002-3e1f13279edb_content_list.json +3 -0
  32. thechallengesoftemporalalignmentontwitterduringcrises/9ebccff9-b096-4e5a-b002-3e1f13279edb_model.json +3 -0
  33. thechallengesoftemporalalignmentontwitterduringcrises/9ebccff9-b096-4e5a-b002-3e1f13279edb_origin.pdf +3 -0
  34. thechallengesoftemporalalignmentontwitterduringcrises/full.md +355 -0
  35. thechallengesoftemporalalignmentontwitterduringcrises/images.zip +3 -0
  36. thechallengesoftemporalalignmentontwitterduringcrises/layout.json +3 -0
  37. thecuriouscaseofabsolutepositionembeddings/5dfe178e-adb0-45b0-a63b-f83192bfd3a1_content_list.json +3 -0
  38. thecuriouscaseofabsolutepositionembeddings/5dfe178e-adb0-45b0-a63b-f83192bfd3a1_model.json +3 -0
  39. thecuriouscaseofabsolutepositionembeddings/5dfe178e-adb0-45b0-a63b-f83192bfd3a1_origin.pdf +3 -0
  40. thecuriouscaseofabsolutepositionembeddings/full.md +690 -0
  41. thecuriouscaseofabsolutepositionembeddings/images.zip +3 -0
  42. thecuriouscaseofabsolutepositionembeddings/layout.json +3 -0
  43. theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/4bbdabc5-6f81-495a-a5b0-93929344a445_content_list.json +3 -0
  44. theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/4bbdabc5-6f81-495a-a5b0-93929344a445_model.json +3 -0
  45. theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/4bbdabc5-6f81-495a-a5b0-93929344a445_origin.pdf +3 -0
  46. theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/full.md +257 -0
  47. theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/images.zip +3 -0
  48. theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/layout.json +3 -0
  49. theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/ade8f037-4c0e-4118-b6fb-59ca86898416_content_list.json +3 -0
  50. theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/ade8f037-4c0e-4118-b6fb-59ca86898416_model.json +3 -0
taskcompassscalingmultitaskpretrainingwithtaskprefix/68a17a92-d0f4-45da-a460-ec836d419fed_content_list.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f609d281b8e188a9d2d50dce6059b5e90e8361913bc77c14a1fcb65ddca177eb
3
+ size 101106
taskcompassscalingmultitaskpretrainingwithtaskprefix/68a17a92-d0f4-45da-a460-ec836d419fed_model.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ec62bfc0f8fd8c70987ccf2c6538f30243520deeab2f5faa06fcb2aef26e6f93
3
+ size 129718
taskcompassscalingmultitaskpretrainingwithtaskprefix/68a17a92-d0f4-45da-a460-ec836d419fed_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:110e08b6d67fd31dc0da9243f06792074687dff34378723599c724590159b97f
3
+ size 1341685
taskcompassscalingmultitaskpretrainingwithtaskprefix/full.md ADDED
@@ -0,0 +1,354 @@
1
+ # Task Compass: Scaling Multi-task Pre-training with Task Prefix
2
+
3
+ Zhuosheng Zhang $^{1*}$ , Shuohang Wang $^{2}$ , Yichong Xu $^{2}$ , Yuwei Fang $^{2}$ , Wenhao Yu $^{3*}$ , Yang Liu $^{2}$ , Hai Zhao $^{1}$ , Chenguang Zhu $^{2}$ and Michael Zeng $^{2}$
4
+
5
+ $^{1}$ Shanghai Jiao Tong University, Shanghai, China
6
+
7
+ $^{2}$ Microsoft Cognitive Services Research, Redmond, WA, USA
8
+
9
+ $^{3}$ University of Notre Dame, Notre Dame, IN, USA
10
+
11
+ $^{1}$ zhangzs@sjtu.edu.cn, zhaohai@cs.sjtu.edu.cn;
12
+
13
+ $^{2}$ {shuowa, yicxu, yuwfan, yaliu10, chezhu, nzeng}@microsoft.com; $^{3}$ wyu1@nd.edu
14
+
15
+ # Abstract
16
+
17
+ Leveraging task-aware annotated data as supervised signals to assist with self-supervised learning on large-scale unlabeled data has become a new trend in pre-training language models. Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks. To tackle this challenge, we propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks. We conduct extensive experiments on 40 datasets, which show that our model can not only serve as a strong foundation backbone for a wide range of tasks but also serve as a probing tool for analyzing task relationships. The task relationships reflected by the prefixes align with transfer learning performance between tasks. They also suggest directions for data augmentation with complementary tasks, which help our model achieve human-parity results on commonsense reasoning leaderboards. Code is available at https://github.com/cooelf/CompassMTL.
18
+
19
+ # 1 Introduction
20
+
21
+ Recent years have witnessed a growing interest in leveraging a unified pre-trained language model (PrLM) to solve a wide range of natural language processing tasks (Tay et al., 2022; Chowdhery et al., 2022; Xie et al., 2022; Zhang et al., 2022). The pre-training recipe of a PrLM is shifting from self-supervised learning (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019; Lan et al., 2020; Clark et al., 2020) to multi-task learning (MTL) with a mixture of standard self-supervised tasks and various supervised tasks,
22
+
23
+ ![](images/f2757d91b4ea4b2e5d59884339cd68eb84c9c757097dfb71310943261103461e.jpg)
24
+ Figure 1: Input-output view. We append a task prefix for each data sequence to capture common patterns from the dataset and require the model to predict some randomly masked prefixes to capture task differences.
25
+
26
+ which takes advantage of learning from both large-scale unlabeled corpora and high-quality human-labeled datasets (Raffel et al., 2019; Aribandi et al., 2021).<sup>1</sup> Benefiting from the supervision of related tasks, MTL approaches reduce the cost of curating deep learning models for individual tasks and provide a shared representation that is generally applicable to a range of tasks (Wu et al., 2020b).
27
+
28
+ In the research line of multi-task learning for PrLMs, a typical solution is to cast all tasks into a text-to-text format and utilize an encoder-decoder PrLM such as T5 to predict the target sequences (Raffel et al., 2019; Aribandi et al., 2021). Despite the extensive efforts on leveraging supervised tasks to strengthen PrLMs, the latest trend is extreme scaling of task numbers, with little attention paid to the relationships between tasks (Sanh et al., 2021; Wei et al., 2021). Aribandi et al. (2021) investigated co-training transfer effects among task families and empirically found that tasks in different families may have adverse effects on each other; e.g., summarization tasks generally seem to hurt performance on other task families such as dialogue systems (Mehri et al., 2020),
29
+
30
+ natural language inference (Bowman et al., 2015), and commonsense reasoning (Lourie et al., 2021).
31
+
32
+ When the number of tasks scales up, the training of PrLMs becomes more vulnerable to negative transfer due to severe inconsistency of domain and data distribution between tasks (Wu et al., 2020b; Padmakumar et al., 2022). As one of the key concepts underlying MTL, task relationships potentially provide a principled basis for employing PrLMs in a more effective and interpretable way.
33
+
34
+ To handle the issue of negative transfer during multi-task learning, early studies took task relationships into account by employing a dual-module architecture composed of a shared encoder and task-specific layers, which respectively integrate the common features of all the learning tasks and explore the task relationships in a predefined manner (Zheng et al., 2019; Liu et al., 2019a; Bai et al., 2020; Ma et al., 2021). However, these methods require additional modifications to the model architecture and increase model complexity and computation cost. They are therefore suboptimal for PrLMs in terms of generality and computational bottlenecks.
35
+
36
+ All the considerations above motivate our goal to investigate simple yet effective ways to measure task relationships without additional cost while keeping the generality of PrLMs. In this work, we propose a prefix-guided multi-task learning framework (CompassMTL) to explore the mutual effects between tasks (Figure 1) and improve model performance with complementary tasks. Targeting natural language understanding (NLU) tasks, we employ a discriminative PrLM<sup>2</sup> as the backbone model and train the model on 40 tasks. Experimental results show that our model achieves human-parity performance on commonsense reasoning tasks. We further probe into the task relationships entailed in the task prefix representations, finding that the measured relationships highly correlate with task-to-task transfer performance and also serve as a useful reference for optimizing the PrLM on a target task with its complementary tasks during MTL, i.e., fewer tasks with better performance.
37
+
38
+ In summary, our contributions are threefold:
39
+
40
+ 1) A unified discriminative multi-task PrLM for NLU tasks will be released as a strong counterpart to the dominant T5-based encoder-decoder PrLMs trained with MTL.
43
+
44
+ 2) A probing tool that uses task prefixes to explore task relationships in large-scale MTL. We observe that the task relationships reflected by the prefixes correlate with transfer learning performance and help our model achieve better results with complementary tasks.
45
+ 3) State-of-the-art results on a variety of NLU tasks, especially human-parity benchmark performance on commonsense reasoning leaderboards, i.e., HellaSwag and $\alpha$ NLI.
46
+
47
+ # 2 Background and Related Work
48
+
49
+ # 2.1 Self-supervised Pre-training
50
+
51
+ PrLMs are commonly pre-trained on large-scale corpora and then fine-tuned on individual tasks. One of the most widely used pre-training tasks is masked language modeling (MLM), which first masks out some tokens from the input sentences and then trains the model to predict them from the remaining tokens. Derivatives of MLM include permuted language modeling in XLNet (Yang et al., 2019) and sequence-to-sequence MLM in MASS (Song et al., 2019) and T5 (Raffel et al., 2019). Beyond general-purpose pre-training, domain-adaptive and task-adaptive pre-training have attracted attention in recent studies.
52
+
53
+ 1) Domain-adaptive Pre-training. To incorporate specific in-domain knowledge, domain-adaptive pre-training directly post-trains the original PrLMs on a domain-specific corpus. Popular models have been proposed in the dialogue domain (Whang et al., 2020; Wu et al., 2020a), as well as in the medical and science domains (Lee et al., 2020; Beltagy et al., 2019; Huang et al., 2019a; Yu et al., 2022).
54
+ 2) Task-adaptive Pre-training. The goal of task-adaptive pre-training is to capture task-specific skills by devising dedicated pre-training tasks. Popular application scenarios include logical reasoning and dialogue-related tasks (Kumar et al., 2020; Gu et al., 2020; Zhang and Zhao, 2021; Li et al., 2021). For example, Whang et al. (2021) proposed various utterance manipulation strategies, including utterance insertion, deletion, and retrieval, to maintain dialog coherence.
55
+
56
+ ![](images/9e2065a62f40688d3f4b71ce3b402206ae218db77cfc97ef2f5338bea23ed3f4.jpg)
57
+ Figure 2: Comparison with existing paradigms of multi-task learning. Typical unified text-to-text methods include T5 (Raffel et al., 2019), ExT5 (Aribandi et al., 2021), FLAN (Wei et al., 2021), and T0 (Sanh et al., 2021).
58
+
59
+ ![](images/666eaaea809da6d34880578ab9dbb577b7ab5cdcd32412d5d35769c5c01fc675.jpg)
60
+
61
+ ![](images/6827bcdfea4c01a056ea14ba22b951482cf243eb125c499ec7e5d250c4233552.jpg)
62
+
63
+ # 2.2 Multi-task Learning for PrLMs
64
+
65
+ The MTL we consider for PrLMs is partially related to the studies of task-adaptive pre-training discussed above. The major difference is that PrLMs in MTL are fed human-annotated datasets instead of automatically constructed ones for self-supervised tasks. Figure 2 overviews the paradigms of MTL PrLMs. Existing methods in this research line mostly vary in model architectures and training stages. For example, MT-DNN (Liu et al., 2019a) applied multi-task learning to train a shared model on all the target datasets in the fine-tuning stage, with several task-aware output modules to adapt the shared representations to each task. Recent studies, such as ExT5 (Aribandi et al., 2021), T0 (Sanh et al., 2021), and FLAN (Wei et al., 2021), commonly apply an encoder-decoder architecture, convert a variety of tasks into the same text-to-text format, and train those tasks jointly (Figure 2-a). We argue that this is not the optimal solution considering the model complexity and the gap between the original and transformed task formats, especially for natural language understanding tasks that are discriminative in nature, e.g., classification and multiple-choice. In fact, there are studies (McCann et al., 2018; Keskar et al., 2019; Li et al., 2020; Khashabi et al., 2020) that transform traditional tasks into other formats like reading comprehension or question answering and achieve better results than prior methods. These studies motivate us to explore superior model backbones and data formats, especially for application to NLU tasks.
66
+
67
+ # 2.3 Modeling Task Relationships in MTL
68
+
69
+ Modeling task relationships is a classic topic in deep learning. Bingel and Søgaard (2017) studied what task relations yield gains in traditional natural language processing and investigated when and why MTL works in sequence labeling tasks such as chunking, sentence compression, POS tagging, and keyphrase detection. Wu et al. (2020b) found that task data alignment can significantly affect the performance of MTL and proposed an architecture with a shared module for all tasks and a separate output module for each task.
70
+
71
+ Since these methods require additional modifications to the model architecture, they are suboptimal for employment in PrLMs, considering computational bottlenecks and generality as tasks scale. In the era of pre-trained models, Geva et al. (2021) analyzed behavior transfer in PrLMs between related jointly-trained tasks such as QA and summarization, providing evidence for the extrapolation of skills as a consequence of multi-task training. ExT5 (Aribandi et al., 2021) evaluated transfer performance among task families in a multi-task co-training setup and observed that negative transfer is common, especially when training across task families. Although recent studies insert prompts to describe task requirements in the data sequences (Liu et al., 2021; Su et al., 2022; Qin et al., 2021; Vu et al., 2022), it is still not clear whether such prompts mitigate negative transfer or necessarily capture task relationships. In this work, we find that using task
72
+
73
+ prefixes along with the MLM for prefix prediction effectively indicates task relationships and helps MTL with fewer datasets but better performance.
74
+
75
+ # 3 Methodology
76
+
77
+ # 3.1 Task Format
78
+
79
+ According to prior studies (McCann et al., 2018; Keskar et al., 2019; Khashabi et al., 2020), the benchmark results on a task can be affected dramatically by training a model on different formats of the same dataset. In contrast to converting all tasks into a text-to-text format, we choose to model our tasks in a multiple-choice format to minimize the format transformation for NLU tasks. Our transformation ensures that each example in a task has a specific number of $k$ candidate options during the multi-task training stage. The original pair-wise input texts are regarded as context and question in the view of the multiple-choice problem. If only one text is given, the question is left empty. The outliers are processed as follows (examples are provided in Appendix A.1):
80
+
81
+ 1) If the number of candidate options $>k$ , the redundant options will be randomly discarded;
82
+ 2) If the number of candidate options $< k$ , add "N/A" placeholder options.
83
+ 3) If the ground truth is a list, randomly select a correct option from the gold list and randomly sample $k - 1$ negative options from the held-out set, excluding the remaining items in the gold list.
84
+ 4) If the ground truth is a list and there is an empty choice, construct the truth option manually, e.g., "there is no violation"; the negative options are constructed in the same way as in 3).
85
+
86
+ As a result, each training example is formed as a sequence like {[Prefix]: context, question, option}, where [Prefix] indicates the task name in natural language, such as [hellaswag], prepended to each data example.
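
To make rules 1)-3) concrete, here is a minimal sketch of the conversion in Python; the function and field names are illustrative assumptions rather than the released code, and rule 4) is omitted for brevity:

```python
import random

def to_multiple_choice(example, k, heldout_options, prefix):
    """Convert a raw example into the unified k-option multiple-choice format."""
    context = example.get("context", "")
    question = example.get("question", "")  # kept empty if only one text is given
    options = list(example["options"])
    gold = example["label"]

    if isinstance(gold, list):
        # Rule 3): pick one correct option, sample k-1 negatives from the
        # held-out set, excluding the remaining gold items.
        answer = random.choice(gold)
        negatives = [o for o in heldout_options if o not in gold]
        options = [answer] + random.sample(negatives, k - 1)
    else:
        answer = options[gold]
        if len(options) > k:  # rule 1): randomly discard redundant options
            distractors = [o for o in options if o != answer]
            options = [answer] + random.sample(distractors, k - 1)
        while len(options) < k:  # rule 2): pad with "N/A" placeholder options
            options.append("N/A")

    random.shuffle(options)
    # One sequence per option: {[Prefix]: context, question, option}
    sequences = [f"[{prefix}]: {context}, {question}, {opt}" for opt in options]
    return sequences, options.index(answer)
```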
87
+
88
+ # 3.2 CompassMTL
89
+
90
+ Our model is encoder-only, based on the DeBERTa architecture (He et al., 2021). It is trained with both the supervised task objective and the standard self-supervised denoising objective described below.
91
+
92
+ Suppose that we have a dataset $\mathcal{D} = \{(y_i, c_i, q_i, r)\}_{i=1}^N$ , where $c_i$ represents the context,
93
+
94
+ $q_{i}$ represents the question, $r$ denotes a set of answer options $r = \{r_1,\dots ,r_k\}$, and $y_{i}$ is the label. $N$ is the number of training examples. Each data example is formed as $x = [\mathrm{CLS}][\mathrm{Prefix}]\,c_i\,[\mathrm{SEP}]\,q_i r_j\,[\mathrm{SEP}]$,<sup>4</sup> $r_j\in r$. The goal is to learn a discriminator $g(\cdot ,\cdot)$ from $\mathcal{D}$. For the supervised task, the loss function is: $\mathcal{L}_{mtl} = -\sum_{i = 1}^{N}\sum_{j = 1}^{k}\log (g(c_i,q_i\circ r_j))$
95
+
96
+ At the inference phase, given any new context $c_{i}$ , question $q_{i}$ and options $r$ , we use the discriminator to calculate $g(c_{i}, q_{i} \circ r_{j})$ as their matching score where $\circ$ denotes concatenation. The option with the highest score is chosen as the answer for the $i$ -th example.
97
+
98
+ Let $\hat{x}_i$ denote the masked sequence where a certain proportion of tokens in $x_i$ are randomly replaced with a special [MASK] symbol. Using $\hat{x}_i$ as the input fed to the model in parallel with $x_i$, the self-supervised denoising objective is computed as MLM: $\mathcal{L}_{mlm} = -\sum_{i=1}^{N}\sum_{j\in\mathcal{M}}\log p_{\theta}(t_{i,j}|\hat{x}_i)$, where $t_{i,j}$ is the $j$-th token in $x_i$ and $\mathcal{M}$ denotes the index set of masked tokens for which the loss is computed. To encourage the model to learn from both supervised and self-supervised signals, we combine $\mathcal{L}_{mtl}$ and $\mathcal{L}_{mlm}$ during training: $\mathcal{L} = \mathcal{L}_{mtl} + \lambda \mathcal{L}_{mlm}$, where $\lambda$ is a hyper-parameter balancing the weight of the two objectives.
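
A minimal sketch of the combined objective in PyTorch, assuming a backbone with a candidate-scoring head and an MLM head (`model.score` and `model.mlm` are hypothetical names, and special-token handling is omitted). The supervised part is realized here as cross-entropy over the $k$ options, matching the highest-score inference rule above:

```python
import torch
import torch.nn.functional as F

def compass_loss(model, batch, mask_token_id, lambda_mlm=0.1, mask_prob=0.25):
    # Supervised multi-task objective: model.score (hypothetical) consumes the
    # k option sequences per example and returns one logit per option, (B, k).
    option_logits = model.score(batch["input_ids"], batch["attention_mask"])
    loss_mtl = F.cross_entropy(option_logits, batch["labels"])

    # Self-supervised denoising objective: randomly mask tokens (task prefixes
    # included) and predict them with the MLM head.
    input_ids = batch["input_ids"]
    mask = torch.rand(input_ids.shape, device=input_ids.device) < mask_prob
    targets = torch.where(mask, input_ids, torch.full_like(input_ids, -100))
    masked_ids = input_ids.masked_fill(mask, mask_token_id)
    mlm_logits = model.mlm(masked_ids, batch["attention_mask"])  # (..., L, V)
    loss_mlm = F.cross_entropy(
        mlm_logits.view(-1, mlm_logits.size(-1)), targets.view(-1),
        ignore_index=-100,  # only masked positions contribute to the loss
    )

    # L = L_mtl + lambda * L_mlm (the paper uses lambda = 0.1, mask ratio 0.25)
    return loss_mtl + lambda_mlm * loss_mlm
```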
99
+
100
+ Compared with traditional MTL methods, CompassMTL is data-centric, without any modification of the model architecture (Figure 2-b). It can be regarded as an efficient implementation of the traditional MTL design composed of a shared representation module and multiple task-aware modules. Since the data from the same dataset share the same task prefix, the prefix is expected to reflect the common patterns of that dataset, operating on a similar principle to the shared representation module. During training with our self-supervised objective, task prefixes are randomly masked with a certain probability, and the model is required to distinguish the task prefixes and predict the right prefix from the input data. Therefore, task differences are also necessarily captured.
101
+
102
+ # 3.3 Task Relationship Exploration
103
+
104
+ Regarding the task prefixes as a compass to navigate the task relationships, it is possible to use our framework to analyze the relevance of
105
+
106
+ ![](images/cca48a9b43affe3555e6f254f0ef55ca76f41f35e3b36bbcb36b5c814c4415e1.jpg)
107
+ Figure 3: Task taxonomy used in this work.
108
+
109
+ ![](images/59fc5c9d6af7f740311a68e21abdb405ab895f55545ff11771f443e86946aaae.jpg)
110
+
111
+ ![](images/e9b40ab330c7a2add95dd42f56cb0315fabca9e6e2a1120319f91d24ae461122.jpg)
112
+
113
+ tasks (Section 5.2). Our model for the prefix probing experiments is slightly revised from CompassMTL: it uses only the MLM objective and is fed data without options, to alleviate possible shortcuts in the options. After the model is pre-trained with MTL, we fetch the prefix embeddings from the model embedding layer and calculate the Pearson correlation between each task pair with min-max normalization. Assuming that we have $n$ tasks, this process results in $n \times n$ correlation scores that indicate the task relationships.
114
+
115
+ For a target task, we can directly rank the top-related tasks according to the correlation scores and use those complementary tasks for MTL before fine-tuning a target task (Figure 2-c).
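
A sketch of this probing computation, assuming the prefix embeddings have been fetched from the embedding layer as an $n \times d$ array; this is illustrative, not the released code, and the placement of min-max normalization (here over the whole matrix) is our assumption:

```python
import numpy as np

def task_relationships(prefix_embeddings):
    """prefix_embeddings: (n_tasks, hidden) array of prefix token embeddings."""
    n = len(prefix_embeddings)
    scores = np.ones((n, n))
    for i in range(n):
        for j in range(n):
            # Pearson correlation between the two prefix embedding vectors.
            scores[i, j] = np.corrcoef(prefix_embeddings[i],
                                       prefix_embeddings[j])[0, 1]
    # Min-max normalize the pairwise correlations to [0, 1].
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo)

def top_related(scores, task_names, target, k=5):
    """Rank the k tasks most related to `target` for complementary MTL."""
    idx = task_names.index(target)
    order = np.argsort(-scores[idx])
    return [task_names[i] for i in order if i != idx][:k]
```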
116
+
117
+ # 4 Experiments
118
+
119
+ # 4.1 Datasets
120
+
121
+ There are 40 datasets used for training our multi-task model, some of which are collected from GLUE (Wang et al., 2019b), SuperGLUE (Wang et al., 2019a), Rainbow (Lourie et al., 2021), and LexGLUE (Chalkidis et al., 2021). Figure 3 illustrates the composition of our task families.
122
+
123
+ GLUE GLUE (the General Language Understanding Evaluation benchmark) (Wang et al., 2019b) is a collection of nine tasks for sentence-level classification. We use eight of them: CoLA (Warstadt et al., 2019), SST-2 (Socher et al., 2013), MRPC (Dolan and Brockett, 2005), STS-B (Cer et al., 2017), QQP (Chen et al., 2018), QNLI (Rajpurkar et al., 2016), MNLI (Nangia et al., 2017), and RTE (Bentivogli et al., 2009).
124
+
125
+ Rainbow Rainbow (Lourie et al., 2021) is a suite of commonsense question answering tasks including $\alpha$ NLI (Bhagavatula et al., 2020), CosmosQA (Huang et al., 2019b), HellaSwag (Zellers et al., 2019), PIQA (Bisk et al., 2020), SocialIQA (Sap et al., 2019), Winogrande (Sakaguchi et al., 2020).
126
+
127
+ LexGLUE LexGLUE (Legal General Language Understanding Evaluation) (Chalkidis et al., 2021) is a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks, which contain 7 subtasks, namely ECtHR (Task A), ECtHR (Task B), SCOTUS, EUR-LEX, LEDGAR, UNFAIR-ToS, and CaseHOLD.
128
+
129
+ Domain-specific Classification We use seven datasets that cover specific domains (biomedical and computer science publications, news, and reviews) following Gururangan et al. (2020). The datasets are CHEMPROT (Kringelum et al., 2016), RCT (Dernoncourt and Lee, 2017), ACL-ARC (Jurgens et al., 2018), HYPERPARTISAN (Kiesel et al., 2019), AGNEWS (Zhang et al., 2015), HELPFULNESS (McAuley et al., 2015), and IMDB (Maas et al., 2011).
130
+
131
+ Multiple-choice QA The datasets include DREAM (Sun et al., 2019), QuAIL (Rogers et al., 2020), QuaRTz (Tafjord et al., 2019), WIQA (Tandon et al., 2019), QASC (Khot et al., 2020), SciQ (Welbl et al., 2017), and ARC (Clark et al., 2018). We follow Sanh et al. (2021) to organize this task family.
132
+
133
+ Miscellaneous The other datasets are BoolQ (Clark et al., 2019), CB (De Marneffe et al., 2019), CommonsenseQA v1/v2 (Talmor et al., 2019, 2021), and COPA (Roemmele et al., 2011). BoolQ, CB, and COPA are also collected in SuperGLUE (Wang et al., 2019a). We select these tasks as they can be easily transformed into our unified format.
134
+
135
+ # 4.2 Implementations
136
+
137
+ Our model is implemented using PyTorch and based on the Transformers library (Wolf et al., 2019). To save computation, we initialize our model with the released checkpoints of DeBERTa-V3-Large, and the hyper-parameter setting generally follows DeBERTa (He et al., 2021). Our experiments
138
+
139
+ <table><tr><td>Model</td><td>Arch.</td><td>Tasks</td><td>Params.</td><td>αNLI</td><td>CosmosQA</td><td>HellaSwag</td><td>PIQA</td><td>SocialIQA</td><td>Winogrande</td><td>Average</td></tr><tr><td>UNICORN</td><td>Enc-Dec</td><td>6</td><td>770M</td><td>79.5</td><td>83.2</td><td>83.0</td><td>82.2</td><td>75.5</td><td>78.7</td><td>80.4</td></tr><tr><td>ExT5</td><td>Enc-Dec</td><td>107</td><td>770M</td><td>82.3</td><td>85.9</td><td>89.0</td><td>85.0</td><td>79.7</td><td>82.5</td><td>84.1</td></tr><tr><td>ExDeBERTa</td><td>Enc only</td><td>40</td><td>567M</td><td>87.9</td><td>85.3</td><td>83.6</td><td>85.5</td><td>79.6</td><td>87.0</td><td>84.8</td></tr><tr><td>CompassMTL</td><td>Enc only</td><td>40</td><td>567M</td><td>91.7</td><td>87.8</td><td>95.6</td><td>87.3</td><td>81.7</td><td>89.6</td><td>89.0</td></tr><tr><td>w/ Tailor</td><td>Enc only</td><td>14</td><td>567M</td><td>92.5</td><td>88.8</td><td>96.1</td><td>88.3</td><td>82.2</td><td>90.5</td><td>89.7</td></tr></table>
140
+
141
+ Table 1: Results on the Rainbow commonsense reasoning validation sets. The baseline models are UNICORN $_{large}$ (Lourie et al., 2021) and ExT5 $_{large}$ (Aribandi et al., 2021). ExDeBERTa is our imitation of ExT5-style (Aribandi et al., 2021) MTL training using the DeBERTa backbone trained on 40 datasets with a multi-task objective of self-supervised denoising and the supervised task objective, after which the model is transferred to each individual task. "w/ Tailor" denotes multi-task training with related datasets (14-subset) according to our discovery in Section 5.3.
142
+
143
+ <table><tr><td rowspan="2">Method</td><td colspan="2">ECtHR (A)</td><td colspan="2">ECtHR (B)</td><td colspan="2">SCOTUS</td><td colspan="2">EUR-LEX</td><td colspan="2">LEDGAR</td><td colspan="2">UNFAIR-ToS</td><td>CaseHOLD</td></tr><tr><td>μ-F1</td><td>m-F1</td><td>μ-F1</td><td>m-F1</td><td>μ-F1</td><td>m-F1</td><td>μ-F1</td><td>m-F1</td><td>μ-F1</td><td>m-F1</td><td>μ-F1</td><td>m-F1</td><td>μ/m-F1</td></tr><tr><td>BERT</td><td>71.2</td><td>63.6</td><td>79.7</td><td>73.4</td><td>68.3</td><td>58.3</td><td>71.4</td><td>57.2</td><td>87.6</td><td>81.8</td><td>95.6</td><td>81.3</td><td>70.8</td></tr><tr><td>RoBERTa</td><td>69.2</td><td>59.0</td><td>77.3</td><td>68.9</td><td>71.6</td><td>62.0</td><td>71.9</td><td>57.9</td><td>87.9</td><td>82.3</td><td>95.2</td><td>79.2</td><td>71.4</td></tr><tr><td>DeBERTa</td><td>70.0</td><td>60.8</td><td>78.8</td><td>71.0</td><td>71.1</td><td>62.7</td><td>72.1</td><td>57.4</td><td>88.2</td><td>83.1</td><td>95.5</td><td>80.3</td><td>72.6</td></tr><tr><td>Longformer</td><td>69.9</td><td>64.7</td><td>79.4</td><td>71.7</td><td>72.9</td><td>64.0</td><td>71.6</td><td>57.7</td><td>88.2</td><td>83.0</td><td>95.5</td><td>80.9</td><td>71.9</td></tr><tr><td>BigBird</td><td>70.0</td><td>62.9</td><td>78.8</td><td>70.9</td><td>72.8</td><td>62.0</td><td>71.5</td><td>56.8</td><td>87.8</td><td>82.6</td><td>95.7</td><td>81.3</td><td>70.8</td></tr><tr><td>Legal-BERT</td><td>70.0</td><td>64.0</td><td>80.4</td><td>74.7</td><td>76.4</td><td>66.5</td><td>72.1</td><td>57.4</td><td>88.2</td><td>83.0</td><td>96.0</td><td>83.0</td><td>75.3</td></tr><tr><td>CaseLaw-BERT</td><td>69.8</td><td>62.9</td><td>78.8</td><td>70.3</td><td>76.6</td><td>65.9</td><td>70.7</td><td>56.6</td><td>88.3</td><td>83.0</td><td>96.0</td><td>82.3</td><td>75.4</td></tr><tr><td>ExDeBERTa</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>74.8</td></tr><tr><td>CompassMTL</td><td>71.7</td><td>60.7</td><td>80.6</td><td>73.2</td><td>77.7</td><td>68.9</td><td>67.2</td><td>42.1</td><td>88.1</td><td>82.3</td><td>96.3</td><td>84.3</td><td>76.1</td></tr><tr><td>w/ Tailor</td><td>73.0</td><td>64.7</td><td>80.7</td><td>72.3</td><td>76.3</td><td>68.6</td><td>66.9</td><td>44.9</td><td>88.3</td><td>83.2</td><td>96.2</td><td>83.2</td><td>78.1</td></tr></table>
144
+
145
+ Table 2: Results on LexGLUE test sets. All baseline results except ours in the last column are from Chalkidis et al. (2021). Since the LexGLUE tasks except CaseHOLD are multi-label classification problems, the ExDeBERTa model is not directly applicable to those tasks without extra task-specific fine-tuning; thus, those results are not reported. "w/ Tailor" denotes multi-task training with the seven datasets in the same LexGLUE family.
146
+
147
+ are run on 8x32GB Tesla A100 GPUs. The maximum input sequence length is 512. Similar to Lourie et al. (2021), the implementation of CompassMTL includes two procedures: we first conduct multi-task pre-training on all the datasets and then continue to train on each target dataset alone to verify the performance. For multi-task pre-training, we use a peak learning rate of 6e-6 with a warm-up rate of 0.1. We run up to 6 epochs with a batch size of 128. The masking ratio of MLM is 0.25, and $\lambda$ is set to 0.1. To avoid large-scale datasets dominating the pre-training, the training data is randomly sampled with a cap of 10k examples per dataset, following Raffel et al. (2019). For fine-tuning experiments, the initial learning rate is selected from {3e-6, 6e-6, 8e-5} with a warm-up rate of 0.1, the batch size from {16, 32}, and the maximum number of epochs from {6, 10}. More fine-tuning details are available in Appendix A.2.
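
The dataset cap can be realized as capped sampling per dataset, in the spirit of the examples-proportional mixing of Raffel et al. (2019); a minimal sketch (illustrative, not the released code):

```python
import random

def build_mixture(datasets, cap=10_000, seed=42):
    """Sample at most `cap` examples per dataset so that large datasets
    do not dominate multi-task pre-training."""
    rng = random.Random(seed)
    mixture = []
    for name, examples in datasets.items():
        if len(examples) > cap:
            examples = rng.sample(examples, cap)  # truncate large datasets
        mixture.extend(examples)
    rng.shuffle(mixture)  # interleave tasks within the training stream
    return mixture
```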
148
+
149
+ # 4.3 Main Results
150
+
151
+ Our main results are reported on the Rainbow and LexGLUE benchmark datasets for comparison with published methods. As shown in
152
+
153
+ Tables 1-2, we see that CompassMTL models generally outperform the related published models. Specifically, our encoder-only models yield better performance than the T5-based encoder-decoder models of similar model size. Further, the comparison in the second column reveals the potential to achieve comparable or better performance by multi-task learning with related tasks (w/ Tailor). How to find the related tasks and use them to enhance model performance is discussed in the following section.
154
+
155
+ # 5 Analysis
156
+
157
+ # 5.1 Ablation Study
158
+
159
+ Table 3 presents our ablation study on the effectiveness of different training objectives and the influence of task prefixes in our method. For the training objectives, MTL and MLM denote $\mathcal{L}_{mtl}$ and $\mathcal{L}_{mlm}$, respectively. The results suggest that both supervised and self-supervised tasks contribute to the overall model performance, and the supervised task is more beneficial than the self-supervised task in our study. Further, to inspect the role of the task prefixes, we
160
+
161
+ ![](images/facdac42044a6d5cd7741c17922eaf4fcdb859bbbab14bb655554d3ce67704b2.jpg)
162
+ Figure 4: Heatmap of task relationships probed by prefix embeddings.
163
+
164
+ ablate the model under three conditions: 1) must: the prefixes are masked with probability 1.0; 2) no: the prefixes are masked with probability 0.0; 3) only: only prefixes are masked, i.e., the prefix of each example is masked while the other tokens are left unchanged. The results in Table 3 show that using prefixes (Prefix $_{\text{must}}$ and Prefix $_{\text{only}}$ ) indeed boosts the model performance in general.
165
+
166
+ # 5.2 Relationship Probing
167
+
168
+ Figure 4 illustrates the heatmap of task relationships probed by prefix embeddings. We see that the datasets inside the same task family (e.g., GLUE and Rainbow) correlate highly with each other. The LexGLUE tasks are less related to other tasks because the texts are mainly legal descriptions. In addition, the correlation scores also accord with the common practice of data augmentation. For example, the NLI datasets (MNLI, QNLI, RTE)
169
+
170
+ <table><tr><td>Model</td><td>Accuracy</td></tr><tr><td>Single</td><td>84.6</td></tr><tr><td>CompassMTL</td><td>89.4</td></tr><tr><td>- MTL</td><td>85.0</td></tr><tr><td>- MLM</td><td>88.8</td></tr><tr><td>Prefix<sub>must</sub></td><td>89.3</td></tr><tr><td>Prefix<sub>no</sub></td><td>88.9</td></tr><tr><td>Prefix<sub>only</sub></td><td>89.1</td></tr></table>
171
+
172
+ Table 3: Ablation Study of the training objectives and task prefixes. We calculate the average accuracy scores on the development sets of all the 40 datasets.
173
+
174
+ share close relevance, and it is helpful to initialize parameters from an MNLI model to fine-tune RTE (Liu et al., 2019b; Qu et al., 2020).
175
+
176
+ We are interested in whether the probed relationship scores align with model performance transferred between tasks. We first obtain transfer accuracy between tasks in a dual-task training setup (Aribandi et al., 2021). Assume that we have 13 source tasks from the GLUE and Rainbow families and 5 target tasks ($\alpha$NLI, HellaSwag, MRPC, QNLI, and RTE). We first train individual models using the mixture of training sets from each pair of source and target tasks, and then evaluate each model on the validation set of the target dataset. As a result, we have $5 \times 13$ transfer results. For each
177
+
178
+ <table><tr><td>Dataset</td><td>RTE</td><td>MRPC</td><td>QNLI</td><td>HellaSwag</td><td>αNLI</td><td>Avg.</td></tr><tr><td>Probing</td><td>0.19</td><td>0.22</td><td>0.38</td><td>0.12</td><td>0.51</td><td>0.28</td></tr><tr><td>Length</td><td>-0.12</td><td>0.43</td><td>-0.17</td><td>0.04</td><td>-0.07</td><td>0.02</td></tr><tr><td>Vocab</td><td>0.37</td><td>-0.27</td><td>-0.001</td><td>0.09</td><td>0.31</td><td>0.10</td></tr></table>
179
+
180
+ Table 4: Pearson correlation between each relationship measure and the transfer accuracy.
181
+
182
+ <table><tr><td>Model</td><td>Tasks</td><td>RTE</td><td>MRPC</td><td>QNLI</td><td>HellaSwag</td><td>αNLI</td></tr><tr><td>Single</td><td>1</td><td>61.4</td><td>89.2</td><td>95.0</td><td>95.1</td><td>91.3</td></tr><tr><td>40-fullset</td><td>40</td><td>92.8</td><td>90.4</td><td>95.5</td><td>95.6</td><td>91.7</td></tr><tr><td>Top 5</td><td>5</td><td>92.4</td><td>91.9</td><td>95.3</td><td>95.6</td><td>91.6</td></tr><tr><td>Family</td><td>6/7</td><td>91.4</td><td>90.2</td><td>95.0</td><td>95.7</td><td>91.9</td></tr><tr><td>14-subset</td><td>14</td><td>91.8</td><td>90.3</td><td>95.6</td><td>96.1</td><td>92.5</td></tr></table>
183
+
184
+ target dataset, we calculate the Pearson correlation between the relationship scores and the transfer accuracy across the source datasets. As shown in Table 4, the relationship scores are positively correlated with transfer performance, indicating the potential to find related tasks via the relationship scores. In other words, the relationship scores essentially reflect task relationships.
185
+
186
+ Task relationships may also be reflected by shallow token distributions, such as vocabulary overlap or sentence length. To investigate whether our relationship probing can be replaced by comparing token distributions, we further analyze the correlation between the similarity of token distributions and dual-task transfer accuracy. For sentence length, we first calculate the absolute difference of the average lengths of the source and target datasets and then negate it (intuitively, the smaller the length difference, the closer the relationship). The vocabulary overlap of the source and target datasets is also computed for comparison. These similarity measures show only weak correlations with transfer accuracy (positive for 2/5 and 3/5 datasets, respectively, in Table 4) and are less consistent than our probing method, which indicates that our method mines more complex patterns of task relationships.
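
The two shallow baselines can be computed as follows; this is a sketch, and the Jaccard formulation of vocabulary overlap is our assumption:

```python
import numpy as np

def length_similarity(src_lengths, tgt_lengths):
    # Negated absolute difference of average sentence lengths:
    # the smaller the gap, the higher (less negative) the score.
    return -abs(np.mean(src_lengths) - np.mean(tgt_lengths))

def vocab_overlap(src_tokens, tgt_tokens):
    # Jaccard overlap between the vocabularies of the two datasets.
    src, tgt = set(src_tokens), set(tgt_tokens)
    return len(src & tgt) / len(src | tgt)
```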
187
+
188
+ # 5.3 Complementary Transfer
189
+
190
+ To inspect whether using more datasets always leads to better performance and whether using the most related datasets can lead to competitive
191
+
192
+ Table 5: Complementary transfer results using different mixtures of datasets for MTL. The last three rows represent mixtures at different granularities inspired by our relationship probing.
193
+
194
+ <table><tr><td rowspan="2">Model</td><td colspan="2">SQuADv1.1</td><td colspan="2">SQuADv2.0</td><td>NER</td></tr><tr><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>F1</td></tr><tr><td>Baseline</td><td>88.8</td><td>94.8</td><td>87.1</td><td>90.5</td><td>96.5</td></tr><tr><td>CompassMTL</td><td>89.7</td><td>95.1</td><td>88.5</td><td>91.3</td><td>96.9</td></tr></table>
195
+
196
+ Table 6: Results on the SQuAD v1.1/v2.0 and CoNLL-2003 (NER) development sets. The evaluation metrics are Exact-Match (EM) and F1 scores.
197
+
198
+ <table><tr><td>Model</td><td>HellaSwag</td><td>αNLI</td></tr><tr><td>Human Performance</td><td>95.60</td><td>92.90</td></tr><tr><td>Previous SOTA</td><td>94.87</td><td>92.20</td></tr><tr><td>Our Results</td><td>95.94</td><td>92.80</td></tr></table>
199
+
200
+ Table 7: Leaderboard tests of HellaSwag and $\alpha$ NLI.
201
+
202
+ results, we conduct a complementary transfer analysis by selecting a group of datasets to train an MTL model and then fine-tuning the model on target datasets. Four choices of dataset mixture are compared: 1) 40-fullset: the same as the basic setting of CompassMTL in this work; 2) Top-5: the five datasets ranked highest according to our probed relationship scores; 3) Family: the datasets belonging to the same family as the target dataset, i.e., 6 datasets for Rainbow tasks and 7 datasets for GLUE tasks; 4) 14-subset: the mixture of the Rainbow and GLUE datasets.
203
+
204
+ Table 5 presents the comparison results. We observe that the top-5 ranked variant yields comparable or even better results than the others, which indicates that training with more datasets does not always bring benefits. The results also indicate that small-scale datasets (e.g., MRPC and RTE), which have relatively high average correlation scores with the other datasets, are more likely to benefit from complementary transfer. As the number of tasks scales up (family $\rightarrow$ 14-subset), performance may improve as more related tasks are involved in training.
205
+
206
+ # 5.4 Human-parity on Commonsense Reasoning Leaderboards
207
+
208
+ Table 7 presents our test evaluation on the official leaderboards of HellaSwag<sup>7</sup> and $\alpha$ NLI<sup>8</sup>. The submissions are based on the ensemble of three models selected according to Section 5.3. Compared with public methods that use much larger PrLMs, model ensemble, and knowledge
209
+
210
+ <table><tr><td>Model</td><td>αNLI</td><td>CosmosQA</td><td>HellaSwag</td><td>PIQA</td><td>SocialIQA</td><td>Winogrande</td><td>Average</td></tr><tr><td>T5</td><td>68.5</td><td>69.6</td><td>56.6</td><td>67.7</td><td>65.1</td><td>62.4</td><td>65.0</td></tr><tr><td>UNICORN</td><td>65.3</td><td>72.8</td><td>56.2</td><td>73.3</td><td>66.1</td><td>61.8</td><td>65.9</td></tr><tr><td>CompassMTL</td><td>69.1</td><td>72.6</td><td>57.7</td><td>73.6</td><td>66.6</td><td>64.9</td><td>67.4</td></tr></table>
211
+
212
+ Table 8: Results on the Rainbow validation sets by using T5-base as the backbone model.
213
+
214
+ graphs, our models establish new state-of-the-art results and reach human-parity performance.
215
+
216
+ # 5.5 Beyond The Unified Format
217
+
218
+ To verify whether our model can be employed for tasks that cannot be transformed into our unified format, we evaluate the effectiveness of CompassMTL on the typical reading comprehension datasets SQuAD v1.1/2.0 (Rajpurkar et al., 2016, 2018) and the named entity recognition (NER) dataset CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003), which represent extractive question answering and sequence labeling formats, respectively. We first replicate the baselines for fine-tuning QA and NER tasks using the Transformers toolkit. For comparison, we initialize the baseline parameters with our model weights to see whether CompassMTL improves over the baselines. Results in Table 6 show that our model is generally effective across formats. The results also indicate that CompassMTL can serve as a strong off-the-shelf representation encoder applicable to new tasks without being pre-trained again.
219
+
220
+ # 5.6 Implementation Using The T5 Backbone
221
+
222
+ Although our method is implemented with an encoder-only backbone to compete on NLU tasks, it should be generally applicable to other kinds of PrLMs, such as the encoder-decoder T5. To verify this, we employ the pre-trained T5-base model (Raffel et al., 2019) as the backbone. We use the Rainbow datasets for MTL and convert the data into the text-to-text format following the standard processing for T5 training, with task prefixes inserted before each data sequence. The baselines are the single-task T5 trained on each individual task and UNICORN (Lourie et al., 2021) trained on the Rainbow datasets. Results in Table 8 verify that our method is generally effective.
223
+
224
+ # 6 Conclusions
225
+
226
+ This work presents a task prefix guided multi-task method that uses task prefixes to explore the mutual effects between tasks and to improve model performance with complementary tasks. Our released model can not only serve as a strong foundation backbone for a wide range of NLU tasks but also be used as a probing tool for analyzing task relationships. Our model shows generalizable advances over tasks in diverse formats and establishes human-parity results on commonsense reasoning tasks. Based on our pre-trained model, we find that the prefixes necessarily reflect task relationships, which correlate with transfer learning performance between tasks and suggest directions for data augmentation with complementary tasks. In summary, our work has the following prospects for future studies:
227
+
228
+ 1) Collaborative multi-task learning of PrLMs. The recipe of using task prefixes in conjunction with prefix prediction in MLM training has proven effective for large-scale MTL pre-training.
229
+
230
+ 2) Suggestive choice for data augmentation. The task relationships probed by the prefix embeddings have proven informative for finding complementary tasks. Using complementary tasks helps obtain better performance on a target task, especially for small-scale task datasets.
231
+
232
+ 3) Guidance for skill-aware model evaluation. The discovery of task relationships may help identify redundant datasets that assess similar model skills. Recently, there has been a trend to evaluate the comprehensive skills of deep learning models with a large number of datasets (Srivastava et al., 2022); the selection of distinctive datasets can be guided by our relationship discovery criteria to avoid evaluation redundancy and save computation.
233
+
234
+ Limitations. We acknowledge that the major limitation of this work is that our model may not readily apply to new tasks: it is based on the common MTL assumption that the set of tasks is known at training time. Adaptation to new tasks could be future work.
235
+
236
+ # References
237
+
238
+ Armen Aghajanyan, Anchit Gupta, Akshit Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta. 2021. Muppet: Massive multi-task representations with pre-finetuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5799-5811.
239
+ Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q Tran, Dara Bahri, Jianmo Ni, et al. 2021. Ext5: Towards extreme multitask scaling for transfer learning. arXiv preprint arXiv:2111.10952.
240
+ Lu Bai, Yew-Soon Ong, Tiantian He, and Abhishek Gupta. 2020. Multi-task gradient descent for multitask learning. Memetic Computing, 12(4):355-369.
241
+ Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615-3620, Hong Kong, China. Association for Computational Linguistics.
242
+ Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth pascal recognizing textual entailment challenge. In ACL-PASCAL.
243
+ Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. 2020. Abductive commonsense reasoning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
244
+ Joachim Bingel and Anders Søgaard. 2017. Identifying beneficial task relations for multi-task learning in deep neural networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 164-169, Valencia, Spain. Association for Computational Linguistics.
245
+ Yonatan Bisk, Rowan Zellers, Ronan LeBras, Jianfeng Gao, and Yejin Choi. 2020. PIQA: reasoning about physical commonsense in natural language. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7432-7439. AAAI Press.
246
+ Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,
247
+
248
+ pages 632-642, Lisbon, Portugal. Association for Computational Linguistics.
249
+ Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.
250
+ Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Martin Katz, and Nikolaos Aletras. 2021. Lexglue: A benchmark dataset for legal language understanding in english. arXiv preprint arXiv:2110.00976.
251
+ Zihan Chen, Hongbo Zhang, Xiaoji Zhang, and Leqi Zhao. 2018. Quora question pairs.
252
+ Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
253
+ Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924-2936, Minneapolis, Minnesota. Association for Computational Linguistics.
254
+ Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
255
+ Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457.
256
+ Marie-Catherine De Marneffe, Mandy Simons, and Judith Tonhauser. 2019. The commitmentbank: Investigating projection in naturally occurring discourse. In proceedings of Sinn und Bedeutung, volume 23, pages 107-124.
257
+ Franck Dernoncourt and Ji Young Lee. 2017. PubMed 200k RCT: a dataset for sequential sentence classification in medical abstracts. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 308-313, Taipei, Taiwan. Asian Federation of Natural Language Processing.
258
+
259
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
260
+ William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).
261
+ Mor Geva, Uri Katz, Aviv Ben-Arie, and Jonathan Berant. 2021. What's in your head? emergent behaviour in multi-task transformer models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8201-8215.
262
+ Xiaodong Gu, Kang Min Yoo, and Jung-Woo Ha. 2020. Dialogbert: Discourse-aware response generation via learning to recover and rank utterances. arXiv:2012.01775.
263
+ Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics.
+ Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing. arXiv preprint arXiv:2111.09543.
+ Kexin Huang, Jaan Altosaar, and R. Ranganath. 2019a. ClinicalBERT: Modeling clinical notes and predicting hospital readmission. arXiv preprint arXiv:1904.05342.
+ Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019b. Cosmos QA: Machine reading comprehension with contextual commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2391-2401, Hong Kong, China. Association for Computational Linguistics.
+ David Jurgens, Srijan Kumar, Raine Hoover, Dan McFarland, and Dan Jurafsky. 2018. Measuring the evolution of a scientific field through citation frames. Transactions of the Association for Computational Linguistics, 6:391-406.
+ Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Unifying question answering, text classification, and regression via span extraction. arXiv preprint arXiv:1904.09286.
+ Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UNIFIEDQA: Crossing format boundaries with a single QA system. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1896-1907, Online. Association for Computational Linguistics.
+ Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2020. QASC: A dataset for question answering via sentence composition. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8082-8090. AAAI Press.
+ Johannes Kiesel, Maria Mestre, Rishabh Shukla, Emmanuel Vincent, Payam Adineh, David Corney, Benno Stein, and Martin Potthast. 2019. SemEval-2019 task 4: Hyperpartisan news detection. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 829-839, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
+ Jens Kringelum, Sonny Kim Kjaerulff, Søren Brunak, Ole Lund, Tudor I Oprea, and Olivier Taboureau. 2016. ChemProt-3.0: a global chemical biology diseases mapping. Database, 2016.
+ Pawan Kumar, Dhanajit Brahma, Harish Karnick, and Piyush Rai. 2020. Deep attentive ranking networks for learning to order sentences. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8115-8122. AAAI Press.
+ Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+ Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, D. Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics.
+ Lu Li, Chenliang Li, and Donghong Ji. 2021. Deep context modeling for multi-turn response selection in dialogue systems. Information Processing & Management, 58(1):102415.
+ Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020. A unified MRC framework for named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5849-5859, Online. Association for Computational Linguistics.
+ Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586.
+ Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496, Florence, Italy. Association for Computational Linguistics.
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
+ Nicholas Lourie, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. Unicorn on rainbow: A universal commonsense reasoning model on a new multitask benchmark. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13480-13488.
+ Yanbao Ma, Hao Xu, Junzhou He, Kun Qian, and Tiebing Li. 2021. Adaptive transfer learning via fine-grained multi-task pre-training. In 2021 4th International Conference on Algorithms, Computing and Artificial Intelligence, pages 1-5.
+ Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics.
+ Julian J. McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. 2015. Image-based recommendations on styles and substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, Santiago, Chile, August 9-13, 2015, pages 43-52. ACM.
+ Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730.
+ Shikib Mehri, Mihail Eric, and Dilek Hakkani-Tur. 2020. DialoGLUE: A natural language understanding benchmark for task-oriented dialogue. arXiv preprint arXiv:2009.13570.
+ Nikita Nangia, Adina Williams, Angeliki Lazaridou, and Samuel Bowman. 2017. The RepEval 2017 shared task: Multi-genre natural language inference with sentence representations. In Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP, pages 1-10, Copenhagen, Denmark. Association for Computational Linguistics.
+ Vishakh Padmakumar, Leonard Lausen, Miguel Ballesteros, Sheng Zha, He He, and George Karypis. 2022. Exploring the role of task transferability in large-scale multi-task learning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2542-2550, Seattle, United States. Association for Computational Linguistics.
+ Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
+ Yujia Qin, Xiaozhi Wang, Yusheng Su, Yankai Lin, Ning Ding, Zhiyuan Liu, Juanzi Li, Lei Hou, Peng Li, Maosong Sun, et al. 2021. Exploring low-dimensional intrinsic task subspace via prompt tuning. arXiv preprint arXiv:2110.07867.
+ Yanru Qu, Dinghan Shen, Yelong Shen, Sandra Sajeev, Weizhu Chen, and Jiawei Han. 2020. CoDA: Contrast-enhanced and diversity-promoting data augmentation for natural language understanding. In International Conference on Learning Representations.
+ Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
+ Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, W. Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.
+ Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789, Melbourne, Australia. Association for Computational Linguistics.
+ Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
+ Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In 2011 AAAI Spring Symposium Series.
+ Anna Rogers, Olga Kovaleva, Matthew Downey, and Anna Rumshisky. 2020. Getting closer to AI complete question answering: A set of prerequisite real tasks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8722-8731.
+ Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. WinoGrande: An adversarial Winograd schema challenge at scale. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8732-8740.
+ Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207.
+ Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4463-4473, Hong Kong, China. Association for Computational Linguistics.
+ Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.
+ Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: Masked sequence to sequence pre-training for language generation. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 5926-5936. PMLR.
+ Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615.
+ Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, Huadong Wang, Kaiyue Wen, Zhiyuan Liu, Peng Li, Juanzi Li, et al. 2022. On transferability of prompt tuning for natural language processing. In Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).
+ Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. DREAM: A challenge data set and models for dialogue-based reading comprehension. Transactions of the Association for Computational Linguistics, 7:217-231.
+ Oyvind Tafjord, Matt Gardner, Kevin Lin, and Peter Clark. 2019. QuaRTz: An open-domain dataset of qualitative relationship questions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5941-5946, Hong Kong, China. Association for Computational Linguistics.
+ Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149-4158, Minneapolis, Minnesota. Association for Computational Linguistics.
+ Alon Talmor, Ori Yoran, Ronan Le Bras, Chandra Bhagavatula, Yoav Goldberg, Yejin Choi, and Jonathan Berant. 2021. CommonsenseQA 2.0: Exposing the limits of AI through gamification. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1).
+ Niket Tandon, Bhavana Dalvi, Keisuke Sakaguchi, Peter Clark, and Antoine Bosselut. 2019. WIQA: A dataset for "what if..." reasoning over procedural text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6076-6085, Hong Kong, China. Association for Computational Linguistics.
+ Yi Tay, Mostafa Dehghani, Vinh Q Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, and Donald Metzler. 2022. Unifying language learning paradigms. arXiv preprint arXiv:2205.05131.
+ Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.
+ Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, and Daniel Cer. 2022. SPoT: Better frozen model adaptation through soft prompt transfer. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5039-5059.
+ Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 3261-3275.
+ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
+ Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.
+ Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.
+ Johannes Welbl, Nelson F. Liu, and Matt Gardner. 2017. Crowdsourcing multiple choice science questions. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 94-106, Copenhagen, Denmark. Association for Computational Linguistics.
+ Taesun Whang, Dongyub Lee, Chanhee Lee, Kisu Yang, Dongsuk Oh, and Heuiseok Lim. 2020. An effective domain adaptive post-training method for BERT in response selection. In INTERSPEECH.
+ Taesun Whang, Dongyub Lee, Dongsuk Oh, Chanhee Lee, Kijong Han, Dong-hun Lee, and Saebyeok Lee. 2021. Do response selection models really know what's next? Utterance manipulation strategies for multi-turn response selection. In The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21).
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
+ Chien-Sheng Wu, Steven C.H. Hoi, Richard Socher, and Caiming Xiong. 2020a. TOD-BERT: Pre-trained natural language understanding for task-oriented dialogue. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 917-929, Online. Association for Computational Linguistics.
+ Sen Wu, Hongyang R. Zhang, and Christopher Ré. 2020b. Understanding and improving information transfer in multi-task learning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+ Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I Wang, et al. 2022. UnifiedSKG: Unifying and multi-tasking structured knowledge grounding with text-to-text language models. arXiv preprint arXiv:2201.05966.
+ Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 5754-5764.
+ Wenhao Yu, Chenguang Zhu, Yuwei Fang, Donghan Yu, Shuohang Wang, Yichong Xu, Michael Zeng, and Meng Jiang. 2022. Dict-BERT: Enhancing language model pre-training with dictionary. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1907-1918.
+ Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791-4800, Florence, Italy. Association for Computational Linguistics.
+ Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 649-657.
+ Zhihan Zhang, Wenhao Yu, Mengxia Yu, Zhichun Guo, and Meng Jiang. 2022. A survey of multi-task learning in natural language processing: Regarding task relatedness and training methods. arXiv preprint arXiv:2204.03508.
+ Zhuosheng Zhang and Hai Zhao. 2021. Structural pre-training for dialogue comprehension. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5134-5145, Online. Association for Computational Linguistics.
+ Zimu Zheng, Yuqi Wang, Quanyu Dai, Huadi Zheng, and Dan Wang. 2019. Metadata-driven task relation discovery for multi-task learning. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 4426-4432. ijcai.org.
+
+ <table><tr><td>Context</td><td>Question</td><td>Option(s)</td></tr><tr><td>[sciq] A wetland is an area that is wet for all or part of the year. Wetlands are home to certain types of plants.</td><td>What is an area of land called that is wet for all or part of the year?</td><td>["tundra", "plains", "grassland", "wetland"]</td></tr><tr><td>[commonsense_qa] revolving door</td><td>A revolving door is convenient for two direction travel, but it also serves as a security measure at a what?</td><td>["bank", "library", "department store", "mall", "new-york"]</td></tr><tr><td>[dream] M: I am considering dropping my dancing class. I am not making any progress. W: If I were you, I'd stick with it. It's definitely worth time and effort.</td><td>What does the man suggest the woman do?</td><td>["Consult her dancing teacher.", "Take a more interesting class.", "Continue her dancing class.", "N/A"]</td></tr><tr><td>[scotus] The Interstate Commerce Commission, acting under § 19a of the Interstate Commerce Act, ordered the appellant to furnish certain inventories, schedules, maps and charts of its pipe line property ...</td><td>-</td><td>["Unions", "Economic Activity", "Jurisdictional Power", "Federalism"]</td></tr><tr><td>[unfair_tos] you must provide accurate and complete data during the registration and update your registration data if it changes.</td><td>-</td><td>["there is no unfair contractual term", "Limitation of liability", "Unilateral termination", "Arbitration"]</td></tr></table>
+
+ Table 9: Examples of transformed datasets.
+
+ # A Appendix
+
+ # A.1 Examples of transformed datasets
+
+ Table 9 shows examples of transformed datasets. The first row presents the standard multiple-choice dataset, followed by four types of outlier datasets (Section 3.1) that are transformed into our unified format.
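+
+ To make the conversion concrete, below is a minimal sketch of casting a single-label classification example, such as the unfair_tos row in Table 9, into the unified multiple-choice format. The field names and helper are hypothetical illustrations, not our actual preprocessing code.
+
+ ```python
+ def to_multiple_choice(text, label, label_names):
+     """Cast a classification example into (context, question, options, answer)."""
+     return {
+         "context": text,           # original input text becomes the context
+         "question": "",            # classification tasks carry no question
+         "options": label_names,    # class names become the options
+         "answer": label_names.index(label),  # gold option index
+     }
+
+ example = to_multiple_choice(
+     "you must provide accurate and complete data during the registration ...",
+     "Limitation of liability",
+     ["there is no unfair contractual term", "Limitation of liability",
+      "Unilateral termination", "Arbitration"],
+ )
+ print(example["answer"])  # 1
+ ```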
+
+ # A.2 Fine-tuning details
+
+ As described in Section 3.1, our training datasets are converted into a multiple-choice-like format for multi-task pre-training. During fine-tuning, because the GLUE and Rainbow tasks we evaluate for public comparison are either single-label classification or multiple-choice tasks, the conversion does not affect performance according to our preliminary experiments, as predictions can be easily mapped back to the original formats by choosing the best-ranked option. For the other tasks, such as the multi-label classification tasks in LexGLUE, where the conversion would clip the ground-truth labels, we use the original datasets for fine-tuning and initialize the corresponding baseline models with our pre-trained weights after MTL. The criteria for choosing the baseline models for different types of tasks follow standard practice in the literature (He et al., 2021; Chalkidis et al., 2021).
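+
+ As a quick illustration of mapping a prediction back, here is a tiny hedged sketch (the helper name is hypothetical): the best-ranked option index directly yields the original class label.
+
+ ```python
+ def map_back(option_scores, label_names):
+     """Pick the best-ranked option and return its original label."""
+     best = max(range(len(option_scores)), key=option_scores.__getitem__)
+     return label_names[best]
+
+ print(map_back(
+     [0.10, 0.70, 0.15, 0.05],
+     ["there is no unfair contractual term", "Limitation of liability",
+      "Unilateral termination", "Arbitration"],
+ ))  # -> Limitation of liability
+ ```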
taskcompassscalingmultitaskpretrainingwithtaskprefix/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:44c7498e8a9343a0c96289696de6e980b251c0a12e03ecd215c460aa507b1588
+ size 823257
taskcompassscalingmultitaskpretrainingwithtaskprefix/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e7745c1161b00741e58d0a8219a60e454bf78b90fc08e3d8977a311e5c453c54
+ size 460273
texteditingasimitationgame/24b4bbd1-c35f-4895-99bd-c96bd69e407b_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b2bca7d643851f2ddaa827cd7f79eac3b588fb0326b1678fbb2936950042d1f
+ size 87724
texteditingasimitationgame/24b4bbd1-c35f-4895-99bd-c96bd69e407b_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b66d02d019d218a7a0418018a76b82158db37510a883d2b477adeb044245950
+ size 107418
texteditingasimitationgame/24b4bbd1-c35f-4895-99bd-c96bd69e407b_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:518939324f554f2150ca2762450fe3b6839f647f857a5f0e75c424cba0353d56
+ size 597326
texteditingasimitationgame/full.md ADDED
@@ -0,0 +1,347 @@
+ # Text Editing as Imitation Game
+
+ Ning Shi, Bin Tang, Bo Yuan, Longtao Huang, Yewen Pu, Jie Fu, Zhouhan Lin
+
+ $\spadesuit$ Alberta Machine Intelligence Institute, Dept. of Computing Science, University of Alberta
+
+ Alibaba Group ★ Shanghai Jiao Tong University
+
+ $\clubsuit$ Autodesk Research $\diamond$ Beijing Academy of Artificial Intelligence
+
+ Ning.shi@ualberta.ca, {tangbin.tang,qiufu.yb,kaiyang.hlt}@alibaba-inc.com, yewen.pu@autodesk.com, fujie@baai.ac.cn, lin.zhouhan@gmail.com
+
+ # Abstract
+
+ Text editing, such as grammatical error correction, arises naturally from imperfect textual data. Recent works frame text editing as a multi-round sequence tagging task, where operations - such as insertion and substitution - are represented as a sequence of tags. While achieving good results, this encoding is limited in flexibility, as all actions are bound to token-level tags. In this work, we reformulate text editing as an imitation game using behavioral cloning. Specifically, we convert conventional sequence-to-sequence data into state-to-action demonstrations, where the action space can be as flexible as needed. Instead of generating the actions one at a time, we introduce a dual-decoder structure that parallelizes the decoding while retaining the dependencies between action tokens, coupled with trajectory augmentation to alleviate the distribution shift that imitation learning often suffers from. In experiments on a suite of Arithmetic Equation benchmarks, our model consistently outperforms the autoregressive baselines in terms of performance, efficiency, and robustness. We hope our findings will shed light on future studies in reinforcement learning that apply sequence-level action generation to natural language processing.
+
+ # 1 Introduction
+
+ Text editing (Malmi et al., 2022) is an important family of tasks that edit text in a localized fashion, with applications to text simplification (Agrawal et al., 2021), grammatical error correction (Li et al., 2022), and punctuation restoration (Shi et al., 2021), to name a few. The neural sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014) has established itself as the primary approach to text editing by framing the problem as machine translation (Wu et al., 2016). Seq2seq modeling has the advantage of simplicity: the system can be built simply from input-output pairs consisting of pathological sequences to be edited and the desired output sequences, without much manual processing effort (Junczys-Dowmunt et al., 2018).
+
+ ![](images/9167aa69c63d872520b1f61b15ebd6d5bdc3cd7a23e49c70fec439a281bd63c6.jpg)
+ Figure 1: Three approaches - sequence tagging (left), end-to-end (middle), sequence generation (right) - to turn an invalid arithmetic expression "1 1 2" into a valid one $1 + 1 = 2$. In end-to-end, the entire string "1 1 2" is encoded into a latent state, from which the string $1 + 1 = 2$ is generated directly. In sequence tagging, a localized action (such as "INSERT_+", meaning insert a "+" symbol after this token) is applied/tagged to each token; these token-level actions are then executed, modifying the input string. In contrast, sequence generation outputs an entire action sequence, generating the location (rather than tagging it), and the action sequence is executed, modifying the input string. Both token-level actions and sequence-level actions can be applied multiple times to polish the text further (up to a fixed point).
+
+ However, even with a copy mechanism (See et al., 2017; Zhao et al., 2019; Panthaplackel et al., 2021), an end-to-end model can struggle to carry out localized, specific fixes while keeping the rest of the sequence intact. Thus, sequence tagging is often found more appropriate when outputs highly overlap with inputs (Dong et al., 2019; Mallinson et al., 2020; Stahlberg and Kumar, 2020). In such cases, a neural model predicts a tag sequence - representing localized fixes such as insertion and substitution - and a programmatic interpreter carries these edit operations out. Here, each tag represents a token-level action and determines the operation on its attached token (Kohita et al., 2020). A model can avoid modifying the overlap by assigning no-ops (e.g., KEEP), but the action space is limited to token-level modifications, such as deletion or insertion after a token (Awasthi et al., 2019; Malmi et al., 2019).
+
+ In contrast, alternative approaches (Gupta et al., 2019) train the agent to explicitly generate free-form edit actions and iteratively reconstruct the text while interacting with an environment capable of altering the text based on these actions. This sequence-level action generation (Branavan et al., 2009; Guu et al., 2017; Elgohary et al., 2021) allows higher flexibility in action design, not being limited to token-level actions, and is more advantageous given the narrowed problem space and dynamic context during editing (Shi et al., 2020).
+
+ The mechanisms of sequence tagging and sequence generation, as opposed to end-to-end, are exemplified in Figure 1. Both methods allow multiple rounds of sequence refinement (Ge et al., 2018; Liu et al., 2021) and imitation learning (IL) (Pomerleau, 1991), in which an agent essentially learns from the demonstrations of an expert policy and later imitates the memorized behavior to act independently (Schaal, 1996). On the one hand, IL in sequence tagging functions as standard supervised learning in its nature and has thus attracted significant interest and wide use recently (Agrawal et al., 2021; Yao et al., 2021; Agrawal and Carpuat, 2022), achieving good results in the token-level action generation setting (Gu et al., 2019; Reid and Zhong, 2021). On the other hand, IL in sequence-level action generation is less well defined, even though its principle has been followed in text editing (Shi et al., 2020) and many other areas (Chen et al., 2021). As a major obstacle, the training is on state-action demonstrations, where the encodings of the states and actions can be very different (Gu et al., 2018). For instance, the mismatch in the length dimension between the state and action makes it tricky to implement an autoregressive model that benefits from a single, uniform representation.
+
+ To tackle the issues above, we reformulate text editing as an imitation game controlled by a Markov Decision Process (MDP). To begin with, we define the input sequence as the initial state, the required operations as action sequences, and the output target sequence as the goal state. A learning agent needs to imitate an expert policy, respond to seen states with actions, and interact with the environment until the eventual editing succeeds. To convert existing input-output data into state-action pairs, we utilize trajectory generation (TG), which leverages dynamic programming (DP) for an efficient search of the minimum operations under a predefined edit metric. We backtrace the explored editing paths and automatically express the operations as action sequences. Regarding the length misalignment, we first take advantage of the flexibility at the sequence level to fix actions to be of the same length. Second, we employ a linear layer after the encoder to transform the length dimension of the context matrix into the action length. On this basis, we introduce a dual decoders (D2) structure that not only parallelizes the decoding but also retains the capture of interdependencies among action tokens. Taking a further step, we propose trajectory augmentation (TA) as a solution to the distribution shift problem that most IL suffers from (Ross et al., 2011). Through a suite of three Arithmetic Equation (AE) benchmarks (Shi et al., 2020), namely Arithmetic Operators Restoration (AOR), Arithmetic Equation Simplification (AES), and Arithmetic Equation Correction (AEC), we confirm the superiority of our learning paradigm. In particular, D2 consistently exceeds standard autoregressive models from the performance, efficiency, and robustness perspectives.
+
+ In theory, our methods also apply to other imitation learning scenarios where a reward function exists to further promote the agent. In this work, we primarily focus on a proof of concept of our learning paradigm, landing at supervised behavior cloning (BC) in the context of text editing. To this end, our contributions<sup>1</sup> are as follows:
+
+ 1. We frame text editing as an imitation game formally defined as an MDP, allowing the highest degree of flexibility to design actions at the sequence level.
+ 2. We involve TG to translate input-output data into state-action demonstrations for IL.
+ 3. We introduce D2, a novel non-autoregressive decoder, boosting the learning in terms of accuracy, efficiency, and robustness.
+ 4. We propose a corresponding TA technique to mitigate the distribution shift IL often suffers from.
+
+ # 2 Imitation Game
+
+ We aim to cast text editing into an imitation game by defining the task as recurrent sequence generation, as presented in Figure 2 (a). In this section, we describe the major components of our proposal, including (1) the problem definition, (2) the data translation, (3) the model structure, and (4) a solution to the distribution shift.
+
+ ![](images/e1736407de3b8b2a45bc938c32f134756998c64785068338b2cf826772267b42.jpg)
+ Figure 2: (a) shows the imitation game of AOR. Considering the input text $\mathbf{x}$ as the initial state $\mathbf{s}_1$, the agent interacts with the environment to edit "1 1 2" into "1 + 1 = 2" via action $\mathbf{a}_1$ to insert "+" at the first position and $\mathbf{a}_2$ to insert "=" at the third position. After $\mathbf{a}_3$, the agent stops editing and calls the environment to return $\mathbf{s}_3$ as the output text $\mathbf{y}$. Using the same example, (b) explains how to reach the shifted state $\mathbf{s}_2'$ by skipping action $\mathbf{a}_1^*$ and performing $\mathbf{a}_2'$. Here we update $\mathbf{a}_2^*$ to $\mathbf{a}_2'$ accordingly due to the previous skipping. The new state $\mathbf{s}_2'$ was not in the expert demonstrations.
+
+ ![](images/1f2fb3dfe27430820b8486b07f5b4a5bed28e6380bd08592dcdf84d432184a96.jpg)
+ (a)
+
+ ![](images/b72d9b6ef00dce50ea99d51f5f44ba0fe4d74db4e98e9d1f910ce35b3e045546.jpg)
+
+ ![](images/382377ef35f4a7de754151e2277c3a7eec5e7d88505c22ec8609c816a56540d4.jpg)
+
+ ![](images/8a95885e86e375960ed46d9bec6615dfddd646883069722e70c1f90453d2f86d.jpg)
+ (b)
+
+ # 2.1 Behavior cloning
+
+ We decompose a text editing task $\mathcal{X} \mapsto \mathcal{Y}$ into recurrent subtasks of sequence generation $\mathcal{S} \mapsto \mathcal{A}$ defined by an MDP tuple $\mathcal{M} = (\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{E}, \mathcal{R})$.
+
+ State $\mathcal{S}$ is a set of text sequences $\mathbf{s} = s_{j\leq m}$, where $s\in \mathcal{V}_S$. We think of a source sequence $\mathbf{x}\in \mathcal{X}$ as the initial state $\mathbf{s}_1$, its target sequence $\mathbf{y}\in \mathcal{Y}$ as the goal state $\mathbf{s}_T$, and every edited sequence in between as an intermediate state $\mathbf{s}_t$. The path $\mathbf{x}\mapsto \mathbf{y}$ can thus be represented as a set of sequential states $\mathbf{s}_{t \leq T}$.
+
+ Action $\mathcal{A}$ is a set of action sequences $\mathbf{a} = a_{i\leq n}$, where $a\in \mathcal{V}_A$. In Figure 2, "INSERT", "POS_3", and "=" are three action tokens belonging to the action vocabulary space $\mathcal{V}_A$. In contrast to token-level actions in sequence tagging, sequence-level ones free the editing to vary the edit metric $\mathbf{E}$ (e.g., Levenshtein distance) as long as $\mathcal{X}\xrightarrow{\mathcal{A}_{\mathbf{E}}}\mathcal{Y}$. The edit metric serves as an expert policy $\pi^{*}$ to demonstrate the path to the goal state. A better expert usually means better demonstrations and imitation results. Hence, depending on the task, a suitable $\mathbf{E}$ is essential.
+
+ Transition matrix $\mathcal{P}$ models the probability $p$ that an action $\mathbf{a}_t$ leads a state $\mathbf{s}_t$ to the state $\mathbf{s}_{t + 1}$. Due to the deterministic nature of text editing, we know $\forall \mathbf{s},\mathbf{a}.\, p(\mathbf{s}_{t + 1}|\mathbf{s}_t,\mathbf{a}_t) = 1$, so we can omit $\mathcal{P}$.
+
+ Environment $\mathcal{E}$ responds to an action and updates the game state accordingly by $\mathbf{s}_{t + 1} = \mathcal{E}(\mathbf{s}_t,\mathbf{a}_t)$ with process control. For example, the environment can refuse to execute actions that fail to pass verification and terminate the game once a maximum number of iterations has been consumed.
+
+ Reward function $\mathcal{R}$ calculates a reward for each action. It is a major factor contributing to the success of reinforcement learning. In the scope of this paper, we focus on BC, the simplest form of IL, so we can also omit $\mathcal{R}$ and leave it for future work.
+
+ # Algorithm 1 Trajectory Generation (TG)
+
+ Input: Initial state $\mathbf{x}$, goal state $\mathbf{y}$, environment $\mathcal{E}$, and edit metric $\mathbf{E}$.
+ Output: Trajectories $\tau$.
+ 1: $\tau \gets \emptyset$
+ 2: $\mathbf{s}\gets \mathbf{x}$
+ 3: $ops\gets \mathrm{DP}(\mathbf{x},\mathbf{y},\mathbf{E})$
+ 4: for $op\in ops$ do
+ 5: $\mathbf{a} \gets \operatorname{Action}(op) \quad \triangleright$ Translate operation to action
+ 6: $\tau \gets \tau \cup [(\mathbf{s}, \mathbf{a})]$
+ 7: $\mathbf{s} \gets \mathcal{E}(\mathbf{s}, \mathbf{a})$
+ 8: end for
+ 9: $\tau \gets \tau \cup [(\mathbf{s},\mathbf{a}_T)]\quad \triangleright$ Append goal state and output action
+ 10: return $\tau$
+
+ The formulation turns out to be a simplified $\mathcal{M}_{BC} = (\mathcal{S},\mathcal{A},\mathcal{E})$. Interacting with the environment $\mathcal{E}$, we hope a trained agent is able to follow its learned policy $\pi : \mathcal{S} \mapsto \mathcal{A}$ and iteratively edit the initial state $\mathbf{s}_1 = \mathbf{x}$ into the goal state $\mathbf{s}_T = \mathbf{y}$.
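+
+ To make the setup concrete, here is a minimal sketch, not the paper's implementation, of such a deterministic environment for the AE example in Figure 2. States and actions are token lists, and the fixed-length action layout [op, POS_i, tok] and the class itself are hypothetical illustrations.
+
+ ```python
+ class Environment:
+     """Deterministic text-editing environment: s_{t+1} = E(s_t, a_t)."""
+     DONE = ["DONE", "DONE", "DONE"]  # fixed-length terminal action
+
+     def step(self, state, action):
+         if action == self.DONE:
+             return state
+         op, pos, tok = action
+         i = int(pos.split("_")[1])
+         if op == "INSERT" and 0 <= i <= len(state):
+             return state[:i] + [tok] + state[i:]
+         if op == "DELETE" and 0 <= i < len(state):
+             return state[:i] + state[i + 1:]
+         if op == "SUB" and 0 <= i < len(state):
+             return state[:i] + [tok] + state[i + 1:]
+         return state  # invalid action fails verification: keep the state
+
+ env = Environment()
+ s = ["1", "1", "2"]                        # initial state s_1 = x
+ s = env.step(s, ["INSERT", "POS_1", "+"])  # a_1: 1 + 1 2
+ s = env.step(s, ["INSERT", "POS_3", "="])  # a_2: 1 + 1 = 2
+ print(" ".join(s))                         # goal state s_3 = y
+ ```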
+
+ # 2.2 Trajectory generation
+
+ A dataset for learning $\mathcal{X} \mapsto \mathcal{Y}$ consists of input-output pairs. It is necessary to convert them into state-action pairs so that an agent can mimic the expert policy $\pi^{*}: \mathcal{S} \mapsto \mathcal{A}$ via supervised learning. The detailed TG procedure is described in Algorithm 1.
+
+ Treating a predefined edit metric $\mathbf{E}$ as the expert policy $\pi^{*}$, we can leverage DP to efficiently find the minimum operations required to convert $\mathbf{x}$ into $\mathbf{y}$ in a left-to-right manner and backtrace this path to get the specific operations.
+
+ The operations are then expressed as a set of sequential actions $\mathbf{a}_{t\leq T}^{*}$. Here we utilize a special symbol DONE to mark the last action $\mathbf{a}_T^*$, where $\forall a\in \mathbf{a}_T^*.\, a = \mathtt{DONE}$. Once an agent performs $\mathbf{a}_T^*$, the current state is returned by the environment as the final output.
+
+ Given $\mathbf{s}_1^* = \mathbf{x}$, we attain the next state $\mathbf{s}_2^* = \mathcal{E}(\mathbf{s}_1^*, \mathbf{a}_1^*)$ and continue the rest until achieving $\mathbf{s}_T^* = \mathbf{y}$, resulting in a set of sequential states $\mathbf{s}_{t \leq T}^*$.
+
+ After the one-to-one correspondence between states and actions, we collect a set of sequential expert demonstrations $\tau^{*} = [(\mathbf{s}_{t\leq T}^{*},\mathbf{a}_{t\leq T}^{*})]$. Repeating the same process, we eventually convert $\mathcal{X}\mapsto \mathcal{Y}$ into trajectories $\mathcal{T}^*:\mathcal{S}\mapsto \mathcal{A}$.
+
+ ![](images/90dff7ca21ac8fca37b7fbef6aa1883e519061964118be91ed97bb8ee38daf06.jpg)
+ Figure 3: The conventional autoregressive decoder (a) compared with the proposed non-autoregressive D2 (b), in which the linear layer aligns the sequence length dimension for the subsequent parallel decoding.
+
+ ![](images/69731ffb56ba14d07a952a64fce2fd14df4f880162a25c2269bdec92979814c5.jpg)
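+
+ As a concrete instance of Algorithm 1, below is a hedged sketch of TG with Levenshtein as the edit metric $\mathbf{E}$, reusing the hypothetical Environment above. The DP table and backtrace follow the standard Levenshtein recurrence; operations are collected right-to-left so that earlier source positions stay valid as actions are executed, a simplification standing in for the paper's own operation-to-action translation.
+
+ ```python
+ def edit_ops(src, tgt):
+     """Levenshtein DP plus backtrace: one minimal (op, i, tok) script."""
+     m, n = len(src), len(tgt)
+     d = [[0] * (n + 1) for _ in range(m + 1)]
+     for i in range(m + 1): d[i][0] = i
+     for j in range(n + 1): d[0][j] = j
+     for i in range(1, m + 1):
+         for j in range(1, n + 1):
+             cost = 0 if src[i - 1] == tgt[j - 1] else 1
+             d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
+     ops, i, j = [], m, n  # backtrace from the bottom-right corner
+     while i > 0 or j > 0:
+         if i > 0 and j > 0 and src[i - 1] == tgt[j - 1] and d[i][j] == d[i - 1][j - 1]:
+             i, j = i - 1, j - 1                       # match: no operation
+         elif i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + 1:
+             ops.append(("SUB", i - 1, tgt[j - 1])); i -= 1; j -= 1
+         elif i > 0 and d[i][j] == d[i - 1][j] + 1:
+             ops.append(("DELETE", i - 1, "")); i -= 1
+         else:
+             ops.append(("INSERT", i, tgt[j - 1])); j -= 1
+     return ops  # right-to-left order keeps positions valid during execution
+
+ def trajectory_generation(x, y, env):
+     """Algorithm 1: translate an (x, y) pair into (state, action) pairs."""
+     tau, s = [], list(x)
+     for op, i, tok in edit_ops(list(x), list(y)):
+         a = [op, f"POS_{i}", tok or f"POS_{i}"]  # pad DELETE to fixed length
+         tau.append((s, a))
+         s = env.step(s, a)
+     tau.append((s, Environment.DONE))  # append goal state and DONE action
+     return tau
+
+ for state, action in trajectory_generation("112", "1+1=2", Environment()):
+     print(" ".join(state), "->", action)
+ ```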
+
+ # 2.3 Model architecture
+
+ We form $\mathcal{S} \mapsto \mathcal{A}$ as sequence generation. More precisely, a neural model (i.e., the agent) takes states as input and outputs actions. Training an imitation policy with BC corresponds to fitting a parametric model $\pi_{\theta}$ that minimizes the negative log-likelihood loss $l(\mathbf{a}^{*}, \pi_{\theta}(\mathbf{s}))$. Most seq2seq models have an encoder-decoder structure.
+
+ Encoder takes an embedded state $\operatorname{E}(\mathbf{s}) \in \mathbb{R}^{m \times d}$ and generates an encoded hidden state $\mathbf{h}_E \in \mathbb{R}^{m \times d}$, with $d$ being the hidden dimension.
+
+ Autoregressive decoder in Figure 3 (a) conditions the current step on the encoded context and previous predictions to overcome the mismatch of sequence lengths. It calculates, step by step,
+
+ $$
+ h_{D}^{i} = \mathrm{AR}(\mathrm{E}(a_{<i}), \mathbf{h}_{E}) \in \mathbb{R}^{d}, \quad i = 0, \dots, n + 1,
+ $$
+
+ $$
+ \hat{a}_i = \mathrm{LogSoftmax}(h_D^i)\in \mathbb{R}^{|\mathcal{V}_A|}, \quad i = 0,\dots ,n + 1,
+ $$
+
+ and, in the end, returns $\hat{\mathbf{a}}\in \mathbb{R}^{n\times |\mathcal{V}_A|}$. Training is conducted by backpropagating $l(\mathbf{a}^{*},\hat{\mathbf{a}})$. Note that $a_0^* = \mathtt{BOS}$ and $a_{n + 1}^* = \mathtt{EOS}$ encourage the decoder to learn to begin and end the autoregression.
+
+ Non-autoregressive decoder instead provides all hidden states at once. It is feasible to apply techniques from non-autoregressive machine translation. However, one of the primary issues solved there is the uncertainty of the target sequence length. When it comes to state-action prediction, thanks to the flexibility at the sequence level, we are allowed to design actions on purpose to eliminate such uncertainty. Specifically, we enforce action sequences to be of fixed length. On this basis, we propose D2, as shown in Figure 3 (b). To address the misalignment of sequence lengths between state and action, we insert a fully connected feed-forward network between the encoder and $\mathrm{decoder}_0$:
+
+ $$
+ \operatorname{FFN}\left(\mathbf{h}_{E}\right) = \left(\mathbf{h}_{E}^{\mathbf{T}} W + b\right)^{\mathbf{T}} \in \mathbb{R}^{n \times d}
+ $$
+
+ where $W \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^{d \times n}$ transform the length dimension from $m$ to $n$ so as to project $\mathbf{h}_E$ into $\mathbf{h}_F \in \mathbb{R}^{n \times d}$. The alignment of the sequence length allows us to trivially pass $\mathbf{h}_F$ to $\mathrm{decoder}_0$:
+
+ $$
+ \mathbf{h}_{D_{0}} = \mathrm{NAR}_{0}(\mathbf{h}_{F}, \mathbf{h}_{E}) \in \mathbb{R}^{n \times d}
+ $$
+
+ $$
+ \hat{\mathbf{a}}^{0} = \operatorname{LogSoftmax}\left(\mathbf{h}_{D_{0}}\right) \in \mathbb{R}^{n \times |\mathcal{V}_{A}|}
+ $$
+
+ For a clear comparison with the autoregressive decoder, we make minimal changes to the structure and keep modeling the dependence between two contiguous steps through $\mathrm{decoder}_1$. To elaborate, we shift $\hat{\mathbf{a}}^0$ one position to the right as $\acute{\mathbf{a}}^0$ by appending $a_0^*$ at the beginning and removing $\hat{a}_n^0$ to maintain the sequence length. After that, we continue to feed $\acute{\mathbf{a}}^0$ to $\mathrm{decoder}_1$:
+
+ $$
+ \mathbf{h}_{D_{1}} = \mathrm{NAR}_{1}(\mathrm{E}(\acute{\mathbf{a}}^{0}), \mathbf{h}_{E}) \in \mathbb{R}^{n \times d}
+ $$
+
+ $$
+ \hat{\mathbf{a}}^{1} = \operatorname{LogSoftmax}\left(\mathbf{h}_{D_{1}}\right) \in \mathbb{R}^{n \times |\mathcal{V}_{A}|}
+ $$
+
+ At last, we conduct backpropagation with respect to the loss summation $l(\mathbf{a}^*, \hat{\mathbf{a}}^0) \oplus l(\mathbf{a}^*, \hat{\mathbf{a}}^1)$. Conventional seq2seq architectures are often equipped with intermediate modules, such as a full attention distribution over the encoded context (Bahdanau et al., 2015), which are omitted in the above formulation for simplicity. In the implementation, we train $\mathrm{decoder}_0$ and $\mathrm{decoder}_1$ separately to increase the model capacity, yet weight sharing is possible.
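+
+ The following is a minimal PyTorch sketch of the D2 idea under stated assumptions, not the authors' code: Transformer layers stand in for the paper's LSTM-with-attention backbone, states are padded to a fixed length $m$, and at training time the shifted gold actions would be teacher-forced into $\mathrm{decoder}_1$ instead of the greedy argmax used here.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class D2(nn.Module):
+     """Dual decoders: a linear layer maps the encoder output from state
+     length m to action length n; decoder_0 predicts all action tokens in
+     parallel, and decoder_1 re-predicts them conditioned on decoder_0's
+     right-shifted output, retaining step-to-step dependencies."""
+     def __init__(self, state_vocab, action_vocab, m, n, d=512):
+         super().__init__()
+         self.state_embed = nn.Embedding(state_vocab, d)
+         self.action_embed = nn.Embedding(action_vocab, d)
+         enc = nn.TransformerEncoderLayer(d, nhead=8, batch_first=True)
+         self.encoder = nn.TransformerEncoder(enc, num_layers=4)
+         self.len_proj = nn.Linear(m, n)  # FFN aligning the length dimension
+         dec0 = nn.TransformerDecoderLayer(d, nhead=8, batch_first=True)
+         dec1 = nn.TransformerDecoderLayer(d, nhead=8, batch_first=True)
+         self.decoder0 = nn.TransformerDecoder(dec0, num_layers=1)
+         self.decoder1 = nn.TransformerDecoder(dec1, num_layers=1)
+         self.out0 = nn.Linear(d, action_vocab)
+         self.out1 = nn.Linear(d, action_vocab)
+
+     def forward(self, state_ids, bos_id=0):
+         h_e = self.encoder(self.state_embed(state_ids))            # (B, m, d)
+         h_f = self.len_proj(h_e.transpose(1, 2)).transpose(1, 2)   # (B, n, d)
+         logits0 = self.out0(self.decoder0(h_f, h_e))               # (B, n, |V_A|)
+         a0 = logits0.argmax(-1)                                    # greedy a^0
+         bos = torch.full_like(a0[:, :1], bos_id)
+         shifted = torch.cat([bos, a0[:, :-1]], dim=1)              # right shift
+         logits1 = self.out1(self.decoder1(self.action_embed(shifted), h_e))
+         return logits0, logits1  # train on the sum of both NLL losses
+
+ model = D2(state_vocab=32, action_vocab=32, m=24, n=3)
+ logits0, logits1 = model(torch.zeros(2, 24, dtype=torch.long))
+ print(logits0.shape, logits1.shape)  # torch.Size([2, 3, 32]) twice
+ ```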
+
+ # 2.4 Trajectory augmentation
+
+ IL suffers from distribution shift and error accumulation (Ross et al., 2011). An agent's mistakes can easily put it into a state that the expert demonstrations do not involve and that the agent has never seen during training. This also means errors can add up, so the agent drifts farther and farther away from the demonstrations. To tackle this issue, we propose TA, which expands the expert demonstrations and actively exposes shifted states to the agent. We accomplish this by diverting intermediate states and considering them as initial states for TG. An example is offered in Figure 2 (b).
+
+ Given expert states $\mathbf{s}_{t\leq T}^{*}$ and the corresponding actions $\mathbf{a}_{t\leq T}^{*}$, we utilize the divide-and-conquer technique to (1) break down the chain of state generation $\mathbf{s}_t^*\xrightarrow{\mathbf{a}_t^*}\mathbf{s}_{t + 1}^*$ into two by either executing $\mathbf{a}_t^*$ to stay on the current path or skipping $\mathbf{a}_t^*$ to branch off the current path; (2) recursively call this process until reaching the goal state $\mathbf{s}_T^*$; and (3) merge the intermediate states from the branches and return them from bottom to top in the end. As illustrated in Algorithm 2, we collect a set of shifted states
+
+ $$
+ \mathbf{S}^{\prime} = \mathrm{TA}(\emptyset, \mathbf{s}_{1}^{*}, \mathbf{s}_{t \leq T}^{*}, \mathbf{a}_{t \leq T}^{*}, \mathcal{E}),
+ $$
+
+ regard them as initial states paired with the same goal state to produce extra trajectories
+
+ $$
+ \mathcal{T}^{\prime} = \bigcup_{\mathbf{s}^{\prime}\in \mathbf{S}^{\prime}}\operatorname{TG}(\mathbf{s}^{\prime},\mathbf{s}_{T}^{*},\mathcal{E},\mathbf{E}),
+ $$
+
+ and finally yield the augmented expert demonstrations $\mathcal{T}^{*}\cup \mathcal{T}^{\prime}$ after looping through $\mathcal{X}$.
+
+ TA is advantageous because it (i) only exploits existing expert demonstrations to preserve the i.i.d. assumption; (ii) is universally applicable within our proposed paradigm without a dependency on the downstream task; and (iii) needs no domain knowledge, labeling work, or further evaluation.
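+
+ A hedged sketch of the divide-and-conquer recursion in Algorithm 2 follows, built on the hypothetical helpers above. It branches on executing versus skipping each expert action; because the scripts generated above are ordered right-to-left, skipped actions do not invalidate the positions of the remaining ones, which lets this simplification drop the Update step of the full algorithm.
+
+ ```python
+ def trajectory_augmentation(s, actions, expert_states, env):
+     """Collect shifted states by executing or skipping each expert action."""
+     if len(actions) > 1:
+         a, rest = actions[0], actions[1:]
+         executed = trajectory_augmentation(env.step(s, a), rest, expert_states, env)
+         skipped = trajectory_augmentation(s, rest, expert_states, env)
+         return executed + skipped
+     if s not in expert_states:
+         return [s]  # a state the expert demonstrations never visit
+     return []
+
+ # Shifted states for "112" -> "1+1=2": skipping the '=' insertion, for
+ # example, yields the unseen state "1 + 1 2" (cf. Figure 2 (b)).
+ demo = trajectory_generation("112", "1+1=2", Environment())
+ expert_states = [s for s, _ in demo]
+ actions = [a for _, a in demo]
+ for shifted in trajectory_augmentation(expert_states[0], actions, expert_states, Environment()):
+     print(" ".join(shifted))
+ ```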
+
+ # 3 Experiments
+
+ We adapt recurrent inference to our paradigm and evaluate it across the AE benchmarks.
+
+ # 3.1 Setup
+
+ Data. Arithmetic Operators Restoration (AOR) is a short-to-long editing task that completes an array into a true equation. It is also a one-to-many task, as an array can be completed into multiple different true equations. Arithmetic Equation Simplification (AES) aims to calculate the parenthesized parts while keeping the equation holding, resulting in a long-to-short and many-to-one editing task. Arithmetic Equation Correction (AEC) targets correcting potential mistakes in an equation. Diverse errors perturb the equation, making AEC a mixed many-to-many editing task. To align with previous work, we follow the same data settings $N$, $L$, and $D$ for data generation, as well as the same action design for trajectory generation. The edit metric $\mathbf{E}$ for AOR and AEC is Levenshtein, while $\mathbf{E}$ for AES is a self-designed one (SELF) that instructs to replace the tokens between two parentheses with the target token. Examples are presented in Table 2. We refer readers to Shi et al. (2020) for an exhaustive explanation. As shown in Table 1, the data splits are 7K/1.5K/1.5K for training, validation, and testing, respectively.
+
+ # Algorithm 2 Trajectory Augmentation (TA)
+
+ Input: States $\mathbf{S}$, state $\mathbf{s}_t$, expert states $\mathbf{S}^*$, actions $\mathbf{A}$, and environment $\mathcal{E}$.
+ Output: Augmented states $\mathbf{S}$.
+ 1: if $|\mathbf{A}| > 1$ then
+ 2: $\mathbf{a}_t\gets \mathbf{A}.pop(0)$
+ 3: $\mathbf{s}_{t + 1}\leftarrow \mathcal{E}(\mathbf{s}_t,\mathbf{a}_t)$
+ 4: $\mathbf{S}\gets \mathbf{S}\cup \mathrm{TA}(\mathbf{S},\mathbf{s}_{t + 1},\mathbf{S}^*,\mathbf{A},\mathcal{E})\quad \triangleright$ Execute action
+ 5: $\mathbf{A}\gets \mathrm{Update}(\mathbf{A},\mathbf{s}_t,\mathbf{s}_{t + 1})$
+ 6: $\mathbf{S}\gets \mathbf{S}\cup \mathrm{TA}(\mathbf{S},\mathbf{s}_t,\mathbf{S}^*,\mathbf{A},\mathcal{E})\quad \triangleright$ Skip action
+ 7: else if $\mathbf{s}_t\notin \mathbf{S}^*$ then
+ 8: $\mathbf{S}\gets \mathbf{S}\cup [\mathbf{s}_t]\quad \triangleright$ Merge shifted state
+ 9: end if
+ 10: return $\mathbf{S}$
+
+ Evaluation. Sequence accuracy and equation accuracy are the two primary metrics, with token accuracy as a more fine-grained reference. In contrast to sequence accuracy, which measures whether an equation exactly matches the given label, equation accuracy emphasizes whether an equation holds, which is the actual goal of the AE tasks. Note that there is no hard constraint guaranteeing that all the predicted actions are valid. However, when the agent makes an inference mistake, the environment can refuse to execute invalid actions and keep the current state. This is also one of the beauties of reformulating text editing as a controllable MDP.
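+
+ For reference, here is a small sketch of the equation-accuracy check, our own illustration under the token format of Table 2: rather than comparing against the label, it tests whether the predicted equation actually holds.
+
+ ```python
+ def equation_holds(tokens):
+     """True iff the token sequence forms an equation whose sides are equal."""
+     expr = "".join(tokens)
+     if expr.count("=") != 1:
+         return False
+     lhs, rhs = expr.split("=")
+     try:
+         # eval is acceptable here: inputs are machine-generated arithmetic
+         return eval(lhs) == eval(rhs)
+     except (SyntaxError, ZeroDivisionError):
+         return False
+
+ print(equation_holds(list("1+1=2")))       # True
+ print(equation_holds(list("-3-6/2+9=3")))  # True, the AOR target in Table 2
+ print(equation_holds(list("112")))         # False, no "=" present
+ ```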
+
+ Baselines. Recurrent inference (Recurrence) exhibits advantages over conventional end-to-end (End2end) and sequence tagging (Tagging) approaches (Shi et al., 2020). However, for AES and AEC, it² allows feeding training samples to a data generator and exposing more variants to the models. These variants, as source samples paired with corresponding target samples, are used as the augmented dataset. This is impractical due to the strong dependency on domain knowledge. Given an input "1 + (2 + 2) = 5" and output "1 + 4 = 5" in AES, a variant "1 + (1 + 3) = 5" can be generated based on the knowledge 1 + 3 = 4. Nevertheless, if this knowledge is not provided in the other training samples, the model should only know 2 + 2 = 4.
+
+ <table><tr><td colspan="3">AOR (N=10, L=5, D=10K)</td><td colspan="3">AES (N=100, L=5, D=10K)</td><td colspan="3">AEC (N=10, L=5, D=10K)</td></tr><tr><td>Train/Valid/Test</td><td>Train TA</td><td>Traj. Len.</td><td>Train/Valid/Test</td><td>Train TA</td><td>Traj. Len.</td><td>Train/Valid/Test</td><td>Train TA</td><td>Traj. Len.</td></tr><tr><td>7,000/1,500/1,500</td><td>145,176</td><td>6</td><td>7,000/1,500/1,500</td><td>65,948</td><td>6</td><td>7,000/1,500/1,500</td><td>19,764</td><td>4</td></tr></table>
+
+ Table 1: Data statistics of AE benchmarks.
+
+ <table><tr><td>Term</td><td>AOR (N=10, L=5, D=10K)</td><td>AES (N=100, L=5, D=10K)</td><td>AEC (N=10, L=5, D=10K)</td></tr><tr><td>Source x</td><td>36293</td><td>65+(25-20)-(64+32)+(83-24)=-25+58</td><td>-2*+410+8/8=8</td></tr><tr><td>Target y</td><td>-3-6/2+9=3</td><td>65+5-96+59=33</td><td>-2+10*8/8=8</td></tr><tr><td>State st*</td><td>-3-6/293</td><td>65+5-(64+32)+(83-24)=-25+58</td><td>-2+410+8/8=8</td></tr><tr><td>Action at*</td><td>[POS_6,+]</td><td>[POS_4, POS_8, 96]</td><td>[DELETE, POS_3, POS_3]</td></tr><tr><td>Next State st+1</td><td>-3-6/2+93</td><td>65+5-96+(83-24)=-25+58</td><td>-2+10+8/8=8</td></tr><tr><td>Shifted State st'</td><td>-3-6/29=3</td><td>65+5-(64+32)+59=(-25+58)</td><td>-2+410*8/8=8</td></tr></table>
+
+ Table 2: Examples from AE with specific $N$ for integer size, $L$ for the number of integers, and $D$ for data size.
+
+ Models. As discussed, since the previously reported experiments are not practical, we re-run the Recurrence source code to obtain a more reasonable baseline (Recurrence*) that only has access to the fixed training set. Meanwhile, in our development environment, we reproduce Recurrence* within the proposed paradigm according to the compatibility between the two. The encoder-decoder architecture inherits the same recurrent network as the backbone, with long short-term memory units (Hochreiter and Schmidhuber, 1997) and an attention mechanism (Luong et al., 2015). The dimension of the bidirectional encoder is 256 in each direction, and 512 for both the embedding layer and the decoder. We apply a dropout of 0.5 to the output of each layer (Srivastava et al., 2014). This provides us with a standard autoregressive baseline AR, as well as a more powerful AR* after increasing the number of encoder layers from 1 to 4. On the one hand, to construct a non-autoregressive baseline NAR, we replace the decoder of AR* with a linear layer that directly maps the context to a probability distribution over the action vocabulary. In addition, we add two more encoder layers to maintain a similar number of trainable parameters. On the other hand, replacing the decoder of AR* with D2 leads to our model NAR*. We strictly unify the encoder for a fair comparison regarding the decoder. Model configurations are shared across the AE tasks for a comprehensive assessment, avoiding particular tuning for any of them.
+
+ Training. We train on a single NVIDIA Titan RTX with a batch size of 256. We use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of $10^{-3}$ and an $\ell_2$ gradient clipping of 5.0 (Pascanu et al., 2013). A cosine annealing scheduler helps manage the training process and restarts the learning every 32 epochs to get it out of a potential local optimum. We adopt early stopping, waiting for a lower validation loss until there are no updates for 512 epochs (Prechelt, 1998). Teacher forcing with a rate of 0.5 speeds up the training process (Williams and Zipser, 1989). In AES and AEC, adaptive loss weighting guides the model to adaptively focus on particular action tokens in accordance with the training results. Reported metrics with attached standard deviations are the results of five runs using the random seeds [0, 1, 2, 3, 4].
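+
+ A hedged sketch of this training configuration, our own illustration built on the hypothetical D2 module sketched in Section 2.3 rather than the released code, wires up the optimizer, warm-restart scheduler, clipping, and the summed dual-decoder loss:
+
+ ```python
+ import torch
+
+ model = D2(state_vocab=32, action_vocab=32, m=24, n=3)  # hypothetical sizes
+ optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
+ # cosine annealing with a warm restart every 32 epochs
+ scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=32)
+ criterion = torch.nn.CrossEntropyLoss()
+
+ def train_step(states, actions):
+     """One BC step: sum both decoders' NLL, clip gradients, and update."""
+     logits0, logits1 = model(states)
+     loss = (criterion(logits0.flatten(0, 1), actions.flatten())
+             + criterion(logits1.flatten(0, 1), actions.flatten()))
+     optimizer.zero_grad()
+     loss.backward()
+     torch.nn.utils.clip_grad_norm_(model.parameters(), 5.0)  # l2 clipping
+     optimizer.step()
+     return loss.item()
+
+ # scheduler.step() is called once per epoch to follow the restart schedule.
+ ```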
+
+ # 3.2 Results
+
+ Baselines. As summarized in Table 3, prohibiting Recurrence from accessing domain knowledge yields a fair baseline and significantly weakens Recurrence* on AES and AEC. We would also like to point out that, even in the same impractical setting, our NAR* can achieve around 99.33% and 67.49% equation accuracy for AES and AEC, which is still much higher than the previously reported results (87.73% for AES and 58.27% for AEC). In AOR, a one-to-many editing task, no augmented source sequence is retrieved from the target side. We confirm through multiple tests that the slight accuracy drop of Recurrence* in AOR results from bias. Although AR is our reproduction of Recurrence*, the overall advancement of AR over Recurrence* proves the soundness of our framework and implementation. The three added encoder layers in AR* improve model capacity and thus contribute to higher accuracy. A simple linear head already enables NAR to parallelize the decoding; nevertheless, it dramatically reduces performance, especially in AES.
+
+ <table><tr><td rowspan="2">Method</td><td colspan="3">AOR (N=10,L=5,D=10K)</td><td colspan="2">AES (N=100,L=5,D=10K)</td><td colspan="3">AEC (N=10,L=5,D=10K)</td></tr><tr><td>Tok. Acc. %</td><td>Seq. Acc. %</td><td>Eq. Acc. %</td><td>Tok. Acc. %</td><td>Eq. Acc. %</td><td>Tok. Acc. %</td><td>Seq. Acc. %</td><td>Eq. Acc. %</td></tr><tr><td>End2end</td><td>-</td><td>-</td><td>29.33</td><td>84.60</td><td>25.20</td><td>88.08</td><td>57.27</td><td>57.73</td></tr><tr><td>Tagging</td><td>-</td><td>-</td><td>51.40</td><td>87.00</td><td>36.67</td><td>84.46</td><td>46.93</td><td>47.33</td></tr><tr><td>Recurrence</td><td>-</td><td>-</td><td>58.53</td><td>98.63</td><td>87.73</td><td>83.64</td><td>57.47</td><td>58.27</td></tr><tr><td>Recurrence*</td><td>60.30 ± 1.30</td><td>27.31 ± 1.33</td><td>56.73 ± 1.33</td><td>79.82 ± 0.37</td><td>22.28 ± 0.52</td><td>82.32 ± 0.56</td><td>41.72 ± 0.74</td><td>42.13 ± 0.75</td></tr><tr><td>AR</td><td>61.85 ± 0.51</td><td>28.83 ± 1.14</td><td>59.09 ± 0.95</td><td>88.12 ± 2.37</td><td>37.05 ± 6.57</td><td>82.61 ± 0.53</td><td>45.81 ± 0.36</td><td>46.31 ± 0.31</td></tr><tr><td>AR*</td><td>62.51 ± 0.62</td><td>30.85 ± 0.41</td><td>61.35 ± 0.33</td><td>99.27 ± 0.32</td><td>93.57 ± 2.91</td><td>82.29 ± 0.39</td><td>45.99 ± 0.49</td><td>46.35 ± 0.52</td></tr><tr><td>NAR</td><td>59.72 ± 0.70</td><td>24.16 ± 1.16</td><td>51.64 ± 1.97</td><td>83.87 ± 1.60</td><td>29.49 ± 2.51</td><td>80.28 ± 0.76</td><td>44.91 ± 1.71</td><td>45.40 ± 1.78</td></tr><tr><td>NAR*</td><td>62.81 ± 0.89</td><td>30.13 ± 1.31</td><td>61.45 ± 1.61</td><td>99.51 ± 0.13</td><td>95.67 ± 0.93</td><td>81.82 ± 0.68</td><td>45.97 ± 1.07</td><td>46.43 ± 1.10</td></tr><tr><td>AR +TA</td><td>62.35 ± 0.61</td><td>32.28 ± 0.67</td><td>63.56 ± 1.06</td><td>88.05 ± 1.20</td><td>38.39 ± 3.45</td><td>83.94 ± 0.42*</td><td>49.36 ± 1.23</td><td>49.83 ± 1.21</td></tr><tr><td>AR* +TA</td><td>62.58 ± 0.63</td><td>33.01 ± 1.31</td><td>65.73 ± 1.38</td><td>99.44 ± 0.27</td><td>95.24 ± 2.38</td><td>83.39 ± 0.74</td><td>48.95 ± 0.65</td><td>49.47 ± 0.73</td></tr><tr><td>NAR +TA</td><td>61.30 ± 0.86</td><td>32.04 ± 1.99</td><td>63.75 ± 2.08</td><td>90.38 ± 2.21</td><td>47.91 ± 8.18</td><td>81.36 ± 0.40</td><td>48.01 ± 1.07</td><td>48.47 ± 1.15</td></tr><tr><td>NAR* +TA</td><td>63.48 ± 0.38*</td><td>34.23 ± 0.92*</td><td>67.13 ± 0.99*</td><td>99.58 ± 0.15*</td><td>96.44 ± 1.29*</td><td>82.70 ± 0.42</td><td>49.64 ± 0.59*</td><td>50.15 ± 0.55*</td></tr></table>
+
+ Table 3: Evaluation results on AOR, AES, and AEC with specific $N$, $L$, and $D$. The token and sequence accuracies for AOR were not reported in prior work, so we leave these positions blank. With or without TA, our proposed NAR* achieves the best performance in terms of equation accuracy across the board.
+
+ ![](images/d07665ecad7178a1b17e44ec66814fb4a7452ed87242f4e8a2640a07f49d77b3.jpg)
+ Figure 4: The learning curves of AR* (left column) and NAR* (right column) across AE tasks (rows). The red and blue lines represent the training on actions w.r.t. sequence accuracy. The orange line stands for the validation on returned states w.r.t. equation accuracy. The green dashed line marks the earlier stopping epoch of NAR* compared with AR* during training.
+
+ Non-autoregressive. What stands out is the dominance of NAR*, achieving 61.45%, 95.67%, and 46.43% equation accuracy for AOR, AES, and AEC, respectively. Particularly in AES, its performance gain over AR* of more than 2.1% equation accuracy underlines the success of NAR* in capturing the interdependencies among target tokens. Its equation accuracy boost of around 66.18% over NAR in AES highlights the contribution of D2 again.
230
+
231
+ Trajectory augmentation. As expected, the incorporation of TA consistently promotes the accuracy
232
+
233
+ ![](images/23f6b1b29afc97373518c9062f80213821abb0305dc4d0c1b62e10825ac374ca.jpg)
234
+ Figure 5: Inference time of AR* and NAR* to predict action (left) and return state (right) across AE tasks.
235
+
236
+ of all models in our learning regime throughout AE tasks. Taking NAR as an example, training with TA brings it a substantial equation accuracy gain, remarkably up to $18.42\%$ in AES. Even more, it pushes the gap between $\mathrm{NAR}^*$ and the other baselines. The most notable advance comes from AOR, where $\mathrm{NAR}^*$ outperforms $\mathrm{AR}^*$ by a substantial margin of $5.68\%$ equation accuracy. It appears that TA is more effective for non-autoregressive models than autoregressive ones.
237
+
238
+ # 4 Analysis
239
+
240
+ We conduct extensive sensitivity analyses to better illustrate and understand our methods.
241
+
242
+ # 4.1 Efficiency
243
+
244
+ From the learning curves (Figure 4) and inference times (Figure 5) of AR* and NAR* on AE, we find that, in addition to reaching higher accuracy, NAR* needs fewer training epochs to converge and trigger early stopping. The periodic fluctuation of the learning curve is a consequence of using a scheduler. When it comes to inference, NAR* saves much time at every step of action determination
245
+
246
+ <table><tr><td>Design</td><td>Action Sequence</td><td>Method</td><td>Tok. Acc. %</td><td>Eq. Acc. %</td></tr><tr><td rowspan="4">#1</td><td rowspan="4">[Pos.L, Pos.R, Tok.]</td><td>AR*</td><td>99.27 ± 0.32</td><td>93.57 ± 2.91</td></tr><tr><td>NAR*</td><td>99.51 ± 0.13</td><td>95.67 ± 0.93</td></tr><tr><td>AR* +TA</td><td>99.44 ± 0.27</td><td>95.24 ± 2.38</td></tr><tr><td>NAR* +TA</td><td>99.58 ± 0.15*</td><td>96.44 ± 1.29*</td></tr><tr><td rowspan="4">#2</td><td rowspan="4">[Pos.L, Tok., Pos.R]</td><td>AR*</td><td>99.08 ± 0.93</td><td>92.35 ± 7.21</td></tr><tr><td>NAR*</td><td>99.50 ± 0.27</td><td>95.55 ± 2.28</td></tr><tr><td>AR* +TA</td><td>99.52 ± 0.29</td><td>95.68 ± 2.49</td></tr><tr><td>NAR* +TA</td><td>99.54 ± 0.20*</td><td>95.97 ± 1.64*</td></tr><tr><td rowspan="4">#3</td><td rowspan="4">[Tok., Pos.L, Pos.R]</td><td>AR*</td><td>98.06 ± 0.79</td><td>83.79 ± 6.25</td></tr><tr><td>NAR*</td><td>99.53 ± 0.14</td><td>95.99 ± 0.81</td></tr><tr><td>AR* +TA</td><td>98.43 ± 0.49</td><td>87.29 ± 3.70</td></tr><tr><td>NAR* +TA</td><td>99.61 ± 0.06*</td><td>96.55 ± 0.46*</td></tr></table>
247
+
248
+ and ends up returning the edited state faster. As AR* and NAR* share exactly the same encoder structure, we attribute the improved efficiency to D2.
249
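+
+ To make the efficiency gap concrete, the sketch below (our illustration with hypothetical decoder interfaces, not the paper's code) contrasts the two regimes: an autoregressive head runs once per action token, while a non-autoregressive head emits the whole action in one parallel pass.
+
+ ```python
+ # Schematic only: `decoder` stands for any callable decoder head.
+ def decode_ar(decoder, context, action_len):
+     """AR*-style decoding: one sequential pass per action token."""
+     tokens = ["<bos>"]
+     for _ in range(action_len):
+         tokens.append(decoder(context, prefix=tokens))  # depends on the prefix
+     return tokens[1:]
+
+ def decode_nar(decoder, context, action_len):
+     """NAR*-style decoding: all action tokens predicted in a single pass."""
+     return decoder(context, positions=list(range(action_len)))
+ ```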
+
250
+ # 4.2 Action design
251
+
252
+ Owing to the freedom of sequence generation, the same operation can be represented as different action sequences. In AES, the operation that substitutes the tokens between the left and right parentheses with the required token fits any of the three action designs in Table 4, where $\text{Pos}_{\mathsf{L}}$ , $\text{Pos}_{\mathsf{R}}$ , and $\text{Tok}$ denote the positions of the two parentheses and the target token. Design #1 is the default; a simple swap of action tokens yields designs #2 and #3.
253
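+
+ As a concrete illustration (our sketch, not the paper's code), the same AES substitution operation can be serialized into any of the three designs simply by reordering the three action tokens:
+
+ ```python
+ # One AES operation: replace the span between the parentheses at positions
+ # pos_l and pos_r with token tok. The argument names are hypothetical.
+ def to_action_sequence(pos_l, pos_r, tok, design=1):
+     orders = {
+         1: [pos_l, pos_r, tok],  # design #1 (default): [Pos.L, Pos.R, Tok.]
+         2: [pos_l, tok, pos_r],  # design #2: [Pos.L, Tok., Pos.R]
+         3: [tok, pos_l, pos_r],  # design #3: [Tok., Pos.L, Pos.R]
+     }
+     return orders[design]
+
+ print(to_action_sequence(3, 7, "9", design=3))  # ['9', 3, 7]
+ ```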
+
254
+ AR* suffers severely from such perturbation, with its equation accuracy declining by $9.78\%$ in #3. In contrast, NAR* holds its results and even improves slightly to $95.99\%$ in #3. Even with TA, AR* still drops from $95.24\%$ in #1 to $87.29\%$ in #3, while NAR* stays nearly consistent across the three designs. It is reasonable that AR* is sensitive to the order of action tokens, because the position information helps the inference of the target token. This also suggests that NAR* captures the position information while depending little on token order. Such robustness allows greater freedom in action design.
255
+
256
+ # 4.3 Trajectory optimization
257
+
258
+ A better edit metric $\mathbf{E}$ often means a smaller action vocabulary $|\mathcal{V}_A|$ , a shorter trajectory length $T_{\mathrm{max}}$ , and therefore easier IL. Taking AES as an instance, a SELF action, which replaces the tokens enclosed in parentheses with the target one, is effectively a compression of several Levenshtein actions
259
+
260
+ Table 4: Evaluation of AR* and NAR* in AES across three action designs that differ from each other only in token order. They encode the same operation, with Pos.L/Pos.R/Tok. denoting the left parenthesis/right parenthesis/target token.
261
+
262
+ <table><tr><td>Edit Metric E</td><td>Tmax</td><td>Method</td><td>Tok. Acc. %</td><td>Eq. Acc. %</td></tr><tr><td rowspan="2">SELF</td><td rowspan="2">6</td><td>AR*</td><td>99.27 ± 0.32</td><td>93.57 ± 2.91</td></tr><tr><td>NAR*</td><td>99.51 ± 0.13</td><td>95.67 ± 0.93</td></tr><tr><td rowspan="2">Levenshtein</td><td rowspan="2">31</td><td>AR*</td><td>69.53 ± 2.29</td><td>18.37 ± 0.70</td></tr><tr><td>NAR*</td><td>67.58 ± 0.87</td><td>17.93 ± 0.07</td></tr></table>
263
+
264
+ Table 5: Evaluation of AR* and NAR* trained with edit metrics SELF and Levenshtein in AES. $T_{\text{max}}$ refers to the maximum length of expert trajectories.
265
+
266
+ ![](images/8b6aa010677dd1897631c402020678e953bb96c3998292cde6137cf49b440a07.jpg)
267
+ Figure 6: Evaluation of $\mathrm{NAR^{*}}$ trained with edit metrics LCS and Levenshtein in AEC. Results are grouped into two trajectory-length regimes, depending on whether the policy involves REPLACE.
268
+
269
+ including multiple deletions and one substitution. Although either can serve as an expert policy, SELF yields a much shorter $T_{\mathrm{max}}$ , as indicated in Table 5. Switching from SELF to Levenshtein brings a longer $T_{\mathrm{max}}$ and, consequently, a significant equation accuracy drop of $75.2\%$ and $77.74\%$ for AR* and NAR*, respectively. Doing one edit in 31 steps rather than 6 undoubtedly raises the difficulty of the imitation game.
270
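+
+ A toy example (ours, not from the paper) makes the compression concrete: collapsing a parenthesized sub-expression to its value is a single SELF action but a whole chain of Levenshtein actions.
+
+ ```python
+ # Rewriting "(1+2+3)" into "6"; the action tuples below are illustrative.
+ state = list("(1+2+3)")
+
+ # SELF: one action replaces the whole parenthesized span with the target.
+ self_trajectory = [("SELF", 0, len(state) - 1, "6")]
+
+ # Levenshtein: substitute one token, then delete the remaining six.
+ lev_trajectory = [("SUB", 0, "6")] + [("DEL", 1)] * (len(state) - 1)
+
+ print(len(self_trajectory), len(lev_trajectory))  # 1 step vs. 7 steps
+ ```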
+
271
+ As a further exploration, we introduce Longest Common Subsequence (LCS) as an alternative $\mathbf{E}$ for AEC. Token replacement is allowed in Levenshtein but not in LCS, where a replacement has to be decomposed into one deletion and one insertion. Hence, LCS has a smaller $|\mathcal{V}_A|$ , while Levenshtein yields a shorter $T_{\mathrm{max}}$ . We train $\mathrm{NAR}^*$ with both and report the results in Figure 6. For a clear comparison, the test set is divided into two groups: in w/o REPLACE, both metrics yield the same $T_{\mathrm{max}}$ , whereas in w/ REPLACE, Levenshtein takes a shorter $T_{\mathrm{max}}$ .
272
+
273
+ <table><tr><td rowspan="2">Decoder</td><td colspan="3">AOR (N = 10, L = 5, D = 10K)</td><td colspan="2">AES (N = 100, L = 5, D = 10K)</td><td colspan="3">AEC (N = 10, L = 5, D = 10K)</td></tr><tr><td>Tok. Acc. %</td><td>Seq. Acc. %</td><td>Eq. Acc. %</td><td>Tok. Acc. %</td><td>Eq. Acc. %</td><td>Tok. Acc. %</td><td>Seq. Acc. %</td><td>Eq. Acc. %</td></tr><tr><td>Linear</td><td>61.84 ± 0.94</td><td>28.55 ± 1.57</td><td>57.72 ± 1.55</td><td>99.41 ± 0.26</td><td>95.01 ± 2.01</td><td>81.35 ± 0.92</td><td>42.47 ± 1.85</td><td>42.81 ± 1.87</td></tr><tr><td>Decoder0</td><td>61.78 ± 0.83</td><td>28.20 ± 1.57</td><td>58.36 ± 1.58</td><td>99.24 ± 0.23</td><td>93.49 ± 2.03</td><td>80.84 ± 0.66</td><td>43.97 ± 1.82</td><td>44.32 ± 1.82</td></tr><tr><td>Shared D2</td><td>61.74 ± 0.71</td><td>28.68 ± 0.94</td><td>58.05 ± 1.01</td><td>99.28 ± 0.24</td><td>93.85 ± 2.14</td><td>81.38 ± 1.04</td><td>43.64 ± 2.03</td><td>44.09 ± 2.02</td></tr><tr><td>D2 (NAR*)</td><td>62.81 ± 0.89</td><td>30.13 ± 1.31</td><td>61.45 ± 1.61</td><td>99.51 ± 0.13</td><td>95.67 ± 0.93</td><td>81.82 ± 0.68</td><td>45.97 ± 1.07</td><td>46.43 ± 1.10</td></tr><tr><td>Linear +TA</td><td>61.41 ± 0.28</td><td>31.75 ± 0.93</td><td>63.15 ± 0.96</td><td>99.42 ± 0.17</td><td>95.08 ± 1.47</td><td>81.54 ± 0.66</td><td>46.79 ± 2.26</td><td>47.33 ± 2.30</td></tr><tr><td>Decoder0 +TA</td><td>62.50 ± 1.24</td><td>32.48 ± 1.87</td><td>64.47 ± 1.88</td><td>99.47 ± 0.13</td><td>95.33 ± 1.13</td><td>82.02 ± 0.40</td><td>46.80 ± 2.04</td><td>47.32 ± 1.91</td></tr><tr><td>Shared D2 +TA</td><td>61.64 ± 0.87</td><td>31.21 ± 0.34</td><td>62.77 ± 0.85</td><td>99.53 ± 0.12</td><td>95.91 ± 1.25</td><td>81.80 ± 0.47</td><td>47.23 ± 1.07</td><td>47.61 ± 1.14</td></tr><tr><td>D2 (NAR*) +TA</td><td>63.48 ± 0.38*</td><td>34.23 ± 0.92*</td><td>67.13 ± 0.99*</td><td>99.58 ± 0.15*</td><td>96.44 ± 1.29*</td><td>82.70 ± 0.42*</td><td>49.64 ± 0.59*</td><td>50.15 ± 0.55*</td></tr></table>
274
+
275
+ Table 6: Evaluation of agents equipped with the same encoder but different decoders on the AE benchmarks.
276
+
277
+ In the former, LCS exceeds Levenshtein with or without TA. In the latter, the opposite is true: Levenshtein outperforms LCS under the same condition. This supports our assumption at the beginning that an appropriate $\mathbf{E}$ , leading to a small $|\mathcal{V}_A|$ and a short $T_{\mathrm{max}}$ , is conducive to IL, suggesting trajectory optimization as an interesting direction for future work.
278
+
279
+ # 4.4 Dual decoders
280
+
281
+ As an ablation study, we freeze the encoder of NAR* and vary its decoder to reveal the contribution of each component in D2. As listed in Table 6, replacing the decoder with a linear layer yields Linear, and removing the second decoder from NAR* yields Decoder $_0$ . Moreover, sharing the parameters between the two decoders of NAR* gives Shared D2. All of them can parallelize the decoding process. We then borrow the setup of Section 3 and test them on AE.
282
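+
+ The PyTorch sketch below illustrates the four variants; the single-layer depth, head count, and exact wiring are our assumptions rather than the released implementation.
+
+ ```python
+ import torch.nn as nn
+
+ class LinearHead(nn.Module):              # "Linear": decoder replaced by a linear layer
+     def __init__(self, d, vocab):
+         super().__init__()
+         self.out = nn.Linear(d, vocab)
+     def forward(self, ctx):
+         return self.out(ctx)
+
+ class Decoder0(nn.Module):                # "Decoder0": D2 without the second decoder
+     def __init__(self, d, vocab, nhead=4):
+         super().__init__()
+         layer = nn.TransformerDecoderLayer(d, nhead, batch_first=True)
+         self.dec0 = nn.TransformerDecoder(layer, num_layers=1)
+         self.out = nn.Linear(d, vocab)
+     def forward(self, ctx, query):
+         return self.out(self.dec0(query, ctx))
+
+ class DualDecoder(nn.Module):             # "D2" (shared=False) or "Shared D2" (shared=True)
+     def __init__(self, d, vocab, nhead=4, shared=False):
+         super().__init__()
+         self.dec0 = nn.TransformerDecoder(
+             nn.TransformerDecoderLayer(d, nhead, batch_first=True), num_layers=1)
+         self.dec1 = self.dec0 if shared else nn.TransformerDecoder(
+             nn.TransformerDecoderLayer(d, nhead, batch_first=True), num_layers=1)
+         self.emb = nn.Embedding(vocab, d)  # embeds decoder-0 predictions
+         self.out = nn.Linear(d, vocab)
+     def forward(self, ctx, query):
+         logits0 = self.out(self.dec0(query, ctx))
+         pred0 = self.emb(logits0.argmax(-1))  # note the input mismatch discussed below
+         return self.out(self.dec1(pred0, ctx))
+ ```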
+
283
+ Among the four decoders, $\mathrm{NAR}^*$ dominates all three imitation games. The performance drop caused by parameter sharing is more significant than expected. Besides the reduced model capacity from the saved parameters, another potential cause is the input mismatch between the two decoders: the input of $\mathrm{decoder}_0$ is the projected context from the linear layer after the encoder, whereas that of $\mathrm{decoder}_1$ is the embedded prediction from the embedding layer. When incorporating TA, the same trend persists, and the gap between $\mathrm{NAR}^*$ and the others becomes even more apparent. Since they share the same encoder, this gap clarifies the benefits of D2.
284
+
285
+ # 5 Conclusion
286
+
287
+ We reformulate text editing as an imitation game defined by an MDP to allow action design at the sequence level. We propose D2, a non-autoregressive decoder for state-action learning, coupled with TG for data translation and TA for distribution shift alleviation. Results on AE benchmarks demonstrate the advantages of our
288
+
289
+ methods in performance, efficiency, and robustness. Sequence-level actions are arguably more controllable, more interpretable, and closer to human behavior. Turning tasks into games that agents are more comfortable with opens a promising direction for applying reinforcement learning to text editing. The involvement of a reward function, the optimization of trajectories, the design of sequence-level actions, and their applications in more practical tasks, to name a few, are interesting avenues for future work. By suggesting text editing as a new testbed, we hope our findings will shed light on future studies applying reinforcement learning to natural language processing.
290
+
291
+ # Limitations
292
+
293
+ Each time the state is updated, the agent gets immediate feedback on the previous action and thus a dynamic context representation during editing. This also means that the encoder (e.g., a heavy pretrained language model) is called multiple times to refresh the context matrix. Consequently, as the trajectory grows, the whole task becomes slow even though we have parallelized the decoding process. Meanwhile, applying our methods to more realistic editing tasks (e.g., grammatical error correction) remains an open question and needs to be explored in the near future.
294
+
295
+ # Acknowledgements
296
+
297
+ We gratefully appreciate Che Wang (Watcher), Yichen Gong, and Hui Xue for sharing their pearls of wisdom. We would also like to express our special thanks to Yingying Huo for her support, as well as to the anonymous EMNLP reviewers for their constructive feedback. This work was supported by Shining Lab and Alibaba Group.
298
+
299
+ # References
300
+
301
+ Sweta Agrawal and Marine Carpuat. 2022. An imitation learning curriculum for text editing with non-autoregressive models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7550-7563, Dublin, Ireland. Association for Computational Linguistics.
302
+ Sweta Agrawal, Weijia Xu, and Marine Carpuat. 2021. A non-autoregressive edit-based approach to controllable text simplification. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3757-3769, Online. Association for Computational Linguistics.
303
+ Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, and Vihari Piratla. 2019. Parallel iterative edit models for local sequence transduction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4260-4270, Hong Kong, China. Association for Computational Linguistics.
304
+ Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
305
+ S.R.K. Branavan, Harr Chen, Luke Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 82-90, Suntec, Singapore. Association for Computational Linguistics.
306
+ Yangyi Chen, Jin Su, and Wei Wei. 2021. Multi-granularity textual adversarial attack with behavior cloning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4511-4526, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
307
+ Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. EditNTS: A neural programmer-interpreter model for sentence simplification through explicit editing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3393-3402, Florence, Italy. Association for Computational Linguistics.
308
+ Ahmed Elgohary, Christopher Meek, Matthew Richardson, Adam Fourney, Gonzalo Ramos, and Ahmed Hassan Awadallah. 2021. NL-EDIT: Correcting semantic parse errors through natural language interaction. In Proceedings of the 2021 Conference of the North American Chapter of the
309
+
310
+ Association for Computational Linguistics: Human Language Technologies, pages 5599-5610, Online. Association for Computational Linguistics.
311
+ Tao Ge, Furu Wei, and Ming Zhou. 2018. Fluency boost learning and inference for neural grammatical error correction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1055-1065, Melbourne, Australia. Association for Computational Linguistics.
312
+ Jiatao Gu, James Bradbury, Caiming Xiong, Victor O.K. Li, and Richard Socher. 2018. Nonautoregressive neural machine translation. In International Conference on Learning Representations.
313
+ Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 11181-11191. Curran Associates, Inc.
314
+ Rahul Gupta, Aditya Kanade, and Shirish Shevade. 2019. Deep reinforcement learning for syntactic error repair in student programs. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):930-937.
315
+ Kelvin Guu, Panupong Pasupat, Evan Liu, and Percy Liang. 2017. From language to programs: Bridging reinforcement learning and maximum marginal likelihood. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1051-1062, Vancouver, Canada. Association for Computational Linguistics.
316
+ Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
317
+ Marcin Junczys-Dowmunt, Roman Grundkiewicz, Shubha Guha, and Kenneth Heafield. 2018. Approaching neural grammatical error correction as a low-resource machine translation task. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 595-606, New Orleans, Louisiana. Association for Computational Linguistics.
318
+ Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
319
+ Ryosuke Kohita, Akifumi Wachi, Yang Zhao, and Ryuki Tachibana. 2020. Q-learning with language model for edit-based unsupervised summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
320
+
321
+ pages 470-484, Online. Association for Computational Linguistics.
322
+ Jiquan Li, Junliang Guo, Yongxin Zhu, Xin Sheng, Deqiang Jiang, Bo Ren, and Linli Xu. 2022. Sequence-to-action: Grammatical error correction with action guided sequence generation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10):10974-10982.
323
+ Zhongkun Liu, Pengjie Ren, Zhumin Chen, Zhaochun Ren, Maarten de Rijke, and Ming Zhou. 2021. Learning to ask conversational questions by optimizing Levenshtein distance. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5638-5650, Online. Association for Computational Linguistics.
324
+ Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics.
325
+ Jonathan Mallinson, Aliaksei Severyn, Eric Malmi, and Guillermo Garrido. 2020. FELIX: Flexible text editing through tagging and insertion. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1244-1255, Online. Association for Computational Linguistics.
326
+ Eric Malmi, Yue Dong, Jonathan Mallinson, Aleksandr Chuklin, Jakub Adamek, Daniil Mirylenka, Felix Stahlberg, Sebastian Krause, Shankar Kumar, and Aliaksei Severyn. 2022. Text generation with text-editing models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Tutorial Abstracts, pages 1-7, Seattle, United States. Association for Computational Linguistics.
327
+ Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, and Aliaksei Severyn. 2019. Encode, tag, realize: High-precision text editing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5054-5065, Hong Kong, China. Association for Computational Linguistics.
328
+ Sheena Panthaplackel, Miltiadis Allamanis, and Marc Brockschmidt. 2021. Copy that! editing sequences by copying spans. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15):13622-13630.
329
+ Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on International Conference
330
+
331
+ on Machine Learning - Volume 28, ICML'13, page III-1310-III-1318. JMLR.org.
332
+ Dean A. Pomerleau. 1991. Efficient Training of Artificial Neural Networks for Autonomous Navigation. Neural Computation, 3(1):88-97.
333
+ Lutz Prechelt. 1998. Early stopping-but when? In Neural Networks: Tricks of the trade, pages 55-69. Springer.
334
+ Machel Reid and Victor Zhong. 2021. LEWIS: Levenshtein editing for unsupervised text style transfer. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3932-3944, Online. Association for Computational Linguistics.
335
+ Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 627-635. JMLR Workshop and Conference Proceedings.
336
+ Stefan Schaal. 1996. Learning from demonstration. In Advances in Neural Information Processing Systems, volume 9. MIT Press.
337
+ Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada. Association for Computational Linguistics.
338
+ Ning Shi, Wei Wang, Boxin Wang, Jinfeng Li, Xiangyu Liu, and Zhouhan Lin. 2021. Incorporating External POS Tagger for Punctuation Restoration. In Proc. Interspeech 2021, pages 1987-1991.
339
+ Ning Shi, Ziheng Zeng, Haotian Zhang, and Yichen Gong. 2020. Recurrent inference in text editing. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1758-1769, Online. Association for Computational Linguistics.
340
+ Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958.
341
+ Felix Stahlberg and Shankar Kumar. 2020. Seq2Edits: Sequence transduction using span-level edit operations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5147-5159, Online. Association for Computational Linguistics.
342
+ Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104-3112. Curran Associates, Inc.
343
+
344
+ Ronald J Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1(2):270-280.
345
+ Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
346
+ Ziyu Yao, Frank F. Xu, Pengcheng Yin, Huan Sun, and Graham Neubig. 2021. Learning structural edits via incremental tree transformations. In International Conference on Learning Representations.
347
+ Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019. Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 156-165, Minneapolis, Minnesota. Association for Computational Linguistics.
texteditingasimitationgame/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:27ccf142e2466355339af81234aa61c9a67c89ad8833b4f1af36541f0221de82
3
+ size 580950
texteditingasimitationgame/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:11ea254ed700f441123a5032ff696b196e80428ad6393c3146b109fb0967edc8
3
+ size 507780
texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/503598f3-dab7-4382-abf2-7075200db809_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:52d5cf1d09c169f4e01ce50744d165c0f5517997787ccf2fd120d14d0d2373c8
3
+ size 104566
texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/503598f3-dab7-4382-abf2-7075200db809_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:76a01b90b9cfb5ce3cc947a30dc4b8c056549489d39ef3f5f1c11d8b30db0698
3
+ size 124554
texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/503598f3-dab7-4382-abf2-7075200db809_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a087cf781e22e34e34976950eaf6f4ad9eba875cee7cfa01b1031bad305da20b
3
+ size 4900377
texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/full.md ADDED
@@ -0,0 +1,439 @@
 
 
 
 
1
+ # TextHacker: Learning based Hybrid Local Search Algorithm for Text Hard-label Adversarial Attack
2
+
3
+ Zhen Yu$^{1*}$,
4
+
5
+ Xiaosen Wang$^{1,2*}$,
6
+
7
+ Wanxiang Che$^{3}$,
8
+
9
+ Kun He$^{1\dagger}$
10
+
11
+ $^{1}$ School of Computer Science and Technology,
12
+
13
+ Huazhong University of Science and Technology, Wuhan, China
14
+
15
+ $^{2}$ Huawei Singular Security Lab, Beijing, China
16
+
17
+ $^{3}$ Research Center for SCIR, Harbin Institute of Technology, Harbin, China
18
+
19
+ {baising15,xiaosen}@hust.edu.cn, car@ir.hit.edu.cn, brooklet60@hust.edu.cn
20
+
21
+ # Abstract
22
+
23
+ Existing textual adversarial attacks usually utilize the gradient or prediction confidence to generate adversarial examples, making them hard to deploy in real-world applications. To this end, we consider a rarely investigated but more rigorous setting, namely the hard-label attack, in which the attacker can only access the prediction label. In particular, we find that we can learn the importance of different words via the change in the prediction label caused by word substitutions on adversarial examples. Based on this observation, we propose a novel adversarial attack, termed Text Hard-label attacker (TextHacker). TextHacker first randomly perturbs many words to craft an adversarial example, and then adopts a hybrid local search algorithm, with word importance estimated from the attack history, to minimize the adversarial perturbation. Extensive evaluations on text classification and textual entailment show that TextHacker significantly outperforms existing hard-label attacks in both attack performance and adversary quality. Code is available at https://github.com/JHLHUST/TextHacker.
24
+
25
+ # 1 Introduction
26
+
27
+ Despite the unprecedented success of Deep Neural Networks (DNNs), they are known to be vulnerable to adversarial examples (Szegedy et al., 2014), in which an imperceptible modification to a correctly classified sample misleads the model. Adversarial examples pose critical security threats to widely deployed deep learning systems, attracting enormous attention to adversarial attacks and defenses in various domains, e.g. Computer Vision (CV) (Szegedy et al., 2014; Goodfellow et al., 2015; Madry et al., 2018; Wang et al., 2021a) and Natural Language Processing (NLP) (Papernot et al., 2016; Liang et al., 2018; Ren et al., 2019; Wang et al., 2022; Yang et al., 2022), etc.
28
+
29
+ Compared with adversarial attacks in CV, textual adversarial attacks are more challenging due to the discrete input space and the constraints on lexicality, semantics, and fluency. Recently, various textual adversarial attacks have been proposed, including white-box attacks (Ebrahimi et al., 2018; Li et al., 2019; Wang et al., 2021c), score-based attacks (Alzantot et al., 2018; Zang et al., 2020b), and hard-label attacks (Saxena, 2020; Maheshwary et al., 2021). Among these methods, hard-label attacks, which only obtain the prediction label, are more realistic in real-world applications but also more challenging.
30
+
31
+ Existing white-box attacks (Li et al., 2019; Wang et al., 2021c) and score-based attacks (Ren et al., 2019; Yang et al., 2020) usually evaluate word importance using either the gradient or the change in logits after modifying a given word to craft adversarial examples. In contrast, due to the limited information available for hard-label attacks (i.e., only the prediction labels), it is hard to estimate word importance, leading to the relatively low effectiveness and efficiency of existing hard-label attacks (Maheshwary et al., 2021; Ye et al., 2022).
32
+
33
+ Zang et al. (2020a) have shown that estimating word importance with a reinforcement learning algorithm via the prediction confidence yields good attack performance for score-based attacks but performs poorly for hard-label attacks. We speculate that word importance cannot be estimated effectively via the prediction label because, most of the time, the label does not change when turning benign samples into adversaries. This inspires us to investigate the question: how can we effectively estimate word importance using only the prediction label? In contrast to the benign-to-adversarial direction, Wang et al. (2021b) show that replacing some words with synonyms can easily convert adversarial examples back into benign samples. Thus, we can obtain abundant and useful information (i.e., changes in the prediction label) for word importance estimation through word substitutions on the adversarial examples during the attack process. Such learned
34
+
35
+ word importance could in turn guide us to minimize the word perturbation between adversarial examples and original samples.
36
+
37
+ Based on the above observation, we propose a novel adversarial attack, named Text Hard-label attacker (TextHacker). TextHacker contains two stages, namely adversary initialization and perturbation optimization. At the adversary initialization stage, we substitute each word in the input text with one of its synonyms iteratively until we find an adversarial example. At the perturbation optimization stage, TextHacker estimates the importance of each word based on the prediction label of the initialized adversarial example after synonym substitutions. TextHacker then adopts a hybrid local search algorithm, combining local search (Aarts et al., 2003) with recombination (Radcliffe, 1993), to optimize the adversarial perturbation using this word importance, while simultaneously updating the word importance based on the model output.
38
+
39
+ To validate the effectiveness of the proposed method, we compare TextHacker with two hard-label attacks (Maheshwary et al., 2021; Ye et al., 2022) and two evolutionary score-based attacks (Alzantot et al., 2018; Zang et al., 2020b) on text classification and textual entailment. Empirical evaluations demonstrate that TextHacker significantly outperforms the baselines under the same query budget, achieving a higher average attack success rate with a lower perturbation rate and generating higher-quality adversarial examples.
40
+
41
+ # 2 Related Work
42
+
43
+ This section briefly introduces textual adversarial attacks and the hybrid local search algorithm.
44
+
45
+ # 2.1 Textual Adversarial Attacks
46
+
47
+ Existing textual adversarial attacks fall into two settings: a) white-box attacks (Liang et al., 2018; Li et al., 2019; Zhang et al., 2019; Meng and Wattenhofer, 2020; Wang et al., 2021c) allow full access to the target model, e.g. architecture, parameters, loss function, gradient, output, etc. b) black-box attacks only allow access to the model output. Black-box attacks could be further split into two categories, in which score-based attacks (Gao et al., 2018; Alzantot et al., 2018; Ren et al., 2019; Jin et al., 2020; Zang et al., 2020a,b; Garg and Ramakrishnan, 2020) could access the output logits (i.e., prediction confidence) while hard-label attacks (Saxena, 2020; Maheshwary et al., 2021; Ye
48
+
49
+ et al., 2022) could only utilize the prediction labels.
50
+
51
+ Intuitively, hard-label attacks are much harder but more applicable in the real world, and have thus gained increasing interest. TextDeceptor (Saxena, 2020) hierarchically identifies the most significant sentence in the input text and the most critical word within the chosen sentence to attack. Hard label black-box attack (HLBB) (Maheshwary et al., 2021) initializes an adversarial example via multiple random synonym substitutions and adopts a genetic algorithm to minimize the perturbation between the initialized adversarial example and the original text. TextHoaxer (Ye et al., 2022) randomly initializes an adversarial example and optimizes a perturbation matrix in the continuous embedding space to maximize the semantic similarity and minimize the number of perturbed words between the current adversarial example and the original text.
52
+
53
+ Existing hard-label attacks use the prediction labels only to evaluate adversarial examples, without exploiting further information about the victim model. In this work, we learn the importance of each word w.r.t. the model from the attack history, which is then used to enhance the effectiveness of the attack.
54
+
55
+ # 2.2 Hybrid Local Search Algorithm
56
+
57
+ The hybrid local search algorithm is a popular population-based framework that is effective on typical combinatorial optimization problems (Galinier and Hao, 1999). It usually contains two key components, i.e., local search and recombination. Given a population containing multiple initial solutions, the local search operator searches the neighborhood of each solution for a better one to approach the local optima. The recombination operator crosses over existing solutions, accepting non-improving solutions so that the search can escape local optima. A fixed number of top solutions is then kept for the next iteration. Compared to other evolutionary algorithms, e.g. the genetic algorithm (Anderson and Ferris, 1994) and particle swarm optimization (Kennedy and Eberhart, 1995), the hybrid local search algorithm balances local and global exploitation, which helps explore the search space with much higher efficiency.
58
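+
+ The framework can be summarized in a few lines; the skeleton below is our generic sketch of the procedure described above, not any specific paper's implementation.
+
+ ```python
+ import random
+
+ def hybrid_local_search(init_pop, local_search, recombine, fitness,
+                         pop_size, iterations):
+     pop = [local_search(s) for s in init_pop]        # approach local optima
+     for _ in range(iterations):
+         children = [recombine(*random.sample(pop, 2))
+                     for _ in range(len(pop) // 2)]   # escape local optima
+         pop += [local_search(c) for c in children]
+         pop = sorted(pop, key=fitness, reverse=True)[:pop_size]  # keep top solutions
+     return pop[0]
+ ```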
+
59
+ In this work, we follow the two-stage attack strategy in HLBB (Maheshwary et al., 2021). At the optimization stage, we utilize the word importance learned from the attack history to guide the local search and recombination. Thus, our method can
60
+
61
+ focus on the more critical words in the neighborhood, which helps us find the optimal adversarial example in the whole search space more efficiently.
62
+
63
+ # 3 Methodology
64
+
65
+ In this section, we first introduce the preliminaries, symbols, and definitions used in TextHacker, and then provide a detailed description of the proposed method.
66
+
67
+ # 3.1 Preliminary
68
+
69
+ Given the input space $\mathcal{X}$ containing all the input texts and the output space $\mathcal{Y} = \{y_1, y_2, \ldots, y_k\}$ , a text classifier $f: \mathcal{X} \to \mathcal{Y}$ predicts the label $f(x)$ for any input text $x = \langle w_1, w_2, \ldots, w_n \rangle \in \mathcal{X}$ , where $f(x)$ is expected to equal the ground-truth label $y_{true} \in \mathcal{Y}$ . The adversary typically adds an imperceptible perturbation to a correctly classified input text $x$ to craft a textual adversarial example $x^{adv}$ that misleads classifier $f$ :
70
+
71
+ $$
72
+ f(x^{adv}) \neq f(x) = y_{true}, \quad \mathrm{s.t.} \quad d(x^{adv}, x) < \epsilon,
73
+ $$
74
+
75
+ where $d(\cdot, \cdot)$ is a distance metric (e.g. the $\ell_p$ -norm distance or perturbation rate) that measures the distance between the benign sample and adversarial example, and $\epsilon$ is a hyper-parameter for the maximum magnitude of perturbation. We adopt the perturbation rate as the distance metric:
76
+
77
+ $$
78
+ d(x^{adv}, x) = \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}(w_i^{adv} \neq w_i),
79
+ $$
80
+
81
+ where $\mathbb{1}(\cdot)$ is the indicator function and $w_{i}\in x$ , $w_{i}^{adv}\in x^{adv}$ . Given a correctly classified text $x$ , we could reformulate the adversarial attack as minimizing the perturbation between benign sample and adversarial example while keeping adversarial:
82
+
83
+ $$
84
+ \underset{x^{adv}}{\operatorname{argmin}}\; d\left(x^{adv}, x\right) \quad \text{s.t.} \quad f\left(x^{adv}\right) \neq f(x). \tag{1}
85
+ $$
86
+
87
+ In this work, we propose a novel hard-label attack, named TextHacker, to craft textual adversarial examples by only accessing the prediction label $f(x)$ for any input sample $x$ .
88
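+
+ For concreteness, a minimal sketch (our illustration; `f` is any classifier that returns only a label) of the perturbation-rate metric and the hard-label condition in Equation (1):
+
+ ```python
+ def perturbation_rate(x_adv, x):
+     """d(x_adv, x): fraction of words that differ from the benign text."""
+     assert len(x_adv) == len(x)
+     return sum(w_adv != w for w_adv, w in zip(x_adv, x)) / len(x)
+
+ def is_adversarial(f, x_adv, x):
+     """Constraint of Equation (1): the predicted label must change."""
+     return f(x_adv) != f(x)
+ ```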
+
89
+ # 3.2 Symbols and Definitions
90
+
91
+ - Candidate set $\mathcal{C}(w_i)$ . For each word $w_i \in x$ , we construct the candidate set $\mathcal{C}(w_i) = \{\hat{w}_i^0, \hat{w}_i^1, \dots, \hat{w}_i^m\}$ containing the word $w_i$ ( $\hat{w}_i^0 = w_i$ ) and its top $m$ nearest synonyms in the counter-fitted embedding space (Mrkšić et al., 2016). All the substitutions would be constrained in this set.
92
+
93
+ - Weight table $\mathcal{W}$ . We construct a weight table $\mathcal{W}$ , a matrix with the shape of $(n, m + 1)$ , in which each item $\mathcal{W}_{i,j}$ represents the word importance of $\hat{w}_i^j \in \mathcal{C}(w_i)$ and $\mathcal{W}_{i,:} = \sum_{j=0}^{m} \mathcal{W}_{i,j}$ denotes the position importance of word $w_i \in x$ . The weight table $\mathcal{W}$ could guide the hybrid local search algorithm to determine the substitution at each iteration, which is initialized with all 0s.
94
+ - $\delta$ -neighborhood $N_{\delta}(x)$ . Given an input sample $x$ , we define its $\delta$ -neighborhood as the set of texts in the input space $\mathcal{X}$ with at most $\delta$ different words from the sample $x$ :
95
+
96
+ $$
97
+ N_{\delta}(x) = \left\{ x^{k} \;\middle|\; \sum_{i=1}^{n} \mathbb{1}\left(w_i^{k} \neq w_i\right) \leq \delta,\ x^{k} \in \mathcal{X} \right\},
98
+ $$
99
+
100
+ where $w_{i}^{k}\in x^{k},w_{i}\in x$ and $\delta$ is the maximum radius of the neighborhood. The neighborhood $N_{\delta}(x)$ reflects the search space for local search on input sample $x$ .
101
+
102
+ - Fitness function $F(x')$ . Given an input sample $x'$ and the benign text $x$ , we define the fitness function as:
103
+
104
+ $$
105
+ F(x') = \mathbb{1}\left(f(x') \neq f(x)\right) \cdot \left(1 - d(x', x)\right). \tag{2}
106
+ $$
107
+
108
+ The fitness function could evaluate the quality of adversarial example to construct the next generation for TextHacker.
109
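+
+ A minimal sketch of Equation (2) (our illustration): any non-adversarial text scores 0, so maximizing fitness both keeps the text adversarial and minimizes its perturbation.
+
+ ```python
+ def fitness(f, x_prime, x):
+     if f(x_prime) == f(x):                 # indicator term is 0
+         return 0.0
+     diff = sum(a != b for a, b in zip(x_prime, x))
+     return 1.0 - diff / len(x)             # 1 - d(x', x)
+ ```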
+
110
+ # 3.3 The Proposed TextHacker Algorithm
111
+
112
+ As illustrated in Figure 1, TextHacker contains two stages, i.e., adversary initialization to initialize an adversarial example and perturbation optimization to minimize the adversarial perturbation. In general, there are four operators used in TextHacker, namely WordSubstitution for adversary initialization, LocalSearch, WeightUpdate and Recombination for the hybrid local search algorithm at the perturbation optimization stage. The details of these operators are summarized as follows:
113
+
114
+ - WordSubstitution $(x_{t},\mathcal{C})$ : Given an input text $x_{t}$ at $t$ -th iteration with the candidate set $\mathcal{C}$ of each word $w_{i}\in x_{t}$ , we randomly substitute each word $w_{i}\in x_{t}$ with a candidate word $\hat{w}_i^j\in \mathcal{C}(w_i)$ to craft a new text $x_{t + 1}$ . WordSubstitution aims to search for an adversarial example in the entire search space by random word substitutions.
115
+ - LocalSearch $(x_{t}^{adv},\mathcal{C},\mathcal{W})$ : As shown in Figure 2, for an adversarial example $x_{t}^{adv}$ at $t$ -th
116
+
117
+ ![](images/193fe20b740fb2cf7af90f867cd1d1c82e4709798b8ca9addd9d4f95fc0a3e4e.jpg)
118
+ Figure 1: The overall framework of the proposed TextHacker algorithm. At the adversary initialization stage, for a given input text $x$ , after generating the candidate set for each word $w_i \in x$ , we randomly substitute each word with its candidate words till we obtain an adversarial example $x_1^{adv}$ . At the perturbation optimization stage, we first utilize local search to construct an initial population $\mathcal{P}^0$ . Subsequently, we iteratively adopt recombination as well as local search to maximize the fitness function, and update the weight table after each local search.
119
+
120
+ iteration with the candidate set $\mathcal{C}$ and weight table $\mathcal{W}$ , we randomly sample several (at most $\delta$ ) less important words $\hat{w}_i^{j_t} \in x_t^{adv}$ with probability $p_i$ from all the perturbed words in $x_t^{adv}$ :
121
+
122
+ $$
123
+ p_i = \frac{1 - \sigma\left(\mathcal{W}_{i,:}\right)}{\sum_{i=1}^{n} \left[ 1 - \sigma\left(\mathcal{W}_{i,:}\right) \right]},
124
+ $$
125
+
126
+ where $\sigma(x) = 1 / (1 + e^{-x})$ is the sigmoid function. The coarse-grained learning strategy in WeightUpdate can easily make the gaps between word importance values too large, resulting in probability distortion and getting stuck during candidate word selection. To solve this problem, we utilize the sigmoid function, whose saturation reduces the excessive gaps and makes the probabilities more reasonable. Then, with equal probability, we either substitute each chosen word $\hat{w}_i^{j_t}$ with the original word $\hat{w}_i^0$ or with a candidate word $\hat{w}_i^{j_{t+1}} \in \mathcal{C}(w_i)$ sampled with probability $p_{i,j_{t+1}}$ , generating a new sample $x_{t+1}^{adv}$ :
127
+
128
+ $$
129
+ p_{i,j_{t+1}} = \frac{\sigma(\mathcal{W}_{i,j_{t+1}})}{\sum_{j_{t+1}=0}^{m} \sigma(\mathcal{W}_{i,j_{t+1}})}.
130
+ $$
131
+
132
+ We accept $x_{t+1}^{adv}$ if it is still adversarial; otherwise, we return the input adversarial example $x_{t}^{adv}$ . LocalSearch greedily substitutes unimportant words with the original or more critical candidate words using the weight table, searching for a better adversarial example in the $\delta$ -neighborhood of $x_{t}^{adv}$ .
133
+
134
+ - WeightUpdate $(x_{t}^{adv}, x_{t+1}^{adv}, f, \mathcal{W})$ : Given an adversarial example $x_{t}^{adv}$ at the $t$ -th iteration and the adversary $x_{t+1}^{adv}$ generated by local search, we update the word importance of each operated word
135
+
136
+ ![](images/d57ec0cba051d8118af362fc492ff88d74183a93d3cde033d3d2f35ec6c1040e.jpg)
137
+ Figure 2: The overview of the LocalSearch and WeightUpdate. For an adversary $x_{t}^{adv}$ , we sample several words with probability $p_{i}$ based on the weight table. Then, we substitute each sampled word with original word or its candidate word with probability $p_{i,j}$ to generate a new text $x_{t+1}^{adv}$ . Finally, we use the prediction label of the new text $x_{t+1}^{adv}$ to update the weight table.
138
+
139
+ $\hat{w}_i^{j_t} \in x_t^{adv}$ and $\hat{w}_i^{j_{t+1}} \in x_{t+1}^{adv}$ , and the position importance of $w_i$ using the following rules:
140
+
141
+ Rule I: For each replaced word $\hat{w}_i^{j_{t + 1}}$ , if $x_{t + 1}^{adv}$ is still adversarial, the word has a positive impact on the adversary generation, so we increase its weight $\mathcal{W}_{i,j_{t + 1}}$ , and vice versa.
142
+
143
+ Rule II: For each operated position $i$ , if $x_{t+1}^{adv}$ is still adversarial, the position has little impact on the adversary generation, so we decrease the position weight $\mathcal{W}_{i,:}$ , and vice versa.
144
+
145
+ Specifically, if $x_{t+1}^{adv}$ is still adversarial, we assign the positive reward $r$ to each replaced word $\hat{w}_i^{j_{t+1}}$
146
+
147
+ using Rule I, and reward $-2r$ to each $\hat{w}_i^{j_t}$ to decrease the weight summation $\mathcal{W}_{i,:} = \sum_{j=0}^{m} \mathcal{W}_{i,j}$ in each operated position $i$ using Rule II:
148
+
149
+ $$
150
+ \mathcal{W}'_{i,j_{t+1}} = \mathcal{W}_{i,j_{t+1}} + r, \quad \mathcal{W}'_{i,j_t} = \mathcal{W}_{i,j_t} - 2r,
151
+ $$
152
+
153
+ where $r$ is the predefined reward value and $\mathcal{W}'$ is the weight table after the update. Otherwise, we assign the reward $-r$ to each $\hat{w}_i^{j_{t+1}}$ and $2r$ to each $\hat{w}_i^{j_t}$ . WeightUpdate highlights the important words and positions by assigning a different reward to each operated word, which helps LocalSearch select more critical positions and synonyms to substitute (see the code sketch after this list).
154
+
155
+ - Recombination $(\mathcal{P}^t, \mathcal{W})$ : For the $t$ -th generation population $\mathcal{P}^t$ that contains multiple adversarial examples, we combine two randomly sampled texts $x^a = \langle w_1^a, w_2^a, \ldots, w_n^a \rangle \in \mathcal{P}^t$ and $x^b = \langle w_1^b, w_2^b, \ldots, w_n^b \rangle \in \mathcal{P}^t$ to construct a recombined text $x^c = \langle w_1^c, w_2^c, \ldots, w_n^c \rangle$ , where each word $w_i^c$ is randomly sampled from $\{w_i^a, w_i^b\}$ based on their weights in the weight table $\mathcal{W}$ . We repeat the operation $|\mathcal{P}^t| / 2$ times, and then return all the recombined texts. Recombination crafts non-improved solutions by randomly mixing two adversarial examples, which globally changes the text to avoid poor local optima.
156
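+
+ The code sketch below (our illustration with assumed shapes, not the official implementation) summarizes the weight-guided sampling used by LocalSearch and the reward rules of WeightUpdate; for simplicity it scores all positions, whereas the attack samples only among the currently perturbed ones.
+
+ ```python
+ import numpy as np
+
+ def sigmoid(z):
+     return 1.0 / (1.0 + np.exp(-z))
+
+ def position_probs(W):
+     """p_i: prefer positions whose summed weight W_{i,:} is small."""
+     s = 1.0 - sigmoid(W.sum(axis=1))
+     return s / s.sum()
+
+ def candidate_probs(W, i):
+     """p_{i,j}: prefer candidate words with a large weight W_{i,j}."""
+     s = sigmoid(W[i])
+     return s / s.sum()
+
+ def weight_update(W, ops, still_adversarial, r=0.5):
+     """Rules I and II; ops is a list of (i, j_t, j_t1) substitutions."""
+     sign = 1.0 if still_adversarial else -1.0
+     for i, j_t, j_t1 in ops:
+         W[i, j_t1] += sign * r       # Rule I: reward the replacement word
+         W[i, j_t] -= sign * 2 * r    # Rule II: move the position weight W_{i,:}
+     return W
+ ```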
+
157
+ In summary, as shown in Figure 1, at the adversary initialization stage, for an input text $x$ , we adopt WordSubstitution iteratively to search for an adversarial example. At the perturbation optimization stage, we initialize the weight table $\mathcal{W}$ and adopt the hybrid local search algorithm to minimize the adversarial perturbation. Specifically, we first utilize LocalSearch to construct an initial population. At each iteration, we adopt Recombination and LocalSearch to generate several adversarial examples using the weight table $\mathcal{W}$ . Then we utilize the fitness function in Equation (2) to select adversarial examples for the next generation. After the optimization, the adversary with the highest fitness is taken as the final adversarial example. The overall algorithm of TextHacker is summarized in Algorithm 1.
158
+
159
+ # 4 Experiments
160
+
161
+ In this section, we conduct extensive experiments on eight benchmark datasets and four models to validate the effectiveness of TextHacker.
162
+
163
+ Algorithm 1: The TextHacker Algorithm
164
+ Input: Input sample $x$ , target classifier $f$ , query budget $T$ , reward $r$ , population size $S$ , maximum number of local search $N$
165
+ Output: Attack result and adversarial example
166
+ 1 $\triangleright$ Adversary Initialization
167
+ 2 Construct the candidate set $\mathcal{C}(w_i)$ for each $w_i \in x$
168
+ 3 $x_1 = x$ , $x_1^{adv} = \text{None}$
169
+ 4 for $t = 1 \rightarrow T$ do
170
+ 5 $x_{t+1} = \text{WordSubstitution}(x_t, \mathcal{C})$
171
+ 6 if $f(x_{t+1}) \neq f(x)$ then
172
+ 7 $x_1^{adv} = x_{t+1}$ ; break
173
+ 8 if $x_1^{adv}$ is None then
174
+ 9 return False, None
175
+ 10 $\triangleright$ Perturbation Optimization
176
+ 11 Initialize the weight table $\mathcal{W}$ with all 0s
177
+ 12 $x_{i+1}^{adv} = \text{LocalSearch}(x_i^{adv}, \mathcal{C}, \mathcal{W})$
178
+ 13 $\mathcal{P}^1 = \{x_1^{adv}, \dots, x_i^{adv}, \dots, x_S^{adv}\}$
179
+ 14 $t = t + S - 1; g = 1$
180
+ 15 while $t \leq T$ do
181
+ 16 $\mathcal{P}^g = \mathcal{P}^g \cup \{\text{Recombination}(\mathcal{P}^g, \mathcal{W})\}$
182
+ 17 for each text $x_g^{adv} \in \mathcal{P}^g$ do
183
+ 18 With $x_1^{adv} = x_g^{adv}$ for $i = 1 \rightarrow N$ :
184
+ 19 $x_{i+1}^{adv} = \text{LocalSearch}(x_i^{adv}, \mathcal{C}, \mathcal{W})$ ;
185
+ 20 WeightUpdate $(x_i^{adv}, x_{i+1}^{adv}, f, \mathcal{W})$
186
+ 21 $\mathcal{P}^g = \mathcal{P}^g \cup \{x_{N+1}^{adv}\}$
187
+ 22 $t = t + N$
188
+ 23 Construct $\mathcal{P}^{g+1}$ with the top $S$ fitness in $\mathcal{P}^g$ based on Equation (2)
189
+ 24 Record global optima $x_{best}^*$ with the highest fitness
190
+ 25 $g = g + 1$
191
+ 26 return True, $x_{best}^*$ $\triangleright$ Attack succeeds
192
+
193
+ # 4.1 Experimental Setup
194
+
195
+ Datasets. We adopt five widely investigated datasets, i.e., AG's News (Zhang et al., 2015), IMDB (Maas et al., 2011), MR (Pang and Lee, 2005), Yelp (Zhang et al., 2015), and Yahoo! Answers (Zhang et al., 2015) for text classification. For textual entailment, we select SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018), where MultiNLI includes matched version (MNLI) and mismatched version (MNLIm).
196
+
197
+ Baselines. We take the hard-label attacks HLBB (Maheshwary et al., 2021) and TextHoaxer (Ye et al., 2022) as our baselines. Since only a few hard-label attacks have been proposed recently, we also adopt two evolutionary score-based attacks, i.e., GA (Alzantot et al., 2018) and PSO (Zang et al., 2020b), for reference; these additionally utilize the prediction confidence for the attack.
198
+
199
+ Victim Models. We adopt WordCNN (Kim, 2014), WordLSTM (Hochreiter and Schmidhuber, 1997), and BERT base-uncased (Devlin et al.,
200
+
201
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">Attack</td><td colspan="2">AG&#x27;s News</td><td colspan="2">IMDB</td><td colspan="2">MR</td><td colspan="2">Yelp</td><td colspan="2">Yahoo! Answers</td></tr><tr><td>Succ.</td><td>Pert.</td><td>Succ.</td><td>Pert.</td><td>Succ.</td><td>Pert.</td><td>Succ.</td><td>Pert.</td><td>Succ.</td><td>Pert.</td></tr><tr><td rowspan="5">BERT</td><td>GA</td><td>40.5</td><td>13.4</td><td>50.9</td><td>5.0</td><td>65.6</td><td>10.9</td><td>36.6</td><td>8.6</td><td>64.2</td><td>7.6</td></tr><tr><td>PSO</td><td>45.8</td><td>12.1</td><td>60.3</td><td>3.7</td><td>74.4</td><td>10.7</td><td>47.9</td><td>7.5</td><td>64.7</td><td>6.6</td></tr><tr><td>HLBB</td><td>54.7</td><td>13.4</td><td>77.0</td><td>4.8</td><td>65.8</td><td>11.4</td><td>57.1</td><td>8.2</td><td>82.0</td><td>7.7</td></tr><tr><td>TextHoaxter</td><td>52.0</td><td>12.8</td><td>78.8</td><td>5.1</td><td>67.1</td><td>11.1</td><td>58.3</td><td>8.5</td><td>83.1</td><td>7.6</td></tr><tr><td>TextHacker</td><td>63.2</td><td>11.9</td><td>81.5</td><td>3.4</td><td>73.1</td><td>11.4</td><td>63.2</td><td>6.7</td><td>87.2</td><td>6.3</td></tr><tr><td rowspan="5">Word CNN</td><td>GA</td><td>70.0</td><td>12.1</td><td>59.6</td><td>5.9</td><td>72.9</td><td>11.1</td><td>44.4</td><td>9.0</td><td>62.0</td><td>8.7</td></tr><tr><td>PSO</td><td>83.5</td><td>10.4</td><td>55.6</td><td>4.2</td><td>80.7</td><td>10.7</td><td>45.6</td><td>7.4</td><td>52.7</td><td>7.0</td></tr><tr><td>HLBB</td><td>74.0</td><td>11.7</td><td>74.0</td><td>4.2</td><td>71.1</td><td>11.2</td><td>67.1</td><td>7.6</td><td>78.7</td><td>7.8</td></tr><tr><td>TextHoaxter</td><td>73.5</td><td>11.5</td><td>76.5</td><td>4.6</td><td>71.1</td><td>10.7</td><td>68.1</td><td>8.0</td><td>78.6</td><td>7.8</td></tr><tr><td>TextHacker</td><td>81.7</td><td>10.2</td><td>77.8</td><td>3.0</td><td>78.3</td><td>11.1</td><td>75.4</td><td>6.4</td><td>84.5</td><td>6.3</td></tr><tr><td rowspan="5">Word LSTM</td><td>GA</td><td>45.5</td><td>12.4</td><td>50.8</td><td>5.7</td><td>67.2</td><td>11.2</td><td>40.7</td><td>8.1</td><td>51.2</td><td>8.6</td></tr><tr><td>PSO</td><td>54.2</td><td>11.6</td><td>42.5</td><td>4.5</td><td>73.0</td><td>10.9</td><td>44.5</td><td>6.7</td><td>43.3</td><td>7.3</td></tr><tr><td>HLBB</td><td>56.8</td><td>12.7</td><td>72.1</td><td>4.1</td><td>68.3</td><td>11.2</td><td>61.0</td><td>6.6</td><td>70.8</td><td>8.3</td></tr><tr><td>TextHoaxter</td><td>56.5</td><td>12.3</td><td>73.5</td><td>4.5</td><td>67.9</td><td>10.7</td><td>61.8</td><td>6.7</td><td>70.1</td><td>8.1</td></tr><tr><td>TextHacker</td><td>64.7</td><td>11.2</td><td>76.2</td><td>3.0</td><td>75.2</td><td>11.2</td><td>65.4</td><td>5.5</td><td>75.5</td><td>6.9</td></tr></table>
202
+
203
+ 2019) models for text classification and BERT base-uncased model for textual entailment.
204
+
205
+ Evaluation Settings. For TextHacker, we set the neighborhood size $\delta = 5$ , reward $r = 0.5$ , population size $S = 4$ , and maximum number of local search steps $N = 8$ . The parameter studies are given in Appendix A. For a fair comparison, we tune the population size and adopt the same values for the other parameters as in the original papers to achieve better performance for the score-based attacks GA and PSO. All evaluations are conducted on 1,000 texts randomly sampled from the corresponding test set. We set the synonym number $m = 4$ . An attack succeeds if the perturbation rate of the generated adversarial example is smaller than $25\%$ , which ensures the semantic constraints on the adversarial examples. As the task complexity varies across datasets, we set different query budgets $T$ (i.e., the maximum number of queries to the victim model) for different tasks: 2,000 for text classification and 500 for textual entailment. The results are averaged over five runs to eliminate randomness.
206
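+
+ For reference, the hyper-parameters above collected in one place (our packaging; the variable names are not from the paper):
+
+ ```python
+ TEXTHACKER_CONFIG = {
+     "neighborhood_size_delta": 5,
+     "reward_r": 0.5,
+     "population_size_S": 4,
+     "max_local_search_N": 8,
+     "synonym_number_m": 4,
+     "max_perturbation_rate": 0.25,
+     "query_budget_T": {"text_classification": 2000, "textual_entailment": 500},
+ }
+ ```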
+
207
+ # 4.2 Evaluation on Attack Effectiveness
208
+
209
+ We first conduct evaluations for text classification using five datasets on three models under the same query budget of 2,000. The results, including attack success rate and perturbation rate, are summarized in Table 1. We could observe that TextHacker consistently achieves higher attack success rate with lower perturbation rate across almost all the datasets and victim models than the hard-label
210
+
211
+ Table 1: Attack success rate (Succ., %) ↑, perturbation rate (Pert., %) ↓ of various attacks on three models using five datasets for text classification under the query budget of 2,000. ↑ denotes the higher the better. ↓ denotes the lower the better. We bold the highest attack success rate and lowest perturbation rate among the hard-label attacks.
212
+
213
+ <table><tr><td rowspan="2">Attack</td><td colspan="2">SNLI</td><td colspan="2">MNLI</td><td colspan="2">MNLIm</td></tr><tr><td>Succ.</td><td>Pert.</td><td>Succ.</td><td>Pert.</td><td>Succ.</td><td>Pert.</td></tr><tr><td>GA</td><td>67.2</td><td>14.6</td><td>67.6</td><td>12.6</td><td>66.9</td><td>12.2</td></tr><tr><td>PSO</td><td>70.7</td><td>15.0</td><td>72.0</td><td>12.9</td><td>70.8</td><td>12.4</td></tr><tr><td>HLBB</td><td>57.2</td><td>14.0</td><td>58.3</td><td>12.2</td><td>58.6</td><td>11.8</td></tr><tr><td>TextHoaxer</td><td>61.0</td><td>14.1</td><td>64.0</td><td>12.4</td><td>63.8</td><td>12.0</td></tr><tr><td>TextHacker</td><td>70.3</td><td>15.0</td><td>68.3</td><td>12.8</td><td>69.0</td><td>12.4</td></tr></table>
214
+
215
+ Table 2: Attack success rate (Succ., %) ↑, perturbation rate (Pert., %) ↓ of TextHacker and the baselines on BERT using three datasets for textual entailment under the query budget of 500.
216
+
217
+ attacks. Even for the score-based attacks of GA and PSO, TextHacker exhibits better attack performance on most datasets and victim models.
218
+
219
+ To further validate the effectiveness of the proposed TextHacker, we also conduct evaluations on BERT for three textual entailment tasks. As shown in Table 2, under the same query budget of 500, TextHacker outperforms HLBB by a clear margin of $10.0\% - 13.1\%$ and TextHoaxer by $4.3\% - 9.3\%$ across the three datasets with a similar perturbation rate. Compared with the score-based attacks, TextHacker achieves a lower attack success rate than PSO but still a better one than GA. This is acceptable, since GA and PSO additionally utilize the changes in prediction confidence introduced by synonym substitution, making the attack much easier than in the hard-label setting.
220
+
221
+ In conclusion, under the same query budgets, the proposed TextHacker exhibits much better attack performance than existing hard-label attacks, for
222
+
223
+ ![](images/2242dd72dc1487a46c808e5dcdefa83ef4b714fbb3d733a46c0efaf1ad8620df.jpg)
224
+ Figure 3: Attack success rate $(\%)$ ↑ of various attacks on BERT using IMDB dataset under various query budgets.
225
+
226
+ either text classification or textual entailment, and achieves comparable or even better attack performance than the advanced score-based attacks.
227
+
228
+ # 4.3 Evaluation on Attack Efficiency
229
+
230
+ In practice, the victim could block the attack by simply denying access upon detecting an excessive number of queries within a short period. Hence, the attack efficiency, which often refers to the query budget on the victim model, plays a key role in evaluating the effectiveness of black-box attacks. On the other hand, the query budget significantly affects the attack performance of the algorithm. Thus, a good attack should exhibit consistent and superior attack performance under various query budgets.
231
+
232
+ We report the attack success rate of TextHacker and the baselines under various query budgets on BERT using the IMDB dataset in Figure 3. TextHacker, HLBB and TextHoaxer exhibit remarkably higher attack success rates than GA and PSO under a limited query budget ( $\leq 2,000$ ). We further analyze why GA and PSO perform poorly under limited query budgets in Appendix B. As the query budget increases, the attack success rate of GA and PSO starts to rise rapidly but remains lower than that of TextHacker, which maintains stable and effective performance. In general, TextHacker consistently exhibits better attack performance under various query budgets, which further demonstrates its superiority.
233
+
234
+ # 4.4 Evaluation on Adversary Quality
235
+
236
+ Adversarial examples should be indistinguishable from benign samples to humans while misleading the model prediction. Hence, textual adversarial examples should maintain the original meaning without
237
+
238
+ <table><tr><td>Attack</td><td>Succ.</td><td>Pert.</td><td>Sim.</td><td>Gram.</td></tr><tr><td>GA</td><td>50.9</td><td>5.0</td><td>79.3</td><td>0.9</td></tr><tr><td>PSO</td><td>60.3</td><td>3.7</td><td>81.8</td><td>0.7</td></tr><tr><td>HLBB</td><td>77.0</td><td>4.8</td><td>84.9</td><td>0.6</td></tr><tr><td>TextHoaxer</td><td>78.8</td><td>5.1</td><td>85.8</td><td>0.6</td></tr><tr><td>TextHacker</td><td>81.5</td><td>3.4</td><td>82.3</td><td>0.4</td></tr></table>
239
+
240
+ Table 3: Attack success rate (Succ., %) ↑, perturbation rate (Pert., %) ↓, average semantic similarity (Sim., %) ↑, grammatical error increase rate (Gram., %) ↓ of TextHacker and the baselines on BERT using the IMDB dataset under the query budget of 2,000.
241
+
242
+ apparent typos or grammatical errors. Though existing word-level attacks adopt synonym substitution to maintain semantic consistency, they can still introduce grammatical errors and semantic inconsistencies. Apart from the perturbation rate, we further evaluate the semantic similarity and the grammatical error increase rate using the Universal Sentence Encoder (USE) (Cer et al., 2018) and Language-Tool $^{1}$ , respectively.
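+
+ For concreteness, these two automatic metrics can be computed roughly as follows. This is a minimal sketch using the public USE module on TensorFlow Hub and the `language_tool_python` wrapper for LanguageTool, not the authors' evaluation script; in particular, the normalization of the grammatical error increase rate is one plausible reading.
+
+ ```python
+ # Sketch: semantic similarity via USE and grammatical errors via LanguageTool.
+ # Assumes: pip install tensorflow tensorflow_hub language_tool_python
+ import numpy as np
+ import tensorflow_hub as hub
+ import language_tool_python
+
+ use = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
+ tool = language_tool_python.LanguageTool("en-US")
+
+ def semantic_similarity(benign: str, adversarial: str) -> float:
+     """Cosine similarity between the USE embeddings of the two texts."""
+     emb = use([benign, adversarial]).numpy()
+     a, b = emb[0], emb[1]
+     return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+ def grammar_error_increase(benign: str, adversarial: str) -> float:
+     """Relative increase in LanguageTool matches (one plausible reading
+     of the grammatical error increase rate)."""
+     n_orig = len(tool.check(benign))
+     n_adv = len(tool.check(adversarial))
+     return (n_adv - n_orig) / max(n_orig, 1)
+ ```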
243
+
244
+ We compare TextHacker with the baselines on BERT using the IMDB dataset and summarize the results in Table 3. With the lowest perturbation rate, TextHacker achieves better semantic similarity than the score-based attacks GA and PSO, though lower than HLBB and TextHoaxer, which explicitly consider the semantic similarity of synonyms with the USE tool during the attack. However, the USE tool is time-consuming and computationally expensive: as shown in Table 4, HLBB and TextHoaxer run slower than TextHacker, and their CPU occupancy rate is seven times that of TextHacker. TextHacker also achieves the lowest grammatical error increase rate among the compared attacks. The human evaluation in Appendix C shows that the adversarial examples generated by TextHacker are of high quality and difficult for humans to detect. These evaluations demonstrate the high lexicality, semantic similarity and fluency of the adversarial examples generated by TextHacker.
245
+
246
+ # 4.5 Evaluation on Real-world Applications
247
+
248
+ With the rapid development and broad application of DNNs, numerous companies have deployed commercial Application Programming Interfaces (APIs) for various tasks, e.g., sentiment analysis, named entity recognition, etc. A user can obtain the prediction label by calling the service API, which makes hard-label attacks feasible. To validate the attack effectiveness
249
+
250
+ Original Text. Label: Positive. A gripping movie, played with performance that are all understated and touching. Adversarial Text. Label: Negative. A gripping films, played with representations that sunt all devaluted and touching.
+
+ ![](images/e45899584c29c2118b309a98411f66779cd1c12ca8975265e4bff68b340e159c.jpg)
+ Weight Table
+
+ ![](images/c977775839207280abd96367e8751433e7f706d3e851bfca60b2934305ed02ab.jpg)
+ Word Importance Table
+
+ Figure 4: Visualization of the weight table in TextHacker and the word importance table from the victim model, representing the word importance of nouns, verbs, adjectives, adverbs, and their candidate words in the original text. The original words are highlighted in Cyan, with each row representing the candidate words. The substituted words are highlighted in Red with marker $\star$ . A darker color indicates a more important word.
257
+
258
+ <table><tr><td>Attack</td><td>Succ.</td><td>Pert.</td><td>Sim.</td><td>Gram.</td><td>Time</td></tr><tr><td>HLBB</td><td>65.0</td><td>5.7</td><td>82.1</td><td>0.5</td><td>8.7</td></tr><tr><td>TextHoaxer</td><td>65.0</td><td>5.2</td><td>82.2</td><td>0.4</td><td>9.3</td></tr><tr><td>TextHacker</td><td>75.0</td><td>3.1</td><td>80.9</td><td>0.3</td><td>5.7</td></tr></table>
259
+
260
+ Table 4: Attack success rate (Succ., %) ↑, perturbation rate (Pert., %) ↓, average semantic similarity (Sim., %) ↑, grammatical error increase rate (Gram., %) ↓, and running time per attack (Time, in minutes) ↓ of various hard-label attacks on Amazon Cloud APIs under the query budget of 2,000.
261
+
262
+ of TextHacker in the real world, we evaluate the attack performance of TextHacker, HLBB, and TextHoaxer on the Amazon Cloud sentiment analysis $\mathsf{API}^2$ . Moreover, attacks that run faster are more practical and convenient in the real world, so we also report the average running time per attack. Due to the high cost of commercial APIs, we sample 20 texts from the IMDB dataset for this test. As shown in Table 4, TextHacker achieves a higher attack success rate, generates higher-quality adversarial examples, and runs faster than HLBB and TextHoaxer when attacking real-world APIs under a tight query budget.
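+
+ For illustration, a hard-label oracle over a commercial API can be wrapped behind a query-budget counter as sketched below. AWS Comprehend accessed via `boto3` serves as a stand-in here; the exact endpoint, credentials and rate limits of the API attacked in the paper may differ.
+
+ ```python
+ # Sketch: a budget-limited hard-label oracle over a commercial sentiment API.
+ import boto3
+
+ class HardLabelOracle:
+     def __init__(self, budget: int = 2000, region: str = "us-east-1"):
+         self.client = boto3.client("comprehend", region_name=region)
+         self.budget = budget
+         self.queries = 0
+
+     def predict(self, text: str) -> str:
+         """Return only the predicted label; stop once the budget is spent."""
+         if self.queries >= self.budget:
+             raise RuntimeError("query budget exhausted")
+         self.queries += 1
+         resp = self.client.detect_sentiment(Text=text, LanguageCode="en")
+         return resp["Sentiment"]  # e.g. "POSITIVE" -- the only signal available
+ ```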
263
+
264
+ # 4.6 Visualization of Weight Table
265
+
266
+ Existing attacks (Ren et al., 2019; Jin et al., 2020) usually take the model's output changes with respect to different words as the word importance and perturb the most important words to generate adversarial examples. In this work, the weight table plays such a role, learning the word importance from the attack history. Thus, precise estimation of the model's behavior is the key to generating better adversarial examples. To further explore TextHacker, we conduct a comparison and visualization to analyze
267
+
268
+ <table><tr><td>Attack</td><td>Succ.</td><td>Pert.</td><td>Sim.</td><td>Gram.</td></tr><tr><td>Weight table</td><td>22.4</td><td>11.9</td><td>71.5</td><td>1.3</td></tr><tr><td>Hybrid local search</td><td>79.6</td><td>6.2</td><td>77.5</td><td>0.7</td></tr><tr><td>TextHacker</td><td>81.5</td><td>3.4</td><td>82.3</td><td>0.4</td></tr></table>
269
+
270
+ Table 5: Ablation study on the hybrid local search algorithm and the weight table in TextHacker on BERT using the IMDB dataset under the query budget of 2,000.
271
+
272
+ the difference between the weight table and the word importance table derived from the model. We use TextHacker to generate an adversarial example for one benign text sampled from the MR dataset. For the word importance table, we calculate the importance of each word as the difference in prediction confidence on BERT after replacing the original word with the candidate word. We map the values in the learned weight table and the word importance table into [-1, 1] and illustrate their heatmaps in Figure 4. More case studies are presented in Appendix D. We find that the weight table is consistent with the word importance table for the most important words. This helps TextHacker optimize the adversarial perturbation more efficiently and concentrate on the most important words to craft better adversarial examples, which is both important and challenging in the hard-label setting and further explains the superiority of TextHacker.
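+
+ As a minimal sketch, such a word importance table can be computed from confidence differences as follows, with a generic Hugging Face sentiment classifier standing in for the paper's BERT victim; the [-1, 1] mapping mirrors the heatmap normalization described above.
+
+ ```python
+ # Sketch: a word importance table from prediction confidence differences.
+ from transformers import pipeline
+
+ clf = pipeline("sentiment-analysis")  # stand-in victim model
+
+ def confidence(text: str, label: str) -> float:
+     out = clf(text)[0]
+     return out["score"] if out["label"] == label else 1.0 - out["score"]
+
+ def importance_table(words, candidates, label):
+     """importance[i][j] = confidence drop when words[i] becomes candidates[i][j]."""
+     base = confidence(" ".join(words), label)
+     table = []
+     for i, cands in enumerate(candidates):
+         row = []
+         for cand in cands:
+             perturbed = words[:i] + [cand] + words[i + 1:]
+             row.append(base - confidence(" ".join(perturbed), label))
+         table.append(row)
+     # map the values into [-1, 1] for visualization, as in the heatmaps
+     m = max((abs(v) for row in table for v in row), default=1.0) or 1.0
+     return [[v / m for v in row] for row in table]
+ ```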
273
+
274
+ # 4.7 Ablation Study
275
+
276
+ To study the impact of the different components of TextHacker, we conduct a series of ablation studies on BERT using the IMDB dataset under the query budget of 2,000.
277
+
278
+ The impact of the weight table and hybrid local search. We design two variants to evaluate the impact of the components of TextHacker. a)
279
+
280
+ <table><tr><td>Attack</td><td>Succ.</td><td>Pert.</td><td>Sim.</td><td>Gram.</td></tr><tr><td>Local search → Mutation</td><td>79.1</td><td>6.1</td><td>77.5</td><td>0.7</td></tr><tr><td>Recombination → Crossover</td><td>81.3</td><td>3.7</td><td>81.9</td><td>0.4</td></tr><tr><td>TextHacker</td><td>81.5</td><td>3.4</td><td>82.3</td><td>0.4</td></tr></table>
281
+
282
+ Table 6: Ablation study on the hybrid local search in TextHacker and the genetic algorithm in HLBB on BERT using the IMDB dataset under the query budget of 2,000.
283
+
284
+ <table><tr><td>Attack</td><td>Succ.</td><td>Pert.</td><td>Sim.</td><td>Gram.</td></tr><tr><td>Random-search</td><td>80.2</td><td>5.3</td><td>77.8</td><td>0.7</td></tr><tr><td>Random-flip</td><td>81.0</td><td>5.3</td><td>76.4</td><td>0.7</td></tr><tr><td>TextHacker</td><td>81.5</td><td>3.4</td><td>82.3</td><td>0.4</td></tr></table>
285
+
286
+ Table 7: Ablation study on the hybrid local search in TextHacker and alternative strategies on BERT using the IMDB dataset under the query budget of 2,000.
287
+
288
+ weight table: we remove the hybrid local search and iteratively, greedily substitute a sampled word with its synonyms based on the weight table. b) hybrid local search: we use the hybrid local search to find better adversaries without the weight table. The results in Table 5 demonstrate the effectiveness and complementarity of the two components of TextHacker.
289
+
290
+ Hybrid local search vs. genetic algorithms. The genetic algorithm in HLBB explores the search space less efficiently than the hybrid local search in TextHacker, which balances local exploitation and global exploration. Compared with the random synonym substitutions of HLBB's mutation, the local search replaces more critical words using the learned word importance, reaching local optima faster. To further illustrate the differences, we replace local search with mutation and recombination with crossover, respectively. The experiments in Table 6 show that the first change drops the success rate by $2.4\%$ and increases the perturbation rate by $2.7\%$ , while the second change drops the success rate by $0.2\%$ and increases the perturbation rate by $0.3\%$ . This study validates the superior performance of local search and recombination.
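+
+ To make the contrast concrete, the sketch below juxtaposes a uniform mutation step with a weight-guided local-search step. The names `synonyms`, `weights` (assumed non-negative) and the hard-label oracle `is_adversarial` are illustrative stand-ins, not the authors' implementation.
+
+ ```python
+ # Sketch: HLBB-style uniform mutation vs. weight-guided local search.
+ import random
+
+ def mutation_step(adv_words, synonyms):
+     """Pick a position uniformly at random and substitute a random synonym."""
+     i = random.randrange(len(adv_words))
+     adv_words[i] = random.choice(synonyms[i])
+     return adv_words
+
+ def local_search_step(adv_words, orig_words, synonyms, weights, is_adversarial):
+     """Sample a position in proportion to its learned weight; try restoring the
+     original word first (lower perturbation) and keep only label-preserving edits."""
+     i = random.choices(range(len(adv_words)), weights=weights, k=1)[0]
+     old = adv_words[i]
+     for cand in [orig_words[i]] + list(synonyms[i]):
+         adv_words[i] = cand
+         if is_adversarial(adv_words):  # one query to the victim per candidate
+             return adv_words
+     adv_words[i] = old  # revert if no candidate keeps the example adversarial
+     return adv_words
+ ```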
291
+
292
+ Local search vs. alternative strategies. We replace the local search with two alternative strategies: random-search, which randomly substitutes the sampled word with one of its synonyms, and random-flip, which directly substitutes the sampled word with the original word. The experiments in Table 7 demonstrate that local search achieves better attack performance than both random-search and random-flip, showing the superiority of the local search in TextHacker.
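+
+ For reference, the two alternative strategies reduce to the following steps (a sketch with the same illustrative names as above):
+
+ ```python
+ # Sketch of the two ablation strategies that stand in for local search.
+ import random
+
+ def random_search_step(adv_words, synonyms, i):
+     """random-search: substitute the sampled word with a random synonym."""
+     adv_words[i] = random.choice(synonyms[i])
+     return adv_words
+
+ def random_flip_step(adv_words, orig_words, i):
+     """random-flip: substitute the sampled word directly with the original word."""
+     adv_words[i] = orig_words[i]
+     return adv_words
+ ```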
293
+
294
+ # 5 Conclusion
295
+
296
+ In this work, we propose a new text hard-label attack called TextHacker. TextHacker identifies the words that have a higher impact on the adversarial example via changes in the prediction label. By incorporating the learned word importance into the search process of the hybrid local search, TextHacker reduces the perturbation between the adversarial example and the benign text more efficiently, generating more natural adversarial examples. Extensive evaluations on two typical NLP tasks, namely text classification and textual entailment, using various datasets and models demonstrate that TextHacker achieves a higher attack success rate and a lower perturbation rate than existing hard-label attacks and generates higher-quality adversarial examples. We believe that TextHacker could shed new light on more precise estimation of word importance and inspire more research on hard-label attacks.
297
+
298
+ # Limitations
299
+
300
+ As shown in Table 3, adversarial examples generated by TextHacker have a slightly lower semantic similarity than those of HLBB and TextHoaxer from the perspective of automatic metrics. However, the quality (i.e., lexicality, semantic similarity and fluency) of adversarial examples depends not only on the semantic similarity score but also on the perturbation rate, grammatical error rate, human evaluation, etc. In our experiments, the results in Table 3 and the human evaluation in Appendix C demonstrate that the adversarial examples generated by TextHacker are of higher quality and harder for humans to detect. In addition, the semantic similarity metric is usually measured with the USE tool, which incurs high computing resource occupancy and slows down the attack algorithm, as described in Section 4.4, whereas a faster and less resource-intensive attack is usually more practical and convenient in the real world. Optimizing semantic similarity alone may therefore not be a good choice for generating high-quality adversarial examples. Hence, we consider this limitation acceptable.
301
+
302
+ # Acknowledgement
303
+
304
+ This work is supported by the National Natural Science Foundation of China (62076105, U22B2017).
305
+
306
+ # References
307
+
308
+ Emile Aarts and Jan Karel Lenstra. 2003. Local search in combinatorial optimization. Princeton University Press.
309
+ Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Conference on Empirical Methods in Natural Language Processing.
310
+ Edward J Anderson and Michael C Ferris. 1994. Genetic algorithms for combinatorial optimization: the assembly line balancing problem. ORSA Journal on Computing.
311
+ Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Conference on Empirical Methods in Natural Language Processing.
312
+ Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175.
313
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
314
+ Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In Association for Computational Linguistics.
315
+ Philippe Galinier and Jin-Kao Hao. 1999. Hybrid evolutionary algorithms for graph coloring. Journal of Combinatorial Optimization.
316
+ Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In 2018 IEEE Security and Privacy Workshops (SPW).
317
+ Siddhant Garg and Goutham Ramakrishnan. 2020. BAE: BERT-based adversarial examples for text classification. In Conference on Empirical Methods in Natural Language Processing.
318
+ Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In International Conference on Learning Representations.
319
+ Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation.
320
+ Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. In AAAI Conference on Artificial Intelligence.
321
+
322
+ James Kennedy and Russell Eberhart. 1995. Particle swarm optimization. In Proceedings of ICNN'95-international conference on neural networks. IEEE.
323
+ Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Conference on Empirical Methods in Natural Language Processing.
324
+ Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2019. Textbugger: Generating adversarial text against real-world applications. In Network and Distributed System Security Symposium.
325
+ Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2018. Deep text classification can be fooled. In International Joint Conference on Artificial Intelligence.
326
+ Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Association for Computational Linguistics.
327
+ Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations.
328
+ Rishabh Maheshwary, Saket Maheshwary, and Vikram Pudi. 2021. Generating natural language attacks in a hard label black box setting. In AAAI Conference on Artificial Intelligence.
329
+ Zhao Meng and Roger Wattenhofer. 2020. A geometry-inspired attack for generating natural language adversarial examples. In International Conference on Computational Linguistics.
330
+ Nikola Mrkšić, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gašić, Lina M. Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
331
+ Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Association for Computational Linguistics.
332
+ Nicolas Papernot, Patrick McDaniel, Ananthram Swami, and Richard Harang. 2016. Crafting adversarial input sequences for recurrent neural networks. In MILCOM IEEE Military Communications Conference.
333
+ Nicholas J Radcliffe. 1993. Genetic set recombination. In Foundations of Genetic Algorithms.
334
+ Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In Association for Computational Linguistics.
335
+
336
+ Sachin Saxena. 2020. Textdeceptor: Hard label black box attack on text classifiers. arXiv preprint arXiv:2008.06860.
337
+ Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In International Conference on Learning Representations.
338
+ Xiaosen Wang, Xuanran He, Jingdong Wang, and Kun He. 2021a. Admix: Enhancing the transferability of adversarial attacks. In International Conference on Computer Vision, pages 16138-16147.
339
+ Xiaosen Wang, Hao Jin, Yichen Yang, and Kun He. 2021b. Natural language adversarial defense through synonym encoding. In Conference on Uncertainty in Artificial Intelligence.
340
+ Xiaosen Wang, Yifeng Xiong, and Kun He. 2022. Randomized substitution and vote for textual adversarial example detection. In Conference on Uncertainty in Artificial Intelligence.
341
+ Xiaosen Wang, Yichen Yang, Yihe Deng, and Kun He. 2021c. Adversarial training with fast gradient projection method against synonym substitution based text attacks. In AAAI Conference on Artificial Intelligence.
342
+ Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
343
+ Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, and Michael I Jordan. 2020. Greedy attack and gumbel attack: Generating adversarial examples for discrete data. Journal of Machine Learning Research.
344
+ Yichen Yang, Xiaosen Wang, and Kun He. 2022. Robust textual embedding against word-level adversarial attacks. In Conference on Uncertainty in Artificial Intelligence.
345
+ Muchao Ye, Chenglin Miao, Ting Wang, and Fenglong Ma. 2022. Texthoaxer: Budgeted hard-label adversarial attacks on text. In AAAI Conference on Artificial Intelligence.
346
+ Yuan Zang, Bairu Hou, Fanchao Qi, Zhiyuan Liu, Xiaojun Meng, and Maosong Sun. 2020a. Learning to attack: Towards textual adversarial attacking in real-world situations. arXiv preprint arXiv:2009.09192.
347
+ Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020b. Word-level textual adversarial attacking as combinatorial optimization. In Association for Computational Linguistics.
348
+
349
+ Huangzhao Zhang, Hao Zhou, Ning Miao, and Lei Li. 2019. Generating fluent adversarial examples for natural languages. In Association for Computational Linguistics.
350
+ Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems.
351
+
352
+ # A Parameter Study
353
+
354
+ To gain more insight into the effectiveness of TextHacker, we conduct a series of parameter studies to explore the impact of the neighborhood size $\delta$ , the population size $S$ , and the maximum number of local search steps $N$ . We conduct the parameter studies on BERT using the IMDB dataset to determine the best hyper-parameters and use the same hyper-parameters on all other datasets.
355
+
356
+ On the neighborhood size. In Figure 5a, we study the impact of the neighborhood size $\delta$ . A small $\delta$ restricts the search scope of the local search, making it difficult to find a locally optimal solution in the vast search space and resulting in a low attack success rate and high perturbation rate under limited query budgets. As $\delta$ increases, the attack success rate increases and the perturbation rate decreases until $\delta = 5$ . When we increase $\delta$ further, the overly broad search scope makes it difficult for the local search to converge to local optima, causing the perturbation rate to rise again. Thus, we set $\delta = 5$ in our experiments.
357
+
358
+ On the population size. As shown in Figure 5b, we study the impact of the population size $S$ . When $S = 1$ , the hybrid local search degrades to a non-population-based algorithm, which exhibits a high perturbation rate. As $S$ increases, the perturbation rate decreases until $S = 4$ . When we increase $S$ further, the local search operator spends many queries on each candidate solution in the population, which limits the number of iterations of the overall algorithm under a tight query budget and leads to a low attack success rate and high perturbation rate. Thus, we set $S = 4$ in our experiments.
359
+
360
+ On the maximum number of local search steps. We finally study the impact of the maximum number of local search steps $N$ , as shown in Figure 5c. When $N = 2$ , the recombination operator is performed after every two steps of the local search operator, making it difficult for the local search to thoroughly explore the neighborhood space and resulting in a low attack success rate and high perturbation rate. When $N$ is too large, there are too few recombination operations under tight budgets, so TextHacker cannot sufficiently explore the entire search space, leading to unstable performance. Therefore, we adopt an intermediate value of $N = 8$ to balance local search and recombination in our experiments.
361
+
362
+ <table><tr><td rowspan="2">Attack</td><td colspan="2">S = 4</td><td colspan="2">S = 30</td></tr><tr><td>Succ.</td><td>Pert.</td><td>Succ.</td><td>Pert.</td></tr><tr><td>GA</td><td>88.2</td><td>9.4</td><td>35.5</td><td>3.4</td></tr><tr><td>PSO</td><td>75.6</td><td>6.4</td><td>47.3</td><td>2.8</td></tr><tr><td>HLBB</td><td>65.3</td><td>4.5</td><td>77.0</td><td>4.8</td></tr><tr><td>TextHacker</td><td>81.5</td><td>3.4</td><td>80.6</td><td>4.7</td></tr></table>
363
+
364
+ Table 8: Attack success rate (Succ., %) ↑, perturbation rate (Pert., %) ↓ of TextHacker and the baselines on BERT using the IMDB dataset under the query budget of 2,000 with population sizes $S = 4$ and $S = 30$ .
365
+
366
+ # B Why Do Population-based Baselines Perform Poorly?
367
+
368
+ To further analyze why the baselines perform poorly under tight budgets, we show the performance of TextHacker and the population-based baselines on BERT using the IMDB dataset under the population sizes $S = 4$ (as used in TextHacker) and $S = 30$ (commonly used in GA, PSO and HLBB). Note that TextHoaxer is a non-population-based algorithm and is not considered in this experiment. As shown in Table 8, when $S = 4$ , the small population makes it difficult for GA and PSO to thoroughly explore the search space and find optimal adversarial examples, resulting in a high perturbation rate. When $S = 30$ , GA and PSO consume too many queries in each iteration, so the tight budget prevents them from fully exploring the search space to find adversarial examples, resulting in a low attack success rate. In contrast, the adversary initialization by random walks ensures a high attack success rate for TextHacker and HLBB even under tight budgets, and the word importance learned from the attack history helps TextHacker explore more efficiently and achieve a lower perturbation rate.
369
+
370
+ # C Human Evaluation
371
+
372
+ Human beings are very sensitive and subjective to texts. Even minor synonym substitutions may change how a text feels to people, resulting in different evaluations. Therefore, human evaluation is also necessary to assess the quality of adversarial examples. We perform a human evaluation on 20 benign texts and the corresponding adversarial examples generated by TextHacker, HLBB and TextHoaxer on BERT using the MR dataset. Note that the texts in the MR dataset are short, averaging only 20 words per sentence, making it easier for humans to detect the adversarial examples. We invite 20 volunteers to label the adversarial
373
+
374
+ ![](images/45099580ab2c306172a7fe0dd16c0e529b10d94c8c7e92b87742877420458419.jpg)
+ (a) Parameter study for various $\delta$
+
+ ![](images/cebc26e5ea9576f3434d92f8f6696a7f72c4fec7e3cdeb62ecdc55f241ea3cf6.jpg)
+ (b) Parameter study for various $S$
+
+ ![](images/ecf0a897a827244e237368eaeff79b88bbe14faf56fe346b9dd5be255b8f8df2.jpg)
+ (c) Parameter study for various $N$
+
+ Figure 5: The attack success rate $(\%)$ ↑ and perturbation rate $(\%)$ ↓ of TextHacker on BERT using the IMDB dataset, when varying the neighborhood size $\delta$ , population size $S$ or maximum number of local search steps $N$ .
383
+
384
+ examples, i.e., positive or negative, and to score the similarity between each benign sample and its adversarial example from 1 (very similar) to 5 (very different). The survey results show that $84.5\%$ of the adversarial examples from TextHacker (vs. $79.0\%$ for HLBB and $81.5\%$ for TextHoaxer) are labeled the same as the original samples, and the average similarity score is 1.9 (vs. 2.4 for HLBB and 2.1 for TextHoaxer). This demonstrates that the adversarial examples generated by TextHacker are of higher quality and harder for humans to detect than those of HLBB and TextHoaxer.
385
+
386
+ # D More Visualizations of Weight Table
387
+
388
+ Here we present more case studies extending Section 4.6 in Figures 6, 7 and 8, along with the adversarial examples generated by various hard-label attacks in Tables 9, 10 and 11. These visualizations further verify the consistency between the weight table and the word importance table, confirming the effectiveness of the learned weight table in TextHacker.
389
+
390
+ Original Text. Label: Positive. Both lead performances are oscar size quaid is utterly fearless as the tortured husband living a painful lie, and moore wonderfully underplays the long suffering heroine with an unflappable 50s dignity somewhere between jane wyman and june cleaver.
+
+ Adversarial Text. Label: Negative. Both lead performances are oscar size quaid is utterly fearless as the tortured husband living a painful lie, and moore marvellously underplays the long suffers heroine with an unflappable 50s decency somewhere between jane wyman and june cleaver.
397
+
398
+ ![](images/87daf2a4415ab271a2b33067dbd41b45f5202f412bf6abdb85043555f2b5a56e.jpg)
399
+ Weight Table
400
+
401
+ ![](images/9d0d43b22ed32e2b9c21fcd3f013022cd2c345d198b30d2a0ea7685ee7a6b074.jpg)
402
+ Word Importance Table
403
+ Figure 6: Visualization of the weight table in TextHacker and the word importance table from the victim model, representing the word importance of nouns, verbs, adjectives, adverbs, and their candidate words in the original text as shown in Table 9. The original words are highlighted in Cyan, with each row representing the candidate words. The substituted words are highlighted in Red with marker $\star$ . A darker color indicates a more important word.
404
+
405
+ <table><tr><td>Attack</td><td>Original Text &amp; Adversarial Example</td><td>Prediction</td></tr><tr><td>Original Text</td><td>Both lead performances are Oscar size quaid is utterly fearless as the tortured husband living a painful lie, and moore wonderfully underplays the long suffering heroine with an unflappable 50s dignity somewhere between jane wyman and june cleaver.</td><td>Positive</td></tr><tr><td>HLBB</td><td>Both lead (leaded) performances are Oscar size quaid is utterly fearless (brave) as the tortured (tortures) husband (hobby) living a painful (agonizing) lie, and moore wonderfully underplays the long suffering (suffer) heroine (smack) with an unflappable 50s dignity (decency) somewhere between jane wyman and june cleaver.</td><td>Negative</td></tr><tr><td>TextHoaxer</td><td>Both lead performances are Oscar size quaid is utterly fearless as the tortured (tortures) husband (hobby) living a painful (agonizing) lie, and moore wonderfully underplays the long suffering (suffers) heroine (smack) with an unflappable (easygoing) 50s dignity somewhere (nowhere) between jane wyman and june cleaver.</td><td>Negative</td></tr><tr><td>TextHacker</td><td>Both lead performances are Oscar size quaid is utterly fearless as the tortured husband living a painful lie, and moore wonderfully (marvellously) underplays the long suffering (suffers) heroine with an unflappable 50s dignity (decency) somewhere between jane wyman and june cleaver.</td><td>Negative</td></tr></table>
406
+
407
+ Table 9: The original text from the MR dataset and the adversarial examples generated by various hard-label attacks (HLBB, TextHoaxer and TextHacker) on BERT. We highlight the words replaced by the attacks in Red. The corresponding original words are highlighted in Cyan.
408
+
409
+ <table><tr><td>Original Text.</td><td>Label: Business</td><td>Skulls on your symbian phone? don&#x27;t panic! petaling jaya : virus experts at british software security firm sophos plc have advised customers not to panic, following media reports of a trojan horse which infects cellphones.</td></tr><tr><td>Adversarial Text.</td><td>Label: Sports</td><td>Frantz on your symbian phone? don&#x27;t panic! petaling jaya : virus experts at british software insurance firm sophos plc have advised customers not to panic, following media reports of a troy horse which injury cellphones.</td></tr></table>
410
+
411
+ ![](images/4023784588f5e87e527c5b80f2998796da217248c5d2bfd71a6a7e21baa36956.jpg)
412
+ Weight Table
413
+
414
+ ![](images/db289122da1ac7d4322772af4c89f461f62abeae866acfae2986d3d397a2f88b.jpg)
415
+ Word Importance Table
416
+ Figure 7: Visualization of the weight table in TextHacker and the word importance table from the victim model, representing the word importance of nouns, verbs, adjectives, adverbs, and their candidate words in the original text as shown in Table 10. The original words are highlighted in Cyan, with each row representing the candidate words. The substituted words are highlighted in Red with marker $\star$ . A darker color indicates a more important word.
417
+
418
+ <table><tr><td>Attack</td><td>Original Text &amp; Adversarial Example</td><td>Prediction</td></tr><tr><td>Original Text</td><td>Skulls on your symbian phone? don’t panic! petaling jaya : virus experts at british software security firm sophos plc have advised customers not to panic, following media reports of a trojan horse which infects cellphones.</td><td>Business</td></tr><tr><td>HLBB</td><td>Skulls on your symbian phone? don’t panic! petaling jaya : virus (infection) experts at british software (sw) security firm sophos plc have advised customers not to panic, following media reports of a trojan (spartans) horse which infects (injury) cellphones (telephones).</td><td>Sports</td></tr><tr><td>TextHoaxer</td><td>Skulls on your symbian phone? don’t panic! petaling jaya (gaya) : virus experts at british software (sw) security (insurance) firm (resolute) sophos plc have advised customers not to panic, following media reports of a trojan (spartans) horse which infects cellphones.</td><td>Sports</td></tr><tr><td>TextHacker</td><td>Skulls (Frantz) on your symbian phone? don’t panic! petaling jaya : virus experts at british software security (insurance) firm sophos plc have advised customers not to panic, following media reports of a trojan (troy) horse which infects (injury) cellphones.</td><td>Sports</td></tr></table>
419
+
420
+ Table 10: The original text from the AG's News dataset and the adversarial examples generated by various hard-label attacks (HLBB, TextHoaxer and TextHacker) on BERT. We highlight the words replaced by the attacks in Red. The corresponding original words are highlighted in Cyan.
421
+
422
+ Original Text. Label: Entertainment & Music
423
+
424
+ What movie is the saying odoyle rules in ?? I think it might have been billy madison but I'm not sure. Yes you're right Billy Madison.
425
+
426
+ Adversarial Text. Label: Education & Reference
427
+
428
+ What filmmaking is the saying odoyle regulation in ?? I think it might have been billy madison but I'm not sure. Yes you're right Billy Madison.
429
+
430
+ ![](images/7769e9f408d8d468dda8e4f8182a62ef8331378daa5f2ad9f6279790e4c1df07.jpg)
431
+ Weight Table
432
+
433
+ ![](images/6e02d413fba552b996a0ead91495b1305da76c3fc1f10d6607508cd9105eaa54.jpg)
434
+ Word Importance Table
435
+ Figure 8: Visualization of the weight table in TextHacker and the word importance table from the victim model, representing the word importance of nouns, verbs, adjectives, adverbs, and their candidate words in the original text as shown in Table 11. The original words are highlighted in Cyan, with each row representing the candidate words. The substituted words are highlighted in Red with marker $\star$ . A darker color indicates a more important word.
436
+
437
+ <table><tr><td>Attack</td><td>Original Text &amp; Adversarial Example</td><td>Prediction</td></tr><tr><td>Original Text</td><td>What movie is the saying odoyle rules in ?? I think it might have been billy madison but I&#x27;m not sure. Yes you&#x27;re right Billy Madison.</td><td>Entertainment &amp; Music</td></tr><tr><td>HLBB</td><td>What movie (filmmaking) is the saying (proverb) odoyle rules in ?? I think it might have been billy madison but I&#x27;m not (no) sure (secure). Yes you&#x27;re right Billy Madison.</td><td>Education &amp; Reference</td></tr><tr><td>TextHoaxer</td><td>What movie (filmmaking) is the saying (proverb) odoyle rules in ?? I think it might (perhaps) have (ha) been (undergone) billy madison but I&#x27;m not sure. Yes you&#x27;re right Billy Madison.</td><td>Education &amp; Reference</td></tr><tr><td>TextHacker</td><td>What movie (filmmaking) is the saying odoyle rules (regulation) in ?? I think it might have been billy madison but I&#x27;m not sure. Yes you&#x27;re right Billy Madison.</td><td>Education &amp; Reference</td></tr></table>
438
+
439
+ Table 11: The original text from the Yahoo! Answers dataset and the adversarial examples generated by various hard-label attacks (HLBB, TextHoaxer and TextHacker) on BERT. We highlight the words replaced by the attacks in Red. The corresponding original words are highlighted in Cyan.
texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f07d0027d573d3ca60f1adda7f72af7273c886e46453aeceab08061eac3633af
3
+ size 1166985
texthackerlearningbasedhybridlocalsearchalgorithmfortexthardlabeladversarialattack/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a89e344c0f4ec5bea7a2fa8a1dde41bdf6a404271bb803ea3ab0a1cb62bed13e
3
+ size 591175
textonlytrainingforimagecaptioningusingnoiseinjectedclip/140abd8e-8322-4276-88e8-1cd1f845c2ae_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5428af4b693c307b338cec82d1c37c2dcc32f07224b08dfd132405d58cedc4c3
3
+ size 54953
textonlytrainingforimagecaptioningusingnoiseinjectedclip/140abd8e-8322-4276-88e8-1cd1f845c2ae_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:eb6c4f21af9a9365ec21c47f6b9fedc3b40c6bcc35f9f9f7e162e0e270c06b0e
3
+ size 70980
textonlytrainingforimagecaptioningusingnoiseinjectedclip/140abd8e-8322-4276-88e8-1cd1f845c2ae_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c2fc4771f11d274f3941530a6058fb741bd2dbba755c29f8e074001c391a3e7c
3
+ size 952576
textonlytrainingforimagecaptioningusingnoiseinjectedclip/full.md ADDED
@@ -0,0 +1,230 @@
 
 
 
 
1
+ # Text-Only Training for Image Captioning using Noise-Injected CLIP
2
+
3
+ David Nukrai
4
+
5
+ Ron Mokady
6
+
7
+ Amir Globerson
8
+
9
+ Blavatnik School of Computer Science, Tel Aviv University
10
+
11
+ # Abstract
12
+
13
+ We consider the task of image captioning using only the CLIP model and additional text data at training time, and no additional captioned images. Our approach relies on the fact that CLIP is trained to make visual and textual embeddings similar. Therefore, we only need to learn how to translate CLIP textual embeddings back into text, and we can learn how to do this by learning a decoder for the frozen CLIP text encoder using only text. We argue that this intuition is "almost correct" because of a gap between the embedding spaces, and propose to rectify this via noise injection during training. We demonstrate the effectiveness of our approach by showing SOTA zero-shot image captioning across four benchmarks, including style transfer. Code, data, and models are available at https://github.com/DavidHuji/CapDec.
14
+
15
+ # 1 Introduction
16
+
17
+ Vision and language are closely intertwined, as they are two ways of describing the world. This raises the potential for developing models that map images and text into a shared semantic space. Indeed, this approach has recently achieved great success with models like CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021). These models use parallel image-text data to train a joint representation, where the embeddings of image-text pairs are similar. Such models have been employed for various vision-language tasks.
18
+
19
+ Image captioning is a key task in vision-language perception. Yet, training image captioning models typically requires large datasets of captioned images, and these are challenging to collect. Furthermore, it is not clear how one could adapt a pretrained vision-language model to generate captions in new styles. In this work, we present an approach to captioning that only requires CLIP and text data, and generates styled captions using only unpaired textual examples from that style. This
20
+
21
+ alleviates the need for paired text-image data, and also allows for simple style transfer.
22
+
23
+ A first approach one could consider for this setting is to train a decoder model to reconstruct texts from their respective CLIP embeddings, and at inference use this decoder to decode image embeddings. However, we observed that this approach fails at inference, and we conjecture this is due to the known domain gap between the image and text modalities (Liang et al., 2022). We propose a simple approach to mitigate this, by injecting noise into the embedding during training. This has the effect of creating a ball in embedding space that maps to the same caption, and the corresponding image embedding is more likely to fall inside this ball, as illustrated in Fig. 1.a.
24
+
25
+ We evaluate our "Captioning via Decoding" (CapDec) method extensively, showing that it works well on several image captioning tasks, including standard, cross-domain, and style-guided captioning. Overall, our main contributions are as follows: 1) A simple and intuitive approach to learning a captioning model based on CLIP and additional text training data, but no images for training. 2) Evaluation of CapDec on image captioning tasks, including generating captions in various styles, shows it outperforms other methods which use the same supervision.
26
+
27
+ # 2 Related Work
28
+
29
+ Image captioning methods (Chen and Zitnick, 2014; Chen et al., 2017; Yang et al., 2019; Herdade et al., 2019; Luo et al., 2021; Tsimpoukelli et al., 2021) typically extract visual features using a pre-trained network. These are passed to a textual decoder that produces the final captions. To bridge the gap between vision and language, other works employ pre-training to create a shared latent space of vision and text (Tan and Bansal, 2019; Laina et al., 2019; Lu et al., 2019; Li et al., 2020; Zhou et al., 2020; Zhang et al., 2021; Wang et al., 2021;
30
+
31
+ ![](images/43ff6f8c722f6dd2018184476d7460f3f01583536a55854a7ed2eac4a61a8461.jpg)
32
+
33
+ ![](images/78abc15a9f3306eeb07ea4e04a64e171481b0a968d1cb5d99ec7b5d2d98d8139.jpg)
34
+ Figure 1: Overview of our CapDec captioning approach. (a) An illustration of the CLIP joint embedding space. Embedded text is relatively close to its corresponding visual embedding, but with a certain gap. (b) CapDec trains a model that decodes the CLIP embedding of text $T$ back to text $T$ , after noise-injection. The encoders remain frozen. (c) At inference, CapDec simply decodes the embedding of an image using the trained decoder.
35
+
36
+ ![](images/30185665cdfb52eab2734a50ca832f2ffd1334a24a228e3cf3858a84c88fcf1f.jpg)
37
+
38
+ Hu et al., 2022). However, all of these approaches require extensive training and large paired datasets that are hard to collect. Gan et al. (2017) and Zhao et al. (2020) have suggested style-guided captioning, but also employ training over paired data.
39
+
40
+ CLIP (Radford et al., 2021) marked a turning point in vision-language perception, and has been utilized for vision-related tasks by various distillation techniques (Gu et al., 2021; Song et al., 2022; Jin et al., 2021; Gal et al., 2021; Xu et al., 2021; Khandelwal et al., 2022). Recent captioning methods use CLIP to reduce training time (Mokady et al., 2021), to improve captions (Shen et al., 2021; Luo et al., 2022a,b; Cornia et al., 2021; Kuo and Kira, 2022), and in zero-shot settings (Su et al., 2022; Tewel et al., 2022). However, zero-shot techniques often result in inferior performance, as the produced captions are not compatible with the desired target style, which is usually dictated by a dataset. In this work, we suggest a new setting, where we adapt CLIP to image captioning using only textual data. As a result, we can easily adapt captions to any desired style given instances of text in that style. Concurrent work by Su et al. (2022) efficiently produces high-quality captions with the minimal supervision of text-only pre-training by employing a CLIP-induced score at inference. Our approach is arguably simpler and also outperforms Su et al. (2022) empirically. Note that Zhou et al. (2021) have also employed noise injection, but for the opposite problem of CLIP-based text-free text-to-image generation.
41
+
42
+ # 3 Method
43
+
44
+ Text-Only Training. Our goal is to learn a model that produces a caption for a given image $I$ . Unlike supervised approaches, we assume that during training we only have access to a set of texts $\mathcal{T}$ . These can be obtained by harvesting a text corpus. We next introduce notation for the CLIP model. Given an image $I$ let $\phi(I) \in \mathbb{R}^d$ be its embedding, and given a text $T$ let $\psi(T) \in \mathbb{R}^d$ be its embedding. For converting a vector $\pmb{v} \in \mathbb{R}^d$ into a caption, we use a textual decoder $C(\pmb{v})$ consisting of a lightweight mapping network and a pretrained auto-regressive language model, as suggested in Mokady et al. (2021).
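+
+ For concreteness, $\phi$ and $\psi$ can be instantiated with the public openai/CLIP package roughly as follows; ViT-B/32 is an illustrative backbone choice, not necessarily the paper's configuration.
+
+ ```python
+ # Sketch of the notation: phi(I) and psi(T) as CLIP image/text embeddings.
+ import clip
+ import torch
+ from PIL import Image
+
+ device = "cpu"
+ model, preprocess = clip.load("ViT-B/32", device=device)
+
+ def phi(image: Image.Image) -> torch.Tensor:
+     """CLIP visual embedding of an image."""
+     with torch.no_grad():
+         return model.encode_image(preprocess(image).unsqueeze(0).to(device)).float()[0]
+
+ def psi(text: str) -> torch.Tensor:
+     """CLIP textual embedding of a caption."""
+     with torch.no_grad():
+         return model.encode_text(clip.tokenize([text]).to(device)).float()[0]
+ ```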
45
+
46
+ We train the decoder as follows (except for the noise-injection which we introduce below). Each text $T \in \mathcal{T}$ is first mapped to CLIP space via $\psi(T)$ and then decoded back into a text via $C(\psi(T))$ . We would like this decoding to be similar to the original text $T$ . Namely, our training objective is a reconstruction of the input text from CLIP's textual embedding. At inference, given an image $I$ we simply apply the decoder to $\phi(I)$ , returning the caption $C(\phi(I))$ .
47
+
48
+ Noise-Injected CLIP Embeddings. We observed that the above training scheme results in inaccurate captions during inference. We conjecture this is because the embeddings of the text and image modalities are separated by a domain gap, as shown in Liang et al. (2022). As a result, while text reconstruction is successful during training,
49
+
50
+ (A) Image Captioning
51
+
52
+ <table><tr><td rowspan="2">Model</td><td colspan="5">MS-COCO</td><td colspan="5">Flickr30k</td></tr><tr><td>B@1</td><td>B@4</td><td>M</td><td>R-L</td><td>CIDEr</td><td>B@1</td><td>B@4</td><td>M</td><td>R-L</td><td>CIDEr</td></tr><tr><td colspan="11">Fully Supervised Approaches</td></tr><tr><td>BUTD</td><td>77.2</td><td>36.2</td><td>27.0</td><td>56.4</td><td>113.5</td><td>-</td><td>27.3</td><td>21.7</td><td>-</td><td>56.6</td></tr><tr><td>UniVLP</td><td>-</td><td>36.5</td><td>28.4</td><td>-</td><td>116.9</td><td>-</td><td>30.1</td><td>23.0</td><td>-</td><td>67.4</td></tr><tr><td>ClipCap</td><td>74.7</td><td>33.5</td><td>27.5</td><td>-</td><td>113.1</td><td>-</td><td>21.7</td><td>22.1</td><td>47.3</td><td>53.5</td></tr><tr><td>Oscar</td><td>-</td><td>36.5</td><td>30.3</td><td>-</td><td>123.7</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>LEMON</td><td>-</td><td>40.3</td><td>30.2</td><td>-</td><td>133.3</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan="11">Weakly or Unsupervised Approaches</td></tr><tr><td>ZeroCap</td><td>49.8</td><td>7.0</td><td>15.4</td><td>31.8</td><td>34.5</td><td>44.7</td><td>5.4</td><td>11.8</td><td>27.3</td><td>16.8</td></tr><tr><td>MAGIC</td><td>56.8</td><td>12.9</td><td>17.4</td><td>39.9</td><td>49.3</td><td>44.5</td><td>6.4</td><td>13.1</td><td>31.6</td><td>20.4</td></tr><tr><td>CapDec</td><td>69.2</td><td>26.4</td><td>25.1</td><td>51.8</td><td>91.8</td><td>55.5</td><td>17.7</td><td>20.0</td><td>43.9</td><td>39.1</td></tr></table>
53
+
54
+ (B) Cross-Domain Captioning
55
+
56
+ <table><tr><td rowspan="2"></td><td colspan="5">Flickr30k ⇒ MS-COCO</td><td colspan="5">MS-COCO ⇒ Flickr30k</td></tr><tr><td>B@1</td><td>B@4</td><td>M</td><td>R-L</td><td>CIDEr</td><td>B@1</td><td>B@4</td><td>M</td><td>R-L</td><td>CIDEr</td></tr><tr><td>MAGIC</td><td>41.4</td><td>5.2</td><td>12.5</td><td>30.7</td><td>18.3</td><td>46.4</td><td>6.2</td><td>12.2</td><td>31.3</td><td>17.5</td></tr><tr><td>CapDec</td><td>43.3</td><td>9.2</td><td>16.3</td><td>36.7</td><td>27.3</td><td>60.2</td><td>17.3</td><td>18.6</td><td>42.7</td><td>35.7</td></tr></table>
57
+
58
+ Table 1: Results for image captioning. (A) We use captions from the COCO and Flickr30k to train CapDec and evaluate on the datasets the captions were taken from. We report results for fully supervised methods that train on captioned images, and on methods that use no training text (ZeroCap), or just training text and no images (CapDec and MAGIC). (B) Similar setting to (A), but in cross-domain setup where training text is taken from one dataset, and evaluation is done on the second dataset.
59
+
60
+ inference fails when using image embeddings instead. If image-text pairs were available, we could attempt to learn a mapping between these domains. Nevertheless, as we aim for text-only training, we shall seek a different approach.
61
+
62
+ Specifically, we assume that the visual embedding corresponding to a text embedding lies somewhere within a ball of small radius $\epsilon$ around the text embedding (see Fig. 1). We would like all embeddings in this ball to decode to the same caption, which should also correspond to the visual content mapped into this ball. We implement this intuition by adding zero-mean Gaussian noise with standard deviation $\epsilon$ to the text embedding before decoding it.
63
+
64
+ The value of $\epsilon$ is calculated by estimating the spread of captions corresponding to the same image. Specifically, we set $\epsilon$ to the mean $\ell_{\infty}$ norm of embedding differences between five captions that correspond to the same image. We estimated this based on captions of only 15 MS-COCO images. Since this calculation requires very few captions and there is no need to recalculate it for every new dataset, we do not view it as additional supervision.
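+
+ A minimal sketch of this estimate follows, assuming `captions_per_image` holds five caption strings for each of the 15 sampled images; that the embeddings are L2-normalized first is our assumption, as the paper does not spell it out.
+
+ ```python
+ # Sketch: epsilon as the mean L-infinity norm of pairwise differences
+ # between CLIP embeddings of captions describing the same image.
+ import itertools
+ import clip
+ import torch
+
+ device = "cpu"
+ model, _ = clip.load("ViT-B/32", device=device)
+
+ def estimate_epsilon(captions_per_image):
+     norms = []
+     with torch.no_grad():
+         for captions in captions_per_image:  # five captions of one image
+             emb = model.encode_text(clip.tokenize(captions).to(device)).float()
+             emb = emb / emb.norm(dim=-1, keepdim=True)  # assumed normalization
+             for a, b in itertools.combinations(range(emb.shape[0]), 2):
+                 norms.append((emb[a] - emb[b]).abs().max().item())
+     return sum(norms) / len(norms)
+ ```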
65
+
66
+ Our overall training objective is thus to minimize:
67
+
68
+ $$
69
+ \sum_{T \in \mathcal{T}} \ell\big(C(\psi(T) + \boldsymbol{n}),\, T\big), \tag{1}
70
+ $$
71
+
72
+ where $\pmb{n} \in \mathbb{R}^d$ is zero-mean Gaussian noise with standard deviation $\epsilon$ and $\ell$ is the auto-regressive cross-entropy loss over all tokens in $T$ . We train just the parameters of the textual decoder $C$ , while the encoder $\psi(\cdot)$ is kept frozen. The noise is sampled independently at each application of the encoder.
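+
+ The sketch below illustrates one such training step in PyTorch. The decoder is simplified here to a trainable linear mapping that feeds a K-token prefix into a frozen GPT-2, whereas the paper trains the full decoder $C$ ; the prefix length, learning rate and normalization are illustrative choices, not the released configuration.
+
+ ```python
+ # Sketch: one noise-injected training step of Eq. (1).
+ import clip
+ import torch
+ import torch.nn as nn
+ from transformers import GPT2LMHeadModel, GPT2Tokenizer
+
+ device = "cpu"
+ clip_model, _ = clip.load("ViT-B/32", device=device)
+ gpt2 = GPT2LMHeadModel.from_pretrained("gpt2").to(device)
+ gpt2.requires_grad_(False)  # frozen here for brevity; gradients still reach the prefix
+ tok = GPT2Tokenizer.from_pretrained("gpt2")
+
+ K = 10                  # prefix length (illustrative)
+ eps = 0.016 ** 0.5      # noise STD, so that the variance is 0.016
+ mapper = nn.Linear(512, K * gpt2.config.n_embd).to(device)  # trainable mapping
+ opt = torch.optim.AdamW(mapper.parameters(), lr=2e-5)
+
+ def train_step(caption: str) -> float:
+     with torch.no_grad():  # the CLIP text encoder psi stays frozen
+         e = clip_model.encode_text(clip.tokenize([caption]).to(device)).float()
+         e = e / e.norm(dim=-1, keepdim=True)
+     e = e + eps * torch.randn_like(e)   # noise injection: psi(T) + n
+     prefix = mapper(e).view(1, K, -1)   # map to a K-token GPT-2 prefix
+     ids = tok(caption, return_tensors="pt").input_ids.to(device)
+     inputs = torch.cat([prefix, gpt2.transformer.wte(ids)], dim=1)
+     # reconstruct the caption tokens; -100 masks the loss on prefix positions
+     labels = torch.cat([torch.full((1, K), -100, device=device), ids], dim=1)
+     loss = gpt2(inputs_embeds=inputs, labels=labels).loss
+     opt.zero_grad()
+     loss.backward()
+     opt.step()
+     return loss.item()
+ ```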
73
+
74
+ # 4 Results
75
+
76
+ We next evaluate CapDec on several captioning tasks, demonstrating state-of-the-art results. See supplementary for additional details.
77
+
78
+ Image Captioning. We compare CapDec caption quality to several baselines with different supervision levels, as presented in Tab. 1(A). Here, all methods were trained and evaluated on the same dataset, using the commonly used MS-COCO (Lin et al., 2014; Chen et al., 2015) and Flickr30k (Young et al., 2014). We begin by evaluating fully supervised techniques: BUTD (Anderson et al., 2018), UniVLP (Zhou et al., 2020), ClipCap (Mokady et al., 2021), Oscar (Li et al., 2020), and LEMON (Hu et al., 2022). As expected, these achieve better scores than CapDec, as they exploit the additional supervision of image-text pairs. Nevertheless, compared to the unsupervised
79
+
80
+ <table><tr><td rowspan="2">Model</td><td colspan="4">Romantic</td><td colspan="4">Humorous</td></tr><tr><td>B@1</td><td>B@3</td><td>M</td><td>C</td><td>B@1</td><td>B@3</td><td>M</td><td>C</td></tr><tr><td>StyleNet</td><td>13.3</td><td>1.5</td><td>4.5</td><td>7.2</td><td>13.4</td><td>0.9</td><td>4.3</td><td>11.3</td></tr><tr><td>MemCap</td><td>21.2</td><td>4.8</td><td>8.4</td><td>22.4</td><td>19.9</td><td>4.3</td><td>7.4</td><td>19.4</td></tr><tr><td>CapDec + Image-Text Pre-training</td><td>27.9</td><td>8.9</td><td>12.6</td><td>52.2</td><td>29.4</td><td>8.8</td><td>13.2</td><td>55.1</td></tr><tr><td>CapDec + Text-Only Pre-training</td><td>23.0</td><td>4.6</td><td>9.1</td><td>27.4</td><td>22.7</td><td>4.3</td><td>9.7</td><td>29.0</td></tr><tr><td>CapDec</td><td>21.4</td><td>5.0</td><td>9.6</td><td>26.9</td><td>24.9</td><td>6.0</td><td>10.2</td><td>34.1</td></tr></table>
81
+
82
+ Table 2: Style-Guided captioning results on FlickrStyle10K (Gan et al., 2017).
83
+
84
+ approaches of MAGIC (Su et al., 2022) and ZeroCap (Tewel et al., 2022), CapDec achieves superior scores. Note that ZeroCap does not require any training data, while MAGIC requires text data, similar to our setting.
85
+
86
+ Cross-Domain Captioning. We test our generalization ability by training on one dataset while evaluating on another, as in Su et al. (2022). Again, as can be seen in Tab. 1(B), CapDec outperforms MAGIC (Su et al., 2022), which uses the same supervision as CapDec.
87
+
88
+ Style-Guided Captioning. Several works (Zhao et al., 2020; Gan et al., 2017) have studied the task of adapting a captioning model to a new style, such as "romantic" or "humorous". Since collecting paired examples for each style requires great effort, these works consider the setting where the new style is learned from text only. This is easy to do in our setting, since we can train the decoder on text in any given style. Fig. 2 shows captions generated by CapDec in several styles (same setting and data as in Zhao et al. (2020)). Tab. 2 reports quantitative results for this setting, showing that CapDec outperforms the other baselines. To further analyze our approach, we present our results without pre-training (i.e., training on styled data only), with text-only pre-training on COCO, and with text-image pre-training on COCO (similar to Zhao et al. (2020)). As can be seen, we outperform Zhao et al. (2020) even with considerably less supervision at pre-training. Moreover, both other variants improve results, demonstrating that CapDec can effectively use additional training data where available.
89
+
90
+ The Effect of Noise Level. A key element of CapDec is noise injection before decoding. To demonstrate the effect of noise, we report results as a function of the noise variance $\epsilon^2$ in Fig. 3. It can be seen that too little or too much noise is
91
+
92
+ ![](images/0c184ff5c5a43e8245e4d5ba8ff74216c45dcc47e7bfded2159adaac5043e4cd.jpg)
93
+ Humorous: two golden retrievers fight for supremacy at a beach contest
94
+
95
+ ![](images/4558726e9441f32a28766b9ef2a2c8e38d80a583d09b43a0b62033166f7b9da8.jpg)
96
+ Humorous: a person hikes down a snowy mountain to reach outer space
97
+
98
+ ![](images/b48cd514396fad10b5822e18a1100b79bc042c72ddae595d12fb8fda2151bd0d.jpg)
99
+ Humorous: a hobbit walks through a tunnel to find a gateway to the future
100
+
101
+ ![](images/0f373f832803b587b86ca103f5dc0d319498384bbdd1fb28ddee5a2cc83d2c4a.jpg)
102
+ Humorous: little girl giggles as she slides down a playground slide thinking she's a monkey
103
+ Romantic: a climber hikes down a snowy mountain to conquer the high
104
+
105
+ ![](images/0063e92c1ed608a99557e15ca6e0f544f9c56688fc126763af51e404fa3f206b.jpg)
106
+ Romantic: two tan dogs play tag in the water celebrating their friendship
107
+ Romantic: a person walks through a water tunnel to reach his destiny
108
+ Romantic: a little girl in a pink diaper shows off for her favorite toy at the park
109
+ Figure 2: Example for styled captions of CapDec on FlickrStyle10K (Gan et al., 2017).
110
+ Figure 3: The effect of the noise variance on MS-COCO performance.
111
+
112
+ suboptimal. We note that the noise variance we chose, $\epsilon^2 = 0.016$ , is based only on text, and not on the results in Fig. 3, which are shown for analysis purposes only.
113
+
114
+ # 5 Noise Injection Analysis
115
+
116
+ Noise injection is a well-known technique for improving generalization (Reed and Marks II, 1999; Bishop, 1995; An, 1996; Vincent et al., 2010), and can be viewed as a data augmentation mechanism
117
+
118
+ ![](images/87d4d072ea97a0cf92544a47fda3b5dd1c8d68793825b41ce72f0076f8f1b661.jpg)
119
+ Figure 4: Analysis of performance of different methods as a function of the noise level (see Sec. 5). We show the CIDEr metric (higher is better), as other metrics show similar trends. CapDec here is the same as in Fig. 3.
120
+
121
+ (Goodfellow et al., 2016). In our case, the use of noise was also meant to address the modality-gap observed in Liang et al. (2022). In order to examine the specific effect of noise, we perform additional evaluations on COCO and show the results in Fig.4.
122
+
123
+ Text-Reconstruction: We encode COCO captions using the CLIP text embedding and decode them using the learned CapDec model. This does not involve images at all, and is meant to test whether noise injection simply serves as regularization for text auto-encoding. Fig. 4 shows that adding noise does not help here, suggesting that the noise is not merely functioning as augmentation.
124
+
125
+ ClipCap: Recall that ClipCap is trained on joint text-image pairs (Mokady et al., 2021). Here we trained ClipCap with noise added to the image embeddings during training. It can be seen that the noise does not improve performance, again suggesting that CapDec's improvement is due to the specific role of noise in correcting the domain gap.
126
+
127
+ Modalities Offset: Given sufficient paired training data, one could presumably learn the modality gap and correct for it. Here we test a simple approximation of the gap that does not require image-text data to be paired, by calculating the shift between the mean of text embeddings and the mean of image embeddings in COCO. Then, given an image, we add the shift to its embedding to "correct" for this gap, and apply the CapDec-trained decoder to the resulting embedding. Had this mapping been perfect, CapDec would not have needed additional noise injection. The results in Fig. 4 show that the offset correction does outperform CapDec at $\epsilon^2 < 0.01$, but underperforms overall. This suggests that the gap was not perfectly estimated, and that noise injection still serves to mitigate it. We leave it for future research to consider a more complex or fully supervised model that learns the modality gap explicitly.
128
+
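+ As a concrete illustration, a sketch of this mean-offset correction; the function names and tensor shapes are our own assumptions:
+
+ ```python
+ import torch
+
+ def estimate_offset(text_embs: torch.Tensor, image_embs: torch.Tensor) -> torch.Tensor:
+     # text_embs: (N_t, d), image_embs: (N_i, d); the two sets need not be paired
+     return text_embs.mean(dim=0) - image_embs.mean(dim=0)
+
+ def correct_image_embedding(img_emb: torch.Tensor, offset: torch.Tensor) -> torch.Tensor:
+     # shift the image embedding toward the text region of CLIP space,
+     # then decode it with the text-trained CapDec decoder
+     return img_emb + offset
+ ```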
131
+ # 6 Conclusion
132
+
133
+ The image captioning task has been extensively studied, with considerable progress in recent years. However, the number of available training datasets containing image-text pairs is still rather limited. Consequently, image captioning models inherit the limitations of their training data, such as biases (Hendricks et al., 2018) or confinement to a neutral style (Gan et al., 2017). In this work, we suggest a new paradigm, where a generic vision-language model (e.g., CLIP) is adapted to image captioning using a text-only dataset. Furthermore, we demonstrate a simple and intuitive technique to overcome the inherent domain gap of CLIP (Liang et al., 2022). For future work, we plan to study text-only training for other tasks, such as visual question answering and visual scene graph generation.
134
+
135
+ # 7 Ethics Statement
136
+
137
+ Image captioning models are notorious for their internal biases (Hendricks et al., 2018). These biases are usually inherited from the training data itself. We observe that since balancing a text-only dataset is much more feasible than collecting balanced text-image pairs, CapDec can be used to mitigate those biases. For instance, consider a dataset containing significantly more images of snowboarding men than women. Collecting more images requires substantial effort, while replacing "man" with "woman" (and their synonyms) in all captions is quite simple. Therefore, our text-only training might mitigate some of the inherited bias.
138
+
139
+ # 8 Limitations
140
+
141
+ We observe that although CapDec achieves superior results compared to the baselines that use only text at training, it is still outperformed by fully supervised baselines. Since CLIP captures rich semantics in its latent space, we believe that text-only training can be further improved in future work, up to almost the same quality as supervised techniques. In addition, note that CapDec relies on CLIP and a language model, both of which were pre-trained on large English corpora. Therefore, we consider extending CapDec's capabilities to other languages an important and significant challenge.
142
+
143
+ # Acknowledgments
144
+
145
+ This work was supported by the Blavatnik Interdisciplinary Research Center (ICRC). We thank Amir Hertz for sharing relevant code parts from his work on ClipCap (Mokady et al., 2021).
146
+
147
+ # References
148
+
149
+ Guozhong An. 1996. The effects of adding noise during backpropagation training on a generalization performance. Neural computation, 8(3):643-674.
150
+ Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077-6086.
151
+ Chris M Bishop. 1995. Training with noise is equivalent to Tikhonov regularization. Neural computation, 7(1):108-116.
152
+ Long Chen, Hanwang Zhang, Jun Xiao, Liqiang Nie, Jian Shao, Wei Liu, and Tat-Seng Chua. 2017. SCA-CNN: Spatial and channel-wise attention in convolutional networks for image captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5659-5667.
153
+ Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. 2015. Microsoft COCO captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325.
154
+ Xinlei Chen and C Lawrence Zitnick. 2014. Learning a recurrent visual representation for image caption generation. arXiv preprint arXiv:1411.5654.
155
+ Marcella Cornia, Lorenzo Baraldi, Giuseppe Fiameni, and Rita Cucchiara. 2021. Universal captioner: Long-tail vision-and-language model training through content-style separation.
156
+ Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation, pages 376-380.
157
+ Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, and Daniel Cohen-Or. 2021. Stylegan-nada: Clip-guided domain adaptation of image generators. arXiv preprint arXiv:2108.00946.
158
+ Chuang Gan, Zhe Gan, Xiaodong He, Jianfeng Gao, and Li Deng. 2017. Stylenet: Generating attractive visual captions with styles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3137-3146.
159
+ Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep learning. MIT press.
160
+ Xiuye Gu, Tsung-Yi Lin, Weicheng Kuo, and Yin Cui. 2021. Open-vocabulary object detection via vision and language knowledge distillation. arXiv preprint arXiv:2104.13921.
161
+
162
+ Lisa Anne Hendricks, Kaylee Burns, Kate Saenko, Trevor Darrell, and Anna Rohrbach. 2018. Women also snowboard: Overcoming bias in captioning models. In Proceedings of the European Conference on Computer Vision (ECCV), pages 771-787.
163
+ Simao Herdade, Armin Kappeler, Kofi Boakye, and Joao Soares. 2019. Image captioning: Transforming objects into words. arXiv preprint arXiv:1906.05963.
164
+ Xiaowei Hu, Zhe Gan, Jianfeng Wang, Zhengyuan Yang, Zicheng Liu, Yumao Lu, and Lijuan Wang. 2022. Scaling up vision-language pre-training for image captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17980-17989.
165
+ Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904-4916. PMLR.
166
+ Ying Jin, Yinpeng Chen, Lijuan Wang, Jianfeng Wang, Pei Yu, Zicheng Liu, and Jenq-Neng Hwang. 2021. Is object detection necessary for human-object interaction recognition? arXiv preprint arXiv:2107.13083.
167
+ Andrej Karpathy and Li Fei-Fei. 2015. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3128-3137.
168
+ Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi, and Aniruddha Kembhavi. 2022. Simple but effective: Clip embeddings for embodied ai. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14829-14838.
169
+ Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.
170
+ Chia-Wen Kuo and Zsolt Kira. 2022. Beyond a pretrained object detector: Cross-modal textual and visual context for image captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17969-17979.
171
+ Iro Laina, Christian Rupprecht, and Nassir Navab. 2019. Towards unsupervised image captioning with shared multimodal embeddings. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7414-7424.
172
+ Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. 2020. Oscar: Object-semantics aligned pre-training for vision-language tasks. In European Conference on Computer Vision, pages 121-137. Springer.
173
+
174
+ Weixin Liang, Yuhui Zhang, Yongchan Kwon, Serena Yeung, and James Zou. 2022. Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning. arXiv preprint arXiv:2203.02053.
175
+ Chin-Yew Lin and Franz Josef Och. 2004. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 605-612.
176
+ Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In European conference on computer vision, pages 740-755. Springer.
177
+ Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
178
+ Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. arXiv preprint arXiv:1908.02265.
179
+ Yunpeng Luo, Jiayi Ji, Xiaoshuai Sun, Liujuan Cao, Yongjian Wu, Feiyue Huang, Chia-Wen Lin, and Rongrong Ji. 2021. Dual-level collaborative transformer for image captioning. arXiv preprint arXiv:2101.06462.
180
+ Ziyang Luo, Yadong Xi, Rongsheng Zhang, and Jing Ma. 2022a. A frustratingly simple approach for end-to-end image captioning.
181
+ Ziyang Luo, Yadong Xi, Rongsheng Zhang, and Jing Ma. 2022b. I-tuning: Tuning language models with image for caption generation. arXiv preprint arXiv:2202.06574.
182
+ Ron Mokady, Amir Hertz, and Amit H Bermano. 2021. Clipcap: Clip prefix for image captioning. arXiv preprint arXiv:2111.09734.
183
+ Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318.
184
+ Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer imageto-sentence models. In Proceedings of the IEEE international conference on computer vision, pages 2641-2649.
185
+ Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020.
186
+
187
+ Russell Reed and Robert J. Marks II. 1999. Neural smithing: supervised learning in feedforward artificial neural networks. MIT Press.
188
+ Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, and Kurt Keutzer. 2021. How much can clip benefit vision-and-language tasks? arXiv preprint arXiv:2107.06383.
189
+ Haoyu Song, Li Dong, Wei-Nan Zhang, Ting Liu, and Furu Wei. 2022. Clip models are few-shot learners: Empirical studies on vqa and visual entailment. arXiv preprint arXiv:2203.07190.
190
+ Yixuan Su, Tian Lan, Yahui Liu, Fangyu Liu, Dani Yogatama, Yan Wang, Lingpeng Kong, and Nigel Collier. 2022. Language models can see: Plugging visual controls in text generation. arXiv preprint arXiv:2205.02655.
191
+ Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490.
192
+ Yoad Tewel, Yoav Shalev, Idan Schwartz, and Lior Wolf. 2022. Zerocap: Zero-shot image-to-text generation for visual-semantic arithmetic. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17918-17928.
193
+ Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, SM Eslami, Oriol Vinyals, and Felix Hill. 2021. Multimodal few-shot learning with frozen language models. Advances in Neural Information Processing Systems, 34.
194
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.
195
+ Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566-4575.
196
+ Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, Pierre-Antoine Manzagol, and Léon Bottou. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of machine learning research, 11(12).
197
+ Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. 2021. Simvlm: Simple visual language model pretraining with weak supervision. arXiv preprint arXiv:2108.10904.
198
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le
199
+
200
+ Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
201
+
202
+ Mengde Xu, Zheng Zhang, Fangyun Wei, Yutong Lin, Yue Cao, Han Hu, and Xiang Bai. 2021. A simple baseline for zero-shot semantic segmentation with pre-trained vision-language model. arXiv preprint arXiv:2112.14757.
203
+
204
+ Xu Yang, Hanwang Zhang, and Jianfei Cai. 2019. Learning to collocate neural modules for image captioning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4250-4260.
205
+
206
+ Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67-78.
207
+
208
+ Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. 2021. Vinvl: Revisiting visual representations in vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5579-5588.
209
+
210
+ Wentian Zhao, Xinxiao Wu, and Xiaoxun Zhang. 2020. Memcap: Memorizing style knowledge for image captioning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 12984-12992.
211
+
212
+ Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason Corso, and Jianfeng Gao. 2020. Unified vision-language pre-training for image captioning and vqa. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 13041-13049.
213
+
214
+ Yufan Zhou, Ruiyi Zhang, Changyou Chen, Chunyuan Li, Chris Tensmeyer, Tong Yu, Jiuxiang Gu, Jinhui Xu, and Tong Sun. 2021. Lafite: Towards language-free training for text-to-image generation. arXiv preprint arXiv:2111.13792.
215
+
216
+ # A Appendix
217
+
218
+ # A.1 Implementation Details
219
+
220
+ We use the RN-50x4 backbone for the CLIP image encoder, and GPT-2 (large) as our language model (implementation of Wolf et al., 2020). Following ClipCap (Mokady et al., 2021), for the decoder architecture we use a transformer-based (Vaswani et al., 2017) mapping network, where we set the CLIP prefix length to $K = 40$ with an additional $K = 40$ constant tokens, and use 8 multi-head self-attention layers with 8 heads each.
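+ A rough PyTorch sketch of such a ClipCap-style mapping network; the dimensions follow the text above (RN-50x4 embeddings are 640-d, GPT-2 large is 1280-d), while the remaining details are illustrative guesses rather than the exact architecture:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class MappingNetwork(nn.Module):
+     def __init__(self, clip_dim=640, gpt_dim=1280, k=40, layers=8, heads=8):
+         super().__init__()
+         self.k = k
+         self.proj = nn.Linear(clip_dim, k * gpt_dim)        # CLIP vector -> K prefix tokens
+         self.const = nn.Parameter(torch.randn(k, gpt_dim))  # K learned constant tokens
+         layer = nn.TransformerEncoderLayer(gpt_dim, heads, batch_first=True)
+         self.encoder = nn.TransformerEncoder(layer, layers)
+
+     def forward(self, clip_emb):                            # clip_emb: (B, clip_dim)
+         b = clip_emb.size(0)
+         prefix = self.proj(clip_emb).view(b, self.k, -1)    # (B, K, gpt_dim)
+         x = torch.cat([prefix, self.const.expand(b, -1, -1)], dim=1)
+         out = self.encoder(x)                               # joint self-attention
+         return out[:, self.k:]                              # transformed constants -> GPT-2 prefix
+ ```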
221
+
222
+ For optimization, we employed AdamW, i.e., Adam (Kingma and Ba, 2015) with decoupled weight decay (Loshchilov and Hutter, 2017), with a learning rate of $2e^{-5}$ and 5000 warm-up steps.
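+ With HuggingFace utilities, this setup could look as follows (a sketch; the linear schedule and the total step count are assumptions not stated in the paper):
+
+ ```python
+ import torch
+ from transformers import get_linear_schedule_with_warmup
+
+ model = torch.nn.Linear(8, 8)  # stands in for the mapping network and decoder
+ total_steps = 100_000          # assumed for illustration
+
+ optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
+ scheduler = get_linear_schedule_with_warmup(
+     optimizer, num_warmup_steps=5000, num_training_steps=total_steps)
+ ```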
223
+
224
+ # A.2 Datasets and Evaluation Metrics
225
+
226
+ When evaluating over MS-COCO (Chen et al., 2015) and Flickr30k (Plummer et al., 2015), we followed the Karpathy split (Karpathy and Fei-Fei, 2015), similar to (Su et al., 2022) and (Mokady et al., 2021). For the FlickrStyle10K (Gan et al., 2017) dataset, we followed Zhao et al. (2020) and randomly split the dataset into training and test sets of $6/7$ and $1/7$, respectively. For quantitative evaluation, we employ the commonly used BLEU (Papineni et al., 2002) $(\mathbf{B}@\mathbf{1},\mathbf{B}@\mathbf{4})$, METEOR (Denkowski and Lavie, 2014) (M), ROUGE-L (Lin and Och, 2004) (R-L), and CIDEr (Vedantam et al., 2015) (C) metrics.
227
+
228
+ # A.3 Quantitative Comparison
229
+
230
+ All quantitative scores were reproduced or obtained from the works of (Su et al., 2022) and (Zhao et al., 2020) after carefully validating that we use the same splits. Our metrics implementation is adapted from the official implementation of (Li et al., 2020).
textonlytrainingforimagecaptioningusingnoiseinjectedclip/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b017442b458ea715a44a91c793a2c0f4876ff6cfc0c6005482bde9822d7ceb18
3
+ size 239839
textonlytrainingforimagecaptioningusingnoiseinjectedclip/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d9801ee85bb69425f29841b723ad8f3b0fb1840d5e91f6b1c01a4cc34a6f5994
3
+ size 269888
textualenhancedcontrastivelearningforsolvingmathwordproblems/8686ea23-2acd-4e21-bd8d-e449d28ae474_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ac10b39b40fc237dbab8fb4a9b058cbad7bf8e6a93b7f22abca92abf509f5f1a
3
+ size 77453
textualenhancedcontrastivelearningforsolvingmathwordproblems/8686ea23-2acd-4e21-bd8d-e449d28ae474_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e5b0ced8b0712bf897e153ded97f1f3cc342615a1a45b00c656a80d294550294
3
+ size 93947
textualenhancedcontrastivelearningforsolvingmathwordproblems/8686ea23-2acd-4e21-bd8d-e449d28ae474_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7f5643decb97e16df047f363c42591bdc36cb1e2c949ec7a965facc0a12ff1df
3
+ size 782269
textualenhancedcontrastivelearningforsolvingmathwordproblems/full.md ADDED
@@ -0,0 +1,318 @@
 
 
 
 
1
+ # Textual Enhanced Contrastive Learning for Solving Math Word Problems
2
+
3
+ Yibin Shen $^{1}$ , Qianying Liu $^{2}$ , Zhuoyuan Mao $^{2}$ , Fei Cheng $^{2}$ and Sadao Kurohashi $^{2}$
4
+
5
+ 1 Meituan
6
+
7
+ 2 Graduate School of Informatics, Kyoto University
8
+
9
+ shenyibin@meituan.com; {ying,zhuoyuanmao}@nlp.ist.i.kyoto-u.ac.jp; {feicheng, kuro}@i.kyoto-u.ac.jp
10
+
11
+ # Abstract
12
+
13
+ Solving math word problems is the task of analyzing the relations among quantities, and it requires an accurate understanding of contextual natural language information. Recent studies show that current models rely on shallow heuristics to predict solutions and can be easily misled by small textual perturbations. To address this problem, we propose a Textual Enhanced Contrastive Learning framework, which forces the models to distinguish semantically similar examples that hold different mathematical logic. We adopt a self-supervised strategy to enrich examples with subtle textual variance by textual reordering or problem re-construction. We then retrieve the hardest-to-differentiate samples from both equation and textual perspectives and guide the model to learn their representations. Experimental results show that our method achieves state-of-the-art on both widely used benchmark datasets and carefully designed challenge datasets in English and Chinese.
14
+
15
+ # 1 Introduction
16
+
17
+ Solving Math Word Problems (MWPs) is the task of automatically performing logical inference and generating a mathematical solution from a math problem described in natural language. Solving MWPs is a challenging task that cannot rely on shallow keyword matching but requires a comprehensive understanding of contextual information. For example, as shown in Figure 1, while the first problem shares high token-level overlap with the third problem, the underlying mathematical logic is different. On the other hand, the first and second problems have very low similarity at the textual level, while the equation solution is the same. The challenge of the task is that the underlying
18
+
19
+ Problem: $P = (T,E)$
20
+
21
+ T: Dave bought $n_0$ boxes of chocolate candy and gave $n_1$ to his little brother. If each box has $n_2$ pieces inside it, how many pieces did Dave still have?
22
+
23
+ $$
24
+ E: (n _ {0} - n _ {1}) * n _ {2}
25
+ $$
26
+
27
+ Different Textual, Similar logic: $P^{+} = (T^{+}, E^{+})$
28
+
29
+ $T^{+}$ : A new building needed $n_0$ windows. The builder had already installed $n_1$ of them. If it takes $n_2$ hours to install each window how long will it take him to install the rest?
30
+
31
+ $$
32
+ E ^ {+}: \left(n _ {0} - n _ {1}\right) * n _ {2}
33
+ $$
34
+
35
+ Similar Textual, Different Logic: $P^{-} = (T^{-}, E^{-})$
36
+
37
+ $T^{-}$ : For Halloween Faye got $n_0$ pieces of candy. She ate $n_1$ pieces the first night and then her sister gave her $n_2$ more pieces. How many pieces of candy does Faye have now?
38
+
39
+ $$
40
+ E ^ {-}: n _ {0} - n _ {1} + n _ {2}
41
+ $$
42
+
43
+ Figure 1: Example of positive data point $P^{+} = (T^{+}, E^{+})$ and negative data point $P^{-} = (T^{-}, E^{-})$ for an anchor $P = (T, E)$ .
44
+
45
+ mathematical logic would change even with minor modifications to the text. While neural network-based models have greatly boosted performance on benchmark datasets, Patel et al. (2021) argued that state-of-the-art (SOTA) models use shallow heuristics to solve the majority of word problems, and struggle on challenge sets that have only small textual variations between examples.
46
+
47
+ Motivated by recent progress in contrastive learning, a flexible framework that has been successfully applied to representation learning in various fields (Chopra et al., 2005; Fang and Xie, 2020; Gao et al., 2021), we propose Textual Enhanced Contrastive Learning, an end-to-end framework that uses both textual and mathematical logic information to build effective representations. For each anchor data point, we find a hard example triplet pair, which consists of a textually different but logically similar positive data point $P^{+}$, and a textually similar but logically different negative data point $P^{-}$.
48
+
49
+ Our method aims to learn an embedding space where the vector representations of $P$ and $P^{+}$ in Figure 1 are mapped close together, since they hold the same mathematical logic even though the textual expression is entirely different; on the other hand, because $P$ and $P^{-}$ have similar textual expressions but different mathematical logic, their vector representations should be pushed apart.
50
+
51
+ To build such triplet pairs, we use a retrieval-based method to search the training data. We consider the equation annotation as the representation of the mathematical logic in the example, and retrieve positive and negative bags of data points according to equation similarity. Then we further use textual similarity to choose the hard examples in the bags, where positive examples have low textual similarity with the anchor and vice versa. Given such hard samples, contrastive learning can empower the representations by leading the model to distinguish these potentially disorienting examples during training.
52
+
53
+ Such approaches, which retrieve triplet pairs from human-annotated training data via label annotations, are considered supervised contrastive learning. Another research line is self-supervised contrastive learning, which does not require labeled data and uses data augmentation methods to generate the positive or negative data points (Chen et al., 2020; He et al., 2020; Grill et al., 2020). In the task of solving MWPs, we can leverage self-supervision by generating new examples via synchronized changes to text and equations. The generated data is naturally hard sample data, because the textual expression is similar to the original example, while the equation could be either changed or the same. Specifically, we leverage Reversed Operation-based Data Augmentation (Liu et al., 2021) and a Question Reordering-based augmentation to form new data points. By enhancing the model to detect the small perturbations in the augmented examples, contrastive learning forces the model to learn more effective representations of contextual information.
54
+
55
+ While a previous study also used contrastive learning to improve representations for solving MWPs (Li et al., 2022), their method is limited to supervised contrastive learning, ignores textual information when constructing the contrastive learning pairs, and requires a two-step pre-training and re-training procedure. Our method pushes the model to learn better text representations and understand even the smallest textual variance from these textually enhanced hard samples, from both supervised and self-supervised perspectives.
56
+
59
+ We conduct experiments on two widely used datasets, the English dataset ASdiv-A (Miao et al., 2020) and the Chinese dataset Math23K (Wang et al., 2017). To further investigate how our method improves the model's ability to detect small textual perturbations, we collect a Chinese challenge set, Hard Example (HE)-MWP. We perform experiments on two challenge sets of MWPs, the English Adv-Asdiv-SP dataset (Kumar et al., 2021) and the Chinese HE-MWP dataset. Experimental results show that our method achieves consistent gains across languages and settings, demonstrating its effectiveness.
60
+
61
+ # 2 Related Work
62
+
63
+ # 2.1 Solving Math Word Problems
64
+
65
+ There are various research lines in solving math word problems. Early studies mainly rely on rule-based methods (Bobrow, 1964; Charniak, 1969). Statistical machine learning methods were developed to map math word problems to specific equation templates (Kushman et al., 2014; Roy and Roth, 2015; Koncel-Kedziorski et al., 2015; Roy and Roth, 2017). Another research line uses semantic parsing-based methods to transform the input text into structured representations that can be parsed to obtain the answer (Roy and Roth, 2018; Shi et al., 2015; Zou and Lu, 2019). Recent studies focus on using a sequence-to-sequence (seq2seq) framework that takes in the text description of an MWP and predicts the answer equation. To improve the framework, various studies have investigated designing task-specialized encoder and decoder architectures (Wang et al., 2018, 2019; Xie and Sun, 2019; Liu et al., 2019; Guan et al., 2019; Zhang et al., 2020b,a; Shen and Jin, 2020), using pre-trained models (Tan et al., 2021; Liang et al., 2021), and leveraging auxiliary tasks (Liu et al., 2021; Shen et al., 2021; Li et al., 2022; Shen et al., 2022). Various auxiliary tasks have been introduced to improve model performance. Shen et al. (2021) introduced a reranking loss that reranks the beam search predictions. Huang et al. (2021) introduced a memory-augmented subtask that gives guidance during the decoding stage. The closest study to ours is Li et al. (2022), which uses equations as a search schema to build positive-
66
+
67
+ negative pairs and then performs contrastive learning. However, their approach ignores textual information when building the contrastive learning triplet pairs and is limited to supervised contrastive learning.
68
+
69
+ MWP solvers have achieved relatively high performance on benchmark datasets. However, the extent to which these solvers truly understand language and numbers remains unclear. Various studies either use data augmentation to help the model improve robustness and performance on hard cases, or develop adversarial examples and challenge sets to evaluate the robustness of MWP solvers against textual variance. Liu et al. (2021) proposed a data augmentation method that reverses the mathematical logic in the problem to generate a new example. Patel et al. (2021) constructed a challenge set of math word problems in which the problem texts have only small variations. Kumar et al. (2021) investigated adversarial attacks on MWP solvers. The challenge sets and adversarial attacks show that current MWP solvers use shallow heuristics to solve a majority of word problems and fail to detect subtle textual variance.
70
+
71
+ # 2.2 Contrastive Learning
72
+
73
+ Contrastive learning was first adopted in computer vision to learn representations of images via self-supervision without human annotation (Chen et al., 2020; He et al., 2020; Grill et al., 2020). Self-supervised contrastive learning has been applied in NLP to learn sentence representations, with back translation (Fang and Xie, 2020) and dropout (Gao et al., 2021) used to construct positive-negative contrastive learning triplets. These perturbation-based techniques are not suitable for MWP solvers, because MWPs are sensitive to small textual variance and the perturbations might introduce noise.
74
+
75
+ Khosla et al. (2020) first introduced supervised contrastive learning in computer vision by modifying the loss to allow supervision from label annotations. In NLP, various studies have used natural language inference (NLI) datasets as supervised annotations for contrastive learning (Reimers and Gurevych, 2019; Gao et al., 2021). The agreement of equation annotations of MWPs can be considered a form of NLI, so that our supervised contrastive learning can be viewed as a variant of these methods.
76
+
77
+ # 3 Methodology
78
+
79
+ We use contrastive learning to obtain text features that differentiate well between small perturbations. For each anchor data point $P = (T,E)$, where $T$ stands for the text and $E$ stands for the equation, we construct a positive data point $P^{+} = (T^{+},E^{+})$ and a negative data point $P^{-} = (T^{-},E^{-})$, and then use a contrastive learning loss to map the representations of $P$ and $P^{+}$ closer together and push those of $P$ and $P^{-}$ apart. The pipeline of the triplet pair retrieval is shown in Figure 2. We first construct a candidate pool, which consists of supervised training data $\{P_i\}$ and augmented self-supervised data $\{P_{i}^{\text{aug}}\}$, as shown in the blue part of Figure 2. The self-supervised data is generated by two methods, Reversed Operation-based Data Augmentation (RODA) and Question Reordering (QR), as explained in Section 3.1. Then we perform a two-step retrieval to obtain the triplet pairs, as described in Section 3.2. We first use an equation-based retrieval strategy to extract a positive candidate set $\{\widetilde{P}^{+}\}$ and a negative candidate set $\{\widetilde{P}^{-}\}$, and then further introduce textual information by choosing one example from each candidate set via a text-based retrieval strategy. Finally, we train the MWP solving model that maps $T$ to $E$ by considering both the contrastive learning and solution equation generation objectives, as described in Section 3.3.
80
+
81
+ # 3.1 Enriching Candidate Pool via Self-Supervised Augmentation
82
+
83
+ The self-supervised examples are challenging for the model to distinguish; while the perturbation in the text is extremely subtle, the corresponding mathematical logic can still change. Compared to the supervised examples retrieved from the training data, these self-supervised samples place a higher demand on the model's ability to detect subtle changes and understand contextual information. We generate task-oriented augmented examples from a training set data point $P = (T,E)$ via two methods that obtain reliable new text-equation examples by modifying the text and equation simultaneously and consistently. We split the problem text at punctuation marks into declarative sentences followed by a question, $T = \{S_{1},S_{2},\dots,S_{k - 1},Q_{k}\}$. The question sentence is always the last sentence for Asdiv, and we check whether interrogative pronouns appear in the last sentence for Math23K.
84
+
85
+ ![](images/59c5f9b8ed5c46e4d7cc281bbcacbe56b4e52f34f3d73be5fad7ef28c9b07729.jpg)
86
+ Figure 2: Overview of the contrastive learning triplet pairs retrieval procedure.
87
+
88
+ # 3.1.1 Question Reordering
89
+
90
+ We move the question to the front of the MWP to form a reordered new MWP similar to Kumar et al. (2021). Given a problem text $T = \{S_1, S_2, \dots, S_{k-1}, Q_k\}$ , we move the question $Q_k$ to the front of the problem text to form a new problem text $T^{QR} = \{Q_k, S_1, \dots, S_{k-1}\}$ while the rest of the text remains the same. We simultaneously edit the equation $E^{QR}$ so that the variables match with the new text order. The new example $P^{QR} = (T^{QR}, E^{QR})$ could either be a positive example that holds the same equation as $P$ or a negative example that holds a different equation since the variable order might change during the reordering. The high textual similarity but rotated variable order pushes the model to learn representations that can differ from these small textual perturbations.
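+ A toy Python sketch of this reordering (hypothetical helpers; the same variable mapping must also be applied to the equation tokens):
+
+ ```python
+ import re
+
+ def question_reorder(sentences):
+     """sentences: declaratives S1..S_{k-1} followed by the question Q_k."""
+     reordered = [sentences[-1]] + sentences[:-1]  # move Q_k to the front
+     text = " ".join(reordered)
+     # renumber n0, n1, ... by order of first appearance in the reordered text
+     order = []
+     for m in re.finditer(r"n\d+", text):
+         if m.group(0) not in order:
+             order.append(m.group(0))
+     mapping = {old: f"n{i}" for i, old in enumerate(order)}
+     new_text = re.sub(r"n\d+", lambda m: mapping[m.group(0)], text)  # one pass, no chaining
+     return new_text, mapping
+ ```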
91
+
92
+ # 3.1.2 Reversed Operation based Data Augmentation
93
+
94
+ We perform RODA (Liu et al., 2021), which generates a new example by asking a question about one of the originally given variables. Given a problem text $T = \{S_{1}, S_{2}, \dots, S_{k-1}, Q_{k}\}$ where the question $Q$ asks about an unknown variable $n_{ans}$, RODA chooses a known variable $n$ in one of the declarative sentences $S_{i}$, and then generates a problem text which asks about this variable. To generate such an example, $S_{i}$ is transformed into a question $Q_{S_{i}}$ which asks about $n$, while $Q$ is transformed into a declarative sentence $S_{k}$ describing $n_{ans}$. We reorder the problem text by swapping the two sentences, so that a new problem text $T^{RODA} = \{S_{1}, \dots, S_{k}, \dots, S_{k-1}, Q_{S_i}\}$ is generated. Simultaneously, we edit the equation by resolving the equation expression $E^{RODA}$ of $n$ given $n_{ans}$.
95
+
96
+ While $P^{RODA} = (T^{RODA}, E^{RODA})$ has a very similar textual description to $P$, the underlying equation can be completely different, which benefits the model via contrastive learning. RODA requires text parsing and transformation rules to modify the text and equation. For Chinese, it can cover $93\%$ of the examples, and for English, it covers $60\%$ of the examples. The generated text has a coherence score of 0.83 out of 1 in the human evaluation reported by Liu et al. (2021).
97
+
98
+ # 3.2 Triplet Pair Retrieval
99
+
100
+ We construct the positive and negative triplet pairs from both textual and logical perspectives. For a given problem $P$, the positive sample $P^{+}$ is considered to be a problem with a similar equation expression but a relatively different text description; the negative sample $P^{-}$ is considered to be a problem with high textual similarity but a different equation expression. However, finding such optimal positive and negative samples requires a time-consuming brute-force enumeration of all possible example pairs. Considering the computational complexity, we break the retrieval down into a two-step pipeline. We adopt a heuristic searching algorithm to construct positive and negative samples $(P^{+}, P^{-})$ as follows:
101
+
102
+ 1. Construct a similarity matrix $M$ of all equation expressions $\{E_1, E_2, \ldots, E_n\}$ in the training set, where $M_{ij}$ is the similarity of equation expression $E_i, E_j$ .
103
+ 2. For a given anchor $P$, retrieve a positive candidate set $\{\widetilde{P}^{+}\}$ and a negative candidate set $\{\widetilde{P}^{-}\}$ from the training data via equation expression similarity.
104
+
107
+ 3. Extract the best positive example $P^{+}$ and the best negative example $P^{-}$ via textual similarity.
108
+
109
+ We investigate various strategies to retrieve $(P^{+}, P^{-})$ from both equation-based and text-based perspectives.
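+ Schematically, the two-step pipeline (here with the exact-match equation strategy of Sec. 3.2.1) can be sketched as follows; `sim_eq` and `sim_text` stand for the metrics defined below, and the example objects with `.eq`/`.text` fields are our own assumption:
+
+ ```python
+ def retrieve_triplet(anchor, pool, sim_eq, sim_text):
+     # step 1: equation-based candidate sets
+     pos_cand = [p for p in pool if p is not anchor and p.eq == anchor.eq]
+     others = [p for p in pool if p.eq != anchor.eq]
+     best = max(sim_eq(anchor.eq, p.eq) for p in others)
+     neg_cand = [p for p in others if sim_eq(anchor.eq, p.eq) == best]
+     # step 2: text-based selection of the hardest examples
+     p_pos = min(pos_cand, key=lambda p: sim_text(anchor.text, p.text), default=anchor)
+     p_neg = max(neg_cand, key=lambda p: sim_text(anchor.text, p.text))
+     return p_pos, p_neg
+ ```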
110
+
111
+ # 3.2.1 Equation-based Retrieval Strategy
112
+
113
+ To evaluate the equation similarity during the retrieval, we design an equation similarity metric $\text{Sim}_{eq}$ based on length-wise normalized tree edit distance (TED). TED is defined as the minimum-cost sequence of node operations that transform one tree into another and is a well-known distance measure for hierarchical data. We define the TED of two equation expressions $E_1, E_2$ as the TED of their abstract syntax tree. The similarity of two equation expressions $E_1, E_2$ is defined as:
114
+
115
+ $$
116
+ \mathrm{Sim}_{eq}(E_1, E_2) = 1 - \frac{\mathrm{TED}(E_1, E_2)}{|E_1| + |E_2|}
117
+ $$
118
+
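+ A minimal sketch of this metric, assuming the `zss` package (a Zhang-Shasha tree edit distance implementation) and prefix-notation equations; the parser and the token-count reading of $|E|$ are our assumptions:
+
+ ```python
+ from zss import Node, simple_distance  # pip install zss
+
+ ARITY = {"+": 2, "-": 2, "*": 2, "/": 2, "^": 2}
+
+ def prefix_to_tree(tokens):
+     """Build an abstract syntax tree from prefix notation, e.g. ['*', '-', 'n0', 'n1', 'n2']."""
+     it = iter(tokens)
+     def build():
+         tok = next(it)
+         node = Node(tok)
+         for _ in range(ARITY.get(tok, 0)):
+             node.addkid(build())
+         return node
+     return build()
+
+ def sim_eq(e1, e2):
+     ted = simple_distance(prefix_to_tree(e1), prefix_to_tree(e2))
+     return 1.0 - ted / (len(e1) + len(e2))
+
+ # (n0 - n1) * n2  vs  (n0 - n1) + n2
+ print(sim_eq(["*", "-", "n0", "n1", "n2"], ["+", "-", "n0", "n1", "n2"]))
+ ```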
119
+ Given this equation similarity metric, we design two retrieval strategies.
120
+
121
+ Exact Match The positive candidate set $\{\widetilde{P}^{+}\}$ consists of the examples that satisfy $\mathrm{Sim}_{eq}(E, E_i) = 1$, i.e., whose equation expression satisfies $E_i = E$. If only the anchor itself holds this equation expression, the positive candidate set $\{\widetilde{P}^{+}\}$ contains only the anchor $P$. The negative candidate set $\{\widetilde{P}^{-}\}$ consists of the examples that attain $\operatorname{argmax}_{E_i \neq E} \mathrm{Sim}_{eq}(E, E_i)$, i.e., that hold the equation closest to the anchor's.
122
+
123
+ Nearest Neighbour The positive candidate set consists of the examples that attain $\operatorname{argmax}_{E_i, T_i \neq T} \mathrm{Sim}_{eq}(E, E_i)$. If no other example holds the same equation expression as the anchor, the positive candidate set $\{\widetilde{P}^+\}$ thus takes the examples with the nearest-neighbour equation expression. The negative candidate set $\{\widetilde{P}^-\}$ consists of the examples that attain $\operatorname{argmax}_{E_i \neq E^+} \mathrm{Sim}_{eq}(E, E_i)$, i.e., that hold the equation closest to that of the positive example.
124
+
125
+ The positive and negative candidate sets are then further screened by the text-based strategy.
126
+
127
+ # 3.2.2 Text-based Retrieval Strategy
128
+
129
+ To lead the model to differentiate mathematical logic from similar textual expressions, we use text-based information to select the $(P^{+}, P^{-})$ pair. We select the example with the lowest textual similarity score from the positive candidate set $\{\widetilde{P}^{+}\}$, i.e., the example with a different textual expression but the same mathematical logic; and the example with the highest textual similarity score from the negative candidate set $\{\widetilde{P}^{-}\}$, i.e., the example with a similar textual expression but different mathematical logic. We design two similarity measurement metrics for this stage.
130
+
131
+ BERTSim Sentence-BERT (SBERT) is a strong sentence representation baseline model (Reimers and Gurevych, 2019). We calculate the cosine similarity of the SBERT representations of the two sentences to obtain the similarity score:
132
+
133
+ $$
134
+ \mathrm{Sim}_{text}^{BERTSim} = \frac{\mathrm{SBERT}(T_1) \cdot \mathrm{SBERT}(T_2)}{\lVert \mathrm{SBERT}(T_1) \rVert \, \lVert \mathrm{SBERT}(T_2) \rVert}
135
+ $$
136
+
137
+ The value range of $\mathrm{Sim}_{text}^{BERTSim}$ is $[-1, 1]$.
138
+
139
+ Bi-direction BLEU BLEU is a widely used evaluation metric for text generation that measures the similarity between the generated text and the reference. Since BLEU is not a symmetric similarity metric, we design a Bi-direction BLEU, defined as:
140
+
141
+ $$
142
+ \mathrm{Sim}_{text}^{BiBLEU} = \frac{\mathrm{BLEU}(T_1, T_2) + \mathrm{BLEU}(T_2, T_1)}{2}
143
+ $$
144
+
145
+ The value range of $\mathrm{Sim}_{text}^{BiBLEU}$ is $[0, 1]$.
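+ A sketch of both text metrics, assuming the `sentence-transformers` and `nltk` packages; the SBERT model name is an illustrative choice rather than the one used in the paper:
+
+ ```python
+ from sentence_transformers import SentenceTransformer, util
+ from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
+
+ sbert = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
+
+ def bert_sim(t1: str, t2: str) -> float:
+     e1, e2 = sbert.encode([t1, t2], convert_to_tensor=True)
+     return util.cos_sim(e1, e2).item()  # in [-1, 1]
+
+ def bi_bleu(t1: str, t2: str) -> float:
+     smooth = SmoothingFunction().method1  # smoothing is our addition for short texts
+     b12 = sentence_bleu([t1.split()], t2.split(), smoothing_function=smooth)
+     b21 = sentence_bleu([t2.split()], t1.split(), smoothing_function=smooth)
+     return (b12 + b21) / 2  # in [0, 1]
+ ```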
146
+
147
+ # 3.3 Training Procedure
148
+
149
+ We show the training procedure in Figure 3. The training loss consists of the MWP solving loss $\mathcal{L}_{\text {solver }}$ and the contrastive learning loss $\mathcal{L}_{cl}$ .
150
+
151
+ MWP Solving Model We follow Li et al. (2022) and use the strong baseline model BERT-GTS as the MWP solving model. The pre-trained language model BERT, which provides strong textual representations, is leveraged as the encoder. For the decoder, we use the Goal-driven Tree-structured MWP solver (GTS) (Xie and Sun, 2019). GTS directly generates the prefix notation of the solution equation by using a recursive neural network to encode subtrees based on the representations of their children nodes with a gating mechanism. With the subtree representations, the model can make good use of the structural information of the already-generated part to predict a new token.
152
+
153
+ <table><tr><td>Dataset</td><td>Math23K</td><td>Asdiv-A</td><td>HE-MWP</td><td>Adv-Asdiv</td></tr><tr><td>Language</td><td>zh</td><td>en</td><td>zh</td><td>en</td></tr><tr><td>Domain</td><td>general</td><td>general</td><td>challenge</td><td>challenge</td></tr><tr><td>#Train</td><td>21,162</td><td>1,218</td><td>-</td><td>-</td></tr><tr><td>#Dev/#Test</td><td>1,000 / 1,000</td><td>- / -</td><td>- / 400</td><td>- / 239</td></tr><tr><td>#Equation Templates</td><td>3,104</td><td>66</td><td>231</td><td>66</td></tr></table>
154
+
155
+ Table 1: Statistics and details of the datasets.
156
+
157
+ ![](images/42ad8dd3fa3a07d892b8dd4e54a4f773e766579dd415808cb6983a3f9e470fed.jpg)
158
+ Figure 3: Overview of the training procedure.
159
+
162
+ Contrastive learning. Contrastive learning is performed on triplet pairs $(P, P^{+}, P^{-})$ by pulling the representations of $T$ and $T^{+}$ together and pushing apart the representations of $T$ and $T^{-}$. We follow the contrastive learning framework of Chen et al. (2020), which uses an in-batch cross-entropy objective. Let $x_{i}$ denote the encoder representation of $P$; the training objective for $(x_{i}, x_{i}^{+}, x_{i}^{-})$ within a batch of $N$ triplet pairs is:
163
+
164
+ $$
165
+ \mathcal{L}_{cl} = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{e^{\cos(x_i, x_i^{+})/\tau}}{\sum_{j=1}^{N} \left( e^{\cos(x_i, x_j^{+})/\tau} + e^{\cos(x_i, x_j^{-})/\tau} \right)}
166
+ $$
167
+
168
+ where $\tau$ is the temperature hyperparameter.
169
+
170
+ Assume the prediction target equation of $P$ is $y$ , the final training objective is to minimize the sum of the MWP solution equation generation negative log likelihood loss $\mathcal{L}_{\text{solver}}$ and the contrastive learning loss $\mathcal{L}_{cl}$ :
171
+
172
+ $$
173
+ \mathcal{L} = \mathcal{L}_{\mathrm{solver}} + \alpha \cdot \mathcal{L}_{cl}
174
+ $$
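+ A minimal PyTorch sketch of this objective (our own rendering, not the authors' code); `x`, `x_pos`, `x_neg` are the $(N, d)$ encoder representations of the anchors, positives, and negatives in a batch:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def contrastive_loss(x, x_pos, x_neg, tau=0.1):
+     # cosine similarities via dot products of unit-normalized vectors
+     x, x_pos, x_neg = (F.normalize(t, dim=-1) for t in (x, x_pos, x_neg))
+     sim_pos = x @ x_pos.T / tau  # (N, N): cos(x_i, x_j+)/tau
+     sim_neg = x @ x_neg.T / tau  # (N, N): cos(x_i, x_j-)/tau
+     logits = torch.cat([sim_pos, sim_neg], dim=1)       # (N, 2N)
+     targets = torch.arange(x.size(0), device=x.device)  # the positive of i is column i
+     return F.cross_entropy(logits, targets)             # equals L_cl above
+
+ # combined objective: loss = loss_solver + alpha * contrastive_loss(x, x_pos, x_neg)
+ ```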
175
+
176
+ # 4 Experiments
177
+
178
+ # 4.1 Datasets
179
+
180
+ We perform experiments on four datasets, including two widely used datasets to verify the generalization ability of our method and two challenge test sets to further show how our method enhances the robustness of the model. We show detailed statistics of the datasets in Table 1.
181
+
182
+ Math23K is a Chinese dataset that contains 23,161 math word problems of elementary school level (Wang et al., 2017). We use the standard train-test split setting of this dataset for the experiment.
183
+
184
+ Asdiv-A is the arithmetic subset of ASDiv, which has 1,218 MWPs mostly up to grade level 4 (Miao et al., 2020). Experiments on this dataset are evaluated by 5-fold cross-validation.
185
+
186
+ HE-MWP Since no challenge dataset has been developed for Chinese MWP solving, and existing challenge datasets have limited types of equation templates, we use RODA and QR on the Math23K validation set to generate examples that are semantically similar to the original input but deceive the model into generating an incorrect prediction. We randomly sample a subset of 600 examples from the RODA output on the development set of Math23K and then manually delete the examples whose text is not coherent. Then we randomly select 400 examples from this cleaned subset.
187
+
188
+ Adv-Asdiv-SP is a challenge set of Asdiv-A, which is constructed of adversarial examples (Kumar et al., 2021). These adversarial examples are generated by sentence paraphrasing.
189
+
190
+ Results on the challenge datasets are obtained with the highest-performing models trained on the corresponding benchmark datasets.
191
+
192
+ There exist other MWP datasets, which are either relatively less challenging, such as ALG514, DRAW1K and MAWPS (Kushman et al., 2014; Upadhyay and Chang, 2017; Koncel-Kedziorski et al., 2016), or noisy, such as Dolphin18K (Huang et al., 2016), or use semantic parsing as annotation, such as MathQA (Amini et al., 2019). We use the two benchmarks Math23K and Asdiv-A because they are both clean and challenging, with mathematical equation annotations.
193
+
194
+ <table><tr><td>Model</td><td>Cand. Pool</td><td>Math23K</td><td>Asdiv-A</td><td>HE-MWP</td><td>Adv-Asdiv-SP</td></tr><tr><td>GTS (Xie and Sun, 2019)</td><td>-</td><td>75.6</td><td>68.5</td><td>-</td><td>21.2</td></tr><tr><td>G2T (Zhang et al., 2020b)</td><td>-</td><td>77.4</td><td>71.0</td><td>-</td><td>23.8</td></tr><tr><td>pattern CL (Li et al., 2022)</td><td>train</td><td>83.2</td><td>-</td><td>-</td><td>-</td></tr><tr><td>BERT-GTS</td><td>-</td><td>82.9</td><td>73.4</td><td>55.5</td><td>59.9</td></tr><tr><td>w/ supervised CL</td><td>train</td><td>84.1</td><td>74.2</td><td>57.2</td><td>63.7</td></tr><tr><td>w/ RODA CL</td><td>RODA+train</td><td>84.3</td><td>74.3</td><td>64.1</td><td>64.1</td></tr><tr><td>w/ QR CL</td><td>QR+train</td><td>84.2</td><td>74.4</td><td>62.5</td><td>66.2</td></tr><tr><td>w/ CL</td><td>RODA+QR+train</td><td>85.0</td><td>74.6</td><td>69.5</td><td>66.9</td></tr></table>
+
+ Table 2: Results on MWP datasets. All experiments only compute the MWP solving loss on the training set; the candidate pool only affects the choice of positive and negative examples in the CL loss.
195
+
198
+ # 4.2 Implementation Details
199
+
200
+ We use two language-specific BERT-base models as the problem encoder<sup>2</sup>. For both models, the maximum text length of the encoder is fixed at 256, and the maximum equation generation length of the decoder is fixed at 45. The decoder embedding size is 128. The batch size is 16, with a learning rate of 5e-5. We tune the temperature hyperparameter $\tau$ over $\{0.05, 0.1, 0.2\}$ and $\alpha$ over the range [0.1, 0.9]. Experiments on the Chinese datasets are conducted on V100 and RTX 3090 GPUs with approximately 6 hours of runtime. Experiments on the English datasets are conducted on a 1080Ti with approximately 1 hour of runtime.
201
+
202
+ # 5 Results and Analysis
203
+
204
+ # 5.1 Pre-examination on Retrieval Strategy
205
+
206
+ We conduct a breakdown analysis of different retrieval strategies on the most complex dataset, Math23K, investigating their performance for supervised contrastive learning. As shown in Table 3, for the equation-based retrieval strategy, the exact-match strategy is more effective than the nearest-neighbour strategy. This shows that the positive sample must have exactly the same mathematical logic as the anchor for contrastive learning to benefit performance. Both text-based retrieval strategies improve MWP solving performance compared to the random-choice baseline, demonstrating the effectiveness of introducing textual information for contrastive training. With text-based retrieval, the extracted positive and negative examples form hard examples that push the model to differentiate textually similar but logically different examples. Bi-BLEU also performs slightly better than BERTSim. In the following experiments, we use the best combination, EM with Bi-BLEU, as our retrieval strategies.
207
+
210
+ <table><tr><td></td><td colspan="2">Eq Strategies</td></tr><tr><td>Text Strategies</td><td>EM</td><td>NN</td></tr><tr><td>Random</td><td>83.2</td><td>82.3</td></tr><tr><td>BERTSim</td><td>83.6</td><td>83.1</td></tr><tr><td>Bi-BLEU</td><td>84.1</td><td>83.2</td></tr><tr><td>BERT-GTS</td><td>82.9</td><td></td></tr></table>
211
+
212
+ Table 3: Results of different retrieval strategies for supervised contrastive learning. EM denotes exact match. NN denotes nearest neighbour. Random denotes randomly choosing an example from the candidate set. BERTSim and Bi-BLEU denote choosing the examples by the corresponding similarity metric.
213
+
216
+ # 5.2 Main Results
217
+
218
+ We show the results of our method compared with other baselines in Table 2. In addition to our baseline BERT-GTS model, we also investigate three strong baseline models. GTS (Xie and Sun, 2019) uses an LSTM encoder with the same goal-driven tree-structured decoder as BERT-GTS, generating the abstract syntax tree of the equation. G2T (Zhang et al., 2020b) is a graph-to-tree model that uses a graph-based encoder to represent the relationships and order information among the quantities. Pattern CL (Li et al., 2022) proposes pattern-based contrastive learning, which considers equation similarity with supervised contrastive learning.
219
+
220
+ ![](images/407a28d26e81fe49b7e396e602a9ef8ff8799baabf79514345d1d2ad5a977790.jpg)
221
+ Figure 4: T-SNE Visualization results of BERT-GTS w/o (left) and w/ CL (right).
222
+
223
+ ![](images/326e86c33b9bd8f5e0abae404cb1186cdfe823cf9625b5b745209a9a572289de.jpg)
224
+ Figure 5: T-SNE visualization for the case study on BERT-GTS w/o (left) and w/ CL (right).
225
+
226
+ We can see from the results that our method outperforms previous studies on all datasets. Compared to Pattern CL, which ignores textual information, our method gives the model a stronger ability to bridge text descriptions to mathematical logic, even when using the same candidate pool. The self-supervised methods outperform the supervised settings, especially on challenge datasets, demonstrating the effectiveness of leading the model to learn contextual representations of small textual perturbations.
227
+
228
+ On benchmark datasets, we achieve $2.1\%$ points of improvement on Math23K and $1.2\%$ points of improvement on Asdiv-A. One major reason for the difference is that RODA can only generate 394 examples for the English dataset Asdiv-A, whereas it generates 47,318 examples for the Chinese dataset Math23K, because English has stricter grammar than Chinese. On challenge datasets, we achieve $14\%$ points of improvement on the HE-MWP dataset and $7.0\%$ points of improvement on the Adv-Asdiv-SP dataset. In the HE-MWP ablation, RODA is more effective since it introduces examples with new mathematical logic. For Adv-Asdiv-SP, since QR is similar to paraphrasing techniques, it gains more improvement from self-supervision.
229
+
230
+ # 5.3 Visualization and Case Study
231
+
232
+ We show T-SNE visualization results of the representations of examples from the five most frequent equation templates in Math23K: $n_1 * n_2 / n_3$ , $n_1 * n_2$ , $n_1 / n_2$ , $n_2 / n_1$ and $n_1 * (1 - n_2)$ , which refer to
233
+
234
+ <table><tr><td>Text</td><td>用一张长n1厘米,宽n2厘米的长方形纸围成一个最大的圆柱,圆柱的侧面积为多少平方厘米?</td></tr><tr><td>EN</td><td>Given a piece of paper n1 centimeters long and n2 centimeters wide, how many square centimeters is the lateral area of the largest cylinder enclosed by the rectangle?</td></tr><tr><td>w/o CL</td><td>π * n2</td></tr><tr><td>w/ CL</td><td>n1 * n2</td></tr></table>
235
+
236
+ Table 4: Case study on a Math23K example. w/o CL denotes the BERT-GTS baseline; w/ CL denotes using contrastive learning.
237
+
238
+ orange, red, blue, green and purple in Figure 4. We can see that, compared to the BERT-GTS baseline in the left subfigure, in the right subfigure the text representations of the same equations are pulled closer together via our contrastive learning, and the representations of different equations are pushed apart, which shows that our method benefits representation learning.
239
+
240
+ We further investigate how our method improves the representations via a case study. In Table 4, the BERT-GTS baseline cannot infer from the textual description that the lateral area of a cylinder is the area of a rectangle, but rather uses shallow heuristics when the word "cylinder" is encountered and generates the constant $\pi$. By constructing positive and negative sample pairs from both expressions and textual descriptions and reshaping the representation space via contrastive learning, the model is not misled by the keywords and correctly infers that the mathematical logic is to calculate the area of a rectangle, so that the model with contrastive learning generates the correct result. We also show a T-SNE visualization of the representations in Figure 5. The red dots are examples with the keyword rectangle that hold the equation $n_1 * n_2$. The blue dots are the examples that hold the equation $\pi * n_1$ or $\pi * n_2$. The green dot is the studied case. We can see that while BERT-GTS fails to separate the representation of the case from the cylinder- or circle-related equations, contrastive learning helps the model to differentiate such confusing examples, learn better representations, and predict the answer correctly.
241
+
242
+ # 5.4 Combination with Data Augmentation
243
+
244
+ While the high-quality and challenging augmented examples have shown remarkable effectiveness for contrastive learning, a question remains whether contrastive learning is still effective when these augmented examples are directly used as training data.
245
+
246
+ <table><tr><td>Model</td><td>Acc</td></tr><tr><td>baseline</td><td>82.9</td></tr><tr><td>+QR aug w/o CL</td><td>84.9</td></tr><tr><td>+QR aug w/ CL</td><td>85.2</td></tr><tr><td>+RODA aug w/o CL</td><td>84.8</td></tr><tr><td>+RODA aug w/ CL</td><td>86.4</td></tr></table>
247
+
248
+ Table 5: Results of using augmented examples for both training and contrastive learning.
249
+
250
+ Thus, we further investigate using the augmented examples as anchors. We use the augmented examples and the original data as training data and perform supervised contrastive learning on the training data. As shown in Table 5, while the augmented examples improve performance, contrastive learning further boosts it, achieving SOTA results on Math23K.
251
+
252
+ # 6 Conclusion
253
+
254
+ In this paper, we propose a Textual Enhanced Contrastive Learning framework, which leverages both supervised and self-supervised signals to help the model understand contextual information and bridge subtle textual variance to mathematical logic. We use two novel task-specific data augmentation methods to enrich the candidate pool with examples of minor textual variance for contrastive learning triplet pair retrieval. We design a two-stage retrieval method to find hard example triplet pairs using both equation and textual information, and investigate various retrieval strategies. Experimental results show that our method achieves improvements on both benchmark datasets and challenge datasets in English and Chinese. We also visualize the representation distributions of different equations and a case study, showing that our method benefits representation learning. Combined with data augmentation, our method further improves performance and achieves SOTA results on the Math23K dataset.
255
+
256
+ # Limitations
257
+
258
+ While our framework extracts contrastive learning triplet pairs with light computational complexity, we observe that such a two-stage retrieval strategy might not be optimal under certain circumstances.
259
+
260
+ We build the framework assuming that examples with similar mathematical logic (i.e., high equation similarity) form challenging negative examples. However, especially for self-supervision, such an assumption can block the augmented small-variance examples from consideration for triplet pairs, because their equation might not be the most similar one. This is more severe when using RODA for self-supervised augmentation. The generated examples of RODA usually have relatively low equation similarity with the original example. However, the RODA examples remain challenging, as the performance on HE-MWP still has a gap of $15\%$ points compared to the original Math23K dataset.
261
+
264
+ A strategy that considers equation and textual similarity in the same stage could be introduced to fill this gap. However, such strategies cannot avoid the heavy computational cost of calculating the metric over all data pairs. This could be mitigated by recent advances in rapid embedding retrieval algorithms such as FAISS (Johnson et al., 2019), by transforming equation similarity into embedding similarity via embedding training methods. We leave this as future work.
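+ As a sketch of what such a retrieval pipeline could look like (dimensions and data are placeholders):
+ ```python
+ # Approximate the pairwise-similarity search with a FAISS index: embed every
+ # problem once, then query the top-k nearest neighbours instead of scoring
+ # all pairs.
+ import numpy as np
+ import faiss
+ 
+ d = 256
+ problem_embs = np.random.rand(23000, d).astype("float32")  # one per problem
+ faiss.normalize_L2(problem_embs)          # cosine similarity via inner product
+ index = faiss.IndexFlatIP(d)
+ index.add(problem_embs)
+ 
+ scores, ids = index.search(problem_embs[:4], 10)  # top-10 candidates per query
+ ```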
265
+
266
+ # Acknowledgements
267
+
268
+ This work is partially supported by JST SPRING Grant No.JPMJSP2110 and KAKENHI No.22J13719, Japan.
269
+
270
+ # References
271
+
272
+ Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. Mathqa: Towards interpretable math word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2357-2367.
273
+ Daniel G. Bobrow. 1964. Natural language input for a computer problem solving system. Technical report, Cambridge, MA, USA.
274
+ Eugene Charniak. 1969. Computer solution of calculus word problems. In Proceedings of the 1st International Joint Conference on Artificial Intelligence, IJCAI'69, pages 303-316, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
275
+ Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 1597-1607. PMLR.
276
+ Sumit Chopra, Raia Hadsell, and Yann LeCun. 2005. Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pages 539-546. IEEE.
277
+
278
+
279
+ Hongchao Fang and Pengtao Xie. 2020. CERT: contrastive self-supervised learning for language understanding. CoRR, abs/2005.12766.
280
+ Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November 2021, pages 6894-6910. Association for Computational Linguistics.
281
+ Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Ávila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko. 2020. Bootstrap your own latent - A new approach to self-supervised learning. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
282
+ Wenyv Guan, Qianying Liu, Guangzhi Han, Bin Wang, and Sujian Li. 2019. An improved coarse-to-fine method for solving generation tasks. In Proceedings of the The 17th Annual Workshop of the Australasian Language Technology Association, pages 178-185, Sydney, Australia. Australasian Language Technology Association.
283
+ Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 9726-9735. Computer Vision Foundation / IEEE.
284
+ Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin, and Wei-Ying Ma. 2016. How well do computers solve math word problems? large-scale dataset construction and evaluation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 887-896.
285
+ Shifeng Huang, Jiawei Wang, Jiao Xu, Da Cao, and Ming Yang. 2021. Recall and learn: A memory-augmented solver for math word problems. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 786-796.
286
+ Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535-547.
287
+ Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
288
+
289
+
290
+ Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. 2015. Parsing algebraic word problems into equations. Transactions of the Association for Computational Linguistics, 3:585-597.
291
+ Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. 2016. Mawps: A math word problem repository. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1152-1157.
292
+ Vivek Kumar, Rishabh Maheshwary, and Vikram Pudi. 2021. Adversarial examples for evaluating math word problem solvers. In Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 16-20 November 2021, pages 2705-2712. Association for Computational Linguistics.
293
+ Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 271-281.
294
+ Zhongli Li, Wenxuan Zhang, Chao Yan, Qingyu Zhou, Chao Li, Hongzhi Liu, and Yunbo Cao. 2022. Seeking patterns, not just memorizing procedures: Contrastive learning for solving math word problems. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2486-2496.
295
+ Zhenwen Liang, Jipeng Zhang, Jie Shao, and Xiangliang Zhang. 2021. Mwp-bert: A strong baseline for math word problems.
296
+ Qianying Liu, Wenyu Guan, Sujian Li, Fei Cheng, Daisuke Kawahara, and Sadao Kurohashi. 2021. Roda: Reverse operation based data augmentation for solving math word problems. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30:1-11.
297
+ Qianying Liu, Wenyv Guan, Sujian Li, and Daisuke Kawahara. 2019. Tree-structured decoding for solving math word problems. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2370-2379, Hong Kong, China. Association for Computational Linguistics.
298
+ Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. 2020. A diverse corpus for evaluating and developing english math word problem solvers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 975-984. Association for Computational Linguistics.
299
+
300
+ Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are NLP models really able to solve simple math word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 2080-2094. Association for Computational Linguistics.
301
+ Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3980-3990. Association for Computational Linguistics.
302
+ Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1743-1752, Lisbon, Portugal. Association for Computational Linguistics.
303
+ Subhro Roy and Dan Roth. 2017. Unit dependency graph and its application to arithmetic word problem solving. In Thirty-First AAAI Conference on Artificial Intelligence.
304
+ Subhro Roy and Dan Roth. 2018. Mapping to declarative knowledge for word problem solving. Transactions of the Association of Computational Linguistics, 6:159-172.
305
+ Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin Jiang, Ming Zhang, and Qun Liu. 2021. Generate & rank: A multi-task framework for math word problems. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2269-2279.
306
+ Yibin Shen and Cheqing Jin. 2020. Solving math word problems with multi-encoders and multi-decoders. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2924-2934, Barcelona, Spain (Online). International Committee on Computational Linguistics.
307
+ Yibin Shen, Qianying Liu, Zhuoyuan Mao, Zhen Wan, Fei Cheng, and Sadao Kurohashi. 2022. Seeking diverse reasoning logic: Controlled equation expression generation for solving math word problems. arXiv preprint arXiv:2209.10310.
308
+ Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. 2015. Automatically solving number word problems by semantic parsing and reasoning. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1132-1142.
309
+ Minghuan Tan, Lei Wang, Lingxiao Jiang, and Jing Jiang. 2021. Investigating math word problems using pretrained multilingual language models.
310
+
311
+ Shyam Upadhyay and Ming-Wei Chang. 2017. Annotating derivations: A new evaluation strategy and dataset for algebra word problems. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 494-504.
312
+ Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan Song, Long Guo, and Heng Tao Shen. 2018. Math-dqn: Solving arithmetic word problems via deep reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
313
+ Lei Wang, Dongxiang Zhang, Jipeng Zhang, Xing Xu, Lianli Gao, Bing Tian Dai, and Heng Tao Shen. 2019. Template-based math word problem solvers with recursive neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7144-7151.
314
+ Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017. Deep neural solver for math word problems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 845-854. Association for Computational Linguistics.
315
+ Zhipeng Xie and Shichao Sun. 2019. A goal-driven tree-structured neural model for math word problems. In IJCAI, pages 5299-5305.
316
+ Jipeng Zhang, Roy Ka-Wei Lee, Ee-Peng Lim, Wei Qin, Lei Wang, Jie Shao, and Qianru Sun. 2020a. Teacher-student networks with multiple decoders for solving math word problem. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pages 4011-4017. International Joint Conferences on Artificial Intelligence Organization. Main track.
317
+ Jipeng Zhang, Lei Wang, Roy Ka-Wei Lee, Yi Bin, Yan Wang, Jie Shao, and Ee-Peng Lim. 2020b. Graph-to-tree learning for solving math word problems. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3928-3937.
318
+ Yanyan Zou and Wei Lu. 2019. Text2math: End-to-end parsing text into math expressions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5330-5340.
textualenhancedcontrastivelearningforsolvingmathwordproblems/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7f5afef43deeefde0ab7b8a604ca7f627f4b2d8ed28f376d6f48fe4a9eb5be1d
3
+ size 321315
textualenhancedcontrastivelearningforsolvingmathwordproblems/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fb4b25aa18dd606d0feee987488bebdd6748075ca006002cb07e81c822cba4fa
3
+ size 403693
thechallengesoftemporalalignmentontwitterduringcrises/9ebccff9-b096-4e5a-b002-3e1f13279edb_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a60fc78e9a787922c58a4094e2f574cb1fc1e119d57889374f1ae974303a602d
3
+ size 100116
thechallengesoftemporalalignmentontwitterduringcrises/9ebccff9-b096-4e5a-b002-3e1f13279edb_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3534933977ebed80dfa010d3e650643ef8ce74812905962fbd3f4d7027dfcd53
3
+ size 116130
thechallengesoftemporalalignmentontwitterduringcrises/9ebccff9-b096-4e5a-b002-3e1f13279edb_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ac62b01ddac947994ded9b853f36547f4c81d052ace6d7937c246346a1426389
3
+ size 434766
thechallengesoftemporalalignmentontwitterduringcrises/full.md ADDED
@@ -0,0 +1,355 @@
 
1
+ # The challenges of temporal alignment on Twitter during crises
2
+
3
+ Aniket Pramanick, Tilman Beck, Kevin Stowe and Iryna Gurevych
4
+
5
+ Ubiquitous Knowledge Processing Lab (UKP Lab)
6
+
7
+ Department of Computer Science and Hessian Center for AI (hessian.AI)
8
+
9
+ Technical University Darmstadt
10
+
11
+ www.ukp.tu-darmstadt.de
12
+
13
+ # Abstract
14
+
15
+ Language use changes over time, and this impacts the effectiveness of NLP systems. This phenomenon is even more prevalent in social media data during crisis events, where the meaning and frequency of word usage may change over the course of days. Contextual language models fail to adapt temporally, emphasizing the need for temporal adaptation in models which need to be deployed over an extended period of time. While existing approaches consider data spanning large periods of time (from years to decades), shorter time spans are critical for crisis data. We quantify temporal degradation for this scenario and propose methods to cope with performance loss by leveraging techniques from domain adaptation. To the best of our knowledge, this is the first effort to explore the effects of rapid language change via adversarial adaptations, particularly during natural and human-induced disasters. Through extensive experimentation on diverse crisis datasets, we analyze under what conditions our approaches outperform strong baselines while highlighting the current limitations of temporal adaptation methods in scenarios where access to unlabeled data is scarce.
16
+
17
+ # 1 Introduction
18
+
19
+ Patterns of language use change constantly over time, often in predictable and analyzable ways (Hamilton et al., 2016a; Kulkarni et al., 2015; Sommerauer and Fokkens, 2019). As language changes, the performance of NLP systems can be negatively impacted (Lazaridou et al., 2021). In most scenarios, training corpora are derived from a snapshot of data at some moment of time in the past, which puts the reliability of model performance on future data into question. Yet, there is a lack of concrete reasoning or evidence on whether temporal adaptation elevates model performance.
20
+
21
+ Despite the popularity of large language models and their usefulness in many NLP domains (Devlin et al., 2019), the representation of temporal knowledge in those models so far remains an open challenge.
22
+
23
+ The increased interest in temporal adaptation (i.e. scenarios in which the training and test datasets are drawn from different periods of time) has led to the curation of a number of datasets, such as the NYT Annotated Corpus (Sandhaus, 2008) and Amazon Reviews (Ni et al., 2019), that have been the focus of most of the recent work in this area. However, these benchmark datasets are curated in such a way that they can only capture temporal change of language over long periods of time (from years to decades), giving access to a large amount of data. On the contrary, on social media, language changes can happen rapidly (Kulkarni et al., 2015; Eisenstein, 2013). Word usage and topics can even change over the span of a single day (Golder and Macy, 2011), especially during very dynamic scenarios like crisis or disastrous events (Reynolds and Seeger, 2005; Del Tredici et al., 2019). We denote these phenomena induced by linguistic and semantic changes over time as temporal drift.
24
+
25
+ Accounting for temporal drift is critical in crisis situations, in which information patterns can vary greatly between the phases of emergency management. For this purpose, we study short text classification in crisis situations. Given the time-critical nature of crisis scenarios, gathering annotations is too time-consuming, and transfer learning is challenging due to the innate differences among the types of events (hurricane vs. earthquake) and the respective information needs. Thus, we offer a study investigating the impact of temporal drift on crisis datasets spanning shorter time periods (days/weeks), as well as datasets with relatively few samples (ranging from $\sim 1\mathrm{k}$ to $22\mathrm{k}$ ).
26
+
27
+ Assessing rapid temporal drift is a challenging problem due to different linguistic phenomena which often require extensive knowledge about the temporal structure of the context.
28
+
29
+ Tweets about Hurricane Sandy in 2012
30
+
31
+ ![](images/2f0c59b349209f024444333e7cb2e3d737bb59cc9d3bee10bd679986eb2c5c62.jpg)
32
+ Figure 1: The blue line indicates the frequency of tweets during Hurricane Sandy (Stowe et al., 2018). The displayed tweets demonstrate challenging linguistic phenomena for a text classification model, e.g. semantic shifts (#irene as a reference to a hurricane rather than a person) or neologisms (pre-sandy).
33
+
34
+ In Figure 1, we provide examples from the Stowe et al. (2018) dataset, which were collected during the 2012 New York City landfall of Hurricane Sandy.
35
+
36
+ Existing approaches like continual learning (Gururangan et al., 2020; Loureiro et al., 2022a) or learning time-specific models (Agarwal and Nenkova, 2022) cannot be applied to this scenario, as access to a large set of unlabeled data from the temporal target distribution is missing. Unlike existing approaches, which react to incoming annotated data to update their models, we use temporal metadata as a training signal such that the existing contextualized representations are adapted temporally. More specifically, we are the first to apply projection methods (Wang et al., 2014) and domain adaptation approaches (Ganin et al., 2016; Bamler and Mandt, 2017) to learn time-aware contextualized embeddings. Our results highlight the challenges of integrating temporal information into contextualized embeddings, with improvements being dependent on factors like dataset size, thereby emphasizing that temporal adaptation remains a challenge in scenarios where we do not have access to large unlabeled data.
37
+
38
+ In summary, we make the following contributions:
39
+
40
+ 1. We investigate temporal drift during crisis events and its adversarial effect on task performance. To the best of our knowledge, this is the first study of temporal effects on text classification performance in crisis scenarios, where temporal drift is rapid and access to data is scarce.
41
+
42
+
43
+
44
+ 2. We investigate the role of the domain of data in temporal drift and propose a simple metric to quantify the impact of temporal degradation on task performance.
45
+ 3. We propose methods that adapt future data to known models, improving performance with no additional labeled data.
46
+ 4. Through experiments on a multitude of diverse text classification datasets collected during crisis events, we analyze the effectiveness of our proposed methods over strong baselines.
47
+
48
+ # 2 Related Work
49
+
50
+ Analyzing semantic change of text over time has been of great interest since the pioneering work by Hamilton et al. (2016b) and others (Kutuzov et al., 2018; Rudolph and Blei, 2018; Martinc et al., 2020; Gonen et al., 2020). However, its influence on downstream task performance has only recently gained attention. Most importantly, the advent of contextualized word embeddings and large pretrained language models has led researchers to re-evaluate the role of temporality in language modeling (Jawahar and Seddah, 2019; Lazaridou et al., 2021; Hofmann et al., 2021; Kulkarni et al., 2021) and text classification (Bjerva et al., 2020; Florio et al., 2020; Röttger and Pierrehumbert, 2021; Agarwal and Nenkova, 2022).
51
+
52
+
53
+
54
+ The performance degradation due to temporal factors has been confirmed in several studies and across multiple domains. Jaidka et al. (2018) analyzed the temporal performance degradation of age and gender classification models based on users' social media posts. Using features derived from Latent Dirichlet Allocation and word embeddings, they find that models perform best if test and training data come from the same time span. Florio et al. (2020) investigated temporal effects on hate speech detection in Italian social media over a period of five months. Their results suggest that transformer-based models trained on data temporally closer to the test data perform better. Loureiro et al. (2022b) studied semantic shifts in social media and proposed a dataset annotated with words that have undergone a semantic shift over the past two years. Loureiro et al. (2022a) focus on Twitter as text domain and contribute pretrained language models which have been further trained on time-specific data from Twitter.
55
+
56
+ Bjerva et al. (2020) propose to use sequential subspace alignment (SSA) to adapt contextualized word embeddings to language change over time. Their results suggest that SSA applied on past data is able to outperform baselines which have access to data from all time steps. Röttger and Pierrehumbert (2021) compared time-agnostic domain adaptation with temporal domain adaptation, which considers the temporal order of the data. They found that, while temporal adaptation clearly outperforms domain adaptation in language modeling, this does not necessarily translate to downstream classification performance, as the updated tokens may not be relevant for the task. Agarwal and Nenkova (2022) found the temporal deterioration of model performance to be less significant when using language representations which have been pretrained on temporally closer data.
57
+
58
+ Finally, Luu et al. (2022) conducted a large-scale study of temporal misalignment, the generalized scenario where training and evaluation data are drawn from different periods of time. Across multiple NLP classification tasks and domains, they identify performance degradation of varying degrees, with social media and news being the most affected domains.
59
+
60
+ We contribute to the existing line of work by quantifying the temporal effects on downstream task performance over short time periods (days and weeks) during crisis events. In such a scenario, and in contrast to previous work, we do not assume access to large corpora of unlabeled data for temporal adaptation via continuous pretraining. Our proposed approaches temporally adapt pretrained contextualized embeddings to learn time-aware embeddings, and we evaluate their effects on downstream classification tasks.
61
+
62
+
63
+
64
+ # 3 Methods Overview
65
+
66
+ Luu et al. (2022) describe three distinct stages of a typical NLP system: a pretraining stage, a domain (or temporal) adaptation stage, and a fine-tuning stage. Separating the adaptation and fine-tuning stages makes the implicit assumption that there is access to unlabeled data from the (temporal) target distribution, which has been proven to be beneficial for temporal adaptation (Luu et al., 2022). In contrast, we are looking at the dynamic setting during crisis events. Temporal alignment through continuous pre-training is not feasible due to the lack of unlabeled data and the time constraints imposed by the application scenario (e.g. crisis monitoring). The latter also limits the feasibility of an online learning setup, which requires new annotations in a continuous stream. Finally, transfer learning is difficult due to inherent differences in information needs (i.e. the type of labels) and domains (e.g. hurricane vs. earthquake).
67
+
68
+ Therefore, in this section we adapt and evaluate methods which are specifically designed for combining temporal adaptation and fine-tuning. Their training procedures are adapted to incorporate temporal information about the data along with the textual input. We describe each approach in the following:
69
+
70
+ # Adapted Language Modelling (ALM)
71
+
72
+ Similar to previous work (see Section 2), we explore temporal adaptation via pretraining but use only the available training data. We therefore continue training with the language modeling objective of our respective pretrained language model on the training data and use the resulting fine-tuned model (FT) for downstream task training. Following Dhingra et al. (2022), we investigate a variation for temporal modelling (TM) by concatenating the timestamp as textual information to the input, encouraging the language model to learn temporally relevant features during pretraining.
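+ A minimal sketch of this TM variant with the HuggingFace stack (the data file and field names are hypothetical):
+ ```python
+ # Continue MLM pretraining with each tweet's timestamp prepended as text
+ # (following the idea of Dhingra et al., 2022). Illustrative sketch only.
+ import datasets
+ from transformers import (AutoModelForMaskedLM, AutoTokenizer,
+                           DataCollatorForLanguageModeling, Trainer,
+                           TrainingArguments)
+ 
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
+ model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
+ 
+ def add_time_prefix(ex):
+     # e.g. "2012-10-29: no power in the whole block"
+     ex["text"] = f"{ex['created_at']}: {ex['text']}"
+     return ex
+ 
+ train = datasets.load_dataset("json", data_files="sandy_train.jsonl")["train"]
+ train = train.map(add_time_prefix)
+ train = train.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
+                   batched=True, remove_columns=train.column_names)
+ 
+ trainer = Trainer(
+     model=model,
+     args=TrainingArguments("tm-mlm", num_train_epochs=3,
+                            per_device_train_batch_size=64),
+     train_dataset=train,
+     data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
+ )
+ trainer.train()  # afterwards, fine-tune the encoder on the labelled task
+ ```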
73
+
74
+ # DCWE: Dynamic Contextualized Word Embeddings
75
+
76
+ Hofmann et al. (2021) introduced a principled way to impart extra-linguistic knowledge into contextualized word embeddings by involving a prior distribution. This enables us to integrate temporal information into the embeddings during training. More specifically, for each temporal snapshot (e.g. days, months, years, etc.) present in the training data, an additional set of parameters is learned which acts as a temporal offset added to the original word embeddings. This way the model is able to maintain the semantic meaning of a word embedded in its temporal context. We adapt this idea to our setting by introducing additional parameters for shifting the pre-trained contextualized embeddings. Given a sequence of words/tokens $W = [w_{1}, w_{2}, \dots, w_{n}]$ with corresponding pre-trained embeddings $H = [h_{1}, h_{2}, \dots, h_{n}]$ , we account for the temporal effect on word meanings by modeling each word embedding as a function of the temporal context $t$ associated with $W$ :
77
+
78
+ $$
79
+ h_i^* = f(h_i, t) \tag{1}
80
+ $$
81
+
82
+ Since the meanings of most words in the vocabulary are temporally stable, we can place a Normal prior on $h_i^*$ :
83
+
84
+ $$
85
+ h_i^* \sim \mathcal{N}(h_i, \lambda^{-1} I) \tag{2}
86
+ $$
87
+
88
+ Hence, we write $h_i^* = h_i + d_i$ , where the offset $d_i$ is normally distributed as $d_i \sim \mathcal{N}(0, \lambda^{-1}I)$ . With pre-trained LMs, this temporal adaptation is easily applicable to any task by adding only a regularization term $L_{temporal}$ on top of the task-specific loss $L_{task}$ :
89
+
90
+ $$
91
+ L_{\text{temporal}} = \frac{\lambda}{n} \sum_{i=1}^{n} \left( \left\| d_i \right\|_2^2 + K \left\| d_i - d_{i-1} \right\|_2^2 \right) \tag{3}
92
+ $$
93
+
94
+ For training the model, the overall loss $L = L_{task} + L_{temporal}$ is minimized. Similarly to Hofmann et al. (2021), we use $K = 10^{3}$ from Bamler and Mandt (2017) to enforce that the $h_i^*$ change smoothly over time.
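+ A minimal sketch of this adaptation (the per-time-step offset granularity and all names are illustrative):
+ ```python
+ # Learnable temporal offsets shift the pre-trained embeddings (Eq. 1-2);
+ # the regularizer keeps them small and smooth over time (Eq. 3).
+ import torch
+ import torch.nn as nn
+ 
+ class TemporalOffset(nn.Module):
+     def __init__(self, n_steps: int, dim: int, lam: float = 0.1, K: float = 1e3):
+         super().__init__()
+         self.offsets = nn.Parameter(torch.zeros(n_steps, dim))  # one d_t per step
+         self.lam, self.K = lam, K
+ 
+     def forward(self, h: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
+         # h: (batch, seq, dim) pre-trained embeddings; t: (batch,) time-step ids
+         return h + self.offsets[t].unsqueeze(1)  # h* = h + d_t
+ 
+     def regularizer(self) -> torch.Tensor:
+         l2 = self.offsets.pow(2).sum(-1).mean()
+         smooth = (self.offsets[1:] - self.offsets[:-1]).pow(2).sum(-1).mean()
+         return self.lam * (l2 + self.K * smooth)
+ 
+ # training step: loss = task_loss + temporal_offset.regularizer()
+ ```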
95
+
96
+ # LMSOC: Socio-temporally Sensitive Language Modeling
97
+
98
+ Similar to DCWE, Kulkarni et al. (2021) propose a method that learns extra-linguistic context using graph representation learning algorithms and then primes language models with it to generate language representations grounded in a socio-temporal context. We model the temporal order information as a linear chain graph and adapt this method to our setting by appending temporal graph embeddings to the initial layers of the pre-trained language model. During fine-tuning of the language model, the graph embeddings are kept frozen to inductively yield temporally-aware embeddings.
99
+
100
+
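+ A rough sketch of the linear-chain variant (the neighbour-based training objective shown here is illustrative, not the exact LMSOC procedure):
+ ```python
+ # Node embeddings of a linear chain over time steps: adjacent steps are
+ # pulled together, random pairs pushed apart; afterwards the embeddings
+ # are frozen and appended to the LM's initial layers.
+ import torch
+ import torch.nn as nn
+ 
+ n_steps, dim = 10, 768
+ time_emb = nn.Embedding(n_steps, dim)
+ opt = torch.optim.Adam(time_emb.parameters(), lr=1e-2)
+ 
+ for _ in range(200):
+     t = torch.arange(n_steps - 1)
+     src, dst = time_emb(t), time_emb(t + 1)                    # chain neighbours
+     neg = time_emb(torch.randint(0, n_steps, (n_steps - 1,)))  # random nodes
+     loss = (1 - torch.cosine_similarity(src, dst)).mean() \
+          + torch.relu(torch.cosine_similarity(src, neg)).mean()
+     opt.zero_grad(); loss.backward(); opt.step()
+ 
+ time_emb.weight.requires_grad_(False)  # kept frozen during LM fine-tuning
+ ```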
101
+
102
+ # TAPH: Time Aware Projection on Hyperplanes
103
+
104
+ Time adds an additional context or dimension to knowledge, making temporal scoping imperative when deriving context embeddings. Therefore, we model temporal information as a hyperplane and define a projection operation (Wang et al., 2014) on it. To build a time-invariant classification model, we project the sentence embedding (Reimers and Gurevych, 2019) of each text onto a hyperplane to obtain a time-aware sentence embedding. We describe the method in more detail.
105
+
106
+ Let $X = [x_{1}, x_{2}, \ldots, x_{n}]$ be a given sequence of words and $H$ be its sentence embedding. Since the temporal span of our data is short, we assume that the temporal hyperplane $w_{t}$ represents the time frame of the training data. We derive time-aware sentence embeddings $H_{t}$ using our defined projection operation as follows:
107
+
108
+ $$
109
+ H_t = H - w_t^{\top} H w_t \tag{4}
110
+ $$
111
+
112
+ While training the model, we learn the hyperplane representation $w_{t}$ in addition to fine-tuning the pre-trained embeddings in an end-to-end fashion. During inference, we assume that we can 'teleport' the data to the past by projecting their sentence embeddings onto the hyperplane $w_{t}$ in order to revert their temporal changes. We then use these embeddings in the downstream tasks.
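+ A minimal sketch of the projection in Eq. 4; we keep $w_t$ unit-length so that the operation is an orthogonal projection (an assumption of this sketch):
+ ```python
+ # Project sentence embeddings onto the temporal hyperplane with normal w_t.
+ import torch
+ import torch.nn as nn
+ 
+ class TimeAwareProjection(nn.Module):
+     def __init__(self, dim: int):
+         super().__init__()
+         self.w_t = nn.Parameter(torch.randn(dim))  # learned with the task loss
+ 
+     def forward(self, H: torch.Tensor) -> torch.Tensor:
+         # H: (batch, dim) sentence embeddings
+         w = self.w_t / self.w_t.norm()           # unit normal vector
+         return H - (H @ w).unsqueeze(-1) * w     # Eq. 4: H_t = H - w^T H w
+ 
+ proj = TimeAwareProjection(768)
+ H_t = proj(torch.randn(32, 768))  # fed to the classification head
+ ```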
113
+
114
+ # TDA: Temporal Domain Adaptation
115
+
116
+ Temporal adaptation can also be interpreted as a variant of domain adaptation, with the difference that the language change happens within the same domain, e.g. induced by external events or the general dynamic characteristics of the source infrastructure (e.g. social media platforms or news outlets). We adapt a widely used domain adaptation method (Ramponi and Plank, 2020) to our setting. We learn time-aware word representations by adding an additional classification layer during training to predict the time of each text and apply the Gradient Reversal method (Ganin et al., 2016). In this way, the input does not change during the forward pass, but this additional layer affects the model parameters during back-propagation by an additional penalizing factor. This acts as an adversarial training objective, forcing the model to adapt to the temporal structure of the data.
117
+
118
+
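+ The gradient reversal layer at the core of this setup can be sketched as follows (a standard formulation of Ganin et al. (2016); the surrounding model code is illustrative):
+ ```python
+ # Forward pass is the identity; the backward pass flips the gradient sign,
+ # so the shared encoder learns to *confuse* the time classifier.
+ import torch
+ 
+ class GradReverse(torch.autograd.Function):
+     @staticmethod
+     def forward(ctx, x, lambd: float = 1.0):
+         ctx.lambd = lambd
+         return x.view_as(x)
+ 
+     @staticmethod
+     def backward(ctx, grad_output):
+         return -ctx.lambd * grad_output, None
+ 
+ def grad_reverse(x, lambd: float = 1.0):
+     return GradReverse.apply(x, lambd)
+ 
+ # inside the model's forward pass (illustrative):
+ # feats = encoder(input_ids)                      # shared encoder
+ # task_logits = task_head(feats)                  # e.g. relevance classifier
+ # time_logits = time_head(grad_reverse(feats))    # adversarial time classifier
+ # loss = task_loss(task_logits, y) + time_loss(time_logits, t)
+ ```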
119
+
120
+ # 4 Experimental Setup
121
+
122
+ # 4.1 Data
123
+
124
+ We identify a collection of social media data during crisis with observable temporal phases (pre-, acute- and post-crisis), rapid change in language and a natural change in distribution over time - enabling us to evaluate how well temporally adapted models generalize over time. We use three datasets sampled from Twitter: Sandy, T26, and Humaid. We provide an overview here and refer to the Appendix A for dataset details.
125
+
126
+ Sandy The dataset by Stowe et al. (2018), collected during Hurricane Sandy in 2012, contains approximately 22,000 tweets spanning 17 days centered on the landfall in New York City, annotated for binary relevance to the storm and its effects. The tweets were collected by first identifying users impacted by the event, then retroactively pulling their data from before, during, and after the event. As opposed to keyword collection, this provides a relatively broad collection of both relevant and non-relevant tweets and a more complete dataset for evaluating temporal drift, as each tweet does not necessarily contain the same keyword(s).
127
+
128
+ T26 The CrisisLex T26 (T26) dataset (Olteanu et al., 2015) includes labeled tweets for 26 different crisis events, labeled by informativeness into four categories: (1) related to the crisis and informative, (2) related to the crisis but not informative, (3) not related to the crisis, and (4) not applicable. This collection reflects a wide variety of events covering natural and human-created emergencies, with the added difficulty that the individual datasets are relatively small, each event containing only approximately 1,000 tweets.
129
+
130
+ ![](images/d934c2c83813da59766954c404c0c329c60c5953546800a992fe24eedf8253f4.jpg)
131
+ Figure 2: Overview of the data splits used in our experiments. Bins in blue are used during training, bins in yellow for testing, grey bins are not used. The PROGRESSIVE setting comprises multiple experiments with increasing training data size and a single test data bin moving forward temporally.
132
+
133
+
134
+
135
+ Humaid The Humaid dataset (Alam et al., 2021) is similar to T26, containing data about 19 different events with dataset sizes ranging from 575 to 9467 tweets. They are annotated with 11 different classes designed to capture fine-grained information related to disaster events.
136
+
137
+ # 4.2 Data Splits
138
+
139
+ We follow previous work (Lazaridou et al., 2021; Agarwal and Nenkova, 2022) and create time-based data splits to assess the temporal performance degradation. Specifically, we use three variants of dataset splits: CONTROL, TEMPORAL and PROGRESSIVE. We illustrate this in Figure 2.
140
+
141
+ TEMPORAL Setup First, we split the entire data into two halves which cover equally-sized time periods. We call these the first temporal half and the second temporal half, respectively. In the TEMPORAL setting, we use all the data from the first temporal half as the training data, and a test set which comprises a randomly sampled $50\%$ of the data from the second temporal half of a dataset. This evaluates the model's generalization capabilities on test data from a distribution temporally distant from the training data.
142
+
143
+ CONTROL Setup To assess whether the TEMPORAL setup constrains the model's generalization capabilities, we compare its performance with a CONTROL setup. Here, we spread the training data evenly over time frames, exposing the model to knowledge from all time steps. In this setting, the training data comprises $50\%$ of instances from the first temporal half, along with $50\%$ of instances from the second temporal half, matching the total training data from the TEMPORAL setup. We use the same test set as in the TEMPORAL setup while ensuring that there is no overlap between the train and test split from the second temporal half.
144
+
145
+
146
+
147
+ Under the assumption that a temporal gap between training and target distribution leads to performance decay, we expect that the CONTROL setup will yield better scores, as the model has access to training instances from the same temporal distribution as the test data.
148
+
149
+ PROGRESSIVE Setup As described previously, semantic changes are likely to occur in short time spans within crisis-related data streams. Therefore, for a more fine-grained analysis of temporal performance decay, we simulate a scenario in which an event is progressing, we have access to all the previous data, and need to make decisions about the incoming data. In this setup, we split the entire dataset into ten temporally ordered bins with an equal number of samples. Then, for each test bin $B_{t}$ , we use all preceding bins $B_{0}$ to $B_{t-2}$ for training. To identify the best performing model across all training epochs, we use bin $B_{t-1}$ for development. A sketch of this split procedure follows below.
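+ A small sketch of how these splits can be generated (pure Python; `tweets` is a hypothetical list of (timestamp, text, label) tuples):
+ ```python
+ # Ten temporally ordered, equally sized bins; for test bin B_t we train on
+ # B_0..B_{t-2} and use B_{t-1} for development. The remainder is dropped.
+ def progressive_splits(tweets, n_bins: int = 10):
+     ordered = sorted(tweets, key=lambda x: x[0])   # sort by timestamp
+     size = len(ordered) // n_bins
+     bins = [ordered[i * size:(i + 1) * size] for i in range(n_bins)]
+     for t in range(2, n_bins):                     # first usable test bin is B_2
+         train = [ex for b in bins[:t - 1] for ex in b]   # B_0 .. B_{t-2}
+         dev, test = bins[t - 1], bins[t]
+         yield train, dev, test
+ ```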
150
+
151
+ # 4.3 Baseline
152
+
153
+ For a consistent performance comparison, all proposed models use bert-base-cased as their underlying backbone model for deriving pretrained embeddings.
154
+
155
+ For the FT setup (see Section 3), we use the available training data for each dataset to run masked language modelling for three epochs to adapt the model to the data. We then fine-tune for the downstream task on the relevant training data using the updated pre-trained model. This will indicate whether the domain is the issue, or whether there are additional temporal effects. In the temporal modelling (TM) setup, we follow Dhingra et al. (2022) and prepend the textual representation of each tweet's timestamp to the tweet text, then train an additional three epochs of masked language modelling. We then fine-tune for the downstream task on the relevant training data.
156
+
157
+ Finally, we apply another baseline where we use the timestamp text as a second input to the model during supervised training, separated via a special token (i.e. [SEP] for BERT). We refer to this baseline as SEP.
158
+
159
+
160
+
161
+ # 4.4 Hyperparameters and Infrastructure
162
+
163
+ For a fair comparison, we run all experiments using the same hyperparameters and data splits. We use a learning rate of $1e - 4$ , a batch size of 64, a weight decay of $1e - 3$ and no warmup due to the limited amount of training data. We use Adam (Loshchilov and Hutter, 2019) as the optimization algorithm and train for three epochs. Based on the performance on the development split, we load the best performing model at the end of the training procedure.
164
+
165
+ We repeat each experiment using five different seeds and take the most frequent prediction across all runs as the final prediction by a model. All models are implemented in Python 3.6 using PyTorch 1.10.2 (Paszke et al., 2019) and the HuggingFace (Wolf et al., 2020) framework (4.18) as model backend. We used a computation cluster containing a mixture of NVIDIA Tesla P100 (16GB), NVIDIA A100 (40GB) and NVIDIA V100 (32GB) GPUs.
166
+
167
+ # 4.5 Evaluation
168
+
169
+ We report the binary F1 score for Sandy and the macro F1 score for the multi-class classification tasks on the T26 and Humaid datasets. The comparison of the CONTROL and TEMPORAL settings serves two purposes: first, to quantify the degradation of model performance due to temporal drift, and second, to estimate the temporal adaptation ability of our approaches. We expect that models considering temporal information should experience less performance degradation between these two settings compared to the baseline model.
170
+
171
+ Additionally, we evaluate the mean model performance in the PROGRESSIVE setting for a more fine-grained analysis of temporal degradation.
172
+
173
+ Temporal Rigidity: While analyzing the effects of temporal drift on model performance, it is necessary to quantify the degradation of model performance due to this phenomenon. We quantify the temporal adaptability of a model using a metric called the Temporal Rigidity (TR) score, which summarizes the performance deterioration of a model from aligned to misaligned test data. Higher values of TR imply that the model is not able to adapt itself temporally.
174
+
175
+ We denote $f_{M}(B_{i}, B_{j})$ as the F1 performance score of a model $M$ when trained using data sampled from bin $B_{i}$ and evaluated using data sampled from bin $B_{j}$ . We define TR as:
176
+
177
+
178
+
179
+ $$
180
+ TR = \frac{1}{N} \sum_{i \neq j} \frac{\left| f_M(B_i, B_j) - f_M(B_i, B_i) \right|}{\left| i - j \right|} \tag{5}
181
+ $$
182
+
183
+ In Eq. 5, the normalization factor is given as $N = |\{(i,j):i\neq j\} |$ . Unlike Luu et al. (2022), who do not take the temporal proximity of bins into account, we use $\frac{1}{|i - j|}$ as a penalizing factor, so that the model is penalized more when training and test bins are temporally close but the performance degradation is significant.
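+ Eq. 5 translates directly into code; a small sketch assuming `f` is an $n \times n$ array where `f[i, j]` is the F1 score when training on bin $B_i$ and testing on bin $B_j$:
+ ```python
+ # Direct transcription of Eq. 5.
+ import numpy as np
+ 
+ def temporal_rigidity(f: np.ndarray) -> float:
+     n = f.shape[0]
+     pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
+     return sum(abs(f[i, j] - f[i, i]) / abs(i - j) for i, j in pairs) / len(pairs)
+ ```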
184
+
185
+ Crisis Phases: Additionally, we utilize the well-known temporal structures of crisis events (Reynolds and Seeger, 2005; Yang et al., 2013) to analyze model performance. The temporal structure of the Sandy dataset is annotated using pre-, acute- and post-crisis labels. For each model, we cluster the time-aware embeddings using the KMeans algorithm $(k = 3)$ and report the Normalized Mutual Information (NMI) score. NMI gives the correlation between the time-aware embeddings and the temporal structure of the underlying data.
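+ A short sketch of this analysis with scikit-learn (the embeddings and phase labels below are placeholders):
+ ```python
+ # Cluster time-aware embeddings into k=3 groups and compare the clusters to
+ # the pre-/acute-/post-crisis annotation with NMI.
+ import numpy as np
+ from sklearn.cluster import KMeans
+ from sklearn.metrics import normalized_mutual_info_score
+ 
+ embeddings = np.random.rand(1000, 768)         # placeholder time-aware embeddings
+ phase_labels = np.random.randint(0, 3, 1000)   # placeholder phase annotations
+ 
+ clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embeddings)
+ print("NMI:", normalized_mutual_info_score(phase_labels, clusters))
+ ```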
186
+
187
+ # 5 Results and Analysis
188
+
189
+ In this section, we attempt to answer the following questions:
190
+
191
+ Q1. To what degree is temporal performance degradation present in short-term Twitter data during crisis events? (Section 5.1)
192
+ Q2. Does temporal adaptation improve model performance? (Section 5.2)
193
+ Q3. Does the domain of the data play a role in temporal drift? (Section 5.3)
194
+ Q4. How do the proposed models perform when trained continually? (Section 5.4)
195
+
196
+ # 5.1 Temporal Performance Degradation
197
+
198
+ In order to estimate the degree of temporal performance degradation in the crisis scenario, we compare the classification performance of the baseline model in the CONTROL and TEMPORAL settings. Table 1 provides the averaged performance difference for all datasets. Given that we only change the temporal distribution of the training data, the effect is substantial, with a difference in F1 of up to 6.52 points for the Sandy dataset, and slightly less pronounced differences on the T26 (4.37) and Humaid (4.10) dataset collections.
199
+
200
+ <table><tr><td>Data</td><td>Sandy</td><td>T26</td><td>Humaid</td></tr><tr><td>CONTROL - TEMPORAL</td><td>6.52</td><td>4.37</td><td>4.10</td></tr></table>
201
+
202
+ Therefore, we conclude that, even in short-term scenarios like crisis events on Twitter, the temporal distribution of the training data influences the classification performance.
203
+
204
+ Table 1: Temporal Performance Degradation: Averaged F1 performance difference from the CONTROL to the TEMPORAL setting for the BERT baseline model. Overall results show that contextualized language models fail to adapt temporally. Refer Section 5.1 for details.
205
+
206
+ # 5.2 Performance Comparison
207
+
208
+ <table><tr><td rowspan="2">Method</td><td colspan="3">Sandy</td></tr><tr><td>CONT</td><td>TEMP</td><td>DIFF</td></tr><tr><td>BERT</td><td>87.70</td><td>81.18</td><td>6.52</td></tr><tr><td>BERT+TM</td><td>82.55</td><td>70.48</td><td>12.07</td></tr><tr><td>BERT+SEP</td><td>87.79</td><td>79.65</td><td>8.14</td></tr><tr><td>BERT+LMSOC</td><td>73.78</td><td>67.24</td><td>6.54</td></tr><tr><td>BERT+DCWE</td><td>86.92</td><td>79.95</td><td>6.97</td></tr><tr><td>BERT+TAPH</td><td>87.40</td><td>82.02</td><td>5.38</td></tr><tr><td>BERT+TDA</td><td>87.10</td><td>82.53</td><td>4.57</td></tr><tr><td>BERTFT</td><td>86.96</td><td>81.84</td><td>5.12</td></tr><tr><td>BERTFT+LMSOC</td><td>74.89</td><td>67.90</td><td>6.99</td></tr><tr><td>BERTFT+DCWE</td><td>86.85</td><td>79.53</td><td>7.32</td></tr><tr><td>BERTFT+TAPH</td><td>87.12</td><td>82.60</td><td>4.52</td></tr><tr><td>BERTFT+TDA</td><td>86.71</td><td>83.43</td><td>3.28</td></tr></table>
209
+
210
+ Table 2: Temporal Adaptation Evaluation on Sandy: Text classification performance measured in binary F1. Overall, TDA outperforms other approaches in the TEMPORAL setting, with and without additional pretraining (FT). Refer Section 5.2 and 5.3 for details.
211
+
212
+ We summarize the results on Sandy in Table 2. Overall, we find that TDA outperforms all other methods in the TEMPORAL setting. We obtain an absolute increase of around $1.6\%$ over the baseline. We also observe that the difference between model performance in the CONTROL and TEMPORAL settings (DIFF) is lowest for TDA ( $30.8\%$ lower than the baseline), indicating the higher robustness of the model. TAPH achieves a $1\%$ absolute improvement over the baseline in the TEMPORAL setting (DIFF is lower by $16.9\%$ ).
213
+
214
+ <table><tr><td>Method</td><td>T26</td><td>Humaid</td></tr><tr><td>BERT+TM</td><td>4 / 26</td><td>3 / 19</td></tr><tr><td>BERT+SEP</td><td>5 / 26</td><td>3 / 19</td></tr><tr><td>BERT+DCWE</td><td>0 / 26</td><td>1 / 19</td></tr><tr><td>BERT+TAPH</td><td>6 / 26</td><td>0 / 19</td></tr><tr><td>BERT+TDA</td><td>10 / 26</td><td>4 / 19</td></tr><tr><td>BERTFT+DCWE</td><td>0 / 26</td><td>0 / 19</td></tr><tr><td>BERTFT+TAPH</td><td>5 / 26</td><td>0 / 19</td></tr><tr><td>BERTFT+TDA</td><td>8 / 26</td><td>0 / 19</td></tr></table>
215
+
216
+ The $T26$ and Humaid datasets contain data for a multitude of events. Therefore, we aggregate model performances in Table 3 and provide detailed results per event in Appendix A.2. We see that model performance varies greatly between the Sandy dataset and the others. This is due to two main reasons: (i) Data Size: Most of the event datasets in $T26$ and Humaid are very small, so the temporal adaptation methods do not get enough training data to learn the parameters involved in temporal reasoning. To support this argument, we observe that in the "Boston Bombings (2013)" dataset of $T26$ , which contains 81,172 annotated tweets, TDA outperforms the baseline by an absolute increase of $6.17\%$ and TAPH comes second with an absolute performance improvement of $2.9\%$ under the TEMPORAL setting, a performance pattern similar to the Sandy dataset. (ii) Data Quality: Unlike Sandy, $T26$ and Humaid have been collected using keyword-based search. This data collection technique has two main drawbacks: firstly, it restricts the data size, and secondly, it harms the completeness of the dataset by only collecting tweets that contain the same keywords. All the improvements we report are statistically significant ( $p < 0.05$ , using McNemar's test).
217
+
218
+ Learning from Temporal Information: To understand the cause of the performance improvement of the models, we utilize the annotated temporal structure of the Sandy dataset. In Table 4 we report two additional metrics, the TR score and NMI, in the TEMPORAL setting. Compared to the baseline, the TR score of TDA is lowest (a $15.74\%$ decrease), which suggests that TDA performs most robustly over time across all models. TAPH comes in second with a $9.26\%$ decrease in TR score from the baseline. NMI scores show similar patterns, with TDA achieving the highest score. We conclude that TDA learns the most meaningful time-aware embeddings.
219
+
220
+
221
+
222
+ Table 3: Performance Comparison on T26 and Humaid: The number of datasets for which the specific temporal adaptation method outperforms its baseline counterpart in the TEMPORAL setting. Refer Section 5.2 and 5.3 for details.
223
+
224
+ <table><tr><td rowspan="2">Method</td><td colspan="2">Sandy</td></tr><tr><td>TR</td><td>NMI</td></tr><tr><td>BERT</td><td>0.108</td><td>0.051</td></tr><tr><td>BERT+TM</td><td>0.130</td><td>0.050</td></tr><tr><td>BERT+DCWE</td><td>0.111</td><td>0.105</td></tr><tr><td>BERT+TAPH</td><td>0.098</td><td>0.185</td></tr><tr><td>BERT+TDA</td><td>0.091</td><td>0.194</td></tr></table>
225
+
226
+ Table 4: Temporal Information Learning: Comparison of methods on TR (lower is better) and NMI scores (higher is better). Refer Section 5.2 for details.
227
+
228
+ # 5.3 Effect of Domain of Data
229
+
230
+ To understand whether the data domain is the main issue behind the performance degradation or temporal effects indeed play a significant role, we perform additional experiments. We fine-tune the initial bert-base-cased embeddings for an additional three epochs with the Masked Language Modeling (MLM) task on the training data before applying the temporal adaptation methods. We report the results for the Sandy dataset in Table 2. For all models, there remains a substantial performance difference between the CONTROL and TEMPORAL settings, which demonstrates the influence of temporal drift on performance. Similar to previous work (Agarwal and Nenkova, 2022), we observe that additional pre-training improves performance for most of the models. Still, TDA outperforms the baseline and TAPH comes in second.
231
+
232
+ # 5.4 Effect of Continual Learning
233
+
234
+ Continual learning requires continuous annotation of incoming data, which is not feasible during crisis events. However, for the analytical completeness of this paper, we simulate continual learning in the PROGRESSIVE setting to show the effectiveness of our proposed methods. In this setting, the models initially get access to only a very small amount of data to learn from, which affects model performance. Performance improves as the size of the training data gradually increases. In Table 5 we report the model performance averaged over all bins.
235
+
236
+ <table><tr><td>Method</td><td>Sandy</td></tr><tr><td>BERT</td><td>68.67</td></tr><tr><td>BERT+TM</td><td>60.13</td></tr><tr><td>BERT+DCWE</td><td>67.39</td></tr><tr><td>BERT+TAPH</td><td>69.13</td></tr><tr><td>BERT+TDA</td><td>69.50</td></tr></table>
237
+
238
+ Table 5: Continual Learning Effects: Average model performance across all bins in the PROGRESSIVE setting, in terms of F1 score. Refer Section 5.4 for details.
239
+
240
+ ![](images/2db5bc017e8536956a383c4c6f7ccdc984cbabe01fa6c9225f4fa4196cf78de4.jpg)
241
+ Figure 3: A representative example showing that, in comparison with other models, TDA correctly puts the maximum attention weight on the word katrina (another storm) in the temporal context of the hurricane while computing the contextual embeddings. Refer Section 6 for details.
242
+
243
+ The results show that TDA performs best, improving over the BERT baseline by $1.2\%$ .
244
+
245
+ # 6 Discussion
246
+
247
+ Adapting temporally by training on timestamp patterns prepended to the input as text (BERT+TM) underperforms in all experiments. We argue that the added information affects all tokens equally via the self-attention mechanism, although only some tokens will experience a semantic shift relevant for text classification in the crisis scenario.
248
+
249
+ Similarly, the LMSOC and DCWE adaptation approaches cannot outperform the baseline without any temporal adaptation. The additional parameters for computing the temporal offset are not well tuned for predicting temporal distributions which have not been observed during training.
250
+
251
+
252
+
253
+ Figure 3 shows that TDA correctly learns to put the maximum attention weight on the word Katrina (i.e. a reference to a previous hurricane) in the temporal context of the hurricane. We provide representative examples of tweets in Appendix A that all other models but TDA fail to classify correctly. Forcing the model to learn time-invariant embeddings during training using an adversarial signal leads to TDA performing better than all other approaches. Although TAPH does not fall far behind, it approximates temporal information with time-static bins. This discrete approximation of temporal information is the main reason behind its performance drop.
254
+
255
+ # 7 Conclusion
256
+
257
+ The usage of natural language inevitably changes over time, which influences the performance of text classification models applied to data from different temporal distributions. We show that this effect is also prevalent under rapid temporal drift, using social media during crisis events as an example. With the rise of pretrained contextualized embeddings, a dominant approach is to continue language modeling on data temporally closer to the target distribution. However, during crisis events such data is not available and annotated data is often scarce.
258
+
259
+ We investigate approaches which work without any additional data besides the input text and its temporal metadata. Our results show that under ideal conditions, i.e. high data quality and sufficient annotated instances, they outperform strong baselines. However, most crucially, our work highlights a critical gap of temporal adaptation for rapid temporal drift, namely if unlabeled data for alignment is missing and annotated data is scarce. Our work opens the door for future research on methods which do not rely on pretraining in unlabeled target domain data. In this sense, crisis data provides an interesting use case for evaluation. We release all our code and models, fostering future work in this area.
260
+
261
+ # Limitations
262
+
263
+ While existing approaches account for temporal change of language over long periods of time, in social media this change can happen over the span of a single day during dynamic scenarios like crisis or disastrous events. In this work we study the rapid temporal drift prevalently observed in social media during a crisis. We observe that data from social media are often collected using keyword-based search and sampling techniques, where only data containing the same set of keywords are collected. Since data collected this way are limited in size and vocabulary, as well as by the issues inherent in keyword collection, the datasets naturally affect the performance of the methods described in this paper. Moreover, there exist differences among the types of crisis events (hurricane vs. earthquake) and their respective information needs. Hence, it is difficult to find a solution that works in all scenarios. Additionally, we highlight that the evaluation of all models was done on datasets annotated in the presence of a crisis, which may not exactly reflect their performance in a real-world setting without annotated data, especially when differences among the types of crises are relevant. In a nutshell, we observe that during real-world crises, pre-trained language models turn out to be a good solution when access to unlabeled data is scarce and sufficient annotated data is unavailable.
264
+
265
+
266
+
267
+ # Acknowledgements
268
+
269
+ We thank Ilia Kuznetsov, Jan Buchmann, Luke Bates and the anonymous reviewers for their valuable feedback and Firoj Alam for providing us full access to the HumAID dataset. This work has been funded by the German Research Foundation (DFG) as part of the Research Training Group KRITIS No. GRK 2222. It has further been funded by the project "Open Argument Mining" (GU 798/25-1), associated with the Priority Program "Robust Argumentation Machines (RATIO)" (SPP-1999), and by the LOEWE initiative (Hesse, Germany) within the emergenCITY center.
270
+
271
+ # References
272
+
273
+ Oshin Agarwal and Ani Nenkova. 2022. Temporal effects on pre-trained models for language processing tasks. Transactions of the Association for Computational Linguistics, 10:904-921.
274
+ Firoj Alam, Umair Qazi, Muhammad Imran, and Ferda Ofli. 2021. Humaid: Human-annotated disaster incidents data from twitter with deep learning benchmarks. In Proceedings of the Fifteenth International AAAI Conference on Web and Social Media, ICWSM. AAAI Press, pages 933-942.
275
+ Robert Bamler and Stephan Mandt. 2017. Dynamic word embeddings. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 380-389. PMLR.
276
+
277
+
278
+ Johannes Bjerva, Wouter Kouw, and Isabelle Augenstein. 2020. Back to the future — sequential alignment of text representations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7440-7447, United States. AAAI Press.
279
+ Marco Del Tredici, Raquel Fernandez, and Gemma Boleda. 2019. Short-term meaning shift: A distributional exploration. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2069-2075, Minneapolis, Minnesota. Association for Computational Linguistics.
280
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
281
+ Bhuwan Dhingra, Jeremy R. Cole, Julian Martin Eisenschlos, Daniel Gillick, Jacob Eisenstein, and William W. Cohen. 2022. Time-Aware Language Models as Temporal Knowledge Bases. Transactions of the Association for Computational Linguistics, 10:257-273.
282
+ Jacob Eisenstein. 2013. What to do about bad language on the internet. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 359-369, Atlanta, Georgia. Association for Computational Linguistics.
283
+ Komal Florio, Valerio Basile, Marco Polignano, Pierpaolo Basile, and Viviana Patti. 2020. Time of Your Hate: The Challenge of Time in Hate Speech Detection on Social Media. Applied Sciences, 10(12).
284
+ Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096-2030.
285
+ Scott A. Golder and Michael W. Macy. 2011. Diurnal and seasonal mood vary with work, sleep, and daylength across diverse cultures. Science, 333(6051):1878-1881.
286
+ Hila Gonen, Ganesh Jawahar, Djame Seddah, and Yoav Goldberg. 2020. Simple, interpretable and stable method for detecting words with usage change across corpora. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,
287
+
288
+ pages 538-555, Online. Association for Computational Linguistics.
289
+ Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics.
290
+ William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016a. Diachronic word embeddings reveal statistical laws of semantic change. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1489-1501, Berlin, Germany. Association for Computational Linguistics.
291
+ William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016b. Cultural shift or linguistic drift? Comparing two computational measures of semantic change. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2116-2121, Austin, Texas. Association for Computational Linguistics.
292
+ Valentin Hofmann, Janet Pierrehumbert, and Hinrich Schütze. 2021. Dynamic contextualized word embeddings. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6970-6984, Online. Association for Computational Linguistics.
293
+ Kokil Jaidka, Niyati Chhaya, and Lyle Ungar. 2018. Diachronic degradation of language models: Insights from social media. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 195-200, Melbourne, Australia. Association for Computational Linguistics.
294
+ Ganesh Jawahar and Djamé Seddah. 2019. Contextualized diachronic word representations. In Proceedings of the 1st International Workshop on Computational Approaches to Historical Language Change, pages 35-47, Florence, Italy. Association for Computational Linguistics.
295
+ Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. Statistically significant detection of linguistic change. In Proceedings of the 24th International Conference on World Wide Web, pages 625-635, Florence, Italy. Association for Computing Machinery.
296
+ Vivek Kulkarni, Shubhanshu Mishra, and Aria Haghighi. 2021. LMSOC: An approach for socially sensitive pretraining. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2967-2975, Punta Cana, Dominican Republic. Association for Computational Linguistics.
297
+
298
+ Andrey Kutuzov, Lilja Øvrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic word embeddings and semantic shifts: a survey. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1384-1397, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
299
+ Angeliki Lazaridou, Adhiguna Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liska, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d'Autume, Tomás Kočisky, Sebastian Ruder, Dani Yogatama, Kris Cao, Susannah Young, and Phil Blunsom. 2021. Mind the Gap: Assessing Temporal Generalization in Neural Language Models. In Advances in Neural Information Processing Systems.
300
+ Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
301
+ Daniel Loureiro, Francesco Barbieri, Leonardo Neves, Luis Espinosa Anke, and Jose Camacho-collados. 2022a. TimeLMs: Diachronic language models from Twitter. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 251-260, Dublin, Ireland. Association for Computational Linguistics.
302
+ Daniel Loureiro, Aminette D'Souza, Areej Nasser Muhajab, Isabella A. White, Gabriel Wong, Luis Espinosa Anke, Leonardo Neves, Francesco Barbieri, and Jose Camacho-Collados. 2022b. TempoWiC: An evaluation benchmark for detecting meaning shift in social media.
303
+ Kelvin Luu, Daniel Khashabi, Suchin Gururangan, Karishma Mandyam, and Noah A. Smith. 2022. Time waits for no one! analysis and challenges of temporal misalignment. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5944-5958, Seattle, United States. Association for Computational Linguistics.
304
+ Matej Martinc, Petra Kralj Novak, and Senja Pollak. 2020. Leveraging contextual embeddings for detecting diachronic semantic shift. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4811-4819, Marseille, France. European Language Resources Association.
305
+ Jianmo Ni, Jiacheng Li, and Julian McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 188-197, Hong Kong, China. Association for Computational Linguistics.
306
+ Alexandra Olteanu, Sarah Vieweg, and Carlos Castillo. 2015. What to expect when the unexpected happens: Social media communications across crises. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, Vancouver, BC, Canada. Association for Computing Machinery.
307
+
308
+ Kevin Stowe, Jennings Anderson, Martha Palmer, Leysia Palen, and Ken Anderson. 2018. Improving classification of Twitter behavior during hurricane events. In Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media, pages 67-75, Melbourne, Australia. Association for Computational Linguistics.
309
+
310
+ Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1).
311
+
312
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
313
+
314
+ Seungwon Yang, Haeyong Chung, Xiao Lin, Sunshin Lee, Liangzhe Chen, Andrew Wood, Andrea L. Kavanaugh, Steven D. Sheetz, Donald J. Shoemaker, and Edward A. Fox. 2013. Phasevis: What, when, where, and who in visualizing the four phases of emergency management through the lens of social media. In Proceedings of the 10th International ISCRAM Conference, pages 912-917, Baden-Baden, Germany.
315
+
316
+ # A Appendix: Data
317
+
318
+ # A.1 Data Statistics
319
+
320
+ The Sandy dataset spans 18 days with 23k tweets. The HumAID datasets range from 560 to 9399 tweets, over 1 to 81 days. The T26 datasets range from 1000 to 1442 tweets, over 7 to 56 days. In Tables 6 and 7 we show the dataset statistics for the T26 datasets and HumAID datasets, respectively. Note that the Typhoon Pablo event from the original T26 dataset had only seven unlabelled tweets that could be successfully recovered; we therefore remove it from all experiments.
321
+
322
+ # A.2 Detailed Results for T26 and HumAID
323
+
324
+ In Tables 8 and 9 we provide the detailed evaluation results of the proposed approaches on T26 and HumAID.
325
+
326
+ Progressive Events
327
+
328
+ <table><tr><td>Event</td><td>Dates (MM.DD.YY)</td><td>Total Days</td><td>Tweets</td></tr><tr><td>Colorado Floods (2013)</td><td>09.08.13 - 10.01.13</td><td>19</td><td>1,231</td></tr><tr><td>Sardinia Floods (2013)</td><td>11.16.13 - 11.28.13</td><td>13</td><td>824</td></tr><tr><td>Philippines Floods (2012)</td><td>08.07.12 - 08.15.12</td><td>13</td><td>1,341</td></tr><tr><td>Alberta Floods (2013)</td><td>06.20.13 - 07.16.13</td><td>24</td><td>4,040</td></tr><tr><td>Manila Flood (2013)</td><td>08.17.13 - 08.27.13</td><td>11</td><td>1,068</td></tr><tr><td>Queensland Floods (2013)</td><td>01.17.13 - 02.05.13</td><td>19</td><td>727</td></tr><tr><td>Typhoon Yolanda (2013)</td><td>05.11.13 - 12.30.13</td><td>53</td><td>253</td></tr><tr><td>Australia bushfire (2013)</td><td>10.12.13 - 11.03.13</td><td>22</td><td>1,244</td></tr><tr><td>Colorado wildfires (2012)</td><td>06.08.12 - 07.08.12</td><td>31</td><td>2,901</td></tr><tr><td>Singapore haze (2013)</td><td>06.14.13 - 07.04.13</td><td>18</td><td>1,572</td></tr></table>
329
+
330
+ Instantaneous Events
331
+
332
+ <table><tr><td>Italy earthquakes (2012)</td><td>05.18.12 - 06.14.12</td><td>28</td><td>5,219</td></tr><tr><td>Costa Rica earthquake (2012)</td><td>09.05.12 - 09.21.12</td><td>18</td><td>1,641</td></tr><tr><td>Bohol earthquake (2013)</td><td>10.14.13 - 10.25.13</td><td>12</td><td>1,131</td></tr><tr><td>Guatemala earthquake (2012)</td><td>11.06.12 - 11.25.12</td><td>20</td><td>2,233</td></tr><tr><td>LA airport shootings (2013)</td><td>11.01.13 - 11.12.13</td><td>12</td><td>1,737</td></tr><tr><td>Boston bombings (2013)</td><td>04.15.13 - 06.11.13</td><td>46</td><td>81,172</td></tr><tr><td>West Texas explosion (2013)</td><td>04.18.13 - 05.15.13</td><td>27</td><td>8,152</td></tr><tr><td>Venezuela refinery explosion (2012)</td><td>08.24.12 - 09.05.12</td><td>13</td><td>2,007</td></tr><tr><td>Brazil nightclub fire (2013)</td><td>01.27.13 - 02.11.13</td><td>16</td><td>2,644</td></tr><tr><td>Savar building collapse (2013)</td><td>04.23.13 - 06.01.13</td><td>39</td><td>2,646</td></tr><tr><td>Spain train crash (2013)</td><td>07.24.13 - 08.07.13</td><td>14</td><td>2,288</td></tr><tr><td>Lac Megantic train crash (2013)</td><td>07.06.13 - 07.26.13</td><td>21</td><td>1,755</td></tr><tr><td>NY train crash (2013)</td><td>12.01.13 - 12.08.13</td><td>9</td><td>667</td></tr><tr><td>Glasgow helicopter crash (2013)</td><td>11.29.13 - 12.29.13</td><td>30</td><td>1,541</td></tr><tr><td>Russia meteor (2013)</td><td>02.14.13 - 03.05.13</td><td>19</td><td>4,289</td></tr></table>
333
+
334
+ Table 6: Summary of the T26 datasets. The progressive and instantaneous splits were done manually based on the type of crisis event.
335
+ Progressive Events
336
+
337
+ <table><tr><td>Event (Year)</td><td>Dates (DD.MM.YY)</td><td>Total Days</td><td>Nr. Tweets</td></tr><tr><td>Canada Wildfires (2016)</td><td>17.04.16 - 25.12.16</td><td>253</td><td>2,258</td></tr><tr><td>Hurricane Matthew (2016)</td><td>04.10.16 - 05.12.16</td><td>74</td><td>1,659</td></tr><tr><td>Sri Lanka Floods (2017)</td><td>31.05.17 - 03.07.17</td><td>34</td><td>575</td></tr><tr><td>Hurricane Harvey (2017)</td><td>17.08.17 - 19.09.17</td><td>34</td><td>9,164</td></tr><tr><td>Hurricane Irma (2017)</td><td>06.09.17 - 21.09.17</td><td>16</td><td>9,467</td></tr><tr><td>Hurricane Maria (2017)</td><td>16.09.17 - 02.10.17</td><td>17</td><td>7,328</td></tr><tr><td>Maryland Floods (2018)</td><td>28.05.18 - 07.06.18</td><td>11</td><td>747</td></tr><tr><td>Greece Wildfires (2018)</td><td>24.07.18 - 18.08.18</td><td>26</td><td>1,526</td></tr><tr><td>Kerala Floods (2018)</td><td>17.08.18 - 12.09.18</td><td>27</td><td>8,056</td></tr><tr><td>Hurricane Florence (2018)</td><td>11.09.18 - 17.11.18</td><td>68</td><td>6,359</td></tr><tr><td>California Wildfires (2018)</td><td>10.11.18 - 07.12.18</td><td>28</td><td>7,444</td></tr><tr><td>Cyclone Idai (2019)</td><td>15.03.19 - 16.04.19</td><td>33</td><td>3,944</td></tr><tr><td>Midwestern U.S. Floods (2019)</td><td>25.03.19 - 03.04.19</td><td>26</td><td>1,930</td></tr><tr><td>Hurricane Dorian (2019)</td><td>30.08.19 - 02.09.19</td><td>4</td><td>7,660</td></tr></table>
338
+
339
+ Instantaneous Events
340
+
341
+ <table><tr><td>Ecuador Earthquake (2016)</td><td>17.04.16 - 25.12.16</td><td>253</td><td>1,594</td></tr><tr><td>Italy Earthquake (2016)</td><td>24.08.16 - 29.08.16</td><td>6</td><td>1,240</td></tr><tr><td>Kaikoura Earthquake (2016)</td><td>01.09.16 - 22.11.16</td><td>83</td><td>2,217</td></tr><tr><td>Mexico Earthquake (2017)</td><td>20.09.17 - 06.10.17</td><td>17</td><td>2,036</td></tr><tr><td>Pakistan Earthquake (2019)</td><td>24.09.19 - 26.09.19</td><td>3</td><td>1,991</td></tr></table>
342
+
343
+ Table 7: Summary of the HumAID datasets. The progressive and instantaneous splits were done manually based on the type of crisis event.
344
+
345
+ <table><tr><td rowspan="3">Event</td><td colspan="12">T26</td></tr><tr><td colspan="2">BERT</td><td colspan="2">BERT+TM</td><td colspan="2">BERT+SEP</td><td colspan="2">BERT+DCWE</td><td colspan="2">BERT+TAPH</td><td colspan="2">BERT+TDA</td></tr><tr><td>CONT</td><td>TEMP</td><td>CONT</td><td>TEMP</td><td>CONT</td><td>TEMP</td><td>CONT</td><td>TEMP</td><td>CONT</td><td>TEMP</td><td>CONT</td><td>TEMP</td></tr><tr><td colspan="13">Progressive Events</td></tr><tr><td>Colorado Floods (2013)</td><td>0.309</td><td>0.309</td><td>0.309</td><td>0.309</td><td>0.309</td><td>0.309</td><td>0.309</td><td>0.309</td><td>0.309</td><td>0.309</td><td>0.309</td><td>0.309</td></tr><tr><td>Sardinia Floods (2013)</td><td>0.255</td><td>0.315</td><td>0.310</td><td>0.287</td><td>0.239</td><td>0.285</td><td>0.179</td><td>0.298</td><td>0.179</td><td>0.211</td><td>0.299</td><td>0.288</td></tr><tr><td>Philippines Floods (2012)</td><td>0.276</td><td>0.270</td><td>0.307</td><td>0.269</td><td>0.213</td><td>0.278</td><td>0.213</td><td>0.213</td><td>0.213</td><td>0.213</td><td>0.213</td><td>0.269</td></tr><tr><td>Alberta Floods (2013)</td><td>0.314</td><td>0.202</td><td>0.307</td><td>0.200</td><td>0.300</td><td>0.202</td><td>0.202</td><td>0.202</td><td>0.202</td><td>0.202</td><td>0.296</td><td>0.202</td></tr><tr><td>Manila Floods (2013)</td><td>0.369</td><td>0.369</td><td>0.367</td><td>0.366</td><td>0.337</td><td>0.372</td><td>0.190</td><td>0.350</td><td>0.308</td><td>0.355</td><td>0.380</td><td>0.374</td></tr><tr><td>Queensland Floods (2013)</td><td>0.423</td><td>0.353</td><td>0.486</td><td>0.342</td><td>0.361</td><td>0.331</td><td>0.374</td><td>0.351</td><td>0.318</td><td>0.314</td><td>0.472</td><td>0.355</td></tr><tr><td>Typhoon Yolanda (2013)</td><td>0.211</td><td>0.211</td><td>0.235</td><td>0.260</td><td>0.317</td><td>0.399</td><td>0.211</td><td>0.211</td><td>0.211</td><td>0.211</td><td>0.211</td><td>0.211</td></tr><tr><td>Australia Bushfire (2013)</td><td>0.447</td><td>0.450</td><td>0.583</td><td>0.585</td><td>0.449</td><td>0.522</td><td>0.426</td><td>0.421</td><td>0.422</td><td>0.461</td><td>0.577</td><td>0.547</td></tr><tr><td>Colorado Wildfires (2012)</td><td>0.569</td><td>0.370</td><td>0.584</td><td>0.370</td><td>0.541</td><td>0.335</td><td>0.533</td><td>0.370</td><td>0.446</td><td>0.330</td><td>0.567</td><td>0.222</td></tr><tr><td>Singapore Haze (2013)</td><td>0.363</td><td>0.348</td><td>0.360</td><td>0.340</td><td>0.352</td><td>0.344</td><td>0.357</td><td>0.332</td><td>0.361</td><td>0.349</td><td>0.360</td><td>0.351</td></tr><tr><td colspan="13">Instantaneous Events</td></tr><tr><td>Italy Earthquakes (2012)</td><td>0.332</td><td>0.321</td><td>0.316</td><td>0.285</td><td>0.331</td><td>0.304</td><td>0.287</td><td>0.267</td><td>0.274</td><td>0.316</td><td>0.326</td><td>0.318</td></tr><tr><td>Costa Rica Earthquake (2012)</td><td>0.582</td><td>0.240</td><td>0.564</td><td>0.132</td><td>0.603</td><td>0.102</td><td>0.554</td><td>0.102</td><td>0.537</td><td>0.102</td><td>0.543</td><td>0.102</td></tr><tr><td>Bohol Earthquake (2013)</td><td>0.585</td><td>0.579</td><td>0.566</td><td>0.566</td><td>0.574</td><td>0.568</td><td>0.569</td><td>0.574</td><td>0.574</td><td>0.571</td><td>0.582</td><td>0.577</td></tr><tr><td>Guatemala Earthquake (2012)</td><td>0.568</td><td>0.484</td><td>0.401</td><td>0.437</td><td>0.274</td><td>0.274</td><td>0.425</td><td>0.274</td><td>0.274</td><td>0.274</td><td>0.474</td><td>0.434</td></tr><tr><td>LA Airport Shootings (2013)</td><td>0.534</td><td>0.475</td><td>0.518</td><td>0.465</td><td>0.210</td><td>0.378</td><td>0.376</td><td>0.312</td><td>0.309</td><td>0.192</td><td>0.356</td><td>0.382</td></tr><tr><td>Boston Bombings (2013)</td><td>0.358</td><td>0.340</td><td>0.362</td><td>0.349</td><td>0.360</td><td>0.356</td><td>0.378</td><td>0.300</td><td>0.363</td><td>0.352</td><td>0.354</td><td>0.361</td></tr><tr><td>West Texas Explosion (2013)</td><td>0.411</td><td>0.398</td><td>0.405</td><td>0.396</td><td>0.412</td><td>0.396</td><td>0.396</td><td>0.392</td><td>0.407</td><td>0.405</td><td>0.407</td><td>0.409</td></tr><tr><td>Venezuela Refinery Explosion (2012)</td><td>0.368</td><td>0.347</td><td>0.359</td><td>0.336</td><td>0.360</td><td>0.344</td><td>0.339</td><td>0.339</td><td>0.361</td><td>0.335</td><td>0.362</td><td>0.343</td></tr><tr><td>Brazil Nightclub Fire (2013)</td><td>0.426</td><td>0.431</td><td>0.425</td><td>0.413</td><td>0.416</td><td>0.416</td><td>0.422</td><td>0.302</td><td>0.424</td><td>0.412</td><td>0.431</td><td>0.315</td></tr><tr><td>Savar Building Collapse (2013)</td><td>0.426</td><td>0.352</td><td>0.424</td><td>0.347</td><td>0.404</td><td>0.348</td><td>0.413</td><td>0.227</td><td>0.411</td><td>0.180</td><td>0.413</td><td>0.200</td></tr><tr><td>Spain Train Crash (2013)</td><td>0.463</td><td>0.446</td><td>0.490</td><td>0.539</td><td>0.481</td><td>0.447</td><td>0.355</td><td>0.402</td><td>0.324</td><td>0.460</td><td>0.456</td><td>0.449</td></tr><tr><td>Lac Megantic Train Crash (2013)</td><td>0.319</td><td>0.318</td><td>0.326</td><td>0.318</td><td>0.310</td><td>0.174</td><td>0.289</td><td>0.270</td><td>0.301</td><td>0.210</td><td>0.325</td><td>0.319</td></tr><tr><td>NY Train Crash (2013)</td><td>0.490</td><td>0.573</td><td>0.520</td><td>0.566</td><td>0.490</td><td>0.565</td><td>0.490</td><td>0.490</td><td>0.490</td><td>0.490</td><td>0.490</td><td>0.742</td></tr><tr><td>Glasgow Helicopter Crash (2013)</td><td>0.554</td><td>0.292</td><td>0.527</td><td>0.290</td><td>0.543</td><td>0.292</td><td>0.502</td><td>0.309</td><td>0.390</td><td>0.298</td><td>0.491</td><td>0.321</td></tr><tr><td>Russia Meteor (2013)</td><td>0.392</td><td>0.412</td><td>0.372</td><td>0.339</td><td>0.412</td><td>0.412</td><td>0.296</td><td>0.324</td><td>0.324</td><td>0.305</td><td>0.321</td><td>0.316</td></tr></table>
346
+
347
+ Table 8: Results for the T26 datasets. The progressive and instantaneous splits were done manually based on the type of crisis event.
348
+
349
+ <table><tr><td rowspan="3">Event</td><td colspan="12">HumAID</td></tr><tr><td colspan="2">BERT</td><td colspan="2">BERT+TM</td><td colspan="2">BERT+SEP</td><td colspan="2">BERT+DCWE</td><td colspan="2">BERT+TAPH</td><td colspan="2">BERT+TDA</td></tr><tr><td>CONT</td><td>TEMP</td><td>CONT</td><td>TEMP</td><td>CONT</td><td>TEMP</td><td>CONT</td><td>TEMP</td><td>CONT</td><td>TEMP</td><td>CONT</td><td>TEMP</td></tr><tr><td colspan="13">Progressive Events</td></tr><tr><td>Canada Wildfires (2016)</td><td>0.419</td><td>0.414</td><td>0.420</td><td>0.410</td><td>0.353</td><td>0.319</td><td>0.235</td><td>0.244</td><td>0.248</td><td>0.249</td><td>0.376</td><td>0.367</td></tr><tr><td>Hurricane Matthew (2016)</td><td>0.355</td><td>0.261</td><td>0.396</td><td>0.257</td><td>0.317</td><td>0.131</td><td>0.317</td><td>0.131</td><td>0.335</td><td>0.118</td><td>0.369</td><td>0.273</td></tr><tr><td>Sri Lanka Floods (2017)</td><td>0.092</td><td>0.092</td><td>0.092</td><td>0.092</td><td>0.092</td><td>0.092</td><td>0.092</td><td>0.092</td><td>0.092</td><td>0.092</td><td>0.092</td><td>0.092</td></tr><tr><td>Hurricane Harvey (2017)</td><td>0.635</td><td>0.663</td><td>0.639</td><td>0.669</td><td>0.637</td><td>0.645</td><td>0.589</td><td>0.586</td><td>0.578</td><td>0.587</td><td>0.583</td><td>0.581</td></tr><tr><td>Hurricane Irma (2017)</td><td>0.624</td><td>0.618</td><td>0.639</td><td>0.614</td><td>0.610</td><td>0.579</td><td>0.566</td><td>0.549</td><td>0.568</td><td>0.553</td><td>0.579</td><td>0.545</td></tr><tr><td>Hurricane Maria (2017)</td><td>0.620</td><td>0.628</td><td>0.640</td><td>0.621</td><td>0.603</td><td>0.602</td><td>0.507</td><td>0.575</td><td>0.501</td><td>0.581</td><td>0.600</td><td>0.529</td></tr><tr><td>Maryland Floods (2018)</td><td>0.183</td><td>0.147</td><td>0.197</td><td>0.141</td><td>0.173</td><td>0.077</td><td>0.208</td><td>0.166</td><td>0.188</td><td>0.101</td><td>0.198</td><td>0.155</td></tr><tr><td>Greece Wildfires (2018)</td><td>0.216</td><td>0.199</td><td>0.219</td><td>0.198</td><td>0.212</td><td>0.106</td><td>0.214</td><td>0.104</td><td>0.214</td><td>0.106</td><td>0.232</td><td>0.176</td></tr><tr><td>Kerala Floods (2018)</td><td>0.470</td><td>0.422</td><td>0.421</td><td>0.420</td><td>0.480</td><td>0.382</td><td>0.354</td><td>0.348</td><td>0.341</td><td>0.347</td><td>0.379</td><td>0.346</td></tr><tr><td>Hurricane Florence (2018)</td><td>0.663</td><td>0.510</td><td>0.664</td><td>0.500</td><td>0.658</td><td>0.481</td><td>0.590</td><td>0.435</td><td>0.586</td><td>0.417</td><td>0.649</td><td>0.421</td></tr><tr><td>California Wildfires (2018)</td><td>0.601</td><td>0.484</td><td>0.624</td><td>0.567</td><td>0.571</td><td>0.485</td><td>0.544</td><td>0.455</td><td>0.558</td><td>0.470</td><td>0.575</td><td>0.485</td></tr><tr><td>Cyclone Idai (2019)</td><td>0.372</td><td>0.350</td><td>0.370</td><td>0.350</td><td>0.352</td><td>0.331</td><td>0.287</td><td>0.298</td><td>0.319</td><td>0.294</td><td>0.347</td><td>0.300</td></tr><tr><td>Midwestern U.S. Floods (2019)</td><td>0.300</td><td>0.405</td><td>0.300</td><td>0.362</td><td>0.277</td><td>0.301</td><td>0.137</td><td>0.229</td><td>0.192</td><td>0.217</td><td>0.251</td><td>0.261</td></tr><tr><td>Hurricane Dorian (2019)</td><td>0.560</td><td>0.554</td><td>0.550</td><td>0.559</td><td>0.568</td><td>0.557</td><td>0.553</td><td>0.527</td><td>0.552</td><td>0.470</td><td>0.554</td><td>0.533</td></tr><tr><td colspan="13">Instantaneous Events</td></tr><tr><td>Ecuador Earthquake (2016)</td><td>0.298</td><td>0.186</td><td>0.310</td><td>0.158</td><td>0.260</td><td>0.148</td><td>0.309</td><td>0.163</td><td>0.236</td><td>0.146</td><td>0.311</td><td>0.182</td></tr><tr><td>Italy Earthquake (2016)</td><td>0.395</td><td>0.266</td><td>0.403</td><td>0.260</td><td>0.350</td><td>0.090</td><td>0.118</td><td>0.090</td><td>0.175</td><td>0.090</td><td>0.401</td><td>0.274</td></tr><tr><td>Kaikoura Earthquake (2016)</td><td>0.434</td><td>0.353</td><td>0.426</td><td>0.350</td><td>0.283</td><td>0.251</td><td>0.205</td><td>0.164</td><td>0.229</td><td>0.196</td><td>0.484</td><td>0.266</td></tr><tr><td>Mexico Earthquake (2017)</td><td>0.340</td><td>0.318</td><td>0.341</td><td>0.300</td><td>0.283</td><td>0.262</td><td>0.269</td><td>0.258</td><td>0.245</td><td>0.264</td><td>0.289</td><td>0.281</td></tr><tr><td>Pakistan Earthquake (2019)</td><td>0.273</td><td>0.205</td><td>0.260</td><td>0.200</td><td>0.243</td><td>0.168</td><td>0.203</td><td>0.168</td><td>0.190</td><td>0.162</td><td>0.350</td><td>0.215</td></tr></table>
350
+
351
+ Table 9: Results for the HumAID datasets. The progressive and instantaneous splits were done manually based on the type of crisis event.
352
+
353
+ <table><tr><td>Tweet</td><td>Analysis</td></tr><tr><td>Rep. Michael Grimm says situation in Staten Island is &quot;another Katrina situation&quot;</td><td>TDA correctly identifies Katrina as the name of the storm in the temporal context of Hurricane Sandy, while other models fail.</td></tr><tr><td>#queenscomingtogether Eric Ulrich brought the keg donated by Russos on the bay.</td><td>The adversarial signal forces TDA to learn a time-invariant embedding for the word #queenscomingtogether.</td></tr></table>
354
+
355
+ Table 11: Representative examples showing tweets that the TDA model correctly classifies while other models fail. Refer to Section 6 for details.
thechallengesoftemporalalignmentontwitterduringcrises/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4c8a6d2b543b7164f4167561b030d1c6a890c158d65a4b20465ec32034502123
3
+ size 990804
thechallengesoftemporalalignmentontwitterduringcrises/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7d068cef98b0e5a4976bebf5a148cecec284b367fa5ed7ec84ac67b5785cb2a4
3
+ size 380071
thecuriouscaseofabsolutepositionembeddings/5dfe178e-adb0-45b0-a63b-f83192bfd3a1_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:244726a92dace303e20a934256ed906d7eb5f3f51b546f5a65785c37f738d461
3
+ size 148000
thecuriouscaseofabsolutepositionembeddings/5dfe178e-adb0-45b0-a63b-f83192bfd3a1_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5f6f8f78900a9053beb054a6261724f8f59a8652b4cfd2e74a6190410ce1e036
3
+ size 164821
thecuriouscaseofabsolutepositionembeddings/5dfe178e-adb0-45b0-a63b-f83192bfd3a1_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4d0085e0c7bfcc58134cfd85b59059e83c433046cc050c80b9591310715892ea
3
+ size 902636
thecuriouscaseofabsolutepositionembeddings/full.md ADDED
@@ -0,0 +1,690 @@
 
 
 
 
1
+ # The Curious Case of Absolute Position Embeddings
2
+
3
+ Koustuv Sinha\* Amirhossein Kazemnejad \*
4
+
5
+ Siva Reddy‡ Joelle Pineau†‡ Dieuwke Hupkes† Adina Williams†
6
+
7
+ ‡ McGill University / Mila - Quebec AI; † Meta AI
8
+
9
+ {koustuv.sinha,amirhossein.kazemnejad}@mail.mcgill.ca
10
+
11
+ # Abstract
12
+
13
+ Transformer language models encode the notion of word order using positional information. Most commonly, this positional information is represented by absolute position embeddings (APEs), which are learned from the pretraining data. However, in natural language, it is not absolute position that matters, but relative position, and the extent to which APEs can capture this type of information has not been investigated. In this work, we observe that models trained with APEs over-rely on positional information, to the point that they break down when subjected to sentences with shifted position information. Specifically, when models are subjected to sentences starting from a non-zero position (excluding the effect of priming), they exhibit noticeably degraded performance on zero- to full-shot tasks, across a range of model families and model sizes. Our findings raise questions about the efficacy of APEs in modeling the relativity of position information, and invite further introspection on the sentence and word order processing strategies employed by these models.
14
+
15
+ # 1 Introduction
16
+
17
+ Recently, Transformer (Vaswani et al., 2017) language models (TLMs) have been widely used for natural language applications. Such models incorporate positional encodings: vectors encoding information about the order of words in context. Many models, such as RoBERTa (Liu et al., 2019), GPT3 (Brown et al., 2020) and OPT (Zhang et al., 2022), utilize absolute position embeddings (APEs) that directly encode absolute (linear) word order. APEs appear to contribute to the performance of such models, although when they are removed, some models become sensitive to ablative word scrambles (Sinha et al., 2021), while others work optimally (Haviv et al., 2022). Thus, what precisely APEs contribute remains unclear.
18
+
19
+ ![](images/a4b34448ca99beb746d182035615570b1900d7e27a2a5029af3b69e26543ce95.jpg)
+ ![](images/0d9ad9ea7e025a506c8850e11466ee7164506667fdc6899b09793eef739cd961.jpg)
+ Figure 1: Transformer models with absolute positional embeddings have different representations for sentences starting from non-zero positions. Both panels show the sentence "Who could Thomas observe without distracting Nathan?", once at a zero starting position (positions 0-7) and once at a non-zero starting position (positions 100-107).
29
+
30
+ It is conceivable that APEs may enable the model to handle the relative distances between words. If models were somehow learning relative position information despite using absolute positional embeddings, we would expect sentence encodings to be the same in most cases, regardless of where they appear in the context window. For example, the meaning of "smoking kills" should be constant in "Kim said smoking kills" (positions 2-3) and "It was commonly believed by most adult Americans in the 90s that smoking kills" (positions 13-14), despite the fact that these words appear in different absolute positions. Given this, our central question is: do APEs enable the model to learn the relative distances between the words in a sentence?
31
+
32
+ Prior work has attempted to explore the consequences of APEs using probing methods (Wang et al., 2021). APEs have been found not to capture the meaning of absolute or relative positions (Wang and Chen, 2020), and to bias model output with positional artefacts (Luo et al., 2021); de-correlating token and position information has been found to improve performance (Ke et al., 2021). Haviv et al. (2022) even find that causal TLMs perform adequately without explicit APEs. However, a systematic study on the relativity of positional encodings is still needed.
33
+
34
+ To better understand the relativity of absolute
35
+
36
+ position embeddings, we first need to ascertain the robustness of relative position understanding for a given input. TLMs are typically trained in a batch containing multiple sentences, with a limited sequence window size, which is typically much larger than an average sentence. We hypothesize that a systematic model should encode the same sentence equally throughout this context window. However, evaluating the encoding of a sentence starting from any position in this window in isolation is hard, as the representation of the sentence would depend on the prior context (Misra et al., 2020; Kassner and Schütze, 2020).
37
+
38
+ In this work, we subject models from several different architectures and sizes to phase shifting. In this paradigm, the sentences exposed to the model are given contiguous position identifiers starting from a non-zero position (Figure 1). Such inspection allows us to gauge the model's sentence encodings at different positions, emulating sub-window sentence representation, while factoring out the influence of prior context. We investigate several zero-shot, few-shot and full-shot tasks while shifting the start positions of the sentences. We observe the following:
39
+
40
+ - TLMs display different sub-window sentence representation capabilities, resulting in decreased zero-shot task performance and variability in sentence perplexities.
41
+ - Autoregressive models, including the recently published OPT (Zhang et al., 2022), show erratic zero- and few-shot performance on sub-window representations, highlighting the brittleness of in-context learning evaluation.
42
+ - Masked Language Models (MLMs) encode sentences in non-standard positions better than their autoregressive counterparts.
43
+ - During fine-tuning, models suffer drastically under cross-phase-shifted evaluation, suggesting position-specific overfitting.
44
+
45
+ We aim to raise awareness about issues with APEs, which are still widely used in pre-training large language models. Our results highlight the severity of the position shortcuts taken by the model during pretraining and fine-tuning, and imply that TLMs may have far more variable sub-window sentence representation capabilities than previously assumed. We will
46
+
47
+ release the code and analysis used in this work on GitHub.
48
+
49
+ # 2 Approach
50
+
51
+ Position encodings used by TLMs come in three broad categories: fixed sinusoidal embeddings, as proposed by Vaswani et al. (2017); absolute or learned embeddings, popularized by the BERT (Devlin et al., 2019) family of masked language models; and relative positions (Shaw et al., 2018), used by T5 (Raffel et al., 2020). Wang et al. (2021) present a comprehensive overview of current encoding strategies.
52
+
53
+ Despite being an older method, absolute positional embeddings (APEs) are reportedly better than their relative counterparts on several tasks (Ravishankar et al., 2021), and are still used by the majority of large pre-trained TLMs, including the recently released OPT (Zhang et al., 2022). APEs compute the token representation by adding the input token embedding to the position embedding for the corresponding position: $x_{i} = \theta_{W}[w_{i}] + \theta_{P}[i]$ , where $\theta_W\in \mathbf{R}^{|V|\times d}$ is the token embedding matrix for a vocabulary of size $|V|$ with embedding dimension $d$ , and $\theta_P\in \mathbf{R}^{|T|\times d}$ is the absolute position embedding matrix, where $T$ is the maximum context window size of the model. A sentence $S = [w_{1},w_{2}\dots w_{n}]$ containing $n$ tokens is then mapped during inference to positions $1, 2, \dots, n$ contiguously for all models.
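+ As a concrete illustration, the APE lookup can be written in a few lines of PyTorch. This is a minimal sketch of the computation described above, not code from any particular model release; the sizes and the 0-based positions are illustrative assumptions.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ V, T, d = 50000, 512, 768     # vocab size |V|, context window T, hidden dim d
+ theta_W = nn.Embedding(V, d)  # token embedding matrix
+ theta_P = nn.Embedding(T, d)  # absolute position embedding matrix
+
+ def embed(input_ids: torch.Tensor) -> torch.Tensor:
+     # input_ids: (batch, n); positions are assigned contiguously (0-based here)
+     n = input_ids.size(1)
+     positions = torch.arange(n).unsqueeze(0)        # [[0, 1, ..., n-1]]
+     return theta_W(input_ids) + theta_P(positions)  # x_i = theta_W[w_i] + theta_P[i]
+ ```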
54
+
55
+ TLMs offer various sizes of context window, which is the maximum sequence length in tokens that they can train and infer on. Since this context window is usually much larger than the average sentence length, multiple sentences can be packed together to "fill" the context window during pre-training. This allows TLMs to learn that sentences can start from various positions in their context window. If models trained with APEs do encode the relativity of position, then the sentence representations should be roughly equal throughout the context window, regardless of their starting position.
56
+
57
+ # 2.1 Phase Shift Methodology
58
+
59
+ To understand the relativity of APEs, we examine the model performance under phase shift conditions. Phase shift<sup>2</sup> involves right-shifting the absolute positions of all tokens in the sentence by an equal distance $k$ , such that the tokens are now
60
+
61
+ ![](images/a0fbaf1e7a8fcf0c94f37111c065fc2a219d99a7bf2e5faded1728db529c47dd.jpg)
62
+ Figure 2: Acceptability Scores in BLiMP (Warstadt et al., 2020) dataset across different phase shifts. RoBERTa only supports context window of size $T = 512$ , so we capped the scores to phase shift $k = 300$ to allow for sentences of maximum length in BLiMP to be evaluated.
63
+
64
+ mapped to new positions $1 + k, 2 + k, \ldots, n + k$ , or $x_{i} = \theta_{W}[w_{i}] + \theta_{P}[i + k]$ . As such, phase shifting changes only the absolute positions, but preserves the relative distances between tokens in the sentence. Theoretically, we can shift the positions within the context window as long as $k + n \leq T$ . For example, given a phase shift $k = 100$ and a sentence of length $n$ , we could have the following vector of position ids:
65
+
66
+ $$
67
+ \vec{p} = [101, 102, 103, \dots, n + 100]
68
+ $$
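+ In implementation terms, a phase shift only changes the position ids passed to the model; the tokens themselves are untouched. The following is a minimal sketch with Hugging Face GPT-2 (our illustration, not the authors' released code; note the vector above is 1-indexed, while most implementations are 0-based). The exponential of the returned loss gives the sentence perplexity used later for the acceptability comparison.
+
+ ```python
+ import torch
+ from transformers import GPT2LMHeadModel, GPT2TokenizerFast
+
+ tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
+ model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
+
+ def shifted_loss(sentence: str, k: int) -> float:
+     ids = tokenizer(sentence, return_tensors="pt").input_ids  # shape (1, n)
+     n = ids.size(1)
+     assert k + n <= model.config.n_positions  # the shift must fit the context window
+     position_ids = torch.arange(k, k + n).unsqueeze(0)  # [k, k+1, ..., k+n-1]
+     with torch.no_grad():
+         out = model(ids, position_ids=position_ids, labels=ids)
+     return out.loss.item()  # mean token negative log-likelihood
+
+ sent = "Who could Thomas observe without distracting Nathan?"
+ print(shifted_loss(sent, k=0), shifted_loss(sent, k=100))
+ ```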
69
+
70
+ While computing the task scores and perplexities of the models, we observed that all of the models exhibit poor task performance on phase shifts. Due to the non-shiftable nature of the [CLS] token in masked language models (MLMs), we first fix the position of the [CLS] token to the start position during phase shifting, which results in significantly improved performance for all models:
71
+
72
+ $$
73
+ \vec {p} = [ 1, 1 0 2, 1 0 3, \dots , n + 1 0 0 ]
74
+ $$
75
+
76
+ Furthermore, we observed yet another marked improvement in task performance when we use special tokens in the beginning of the sentence: typically the end-of-sentence ([EOS]) token in case of MLM models (RoBERTa, BART). An explanation for this ambiguity in results is that typically when models are pre-trained, multiple sentences are packed together in the context window by delimiting the start of each sentence with an [EOS]
77
+
78
+ ![](images/800aca84a9953bb8cd1a29a356867a5bfd7053984d571449c7653c01a864114e.jpg)
79
+ Figure 3: Distribution of sentences in BLiMP (Warstadt et al., 2020) having the lowest perplexities (i.e., are deemed most acceptable) for each phase shift.
80
+
81
+ ![](images/af31f621dc42c2e2962004387743e55a8f9b03b7e7d854907d051f4838706aa9.jpg)
82
+
83
+ token $^{3}$ . Thus, in all of our results, we opt for this configuration (adding an [EOS] token before the sentence) to ensure fairer evaluation for all model families. Concretely, the input to a model uses the following template $^{4}$ :
84
+
85
+ $$
86
+ \text{[CLS] [EOS] <sentence>}
87
+ $$
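+ In code, the pinned-[CLS] variant of the position id vector can be built as in the following sketch (our illustration; assumes 0-based positions, as in most implementations):
+
+ ```python
+ import torch
+
+ def phase_shifted_position_ids(n_tokens: int, k: int) -> torch.Tensor:
+     pos = torch.arange(n_tokens) + k  # shift every position by k
+     pos[0] = 0                        # ...but pin [CLS] to the start position
+     return pos.unsqueeze(0)
+
+ print(phase_shifted_position_ids(8, 100))
+ # tensor([[  0, 101, 102, 103, 104, 105, 106, 107]])
+ ```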
88
+
89
+ # 3 Impact of phase shifts on grammatical acceptability
90
+
91
+ First, we investigate the impact of phase shifting on the model performance. We compute the perplexities of several publicly available models—RoBERTa (Liu et al., 2019), BART (Lewis et al., 2020), GPT2 (Radford et al., 2019) and OPT (Zhang et al., 2022)—to evaluate the grammatical acceptability capabilities of the model, using the BLiMP (Warstadt et al., 2020) benchmark. We compute the task score by comparing grammatical and ungrammatical sentence perplexities, and applying the phase shift in increasing values of $k$ to the sentences and models (Figure 2).
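+ The scoring itself reduces to a pairwise comparison: a BLiMP pair counts as correct when the grammatical sentence receives the lower loss under the given phase shift. Below is a sketch of this comparison, reusing the `shifted_loss` helper from the earlier sketch (for MLMs, scoring would instead rely on pseudo-log-likelihoods, cf. Salazar et al., 2020):
+
+ ```python
+ def blimp_accuracy(pairs, k: int) -> float:
+     # pairs: list of (grammatical, ungrammatical) sentence strings
+     correct = sum(shifted_loss(good, k) < shifted_loss(bad, k) for good, bad in pairs)
+     return correct / len(pairs)
+
+ demo_pairs = [("The cats annoy Tim.", "The cats annoys Tim.")]  # illustrative pair
+ print(blimp_accuracy(demo_pairs, k=0), blimp_accuracy(demo_pairs, k=100))
+ ```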
92
+
93
+ We observe that the task performance of all models, except for RoBERTa, drastically suffers from phase shifting. Autoregressive models in particular display worse results. This is likely due to a mismatch between the position information learned under
94
+
95
+ ![](images/1951f906975a8488e31aeb5441e4be51a05e349ee91fc1c3aa3058dff554ee43.jpg)
96
+ Figure 4: Aggregate performance of OPT family on six NLP tasks when various phase shifts are applied.
97
+
98
+ ![](images/652b5a46ac578dd879133750f79035d955681e22b32c3156e1dec249d2b2a421.jpg)
99
+
100
+ ![](images/5c362e3e6037f8a86d885031194a739bcf1822e1d9891b610335b3328aa6b405.jpg)
101
+
102
+ the causal language modelling objective and the position information provided to the model during phase shift (Haviv et al., 2022). We also compare the perplexities of each sentence across different phase shifts and plot the frequency of sentences having the lowest perplexity at each $k$ (Figure 3). We observe for GPT2 that more than $70\%$ of the sentences have their best perplexity at $k = 0$ , highlighting a severe zero-position bias. $\mathrm{OPT}_{350\mathrm{M}}$ has better sub-window sentence representation capacity than the similarly sized GPT2, which is also evident from the acceptability results in Figure 2.
103
+
104
+ # 4 Impact of phase shifts on in-context learning
105
+
106
+ More recently, zero-shot and few-shot inference, commonly referred to as in-context learning, have become a de facto standard in evaluating pretrained language models (Brown et al., 2020). In this approach, the model's predictions are produced by conditioning it on certain prompts, such as instructions (zero-shot setting) or a few examples of input-output pairs (few-shot setup). In both cases, the model faces an extended input text, and we suspect it will be affected by deficiencies of APEs. To evaluate this hypothesis, we employ an experimental setup similar to §3. Under zero-shot and five-shot inference regimes, we assess the model performance on standard NLP tasks when it is fed with inputs at increasing values of phase shift. We choose the OPT model family because it is available in a wide range of sizes (125M to 30B parameters), which allows us to examine the behavior of APEs at different scales. Moreover, our evaluations take into account four tasks reported in the original
107
+
108
+ ![](images/6705fe0c23ee3566e0ba31ccdd85ef66709fdfc2a4f543ced7937d4bf856a424.jpg)
109
+ Figure 5: Distribution of prompts with best accuracy across all six tasks.
110
+
111
+ paper: Winogrande (Sakaguchi et al., 2020), COPA (Gordon et al., 2012), PIQA (Bisk et al., 2020), and ARC (Clark et al., 2018), as well as two classification datasets from the GLUE benchmark (Wang et al., 2019): MRPC and RTE. We provide an aggregated view of the models' performance on all six accuracy-dominated benchmarks in Figure 4. The detailed plots for each task are in Appendix B.
112
+
113
+ In most tasks, the performance deteriorates when the model processes inputs at any phase shift other than zero, especially in zero-shot inference. More importantly, the model's performance is not always adversely affected by phase shifts. In fact, Figure 5 shows that non-zero starting positions result in the best accuracy for many prompts. This erratic performance is present in all model sizes, and scaling the number of parameters does not help. Furthermore, one can see that larger models are more affected by shifted starting positions, which suggests that absolute positional embeddings might need more data or training as the number of parameters increases.
114
+
115
+ ![](images/0dc2aa83157477957511bed9a754b9e73f6f394b631276e37ae2695fad599a35.jpg)
116
+ Figure 6: GLUE task heatmap with varying fine-tuning train and test phase shifts, averaged across all models. Darker colors represent better task performance.
117
+
118
+ # 5 Impact of phase-shifts on fine-tuning
119
+
120
+ Finally, we investigate the effect of phase shifts in fine-tuning. We ask whether the models can generalize to out-of-phase sentences for a given task. We train RoBERTa, BART, GPT2 and OPT models on the CoLA, RTE and MRPC tasks from the GLUE benchmark (Wang et al., 2019) and evaluate them on phase shifts. We choose these three relatively small tasks in order to decrease the number of gradient updates to position embeddings during fine-tuning. We perform a cross-phase analysis by training and evaluating across different phase shifts $(k = 0, 100, 200, 300)$ for all models on the same set of datasets, and show the averaged performance. We observe that, for all models, the task performance drops during out-of-phase evaluation (non-diagonals in Figure 6).
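+ Concretely, the cross-phase grid in Figure 6 amounts to fine-tuning with one shift and evaluating with another. The following is a sketch of the evaluation side (our reconstruction, with assumed names: `clf` is any Hugging Face sequence classification model that accepts `position_ids`, and `eval_loader` yields `(input_ids, labels)` batches):
+
+ ```python
+ import torch
+
+ def accuracy_at_shift(clf, eval_loader, k: int) -> float:
+     correct, total = 0, 0
+     for input_ids, labels in eval_loader:
+         n = input_ids.size(1)
+         position_ids = torch.arange(k, k + n).unsqueeze(0).expand_as(input_ids)
+         with torch.no_grad():
+             logits = clf(input_ids, position_ids=position_ids).logits
+         correct += (logits.argmax(-1) == labels).sum().item()
+         total += labels.numel()
+     return correct / total
+ ```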
121
+
122
+ The drop in performance when evaluating out-of-phase sentences might simply be attributed to overfitting on position information during fine-tuning. However, we observe that for all tasks, training and evaluating on the same phase shift is worse when $k \neq 0$ (diagonals in Figure 6). Out-of-phase training appears to be worst for CoLA, which suffers drastically when fine-tuning on different phase shifts. These results highlight a potential task data bias with respect to different positions.
123
+
124
+ # 6 Conclusion
125
+
126
+ In this work, we investigate the abilities of APEs in encoding the relative positions of the tokens in an input. We observe that TLMs using APEs encode sentences differently based on the starting position of the sentence in the context window. This result has major implications for the way we perceive the sentence processing capabilities of TLMs. Specifically, we observe that the representation of the same sentence varies depending on where it is in the context window, to the point that it impacts zero-shot, few-shot and full-shot task performance on sub-window sentences. Future work could leverage
127
+
128
+ the start position in building robust and position-generalizable models. We hope our work can inform the community on the pitfalls of using APEs, and inspire the development and adoption of alternative relative position embedding based approaches.
129
+
130
+ # Limitations
131
+
132
+ Our work primarily focuses on evaluating the relative position encoding of APEs. We do not focus on the relative position embeddings (Shaw et al., 2018; Raffel et al., 2020) (RPE) as our method of phase-shift analysis is not applicable to those classes of models. RPEs employ a window based position information computation on the fly, which does not require it to store embeddings uniquely for each position. Thus, a phase shift in RPE would not change the sentence processing pipeline, as the model recomputes the position information based on the shifted window. Thus, we need different tools to study the relative position encoding of RPE than the one proposed in this paper.
133
+
134
+ We also acknowledge that our study is primarily focused on English language data from BLiMP and GLUE. It is likely that the same results would hold in a multilingual model; however, since many languages have more flexible word order than English, this should be investigated in follow-up work.
135
+
136
+ # Ethical Consideration
137
+
138
+ Our work aims at understanding the difference in sentence representation caused by shifting position information. In practice, this could yield unintended results from a TLM deployed in production. Since we observe a large variation in results, we advise caution when deploying TLMs in sensitive real-world applications, as the relative positioning of a given sentence might evoke different responses from the model. We hope our work can motivate the use of better positional encoding schemes when pre-training TLMs in the future.
139
+
140
+ # Acknowledgements
141
+
142
+ We would like to thank Kanishka Misra, Shagun Sodhani, Stephen Roller and Kushal Arora for their feedback on the initial versions of this draft. We are also grateful for anonymous reviewers' feedback. Siva Reddy acknowledges the support by the Facebook CIFAR AI Chair program.
143
+
144
+ # References
145
+
146
+ Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru, Giri Anantharaman, Xian Li, Shuhui Chen, Halil Akin, Mandeep Baines, Louis Martin, Xing Zhou, Punit Singh Koura, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Mona T. Diab, Zornitsa Kozareva, and Ves Stoyanov. 2021. Efficient large scale language modeling with mixtures of experts. CoRR, abs/2112.10684.
147
+ Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. CoRR, abs/2004.05150.
148
+ Yonatan Bisk, Rowan Zellers, Ronan LeBras, Jianfeng Gao, and Yejin Choi. 2020. PIQA: reasoning about physical commonsense in natural language. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7432-7439. AAAI Press.
149
+ Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. 2021. GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow.
150
+ Sidney Black, Stella Biderman, Eric Hallahan, Quentin Gregory Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Martin Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. 2022. GPT-neox-20b: An open-source autoregressive language model. In *Challenges & Perspectives in Creating Large Language Models*.
151
+ Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
152
+ Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek
153
+
154
+ Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. PaLM: Scaling language modeling with pathways.
155
+ Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the AI2 reasoning challenge. CoRR, abs/1803.05457.
156
+ Róbert Csordás, Kazuki Irie, and Juergen Schmidhuber. 2021. The devil is in the detail: Simple tricks improve systematic generalization of transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 619-634, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
157
+ Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978-2988, Florence, Italy. Association for Computational Linguistics.
158
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
159
+ William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).
160
+ Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2021. A framework for few-shot language model evaluation.
161
+ Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing
162
+
163
+ textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 1-9, Prague. Association for Computational Linguistics.
164
+ Andrew Gordon, Zornitsa Kozareva, and Melissa Roemmele. 2012. SemEval-2012 task 7: Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 394–398, Montréal, Canada. Association for Computational Linguistics.
165
+ Adi Haviv, Ori Ram, Ofir Press, Peter Izsak, and Omer Levy. 2022. Transformer Language Models without Positional Encodings Still Learn Positional Information. ArXiv preprint, abs/2203.16634.
166
+ Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni. 2020. Compositionality decomposed: How do neural networks generalise? (extended abstract). In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pages 5065-5069. International Joint Conferences on Artificial Intelligence Organization. Journal track.
167
+ Nora Kassner and Hinrich Schütze. 2020. Negated and misprimed probes for pretrained language models: Birds can talk, but cannot fly. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7811-7818, Online. Association for Computational Linguistics.
168
+ Guolin Ke, Di He, and Tie-Yan Liu. 2021. Rethinking positional encoding in language pre-training. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
169
+ Shun Kiyono, Sosuke Kobayashi, Jun Suzuki, and Kentaro Inui. 2021. SHAPE: Shifted absolute position embedding for transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3309-3321, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
170
+ Brenden M. Lake and Marco Baroni. 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In ICML.
171
+ Hector J. Levesque, Ernest Davis, and L. Morgenstern. 2011. The winograd schema challenge. In KR.
172
+ Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,
173
+
174
+ pages 7871-7880, Online. Association for Computational Linguistics.
175
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
176
+ Ziyang Luo, Artur Kulmizev, and Xiaoxi Mao. 2021. Positional artefacts propagate through masked language model embeddings. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5312-5327, Online. Association for Computational Linguistics.
177
+ Brian W. Matthews. 1975. Comparison of the predicted and observed secondary structure of t4 phage lysozyme. Biochimica et Biophysica Acta, 405(2):442-451.
178
+ Kanishka Misra. 2022. minicons: Enabling flexible behavioral and representational analyses of transformer language models. ArXiv preprint, abs/2203.13112.
179
+ Kanishka Misra, Allyson Ettinger, and Julia Rayz. 2020. Exploring BERT's sensitivity to lexical cues using tests from semantic priming. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4625-4635, Online. Association for Computational Linguistics.
180
+ Santiago Ontanon, Joshua Ainslie, Zachary Fisher, and Vaclav Cvicek. 2022. Making transformers solve compositional tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3591-3607, Dublin, Ireland. Association for Computational Linguistics.
181
+ Ofir Press, Noah Smith, and Mike Lewis. 2022. Train short, test long: Attention with linear biases enables input length extrapolation. In International Conference on Learning Representations.
182
+ Ofir Press, Noah A. Smith, and Mike Lewis. 2021. Shortformer: Better language modeling using shorter inputs. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5493-5505, Online. Association for Computational Linguistics.
183
+ Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
184
+ Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
185
+
186
+ Maithra Raghu, Thomas Unterthiner, Simon Kornblith, Chiyuan Zhang, and Alexey Dosovitskiy. 2021. Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems, 34:12116-12128.
187
+ Vinit Ravishankar, Andrey Kutuzov, Lilja Øvrelid, and Erik Velldal. 2021. Multilingual ELMo and the effects of corpus sampling. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 378-384, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.
188
+ Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Winogrande: An adversarial winograd schema challenge at scale. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8732-8740. AAAI Press.
189
+ Julian Salazar, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. 2020. Masked language model scoring. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2699-2712, Online. Association for Computational Linguistics.
190
+ Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464-468, New Orleans, Louisiana. Association for Computational Linguistics.
191
+ Koustuv Sinha, Robin Jia, Dieuwke Hupkes, Joelle Pineau, Adina Williams, and Douwe Kiela. 2021. Masked language modeling and the distributional hypothesis: Order word matters pre-training for little. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2888-2913, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
192
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
193
+ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
194
+
195
+ Ben Wang. 2021. Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX. https://github.com/kingoflolz/mesh-transformer-jax.
196
+ Benyou Wang, Lifeng Shang, Christina Lioma, Xin Jiang, Hao Yang, Qun Liu, and Jakob Grue Simonsen. 2021. On position embeddings in BERT. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
197
+ Yu-An Wang and Yun-Nung Chen. 2020. What do position embeddings learn? An empirical study of pre-trained language model positional encoding. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6840-6849, Online. Association for Computational Linguistics.
198
+ Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2020. BLiMP: The benchmark of linguistic minimal pairs for English. Transactions of the Association for Computational Linguistics, 8:377-392.
199
+ Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.
200
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
201
+ Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. OPT: Open pre-trained transformer language models. arXiv preprint, abs/2205.01068.
202
+
203
+ # A Experiment Details
204
+
205
+ # A.1 Models
206
+
207
+ We used 11 publicly available pretrained language models in this work, spanning three architecture families: encoder-only, sequence-to-sequence, and autoregressive models. All of them use absolute positional embeddings (APEs) that are learned during pretraining. In §4, we follow the standard practice for in-context learning evaluation (Brown et al., 2020; Black et al., 2022; Gao et al., 2021) and use autoregressive models. In our initial experiments, we found GPT2 to behave similarly to the OPT models, and since the OPT models are available in a wider range of sizes, we primarily focus on them for these experiments. In the fine-tuning (§5) and acceptability (§3) experiments, we assess all model families. However, because of the computational costs associated with these experiments, we opt for model variants with $< 1$ B parameters. The details of all models can be found in Table 1. We use the HuggingFace (Wolf et al., 2020) model hub to load, fine-tune, and run inference with all models.
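+
+ For concreteness, a minimal sketch of loading one checkpoint from each architecture family through the hub (the identifiers below are the public HuggingFace names; the full set of variants we evaluate is listed in Table 1):
+
+ ```python
+ from transformers import (
+     AutoModelForCausalLM,   # decoder-only: GPT2, OPT
+     AutoModelForMaskedLM,   # encoder-only: RoBERTa
+     AutoModelForSeq2SeqLM,  # encoder-decoder: BART
+     AutoTokenizer,
+ )
+
+ # One representative public checkpoint per architecture family.
+ checkpoints = {
+     "encoder-only": ("roberta-base", AutoModelForMaskedLM),
+     "encoder-decoder": ("facebook/bart-base", AutoModelForSeq2SeqLM),
+     "decoder-only": ("facebook/opt-125m", AutoModelForCausalLM),
+ }
+
+ for family, (name, auto_cls) in checkpoints.items():
+     tokenizer = AutoTokenizer.from_pretrained(name)
+     model = auto_cls.from_pretrained(name)
+     n_params = sum(p.numel() for p in model.parameters())
+     print(f"{family}: {name} ({n_params / 1e6:.0f}M parameters)")
+ ```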
208
+
209
+ # A.2 Datasets
210
+
211
+ We use BLiMP (Warstadt et al., 2020) for the grammatical acceptability experiments in §3, as it is typically employed in an inference-only setting and does not require additional training. For §5, we take three tasks from the standard language understanding benchmark GLUE (Wang et al., 2019), which is often used for fine-tuning language models: MRPC, RTE, and CoLA. In addition to these three tasks, we use four other datasets, COPA, PIQA, WinoGrande, and ARC, on which the OPT family has previously demonstrated good performance (Zhang et al., 2022). Table 2 shows the statistics of all datasets, and the following provides a brief description of each:
212
+
213
+ - BLiMP (Warstadt et al., 2020) is a challenge set designed to measure a model's ability to distinguish between acceptable and unacceptable English sentences. This benchmark consists of synthetic examples created from expert-crafted grammars, where each instance comes in two versions: one acceptable and one unacceptable.
214
+ - COPA (Gordon et al., 2012) is an open-domain commonsense causal reasoning task, where the model is given a premise and must correctly identify its cause or effect. COPA consists of short hand-crafted sentences and is provided as a multi-choice task.
217
+
218
+ - PIQA (Bisk et al., 2020) is a physical commonsense benchmark dataset that challenges language models' understanding of the physical world. Given a physical goal, a model must choose the more plausible of two candidate solutions. This benchmark is used in the multi-choice format.
219
+ - WinoGrande (Sakaguchi et al., 2020) is a commonsense reasoning benchmark based on the Winograd Schema Challenge (WSC) (Levesque et al., 2011) with increased scale and difficulty. The dataset is framed as a pronoun resolution problem, where the model must resolve an ambiguous pronoun in a given context.
220
+ - ARC (Clark et al., 2018) is collected from grade-school-level science questions commonly asked in exams. This question-answering dataset is provided in a multi-choice QA format suitable for evaluating pretrained language models. We use the "easy" subset of this benchmark.
221
+ - MRPC (Dolan and Brockett, 2005) is a paraphrase identification dataset collected from online news websites and has become a standard benchmark in the NLP community. We follow previous work and treat the data as a text classification task.
222
+ - RTE (Giampiccolo et al., 2007) is one of the original subtasks of the GLUE benchmark and comprises textual entailment challenges. We follow the standard format and use the Natural Language Inference (NLI) protocol for this dataset.
223
+ - CoLA (Warstadt et al., 2019) is a linguistic acceptability dataset, where each example is an English sentence annotated with a binary label indicating whether it is grammatical. This is a text classification dataset, and we follow the standard protocol and report the Matthews correlation coefficient (Matthews, 1975).
224
+
225
+ <table><tr><td>Model</td><td>Type</td><td>Pretraining Objective</td><td>Context Size</td><td>First Position</td><td># Layers</td><td>Hidden Size</td><td># Params</td></tr><tr><td colspan="8">RoBERTa family (Liu et al., 2019)</td></tr><tr><td>RoBERTaBASE</td><td>encoder-only</td><td>Masked Language Modeling</td><td>514</td><td>2</td><td>12</td><td>768</td><td>123M</td></tr><tr><td>RoBERTaLARGE</td><td>encoder-only</td><td>Masked Language Modeling</td><td>514</td><td>2</td><td>24</td><td>1024</td><td>325M</td></tr><tr><td colspan="8">BART family (Lewis et al., 2020)</td></tr><tr><td>BARTBASE</td><td>encoder-decoder</td><td>Masked Language Modeling</td><td>1024</td><td>2</td><td>6</td><td>768</td><td>140M</td></tr><tr><td>BARTLARGE</td><td>encoder-decoder</td><td>Masked Language Modeling</td><td>1024</td><td>2</td><td>12</td><td>1024</td><td>400M</td></tr><tr><td colspan="8">GPT2 family (Radford et al., 2019)</td></tr><tr><td>GPT2</td><td>decoder-only</td><td>Next Token Prediction</td><td>1024</td><td>0</td><td>12</td><td>768</td><td>125M</td></tr><tr><td>GPT2MEDIUM</td><td>decoder-only</td><td>Next Token Prediction</td><td>1024</td><td>0</td><td>24</td><td>1024</td><td>345M</td></tr><tr><td colspan="8">OPT family (Zhang et al., 2022)</td></tr><tr><td>OPT125M</td><td>decoder-only</td><td>Next Token Prediction</td><td>2048</td><td>2</td><td>12</td><td>768</td><td>125M</td></tr><tr><td>OPT350M</td><td>decoder-only</td><td>Next Token Prediction</td><td>2048</td><td>2</td><td>24</td><td>1024</td><td>350M</td></tr><tr><td>OPT2.7B</td><td>decoder-only</td><td>Next Token Prediction</td><td>2048</td><td>2</td><td>32</td><td>2560</td><td>2.7B</td></tr><tr><td>OPT13B</td><td>decoder-only</td><td>Next Token Prediction</td><td>2048</td><td>2</td><td>40</td><td>5120</td><td>13B</td></tr><tr><td>OPT30B</td><td>decoder-only</td><td>Next Token Prediction</td><td>2048</td><td>2</td><td>48</td><td>7168</td><td>30B</td></tr></table>
226
+
227
+ Table 1: Details of the models we used in this paper.
228
+
229
+ <table><tr><td>Dataset</td><td># Train</td><td># Test/Validation</td></tr><tr><td>BLiMP</td><td>-</td><td>67000</td></tr><tr><td>COPA</td><td>400</td><td>100</td></tr><tr><td>PIQA</td><td>16113</td><td>1838</td></tr><tr><td>WinoGrande</td><td>40398</td><td>1267</td></tr><tr><td>ARC (Easy)</td><td>2251</td><td>2376</td></tr><tr><td>MRPC</td><td>3668</td><td>408</td></tr><tr><td>RTE</td><td>2490</td><td>277</td></tr><tr><td>CoLA</td><td>8551</td><td>1043</td></tr></table>
230
+
231
+ Table 2: Statistics of the datasets used in this work.
232
+
233
+ <table><tr><td>Parameter</td><td>Value</td></tr><tr><td>Learning rate</td><td>{0.0001, 0.0002, 0.0003}</td></tr><tr><td>Batch size</td><td>{16, 32}</td></tr><tr><td># Train Epochs</td><td>10</td></tr><tr><td>Early Stopping</td><td>On</td></tr><tr><td>Early Stopping Tolerance</td><td>3</td></tr><tr><td>Optimizer</td><td>AdamW</td></tr><tr><td>Learning Rate Schedule</td><td>Linear</td></tr><tr><td>Weight Decay</td><td>0.0</td></tr><tr><td>Warm Up</td><td>6% of initial training steps</td></tr></table>
234
+
235
+ Table 3: Summary of hyperparameters used in the fine-tuning experiments.
236
+
237
+ # A.3 Grammatical acceptability
238
+
239
+ We use all 67 subsets (a total of 67K data instances) of BLiMP (Warstadt et al., 2020). A model scores 1 on an example if it assigns a lower perplexity to the grammatical version of that example. We report the average score across the entire dataset for starting positions shifted in intervals of 10. The inputs are fed to the models in the format explained in §2.1. Recall that perplexities are ill-defined for masked language models. Thus, we follow the formulation of Salazar et al. (2020) to compute a pseudo-perplexity for RoBERTa and BART. We adopt the Minicons (Misra, 2022) library to compute the perplexities, which provides a unified interface for models hosted on HuggingFace (Wolf et al., 2020).
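+
+ As an illustration of this scoring rule, the sketch below compares total log-probabilities for an autoregressive model (a lower perplexity corresponds to a higher total log-probability for pairs of equal length). The minimal pair is a hypothetical example; for RoBERTa and BART, the per-token scores would instead be the pseudo-log-likelihoods of Salazar et al. (2020):
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("gpt2")
+ model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
+
+ def total_log_prob(text):
+     # Sum of log-probabilities of each token given its prefix.
+     ids = tokenizer(text, return_tensors="pt")["input_ids"]
+     with torch.no_grad():
+         logits = model(ids).logits
+     log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
+     token_scores = log_probs.gather(-1, ids[:, 1:, None]).squeeze(-1)
+     return token_scores.sum().item()
+
+ # Hypothetical minimal pair: score 1 if the grammatical version is preferred.
+ good, bad = "The cats annoy Tim.", "The cats annoys Tim."
+ print(int(total_log_prob(good) > total_log_prob(bad)))
+ ```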
240
+
241
+ # A.4 Prompting
242
+
243
+ For evaluating zero-shot inference and in-context learning, we make use of the EleutherAI Language Model Evaluation Harness (Gao et al., 2021), an open-source library for evaluating autoregressive pretrained language models (Black et al., 2022). In the zero-shot setting, each example is converted to a prompt using task-specific templates. Then, the prompt is fed to the language model to elicit the answer. Similarly, in the few-shot setup, a prompt is created by concatenating a few dataset examples based on the same template, which is prepended as context to each validation instance. In our experiments, we use the default templates provided by the EleutherAI Language Model Evaluation Harness, which can be found in Table 4. Task performance is computed over the validation set of each dataset due to the lack of public test sets, except for ARC, where we evaluate the models on the test set. We set the number of few-shot examples to five and randomly sample them from the training set of each dataset. We report the few-shot results averaged over five random seeds. Note that feeding inputs to the models still follows the same protocol introduced in §2.1.
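+
+ As a rough sketch of how such prompts are assembled (the template mirrors the question-answering rows of Table 4; the helper and field names are illustrative, not the harness's internal API):
+
+ ```python
+ import random
+
+ TEMPLATE = "Question: {question}\nAnswer: {answer}"
+
+ def build_few_shot_prompt(train_examples, test_example, k=5, seed=0):
+     # Concatenate k sampled demonstrations and prepend them as context
+     # to the test instance, leaving its answer for the model to complete.
+     rng = random.Random(seed)
+     shots = rng.sample(train_examples, k)
+     context = "\n\n".join(TEMPLATE.format(**ex) for ex in shots)
+     query = f"Question: {test_example['question']}\nAnswer:"
+     return context + "\n\n" + query
+
+ train = [{"question": f"toy question {i}", "answer": f"toy answer {i}"} for i in range(20)]
+ print(build_few_shot_prompt(train, {"question": "At what temperature does water boil?"}))
+ ```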
246
+
247
+ # A.5 Fine-tuning
248
+
249
+ We fine-tune all models on the CoLA, RTE and MRPC tasks from the GLUE benchmark with different values of phase shift $k$, and evaluate across all possible phase shifts.
250
+
251
+ <table><tr><td>Dataset</td><td colspan="2">Template</td></tr><tr><td rowspan="2">COPA</td><td>Prompt</td><td>&lt;premise&gt; because/therefore &lt;possible-continuation&gt;</td></tr><tr><td>Example</td><td>The water in the teapot started to boil therefore the teapot whistled.</td></tr><tr><td rowspan="2">PIQA</td><td>Prompt</td><td>Question: &lt;question&gt;\n Answer: &lt;possible-answer&gt;</td></tr><tr><td>Example</td><td>Question: How can I quickly clean my blender without washing? \n Answer: Put some ice, water, and a half cup of baking soda in the blender and puree for 3 min.</td></tr><tr><td rowspan="2">WinoGrande</td><td>Prompt</td><td>&lt;context&gt; because &lt;replaced-pronoun&gt; &lt;continuation&gt;</td></tr><tr><td>Example</td><td>Angela was better suited to conduct the science experiment than Katrina because Katrina was less disciplined.</td></tr><tr><td rowspan="2">ARC</td><td>Prompt</td><td>Question: &lt;question&gt;\n Answer: &lt;possible-answer&gt;</td></tr><tr><td>Example</td><td>Question: Amanda is learning about different adaptations of animals. Which is an example of a behavioral adaptation? \n Answer: migration of songbirds</td></tr><tr><td rowspan="2">MRPC</td><td>Prompt</td><td>Sentence 1: &lt;sentence1&gt;\n Sentence 2: &lt;sentence2&gt;\n Question: Do both sentences mean the same thing? \n Answer: &lt;label&gt;</td></tr><tr><td>Example</td><td>Sentence 1: Inamed shares closed down nearly 12 percent on Nasdaq, where it was one of the top percentage losers. \n Sentence 2: Inamed shares dropped as much as about 16 percent on Nasdaq, where it was one of the top percentage losers. \n Question: Do both sentences mean the same thing? \n Answer: yes</td></tr><tr><td rowspan="2">RTE</td><td>Prompt</td><td>&lt;premise&gt;\n Question: &lt;sentence2&gt;. True or False? \n Answer: &lt;label&gt;</td></tr><tr><td>Example</td><td>United States astronaut Sunita Williams, currently on board the International Space Station, has today broken the record for... \n Question: Anousheh Ansari paid to go in space. True or False? \n Answer: False</td></tr><tr><td rowspan="2">CoLA</td><td>Prompt</td><td>&lt;sentence&gt; \n Question: Does this sentence make sense? \n Answer: &lt;label&gt;</td></tr><tr><td>Example</td><td>Brandon read every book that Megan did. \n Question: Does this sentence make sense? \n Answer: yes</td></tr></table>
252
+
253
+ Table 4: Prompt templates used in the EleutherAI Language Model Evaluation Harness library (Gao et al., 2021).
254
+
255
+ Since RoBERTa only supports 512 positions, and the maximum sentence length in these datasets amounts to 128, we train models up to $k = 300$. For each fine-tuning experiment, we first run a hyperparameter sweep varying the learning rate (0.0001, 0.0002, 0.0003) and training batch size (16, 32), amounting to 6 runs, with $6\%$ warmup steps, similar to the setting of Liu et al. (2019). We also set the weight decay to zero in order not to harm the existing positional encodings, which are not used during training. Table 3 summarizes all of the parameters. Finally, we choose the best hyperparameters, repeat the experiment over five different seeds (42 to 46), and present an aggregate over the results. Table 5 lists the outcome of the hyperparameter sweep.
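+
+ A minimal sketch of how a phase shift $k$ can be injected during fine-tuning, reusing the position-id recipe from Listing 1 (the collator wrapper is an illustrative pattern rather than our exact training code):
+
+ ```python
+ from transformers import AutoTokenizer, DataCollatorWithPadding
+
+ tokenizer = AutoTokenizer.from_pretrained("roberta-base")
+ base_collator = DataCollatorWithPadding(tokenizer, return_tensors="pt")
+
+ FIRST_POSITION = 2  # RoBERTa's first position index (see Table 1)
+
+ def phase_shifted_collator(features, k=100):
+     # Collate as usual, then shift every position id by the phase shift k.
+     batch = base_collator(features)
+     positions = batch["attention_mask"].cumsum(-1) - 1 + FIRST_POSITION
+     batch["position_ids"] = positions + k
+     return batch
+ ```
+
+ The resulting batches can then be passed to any model whose forward pass accepts explicit position_ids.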
256
+
257
+ In Figure 7, we further show the difference in fine-tuned models when trained with no phase shift $(k = 0)$ and evaluated on different phase shifts $(k = 100, 200, 300)$. In line with our experimental results from §3, we observe worse generalization from BART.
258
+
259
+ # B Detailed results on phase shifting with prompts
260
+
261
+ We displayed a holistic view of the zero-shot and five-shot experiments in Figure 4, covering accuracies averaged over all six datasets. In this section, we report and analyze the results for each dataset individually. Figure 9 and Figure 10 showcase the models' performance in the zero-shot and five-shot configurations. The same pattern can be seen across all model sizes in COPA, WinoGrande, PIQA, ARC (Easy), and RTE. Concretely, the zero-shot abilities of the models sharply decrease as we increase the starting position. Moreover, five-shot inference, typically referred to as in-context learning, is also subject to decreased performance, ranging from $-2\%$ to $-40\%$. However, the degradation is not as severe as in the zero-shot setting. Only MRPC exhibits stable phase shift performance, but even in this case, larger models are still adversely affected. Due to the exceptionally poor performance of the OPT family on CoLA, we exclude these results from our analysis (Figure 10).
262
+
263
+ The erratic behaviour observed in the majority of evaluated datasets makes it evident that models struggle to encode the relative distances of words, as
264
+
265
+ ![](images/3d540d37b68d07e78fc848255dd59194fc56342ad831904a05c391eb6b90f5fd.jpg)
266
+ Figure 7: GLUE downstream task results on CoLA, RTE and MRPC. The dashed lines represent the model performance with no phase shifts. The shaded areas show the standard deviation over five random seeds.
267
+
268
+ their understanding of inputs changes heavily with various phase shifts. It is important to note that our findings demonstrate the models' unstable functioning, as opposed to solely highlighting their failure. Indeed, Figure 5 shows that one can extract improved accuracies with non-zero starting positions. Namely, $\mathrm{OPT}_{30\mathrm{B}}$ has its best zero-shot performance at phase shift $k = 300$ in the case of MRPC; the same pattern can also be observed in RTE five-shot for $\mathrm{OPT}_{13\mathrm{B}}$ at phase shift $k = 300$. Another noteworthy observation is that the performance drop is often a non-monotonic function of phase shifts, i.e., for some prompts, the model might be more accurate for $k = 1000$ than for $k = 0$. This suggests that some positional biases might be learned during pre-training and are well-captured by APEs. So, increasing the value of $k$ on some occasions lands the model's attention in a "sweet spot" of the processing window, such that the model benefits from positional biases learned during pre-training.
269
+
270
+ We observe this erratic behavior across a fairly wide range of model sizes in the OPT family. Additionally, larger models appear more prone to failing at encoding relative positions than their smaller counterparts. One possible explanation is that in order for models to encode relative positional information, they need to see all combinations of words and sentences in every position. Such coverage rarely occurs in natural data, resulting in data sparsity issues. Hence, models with a large number of parameters may require more data and training to learn the relative ordering of words.
271
+
272
+ # C Variation of best perplexity across phase shifts
273
+
274
+ In this section, we investigate the perplexity of individual sentences from the BLiMP dataset across each phase shift for each model. We plot the distribution of sentences achieving their lowest perplexity at each phase shift for the range of models in Figure 8. For RoBERTa and BART, we observe several modes where sentences attain their lowest perplexity at phase shifts other than the standard (zero) position. In the case of GPT2 and OPT, the distribution is more skewed towards zero, indicating that they almost always achieve the lowest perplexity at the zero position, i.e. when there is no phase shift.
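+
+ The aggregation behind Figure 8 reduces to an argmin over phase shifts per sentence; schematically (here ppl stands for a precomputed sentence-by-shift perplexity matrix, and the shift range is illustrative):
+
+ ```python
+ import numpy as np
+
+ shifts = np.arange(0, 310, 10)            # illustrative range of phase shifts k
+ ppl = np.random.rand(67000, len(shifts))  # placeholder: perplexity per (sentence, shift)
+
+ best = ppl.argmin(axis=1)                 # lowest-perplexity shift per sentence
+ counts = np.bincount(best, minlength=len(shifts))
+ for k, c in zip(shifts, counts):
+     print(f"k={k}: {c} sentences")
+ ```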
275
+
276
+ # D Code and reproducibility
277
+
278
+ For all of the experiments in this work, we used open-source libraries (Wolf et al., 2020; Gao et al., 2021; Misra, 2022) and models with publicly available checkpoints. The code to reproduce the results can be accessed from https://github.com/kazemnejad/lm_pos_investigations. Furthermore, Listing 1 provides a short, easy-to-use code snippet to modify the starting position in HuggingFace models. (We will also release a Singularity image with all dependencies to facilitate reproducibility.) We ran our experiments on a mix of NVIDIA A100 40G and NVIDIA RTX8000 48G GPUs. Almost all experiments required only one such GPU; the only exception was the prompting section, where the $\mathrm{OPT}_{30\mathrm{B}}$ model required two NVIDIA RTX8000 48G GPUs to fit the model and inputs of batch size 1.
279
+
280
+ ![](images/956e8670380260f07369bbd0225f12b350a20d3bcd5d0121f52761c6d4d568a0.jpg)
281
+
282
+ ![](images/b7e9ddf85d52c0d3b467df69b03534aba2f117bbdee50e791bdbcd78188d59a3.jpg)
283
+
284
+ ![](images/47208cc775ce81366b7485a4e4fd40a3e31f295d0daf3ae411984c449e630800.jpg)
285
+
286
+ ![](images/b5fbf4fb52f80155b1cc0e645160ffbdba831fd5082babe5a53571887a94470f.jpg)
287
288
+ Figure 8: Distribution of sentences having the lowest perplexities for each phase shift
289
+
290
+ # E Attention analysis
291
+
292
+ We further perform attention analysis on GPT2, RoBERTa and BART to visualize whether the model's attention pattern changes with phase shifts.
293
+
294
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Download and load the pretrained model
+ tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")
+ model = AutoModelForCausalLM.from_pretrained("gpt2-medium")
+
+ text = "The capital of France is"
+ inputs = tokenizer(text, return_tensors="pt")
+
+ # Create unshifted position ids from the attention_mask, which is
+ # equivalent to torch.arange(inputs["input_ids"].shape[-1])
+ inputs["position_ids"] = inputs["attention_mask"].cumsum(-1) - 1
+ print(inputs["position_ids"])
+ # >>> tensor([[0, 1, 2, 3, 4]])
+
+ output1 = model(**inputs, return_dict=True)
+ next_token_id = torch.argmax(output1.logits[0, -1])
+ print(tokenizer.decode(next_token_id))
+ # >>> Paris
+
+ # Add special tokens
+ special_tokens = torch.LongTensor([tokenizer.bos_token_id, tokenizer.eos_token_id])
+ special_attention_mask = torch.LongTensor([1, 1])
+ inputs["input_ids"] = torch.cat([special_tokens, inputs["input_ids"][0]]).unsqueeze(0)
+ inputs["attention_mask"] = torch.cat([special_attention_mask, inputs["attention_mask"][0]]).unsqueeze(0)
+
+ # Recompute position ids
+ inputs["position_ids"] = inputs["attention_mask"].cumsum(-1) - 1
+
+ # Shift the position ids by 10 (the first special token keeps position 0)
+ inputs["position_ids"] += 9
+ inputs["position_ids"][0, 0] = 0
+ print(inputs["position_ids"])
+ # >>> tensor([[0, 10, 11, 12, 13, 14, 15]])
+
+ output2 = model(**inputs, return_dict=True)
+ next_token_id = torch.argmax(output2.logits[0, -1])
+ print(tokenizer.decode(next_token_id))
+ # >>> the
+ ```
331
+
332
+ Listing 1: Python code example to shift the starting position of a sentence from $k = 0$ to $k = 10$ .
333
+
334
+ Following the experimental protocol of Raghu et al. (2021), we first compute a summary of the attention weights, weighting each token pair in a sentence by its token distance. This summary metric is then normalized by sentence length. The values of this metric show whether the attention is local (low values, focused on small token distances) or global (high values, spread over the whole sentence).
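+
+ A sketch of this summary, as we read it from Raghu et al. (2021), is shown below (attentions is assumed to be the tuple of per-layer attention tensors returned by a HuggingFace forward pass with output_attentions=True):
+
+ ```python
+ import torch
+
+ def attention_globality(attentions, length):
+     # attentions: tuple of (1, num_heads, seq_len, seq_len) tensors, one per layer.
+     pos = torch.arange(length)
+     dist = (pos[None, :] - pos[:, None]).abs().float()  # token-pair distances
+     per_layer = []
+     for attn in attentions:
+         a = attn[0, :, :length, :length]                # (heads, seq, seq)
+         # Expected attended distance per query token, averaged over the
+         # sentence and normalized by its length.
+         per_layer.append((a * dist).sum(-1).mean(-1) / length)
+     return torch.stack(per_layer)  # (layers, heads); low = local, high = global
+ ```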
335
+
336
+ We compute this attention summary metric on a sample of 5000 sentences drawn from the BLiMP dataset (Warstadt et al., 2020).
337
+
338
+ ![](images/87a0ba84f3d9d4c68f577992b96cf9ea61869d431ec9c369074295626e538897.jpg)
339
+ WinoGrande
340
+
341
+ ![](images/7a15d2eb15c7bef3dc034919f6ae45eebd1f951dfd4898f301778e383de40612.jpg)
342
+
343
+ ![](images/2fc7694ae9defc85ac1cab4326914e89ff0680c2162c8cefc22c125d3747ea3a.jpg)
344
+ PIQA
345
+
346
+ ![](images/94cb892c5c822b887ea49c2a726cd7d2fc327c7c40c22f6852037f2cc0f704dc.jpg)
347
+
348
+ ![](images/9c6a84940f8739057c3338415a382c03126251fd93e8818ed46647f2f2c38e77.jpg)
349
+ ARC (Easy)
350
+
351
+ ![](images/fe41a7613815011eae5b9924b7f57c09dd04cdb381e512193af125a297972a0a.jpg)
352
+
353
+ ![](images/9acbe126df2bbee130c6d259923bd5297309c2d2cee91fb92d34a2fbb30e5fd2.jpg)
354
+
355
+ ![](images/8eccc3f5c344f7968672bbc23565520d7db876fcbf46104836539f7b8503eaef.jpg)
356
+
357
+ ![](images/05a4f90308d696fa7c6d3ac4d07d62818e9cb9abbce4f363364036fcb99e1ed0.jpg)
358
+ Figure 9: Zero-shot and Few-shot performance of OPT family with various phase shifts for each individual dataset (Part 1)
359
+
360
+ ![](images/e6ad173768e7d2993c549b1754d806cfdc8e96778e1bc5fae14756c9191884ef.jpg)
361
+
362
+ ![](images/0791410155adc6c8b12ce83a7de4a79a92dd7a9941cc025dc4de269dc1056c61.jpg)
363
+
364
+ ![](images/94472cf23519fe96ac210338aca43f25ea248311b040de536091d98a16aa8aa4.jpg)
365
+
366
+ ![](images/12c4bcded50cadb92992be87a8fd630e27bc0b6425c082d5237c17495d028e90.jpg)
367
+
368
+ ![](images/b6a9f7e3b90d2f8931bece3d6e966e12d14d11e6326806fab588c91303f517ad.jpg)
369
+
370
+ ![](images/6183bd66504ef3902b00480aca047c31ad9b84b9e4c1c070214b8ccd9234537a.jpg)
371
+
372
+ ![](images/6b459e42f76edb4a1e4d7ea0fb8e31e5aa4865891813637526538d62247f71cd.jpg)
373
+ Figure 10: Zero-shot and Few-shot performance of OPT family with various phase shifts for each individual dataset (Part 2)
374
+
375
+ ![](images/e5d9d0d73b1d785c36983e938b81f402c2b9135782ac264f8067d1c35b326f06.jpg)
376
+ CoLA
377
+
378
+ ![](images/9415c89789d77f9f3ca26293586fb9afe4c3bf1ca6aaf8f28191694e579dfde4.jpg)
379
+ MRPC
380
+ RoBERTa (base)
381
+
382
+ ![](images/caef3ad499966465f928023518d5a79260abb0c8d94190862e449e2a80a5e080.jpg)
383
+ RTE
384
+
385
+ ![](images/948297a198587987013eb738d068462c28184d5b2b4b11367f3fd4ed1ce0ac56.jpg)
386
+
387
+ ![](images/470a14e668927478818764b68322ad5f1981469548c1d54ac1b5d73f7aef8a0e.jpg)
388
+ RoBERTa (large)
389
+
390
+ ![](images/422b49cc09d3fd07776369cbfdd4f464fbd51dd84fae620aae04d32e6690b263.jpg)
391
+
392
+ ![](images/748d6f38b739dcce293a74e7eef38996808ef38e77ff605b302151410f7c2240.jpg)
393
+
394
+ ![](images/2e93299ca04168178aa4ef55bee398d52672a45cec9fcd5135d5a1a2072bf9ba.jpg)
395
+ BART (base)
396
+
397
+ ![](images/6e2f02292f4039f970ef809a34d592d213c2a854efc306b76be3709d4d927b96.jpg)
398
+
399
+ ![](images/80f7adc35b780a3f057846b889cf16082212ac225cad5d46c59dd851280515a6.jpg)
400
401
+ Figure 11: Individual heatmap for each GLUE task and model with varying train (fine-tune) and test phase. (Part 1)
402
+
403
+ ![](images/84783b00a63b912ad0c5b9fcb891c2814d159e58bfa9e3c7160186ef029f1412.jpg)
404
+ BART (large)
405
406
+
407
+ ![](images/cb8ce77a854536f89fb03d35b0064829337f605e0e8779759650966bba02fcc2.jpg)
408
409
+
410
+ ![](images/c6b2dddbd0140afcd57e12ae5f61487fe599656e59dc06f7405bfa49a4cbb119.jpg)
411
+ CoLA
412
+
413
+ ![](images/e33f33113b886aa2fff4d4f7260100f63185f16304502e9b2ee89eae43641b93.jpg)
414
+ MRPC
415
+
416
+ ![](images/aeaa2644634104c45dbbcc3aa21c9c886b2f6a349fff60b4cf65e9cfb2de646d.jpg)
417
+ RTE
418
+
419
+ ![](images/b1f6293edbe6cd5d9b3329004da7a1bf3298da03ebf512350ee56430c534990f.jpg)
420
+
421
+ ![](images/ab3850b1b5870f7a699038eaf8164db290314e952327300677bd2d866c3a023e.jpg)
422
+ GPT2 (Medium)
423
+
424
+ ![](images/e09a3968dede737b37c904704f77a28713998bccf4453f67197da194d95033cb.jpg)
425
+
426
+ ![](images/45f131f7ea04f099b30234d11a2f8407b3d0eb1ac801d798cd75d698a36952da.jpg)
427
+
428
+ ![](images/03697baedbbb818f6579c39a97e6fb152bfd709a4703c9b8ab445f03f035340f.jpg)
429
+ OPT (125M)
430
+
431
+ ![](images/1dcd9e2db6bd4940f8c2d13c05d7ed96ecd1836619f1c849669b0d23b49cb9f5.jpg)
432
+
433
+ ![](images/09924ae94a9b5afcbea5c148e3dab9ebeec2f6bbe0f2f1a9cce1d64d434809d7.jpg)
434
435
+
436
+ ![](images/8fdc6b48c8a10683d10ddf3691996e8bd94d06436a8a6785b5e27365b902ece4.jpg)
437
+ OPT (350M)
438
439
+
440
+ ![](images/8d6d752ee86ee27aa869df60e8be5bbe68593bb2493a82bb650601677ac2629d.jpg)
441
442
+ Figure 12: Individual heatmap for each GLUE task and model with varying train (fine-tune) and test phase. (Part 2)
443
+
444
+ <table><tr><td rowspan="3">Model</td><td colspan="8">Phase shifts</td></tr><tr><td colspan="2">k=0</td><td colspan="2">k=100</td><td colspan="2">k=200</td><td colspan="2">k=300</td></tr><tr><td>Learning Rate</td><td>Batch Size</td><td>Learning Rate</td><td>Batch Size</td><td>Learning Rate</td><td>Batch Size</td><td>Learning Rate</td><td>Batch Size</td></tr><tr><td colspan="9">CoLA</td></tr><tr><td>RoBERTaBASE</td><td>0.00002</td><td>32</td><td>0.00002</td><td>16</td><td>0.00002</td><td>16</td><td>0.00002</td><td>16</td></tr><tr><td>RoBERTaLARGE</td><td>0.00003</td><td>32</td><td>0.00003</td><td>32</td><td>0.00001</td><td>32</td><td>0.00002</td><td>16</td></tr><tr><td>BARTBASE</td><td>0.00002</td><td>32</td><td>0.00003</td><td>16</td><td>0.00002</td><td>16</td><td>0.00002</td><td>32</td></tr><tr><td>BARTLARGE</td><td>0.00002</td><td>16</td><td>0.00003</td><td>32</td><td>0.00003</td><td>16</td><td>0.00003</td><td>32</td></tr><tr><td>GPT2</td><td>0.00002</td><td>16</td><td>0.00003</td><td>32</td><td>0.00003</td><td>16</td><td>0.00003</td><td>16</td></tr><tr><td>GPT2MEDIUM</td><td>0.00002</td><td>32</td><td>0.00001</td><td>16</td><td>0.00003</td><td>16</td><td>0.00003</td><td>16</td></tr><tr><td>OPT125M</td><td>0.00002</td><td>16</td><td>0.00001</td><td>16</td><td>0.00001</td><td>32</td><td>0.00001</td><td>16</td></tr><tr><td>OPT350M</td><td>0.00001</td><td>16</td><td>0.00001</td><td>32</td><td>0.00002</td><td>32</td><td>0.00001</td><td>16</td></tr><tr><td colspan="9">MRPC</td></tr><tr><td>RoBERTaBASE</td><td>0.00002</td><td>32</td><td>0.00003</td><td>16</td><td>0.00003</td><td>32</td><td>0.00001</td><td>32</td></tr><tr><td>RoBERTaLARGE</td><td>0.00002</td><td>32</td><td>0.00001</td><td>16</td><td>0.00002</td><td>32</td><td>0.00002</td><td>16</td></tr><tr><td>BARTBASE</td><td>0.00001</td><td>16</td><td>0.00003</td><td>32</td><td>0.00002</td><td>16</td><td>0.00003</td><td>16</td></tr><tr><td>BARTLARGE</td><td>0.00002</td><td>16</td><td>0.00003</td><td>16</td><td>0.00002</td><td>16</td><td>0.00003</td><td>16</td></tr><tr><td>GPT2</td><td>0.00002</td><td>16</td><td>0.00003</td><td>16</td><td>0.00002</td><td>16</td><td>0.00003</td><td>16</td></tr><tr><td>GPT2MEDIUM</td><td>0.00002</td><td>16</td><td>0.00003</td><td>16</td><td>0.00003</td><td>16</td><td>0.00003</td><td>16</td></tr><tr><td>OPT125M</td><td>0.00003</td><td>16</td><td>0.00002</td><td>32</td><td>0.00002</td><td>16</td><td>0.00003</td><td>32</td></tr><tr><td>OPT350M</td><td>0.00003</td><td>32</td><td>0.00001</td><td>16</td><td>0.00001</td><td>32</td><td>0.00001</td><td>32</td></tr><tr><td colspan="9">RTE</td></tr><tr><td>RoBERTaBASE</td><td>0.00002</td><td>16</td><td>0.00003</td><td>16</td><td>0.00002</td><td>16</td><td>0.00002</td><td>16</td></tr><tr><td>RoBERTaLARGE</td><td>0.00003</td><td>32</td><td>0.00001</td><td>32</td><td>0.00003</td><td>32</td><td>0.00001</td><td>32</td></tr><tr><td>BARTBASE</td><td>0.00003</td><td>16</td><td>0.00003</td><td>32</td><td>0.00002</td><td>32</td><td>0.00003</td><td>16</td></tr><tr><td>BARTLARGE</td><td>0.00003</td><td>32</td><td>0.00003</td><td>16</td><td>0.00002</td><td>16</td><td>0.00003</td><td>16</td></tr><tr><td>GPT2</td><td>0.00001</td><td>16</td><td>0.00003</td><td>16</td><td>0.00003</td><td>16</td><td>0.00003</td><td>16</td></tr><tr><td>GPT2MEDIUM</td><td>0.00002</td><td>16</td><td>0.00003</td><td>16</td><td>0.00001</td><td>16</td><td>0.00002</td><td>32</td></tr><tr><td>OPT125M</td><td>0.00003</td><td>16</td><td>0.00001</td><td>32</td><td>0.00001</td><td>16</td><td>0.00001</td><td>32</td></tr><tr><td>OPT350M</td><td>0.00001</td><td>16</td><td>0.00001</td><td>16</td><td>0.00001</td><td>32</td><td>0.00001</td><td>16</td></tr></table>
445
+
446
+ Table 5: Results of the hyperparameter sweep for the fine-tuning experiments.
447
+
448
+ We then plot the summary values per layer and sort them by value for each attention head, as per Raghu et al. (2021). The idea is to discover whether this attention summary metric is drastically different under different phase shift conditions.
449
+
450
+ We do observe drastic differences in attention patterns in all layers for GPT2 (Figure 13) and GPT2-Medium (Figure 14). Comparing this with RoBERTa (base) (Figure 15) and RoBERTa (large) (Figure 16), we can corroborate our findings from §3: RoBERTa is much more robust to phase shifts. BART (Figure 17 and Figure 18) also displays differences in attention patterns, but they are not as drastic as for GPT2.
451
+
452
+ # F Extended Related Work
453
+
454
+ Positional encoding has always been an important part of the Transformer architecture, and since its original introduction, different variants have been deployed by pretrained models (see Table 6 for a summary of the positional encodings used by some popular state-of-the-art models).
455
+
456
+ Positional encodings have garnered a niche research community over the past several years. Wang and Chen (2020) investigate whether position embeddings learn the meaning of positions and how they affect learnability on different downstream tasks.
457
+
458
+ Wang et al. (2021) explore different positional encodings and establish monotonicity, translation and symmetry properties of different methods, including APEs. They also report that learned APEs demonstrate superior performance for text classification, further adding to the evidence that APEs enable the exploitation of positional biases. Luo et al. (2021) report that masked language model embeddings contain positional artefacts which bias the model output. More related to our work, Kiyono et al. (2021) train a Transformer model from scratch using shifted positional embeddings for machine translation, and observe improved performance in extrapolation and interpolation setups. Haviv et al. (2022) report the surprising finding that autoregressive Transformer models trained without explicit positional information still perform on par with their counterparts that have access to positional information. This result is attributed to the causal attention structure induced by autoregressive training alone, as the effect is not observed with masked language models, as highlighted by both Haviv et al. (2022) and Sinha et al. (2021). Ke et al. (2021) propose a novel technique to de-correlate the position encodings and token embeddings, achieving better downstream performance than baselines. Ravishankar et al. (2021) find that relative positional encoding does not improve over APE in a
459
+
460
+ ![](images/5ca00e1cce61342ddc5dc9bdf03dd8005a8febbfcc5e584e18a550dd3b778d19.jpg)
461
+
462
+ ![](images/5bdf5cadec03060d167ee0037186e49eedc64d92cdd5583596c26272789012f4.jpg)
463
+
464
+ ![](images/2651c52491358d063591329a5fe8c40955ff02fad6d82e2d370af2027f4fe6ad.jpg)
465
+
466
+ ![](images/24308e92a947ff8bf2775a1ea46a04b20c056485ab0792ab149318e99536a8c6.jpg)
467
+
468
+ ![](images/fdad59353297b144789f218a38f8f382000a1cd50b9845f2d42abac308581199.jpg)
469
+
470
+ ![](images/21b4303d48177ea1e2e7668ecbcd1e98d2d2b3d4aeb638da1bcea263d4760f12.jpg)
471
+
472
+ ![](images/cc6c247f318ab912b95558f617cb040b76617ad88eb04d314de53e78be279bfd.jpg)
473
+
474
+ ![](images/ea4c3e50eb2e77616e0adf1da4f88e5e49cc48ead41d6b5f59233cfb8f576dbe.jpg)
475
+
476
+ ![](images/9516b528aeead78a66d04f7c4d20f34b6b83285937aee7376c002412ff77196d.jpg)
477
+ Figure 13: Attention globality distributions of GPT2 across different heads (sorted according to value) and averaged over all layers and 5000 data points. Blue curve stands for the no phase shift condition, and orange, green and red curves represent $k = 100, 200$ and 300 respectively.
478
+
479
+ ![](images/07877b94140d6b8b190054c3f27ae7e90fb98d875872fb631cd567a2fa8ecc8a.jpg)
480
+
481
+ ![](images/0fb9f0eb347039da53a376e01a39f2fd3431a2fdb2e72c417340750d99bf0003.jpg)
482
+
483
+ ![](images/39328bc696bee2fa12f41543fafdb79433b1b5896e353b9b7b656d9cb94959c8.jpg)
484
+
485
+ multilingual setting.
486
+
487
+ On the other hand, multiple works have shown the advantage of explicit relative positional encoding for length extrapolation. Csordás et al. (2021) show that Transformers equipped with variants of relative positional encoding (Dai et al., 2019; Shaw et al., 2018) significantly outperform their absolute counterparts when it comes to length generalization. In the same line of work, Ontanon et al. (2022) also find that on numerous synthetic benchmarks, the best extrapolation performance is obtained only with relative positional encoding. Press et al. (2022) take the experiments beyond synthetic datasets and show that APEs struggle to generalize to longer sequences of natural language. All of this evidence points to APEs as one of the potential reasons Transformers are known to fail at length generalization and productivity (Hupkes et al., 2020; Lake and Baroni, 2018). Although the benefits of explicit relative positional biases are noted in various works, they typically come at the cost of slower training: Press et al. (2022) report that training T5 (which uses a relative variant of positional encoding) is almost twice as slow as training a model with sinusoidal absolute embeddings. Thus, the gained runtime efficiency allows longer training of the APE model, which in turn enables its further extrapolation capabilities. These works suggest that there is much left to explore about positional encoding, and highlight that the consequences of particular choices remain an open area of ongoing research.
490
+
491
+ ![](images/806535a98a77a8dbb8c7c2eefd0c644b0f7635b46db6028137b4693f420bae62.jpg)
492
+
493
+ ![](images/4bab117e18dfe0286480c8b81cd4c86258e1f501595077d2ae97325d24ff6e47.jpg)
494
+
495
+ ![](images/245a425af8865a328e68432b676c4cf2732bd09b71855f976b63c51c50a3e9b7.jpg)
496
+
497
+ ![](images/65c98e8e1d4a1fcbca5e92fa9ac127002e75aad94b1e9377b1691ce58d4e458a.jpg)
498
+
499
+ ![](images/d540f67f761063e45ec78c4b8e3db90c2f006e38c8b6865d3a2cd95fdd56b68a.jpg)
500
+
501
+ ![](images/63e7228cf9da1e7d9f179074c56622134bf60bcb727ec5bebe4662bd0fd2b44b.jpg)
502
+
503
+ ![](images/0cb4ea72bb5532582785e4e438efd5ada428aa393558c2f16ace8450dd342d1e.jpg)
504
+
505
+ ![](images/38931bff63f3fdd6e5ffb5dbee3eef2a1d3a65194a07af5365ffd0af1f9bd337.jpg)
506
+
507
+ ![](images/7e9403fdd7e02908f3e39ca02d142c1d81cdfe1d27211fe87ff196a53be4b58f.jpg)
508
+
509
+ ![](images/b8dbae5e4922cbb3af3dbaf4412462c72aed18fae96a7a416139d13445bff2c2.jpg)
510
+
511
+ ![](images/d7a1617dfa79829556f71a35556d9876163401d1ba16e6648bf5a9f93f0b1493.jpg)
512
+
513
+ ![](images/e4602639f9e36a602f8bf1cc58f87d0da6611ca8d31ff5723e9f5962d9cd9a81.jpg)
514
+
515
+ ![](images/9a982b468791492420f071b1e7a22148e9ff8f5eb47d4b67090a30d012dbce83.jpg)
516
+
517
+ ![](images/71f608145dfbf2fd81c46c714bb2019d866f9c93fa85402dd7b6c7cc6059d95e.jpg)
518
+
519
+ ![](images/b5680d2a2d31386cc0f6661d51a579530851b56267751b7eb75f4d777a9568e6.jpg)
520
+
521
+ ![](images/78d970e27582e67d0f05fa15aa72c1868e3626f37fa762d03c9577243c45a9b9.jpg)
522
+
523
+ ![](images/5e39302dc79a360347d08e3bec8a94e9899a1df2ba6fbbdc1b0cb35caa96d5fd.jpg)
524
+
525
+ ![](images/0e28501fe58cc1e9ce3c556237ed0fbd345195c7adacbb894ff5842325c85479.jpg)
526
+
527
+ ![](images/552cb75214c99cc320b619435530413028c23caee24e690b59afce9a7f184a65.jpg)
528
+
529
+ ![](images/9a31f68d9b3c77b043693b2681def8aeedad659520a7f67dab7741dfc8f26264.jpg)
530
+
531
+ ![](images/998f03cbd7a6c07e01a370641c3f211d4592f1e907d9e06369917393c99a4500.jpg)
532
+ Figure 14: Attention globality distributions of GPT2-Medium across different heads (sorted according to value) and averaged over all layers and 5000 data points. Blue curve stands for the no phase shift condition, and orange, green and red curves represent $k = 100, 200$ and 300 respectively.
533
+
534
+ ![](images/e9ed9e6f0c5805153172679b29a2120cd2acd05fb17ff50dc8d33df0ef142fcf.jpg)
535
+
536
+ ![](images/c23f35072c8d7c8c2b20f1bb01b4836bf90886e89298ff4e98a88bdedafe8cc9.jpg)
537
+
538
+ ![](images/6baabc2ed8c511dd87a9308aa762b32e9f541b077903d8d7c6b611b2946ff308.jpg)
539
+
540
+ ![](images/a1a67bdfc0d9705d1e4bc26fed71a2894e5dd036c754a2d4b692b48e8a314d49.jpg)
541
+
542
+ ![](images/af125c3634f31f57c7e9111f75ec9c383042e9ebd269d038c8e36a9aa18b8af7.jpg)
543
+
544
+ ![](images/58ec2bc878f8bcd8fe74a2d4d3fe4b6f0f846873d32efdc480d73b228bf902d7.jpg)
545
+
546
+ ![](images/7bedd2ae79f7a2f9c05c3d9ce347b61b986de048fd1e8e4f7396df1c233a92ff.jpg)
547
+
548
+ ![](images/1e2988e3d6ea36c1337f121afd7a7c447cec255c15500619b053515c839b6682.jpg)
549
+
550
+ ![](images/7ba996f52e043f2de263bed5ac6282f1b48eeea584206e6fa08138da8c19afa7.jpg)
551
+
552
+ ![](images/d10eee15fb4ee7b9d324083a86f8690b117932033e8c0d10070554ba63718486.jpg)
553
+
554
+ ![](images/ab26964774224582014c24d0c0b3e6f2c9442fac9171b04990ef50337fc1e653.jpg)
555
+
556
+ ![](images/1bced6f56dadafcb66c52ab58cdc081a33f2acca23151bb6234ffc1063e2e297.jpg)
557
+ Figure 15: Attention globality distributions of RoBERTa (base) across different heads (sorted according to value) and averaged over all layers and 5000 data points. Blue curve stands for the no phase shift condition, and orange, green and red curves represent $k = 100, 200$ and 300 respectively.
558
+
559
+ ![](images/082281c70faed4cd5616bb6c3d486cbfba84e660938856c8c4c2f28ed5a21ba5.jpg)
560
+
561
+ ![](images/d720e05fb33a138fac1736aa1c1bd9d20eb5870aced9eb366796b5e21975887c.jpg)
562
+
563
+ ![](images/15af0fafc8b93d3627f4742ef38013a303b8729b11292436f5f6fbd75d763477.jpg)
564
+
565
+ <table><tr><td>Name</td><td>Release Year</td><td>Positional Encoding Type</td></tr><tr><td>BERT (Devlin et al., 2019)</td><td>2019</td><td>Learned Absolute</td></tr><tr><td>RoBERTa (Liu et al., 2019)</td><td>2019</td><td>Learned Absolute</td></tr><tr><td>GPT2 (Radford et al., 2019)</td><td>2019</td><td>Learned Absolute</td></tr><tr><td>BART (Lewis et al., 2020)</td><td>2020</td><td>Learned Absolute</td></tr><tr><td>LongFormer (Beltagy et al., 2020)</td><td>2020</td><td>Learned Absolute</td></tr><tr><td>T5 (Raffel et al., 2020)</td><td>2020</td><td>Relative Learned Bias</td></tr><tr><td>GPT3 (Brown et al., 2020)</td><td>2020</td><td>Learned Absolute</td></tr><tr><td>GPT-Neo (Black et al., 2021)</td><td>2021</td><td>Learned Absolute</td></tr><tr><td>Fairseq-Dense (Artetxe et al., 2021)</td><td>2021</td><td>Fixed Absolute</td></tr><tr><td>ShortFormer (Press et al., 2021)</td><td>2021</td><td>Fixed Absolute</td></tr><tr><td>GPT-J (Wang, 2021)</td><td>2021</td><td>Rotary</td></tr><tr><td>GPT-NeoX (Black et al., 2022)</td><td>2022</td><td>Rotary</td></tr><tr><td>OPT (Zhang et al., 2022)</td><td>2022</td><td>Learned Absolute</td></tr><tr><td>PaLM (Chowdhery et al., 2022)</td><td>2022</td><td>Rotary</td></tr></table>
566
+
567
+ Table 6: Positional encoding of commonly used pretrained language models.
568
+
569
+ ![](images/290d66d98a983397856b8a23bda2c7fb3be32e0dc865bd04ce4132f25db19915.jpg)
570
+
571
+ ![](images/550ff50f3dd9cb6e875a32ead5fe0b6f944b661a425100c417031e18cc0d83f4.jpg)
572
+
573
+ ![](images/1f86849dacb0ed8261cce0dc800706fd08dbcfa11761370488f1451022ba89b2.jpg)
574
+
575
+ ![](images/cb87b415e56a20c7720d226afd2be102e328e9f6e3cf56e976fba08e81f3b606.jpg)
576
+
577
+ ![](images/e61b8bf3d0fe2d54545405011672979707b52889e8bebde5db9497cdf45b43e3.jpg)
578
+
579
+ ![](images/2dd280bd147f3fdfdeeeac9cbe8455626a4abe397fc56e557ff55808b7ad9f10.jpg)
580
+
581
+ ![](images/cc370791d38e89176210736a6b878765073cd0bf6bc9850e06a1731ec6dfee25.jpg)
582
+
583
+ ![](images/56b0eea00566c7eda80fdd7d01a28431fcf4fb8130edb010868a3c3ed355161f.jpg)
584
+
585
+ ![](images/895fcf15f18e458d006e2d63d0ca145a42fdc94c48265fa47b5ca7727ccb5cd0.jpg)
586
+
587
+ ![](images/ff319b49f8a3630adbc2c8865b1b79b40fde54ddded4406c88066691e032a9c2.jpg)
588
+
589
+ ![](images/168cf1799722e9c8ed0fe5ae7214618f4740c9fcf546eb2573230beee4cfc7b5.jpg)
590
+
591
+ ![](images/b7851f5c3af43d12a4696e4934dc72f2bdf1cb3f6a768745dfba76250bb6710e.jpg)
592
+
593
+ ![](images/bc2f247009d97a219118d6e9b72f2edd561fcce632ec23a3f26f4acc4727970e.jpg)
594
+
595
+ ![](images/8d112e9c9a2a288c055956e21d5447040b2b97f473926db4bc1cf6b7508723da.jpg)
596
+
597
+ ![](images/53772470eacc4c6e5bd1677cb6b33f1c22c8ea81218373a86105e80393182ffa.jpg)
598
+
599
+ ![](images/9baef2127abcd241160ae370eeec15e71a6d249ef7d83b05af2199b0cba030ac.jpg)
600
+
601
+ ![](images/735c135bccfd5bad7c9d3f9c57a96e26d04cdd090d30b4c72bef99bb57dd5380.jpg)
602
+
603
+ ![](images/02fa928b274d8ca8a8c1b424fc85fbdba7d5d391a3eea1e66c737f58af3a3809.jpg)
604
+
605
+ ![](images/af8e37e1509c98614779326dfa5e506c462410d9e74126caf382b112b5ac59c9.jpg)
606
+
607
+ ![](images/f42b819c0b46468464f4cb1d0d8718dcb15c744b1aff8f0c2985f8a56a54d244.jpg)
608
+
609
+ ![](images/cc49cc7a16023a12cfe3e45ed0af64dc9d8788ec19247b41e8239bc594623270.jpg)
610
+ Figure 16: Attention globality distributions of RoBERTa (large) across different heads (sorted according to value) and averaged over all layers and 5000 data points. Blue curve stands for the no phase shift condition, and orange, green and red curves represent $k = 100, 200$ and 300 respectively.
611
+
612
+ ![](images/d54ec120c8ce4afa0132dec272fbe0838812f58e53f744c7535de76ec064739e.jpg)
613
+
614
+ ![](images/f33d2d15c094bae7591faabe57fac7b4aade2edc954dffef00839a779c9b8fcc.jpg)
615
+
616
+ ![](images/8134c23e1a26cc716fcc7b85fa2962930a05cd5147b4b8e2a46dc4fa52a26f1f.jpg)
617
+
618
+ ![](images/37e1e4de2a2d86e9dd0bd03e189777c9beb1a967d7228ea9c2daea0e84f08fe1.jpg)
619
+
620
+ ![](images/e17aa95998953bb8f9d10e51c416dbd3d8cfe43c8238154ca5c5249389a0fd9c.jpg)
621
+
622
+ ![](images/461e71430c4146a1690c674d8f812d335c494a84c84847ac040cbec128e03cdc.jpg)
623
+
624
+ ![](images/06701973c7c05b80a3a6f5090643406206afcae923300707291c6ae0c9a9b20e.jpg)
625
+
626
+ ![](images/a64dbc19a7dd170e8dd5c3d82f098f2eb10c280fd03bbf2727c9f1a23be17f28.jpg)
627
+
628
+ ![](images/7507540df19d65647f51e868b646f39e9b50ebb60c2ccbe4b2dfce46dffcf830.jpg)
629
+
630
+ ![](images/b253758742e19743fdee97d80da90136bd46f8a4a0c361179cf0ef76aab16a9c.jpg)
631
+
632
+ ![](images/249eda83c5ab85eed7bb128cb615c4a713bc42da6152bba4b3321a59695ee29b.jpg)
633
+
634
+ ![](images/9b6ae81a970cc90e8f4c5b850713158118b76509f8c7f1039766a5226d0fcda3.jpg)
635
+ Figure 17: Attention globality distributions of BART (base) across different heads (sorted according to value) and averaged over all layers and 5000 data points. Blue curve stands for the no phase shift condition, and orange, green and red curves represent $k = 100, 200$ and 300 respectively.
636
+
637
+ ![](images/164657cb5971d32b0c5c01f37537d623ada299cac316382f7f3c8229329f0c72.jpg)
638
+
639
+ ![](images/dd87e5d586d8d1035c0805b44a39572969cf37b44513869cf624af6ab39e0d4c.jpg)
640
+
641
+ ![](images/f708dba2c1a159bbc724c7c1d6e6d5d9bf20b319165e738fd5f955bcfc7ef91c.jpg)
642
+
643
+ ![](images/0be252b7b27df68b57405f2cb3efa574449d3ce8ef562006c4ba3025f8cd3e44.jpg)
644
+
645
+ ![](images/7ca7c29548fbc3d94faf2666e8b321a29364fff36a914cbbefac49fbd5ef885c.jpg)
646
+
647
+ ![](images/2c7e94afcc3531f3227745f916a26642ba31f7234438c93d1eaac49df727d696.jpg)
648
+
649
+ ![](images/e2e59a7eca3598f3114dd7d2370b5ba858544d612cb713f08ff5bc490542a95b.jpg)
650
+
651
+ ![](images/2b64e63b74734df64517520a3a42a297dc5e61872e21a4c88dbfb95983db6878.jpg)
652
+
653
+ ![](images/5effe16369399c9e319b1984dcfe00e99db3309846200b5112b5605d97cc912e.jpg)
654
+
655
+ ![](images/992d0932f90b41c9492675089ae4a739f020ec0accba0d26ac7d6ec921e0f25a.jpg)
656
+
657
+ ![](images/b7f3d3dc62b3bbdc8ed2192784403f96abdf16eadefcfa41d4399f3d9e49bd8a.jpg)
658
+
659
+ ![](images/731ec9523ee5ed33b284b5772a521e84f521b917c90069ed214404beb76396eb.jpg)
660
+
661
+ ![](images/b2da4ace050c7ea991248343c7c8cff5e767725d5a23cfe63fdcee8871366595.jpg)
662
+
663
+ ![](images/fad320caaecd5f29cd98595fe632e38a354d72a8e181a4278908cbb42b36d75b.jpg)
664
+
665
+ ![](images/7daf8ebe97e4bf5d59bbaffcc8162b6d405d05d7080d01b2c41ceed996268dec.jpg)
666
+
667
+ ![](images/dd9de7bf9d88e884c1825b1660c84dc65b7a969cebc93e4dea4b3c5f3f1611ed.jpg)
668
+
669
+ ![](images/a0ffc9686f5d4eaa63abb964711c904ad82ef63f558396ad461cc81aed34f1bc.jpg)
670
+
671
+ ![](images/31740eb175776e535cf6f6dbbc089a6111723bd643e505ab7000b853a2868937.jpg)
672
+
673
+ ![](images/7fb5ecf68a885597e09e2bb9d1aa887bfc5d24744f9d81e0f537f88fb2f1987f.jpg)
674
+
675
+ ![](images/db70b745464e366cfed16af125efe80adc1eb189bb33907d636b01a832e4cac9.jpg)
676
+
677
+ ![](images/5707d56c3f24d5391a41ba46719e8ca39ee47cb1c8430f91e740eebfff9982e2.jpg)
678
+
679
+ ![](images/567818e0d650140b8a1b122567f7ecb9ae19dccff1fbc2d6fa125c27c4bedf63.jpg)
680
+
681
+ ![](images/06e9cd0fdea26d12211a7e255c3f42400057ddc7cee55bfc3e5c429110405fa5.jpg)
682
+
683
+ ![](images/3e25c922cdc17aecf52579396cfaf5bdb85b65058db5ba538f43f710cfaa411a.jpg)
684
+ Figure 18: Attention globality distributions of BART (large) across different heads (sorted according to value) and averaged over all layers and 5000 data points. Blue curve stands for the no phase shift condition, and orange, green and red curves represent $k = 100, 200$ and 300 respectively.
685
+
686
+ ![](images/4e36c6c94126891e335eb437ae04489cd80359adc1688c0f0ed3872674cf5d4d.jpg)
687
+
688
+ ![](images/ab92c081e2aac4cc9cc7c7786cb231ff46f36790e794be00450fbc8c3467b07a.jpg)
689
+
690
+ ![](images/60334899ccaef4c25e6d977da09b2de21e8d68146298d47e5058fa34b91d0c22.jpg)
thecuriouscaseofabsolutepositionembeddings/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:471760327acf0ed272824c52af36d5aa476e7a79199ff2c332a66b1a79000cfc
3
+ size 2016389
thecuriouscaseofabsolutepositionembeddings/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:91bea649f77f3f633d2d8f48d10272781167673c41da90e2b4131da236309bec
3
+ size 745412
theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/4bbdabc5-6f81-495a-a5b0-93929344a445_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1241e756e0dbefc360b3c044d8516fe0397f3a93b2cac5da693c656c7b8572ef
3
+ size 59754
theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/4bbdabc5-6f81-495a-a5b0-93929344a445_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bb3c6839891a2f6c202265bdc972dee484e2e2c37f88197215899cd48f7e4e41
3
+ size 74517
theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/4bbdabc5-6f81-495a-a5b0-93929344a445_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f98b455792a701f198c52ebc89e1bd1e6bfb8dd4e6416f859eb865d08b119237
3
+ size 556585
theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/full.md ADDED
@@ -0,0 +1,257 @@
 
 
 
 
1
+ # The Effects of Corpus Choice and Morphosyntax on Multilingual Space Induction
2
+
3
+ Vinit Ravishankar $^{\S}$ Joakim Nivre†
4
+
5
+ $^{\S}$ Department of Informatics, University of Oslo
6
+
7
+ †RISE Research Institutes of Sweden
8
+
9
+ $^{\dagger}$ Dept. of Linguistics and Philology, Uppsala University
10
+
11
+ $^\S$ vinitr@ifi.uio.no †joakim.nivre@ri.se
12
+
13
+ # Abstract
14
+
15
+ In an effort to study the inductive biases of language models, numerous studies have attempted to use linguistically motivated tasks as a proxy of sorts, wherein performance on these tasks would imply an inductive bias towards a specific linguistic phenomenon. In this study, we attempt to analyse the inductive biases of language models with respect to natural language phenomena in the context of building multilingual embedding spaces. We sample corpora from 2 sources in 15 languages and train language models on pseudo-bilingual variants of each corpus, created by duplicating each corpus and shifting token indices for half the resulting corpus. We evaluate the cross-lingual capabilities of these LMs, and show that while correlations with language families tend to be weak, other corpus-level characteristics, such as type-token ratio, tend to be more strongly correlated. Finally, we show that multilingual spaces can be built, albeit less effectively, even when additional destructive perturbations are applied to the training corpora, implying that (effectively) bag-of-words models also have an inductive bias that is sufficient for inducing multilingual spaces.
16
+
17
+ # 1 Introduction
18
+
19
+ A variety of proxies and analytical methods have been used to study the inductive biases of language models towards natural language. This work includes targeted syntactic evaluation (Gulordava et al., 2018; Linzen et al., 2016), language model responses to formulaic synthetic languages (Ravfogel et al., 2019; White and Cotterell, 2021), as well as attempts to correlate differences in language modeling performance to language features over a wide range of languages (Cotterell et al., 2018).
20
+
21
+ In this paper, we combine two strands that have, of late, been fairly active research threads. The first of these concerns the inductive biases of language models towards languages that exhibit a specific grammar; the second addresses the inductive biases of these models towards multilingualism, which in this context refers to a model's ability to build a multilingual space (rather than distinct monolingual spaces), when trained on corpora consisting of text in multiple languages.
24
+
25
+ Prior work in this domain is focused on either a) quantifying language model performance across a variety of languages, or b) studying the effects of different architectural components on the quality of the induced multilingual space. We attempt to unite the two strands of research by studying transformer-based masked language models in an effort to quantify the extent to which the grammar of the language being modelled affects the model's ability to build a multilingual space. We use Dufter and Schütze's (2021) metrics, namely word translation and sentence retrieval, as a proxy for the utility of this space. Our main findings are:
26
+
27
+ - Masked language models are capable of building multilingual spaces even when destructive perturbations, like lemmatisation and shuffling, are applied to the training corpora.
28
+ - Multilingual performance is only weakly correlated with languages and language families.
29
+ - Multilingual performance correlates better with corpus-level statistics like type-token ratio, and the frequency of hapax legomena.
30
+
31
+ # 2 Related Work
32
+
33
+ Language modelling There has been a considerable amount of research addressing inductive biases that language models may have towards specific grammatical patterns, or towards natural languages with specific structures. An early study by Cotterell et al. (2018) demonstrates, over 21 languages, that certain languages are harder to model than others; the authors find that model performance correlates with the richness of a language's (inflectional) morphology. Later work by Mielke et al. (2019) shows
34
+
35
+ contradictory findings; the authors extend these experiments to 69 languages and find that morphological complexity does not correlate as strongly with performance as simpler factors like vocabulary size and sentence length do.
36
+
37
+ Other work involves studying how language modelling is affected by manually altering corpora. Ravfogel et al. (2019) train RNN-based models on English, altered to display different word orders and different degrees of morphological agreement; White and Cotterell (2021) generate corpora of natural language sentences, with constituents permuted based on Boolean switches, and show that recurrent language models show little variance in performance across word orders, compared to transformers.
38
+
39
+ Multilingualism Moving beyond monolingual language modelling, we examine the numerous works analysing what precisely multilingual language models need, in order to form an adequate multilingual space, which is quantified by measuring a model's performance on some multilingual task. Pires et al. (2019) show that subword overlap tends to improve multilingual alignment, though overlap is by no means necessary, as languages with different scripts can exist in the same multilingual space. Deshpande et al. (2021) show that while structurally similar languages do not necessarily need subword overlap, dissimilar languages rely heavily on overlap; they also show that well-aligned non-contextual word embedding spaces allow for better transfer.
40
+
41
+ On the other hand, Artetxe et al. (2020) have somewhat contradictory results, and show that neither shared vocabulary items nor joint pre-training are essential to build a multilingual encoder. K et al. (2020) and Dufter and Schütze (2021) analyse encoders from an architectural point of view. The former work shows that model depth (and not the number of attention heads) contributes to transfer performance, even when the number of parameters is kept constant. The latter points out that multilingual spaces exist because languages are forced to share parameters, and that even in the absence of shared subwords and special tokens, position embeddings play a significant role in building these spaces. Dufter and Schütze (2021) go on to show that the removal of shared position embeddings is sufficient to reduce a model's multilingual performance (as measured on word translation and sentence retrieval) to approximately random. This,
42
+
43
+ we show, is not universally the case.
44
+
45
+ # 3 Methodology
46
+
47
+ # 3.1 General approach
48
+
49
+ In order to evaluate the quality of our models' multilingual spaces, we use word translation and sentence retrieval as proxy tasks; this contrasts with, for example, Deshpande et al. (2021), who use (zero-shot) transfer performance instead. We avoid this largely due to performance constraints: small models are unlikely to be parameterised enough to handle transfer.
50
+
51
+ To create synthetic multilingual (more precisely, bilingual) corpora, we follow the approach of K et al. (2020) and Dufter and Schütze (2021). Starting from a monolingual corpus, we shift the vocabulary index for every token in the original corpus up by the model's vocabulary size. For instance, with a vocabulary size of 2048, the token convenient, with token index 42, would have a "mirror" copy of convenient with token index 42 + 2048 = 2090. This effectively gives us a parallel second half, which has the same structure as the original language, but a guarantee of no vocabulary overlap.
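
Concretely, the construction amounts to a single index shift over token ids. The following minimal sketch (the function name is ours, not the authors') illustrates it, assuming the paper's vocabulary size of 2048:

```python
VOCAB_SIZE = 2048  # the paper's vocabulary size

def make_pseudo_bilingual(corpus):
    """corpus: list of sentences, each a list of token ids in [0, VOCAB_SIZE).
    Returns the concatenation of the original corpus and its index-shifted
    mirror, which shares the same structure but no vocabulary items."""
    shifted = [[tok + VOCAB_SIZE for tok in sent] for sent in corpus]
    return corpus + shifted

# the token with index 42 is mirrored as 42 + 2048 = 2090
assert make_pseudo_bilingual([[42]]) == [[42], [2090]]
```
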
52
+
53
+ While this is a somewhat unrealistic simulation – after all, multilingual models are trained on languages with different structures – we use our formulation in order to a) have a simplified test bed where the structure of the language plays a role, but the structural differences between the two languages are ignored; and b) to avoid the complexity of the experimental space from exploding, when each language can conceivably be paired with every other language.
54
+
55
+ # 3.2 Data
56
+
57
+ In an effort to have a reasonably comprehensive search space of languages, we experiment over two corpora (Wikipedia and Common Crawl) and fifteen languages – namely Arabic, Czech, Danish, German, English, Spanish, Finnish, French, Hebrew, Italian, Dutch, Polish, Portuguese, Russian and Swedish. While Indo-European languages are still rather overrepresented in our data, these languages exhibit a wide range of head-dependent entropies (Levshina, 2019). This is also part of the reason we avoid completely synthetic corpora: while it is trivial to generate synthetic corpora from some descriptive grammar, the stochasticity and random variation inherent to most natural languages is harder to synthetically model. Both corpora have been parsed into Universal Dependencies
58
+
59
+ # Default
60
+
61
+ he spent most of his childhood in sunamganj with his mother . david s. mack ( born 1941 ) is an american businessman . he spent most of his childhood in sunamganj with his mother . david s. mack ( born 1941 ) is an american businessman .
62
+
63
+ # Lemmatised
64
+
65
+ the episode be generally well receive.
66
+ the software be sell and support only in japan.
67
+ the episode be generally well receive.
68
+ the software be sell and support only in japan.
69
+
70
+ # Shuffled
71
+
72
+ most his with in of childhood spent sunamganj . mother his he s. american . born is david 1941 ) businessman an ( mack most his with in of childhood spent sunamganj . mother his he s. american . born is david 1941 ) businessman an ( mack
73
+
74
+ # Corrupted
75
+
76
+ be generally . receive well episode the software be the sell in and support . japan only be generally . receive well episode the software be the sell in and support . japan only
77
+
78
+ Table 1: Sample sentences extracted from real corpora, with each of our modifications applied. Note that while the original and lemmatised corpora are sampled differently, the shuffled and corrupted corpora are modified variants of the former.
79
+
80
+ (UD) (Nivre et al., 2016, 2020; de Marneffe et al., 2021).
81
+
82
+ From each of the large corpora (Wikipedia and Common Crawl), we sample five corpora of 20k sentences for each language, with different random seeds, and split them into train and validation splits of 15k and 5k sentences, respectively. We employ a number of simple heuristics to filter out sentences that we suspect to be titles, or other noisy text. We generate two variants of each corpus: one that we tokenise with a BPE tokenizer, and another that retains UD-style tokenisation. The motivation behind this is to control for subwords: the absence of subword tokenisation is harder for our models to recover from, as they must be able to cluster tokens that have the same morphological affixes without explicit access to these affixes.
83
+
84
+ For our BPE-segmented corpora, we use a model vocabulary of size 2048; this vocabulary is derived by training a fastBPE tokenizer on the respective training corpus. For UD-style tokenisation, we also use a vocab with 2048 unique tokens. We handle unknown tokens by replacing them with <unk> tokens; we also filter out sentences that have over $90\%$ OOV tokens in the process of sentence selection, to avoid noise. As both our corpora are fairly noisy, we also apply a set of heuristics to eliminate corpus noise; for instance, we filter out sentences based on the number of title-cased tokens in them, to avoid scraping Wikipedia titles.
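
As an illustration of this tokenisation step, here is a minimal sketch that trains a 2048-item BPE vocabulary; note that the HuggingFace `tokenizers` library is a stand-in assumption (the paper itself uses fastBPE), and the file path is hypothetical:

```python
# Illustrative stand-in for the fastBPE step, using HuggingFace `tokenizers`.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

tokenizer = Tokenizer(BPE(unk_token="<unk>"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(vocab_size=2048, special_tokens=["<unk>"])
# "train.txt" is a hypothetical path to one 15k-sentence training split.
tokenizer.train(["train.txt"], trainer)
print(tokenizer.encode("he spent most of his childhood").tokens)
```
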
85
+
86
+ # 3.3 Perturbations
87
+
88
+ To adequately isolate the effects of word order and morphology, we apply three modifications to each combination of tokenisation method and corpus, which, together with the unmodified original, gives us a total of $2 \times 2 \times 4 = 16$ corpora per language; with 15 languages and 5 seeds, this equates to $16 \times 15 \times 5 = 1200$ experiments in all.
89
+
90
+ Original Our original, unmodified corpus, presented with both UD- and BPE-based tokenisation.
91
+
92
+ Shuffled We modify our corpus by shuffling every sentence at a word level. Note that the shuffling procedure takes place before BPE segmentation, similar to Sinha et al. (2021). Ideally, given no word-order context, our masked language models should only be able to rely on morphological information, or bag-of-words distributions, in order to build a multilingual space. This also has a similar effect to removing positional embeddings from the transformer, as described in Sinha et al. (2021). Positional embeddings act as an ordering mechanism in masked language modelling; without them, a corpus is similar to our shuffled corpus.
93
+
94
+ Lemmatised We use the LEMMA Universal Dependencies field to generate our corpus, instead of the usual FORM field. The motivation here is to eliminate all morphological information; the difference between this and avoiding BPE tokenisation is that lemmatisation prevents unique word forms from having separate vocab indices.
95
+
96
+ Corrupted This corpus is both lemmatised and shuffled. Given this precondition, and UD-style tokenisation, there ought to be no information accessible to our model, beyond bag-of-word lemma statistics. We therefore expect word translation and sentence retrieval to be close to 0 in this setting.
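
A minimal sketch of how the four variants above could be derived from a dependency-parsed corpus, using the `conllu` library to read the FORM and LEMMA fields; the helper names and file path are ours, not the authors':

```python
import random
from conllu import parse_incr

def load_sentences(path, field="form"):
    """Read one token field (FORM or LEMMA) per sentence from a CoNLL-U file."""
    with open(path, encoding="utf-8") as f:
        return [[tok[field] for tok in sent] for sent in parse_incr(f)]

def shuffle_sentence(sent, rng):
    sent = sent[:]
    rng.shuffle(sent)  # word-level shuffle, applied before any BPE segmentation
    return sent

rng = random.Random(0)
default    = load_sentences("corpus.conllu")           # original word forms
lemmatised = load_sentences("corpus.conllu", "lemma")  # morphology stripped
shuffled   = [shuffle_sentence(s, rng) for s in default]     # order destroyed
corrupted  = [shuffle_sentence(s, rng) for s in lemmatised]  # both destroyed
```
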
97
+
98
+ # 3.4 Models and Evaluation
99
+
100
+ To evaluate our models' multilingual capabilities, we first train lower-capacity language models on each corpus. Each model is trained on the task of masked language modelling, on the concatenation of both halves (original and shifted) of a corpus. We use Dufter and Schütze (2021)'s BERT variant, which downsizes the original BERT model; we use
101
+
102
+ ![](images/899435dbc8277d414ff71c4016d4b3ef7c9e536e4b33e94829d794ecdea7ae2c.jpg)
103
+ Figure 1: Results for our four perturbations, with and without BPE, with data from Common Crawl (top) and Wikipedia (bottom). Scores (sentence retrieval on the X-axis, word translation on the Y-axis) are averaged over layers 0 and 8.
104
+
105
+ a single-headed, 12-layer transformer, with a head dimensionality of 64 and a feed-forward dimensionality of 256. This allows us to rapidly train a model on our corpora (in approximately 30-60 minutes per model). We set the random seed of each model to the same as the random seed used to generate the corpus we train it on; i.e. the model with seed 0, for English, is trained on the English corpus that was generated using a random seed of 0. Models are trained on V100 GPUs, each for approximately 1 hour.
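
For concreteness, a comparably downsized masked language model can be sketched with the HuggingFace `transformers` library; this is an assumption for illustration, not the authors' implementation: only the hyperparameters named above are fixed, and everything else is left at library defaults:

```python
from transformers import BertConfig, BertForMaskedLM

config = BertConfig(
    vocab_size=2 * 2048,      # both halves of the pseudo-bilingual vocabulary
    num_hidden_layers=12,     # 12 layers
    num_attention_heads=1,    # single-headed
    hidden_size=64,           # head dimensionality of 64
    intermediate_size=256,    # feed-forward dimensionality of 256
)
model = BertForMaskedLM(config)
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```
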
106
+
107
+ Finally, we evaluate word translation and sentence retrieval scores for these models by using the deterministic gold labels, obtained by simply adding the vocab size (for translation) and by dividing the corpus into two halves and generating a sequential mapping (for retrieval).
108
+
109
+ Note that this evaluation does not involve fine-tuning language models: we use the cosine similarity between either a word or a sentence and its fake parallel, for word translation and sentence retrieval respectively. For word translation, we ensure that non-initial subwords are not included in the evaluation; while this is not ideal, none of our languages are morphologically prefixing, implying that the bulk of the semantic content is in the initial subword.
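
A minimal sketch of the word-translation metric under these gold labels, assuming an embedding matrix covering both vocabulary halves (the function name is ours; sentence retrieval is analogous, with mean-pooled sentence vectors in place of token embeddings):

```python
import numpy as np

def word_translation_accuracy(emb, vocab_size):
    """emb: (2 * vocab_size, dim) token embeddings for both vocabulary halves.
    The gold translation of token i is, by construction, token i + vocab_size."""
    src = emb[:vocab_size]
    tgt = emb[vocab_size:]
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    pred = (src @ tgt.T).argmax(axis=1)  # cosine nearest neighbour per token
    return float((pred == np.arange(vocab_size)).mean())
```
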
110
+
111
+ # 4 Results
112
+
113
+ We present results per language and experiment on Common Crawl (top) and Wikipedia (bottom) in
114
+
115
+ Figure 1. We begin by making a few general observations before moving on to study correlations with morphosyntactic and corpus factors.
116
+
117
+ 'Fails' are frequent We note, first, that across most of our experiments, we have several 'fails', where our model effectively has near-zero retrieval and translation capacity. While this observation in isolation is somewhat meaningless – the model might have failed to learn effectively, either due to the random seed or due to the hyperparameters – the sheer number of experiments we run for each scenario makes these results more meaningful: compared across training scenarios, they provide evidence that a certain scenario is likelier to result in a fail than another.
118
+
119
+ BPE makes word translation harder Despite controlling for non-initial subwords, using BPE tokenisation results in a drop in translation score for all our experiments. We hypothesise that this is due to common word-initial subwords being distributionally 'overloaded'; they are more likely to appear in a wider range of contexts than whole tokens are, due to the variety in consecutive subwords.
120
+
121
+ Multilingualism is robust to lemmatisation Perhaps somewhat unsurprisingly, lemmatisation does not significantly affect model scores, indicating that our model relies more on word order to build multilingual spaces. Interestingly, removing BPE segmentation results in an increase in fails on lemmatised corpora.
122
+
123
+ # Bag-of-words is enough for (some) experiments
124
+
125
+ Our most unexpected observation is that for both shuffling and corrupting, for both BPE and non-BPE, several experiments do appear to result in fairly successful retrieval/translation models, often with an accuracy higher than $50\%$ on either task. This is surprising, given that a) this appears to contradict the findings of Dufter and Schütze (2021) about position embeddings being critical for multilingual spaces, and b) it implies that a simple bag-of-words model is enough to build a multilingual space. We attempt, in the following sections, to tease out what factors might enable this transfer. It is plausible that some part of this signal stems from the fact that the shuffling operation was carried out prior to BPE segmentation (Abdou et al., 2022); we discuss this further in Section 5.4.
126
+
127
+ # 5 Analysis
128
+
129
+ # 5.1 Clustering
130
+
131
+ In order to find potential explanations for our results, we automatically cluster our scores, using retrieval and translation scores as our cluster metrics. To determine whether either languages (given that we have five experiments per language) or language families tend to actually represent logical, meaningful clusters, we set the number of clusters to be equivalent to the number of families, and use the adjusted Rand score (Vinh et al., 2010) to measure the distance between two clusterings – clusterings based on language/family, and learnt clusterings.
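
A sketch of this comparison with scikit-learn, using dummy data in place of our actual scores and an assumed family count (the paper does not fix these names):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
scores = rng.random((75, 2))            # dummy stand-in: 15 languages x 5 seeds,
                                        # (retrieval, translation) per experiment
family_labels = rng.integers(0, 5, 75)  # dummy family ids; family count assumed

learnt = KMeans(n_clusters=5, random_state=42).fit_predict(scores)
print(adjusted_rand_score(family_labels, learnt))
```
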
132
+
133
+ We present these results in Table 2. First, clustering by language family shows little to no correlation with score-based clusters. Clusters of corpora in a single language ('language-based' clusters) are slightly clearer: while similarities are relatively low for all our BPE-based clusters, when we switch to UD tokenisation, the default and lemmatised cases begin to form more typologically relevant clusters, resembling languages. While these are by no means perfect overlaps, they are almost twice as realistic as for BPE-based tokenisation, implying that there exist language-specific features that correlate somewhat to the model's ability to form multilingual spaces. To investigate these findings in greater detail, we look for language-specific features – both corpus-specific features, and vocabulary features – and look for correlations that might explain our results.
134
+
135
+ # 5.2 Corpus correlations
136
+
137
+ We analyse our corpora, and measure correlations of model performance to a range of descriptive statistics, applied to the corpora that the models were trained on. For a single 'performance' metric, we follow Dufter and Schütze (2021) in defining a model's ML score as the average of its word translation and sentence retrieval scores, at layers 0 and 7. We measure correlations with:
138
+
139
+ - The number of training tokens
140
+ - The type-token ratio
141
+ - The number of one-letter types
142
+ - The number of one-letter tokens
143
+ - Average type length (in characters)
144
+ - Average token length
145
+ - Average sentence length
146
+ - Frequency of hapax, dis and tris legomena
147
+
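
As a concrete illustration, the statistics above can be computed with a few counters; the helper below is ours, and the Spearman correlation (as in Figure 2) is taken against any per-model performance vector:

```python
from collections import Counter
from scipy.stats import spearmanr

def corpus_stats(sentences):
    """sentences: list of token lists for one training corpus."""
    tokens = [t for s in sentences for t in s]
    counts = Counter(tokens)
    occ = Counter(counts.values())  # occ[k] = number of types occurring k times
    n_tok, n_typ = len(tokens), len(counts)
    return {
        "n_tokens": n_tok,
        "type_token_ratio": n_typ / n_tok,
        "one_letter_types": sum(1 for t in counts if len(t) == 1),
        "avg_type_len": sum(map(len, counts)) / n_typ,
        "avg_token_len": sum(map(len, tokens)) / n_tok,
        "avg_sent_len": n_tok / len(sentences),
        "hapax_ratio": occ[1] / n_tok,  # types occurring exactly once
        "dis_ratio": occ[2] / n_tok,
        "tris_ratio": occ[3] / n_tok,
    }

# e.g. correlate per-corpus type-token ratios with per-model ML scores:
# rho, p = spearmanr([s["type_token_ratio"] for s in all_stats], ml_scores)
```
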
148
+ ![](images/f13f78eb496f7362c6db35d7529fbe1e9b3e7bc43229c800128fa7df39aefaa4.jpg)
149
+ Figure 2: Spearman correlations $(\alpha = 0.001)$ . Greyed-out values indicate insufficient evidence.
150
+
151
+ | | Language (BPE) | Language (UD) | Family (BPE) | Family (UD) |
+ |---|---|---|---|---|
+ | Default | 0.17/0.05 | 0.35/0.25 | 0.07/0.05 | 0.04/0.08 |
+ | Lemmatised | 0.16/0.11 | 0.38/0.14 | 0.10/0.04 | 0.14/0.07 |
+ | Shuffled | 0.15/0.13 | 0.03/0.01 | 0.07/0.10 | 0.02/0.05 |
+ | Corrupted | 0.14/0.12 | 0.05/0.02 | 0.13/0.09 | 0.01/0.02 |
152
+
153
+ Table 2: Cluster similarities (adjusted Rand score) between language, or language family clusters, and $k$-means clustering, with a random seed of 42. Results on Wikipedia and Common Crawl are separated with a slash.
154
+
155
+ We present these statistics in Figure 2. A clear difference between doing nothing/lemmatising and shuffling/corrupting leaps out. With UD tokenisation, none of our corpus metrics correlates well with model performance, while BPE tokenisation consistently yields a range of correlations. There is also a clear difference between Wikipedia and Common Crawl; in general, we find that correlations tend to be either weaker or less significant with Common Crawl than with Wikipedia. We hypothesise that this is due to Wikipedia being both more homogeneous and less noisy as a corpus.
156
+
157
+ Type-token ratio is a strong predictor For the default (and, to some extent, lemmatised) models, we find that type-token ratio has a strong positive correlation to ML-score (particularly retrieval), implying that lexical diversity enables better transfer. This is perhaps unsurprising – infrequent types might act as ‘anchors’, allowing easier transfer for their surrounding contexts. This is somewhat backed up by the disappearance of this correlation in
158
+
159
+ shuffled models.
160
+
161
+ Avg. token length predicts BPE performance Over our scrambled corpora, for both Wikipedia and Common Crawl, $^{1}$ it appears that average token length correlates strongly to downstream performance. The fact that this occurs for BPE tokenisation and not UD implies that this is likely a proxy for the number of BPE splits, rather than a realistic cross-linguistic measure; the more aggressive the BPE, the poorer the model. This is also somewhat backed up by the fact that the number of tokens inversely correlates to BPE performance; the shorter the average BPE split, the greater the number of tokens in a corpus, for a given language.
162
+
163
+ Sentence length often correlates negatively This finding is consistent across all our BPE models; longer sentence lengths (in tokens) imply poorer multilingual scores. This is likely at least partially related to the previous observation: the
164
+
165
+ ![](images/fdb86dd4e9461f92a9aaea72a041672c610bc4b078eb20408ee82a1c58847832.jpg)
166
+ (a) Sentence retrieval
167
+
168
+ ![](images/83723e4f675dcf9300dac608b034a21e902d72607f7b842fb9480d3db49de402.jpg)
169
+ (b) Word translation
170
+ Figure 3: Spearman correlations, with a more relaxed $\alpha = 0.01$ . X-axis indicates vocabulary statistics. Y-axis indicates tokenisation method. Correlations are on Common Crawl data, with the appropriate metric averaged at layers 0 and 7.
171
+
172
+ longer the average token, the less aggressive the BPE, and the less aggressive the BPE, the shorter the average sentence.
173
+
174
+ Hapax/dis/tris ratios Results generally tend to correlate positively with the ratio of hapax legomena to the total number of tokens, when BPE tokenisation is used. This difference is likely due to the presence of more morphemic hapaxes in BPE-tokenised models: UD tokenisation is likely to result in a long tail of rarer morphological forms of rarer tokens. Curiously, this correlation, albeit weaker, is reversed for dis and tris legomena.
175
+
176
+ # 5.3 Vocabulary correlations
177
+
178
+ Next, we examine ML score correlations with different properties of the size 2048 UD/BPE vocabulary for each model. Note that as each model is trained with a unique corpus, each model has a unique vocabulary. Our features include:
179
+
180
+ - Average token length; for non-initial wordpieces, we do not include the length of the prefix.
181
+ - Counting complexity, using UniMorph (Kirov et al., 2020) to count the number of distinct morphological features in a given language (see the sketch after this list).
182
+ - The frequency of single-letter vocab items.
183
+ - The frequency of digits in the vocab.
184
+ - The frequency of punctuation in the vocab.
185
+
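
A hedged sketch of one plausible reading of the counting-complexity measure, counting distinct morphological feature values attested in a language's UniMorph table; the exact operationalisation of Sagot (2013) may differ, and the path is hypothetical:

```python
def counting_complexity(unimorph_path):
    """Count distinct morphological feature values in a UniMorph paradigm table
    (tab-separated rows: lemma, inflected form, feature bundle like "V;PST;3;SG")."""
    features = set()
    with open(unimorph_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) >= 3:
                features.update(parts[2].split(";"))
    return len(features)
```
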
186
+ We present these correlations in two heatmaps in Figures 3a and 3b. Some of our observations back
187
+
188
+ up the observations in the previous section (e.g. token length correlates inversely with ML score).
189
+
190
+ Counting complexity is complex. Gratifyingly, the counting complexity metric (Sagot, 2013) appears to match Cotterell et al. (2018)'s observation, and is positively correlated with both retrieval and (to a larger extent) translation. Strangely, however, this correlation also appears to hold for both corrupted corpora; this is odd, as these corpora are lemmatised, implying the absence of inflectional morphology. It is plausible that this effect is still visible (albeit weakened) due to differences in the distribution of function words and stems, when compared with a language with actual differences in counting complexity; a language with strong case-marking, for instance, is likely to have a very different distribution of adpositions than a language without. This finding also backs up Mielke et al. (2019), who suggest that vocabulary-level measures may correlate better.
191
+
192
+ Specific tokens may act as anchors For the task of word translation, we notice that positive correlations tend to occur with the frequency of noninitial subwords, the frequency of digits, and the frequency of single-letter tokens. This effect, visible across all three categories, might indicate that these tokens act as anchors, enabling easier transfer in their contexts.
193
+
194
+ No clear patterns exist for retrieval We notice no clear factors contributing to retrieval. While the number of unused tokens does appear to correlate
195
+
196
+ ![](images/09f32a09b2256282152bf8245aca79e7f620eef1243e18ef8c7987b857698aa9.jpg)
197
+ Figure 4: Retrieval/translation scores for (learnt) absolute position, (fixed) sinusoidal position and no position. English in bold black for easier comparison with Dufter and Schütze (2021).
198
+
199
+ in the lemmatised models, this is mild and is likely to be an effect of the vocab size being effectively smaller.
200
+
201
+ # 5.4 Ablation experiments
202
+
203
+ While somewhat tangential to our original research question, we attempted to modify the positional embedding bias in our model. Dufter and Schütze (2021) show that positional embeddings are critical to building a multilingual space; Sinha et al. (2021) show that positional embeddings are critical to building monolingual language models, a finding backed up in other work (Abdou et al., 2022; Papadimitriou et al., 2022), where the authors also emphasise the importance of meaningful word order. These observations are somewhat contradictory to our findings, where shuffling corpora at a token-level still allows for successful multilingual space induction.
204
+
205
+ To resolve this, we train two additional models, on a corrupted variant of Common Crawl, presented in Figure 4. The first of these has its learnt, absolute position embeddings (Devlin et al., 2019) replaced with sinusoidal embeddings, as in the original transformer paper (Vaswani et al., 2017), and the other has them removed entirely. While we
206
+
207
+ would expect to see model performance drop considerably without position embeddings, this is often not the case at all; there is no real visible difference in performance on either task, implying that certain 'clues' are perhaps sufficient to build a multilingual space, even when a functional monolingual space might not exist for any of the languages.
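
For reference, the fixed sinusoidal embeddings swapped in for this ablation follow Vaswani et al. (2017); this is a standalone sketch, not the paper's code:

```python
import numpy as np

def sinusoidal_embeddings(max_len, dim):
    """Fixed (non-learnt) position embeddings: sin on even dims, cos on odd."""
    pos = np.arange(max_len)[:, None]          # (max_len, 1)
    i = np.arange(0, dim, 2)[None, :]          # (1, dim / 2)
    angles = pos / np.power(10000.0, i / dim)  # (max_len, dim / 2)
    emb = np.zeros((max_len, dim))
    emb[:, 0::2] = np.sin(angles)
    emb[:, 1::2] = np.cos(angles)
    return emb  # replaces the learnt absolute position embedding table
```
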
208
+
209
+ Having said that, we note that English (annotated in black) is not one of the easier languages to build multilingual spaces for, even with absent position embeddings; as such, our English results are more similar to the results reported by Dufter and Schütze (2021).
210
+
211
+ # 6 Conclusion
212
+
213
+ In this work, we attempted to measure how the ability of masked language models to build multilingual spaces varies with the underlying typology of the language being modelled. In doing so, we have shown that these models are capable of building multilingual spaces even when sentences are lemmatised and scrambled at a token level, showing that multilingualism can exist even when transformers act, functionally, like bag-of-words models. This does not, however, necessarily imply the ability to effectively model language (Abdou et al., 2022), but merely the ability to align two disjoint linguistic spaces.
214
+
215
+ We have also shown that, on the one hand, the ability to build a multilingual space is only weakly correlated to language (given multiple corpora) and to language family, and that, on the other hand, certain corpus-level metrics (specifically, type-token ratios and the presence of hapax legomena) are relatively good predictors of multilingual space quality, while others (such as the number of tokens or the average sentence length) are negatively correlated.
216
+
217
+ Our work is not without its caveats. For one, a lot of our correlating factors muddy the waters between what is an inherent property of the language itself, and what is a property of the corpus we use. While we use texts from the same domain in all our languages, both Wikipedia and Common Crawl are widely inconsistent across languages, unless explicitly made comparable (Otero and López, 2010). Further, as discussed earlier, our scenario is not strictly realistic: first, this is a bilingual setup meant to approximate a multilingual one; second, both our languages have exactly the same structure; third, our language models are very underparameterised
218
+
219
+ relative to full-scale models. It is unlikely that our observations would hold true in a real-world scenario; given, however, that our aim was to study the inductive biases of masked language models, using full-scale models would defeat the purpose somewhat, as the sheer volume of training data would have overridden these biases. Having said that, we present this work as an attempt to add to the often conflicting pool of papers attempting to shed some light on how language models acquire language.
220
+
221
+ # Limitations
222
+
223
+ This work has several limitations, some of which we have addressed. To reiterate, in order to enable some degree of cross-linguistic diversity in this analysis, our bilingual setup is only an approximation of a true multilingual setup. Conversely, we are limited in the data we have access to: for inclusion in this study, languages had to have large and relatively noiseless dependency-parsed corpora available; as such, we are somewhat biased towards over-representing Indo-European languages.
224
+
225
+ # Ethical considerations
226
+
227
+ The research presented in this work is compatible with the ACL ethics policy; the data we use is a toy subset of openly available corpora, and our models are very underparameterised, relative to the current state-of-the-art. Given the sheer number of models we train, our main experimental findings require approximately 1200 GPU hours for training, approximately equivalent to the amount of time required to train a full-scale BERT model on the same V100 GPUs.[2]
228
+
229
+ # References
230
+
231
+ Mostafa Abdou, Vinit Ravishankar, Artur Kulmizev, and Anders Søgaard. 2022. Word Order Does Matter and Shuffled Language Models Know It. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6907-6919.
232
+ Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the Cross-lingual Transferability of Monolingual Representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623–4637, Online. Association for Computational Linguistics.
233
+
234
+ Ryan Cotterell, Sebastian J. Mielke, Jason Eisner, and Brian Roark. 2018. Are All Languages Equally Hard to Language-Model? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 536-541, New Orleans, Louisiana. Association for Computational Linguistics.
235
+ Marie-Catherine de Marneffe, Christopher D. Manning, Joakim Nivre, and Daniel Zeman. 2021. Universal Dependencies. Computational Linguistics, 47(2):255-308.
236
+ Ameet Deshpande, Partha Talukdar, and Karthik Narasimhan. 2021. When is BERT Multilingual? Isolating Crucial Ingredients for Cross-lingual Transfer. arXiv:2110.14782 [cs].
237
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [cs].
238
+ Philipp Dufter and Hinrich Schütze. 2021. Identifying Necessary Elements for BERT's Multilinguality. arXiv:2005.00396 [cs].
239
+ Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. arXiv:1803.11138 [cs].
240
+ Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-Lingual Ability of Multilingual BERT: An Empirical Study. arXiv:1912.07840 [cs].
241
+ Christo Kirov, Ryan Cotterell, John Sylak-Glassman, Geraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sabrina J. Mielke, Arya D. McCarthy, Sandra Kübler, David Yarowsky, Jason Eisner, and Mans Hulden. 2020. UniMorph 2.0: Universal Morphology. arXiv:1810.11101 [cs].
242
+ Natalia Levshina. 2019. Token-based typology and word order entropy: A study based on universal dependencies. Linguistic Typology, 23:533-572.
243
+ Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies. Transactions of the Association for Computational Linguistics, 4:521-535.
244
+ Sebastian J. Mielke, Ryan Cotterell, Kyle Gorman, Brian Roark, and Jason Eisner. 2019. What Kind of Language Is Hard to Language-Model? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4975-4989, Florence, Italy. Association for Computational Linguistics.
245
+ Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies
246
+
247
+ v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666.
248
+ Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajic, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4034-4043, Marseille, France. European Language Resources Association.
249
+ Pablo Gamallo Otero and Isaac González López. 2010. Wikipedia as multilingual source of comparable corpora. In Proceedings of the 3rd Workshop on Building and Using Comparable Corpora, LREC, pages 21-25. CiteSeer.
250
+ Isabel Papadimitriou, Richard Futrell, and Kyle Mahowald. 2022. When classifying arguments, BERT doesn't care about word order... except when it matters. Proceedings of the Society for Computation in Linguistics, 5(1):203-205.
251
+ Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How Multilingual is Multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy. Association for Computational Linguistics.
252
+ Shauli Ravfogel, Yoav Goldberg, and Tal Linzen. 2019. Studying the Inductive Biases of RNNs with Synthetic Variations of Natural Languages. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Minnesota. Association for Computational Linguistics.
253
+ Benoit Sagot. 2013. Comparing complexity measures. In Computational approaches to morphological complexity.
254
+ Koustuv Sinha, Robin Jia, Dieuwke Hupkes, Joelle Pineau, Adina Williams, and Douwe Kiela. 2021. Masked language modeling and the distributional hypothesis: Order word matters pre-training for little. arXiv preprint arXiv:2104.06644.
255
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. arXiv:1706.03762 [cs].
256
+ Nguyen Xuan Vinh, Julien Epps, and James Bailey. 2010. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11(95):2837-2854.
257
+ Jennifer C. White and Ryan Cotterell. 2021. Examining the Inductive Bias of Neural Language Models with Artificial Languages. arXiv:2106.01044 [cs].
theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c32404587ee6fe00282bb61b9e7b58e22f0bb1aef4e15671fa650da37b0f553b
3
+ size 213990
theeffectsofcorpuschoiceandmorphosyntaxonmultilingualspaceinduction/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fcd9eaa3e3f6989c781c2c5b90a8390413fc7702b0ed3906d28d511af07a54cb
3
+ size 250512
theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/ade8f037-4c0e-4118-b6fb-59ca86898416_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5f6c6f26274654e00a60326a40ca23359ff62ae64e20ed66401eed7e7f13656c
3
+ size 45157
theundesirabledependenceonfrequencyofgenderbiasmetricsbasedonwordembeddings/ade8f037-4c0e-4118-b6fb-59ca86898416_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ae9d1a95223d9e861bb6ea4ddbe1351037b8d3b5dc087dd4312515ebe62c15cb
3
+ size 54406