SlowGuess committed (verified)
Commit f2c935b · 1 Parent(s): db134f9

Add Batch 4439e0f4-95a7-49d9-9cb9-b00df6a0ba1e

Files changed (50)
This view is limited to 50 files because it contains too many changes.
  1. analyzinggeneralizationofvisionandlanguagenavigationtounseenoutdoorareas/7d9aa44d-d882-40df-8bd9-845b7ed70e52_content_list.json +3 -0
  2. analyzinggeneralizationofvisionandlanguagenavigationtounseenoutdoorareas/7d9aa44d-d882-40df-8bd9-845b7ed70e52_model.json +3 -0
  3. analyzinggeneralizationofvisionandlanguagenavigationtounseenoutdoorareas/7d9aa44d-d882-40df-8bd9-845b7ed70e52_origin.pdf +3 -0
  4. analyzinggeneralizationofvisionandlanguagenavigationtounseenoutdoorareas/full.md +310 -0
  5. analyzinggeneralizationofvisionandlanguagenavigationtounseenoutdoorareas/images.zip +3 -0
  6. analyzinggeneralizationofvisionandlanguagenavigationtounseenoutdoorareas/layout.json +3 -0
  7. aneffectiveandefficiententityalignmentdecodingalgorithmviathirdordertensorisomorphism/297e70eb-b658-4414-9546-9ef0fab36159_content_list.json +3 -0
  8. aneffectiveandefficiententityalignmentdecodingalgorithmviathirdordertensorisomorphism/297e70eb-b658-4414-9546-9ef0fab36159_model.json +3 -0
  9. aneffectiveandefficiententityalignmentdecodingalgorithmviathirdordertensorisomorphism/297e70eb-b658-4414-9546-9ef0fab36159_origin.pdf +3 -0
  10. aneffectiveandefficiententityalignmentdecodingalgorithmviathirdordertensorisomorphism/full.md +404 -0
  11. aneffectiveandefficiententityalignmentdecodingalgorithmviathirdordertensorisomorphism/images.zip +3 -0
  12. aneffectiveandefficiententityalignmentdecodingalgorithmviathirdordertensorisomorphism/layout.json +3 -0
  13. anempiricalstudyofmemorizationinnlp/b84a5b28-32b2-40c5-97fc-724a6c7b148d_content_list.json +3 -0
  14. anempiricalstudyofmemorizationinnlp/b84a5b28-32b2-40c5-97fc-724a6c7b148d_model.json +3 -0
  15. anempiricalstudyofmemorizationinnlp/b84a5b28-32b2-40c5-97fc-724a6c7b148d_origin.pdf +3 -0
  16. anempiricalstudyofmemorizationinnlp/full.md +462 -0
  17. anempiricalstudyofmemorizationinnlp/images.zip +3 -0
  18. anempiricalstudyofmemorizationinnlp/layout.json +3 -0
  19. anempiricalstudyonexplanationsinoutofdomainsettings/7d4b955f-0de5-4d22-a55b-3902668e1fc8_content_list.json +3 -0
  20. anempiricalstudyonexplanationsinoutofdomainsettings/7d4b955f-0de5-4d22-a55b-3902668e1fc8_model.json +3 -0
  21. anempiricalstudyonexplanationsinoutofdomainsettings/7d4b955f-0de5-4d22-a55b-3902668e1fc8_origin.pdf +3 -0
  22. anempiricalstudyonexplanationsinoutofdomainsettings/full.md +0 -0
  23. anempiricalstudyonexplanationsinoutofdomainsettings/images.zip +3 -0
  24. anempiricalstudyonexplanationsinoutofdomainsettings/layout.json +3 -0
  25. anempiricalsurveyoftheeffectivenessofdebiasingtechniquesforpretrainedlanguagemodels/3ed55c1d-5c4c-412e-8ac1-a38332226435_content_list.json +3 -0
  26. anempiricalsurveyoftheeffectivenessofdebiasingtechniquesforpretrainedlanguagemodels/3ed55c1d-5c4c-412e-8ac1-a38332226435_model.json +3 -0
  27. anempiricalsurveyoftheeffectivenessofdebiasingtechniquesforpretrainedlanguagemodels/3ed55c1d-5c4c-412e-8ac1-a38332226435_origin.pdf +3 -0
  28. anempiricalsurveyoftheeffectivenessofdebiasingtechniquesforpretrainedlanguagemodels/full.md +423 -0
  29. anempiricalsurveyoftheeffectivenessofdebiasingtechniquesforpretrainedlanguagemodels/images.zip +3 -0
  30. anempiricalsurveyoftheeffectivenessofdebiasingtechniquesforpretrainedlanguagemodels/layout.json +3 -0
  31. animitationlearningcurriculumfortexteditingwithnonautoregressivemodels/87401af9-cd81-425e-8660-24dadf836103_content_list.json +3 -0
  32. animitationlearningcurriculumfortexteditingwithnonautoregressivemodels/87401af9-cd81-425e-8660-24dadf836103_model.json +3 -0
  33. animitationlearningcurriculumfortexteditingwithnonautoregressivemodels/87401af9-cd81-425e-8660-24dadf836103_origin.pdf +3 -0
  34. animitationlearningcurriculumfortexteditingwithnonautoregressivemodels/full.md +338 -0
  35. animitationlearningcurriculumfortexteditingwithnonautoregressivemodels/images.zip +3 -0
  36. animitationlearningcurriculumfortexteditingwithnonautoregressivemodels/layout.json +3 -0
  37. aninformationtheoreticapproachtopromptengineeringwithoutgroundtruthlabels/dea8c1b5-878f-4d3a-8428-4805f654c3f0_content_list.json +3 -0
  38. aninformationtheoreticapproachtopromptengineeringwithoutgroundtruthlabels/dea8c1b5-878f-4d3a-8428-4805f654c3f0_model.json +3 -0
  39. aninformationtheoreticapproachtopromptengineeringwithoutgroundtruthlabels/dea8c1b5-878f-4d3a-8428-4805f654c3f0_origin.pdf +3 -0
  40. aninformationtheoreticapproachtopromptengineeringwithoutgroundtruthlabels/full.md +0 -0
  41. aninformationtheoreticapproachtopromptengineeringwithoutgroundtruthlabels/images.zip +3 -0
  42. aninformationtheoreticapproachtopromptengineeringwithoutgroundtruthlabels/layout.json +3 -0
  43. aninterpretableneurosymbolicreasoningframeworkfortaskorienteddialoguegeneration/b51dd698-ab14-4f63-be2b-b19394d12c7d_content_list.json +3 -0
  44. aninterpretableneurosymbolicreasoningframeworkfortaskorienteddialoguegeneration/b51dd698-ab14-4f63-be2b-b19394d12c7d_model.json +3 -0
  45. aninterpretableneurosymbolicreasoningframeworkfortaskorienteddialoguegeneration/b51dd698-ab14-4f63-be2b-b19394d12c7d_origin.pdf +3 -0
  46. aninterpretableneurosymbolicreasoningframeworkfortaskorienteddialoguegeneration/full.md +561 -0
  47. aninterpretableneurosymbolicreasoningframeworkfortaskorienteddialoguegeneration/images.zip +3 -0
  48. aninterpretableneurosymbolicreasoningframeworkfortaskorienteddialoguegeneration/layout.json +3 -0
  49. aninvestigationoftheineffectivenessofcounterfactuallyaugmenteddata/2fb073d9-9113-4e11-981d-1abaacaccaaf_content_list.json +3 -0
  50. aninvestigationoftheineffectivenessofcounterfactuallyaugmenteddata/2fb073d9-9113-4e11-981d-1abaacaccaaf_model.json +3 -0
analyzinggeneralizationofvisionandlanguagenavigationtounseenoutdoorareas/7d9aa44d-d882-40df-8bd9-845b7ed70e52_content_list.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c202f2b611af264c6f3c0f9f7597931d530546551ac70af5d18c31936b73701c
3
+ size 102952
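The files above are Git LFS pointer files: three `key value` lines giving the spec version, the content hash, and the byte size. A minimal sketch of parsing one such pointer (real tooling should use `git lfs` itself; the helper name is illustrative):

```python
# Parse the three key-value lines of a Git LFS pointer file (sketch).
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # oid is stored as "<hash-algo>:<hex digest>"
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "oid_algo": algo,
        "oid": digest,
        "size": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:c202f2b611af264c6f3c0f9f7597931d530546551ac70af5d18c31936b73701c
size 102952
"""
info = parse_lfs_pointer(pointer)
```

The pointer carries only metadata; the actual file content is fetched from the LFS store by its `oid`.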
analyzinggeneralizationofvisionandlanguagenavigationtounseenoutdoorareas/7d9aa44d-d882-40df-8bd9-845b7ed70e52_model.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:09dd21557ea9764db62e3f692fd6c9c3fb34969e9ebbe416a4bc70677802ba38
3
+ size 115975
analyzinggeneralizationofvisionandlanguagenavigationtounseenoutdoorareas/7d9aa44d-d882-40df-8bd9-845b7ed70e52_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3edfb2f5f254e64824212177a2b132b0728efa6a7ea1c417a4fee1d3705f4600
3
+ size 699059
analyzinggeneralizationofvisionandlanguagenavigationtounseenoutdoorareas/full.md ADDED
@@ -0,0 +1,310 @@
1
+ # Analyzing Generalization of Vision and Language Navigation to Unseen Outdoor Areas
2
+
3
+ Raphael Schumann
4
+
5
+ Computational Linguistics
6
+
7
+ Heidelberg University, Germany
8
+
9
+ Stefan Riezler
10
+
11
+ Computational Linguistics & IWR
12
+
13
+ Heidelberg University, Germany
14
+
15
+ {rschuman|riezler}@cl.uni-heidelberg.de
16
+
17
+ # Abstract
18
+
19
+ Vision and language navigation (VLN) is a challenging visually-grounded language understanding task. Given a natural language navigation instruction, a visual agent interacts with a graph-based environment equipped with panorama images and tries to follow the described route. Most prior work has been conducted in indoor scenarios where best results were obtained for navigation on routes that are similar to the training routes, with sharp drops in performance when testing on unseen environments. We focus on VLN in outdoor scenarios and find that in contrast to indoor VLN, most of the gain in outdoor VLN on unseen data is due to features like junction type embedding or heading delta that are specific to the respective environment graph, while image information plays a very minor role in generalizing VLN to unseen outdoor areas. These findings show a bias to specifics of graph representations of urban environments, demanding that VLN tasks grow in scale and diversity of geographical environments.
20
+
21
+ # 1 Introduction
22
+
23
+ Vision and language navigation (VLN) is a challenging task that requires the agent to process natural language instructions and ground them in a visual environment. The agent is embodied in the environment and receives navigation instructions. Based on the instructions, the observed surroundings, and the current trajectory the agent decides its next action. Executing this action changes the position and/or heading of the agent within the environment, and eventually the agent follows the described route and stops at the desired goal location. The most common evaluation metric in VLN is the proportion of successful agent navigations, called task completion (TC).
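The task completion (TC) metric described above can be sketched as follows. Per Section 2 of the paper, a navigation counts as successful if the agent stops at the goal node or within one neighboring node of it; the function and variable names here are illustrative, not from the paper's code.

```python
# Task completion (TC): proportion of navigations that end at the goal
# node or within one neighboring node of it (illustrative sketch).
def task_completion(stopped_nodes, goal_nodes, graph_neighbors) -> float:
    successes = 0
    for stop, goal in zip(stopped_nodes, goal_nodes):
        if stop == goal or stop in graph_neighbors[goal]:
            successes += 1
    return successes / len(goal_nodes)

neighbors = {"g": {"f", "h"}}  # toy graph: goal node "g" with two neighbors
tc = task_completion(["g", "f", "x"], ["g", "g", "g"], neighbors)  # 2/3 succeed
```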
24
+
25
+ While early work on grounded navigation was confined to grid-world scenarios (MacMahon et al., 2006; Chen and Mooney, 2011), recent work has studied VLN in outdoor environments consisting of real-world urban street layouts and corresponding panorama pictures (Chen et al., 2019). Recent agent models for outdoor VLN treat the task as a sequence-to-sequence problem where the instructions text is the input and the output is a sequence of actions (Chen et al., 2019; Xiang et al., 2020; Zhu et al., 2021b). In contrast to indoor VLN (Anderson et al., 2018; Ku et al., 2020), these works only consider a seen scenario, i.e., the agent is tested on routes that are located in the same area as the training routes. However, studies of indoor VLN (Zhang et al., 2020) show a significant performance drop when testing in previously unseen areas.
26
+
27
+ The main goal of our work is to study outdoor VLN in unseen areas, pursuing the research question of which representations of an environment and of instructions an agent needs to succeed at this task. We compare existing approaches to a new approach that utilizes features based on the observed environment graph to improve generalization to unseen areas. The first feature, called junction type embedding, encodes the number of outgoing edges at the current agent position; the second feature, called heading delta, encodes the agent's heading change relative to the previous timestep. As our experimental studies show, representations of full images do not contribute very much to successful VLN in outdoor scenarios beyond these two features. One reason why restricted features encoding junction type and heading delta are successful in this task is that they seem to be sufficient to encode peculiarities of the graph representation of the environments. Another reason is the current restriction of outdoor environments to small urban areas. In our case, one dataset is the widely used Touchdown dataset introduced by Chen et al. (2019), the
28
+
29
+ other dataset is called map2seq and has recently been introduced by Schumann and Riezler (2021). The map2seq dataset was created for the task of navigation instructions generation but can directly be adapted to VLN. We conduct a detailed analysis of the influence of general neural architectures, specific features such as junction type or heading delta, the role of image information, and instruction token types on outdoor VLN in seen and unseen environments on these two datasets.
30
+
31
+ Our specific findings unravel the contributions of these features on several VLN subtasks such as orientation, directions, stopping. Our general finding is that current outdoor VLN suffers a bias towards urban environments and to artifacts of their graph representation, showing the necessity of more diverse datasets and tasks for outdoor VLN.
32
+
33
+ Our main contributions are the following:
34
+
35
+ - We describe a straightforward agent model that achieves state-of-the-art task completion and is used as a basis for our experiments.
36
+ - We introduce the unseen scenario for outdoor VLN and propose two environment-dependent features to improve generalization in that setting.
37
+ - We compare different visual representations and conduct language masking experiments to study the effect in the unseen scenario.
38
+ - We adapt the map2seq dataset to VLN and show that merging it with Touchdown improves performance on the respective test sets.
39
+
40
+ # 2 VLN Problem Definition
41
+
42
+ The goal of the agent is to follow a route and stop at the desired target location based on natural language navigation instructions. The environment is a directed graph with nodes $v \in \mathbb{V}$ and labeled edges $(u, v) \in \mathbb{E}$. Each node is associated with a $360^{\circ}$ panorama image $p$ and each edge is labeled with an angle $\alpha_{(u, v)}$. The agent state $s \in S$ consists of a node and the angle at which the agent is heading: $(v, \alpha_{(v, u)} \mid u \in \mathbb{N}_v^{out})$, where $\mathbb{N}_v^{out}$ are all outgoing neighbors of node $v$. The agent can navigate the environment by performing an action $a \in \{\text{FORWARD}, \text{LEFT}, \text{RIGHT}, \text{STOP}\}$ at each timestep $t$. The FORWARD action moves the agent from state $(v, \alpha_{(v, u)})$ to $(u, \alpha_{(u, u')})$, where $(u, u')$ is the edge with an angle closest to $\alpha_{(v, u)}$. The RIGHT and LEFT actions rotate the agent towards
43
+
44
+ ![](images/5aae4338749e6dd0b3a2e3835ae1eab70accaf3bb3ace54cb3dd610c7bc9d0d0.jpg)
45
+ Figure 1: The ORAR model for outdoor vision and language navigation follows a sequence-to-sequence architecture. The instructions text is encoded and used along the visual features to predict the next agent action. The recurrent decoder has two layers, the first encodes observations about the current environment state, the second allows attention over the input text and panorama view. The predicted action changes the state of the agent in the environment and with it the panorama view of the next timestep.
46
+
47
+ the closest edge angle in clockwise or counterclockwise direction, respectively: $(v,\alpha_{(v,u^{\prime})})$. Given a starting state $s_1$ and instructions text $\mathbf{x}$, the agent performs a series of actions $a_1,\ldots,a_T$ until the STOP action is predicted. If the agent stops within one neighboring node of the desired target node (goal location), the navigation was successful. The described environment and location finding task was first introduced by Chen et al. (2019), and we will also refer to it as "outdoor VLN task" throughout this paper.
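The graph environment and action dynamics above can be sketched in a few lines. This is an illustrative toy implementation under the stated semantics (heading always points along an outgoing edge; FORWARD snaps the heading at the new node), not the actual Touchdown code:

```python
# Toy sketch of the Section 2 environment: an angle-labeled directed graph
# with FORWARD/LEFT/RIGHT/STOP dynamics. Data structures are illustrative.
class GraphEnv:
    def __init__(self, edges):
        # edges: node -> {neighbor: edge angle alpha_(v,u) in degrees}
        self.edges = edges

    @staticmethod
    def _angle_dist(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    def _closest_heading(self, node, heading):
        # outgoing edge angle of `node` closest to `heading`
        return min(self.edges[node].values(),
                   key=lambda ang: self._angle_dist(ang, heading))

    def step(self, state, action):
        node, heading = state  # heading always points along an outgoing edge
        if action == "FORWARD":
            # move along the edge the agent is facing, then snap the heading
            # to the new node's outgoing edge closest to the old heading
            nxt = next(u for u, ang in self.edges[node].items() if ang == heading)
            return (nxt, self._closest_heading(nxt, heading))
        if action in ("LEFT", "RIGHT"):
            # rotate to the next edge angle counterclockwise / clockwise
            angles = sorted(self.edges[node].values())
            i = angles.index(heading)
            step = -1 if action == "LEFT" else 1
            return (node, angles[(i + step) % len(angles)])
        return state  # STOP

env = GraphEnv({"A": {"B": 90.0},
                "B": {"A": 270.0, "C": 80.0},
                "C": {"B": 260.0}})
state = env.step(("A", 90.0), "FORWARD")  # -> ("B", 80.0): heading snapped to closest edge
```

The FORWARD case illustrates the automatic re-heading at the new node that motivates the heading delta feature in Section 3.3.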
48
+
49
+ # 3 Model Architecture
50
+
51
+ In this section we introduce the model that we use to analyze navigation performance in the unseen and seen scenario for outdoor VLN. The architecture is inspired by the cross-modal attention model for indoor VLN (Krantz et al., 2020). First we give a high-level overview of the model architecture and a rough intuition. Afterwards we provide a more
52
+
53
+ formal description.
54
+
55
+ As depicted in Figure 1, the model follows a sequence-to-sequence architecture where the input sequence is the navigation instructions text and the output is a sequence of agent actions. At each decoding timestep, a new visual representation of the current agent state within the environment is computed, where the agent state is dependent on the previously predicted actions. The decoder RNN has two layers where the first encodes metadata and a visual representation. The second RNN layer encodes a contextualized text and visual representation and eventually predicts the next action.
56
+
57
+ The intuition behind the model architecture is to first accumulate plain observations available at the current timestep and entangle them with previous observations in the first recurrent layer. Based on these observations, the model focuses attention on certain parts of the instructions text and visual features, which are again entangled in the second recurrent layer. Thus, we use the acronym ORAR (observation-recurrence attention-recurrence) for the model.
58
+
59
+ In detail, the instructions encoder embeds and encodes the tokens in the navigation instructions sequence $\mathbf{x} = x_{1},\dots,x_{L}$ using a bidirectional LSTM (Graves et al., 2005):
60
+
61
+ $$
62
+ \begin{array}{l} \hat{x}_i = \operatorname{embedding}(x_i) \\ \left(\left(w_1, \dots, w_L\right), z_L^w\right) = \operatorname{Bi\text{-}LSTM}\left(\hat{x}_1, \dots, \hat{x}_L\right), \end{array}
63
+ $$
64
+
65
+ where $w_{1},\ldots ,w_{L}$ are the hidden representations for each token and $z_L^w$ is the last LSTM cell state. The visual encoder, described in detail below, emits a fixed size representation $\bar{p}_t$ of the current panorama view and a sequence of sliced view representations $\bar{p}_t^1,\dots,\bar{p}_t^S$ . The state $z_0^{first}$ of the cell in the first decoder LSTM layer is initialized using $z_L^w$ . The input to the first decoder layer is the concatenation $(\oplus)$ of visual representation $\bar{p}_t$ , previous action embedding $\bar{a}_{t - 1}$ , junction type embedding $\bar{n}_t$ , and heading delta $d_t$ . The output of the first decoder layer,
66
+
67
+ $$
68
+ h_t^{\text{first}} = \operatorname{LSTM}^{\text{first}}\left(\left[\bar{a}_{t-1} \oplus \bar{n}_t \oplus d_t \oplus \bar{p}_t\right]\right),
69
+ $$
70
+
71
+ is then used as the query of multi-head attention (Vaswani et al., 2017) over the text encoder. The resulting contextualized text representation $c_t^w$ is then used to attend over the sliced visual representations:
72
+
73
+ $$
74
+ \begin{array}{l} c_t^w = \operatorname{MultiHeadAttention}\left(h_t^{\text{first}}, \left(w_1, \dots, w_L\right)\right) \\ c_t^p = \operatorname{MultiHeadAttention}\left(c_t^w, \left(\bar{p}_t^1, \dots, \bar{p}_t^S\right)\right). \end{array}
75
+ $$
76
+
77
+ The input and output of the second decoder layer are
78
+
79
+ $$
80
+ h_t^{\text{second}} = \operatorname{LSTM}^{\text{second}}\left(\left[\bar{t} \oplus h_t^{\text{first}} \oplus c_t^w \oplus c_t^p\right]\right),
81
+ $$
82
+
83
+ where $\bar{t}$ is the embedded timestep $t$ . The hidden representation $h_t^{second}$ of the second decoder LSTM layer is then passed through a feed forward network to predict the next agent action $a_t$ .
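The decoder step above can be sketched numerically. The following toy NumPy code mirrors one ORAR timestep (observation recurrence, attention over token states, attention over panorama slices, action recurrence); it uses a simplified tanh recurrence and single-head dot-product attention, omits the timestep embedding $\bar{t}$, and all weights and dimensions are random placeholders, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
H = 16  # hidden size (placeholder)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def rnn_cell(x, h, W):
    # simplified recurrence standing in for an LSTM cell
    return np.tanh(W @ np.concatenate([x, h]))

def attend(query, keys):
    # single-head dot-product attention over a sequence of key vectors
    scores = softmax(keys @ query)
    return scores @ keys

# inputs for one timestep (all hypothetical)
tokens = rng.normal(size=(7, H))   # encoded instruction tokens w_1..w_L
slices = rng.normal(size=(5, H))   # sliced panorama features p_t^1..p_t^S
obs = rng.normal(size=H)           # projected [a_{t-1} ⊕ n_t ⊕ d_t ⊕ p_t]
h1 = np.zeros(H)
h2 = np.zeros(H)
W1 = rng.normal(size=(H, 2 * H))
W2 = rng.normal(size=(H, 4 * H))
W_out = rng.normal(size=(4, H))    # 4 actions

h1 = rnn_cell(obs, h1, W1)                              # first (observation) recurrence
c_w = attend(h1, tokens)                                # text context c_t^w
c_p = attend(c_w, slices)                               # visual context c_t^p
h2 = rnn_cell(np.concatenate([h1, c_w, c_p]), h2, W2)   # second recurrence
action = ["FORWARD", "LEFT", "RIGHT", "STOP"][int(np.argmax(W_out @ h2))]
```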
84
+
85
+ # 3.1 Visual Encoder
86
+
87
+ At each timestep $t$ the panorama at the current agent position is represented by extracted visual features. We slice the panorama into eight projected rectangles with a $60^{\circ}$ field of view, such that one of the slices aligns with the agent's heading. This centering slice and the two to the left and right of it are fed into a ResNet pretrained$^2$ on ImageNet (Russakovsky et al., 2015). We consider two variants of ResNet-derived panorama features. One variant extracts low-level features from the fourth-to-last layer (4th-to-last) of a pretrained ResNet-18, concatenates each slice's feature map along the width dimension, averages the 128 CNN filters, and cuts out 100 dimensions around the agent's heading. This results in a feature matrix of size $100 \times 100$ $(\bar{p}_t^1, \dots, \bar{p}_t^{100})$. The full procedure is described in detail in Chen et al. (2019) and Zhu et al. (2021b). The other variant extracts high-level features from a pretrained ResNet-50's pre-final layer for each of the 5 slices: $\bar{p}_t^1, \dots, \bar{p}_t^5$. Each slice vector $\bar{p}_t^s$ is of size 2,048, resulting in roughly the same number of extracted ResNet features for both variants and enabling a fair comparison. Further, we use the semantic segmentation representation of the panorama images. We employ omnidirectional semantic segmentation (Yang et al., 2020) to classify each pixel as one of the 25 classes of the Mapillary Vistas dataset (Neuhold et al., 2017). The classes include e.g. car, truck, traffic light, vegetation, road, sidewalk. See Figure 1 bottom right for a visualization. Each panorama slice $(\bar{p}_t^1, \dots, \bar{p}_t^5)$ is then represented by a 25-dimensional vector where each value is the normalized area covered by the corresponding class (Zhang et al., 2020).
For either feature extraction method, the fixed-size panorama representation $\bar{p}_t$ is computed by concatenating the slice features $\bar{p}_t^1, \dots, \bar{p}_t^S$ and passing them to a feed forward network.
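The semantic segmentation representation described above reduces each slice to a vector of normalized per-class pixel areas. A minimal sketch, with class ids as illustrative stand-ins for the 25 Mapillary Vistas classes:

```python
import numpy as np

NUM_CLASSES = 25  # Mapillary Vistas class count used in the paper

def class_area_vector(seg_slice: np.ndarray) -> np.ndarray:
    """seg_slice: 2-D array of per-pixel class ids in [0, NUM_CLASSES).

    Returns a NUM_CLASSES-dim vector of normalized class areas (sums to 1)."""
    counts = np.bincount(seg_slice.ravel(), minlength=NUM_CLASSES)
    return counts / seg_slice.size

# toy 4x4 segmentation map: class 3 covers the top half, class 0 the rest
seg = np.zeros((4, 4), dtype=int)
seg[:2, :] = 3
vec = class_area_vector(seg)
```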
88
+
89
+ ![](images/b6639e8fcea192b719673e9899900217c4eb4cd14723ad8be323159cb8f4ed52.jpg)
90
+ Figure 2: Visualization of automatic agent rotation initiated by the environment. Grey circles and interconnecting edges are part of the environment graph. Black solid arrows are actions initiated by the agent. Black dotted arrows depict agent heading and automatic rotation by the environment. a): 1) The agent moves forward. 2) Agent's heading does not point to an outgoing edge. 3) Agent is automatically rotated to the closest edge without causing problems. b): The agent receives instructions like "Turn right at the next intersection". 1) The agent moves forward. 2) Agent's heading does not point to an outgoing edge. 3) The environment automatically rotates the agent towards the closest outgoing edge. 4) The agent has no explicit information about the automatic rotation and predicts a right turn as instructed, leading to a failed navigation.
91
+
92
+ ![](images/4c7c8a6a3a99e6e466c46653a6cd5048977ce1433d0bf99b155c196bab8657ae.jpg)
93
+
94
+ # 3.2 Junction Type Embedding
95
+
96
+ The junction type embedding is a feature that we introduce to better analyze generalization to unseen areas. It embeds the number of outgoing edges of the current environment node, categorized into $\{2,3,4, > 4\}$. It provides the agent with information about the type of junction it is positioned on: a regular street segment, a three-way intersection, a four-way intersection, or an intersection with more than four outgoing streets. We want to point out that the number of outgoing edges is not oracle information in the environment described in Section 2. The agent could rotate left until the same panorama view is observed again and thus count the number of outgoing edges purely by interacting with the environment. But it is clear that the feature leverages the fact that the environment is based on a graph, and it would not be available in a continuous setting (Krantz et al., 2020).
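The bucketing into $\{2, 3, 4, > 4\}$ can be sketched as a small lookup that yields an index into a learned embedding table. Treating out-degrees below 2 as the first bucket is our simplifying assumption, not stated in the paper:

```python
# Bucket a node's out-degree into the four junction types {2, 3, 4, >4};
# the returned index would select a learned embedding vector.
def junction_type(num_outgoing_edges: int) -> int:
    if num_outgoing_edges <= 2:  # regular street segment (<=2 is an assumption)
        return 0
    if num_outgoing_edges == 3:  # three-way intersection
        return 1
    if num_outgoing_edges == 4:  # four-way intersection
        return 2
    return 3                     # more than four outgoing streets
```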
97
+
98
+ # 3.3 Heading Delta
99
+
100
+ As described in Section 2, the environment defined and implemented by Chen et al. (2019) only allows states where the agent is heading towards an outgoing edge. As a consequence, the environment automatically rotates the agent towards the closest outgoing edge after transitioning to a new node. This environment behavior is depicted in
101
+
102
+ Figure 2a) for a transition between two regular street segments. However, as depicted in Figure 2b), a problem arises when the agent walks towards a three-way intersection. The automatic rotation introduces unpredictable behavior for the agent, and we hypothesize that it hinders generalization to unseen areas. To correct for this environment artifact, we introduce the heading delta feature $d_{t}$, which encodes the change in heading direction relative to the previous timestep. The feature is normalized to $(-1,1]$, where a negative value indicates a left rotation and a positive value indicates a right rotation. The magnitude signals the degree of the rotation up to $180^{\circ}$.
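The normalization to $(-1, 1]$ can be sketched directly from the definition above (a minimal sketch; the function name is ours):

```python
# Heading delta d_t: signed heading change relative to the previous
# timestep, normalized to (-1, 1] (negative = left, positive = right).
def heading_delta(prev_heading: float, new_heading: float) -> float:
    delta = (new_heading - prev_heading) % 360.0  # in [0, 360)
    if delta > 180.0:
        delta -= 360.0                            # now in (-180, 180]
    return delta / 180.0                          # normalized to (-1, 1]
```

Note that an exact half-turn maps to $+1.0$, matching the half-open interval $(-1, 1]$.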
103
+
104
+ # 4 Data
105
+
106
+ We use the Touchdown (Chen et al., 2019) and the map2seq (Schumann and Riezler, 2021) datasets in our experiments. Both datasets contain human written navigation instructions for routes located in the same environment. The environment consists of 29,641 panorama images from Manhattan and the corresponding connectivity graph.
107
+
108
+ # 4.1 Touchdown
109
+
110
+ The Touchdown dataset (Chen et al., 2019) for vision and language navigation consists of 9,326 routes paired with human written navigation instructions. The annotators navigated the panorama environment based on a predefined route and wrote down navigation instructions along the way.
111
+
112
+ # 4.2 Map2seq
113
+
114
+ The map2seq (Schumann and Riezler, 2021) dataset was created for the task of navigation instructions generation. The 7,672 navigation instructions were written by human annotators who saw a route on a rendered map, without the corresponding panorama images. The annotators were told to include visual landmarks like stores, parks, churches, and other amenities into their instructions. A different annotator later validated the written navigation instructions by using them to follow the described route in the panorama environment (without the map). This annotation procedure allows us to use the navigation instructions in the map2seq dataset for the vision and language navigation task. We are the first to report VLN results on this dataset.
115
+
116
+ # 4.3 Comparison
117
+
118
+ Despite being located in the same environment, the routes and instructions from each dataset differ in
119
+
120
+ multiple aspects. The map2seq instructions typically include named entities like store names, while Touchdown instructions focus more on visual features like the color of a store. Both do not include street names or cardinal directions and are written from an egocentric perspective. Further, in map2seq the agent starts by facing in the correct direction, while in Touchdown the initial heading is random and the first part of the instructions is about orienting the agent ("Turn around such that the scaffolding is on your right"). A route in map2seq includes a minimum of three intersections and is the shortest path from the start to the end location.<sup>3</sup> In Touchdown there are no such constraints and a route can be almost circular. The routes in both datasets are around 35-45 nodes long, with some shorter outliers in Touchdown. On average, instructions are around 55 tokens long in map2seq and around 89 tokens long in Touchdown.
121
+
122
+ # 5 Experiments
123
+
124
+ We are interested in the generalization ability to unseen areas and how it is influenced by the two proposed features, types of visual representation, navigation instructions, and training set size. Alongside the results in the unseen scenario, we report results in the seen scenario to interpret performance improvements in relation to each other. All experiments<sup>4</sup> are repeated ten times with different random seeds. The reported numbers are the average over the ten repetitions. Results printed in bold are significantly better than non-bold results in the same column. Significance was established by a paired t-test<sup>5</sup> on the ten repetition results and a p-value $\leq 0.05$, without a multiple-hypothesis correction factor. Individual results can be found in the Appendix.
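The paired t-test over the ten per-seed results can be sketched as follows. This is a self-contained illustration, not the paper's evaluation code; the scores are made up, and rather than computing a p-value we compare the t statistic to the standard two-sided critical value for $df = 9$ at $\alpha = 0.05$:

```python
import math

# Paired t statistic over matched per-seed results of two models.
def paired_t_statistic(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# made-up TC scores over ten random seeds (illustrative only)
model_a = [26.0, 25.1, 26.4, 25.8, 26.2, 25.5, 26.1, 25.9, 26.3, 25.7]
model_b = [24.9, 24.2, 25.3, 24.8, 25.1, 24.4, 25.0, 24.7, 25.2, 24.6]
t = paired_t_statistic(model_a, model_b)
significant = abs(t) > 2.262  # two-sided critical t, df = 9, alpha = 0.05
```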
125
+
126
+ # 5.1 Data Splits
127
+
128
+ To be able to compare our model with previous work, we use the original training, development and test split (Chen et al., 2019) for the seen scenario on Touchdown. Because we are the first to use the map2seq data for VLN we create a new split for it. The resulting number of instances can be
129
+
130
+ ![](images/aed7fd526a433663a505b6ef83616d58d471ac9044308ffa91096171b2039da8.jpg)
131
+ Figure 3: Visualization of the environment area located in Manhattan. The seen scenario is depicted on the left and the unseen scenario on the right. Each white dot is a training route and each black dot is a test route in the Touchdown and map2seq dataset. The unseen scenario is characterized by geographic separation of the training and testing area.
132
+
133
+ ![](images/4895e8878e589fcf0274c2ed2dd0853b145d8a810afe796dadef26b7511f6eee.jpg)
134
+
135
+ <table><tr><td rowspan="2"></td><td colspan="3">seen</td><td colspan="3">unseen</td></tr><tr><td>train</td><td>dev</td><td>test</td><td>train</td><td>dev</td><td>test</td></tr><tr><td>Touchdown</td><td>6,525</td><td>1,391</td><td>1,409</td><td>6,770</td><td>800</td><td>1,507</td></tr><tr><td>map2seq</td><td>6,072</td><td>800</td><td>800</td><td>5,737</td><td>800</td><td>800</td></tr><tr><td>Merged</td><td>12,597</td><td>2,191</td><td>2,209</td><td>12,507</td><td>1,600</td><td>2,307</td></tr></table>
136
+
137
+ Table 1: Number of instances in the data splits for the seen and unseen scenario of Touchdown and map2seq.
138
+
139
+ seen in the left column of Table 1. For the unseen scenario, we create new splits for both datasets. We separate the unseen area geographically by drawing a boundary across lower Manhattan (see Figure 3). Development and test instances are randomly chosen from within the unseen area. Routes that cross the boundary are discarded. The right column of Table 1 shows the number of instances for both splits. Additionally, we merge the two datasets for both scenarios. This is possible because both datasets are located in the same environment and the unseen-area boundary is identical.
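The geographic split can be sketched as a per-route filter: routes entirely on one side of the boundary stay in training, routes entirely on the other side are eligible for dev/test, and boundary-crossing routes are discarded. The boundary latitude, its orientation, and the route format below are all hypothetical:

```python
BOUNDARY_LAT = 40.72  # illustrative latitude across lower Manhattan

# Assign a route (given as the latitudes of its nodes) to a split region.
def split_route(route_lats, boundary=BOUNDARY_LAT):
    if all(lat < boundary for lat in route_lats):
        return "unseen"   # eligible for dev/test (orientation is an assumption)
    if all(lat >= boundary for lat in route_lats):
        return "train"
    return "discard"      # route crosses the boundary
```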
140
+
141
+ # 5.2 Training Details
142
+
143
+ We train the models with Adam (Kingma and Ba, 2015) by minimizing cross entropy loss in the teacher forcing paradigm. We set the learning rate to 5e-4, weight decay to 1e-3 and batch size to 64. After 150 epochs we select the model with the best shortest path distance (SPD) performance on the development set. We apply dropout of 0.3 after each dense layer and recurrent connection. The multi-head attention mechanism is regularized
144
+
145
+ <table><tr><td rowspan="3"></td><td colspan="8">Seen</td><td colspan="8">Unseen</td></tr><tr><td colspan="4">Touchdown</td><td colspan="4">map2seq</td><td colspan="4">Touchdown</td><td colspan="4">map2seq</td></tr><tr><td colspan="2">dev</td><td colspan="2">test</td><td colspan="2">dev</td><td colspan="2">test</td><td colspan="2">dev</td><td colspan="2">test</td><td colspan="2">dev</td><td colspan="2">test</td></tr><tr><td>Model</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td></tr><tr><td>RConcat</td><td>22.5</td><td>10.6</td><td>22.9</td><td>11.8</td><td>30.7</td><td>17.1</td><td>27.7</td><td>14.7</td><td>3.9</td><td>2.3</td><td>3.5</td><td>1.9</td><td>3.7</td><td>2.0</td><td>3.8</td><td>2.1</td></tr><tr><td>GA</td><td>25.2</td><td>12.0</td><td>24.9</td><td>11.9</td><td>33.0</td><td>18.2</td><td>30.1</td><td>17.0</td><td>3.6</td><td>1.8</td><td>4.0</td><td>2.2</td><td>3.9</td><td>1.8</td><td>4.1</td><td>1.7</td></tr><tr><td>ARC</td><td>-</td><td>15.3</td><td>-</td><td>14.1</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>ARC+L2S</td><td>-</td><td>19.5</td><td>-</td><td>16.7</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>VLN Transformer</td><td>23.0</td><td>14.0</td><td>25.3</td><td>14.9</td><td>31.1</td><td>18.6</td><td>29.5</td><td>17.0</td><td>4.7</td><td>2.3</td><td>5.2</td><td>3.1</td><td>6.2</td><td>3.6</td><td>6.1</td><td>3.5</td></tr><tr><td>ORAR full model</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>• ResNet pre-final</td><td>38.9</td><td>26.0</td><td>38.4</td><td>25.3</td><td>65.0</td><td>49.1</td><td>62.3</td><td>46.7</td><td>13.0</td><td>9.6</td><td>12.1</td><td>8.8</td><td>34.6</td><td>24.2</td><td>34.5</td><td>24.6</td></tr><tr><td>• ResNet 4th-to-last</td><td>45.1</td><td>29.9</td><td>44.9</td><td>29.1</td><td>60.0</td><td>43.4</td><td>57.8</td><td>41.7</td><td>22.2</td><td>15.4</td><td>21.6</td><td>14.9</td><td>41.0</td><td>27.6</td><td>42.2</td><td>30.3</td></tr><tr><td>ORAR full model</td><td colspan="4">• ResNet 4th-to-last</td><td colspan="4">• ResNet pre-final</td><td colspan="4">• ResNet 4th-to-last</td><td colspan="4">• ResNet 4th-to-last</td></tr><tr><td>- no heading delta</td><td>45.5</td><td>30.0</td><td>45.3</td><td>29.3</td><td>63.2</td><td>47.7</td><td>60.3</td><td>44.9</td><td>21.6</td><td>15.2</td><td>21.2</td><td>14.8</td><td>33.0</td><td>22.0</td><td>33.6</td><td>23.6</td></tr><tr><td>- no junction type</td><td>40.6</td><td>25.9</td><td>40.9</td><td>25.5</td><td>65.9</td><td>52.9</td><td>62.1</td><td>47.5</td><td>7.9</td><td>4.8</td><td>7.1</td><td>4.3</td><td>13.1</td><td>7.4</td><td>11.8</td><td>7.1</td></tr><tr><td>- no head. &amp; no junc.</td><td>39.2</td><td>24.6</td><td>39.4</td><td>24.2</td><td>62.7</td><td>49.6</td><td>58.9</td><td>45.1</td><td>7.6</td><td>4.6</td><td>7.0</td><td>4.4</td><td>8.9</td><td>5.0</td><td>8.2</td><td>4.7</td></tr></table>
146
+
147
Table 2: Results on Touchdown and map2seq for the seen and unseen scenario. Metrics are normalized Dynamic Time Warping (nDTW) and task completion (TC). In the first section we list results for the comparison models: RConcat, GA, VLN Transformer (Zhu et al., 2021b) and ARC, ARC+learn2stop (Xiang et al., 2020). In the second section we present results for the ORAR model with two different types of image features: ResNet pre-final features are extracted from the last layer before the classification layer, and ResNet 4th-to-last are low-level features extracted from the fourth-to-last layer of a pretrained ResNet. The last section ablates the two proposed features: heading delta and junction type embedding.

by attention dropout of 0.3 and layer normalization. The navigation instructions are lower-cased and split into byte pair encodings (Sennrich et al., 2016) with a vocabulary of 2,000 tokens and we use BPE dropout (Provilkov et al., 2020) during training. The BPE embeddings are of size 32 and the bidirectional encoder LSTM has two layers of size 256. The feed forward network in the visual encoder consists of two dense layers with 512 and 256 neurons, respectively, and 64 neurons in case of using semantic segmentation features. The embeddings that encode previous action, junction type, and step count are of size 16. The two decoder LSTM layers are of size 256 and we use two attention heads. Training the full model takes around 3 hours on a GTX 1080 Ti.

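The BPE dropout mentioned above can be illustrated with a toy segmenter. The merge table and the dropout mechanics below are a minimal sketch of BPE-dropout (Provilkov et al., 2020), not the implementation used for training; the example merges are invented for illustration.

```python
import random

def bpe_segment(word, merges, dropout=0.0, rng=None):
    """Segment a word with a BPE merge table. With BPE-dropout, each merge
    occurrence is skipped with probability `dropout`, yielding a more
    fine-grained (and stochastic) segmentation during training."""
    rng = rng or random.Random()
    symbols = list(word)
    # Apply merges in priority order (lower rank = earlier learned merge).
    for pair, _rank in sorted(merges.items(), key=lambda kv: kv[1]):
        merged, i = [], 0
        while i < len(symbols):
            if (i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair
                    and rng.random() >= dropout):
                merged.append(symbols[i] + symbols[i + 1])  # merge kept
                i += 2
            else:
                merged.append(symbols[i])                    # merge dropped
                i += 1
        symbols = merged
    return symbols

merges = {("l", "o"): 0, ("lo", "w"): 1}  # toy merge table
print(bpe_segment("low", merges, dropout=0.0))  # -> ['low']
print(bpe_segment("low", merges, dropout=1.0))  # -> ['l', 'o', 'w']
```

At training time a dropout between 0 and 1 produces varied segmentations of the same instruction, which acts as subword-level regularization.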
# 5.3 Model Comparison

We compare the ORAR model to previous works. Because these works only report results for the seen scenario on Touchdown, we evaluate those for which we could acquire the code on the map2seq dataset and in the unseen scenario. The models RConcat (Mirowski et al., 2018; Chen et al., 2019), GA (Chaplot et al., 2018; Chen et al., 2019) and ARC (Xiang et al., 2020) use an LSTM to encode the instructions text and a single layer decoder LSTM to predict the next action. They differ in how the text and image representations are incorporated during each timestep in the decoder. As the name

suggests, in RConcat the two representations are concatenated. GA uses gated attention to compute a fused representation of text and image. ARC uses the hidden representation of the previous timestep to attend over the instructions text. This contextualized text representation is then concatenated to the image representation. They further introduce ARC+l2s, which cascades the action prediction into a binary stopping decision and a subsequent direction classification. The VLN Transformer (Zhu et al., 2021b) uses pretrained BERT (Devlin et al., 2019) to encode the instructions and VLN-BERT (Majumdar et al., 2020) to fuse the modalities.

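The two simplest fusion strategies can be contrasted in a few lines. The vector sizes and the gate projection `W` below are illustrative assumptions, not the exact dimensions used by the cited models.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rconcat_fuse(text_vec, image_vec):
    """RConcat: simply concatenate the text and image representations."""
    return text_vec + image_vec  # list concatenation

def gated_attention_fuse(text_vec, image_feats, W):
    """GA-style fusion (Chaplot et al., 2018): a sigmoid gate computed from
    the text representation scales each channel of the image features.
    W is an illustrative projection from text dims to image channels."""
    gate = [sigmoid(sum(w * t for w, t in zip(row, text_vec))) for row in W]
    return [g * c for g, c in zip(gate, image_feats)]

text = [0.5, -1.0]                               # toy text representation
image = [2.0, 4.0, 6.0]                          # toy image channels
W = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]         # 3 channels x 2 text dims
print(len(rconcat_fuse(text, image)))            # -> 5
print(len(gated_attention_fuse(text, image, W))) # -> 3
```

Concatenation grows the representation, while the gate keeps the image shape and lets the instruction suppress or emphasize individual channels.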
# 5.4 Metrics

We use task completion (TC) as the main performance metric. It represents the percentage of successful agent navigations (Chen et al., 2019). We further report normalized Dynamic Time Warping (nDTW), which quantifies the overlap between agent and gold trajectories for all routes (Ilharco et al., 2019). The shortest path distance (SPD) is measured within the environment graph from the node where the agent stopped to the goal node (Chen et al., 2019).

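The three metrics can be sketched as follows. The distance function, success tolerance, and graph encoding are illustrative choices, not the official evaluation code of the cited works.

```python
import math
from collections import deque

def dtw_cost(ref, pred, dist):
    """Classic dynamic-time-warping alignment cost between two trajectories."""
    n, m = len(ref), len(pred)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(ref[i - 1], pred[j - 1])
            D[i][j] = c + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def ndtw(ref, pred, dist, d_th=3.0):
    """nDTW (Ilharco et al., 2019): DTW cost normalized by reference length
    and a success threshold, squashed into (0, 1]; 1.0 = perfect overlap."""
    return math.exp(-dtw_cost(ref, pred, dist) / (len(ref) * d_th))

def spd(graph, stop_node, goal_node):
    """SPD: shortest-path distance (BFS, unweighted graph) from the node
    where the agent stopped to the goal node."""
    frontier, seen = deque([(stop_node, 0)]), {stop_node}
    while frontier:
        node, d = frontier.popleft()
        if node == goal_node:
            return d
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return math.inf

def task_completion(graph, stop_node, goal_node, tolerance=1):
    """TC: navigation counts as successful if the agent stops within
    `tolerance` nodes of the goal."""
    return spd(graph, stop_node, goal_node) <= tolerance
```

For an agent trajectory identical to the gold path, `dtw_cost` is 0 and `ndtw` is exactly 1.0; deviations raise the cost and shrink the score towards 0.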
# 6 Results & Analysis

The two upper sections of Table 2 show the results of the ORAR model introduced in Section 3 in comparison to other work. While the model significantly outperforms all previous work on both datasets, our main focus is analyzing generalization to the unseen scenario. It is apparent that the type of image features influences agent performance; this is discussed in the next section. The bottom section of Table 2 ablates the proposed heading delta and junction type features for the best models. Removing the heading delta feature has little impact in the seen scenario, but significantly reduces task completion in the unseen scenario of the map2seq dataset. Surprisingly, the feature has no impact in the unseen scenario of Touchdown. We believe this is a consequence of the different data collection processes: Touchdown was specifically collected for VLN and annotators navigated the environment graph, while map2seq annotators wrote instructions seeing only the map. Removing the junction type embedding leads to a collapse of task completion in the unseen scenario on both datasets. This shows that without this explicit feature, the agent lacks the ability to reliably identify intersections in new areas.

<table><tr><td rowspan="3">Visual Features</td><td colspan="4">Unseen</td></tr><tr><td colspan="2">Touchdown</td><td colspan="2">map2seq</td></tr><tr><td>dev</td><td>test</td><td>dev</td><td>test</td></tr><tr><td>ResNet pre-final</td><td>9.6</td><td>8.8</td><td>24.2</td><td>24.6</td></tr><tr><td>- no junction type</td><td>4.4</td><td>4.0</td><td>10.7</td><td>11.0</td></tr><tr><td>ResNet 4th-to-last</td><td>15.4</td><td>14.9</td><td>27.6</td><td>30.3</td></tr><tr><td>- no junction type</td><td>4.8</td><td>4.3</td><td>7.4</td><td>7.1</td></tr><tr><td>semantic segmentation</td><td>11.5</td><td>11.0</td><td>29.0</td><td>31.1</td></tr><tr><td>- no junction type</td><td>5.5</td><td>5.5</td><td>11.6</td><td>12.1</td></tr><tr><td>no image</td><td>11.5</td><td>9.5</td><td>28.5</td><td>30.5</td></tr><tr><td>- no junction type</td><td>3.0</td><td>2.8</td><td>5.4</td><td>5.5</td></tr></table>

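The two ablated features can be computed directly from the environment graph and the agent heading. The node and heading representations below are assumptions for illustration; the exact encoding in the model may differ.

```python
def junction_type(graph, node):
    """Junction-type feature: the number of outgoing edges at the current
    node, e.g. 4 at a crossroads, 3 at a T-junction, 2 along a street."""
    return len(graph[node])

def heading_delta(prev_heading, cur_heading):
    """Heading delta: signed change of the agent heading between consecutive
    steps, wrapped into (-180, 180] degrees."""
    d = (cur_heading - prev_heading + 180.0) % 360.0 - 180.0
    return 180.0 if d == -180.0 else d

graph = {"n1": ["n2", "n3", "n4", "n5"], "n2": ["n1", "n6"]}
print(junction_type(graph, "n1"))  # -> 4
print(heading_delta(350.0, 10.0))  # -> 20.0 (small right turn across north)
print(heading_delta(10.0, 350.0))  # -> -20.0 (small left turn)
```

Embedding the out-degree makes intersections explicit to the agent instead of leaving their detection to the visual features, which is exactly what the ablation above probes.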
# 6.1 Visual Features

Table 3 shows results for different types of visual features in the unseen scenario. We compare high-level ResNet features (pre-final), low-level ResNet features (4th-to-last), semantic segmentation features, and using no image features. For the ResNet based features, the low-level 4th-to-last features perform better than pre-final on both datasets. On map2seq the no image baseline performs on par with models that have access to visual features. When we remove the junction type embedding, the task completion rate drops significantly, which shows that the agent is not able to reliably locate intersections from any type of visual features.

Table 3: Study of visual features for the unseen scenario of Touchdown and map2seq. Metric is task completion.

<table><tr><td rowspan="3">Sub-task</td><td colspan="4">Touchdown</td></tr><tr><td colspan="2">Seen</td><td colspan="2">Unseen</td></tr><tr><td>dev</td><td>test</td><td>dev</td><td>test</td></tr><tr><td>ORAR pre-final</td><td>26.0</td><td>25.3</td><td>9.6</td><td>8.8</td></tr><tr><td>orientation</td><td>79.2</td><td>77.5</td><td>66.7</td><td>67.6</td></tr><tr><td>directions</td><td>84.8</td><td>85.5</td><td>45.9</td><td>45.7</td></tr><tr><td>stopping</td><td>40.7</td><td>41.0</td><td>37.4</td><td>36.1</td></tr><tr><td>ORAR 4th layer</td><td>29.9</td><td>29.1</td><td>15.4</td><td>14.9</td></tr><tr><td>orientation</td><td>92.4</td><td>91.5</td><td>84.2</td><td>84.1</td></tr><tr><td>directions</td><td>81.6</td><td>81.1</td><td>53.4</td><td>52.4</td></tr><tr><td>stopping</td><td>39.7</td><td>40.2</td><td>36.4</td><td>35.2</td></tr><tr><td>ORAR no image</td><td>15.2</td><td>13.3</td><td>11.1</td><td>9.5</td></tr><tr><td>orientation</td><td>59.8</td><td>57.0</td><td>61.3</td><td>60.5</td></tr><tr><td>directions</td><td>74.1</td><td>73.3</td><td>58.8</td><td>57.9</td></tr><tr><td>stopping</td><td>39.3</td><td>38.8</td><td>36.1</td><td>34.0</td></tr></table>

Table 4: Oracle analysis on Touchdown. Division into three sub-tasks: orientation, directions and stopping. Providing oracle actions for two of the three sub-tasks allows an isolated look at the remaining one. Underlined results are best for the sub-task, e.g. 85.5 is the best TC for the directions task on the test set in the seen scenario.

# 6.2 Sub-task Oracle

The agent has to predict a sequence of actions in order to successfully reach the goal location. In Touchdown this task can be divided into three sub-tasks (see Section 4). First, the agent needs to orient itself towards the correct starting heading. Next, the agent has to predict the correct directions at the intersections along the path. The third sub-task is stopping at the specified location. Providing oracle actions (during testing) for two of the three sub-tasks lets us look at the completion rate of the remaining sub-task. Table 4 shows the completion rates for each of the three sub-tasks when using ResNet pre-final, 4th-to-last and no image features. In the seen scenario we observe that the pre-final features lead to the best performance for the directions task. The 4th-to-last features, on the other hand, lead to the best orientation task performance, and the stopping task is not influenced by the choice of visual features. In the unseen scenario, 4th-to-last features again provide the best orientation task performance, but no image features lead to the best performance for the directions task. This shows that the ResNet 4th-to-last features are primarily useful for the orientation sub-task and explains the discrepancy of the no image baseline on Touchdown and map2seq identified in the previous subsection. In the Appendix we use this knowledge to train a mixed model that uses 4th-to-last features for the orientation sub-task and pre-final/no image features for directions and stopping.

![](images/6efb0f2d098bca1a48809156ccc253c6d869d9386629a7fc718e512815e21c96.jpg)
Figure 4: Masking experiments on the seen and unseen test set of Touchdown. Object or direction tokens are masked during training and testing.

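The oracle analysis above can be mimicked with a small replay loop. The sub-task assignment below (first step = orientation, last step = stopping, all other steps = directions) is a simplified stand-in for the exact bookkeeping in the experiments, and the action names are invented for illustration.

```python
def subtask_of(t, num_steps):
    """Assign each timestep to one of the three Touchdown sub-tasks
    (simplified: directions are really only decided at intersections)."""
    if t == 0:
        return "orientation"
    if t == num_steps - 1:
        return "stopping"
    return "directions"

def oracle_replay(gold_actions, agent_policy, oracle_subtasks):
    """Replay an episode, forcing gold actions for the sub-tasks in
    `oracle_subtasks` and letting the agent act on the remaining one."""
    taken = []
    for t, gold in enumerate(gold_actions):
        if subtask_of(t, len(gold_actions)) in oracle_subtasks:
            taken.append(gold)             # oracle supplies the gold action
        else:
            taken.append(agent_policy(t))  # agent predicts on its own
    return taken

gold = ["turn_right", "forward", "left", "forward", "stop"]
always_forward = lambda t: "forward"  # dummy agent for illustration
print(oracle_replay(gold, always_forward, {"orientation", "stopping"}))
# -> ['turn_right', 'forward', 'forward', 'forward', 'stop']
```

Running this with each pair of oracle sub-tasks isolates the remaining one, which is how the per-sub-task completion rates in Table 4 are obtained.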
# 6.3 Token Masking

To analyze the importance of direction and object tokens in the navigation instructions, we run masking experiments similar to Zhu et al. (2021a), except that we mask the tokens during training and testing instead of during testing only. Figure 4 shows the resulting task completion rates for an increasing number of masked direction or object tokens. From the widening gap between masking object and direction tokens, we can see that the direction tokens are more important to successfully reach the goal location. Task completion barely changes when masking object tokens, indicating that they are mostly ignored by the model. While task completion drops significantly when direction tokens are masked, the agent still performs at a high level. This finding is surprising and at odds with Zhu et al. (2021a), who report that task completion drops nearly to zero when masking direction tokens during testing only. We believe that in our setting (masking during testing and training), the model learns to infer the correct directions from redundancies in the instructions or from context around the direction tokens. Beyond the general trend of lower performance in the unseen scenario, we cannot identify different utilization of object or direction tokens in the seen and unseen scenario.

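The masking procedure amounts to replacing a token class in the instruction, applied identically at training and test time. The token lists below are illustrative, not the ones used in the experiments.

```python
DIRECTION_TOKENS = {"left", "right", "straight", "turn", "forward"}  # illustrative
OBJECT_TOKENS = {"bakery", "scaffolding", "awning", "hydrant"}       # illustrative

def mask_tokens(instruction, token_set, mask="<mask>"):
    """Replace every token from `token_set` with a mask symbol."""
    return [mask if tok.lower() in token_set else tok
            for tok in instruction.split()]

print(mask_tokens("Turn left at the bakery", DIRECTION_TOKENS))
# -> ['<mask>', '<mask>', 'at', 'the', 'bakery']
print(mask_tokens("Turn left at the bakery", OBJECT_TOKENS))
# -> ['Turn', 'left', 'at', 'the', 'bakery']
```

Because the same masking is seen during training, the model can learn to exploit the remaining context, which is one plausible reason the drop is smaller than with test-time-only masking.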
# 6.4 Merged Datasets

We train the ORAR full model on the merged dataset (see Section 5.1). Model selection is performed on the merged development set, but results are also reported for the individual test sets of Touchdown and map2seq. For comparison with models trained on the non-merged datasets, the first row of Table 5 shows the best results of Table 2. Training on the merged dataset significantly improves nDTW and task completion across both datasets and scenarios. This shows that the two datasets are compatible, and the merged dataset can further be used by the VLN community to evaluate models on more diverse navigation instructions. Despite being trained on twice as many instances, the no image baseline still performs on par with the visual models in the unseen scenario of map2seq. From this we conclude that the current bottleneck for better generalization to unseen areas is the number of panorama images seen during training rather than the number of instructions.

# 7 Related Work

Natural language instructed navigation of embodied agents has been studied in generated grid environments that allow a structured representation of the observed environment (MacMahon et al., 2006; Chen and Mooney, 2011). Fueled by the advances in image representation learning (He et al., 2016), the environments became more realistic by using real-world panorama images of indoor locations (Anderson et al., 2018; Ku et al., 2020). Complementary outdoor environments contain street-level panoramas connected by a real-world street layout (Mirowski et al., 2018; Chen et al., 2019; Mehta et al., 2020). Agents in this outdoor environment are trained to follow human-written navigation instructions (Chen et al., 2019; Xiang et al., 2020), instructions generated by Google Maps (Hermann et al., 2020), or a combination of both (Zhu et al., 2021b). Recent work focuses on analyzing navigation agents by introducing better trajectory overlap metrics (Jain et al., 2019; Ilharco et al., 2019) or by diagnosing performance under certain constraints such as uni-modal inputs (Thomason et al., 2019) and masked direction or object tokens (Zhu et al., 2021a). Other work used a trained VLN agent to evaluate automatically generated navigation instructions (Zhao et al., 2021). An open problem in indoor VLN is the generalization of navigation performance to previously unseen areas. Proposed solutions include back translation with environment dropout (Tan et al., 2019), multi-modal environment representations (Hu et al., 2019) or semantically segmented images (Zhang et al., 2020). Notably, the latter work identifies the same problem in the Touchdown task.

<table><tr><td rowspan="3"></td><td colspan="8">Seen</td><td colspan="8">Unseen</td></tr><tr><td colspan="4">Merged</td><td colspan="2">Touchdown</td><td colspan="2">map2seq</td><td colspan="4">Merged</td><td colspan="2">Touchdown</td><td colspan="2">map2seq</td></tr><tr><td colspan="2">dev</td><td colspan="2">test</td><td colspan="2">test</td><td colspan="2">test</td><td colspan="2">dev</td><td colspan="2">test</td><td colspan="2">test</td><td colspan="2">test</td></tr><tr><td>Model</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td></tr><tr><td>best non-merged</td><td>-</td><td>-</td><td>-</td><td>-</td><td>44.9</td><td>29.1</td><td>62.3</td><td>46.7</td><td>-</td><td>-</td><td>-</td><td>-</td><td>21.6</td><td>14.9</td><td>42.2</td><td>30.3</td></tr><tr><td>ORAR full model</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>• no image</td><td>37.5</td><td>26.6</td><td>35.8</td><td>24.7</td><td>23.0</td><td>14.8</td><td>58.3</td><td>42.1</td><td>31.6</td><td>22.3</td><td>27.0</td><td>19.2</td><td>16.6</td><td>11.7</td><td>46.5</td><td>33.2</td></tr><tr><td>• ResNet pre-final</td><td>51.3</td><td>38.8</td><td>49.3</td><td>36.8</td><td>39.1</td><td>27.7</td><td>67.3</td><td>52.8</td><td>28.9</td><td>22.0</td><td>25.7</td><td>20.0</td><td>17.4</td><td>13.6</td><td>41.3</td><td>32.1</td></tr><tr><td>• ResNet 4th-to-last</td><td>53.4</td><td>37.8</td><td>51.8</td><td>35.7</td><td>46.0</td><td>30.1</td><td>62.1</td><td>45.5</td><td>35.7</td><td>25.4</td><td>33.6</td><td>24.2</td><td>27.0</td><td>19.3</td><td>46.1</td><td>33.5</td></tr></table>

Table 5: Results for models trained on the merged dataset. Test results are presented for the merged test set and individual Touchdown and map2seq test sets. Metrics are normalized Dynamic Time Warping (nDTW) and task completion (TC). In the first row the best results of Table 2 (non-merged training sets) are listed for comparison. The bottom section presents results on the ORAR full model with different types of image features.

# 8 Conclusion

We presented an investigation of outdoor vision and language navigation in seen and unseen environments. We introduced the heading delta feature to correct an artifact of the environment, and the junction type embedding to explicitly model the number of outgoing edges. Both features help to boost and analyze performance in the unseen scenario. We conducted experiments on two datasets and showed that the considered visual features generalize poorly to unseen areas. We conjecture that VLN tasks need to grow in scale and in diversity of geographical environments and navigation tasks.

# Acknowledgments

The research reported in this paper was supported by a Google Focused Research Award on "Learning to Negotiate Answers in Multi-Pass Semantic Parsing".

# References

Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. 2018. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, Utah.

Devendra Singh Chaplot, Kanthashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj Rajagopal, and Ruslan Salakhutdinov. 2018. Gated-attention architectures for task-oriented language grounding. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), New Orleans, Louisiana.

David L. Chen and Raymond J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence (AAAI), San Francisco, California.

Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, and Yoav Artzi. 2019. Touchdown: Natural language navigation and spatial reasoning in visual street environments. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, California.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Minneapolis, Minnesota.

Alex Graves, Santiago Fernández, and Jürgen Schmidhuber. 2005. Bidirectional LSTM networks for improved phoneme classification and recognition. In Proceedings of International Conference on Artificial Neural Networks: Formal Models and Their Applications (ICANN), Warsaw, Poland.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, Nevada.

Karl Moritz Hermann, Mateusz Malinowski, Piotr Mirowski, Andras Banki-Horvath, Keith Anderson, and Raia Hadsell. 2020. Learning to follow directions in street view. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), New York, New York.

Ronghang Hu, Daniel Fried, Anna Rohrbach, Dan Klein, Trevor Darrell, and Kate Saenko. 2019. Are you looking? Grounding to multiple modalities in vision-and-language navigation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), Florence, Italy.

Gabriel Ilharco, Vihan Jain, Alexander Ku, Eugene Ie, and Jason Baldridge. 2019. Effective and general evaluation for instruction conditioned navigation using dynamic time warping. In NeurIPS Visually Grounded Interaction and Language Workshop (ViGIL), Vancouver, Canada.

Vihan Jain, Gabriel Magalhaes, Alexander Ku, Ashish Vaswani, Eugene Ie, and Jason Baldridge. 2019. Stay on the path: Instruction fidelity in vision-and-language navigation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), Florence, Italy.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, California.

Jacob Krantz, Erik Wijmans, Arjun Majumdar, Dhruv Batra, and Stefan Lee. 2020. Beyond the nav-graph: Vision-and-language navigation in continuous environments. In Computer Vision - ECCV 2020, Cham.

Alexander Ku, Peter Anderson, Roma Patel, Eugene Ie, and Jason Baldridge. 2020. Room-Across-Room: Multilingual vision-and-language navigation with dense spatiotemporal grounding. In Conference on Empirical Methods in Natural Language Processing (EMNLP), Online.

Matt MacMahon, Brian Stankiewicz, and Benjamin Kuipers. 2006. Walk the talk: Connecting language, knowledge, and action in route instructions. In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI), Boston, Massachusetts.

Arjun Majumdar, Ayush Shrivastava, Stefan Lee, Peter Anderson, Devi Parikh, and Dhruv Batra. 2020. Improving vision-and-language navigation with image-text pairs from the web. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK.

Harsh Mehta, Yoav Artzi, Jason Baldridge, Eugene Ie, and Piotr Mirowski. 2020. Retouchdown: Releasing touchdown on StreetLearn as a public resource for language grounding tasks in street view. In Proceedings of the Third International Workshop on Spatial Language Understanding (SpLU), Online.

Piotr Mirowski, Matthew Koichi Grimes, Mateusz Malinowski, Karl Moritz Hermann, Keith Anderson, Denis Teplyashin, Karen Simonyan, Koray Kavukcuoglu, Andrew Zisserman, and Raia Hadsell. 2018. Learning to navigate in cities without a map. In Proceedings of the 32nd International Conference on Neural Information Processing Systems (NeurIPS), Montreal, Canada.

Gerhard Neuhold, Tobias Ollmann, Samuel Rota Bulò, and Peter Kontschieder. 2017. The Mapillary Vistas dataset for semantic understanding of street scenes. In 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy.

Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. 2020. BPE-dropout: Simple and effective subword regularization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), Online.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), 115:211-252.

Raphael Schumann and Stefan Riezler. 2021. Generating landmark navigation instructions from maps as a graph-to-text problem. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL), Online.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Berlin, Germany.

Hao Tan, Licheng Yu, and Mohit Bansal. 2019. Learning to navigate unseen environments: Back translation with environmental dropout. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Minneapolis, Minnesota.

Jesse Thomason, Daniel Gordon, and Yonatan Bisk. 2019. Shifting the baseline: Single modality performance on visual navigation & QA. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Minneapolis, Minnesota.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30 (NeurIPS), Long Beach, California.

Jiannan Xiang, Xin Wang, and William Yang Wang. 2020. Learning to stop: A simple yet effective approach to urban vision-language navigation. In Findings of the Association for Computational Linguistics (ACL Findings), Online.

Kailun Yang, Xinxin Hu, Yicheng Fang, Kaiwei Wang, and Rainer Stiefelhagen. 2020. Omnisupervised omnidirectional semantic segmentation. IEEE Transactions on Intelligent Transportation Systems (T-ITS).

Yubo Zhang, Hao Tan, and Mohit Bansal. 2020. Diagnosing the environment bias in vision-and-language navigation. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI), Yokohama, Japan.

Ming Zhao, Peter Anderson, Vihan Jain, Su Wang, Alexander Ku, Jason Baldridge, and Eugene Ie. 2021. On the evaluation of vision-and-language navigation instructions. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL), Online.

Wanrong Zhu, Yuankai Qi, Pradyumna Narayana, Kazoo Sone, Sugato Basu, Xin Eric Wang, Qi Wu, Miguel P. Eckstein, and William Yang Wang. 2021a. Diagnosing vision-and-language navigation: What really matters. CoRR, abs/2103.16561.

Wanrong Zhu, Xin Wang, Tsu-Jui Fu, An Yan, Pradyumna Narayana, Kazoo Sone, Sugato Basu, and William Yang Wang. 2021b. Multimodal text style transfer for outdoor vision-and-language navigation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL), Online.

<table><tr><td rowspan="2">Ablation</td><td colspan="2">Touchdown</td><td colspan="2">map2seq</td></tr><tr><td>dev</td><td>test</td><td>dev</td><td>test</td></tr><tr><td>ORAR full model</td><td>29.9</td><td>29.1</td><td>49.1</td><td>46.7</td></tr><tr><td>- no 2nd RNN</td><td>23.2</td><td>23.9</td><td>43.3</td><td>40.6</td></tr><tr><td>- no BPE dropout</td><td>26.6</td><td>25.9</td><td>45.2</td><td>43.1</td></tr><tr><td>- no text attention</td><td>9.4</td><td>10.4</td><td>22.0</td><td>21.8</td></tr><tr><td>- no image attention</td><td>21.5</td><td>20.1</td><td>48.8</td><td>45.7</td></tr></table>

Table 6: ORAR full model ablation study on the seen scenario of Touchdown and map2seq. Metric is task completion and ablations are not cumulative.

# A Architecture Ablation

We perform ablation studies on the ORAR full model in the seen scenario to measure the impact of individual architecture components. As seen in Table 6, removing the second decoder RNN layer or BPE dropout results in a decrease of six and three task completion points, respectively. The largest drop in performance is observed when removing the text attention mechanism. This again shows the importance of attention over the encoder in sequence-to-sequence models. Removing the image attention mechanism, on the other hand, does not affect task completion on the map2seq dataset.

# B Mixed-Model

The findings in Section 6.2 inspire us to modify the ORAR model to use distinct visual features for the orientation and the directions/stopping sub-tasks. The orientation task is equivalent to the very first action prediction of the agent. We therefore modify the model architecture to use the ResNet 4th-to-last features (plus text representation) to predict the first action, and then start the recurrent prediction of the remaining actions with a different set of visual features (pre-final for the seen scenario and no image features for the unseen scenario). The results for this ORAR mixed model trained on the merged dataset are shown in Table 7. We only test it on Touchdown because map2seq does not include the orientation task. The mixed model significantly outperforms the single visual feature model on the Touchdown seen test set, but shows no improvement in the unseen scenario.

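The mixed model only changes which visual features feed each decoding step. Schematically, with placeholder feature names:

```python
def visual_features_for_step(t, feats_4th_to_last, feats_alternative):
    """Mixed-model routing: the first action (orientation sub-task) is
    predicted from ResNet 4th-to-last features; all subsequent actions use
    the alternative set (pre-final in the seen scenario, no image features
    in the unseen scenario)."""
    return feats_4th_to_last if t == 0 else feats_alternative

schedule = [visual_features_for_step(t, "4th-to-last", "pre-final")
            for t in range(4)]
print(schedule)  # -> ['4th-to-last', 'pre-final', 'pre-final', 'pre-final']
```

The rest of the architecture is unchanged; only the feature tensor fed into the visual encoder is switched after the first timestep.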
# C Additional Metrics and Individual Runs

We present the results of the individual repetitions and additional metrics for the main results in Table 2 and the results on the merged dataset in Table 5. The additional metrics are success weighted normalized Dynamic Time Warping (Ilharco et al., 2019) and shortest-path distance (Chen et al., 2019).

<table><tr><td rowspan="3"></td><td colspan="7">Seen</td><td colspan="7">Unseen</td></tr><tr><td colspan="3">Merged</td><td colspan="2">Touchdown</td><td colspan="2">map2seq</td><td colspan="3">Merged</td><td colspan="2">Touchdown</td><td colspan="2">map2seq</td></tr><tr><td>dev</td><td colspan="2">test</td><td colspan="2">test</td><td colspan="2">test</td><td>dev</td><td colspan="2">test</td><td colspan="2">test</td><td colspan="2">test</td></tr><tr><td>Model</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td><td>nDTW</td><td>TC</td></tr><tr><td>best non-merged</td><td>-</td><td>-</td><td>-</td><td>44.9</td><td>29.1</td><td>62.3</td><td>46.7</td><td>-</td><td>-</td><td>-</td><td>21.6</td><td>14.9</td><td>42.2</td><td>30.3</td></tr><tr><td>best merged</td><td>53.4</td><td>37.8</td><td>51.8</td><td>35.7</td><td>46.0</td><td>30.1</td><td>67.3</td><td>52.8</td><td>35.7</td><td>25.4</td><td>33.6</td><td>24.2</td><td>27.0</td><td>19.3</td></tr><tr><td rowspan="2">ORAR mixed model</td><td colspan="7">• 4th-to-last + pre-final</td><td colspan="7">• 4th-to-last + no image</td></tr><tr><td>58.6</td><td>44.4</td><td>57.4</td><td>42.9</td><td>51.3</td><td>36.9</td><td>-</td><td>-</td><td>36.3</td><td>26.1</td><td>33.6</td><td>23.9</td><td>26.3</td><td>18.3</td></tr></table>

Table 7: Results for the mixed model in comparison to previous best results. Metrics are normalized Dynamic Time Warping (nDTW) and task completion (TC). In the first two rows the best results of Table 2 and Table 5 are listed for comparison. The last section presents results for the ORAR mixed model which uses different image features for different sub-tasks.

<table><tr><td rowspan="3"></td><td colspan="7">Seen</td><td colspan="7">Unseen</td></tr><tr><td colspan="3">Touchdown</td><td colspan="4">map2seq</td><td colspan="4">Touchdown</td><td colspan="3">map2seq</td></tr><tr><td>dev</td><td colspan="2">test</td><td>dev</td><td colspan="3">test</td><td>dev</td><td colspan="3">test</td><td>dev</td><td colspan="2">test</td></tr><tr><td>Model</td><td>SDTW</td><td>SPD</td><td>SDTW</td><td>SPD</td><td>SDTW</td><td>SPD</td><td>SDTW</td><td>SPD</td><td>SDTW</td><td>SPD</td><td>SDTW</td><td>SPD</td><td>SDTW</td><td>SPD</td></tr><tr><td>RConcat</td><td>9.8</td><td>20.4</td><td>11.1</td><td>20.4</td><td>16.0</td><td>19.0</td><td>13.7</td><td>20.1</td><td>1.8</td><td>29.6</td><td>1.4</td><td>29.3</td><td>1.2</td><td>33.1</td></tr><tr><td>GA</td><td>11.1</td><td>18.7</td><td>10.9</td><td>19.0</td><td>17.2</td><td>16.5</td><td>16.0</td><td>18.0</td><td>1.3</td><td>31.0</td><td>1.7</td><td>30.5</td><td>1.4</td><td>34.3</td></tr><tr><td>ARC</td><td>14.1</td><td>18.6</td><td>13.5</td><td>19.4</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>ARC+l2s</td><td>19.0</td><td>17.1</td><td>16.3</td><td>18.8</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>VLN Transformer</td><td>12.9</td><td>21.5</td><td>14.0</td><td>21.2</td><td>17.5</td><td>18.6</td><td>15.9</td><td>19.0</td><td>1.9</td><td>29.5</td><td>2.3</td><td>29.6</td><td>-</td><td>-</td></tr><tr><td>ORAR full model</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>• ResNet pre-final</td><td>24.5</td><td>15.0</td><td>23.8</td><td>16.2</td><td>46.7</td><td>5.9</td><td>44.4</td><td>6.6</td><td>8.6</td><td>26.7</td><td>7.6</td><td>26.7</td><td>22.3</td><td>15.6</td></tr><tr><td>• ResNet 4th-to-last</td><td>28.3</td><td>11.1</td><td>27.4</td><td>11.7</td><td>41.1</td><td>7.2</td><td>39.5</td><td>7.6</td><td>14.3</td><td>20.0</td><td>13.6</td><td>20.7</td><td>25.8</td><td>11.9</td></tr><tr><td>ORAR full model</td><td colspan="3">• ResNet 4th-to-last</td><td colspan="4">• ResNet pre-final</td><td colspan="4">• ResNet 4th-to-last</td><td colspan="3">• ResNet 4th-to-last</td></tr><tr><td>- no heading delta</td><td>28.3</td><td>10.9</td><td>27.6</td><td>11.5</td><td>45.4</td><td>6.8</td><td>42.7</td><td>7.7</td><td>14.0</td><td>20.5</td><td>13.5</td><td>20.8</td><td>20.4</td><td>16.8</td></tr><tr><td>- no junction type</td><td>23.1</td><td>13.6</td><td>22.8</td><td>13.9</td><td>47.2</td><td>7.6</td><td>43.0</td><td>8.6</td><td>4.0</td><td>26.6</td><td>3.7</td><td>26.7</td><td>4.3</td><td>28.9</td></tr></table>

Table 8: Results on Touchdown and map2seq for the seen and unseen scenario. Metrics are success weighted normalized Dynamic Time Warping (SDTW) and shortest-path distance (SPD). For SDTW higher values are better and for SPD lower values are better.

+ <table><tr><td rowspan="2"></td><td colspan="12">Seen</td><td colspan="10">Unseen</td><td></td><td></td></tr><tr><td colspan="9">task completion of the ten repetitions</td><td>mean</td><td>std</td><td colspan="8">task completion of the ten repetitions</td><td>mean</td><td>std</td><td></td><td></td><td></td></tr><tr><td>ORAR full model</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>• ResNet pre-final</td><td>26.1</td><td>18.5</td><td>25.8</td><td>25.1</td><td>26.8</td><td>28.7</td><td>24.4</td><td>25.5</td><td>25.6</td><td>26.0</td><td>25.3</td><td>2.5</td><td>8.8</td><td>9.2</td><td>7.3</td><td>9.8</td><td>8.5</td><td>8.4</td><td>10.0</td><td>8.2</td><td>9.4</td><td>8.1</td><td>8.8</td><td>0.8</td></tr><tr><td>• ResNet 4th-to-last</td><td>28.2</td><td>30.0</td><td>26.9</td><td>29.6</td><td>27.4</td><td>29.2</td><td>30.4</td><td>30.0</td><td>28.3</td><td>30.7</td><td>29.1</td><td>1.2</td><td>12.0</td><td>15.1</td><td>14.5</td><td>15.5</td><td>14.3</td><td>16.0</td><td>16.5</td><td>14.9</td><td>14.5</td><td>15.3</td><td>14.9</td><td>1.2</td></tr><tr><td>ORAR full model</td><td></td><td></td><td></td><td></td><td colspan="4">ResNet 4th-to-last</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td colspan="4">ResNet 4th-to-last</td><td></td><td></td><td></td><td></td></tr><tr><td>- no heading delta</td><td>29.2</td><td>30.0</td><td>27.4</td><td>29.9</td><td>29.0</td><td>29.5</td><td>31.2</td><td>29.3</td><td>28.4</td><td>29.0</td><td>29.3</td><td>1.0</td><td>14.7</td><td>13.7</td><td>15.5</td><td>14.9</td><td>14.1</td><td>13.5</td><td>16.0</td><td>15.1</td><td>16.0</td><td>14.5</td><td>14.8</td><td>0.8</td></tr><tr><td>- no junction 
type</td><td>24.1</td><td>24.5</td><td>22.6</td><td>21.9</td><td>24.4</td><td>25.7</td><td>26.1</td><td>24.5</td><td>24.5</td><td>24.1</td><td>24.2</td><td>1.2</td><td>4.4</td><td>5.0</td><td>4.2</td><td>4.2</td><td>4.2</td><td>3.8</td><td>4.5</td><td>4.2</td><td>5.1</td><td>4.3</td><td>4.4</td><td>0.4</td></tr></table>
289
+
290
+ Table 9: Task completion for the ten individual runs with mean and standard deviation on the Touchdown seen and unseen test set.
291
+
292
+ <table><tr><td rowspan="2"></td><td colspan="12">Seen</td><td colspan="10">Unseen</td><td></td><td></td></tr><tr><td colspan="9">task completion of the ten repetitions</td><td>mean</td><td>std</td><td colspan="8">task completion of the ten repetitions</td><td>mean</td><td>std</td><td></td><td></td><td></td></tr><tr><td>ORAR full model</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>• ResNet pre-final</td><td>41.0</td><td>48.8</td><td>47.8</td><td>47.9</td><td>45.8</td><td>49.5</td><td>45.8</td><td>48.2</td><td>44.6</td><td>47.4</td><td>46.7</td><td>2.4</td><td>22.4</td><td>18.8</td><td>26.0</td><td>24.5</td><td>26.1</td><td>28.1</td><td>22.1</td><td>26.8</td><td>24.4</td><td>26.6</td><td>24.6</td><td>2.6</td></tr><tr><td>• ResNet 4th-to-last</td><td>40.5</td><td>42.2</td><td>42.1</td><td>42.1</td><td>38.6</td><td>42.9</td><td>41.2</td><td>42.1</td><td>45.2</td><td>40.5</td><td>41.7</td><td>1.6</td><td>32.9</td><td>29.6</td><td>28.9</td><td>28.5</td><td>27.6</td><td>32.2</td><td>26.8</td><td>33.6</td><td>34.0</td><td>28.4</td><td>30.3</td><td>2.5</td></tr><tr><td>ORAR full model</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>- no heading delta</td><td>46.0</td><td>43.1</td><td>47.1</td><td>47.5</td><td>45.0</td><td>48.4</td><td>36.2</td><td>44.6</td><td>47.1</td><td>43.8</td><td>44.9</td><td>3.3</td><td>23.2</td><td>24.0</td><td>21.6</td><td>25.8</td><td>24.5</td><td>23.6</td><td>23.8</td><td>23.2</td><td>22.0</td><td>24.5</td><td>23.6</td><td>1.2</td></tr><tr><td>- no junction 
type</td><td>44.9</td><td>46.1</td><td>46.2</td><td>44.0</td><td>43.2</td><td>46.5</td><td>44.9</td><td>47.1</td><td>45.5</td><td>42.1</td><td>45.1</td><td>1.5</td><td>5.1</td><td>4.5</td><td>5.0</td><td>5.1</td><td>4.6</td><td>3.9</td><td>5.6</td><td>3.8</td><td>4.4</td><td>4.6</td><td>4.7</td><td>0.5</td></tr></table>
293
+
294
+ Table 10: Task completion for the ten individual runs with mean and standard deviation on the map2seq seen and unseen test set.
295
+
296
+ <table><tr><td rowspan="3"></td><td colspan="8">Seen</td><td colspan="7">Unseen</td><td></td></tr><tr><td colspan="4">Merged</td><td colspan="2">Touchdown</td><td colspan="2">map2seq</td><td colspan="3">Merged</td><td colspan="2">Touchdown</td><td colspan="2">map2seq</td><td></td></tr><tr><td>dev</td><td colspan="3">test</td><td colspan="2">test</td><td colspan="2">test</td><td>dev</td><td colspan="2">test</td><td colspan="2">test</td><td colspan="2">test</td><td></td></tr><tr><td>Model</td><td>SDTW</td><td>SPD</td><td>SDTW</td><td>SPD</td><td>SDTW</td><td>SPD</td><td>SDTW</td><td>SPD</td><td>SDTW</td><td>SPD</td><td>SDTW</td><td>SPD</td><td>SDTW</td><td>SPD</td><td>SDTW</td><td>SPD</td></tr><tr><td>ORAR full model</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>• no image</td><td>25.0</td><td>18.8</td><td>23.2</td><td>19.4</td><td>13.9</td><td>26.1</td><td>39.8</td><td>7.8</td><td>20.6</td><td>17.9</td><td>17.7</td><td>21.3</td><td>10.5</td><td>26.7</td><td>31.1</td><td>11.4</td></tr><tr><td>• ResNet pre-final</td><td>36.8</td><td>12.5</td><td>34.8</td><td>14.1</td><td>26.1</td><td>18.8</td><td>50.2</td><td>5.7</td><td>20.3</td><td>20.1</td><td>18.4</td><td>22.0</td><td>12.2</td><td>25.8</td><td>30.2</td><td>14.8</td></tr><tr><td>• ResNet 4th-to-last</td><td>35.9</td><td>9.3</td><td>33.8</td><td>9.8</td><td>28.4</td><td>11.7</td><td>43.2</td><td>6.5</td><td>23.6</td><td>14.9</td><td>22.5</td><td>16.6</td><td>17.7</td><td>19.2</td><td>31.4</td><td>11.7</td></tr><tr><td>ORAR mixed model</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>• 4th-to-last + 
pre-final</td><td>42.1</td><td>8.6</td><td>40.8</td><td>9.3</td><td>34.8</td><td>11.5</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>• 4th-to-last + no image</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>24.1</td><td>15.1</td><td>22.2</td><td>17.2</td><td>16.9</td><td>20.4</td><td>-</td><td>-</td></tr></table>
297
+
298
+ Table 11: Results for models trained on the merged dataset. Test results are presented for the merged test set and individual Touchdown and map2seq test sets. Metrics are success weighted normalized Dynamic Time Warping (SDTW) and shortest-path distance (SPD). For SDTW higher values are better and for SPD lower values are better.
299
+
300
+ <table><tr><td rowspan="2"></td><td colspan="12">Seen</td><td colspan="10">Unseen</td><td></td><td></td></tr><tr><td colspan="10">task completion of the ten repetitions</td><td>mean</td><td>std</td><td colspan="7">task completion of the ten repetitions</td><td>mean</td><td>std</td><td></td><td></td><td></td></tr><tr><td colspan="22">ORAR full model</td><td></td><td></td><td></td></tr><tr><td>no image</td><td>24.0</td><td>24.1</td><td>25.3</td><td>25.8</td><td>24.6</td><td>26.1</td><td>24.4</td><td>24.1</td><td>24.5</td><td>24.7</td><td>0.7</td><td>20.1</td><td>18.2</td><td>19.5</td><td>19.7</td><td>18.6</td><td>18.6</td><td>18.7</td><td>19.8</td><td>18.6</td><td>19.9</td><td>19.2</td><td>0.7</td><td></td></tr><tr><td>ResNet pre-final</td><td>36.7</td><td>34.7</td><td>35.3</td><td>37.1</td><td>36.4</td><td>36.1</td><td>38.8</td><td>36.4</td><td>36.8</td><td>39.5</td><td>36.8</td><td>1.4</td><td>18.9</td><td>19.8</td><td>19.8</td><td>20.8</td><td>20.1</td><td>20.0</td><td>20.5</td><td>19.9</td><td>20.2</td><td>19.9</td><td>20.0</td><td>0.5</td></tr><tr><td>ResNet 4th-to-last</td><td>34.9</td><td>36.2</td><td>35.4</td><td>35.4</td><td>36.2</td><td>36.5</td><td>34.1</td><td>36.3</td><td>36.3</td><td>35.5</td><td>35.7</td><td>0.7</td><td>23.8</td><td>24.9</td><td>25.8</td><td>23.9</td><td>24.1</td><td>23.4</td><td>24.7</td><td>23.9</td><td>23.5</td><td>24.2</td><td>24.2</td><td>0.7</td></tr><tr><td colspan="23">ORAR mixed model</td><td></td><td></td></tr><tr><td>4th-to-last + pre-final</td><td>43.8</td><td>42.6</td><td>43.4</td><td>43.2</td><td>43.6</td><td>42.0</td><td>44.1</td><td>42.1</td><td>41.9</td><td>42.7</td><td>42.9</td><td>0.8</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td></td></tr><tr><td>4th-to-last + no 
image</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>23.8</td><td>23.4</td><td>23.7</td><td>23.4</td><td>24.1</td><td>24.7</td><td>24.6</td><td>24.1</td><td>23.8</td><td>22.9</td><td>23.8</td><td>0.5</td></tr></table>
301
+
302
+ Table 12: Task completion for the ten individual runs with mean and standard deviation on the merged seen and unseen test set.
303
+
304
+ <table><tr><td rowspan="2"></td><td colspan="12">Seen</td><td colspan="10">Unseen</td><td></td><td></td></tr><tr><td colspan="10">task completion of the ten repetitions</td><td>mean</td><td>std</td><td colspan="7">task completion of the ten repetitions</td><td>mean</td><td>std</td><td></td><td></td><td></td></tr><tr><td colspan="22">ORAR full model</td><td></td><td></td><td></td></tr><tr><td>no image</td><td>14.1</td><td>14.4</td><td>15.8</td><td>16.5</td><td>14.1</td><td>16.3</td><td>14.5</td><td>14.9</td><td>13.3</td><td>14.5</td><td>14.8</td><td>1.0</td><td>12.1</td><td>10.7</td><td>12.1</td><td>12.2</td><td>11.0</td><td>11.5</td><td>11.5</td><td>12.9</td><td>11.0</td><td>12.2</td><td>11.7</td><td>0.7</td></tr><tr><td>ResNet pre-final</td><td>27.5</td><td>25.1</td><td>26.4</td><td>28.4</td><td>26.6</td><td>27.4</td><td>30.3</td><td>27.7</td><td>27.0</td><td>30.2</td><td>27.7</td><td>1.5</td><td>13.1</td><td>13.0</td><td>13.1</td><td>14.1</td><td>13.5</td><td>13.7</td><td>14.1</td><td>13.5</td><td>13.9</td><td>13.7</td><td>13.6</td><td>0.4</td></tr><tr><td>ResNet 4th-to-last</td><td>30.7</td><td>30.4</td><td>30.0</td><td>30.1</td><td>30.0</td><td>30.2</td><td>29.2</td><td>30.2</td><td>30.4</td><td>30.0</td><td>30.1</td><td>0.4</td><td>18.0</td><td>20.3</td><td>20.8</td><td>18.8</td><td>18.9</td><td>18.0</td><td>20.2</td><td>19.8</td><td>18.6</td><td>19.6</td><td>19.3</td><td>0.9</td></tr><tr><td colspan="23">ORAR mixed model</td><td></td><td></td></tr><tr><td>4th-to-last + pre-final</td><td>37.6</td><td>36.3</td><td>36.4</td><td>37.8</td><td>37.9</td><td>35.1</td><td>38.0</td><td>36.6</td><td>35.6</td><td>37.4</td><td>36.9</td><td>1.0</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td></td></tr><tr><td>4th-to-last + no 
image</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>17.8</td><td>17.9</td><td>18.1</td><td>18.8</td><td>18.8</td><td>19.2</td><td>19.2</td><td>18.4</td><td>17.9</td><td>17.1</td><td>18.3</td><td>0.6</td></tr></table>
305
+
306
+ Table 13: Task completion for the ten individual runs with mean and standard deviation on the Touchdown seen and unseen test set, trained on the merged training set.
307
+
308
+ <table><tr><td rowspan="2"></td><td colspan="12">Seen</td><td colspan="10">Unseen</td><td></td><td></td></tr><tr><td colspan="9">task completion of the ten repetitions</td><td>mean</td><td>std</td><td colspan="8">task completion of the ten repetitions</td><td>mean</td><td>std</td><td></td><td></td><td></td></tr><tr><td colspan="22">ORAR full model</td><td></td><td></td><td></td></tr><tr><td>no image</td><td>41.5</td><td>41.2</td><td>42.1</td><td>42.1</td><td>43.2</td><td>43.4</td><td>41.9</td><td>40.4</td><td>43.1</td><td>42.1</td><td>42.1</td><td>0.9</td><td>35.1</td><td>32.5</td><td>33.6</td><td>33.8</td><td>32.9</td><td>32.0</td><td>32.4</td><td>32.8</td><td>32.8</td><td>34.2</td><td>33.2</td><td>0.9</td></tr><tr><td>ResNet pre-final</td><td>53.0</td><td>51.5</td><td>51.0</td><td>52.4</td><td>53.6</td><td>51.5</td><td>53.6</td><td>51.6</td><td>53.9</td><td>55.8</td><td>52.8</td><td>1.4</td><td>29.9</td><td>32.6</td><td>32.2</td><td>33.4</td><td>32.6</td><td>32.0</td><td>32.6</td><td>32.0</td><td>32.0</td><td>31.5</td><td>32.1</td><td>0.9</td></tr><tr><td>ResNet 4th-to-last</td><td>42.5</td><td>46.4</td><td>44.9</td><td>44.6</td><td>47.1</td><td>47.8</td><td>42.8</td><td>47.1</td><td>46.6</td><td>45.2</td><td>45.5</td><td>1.7</td><td>34.8</td><td>33.5</td><td>35.2</td><td>33.5</td><td>33.9</td><td>33.6</td><td>33.1</td><td>31.6</td><td>32.6</td><td>33.0</td><td>33.5</td><td>1.0</td></tr></table>
309
+
310
+ Table 14: Task completion for the ten individual runs with mean and standard deviation on the map2seq seen and unseen test set, trained on the merged training set.
analyzinggeneralizationofvisionandlanguagenavigationtounseenoutdoorareas/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dc79aeaf22c47230d1c2838ed87b2bdf65976d3208b4f0f1f3824c75b6639c64
3
+ size 785773
analyzinggeneralizationofvisionandlanguagenavigationtounseenoutdoorareas/layout.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2237ef8c1c0c6e51ca6a6d6b820d492945e736227560d5cddd8e24ce772058c7
3
+ size 359578
aneffectiveandefficiententityalignmentdecodingalgorithmviathirdordertensorisomorphism/297e70eb-b658-4414-9546-9ef0fab36159_content_list.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0b91220c8bdafe2052cdbdde3aa46b2fead8d5527d05fbcf36dde76c92e0457f
3
+ size 90078
aneffectiveandefficiententityalignmentdecodingalgorithmviathirdordertensorisomorphism/297e70eb-b658-4414-9546-9ef0fab36159_model.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d2700733a29b41326e8d7f21a058877b7fc40a0906ad0023fd5d56250ac67c98
3
+ size 105198
aneffectiveandefficiententityalignmentdecodingalgorithmviathirdordertensorisomorphism/297e70eb-b658-4414-9546-9ef0fab36159_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e33cb81256d94dc7eb9004bbc36457d8a011cb0dcac03e0e235be636f40ad1d7
3
+ size 783449
aneffectiveandefficiententityalignmentdecodingalgorithmviathirdordertensorisomorphism/full.md ADDED
@@ -0,0 +1,404 @@
1
+ # An Effective and Efficient Entity Alignment Decoding Algorithm via Third-Order Tensor Isomorphism
2
+
3
+ Xin Mao $^{1*}$ , Meirong Ma $^{2}$ , Hao Yuan $^{2}$ , Jianchao Zhu $^{2}$ , Zongyu Wang $^{3}$ , Rui Xie $^{3}$ , Wei Wu $^{3}$ , Man Lan $^{1*}$
4
+
5
+ $^{1}$ School of Computer Science and Technology, East China Normal University $^{2}$ Transsion Group, $^{3}$ Meituan Group
6
+
7
+ xmao@stu.ecnu.edu.cn, mlan@cs.ecnu.edu.cn
8
+
9
+ {meirong.ma,hao.yuan,jianchao.zhu}@transsion.com
10
+
11
+ {wangzongyu02, rui.xie, wuwei30}@meituan.com
12
+
13
+ # Abstract
14
+
15
+ Entity alignment (EA) aims to find the equivalent entity pairs between KGs, which is a crucial step for integrating multi-source KGs. For a long time, most researchers have regarded EA as a pure graph representation learning task and focused on improving graph encoders while paying little attention to the decoding process. In this paper, we propose an effective and efficient EA Decoding Algorithm via Third-order Tensor Isomorphism (DATTI). Specifically, we derive two sets of isomorphism equations: (1) Adjacency tensor isomorphism equations and (2) Gramian tensor isomorphism equations. By combining these equations, DATTI could effectively utilize the adjacency and inner correlation isomorphisms of KGs to enhance the decoding process of EA. Extensive experiments on public datasets indicate that our decoding algorithm can deliver significant performance improvements even on the most advanced EA methods, while the extra required time is less than 3 seconds.
16
+
17
+ # 1 Introduction
18
+
19
+ Knowledge graphs (KGs) illustrate the relations between real-world entities—e.g., objects, situations, or concepts—and usually are stored in the form of triples (subject, relation, object). Over recent years, a large number of KGs have been constructed to provide structural knowledge to facilitate downstream applications, such as recommendation systems (Cao et al., 2019) and question-answering systems (Zhao et al., 2020).
20
+
21
+ Most KGs are independently extracted from different languages or domains. Thus, these KGs usually hold unique information individually but also have some shared parts. Integrating these cross-lingual / domain KGs could provide a broader view for users, especially for the minority language users who usually suffer from lacking language resources. As shown in Figure 1, entity alignment (EA) aims
22
+
23
+ ![](images/c121a218d0f2bb84d7d0813988680678a64dd0d91a8fdd72b1d844c5cd83bc87.jpg)
24
+ Figure 1: An example of cross-lingual entity alignment.
25
+
26
+ to find the equivalent entity pairs between KGs, which is a crucial step for integrating KGs.
27
+
28
+ Existing EA methods are built on the same core premise: equivalent entity pairs between KGs have similar neighborhood structures (i.e., isomorphism). Therefore, most existing EA methods (Wang et al., 2018; Sun et al., 2020b; Mao et al., 2020) could be abstracted into the same architecture (as shown in Figure 2): encoding the structural information of KGs into a low-dimensional vector space by Siamese graph encoders and then mapping equivalent entity pairs into the proximate space by alignment loss functions.
29
+
30
+ For a long time, most researchers have regarded EA as a graph representation learning task and focused on improving graph encoders. Starting from the simplest graph encoder TransE (Bordes et al., 2013), the newest graph encoding methods are successively introduced into EA and achieve decent improvements. For example, GCN-align (Wang et al., 2018) first proposed to use graph convolutional networks (GCN) (Kipf and Welling, 2017) to encode KGs. RSN (Guo et al., 2019) introduces recurrent neural networks (RNN) (Graves et al., 2008) and biased random walk to exploit the long-term relational dependencies existing in KGs. Dual-AMN (Mao et al., 2021a) proposes the proxy-matching layer and normalized hard samples mining loss to speed up the training process.
31
+
32
+ In stark contrast to the efforts on graph encoders, few researchers focus on improving EA decoding
33
+
34
+ algorithms (Sun et al., 2020c), which have been proved to significantly improve performance and reliability in other fields, such as dependency parsing (Zmigrod et al., 2020) and machine translation (He et al., 2021). Earlier EA studies (Wang et al., 2018; Sun et al., 2017) simply calculate the similarities of each pair of entities and select the closest one as the alignment result. This naive strategy can result in one entity being aligned to multiple entities simultaneously, which violates the one-to-one constraint of EA${}^{1}$. Thus, some recent studies (Xu et al., 2020; Zhu et al., 2021) propose the global alignment strategy, i.e., regarding the decoding process as a one-to-one assignment problem that could be solved by the Hungarian algorithm (Kuhn, 1955). Overall, these studies just use existing decoding algorithms without further exploration of KGs' characteristics. Similar to graph encoders, we argue that a good EA decoding algorithm should also be capable of exploiting the structural information of KGs.
35
+
36
+ In this paper, we propose an effective and efficient EA Decoding Algorithm via Third-order Tensor Isomorphism (DATTI). Different from recent studies (Fey et al., 2020; Mao et al., 2021b) that regard EA as a matrix (second-order tensor) isomorphism problem, we express the isomorphism of KGs in the form of third-order tensors, which could completely describe the structural information of KGs. Specifically, we derive two sets of tensor isomorphism equations: (1) Adjacency tensor isomorphism equations and (2) Gramian tensor isomorphism equations. By combining these equations, DATTI could effectively utilize the adjacency and inner correlation isomorphisms of KGs to enhance the decoding process of EA, thus significantly improving the performance. Besides, the introduction of third-order tensors will inevitably lead to a quadratic increase in space-time complexity. Therefore, we adopt the randomized truncated singular value decomposition algorithm (RTSVD) (Sarlós, 2006) and Sinkhorn operator (Sinkhorn, 1964) to improve efficiency.
37
+
38
+ To comprehensively evaluate our proposed method, we apply DATTI to three advanced EA methods with different kinds of graph encoders. Experimental results on two widely used public datasets show that DATTI can deliver significant performance improvements (3.9% on Hits@1 and 3.2% on MRR) even on the most advanced EA
39
+
40
+ ![](images/f35042fb703772aea3b5e3b61ed3b61cdbe63232c391130eaeb4a30d486d713c.jpg)
41
+ Figure 2: The architecture of existing EA methods.
42
+
43
+ methods. Furthermore, our decoding algorithm is highly efficient. The decoding time is less than 3 seconds, which is almost negligible compared to the time consumption of the training process. The main contributions are summarized as follows:
44
+
45
+ - We propose an effective and efficient EA Decoding Algorithm via Third-order Tensor Isomorphism (DATTI), which consists of two sets of tensor isomorphism equations: (1) Adjacency tensor isomorphism equations and (2) Gramian tensor isomorphism equations.
46
+ - Extensive experiments on public datasets indicate that our decoding algorithm can deliver significant performance improvements even when applied to the SOTA method, while the extra required time is less than 3 seconds.
47
+
48
+ # 2 Task Definition
49
+
50
+ A KG could be defined as $G = (E, R, T)$ , where $E, R$ , and $T$ represent the entity set, relation set, and triple set, respectively. Given a source graph $G_{s} = (E_{s}, R_{s}, T_{s})$ and a target graph $G_{t} = (E_{t}, R_{t}, T_{t})$ , the goal of EA is to explore the one-to-one entity correspondences $P_{e}$ between KGs.
51
+
52
+ # 3 Related Work
53
+
54
+ # 3.1 Encoders and Enhancement
55
+
56
+ The core premise of EA methods is that equivalent entity pairs between KGs have similar neighborhood structures. As shown in Figure 2, most of them could be summarized into two steps: (1) Using KG embedding methods (e.g., TransE, GCN, and GAT (Velickovic et al., 2018)) to encode entities and relations into low-dimensional embeddings. (2) Mapping these embeddings into a unified vector space through pre-aligned entity pairs and alignment loss functions. To organize existing EA methods clearly, we categorize them based on the encoders and enhancement strategies in Table 1.
57
+
58
+ Encoders and Losses. There are mainly two kinds of encoders: Trans represents TransE (Bordes et al., 2013) and subsequent derivative algorithms. These methods assume that entity and relation embeddings follow the equation $h + r \approx t$. Because of their easy implementation, the Trans encoders are widely used in early EA methods. More recently, Graph Neural Networks (GNN) gradually became the mainstream encoder because of their powerful modeling capability on graph structures. Inspired by language models, RSN proposes a biased random walk sampling strategy and uses RNN to encode the sampled sequences. As for alignment losses, the vast majority of EA methods (Wang et al., 2018; Wu et al., 2019; Mao et al., 2020) adopt contrastive losses, e.g., Triplet loss (Schroff et al., 2015). These loss functions share one core idea: attracting positive entity pairs and repulsing negative entity pairs.
59
+
60
+ Enhancement. Due to the lack of labeled data, several methods (Sun et al., 2018; Mao et al., 2020) adopt iterative strategies to produce semi-supervised aligned entity pairs. Despite significant performance improvements, the time consumption of these methods increases severalfold. Some methods (Xu et al., 2019; Yang et al., 2019) introduce textual information (e.g., entity name embeddings) as the initial features of GNN to provide a multi-aspect view. However, literal information is not always available in real applications. For example, there will be privacy risks when using user-generated content. Therefore, we will separately discuss these textual-based methods in the experiment section.
61
+
62
+ As mentioned in Section 1, some studies (Xu et al., 2020; Wu et al., 2019) regard the decoding process as a one-to-one assignment problem. The assignment problem is a fundamental combinatorial optimization problem. An intuitive instance is to assign $N$ jobs for $N$ workers. The assignment problem is to find a one-to-one assignment plan so that the total profit is maximum. Formally, it is equivalent to maximizing the following equation:
63
+
64
+ $$
65
+ \underset{\boldsymbol{P} \in \mathbb{P}_N}{\operatorname{argmax}} \left\langle \boldsymbol{P}, \boldsymbol{X} \right\rangle_F \tag{1}
66
+ $$
67
+
68
+ $X\in \mathbb{R}^{N\times N}$ is the profit matrix. $P$ is a permutation matrix denoting the assignment plan. There is exactly one entry of 1 in each row and each column of $P$, with 0s elsewhere. $\mathbb{P}_N$ represents the set of all $N$-dimensional permutation matrices. Here, $\langle \cdot \rangle_F$ represents the Frobenius inner product.
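As a toy illustration of Equation (1), the assignment problem can be brute-forced for small $N$. The profit matrix and the helper `best_assignment` below are invented for this sketch; practical decoders use the Hungarian algorithm's polynomial-time search instead of enumeration.

```python
from itertools import permutations

# Toy profit matrix: X[i][j] = profit of assigning source entity i
# to target entity j (values invented for illustration).
X = [[4, 1, 3],
     [2, 0, 5],
     [3, 2, 2]]

def best_assignment(X):
    """Brute-force the one-to-one assignment maximizing total profit.

    Enumerating all N! permutations is only feasible for tiny N;
    the Hungarian algorithm solves the same problem in O(N^3).
    """
    n = len(X)
    best_perm, best_profit = None, float("-inf")
    for perm in permutations(range(n)):  # perm[i] = target assigned to source i
        profit = sum(X[i][perm[i]] for i in range(n))
        if profit > best_profit:
            best_perm, best_profit = perm, profit
    return best_perm, best_profit

perm, profit = best_assignment(X)  # perm encodes the permutation matrix P
```

Here `perm` is the row-to-column mapping of the optimal permutation matrix $P$, and `profit` equals $\langle P, X\rangle_F$.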
69
+
70
+ <table><tr><td>Method</td><td>Encoder</td><td>Enhancement</td></tr><tr><td>JAPE (Sun et al., 2017)</td><td>Trans</td><td>✘</td></tr><tr><td>GCN-Align (Wang et al., 2018)</td><td>GNN</td><td>✘</td></tr><tr><td>OTEA (Pei et al., 2019)</td><td>Trans</td><td>✘</td></tr><tr><td>RSN (Guo et al., 2019)</td><td>RNN</td><td>✘</td></tr><tr><td>BootEA (Sun et al., 2018)</td><td>Trans</td><td>Semi</td></tr><tr><td>TransEdge(Sun et al., 2020a)</td><td>Trans</td><td>Semi</td></tr><tr><td>MRAEA (Mao et al., 2020)</td><td>GNN</td><td>Semi</td></tr><tr><td>Dual-AMN (Mao et al., 2021a)</td><td>GNN</td><td>Semi</td></tr><tr><td>GM-Align (Xu et al., 2019)</td><td>GNN</td><td>Entity Name</td></tr><tr><td>RDGCN (Wu et al., 2019)</td><td>GNN</td><td>Entity Name</td></tr><tr><td>DGMC (Fey et al., 2020)</td><td>GNN</td><td>Entity Name</td></tr><tr><td>AttrGNN (Liu et al., 2020)</td><td>GNN</td><td>Entity Name</td></tr><tr><td>CREA (Xu et al., 2020)</td><td>GNN</td><td>Hungarian</td></tr><tr><td>RAGA (Zhu et al., 2021)</td><td>GNN</td><td>Hungarian</td></tr></table>
71
+
72
+ Table 1: Categorization of some popular EA methods.
73
+
74
+ # 4 The Proposed Method
75
+
76
+ In the following, we describe our proposed decoding algorithm (DATTI), which consists of two sets of tensor isomorphism equations: (1) Adjacency tensor isomorphism equations and (2) Gramian tensor isomorphism equations. Furthermore, we adopt the randomized truncated singular value decomposition (RTSVD) algorithm and the Sinkhorn operator to speed up the decoding process.
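The Sinkhorn operator mentioned above can be sketched as alternating row and column normalization of a positive matrix, which drives a similarity matrix toward a doubly stochastic (relaxed permutation) matrix. The function name, temperature, and iteration count below are illustrative choices, not the paper's settings.

```python
import math

def sinkhorn(X, n_iter=20, tau=0.1):
    """Approximately project a square similarity matrix onto the set of
    doubly stochastic matrices by alternately normalizing the rows and
    columns of exp(X / tau). Smaller tau sharpens toward a permutation."""
    n = len(X)
    S = [[math.exp(v / tau) for v in row] for row in X]
    for _ in range(n_iter):
        # Normalize each row to sum to 1 ...
        S = [[v / sum(row) for v in row] for row in S]
        # ... then each column to sum to 1.
        col_sums = [sum(S[i][j] for i in range(n)) for j in range(n)]
        S = [[S[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return S

# Toy 2x2 similarity matrix: the diagonal entries dominate,
# so the output should approach the identity permutation.
S = sinkhorn([[2.0, 0.1], [0.2, 1.5]])
```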
77
+
78
+ # 4.1 Adjacency Isomorphism
79
+
80
+ Some recent studies (Fey et al., 2020; Mao et al., 2021b) regard EA as a matrix isomorphism problem. These methods assume that the adjacency matrices $\mathbf{A}_s \in \mathbb{R}^{|E_s| \times |E_s|}$ of source graph $G_s$ and $\mathbf{A}_t \in \mathbb{R}^{|E_t| \times |E_t|}$ of target graph $G_t$ are isomorphic, i.e., $\mathbf{A}_s$ could be transformed into $\mathbf{A}_t$ according to the entity correspondence matrix $P_e$ :
81
+
82
+ $$
83
+ \boldsymbol{P}_e \boldsymbol{A}_s \boldsymbol{P}_e^{\top} = \boldsymbol{A}_t \tag{2}
84
+ $$
85
+
86
+ $P_{e_{[i,j]}} = 1$ indicates that $e_i$ and $e_j$ are equivalent. However, matrices (second-order tensors) cannot fully describe the adjacency information of KGs, which is stored in the form of triples. Therefore, we use third-order tensors to express KGs to avoid the information loss incurred by using matrices. Let $\mathbf{A}_s\in \mathbb{R}^{|E_s|\times |R_s|\times |E_s|}$ and $\mathbf{A}_t\in \mathbb{R}^{|E_t|\times |R_t|\times |E_t|}$ be the adjacency tensors of $G_{s}$ and $G_{t}$. $\mathbf{A}_{[h,r,t]} = 1$ indicates that the triple $(h,r,t)$ is in the KG. The matrix isomorphism Equation (2) could be generalized into the third-order form as follows:
87
+
88
+ $$
89
+ \boldsymbol{A}_s \times_1 \boldsymbol{P}_e \times_2 \boldsymbol{P}_r \times_3 \boldsymbol{P}_e = \boldsymbol{A}_t \tag{3}
90
+ $$
91
+
92
+ where $P_r$ represents the one-to-one relation correspondence matrix between $G_s$ and $G_t$ and $\times_k$ represents the $k$ -mode tensor-matrix product.
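On a toy pair of KGs, Equation (3) can be checked directly with `numpy`: the $k$-mode products reduce to a single `einsum`. The entity/relation counts, random seed, and the `random_perm` helper below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_rel = 4, 3

# Random source adjacency tensor: A_s[h, r, t] = 1 iff (h, r, t) is a triple.
A_s = (rng.random((n_ent, n_rel, n_ent)) < 0.3).astype(float)

def random_perm(n):
    """A random n x n permutation matrix."""
    P = np.zeros((n, n))
    P[np.arange(n), rng.permutation(n)] = 1.0
    return P

P_e, P_r = random_perm(n_ent), random_perm(n_rel)

# A_t = A_s x_1 P_e x_2 P_r x_3 P_e: each k-mode tensor-matrix product
# reindexes one axis, so the target tensor is a pure reordering of A_s.
A_t = np.einsum('ih,jr,kt,hrt->ijk', P_e, P_r, P_e, A_s)

# The isomorphic pair contains exactly the same triples, just relabeled.
assert A_s.sum() == A_t.sum()
```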
93
+
94
+ ![](images/39412a4cd50dae4faa3e3d88f4b09339a9e855089d44690f539da8296aa102af.jpg)
95
+ Figure 3: The illustration of tensor-matrix product and isomorphic adjacency tensors.
96
+
97
+ As illustrated in Figure 3, Equation (3) can be interpreted as successively reordering the tensor along three axes. Since the number of triples $|T|$ is usually much less than $|E| \times |R| \times |E|$, $\mathbf{A}_s$ and $\mathbf{A}_t$ are extremely sparse. Unfortunately, existing tensor computing frameworks (e.g., Numpy (Harris et al., 2020) and Tensorflow (Abadi et al., 2015)) provide only a few limited operators for third-order sparse tensors. Therefore, we have to re-transform Equation (3) into the matrix form:
98
+
99
+ $$
100
+ \boldsymbol{\mathcal{A}}_s \times_1 \boldsymbol{P}_e \times_2 \boldsymbol{P}_r \times_3 \boldsymbol{P}_e = \boldsymbol{\mathcal{A}}_t
101
+ $$
102
+
103
+ $$
104
+ \boldsymbol {P} _ {e} \boldsymbol {\mathcal {A}} _ {s} ^ {(1)} \left(\boldsymbol {P} _ {e} \otimes \boldsymbol {P} _ {r}\right) ^ {\top} = \boldsymbol {\mathcal {A}} _ {t} ^ {(1)}
105
+ $$
106
+
107
+ $$
108
+ \Longleftrightarrow \quad \boldsymbol {P} _ {r} \boldsymbol {\mathcal {A}} _ {s} ^ {(2)} \left(\boldsymbol {P} _ {e} \otimes \boldsymbol {P} _ {e}\right) ^ {\top} = \boldsymbol {\mathcal {A}} _ {t} ^ {(2)} \tag {4}
109
+ $$
110
+
111
+ $$
112
+ \boldsymbol {P} _ {e} \boldsymbol {\mathcal {A}} _ {s} ^ {(3)} \left(\boldsymbol {P} _ {r} \otimes \boldsymbol {P} _ {e}\right) ^ {\top} = \boldsymbol {\mathcal {A}} _ {t} ^ {(3)}
113
+ $$
114
+
115
+ where $\otimes$ represents the Kronecker product, $P_{e}\otimes P_{r}\in \mathbb{P}^{(|E|\cdot |R|)\times (|E|\cdot |R|)}$ . $\pmb{A}^{(k)}$ represents the mode- $k$ unfolding matrix of the tensor $\pmb{A}$ , e.g., $\pmb{A}^{(1)} = [\pmb{A}_{[:,:,0]}\| \pmb{A}_{[:,:,1]}\| \dots \| \pmb{A}_{[:,:,|E|]}]\in \mathbb{R}^{|E|\times (|E|\cdot |R|)}$ , where $\parallel$ is the concatenation operation. When $\pmb{A}_s$ and $\pmb{A}_t$ are second-order adjacency tensors, the above equations degrade to Equation (2):
116
+
117
+ $$
118
+ \boldsymbol {A} _ {s} \times_ {1} \boldsymbol {P} _ {e} \times_ {2} \boldsymbol {P} _ {e} = \boldsymbol {A} _ {t}
119
+ $$
120
+
121
+ $$
122
+ \Longleftrightarrow \quad P _ {e} \mathcal {A} _ {s} ^ {(1)} P _ {e} ^ {\top} = \mathcal {A} _ {t} ^ {(1)} \tag {5}
123
+ $$
124
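The $k$-mode products and unfoldings above can be checked numerically. The following NumPy sketch (toy sizes, randomly generated data, not the paper's implementation) builds an isomorphic pair of adjacency tensors and verifies the mode-1 matrix form; note that the Kronecker factor ordering depends on the chosen unfolding convention, and with NumPy's row-major reshape it comes out as $P_r \otimes P_e$ rather than $P_e \otimes P_r$.

```python
import numpy as np

rng = np.random.default_rng(0)
E, R = 4, 3  # toy entity/relation counts (illustrative only)

# Sparse 0/1 adjacency tensor of the source KG: A_s[h, r, t] = 1 iff (h, r, t) is a triple.
A_s = (rng.random((E, R, E)) < 0.3).astype(float)

# Random one-to-one entity and relation correspondence (permutation) matrices.
P_e = np.eye(E)[rng.permutation(E)]
P_r = np.eye(R)[rng.permutation(R)]

# Equation (3): A_t = A_s x_1 P_e x_2 P_r x_3 P_e (k-mode tensor-matrix products).
A_t = np.einsum('ia,jb,kc,abc->ijk', P_e, P_r, P_e, A_s)

def unfold(T, mode):
    """Mode-k unfolding: move axis `mode` to the front and flatten the rest (row-major)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Mode-1 matrix form of the isomorphism; with this row-major unfolding
# the Kronecker factor is P_r (x) P_e.
lhs = P_e @ unfold(A_s, 0) @ np.kron(P_r, P_e).T
assert np.allclose(lhs, unfold(A_t, 0))
```

The mode-2 identity holds analogously with the factor `np.kron(P_e, P_e)`.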
+
125
+ # 4.2 Gramian Isomorphism
126
+
127
+ The Gramian matrix $G(\mathbf{A}) = \mathbf{A}\mathbf{A}^{\top}$ reflects the inner correlations between the row vectors of matrix $\mathbf{A}$ . If we regard the rows of $\mathbf{A}$ as random variables, $G(\mathbf{A})$ is equivalent to the uncentered covariance matrix. When $\mathbf{A}_s$ and $\mathbf{A}_t$ are isomorphic, their Gramian matrices $\mathbf{A}_s\mathbf{A}_s^\top$ and $\mathbf{A}_t\mathbf{A}_t^\top$ are isomorphic too:
128
+
129
+ $$
130
+ \boldsymbol {A} _ {t} \boldsymbol {A} _ {t} ^ {\top} = \left(\boldsymbol {P} _ {e} \boldsymbol {A} _ {s} \boldsymbol {P} _ {e} ^ {\top}\right) \left(\boldsymbol {P} _ {e} \boldsymbol {A} _ {s} \boldsymbol {P} _ {e} ^ {\top}\right) ^ {\top} = \boldsymbol {P} _ {e} \boldsymbol {A} _ {s} \boldsymbol {A} _ {s} ^ {\top} \boldsymbol {P} _ {e} ^ {\top} \tag {6}
131
+ $$
132
+
133
+ Similar to adjacency matrices, the Gramian matrix isomorphism equation could also be generalized into the third-order form:
134
+
135
+ $$
136
+ \boldsymbol {P} _ {e} G \left(\boldsymbol {A} _ {s} ^ {(1)}\right) \boldsymbol {P} _ {e} ^ {\top} = G \left(\boldsymbol {A} _ {t} ^ {(1)}\right)
137
+ $$
138
+
139
+ $$
140
+ \boldsymbol {P} _ {r} G \left(\boldsymbol {A} _ {\boldsymbol {s}} ^ {(2)}\right) \boldsymbol {P} _ {r} ^ {\top} = G \left(\boldsymbol {A} _ {\boldsymbol {t}} ^ {(2)}\right) \tag {7}
141
+ $$
142
+
143
+ $$
144
+ \boldsymbol {P} _ {e} G \left(\boldsymbol {A} _ {s} ^ {(3)}\right) \boldsymbol {P} _ {e} ^ {\top} = G \left(\boldsymbol {A} _ {t} ^ {(3)}\right)
145
+ $$
146
+
147
+ Furthermore, it is easy to prove that the following equations hold for arbitrary depth $l \in \mathbb{N}$ :
148
+
149
+ $$
150
+ \boldsymbol {P} _ {e} G (\boldsymbol {\mathcal {A}} _ {s} ^ {(1)}) ^ {l} \boldsymbol {P} _ {e} ^ {\top} = G (\boldsymbol {\mathcal {A}} _ {t} ^ {(1)}) ^ {l}
151
+ $$
152
+
153
+ $$
154
+ \boldsymbol {P} _ {r} G \left(\boldsymbol {A} _ {s} ^ {(2)}\right) ^ {l} \boldsymbol {P} _ {r} ^ {\top} = G \left(\boldsymbol {A} _ {t} ^ {(2)}\right) ^ {l} \tag {8}
155
+ $$
156
+
157
+ $$
158
+ \boldsymbol {P} _ {e} G \left(\boldsymbol {A} _ {s} ^ {(3)}\right) ^ {l} \boldsymbol {P} _ {e} ^ {\top} = G \left(\boldsymbol {A} _ {t} ^ {(3)}\right) ^ {l}
159
+ $$
160
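As a quick sanity check of Equations (6) and (8) in the second-order case, the NumPy sketch below (toy random graph and permutation, assumptions of this illustration) verifies that the Gramian isomorphism survives arbitrary powers $l$, since the orthogonality of permutation matrices cancels $P_e^{\top}P_e$ between adjacent Gramian factors:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A_s = (rng.random((n, n)) < 0.4).astype(float)  # toy adjacency matrix
P = np.eye(n)[rng.permutation(n)]               # permutation matrix
A_t = P @ A_s @ P.T                              # isomorphic target graph

G = lambda A: A @ A.T  # Gramian

# Equation (6): Gramians of isomorphic graphs are isomorphic.
assert np.allclose(G(A_t), P @ G(A_s) @ P.T)

# Equation (8): the identity survives arbitrary powers l,
# because P.T @ P = I cancels between adjacent Gramian factors.
for l in range(1, 4):
    assert np.allclose(np.linalg.matrix_power(G(A_t), l),
                       P @ np.linalg.matrix_power(G(A_s), l) @ P.T)
```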
+
161
+ # 4.3 Decoding via Isomorphism
162
+
163
+ Although we have derived two sets of isomorphism equations, neither of them can be solved directly. These equations are equivalent to the quadratic or cubic assignment problem (Yan et al., 2016), which has been proven to be NP-hard (Lawler, 1963). Fortunately, these isomorphism equations can still be used to enhance the decoding process.
164
+
165
+ Let $\pmb{H}_s^e \in \mathbb{R}^{|E_s| \times d^e}$ and $\pmb{H}_s^r \in \mathbb{R}^{|R_s| \times d^r}$ represent the entity and relation embeddings of $G_s$ . $\pmb{H}_t^e \in \mathbb{R}^{|E_t| \times d^e}$ and $\pmb{H}_t^r \in \mathbb{R}^{|R_t| \times d^r}$ represent the embeddings of $G_t$ . Assume that these embeddings have been approximately aligned by EA methods:
166
+
167
+ $$
168
+ \boldsymbol {P} _ {e} \boldsymbol {H} _ {s} ^ {e} \approx \boldsymbol {H} _ {t} ^ {e} \tag {9}
169
+ $$
170
+
171
+ $$
172
+ \boldsymbol {P} _ {r} \boldsymbol {H} _ {s} ^ {r} \approx \boldsymbol {H} _ {t} ^ {r}
173
+ $$
174
+
175
+ As mentioned in Section 1, some recent studies (Xu et al., 2020; Sun et al., 2020c) regard the decoding process of $P_{e}$ as an assignment problem:
176
+
177
+ $$
178
+ \underset {\boldsymbol {P} _ {e} \in \mathbb {P} _ {| E |}} {\arg \min } \| \boldsymbol {P} _ {e} \boldsymbol {H} _ {s} ^ {e} - \boldsymbol {H} _ {t} ^ {e} \| _ {F} ^ {2} \tag {10}
179
+ $$
180
+
181
+ $$
182
+ \Longleftrightarrow \quad \underset {\boldsymbol {P} _ {e} \in \mathbb {P} _ {| E |}} {\arg \max } \left\langle \boldsymbol {P} _ {e}, \boldsymbol {H} _ {s} ^ {e} \boldsymbol {H} _ {t} ^ {e ^ {\top}} \right\rangle_ {F} \tag {10}
183
+ $$
184
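This assignment view can be sketched with SciPy's `linear_sum_assignment` (a Jonker-Volgenant-style exact solver). The embeddings below are synthetic stand-ins for encoder outputs, with sizes chosen purely for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
n, d = 6, 64  # toy number of entities / embedding dimension

H_s = rng.normal(size=(n, d))        # "source" entity embeddings
perm = rng.permutation(n)
P_true = np.eye(n)[perm]
# Roughly aligned "target" embeddings: a permuted copy plus small noise.
H_t = P_true @ H_s + 0.01 * rng.normal(size=(n, d))

# Equation (10): argmax_P <P, H_s H_t^T>_F, solved exactly as an assignment problem.
profit = H_s @ H_t.T
row, col = linear_sum_assignment(profit, maximize=True)
P_hat = np.zeros((n, n))
P_hat[col, row] = 1.0  # P_hat[i, j] = 1 means source entity j aligns to target entity i

assert np.array_equal(P_hat, P_true)
```

For unbalanced scales ($|E_s| \neq |E_t|$), the profit matrix can be zero-padded to a square shape before solving, as discussed below.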
+
185
+ Since this simple decoding strategy does not utilize the structural information of KGs, we propose to introduce the adjacency and Gramian isomorphism equations into the decoding process. By combining Equations (4), (8), and (9), the connection among the 8-tuple $\{\mathcal{A}_s,\mathcal{A}_t,H_s^e,H_t^e,H_s^r,H_t^r,P_e,P_r\}$ can be described as follows, for arbitrary depth $l \in \mathbb{N}$:
188
+
189
+ $$
190
+ \boldsymbol {P} _ {e} G \left(\boldsymbol {\mathcal {A}} _ {s} ^ {(1)}\right) ^ {l} \boldsymbol {\mathcal {A}} _ {s} ^ {(1)} \left(\boldsymbol {H} _ {s} ^ {e} \otimes \boldsymbol {H} _ {s} ^ {r}\right) \approx G \left(\boldsymbol {\mathcal {A}} _ {t} ^ {(1)}\right) ^ {l} \boldsymbol {\mathcal {A}} _ {t} ^ {(1)} \left(\boldsymbol {H} _ {t} ^ {e} \otimes \boldsymbol {H} _ {t} ^ {r}\right) \tag {11}
191
+ $$
192
+
193
+ $$
194
+ \boldsymbol {P} _ {r} G \left(\boldsymbol {\mathcal {A}} _ {s} ^ {(2)}\right) ^ {l} \boldsymbol {\mathcal {A}} _ {s} ^ {(2)} \left(\boldsymbol {H} _ {s} ^ {e} \otimes \boldsymbol {H} _ {s} ^ {e}\right) \approx G \left(\boldsymbol {\mathcal {A}} _ {t} ^ {(2)}\right) ^ {l} \boldsymbol {\mathcal {A}} _ {t} ^ {(2)} \left(\boldsymbol {H} _ {t} ^ {e} \otimes \boldsymbol {H} _ {t} ^ {e}\right) \tag {12}
195
+ $$
196
+
197
+ $$
198
+ \boldsymbol {P} _ {e} G \left(\boldsymbol {\mathcal {A}} _ {s} ^ {(3)}\right) ^ {l} \boldsymbol {\mathcal {A}} _ {s} ^ {(3)} \left(\boldsymbol {H} _ {s} ^ {r} \otimes \boldsymbol {H} _ {s} ^ {e}\right) \approx G \left(\boldsymbol {\mathcal {A}} _ {t} ^ {(3)}\right) ^ {l} \boldsymbol {\mathcal {A}} _ {t} ^ {(3)} \left(\boldsymbol {H} _ {t} ^ {r} \otimes \boldsymbol {H} _ {t} ^ {e}\right) \tag {13}
199
+ $$
200
+
201
+ Detailed proof is listed in Appendix A. Although they look complex, the above equations essentially have the same form as Equation (9). Take Equation (11) as an example: let $\hat{H}_s^l = G(\mathcal{A}_s^{(1)})^l\mathcal{A}_s^{(1)}(H_s^e\otimes H_s^r)$ and $\hat{H}_t^l = G(\mathcal{A}_t^{(1)})^l\mathcal{A}_t^{(1)}(H_t^e\otimes H_t^r)$ , then Equation (11) can be simplified as follows:
202
+
203
+ $$
204
+ \boldsymbol {P} _ {e} \hat {\boldsymbol {H}} _ {s} ^ {l} \approx \hat {\boldsymbol {H}} _ {t} ^ {l} \tag {14}
205
+ $$
206
+
207
+ Therefore, $P_{e}$ could also be solved by maximizing $\underset {P_e\in \mathbb{P}_{|E|}}{\arg \max}\left\langle P_e,\hat{H}_s^l\hat{H}_t^{l\top}\right\rangle_F$ . Theoretically, the resulting $P_{e}$ should be the same for arbitrary depth $l\in \mathbb{N}$ . However, the above equations assume the ideal isomorphic situation. In practice, $\mathcal{A}_s$ and $\mathcal{A}_t$ cannot always be strictly isomorphic. To reduce the impact of the noise existing in practice, $P_{e}$ should fit various depths $l$ simultaneously:
208
+
209
+ $$
210
+ \begin{array}{l} \underset {\boldsymbol {P} _ {e} \in \mathbb {P} _ {| E |}} {\arg \max } \sum_ {l = 0} ^ {L} \left\langle \boldsymbol {P} _ {e}, \hat {\boldsymbol {H}} _ {s} ^ {l} \hat {\boldsymbol {H}} _ {t} ^ {l \top} \right\rangle_ {F} \tag {15} \\ \Longleftrightarrow \quad \underset {\boldsymbol {P} _ {e} \in \mathbb {P} _ {| E |}} {\arg \max } \left\langle \boldsymbol {P} _ {e}, \sum_ {l = 0} ^ {L} \hat {\boldsymbol {H}} _ {s} ^ {l} \hat {\boldsymbol {H}} _ {t} ^ {l \top} \right\rangle_ {F} \end{array}
211
+ $$
212
+
213
+ By Equation (15), we successfully integrate the adjacency and Gramian isomorphism equations into the decoding process of EA. Similarly, solving Equation (12) yields the relation alignment result $P_r$ . Because Equation (13) is equivalent to Equation (11), solving either of them suffices to obtain the entity alignment result $P_e$ . Note that the entity scales $|E_s|$ and $|E_t|$ are usually inconsistent in practice, which leads to an unbalanced assignment problem. Assuming $|E_s| > |E_t|$ , a naive solution is to pad the profit matrix with zeros so that its shape becomes $\mathbb{R}^{|E_s| \times |E_s|}$ .
214
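Under ideal isomorphism and exact embedding alignment, Equation (14) can be verified directly. The sketch below uses toy sizes and a row-major mode-1 unfolding, so the Kronecker factor ordering differs superficially from Equation (11); it is an illustration under those assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
E, R, de, dr = 5, 3, 4, 2  # toy entity/relation counts and embedding dims

A_s = (rng.random((E, R, E)) < 0.3).astype(float)
P_e = np.eye(E)[rng.permutation(E)]
P_r = np.eye(R)[rng.permutation(R)]
A_t = np.einsum('ia,jb,kc,abc->ijk', P_e, P_r, P_e, A_s)  # isomorphic target KG

# Exactly aligned embeddings: the ideal case of Equation (9).
H_s_e, H_s_r = rng.normal(size=(E, de)), rng.normal(size=(R, dr))
H_t_e, H_t_r = P_e @ H_s_e, P_r @ H_s_r

def H_hat(A, H_e, H_r, l):
    """G(A^(1))^l A^(1) (H_r kron H_e), cf. Equation (11); Kron order follows
    the row-major mode-1 unfolding used here."""
    A1 = A.reshape(A.shape[0], -1)          # mode-1 unfolding
    M = A1 @ np.kron(H_r, H_e)              # propagate joint entity-relation features
    G = A1 @ A1.T                            # Gramian of the unfolding
    return np.linalg.matrix_power(G, l) @ M

# Equation (14): P_e H_hat_s^l == H_hat_t^l at every depth l in the ideal case.
for l in range(3):
    assert np.allclose(P_e @ H_hat(A_s, H_s_e, H_s_r, l), H_hat(A_t, H_t_e, H_t_r, l))
```

In the non-ideal case these equalities only hold approximately, which is exactly why Equation (15) sums the profit matrices over several depths.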
+
215
+ # 4.4 Reducing the Complexity
216
+
217
+ Randomized truncated SVD. The introduction of third-order tensors enables DATTI to fully describe the structural information of KGs. However, there is no such thing as a free lunch. The space-time complexity also increases quadratically. The main bottleneck is to compute $\hat{\pmb{H}}_s^l\in \mathbb{R}^{|E_s|\times (d^e\cdot d^r)}$ and
218
+
219
+ ![](images/89a3cc007f857374a67ba8369f87df981fcffc0fc7080f3b72f243f1df7f2e16.jpg)
220
+ Figure 4: The singular value distribution of $\hat{H}_s^l$ obtained by TransEdge on DBP15K. The abscissa represents the top $k\%$ singular values, and the ordinate represents the proportion of these singular values in total.
221
+
222
+ $\hat{H}_t^l \in \mathbb{R}^{|E_t| \times (d^e \cdot d^r)}$ . Even with the sparse optimization trick, the complexity is still up to $O(ld^rd^e |T|)$ , which is much worse than the $O(l(d^{e} + d^{r})|T|)$ of most GNN encoders (Mao et al., 2020).
223
+
224
+ In Figure 4, we list the singular value distribution of $\hat{H}_s^l$ obtained by TransEdge (Sun et al., 2020a) on DBP15K. Interestingly, the distribution is highly concentrated in the top $20\%$ , which means the information contained in $\hat{H}_s^l$ is sparse and compressible. By dropping the smaller singular values of $\hat{H}_s^l$ and $\hat{H}_t^l$ , the space-time complexity can be significantly reduced. This paper adopts randomized truncated SVD (Sarlós, 2006) to decompose matrices approximately and only retains the top $\phi \%$ of the singular values of $\hat{H}_s^l$ and $\hat{H}_t^l$ .
225
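Randomized truncated SVD can be sketched in a few lines of NumPy. The version below uses a common random range-sketch scheme; it is an illustration of the technique, not the decomposition routine used in the paper.

```python
import numpy as np

def randomized_truncated_svd(A, rank, n_oversamples=5, seed=None):
    """Approximate rank-`rank` SVD of A via a Gaussian range sketch."""
    rng = np.random.default_rng(seed)
    # 1. Sketch the column space: Y = A @ Omega with a thin random Gaussian Omega.
    Y = A @ rng.normal(size=(A.shape[1], rank + n_oversamples))
    Q, _ = np.linalg.qr(Y)                     # orthonormal basis of the sketch
    # 2. Project onto the small subspace and run an exact SVD there.
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

# A matrix whose spectrum is concentrated in a few leading components,
# mimicking the singular-value distribution of H_hat in Figure 4.
rng = np.random.default_rng(4)
A = rng.normal(size=(200, 20)) @ rng.normal(size=(20, 100))  # exact rank <= 20
U, s, Vt = randomized_truncated_svd(A, rank=20)
assert np.allclose(U * s @ Vt, A)  # the truncation captures (nearly) everything
```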
+
226
+ Sinkhorn operator. The oldest and best-known algorithm for the assignment problem is the Hungarian algorithm (Kuhn, 1955), which improves a matching along augmenting paths. The time complexity of the original Hungarian algorithm is $O(n^4)$ . Jonker and Volgenant (1987) later improved the algorithm to achieve an $O(n^3)$ running time.
227
+
228
+ Besides the Hungarian algorithm, the assignment problem can also be regarded as a special case of the optimal transport (OT) problem. Based on the Sinkhorn operator (Sinkhorn, 1964), Cuturi (2013) proposes a fast and completely parallelizable algorithm for the OT problem:
229
+
230
+ $$
231
+ S ^ {0} (\boldsymbol {X}) = \exp (\boldsymbol {X}),
232
+ $$
233
+
234
+ $$
235
+ S ^ {k} (\boldsymbol {X}) = \mathcal {N} _ {c} \left(\mathcal {N} _ {r} \left(S ^ {k - 1} (\boldsymbol {X})\right)\right), \tag {16}
236
+ $$
237
+
238
+ $$
239
+ \operatorname {Sinkhorn} (\boldsymbol {X}) = \lim _ {k \rightarrow \infty} S ^ {k} (\boldsymbol {X}).
240
+ $$
241
+
242
+ where $\mathcal{N}_r(\pmb {X}) = \pmb {X}\oslash (\pmb {X}\mathbf{1}_N\mathbf{1}_N^{\top})$ and $\mathcal{N}_c(\pmb {X}) = \pmb {X}\oslash (\mathbf{1}_N\mathbf{1}_N^{\top}\pmb {X})$ are the row-wise and column-wise normalization operators of a matrix, $\oslash$ represents element-wise division, and $\mathbf{1}_N$ is a column vector of ones.
243
+
244
+ <table><tr><td colspan="2">Datasets</td><td>|E|</td><td>|R|</td><td>|T|</td></tr><tr><td rowspan="2">DBPZH-EN</td><td>Chinese</td><td>19,388</td><td>1,701</td><td>70,414</td></tr><tr><td>English</td><td>19,572</td><td>1,323</td><td>95,142</td></tr><tr><td rowspan="2">DBPJA-EN</td><td>Japanese</td><td>19,814</td><td>1,299</td><td>77,214</td></tr><tr><td>English</td><td>19,780</td><td>1,153</td><td>93,484</td></tr><tr><td rowspan="2">DBPFR-EN</td><td>French</td><td>19,661</td><td>903</td><td>105,998</td></tr><tr><td>English</td><td>19,993</td><td>1,208</td><td>115,722</td></tr><tr><td rowspan="2">SRPRSFR-EN</td><td>French</td><td>15,000</td><td>177</td><td>33,532</td></tr><tr><td>English</td><td>15,000</td><td>221</td><td>36,508</td></tr><tr><td rowspan="2">SRPRSDE-EN</td><td>German</td><td>15,000</td><td>120</td><td>37,377</td></tr><tr><td>English</td><td>15,000</td><td>222</td><td>38,363</td></tr></table>
245
+
246
+ Table 2: Statistical data of DBP15K and SRPRS.
247
+
248
+ Then, Mena et al. (2018) further prove that the Sinkhorn operator can also solve the assignment problem as a special case of the OT problem:
249
+
250
+ $$
251
+ \begin{array}{l} \underset {\boldsymbol {P} \in \mathbb {P} _ {N}} {\operatorname {a r g m a x}} \left\langle \boldsymbol {P}, \boldsymbol {X} \right\rangle_ {F} \\ = \lim _ {\tau \rightarrow 0 ^ {+}} \operatorname {S i n k h o r n} (\boldsymbol {X} / \tau) \tag {17} \\ \end{array}
252
+ $$
253
+
254
+ The time complexity of the Sinkhorn operator is $O(kn^2)$ . According to our experimental results, a small $k$ is enough to achieve decent performance. Compared with the Hungarian algorithm, the Sinkhorn operation is much more efficient. Therefore, this paper adopts the Sinkhorn operator to solve Equation (15).
255
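A minimal NumPy implementation of Equations (16) and (17), cross-checked against SciPy's exact assignment solver. The hyper-parameters and the toy profit matrix (a dominant permutation plus noise) are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def sinkhorn(X, tau=0.1, k=20):
    """Equation (16): iterate row/column normalisation of exp(X / tau)."""
    S = np.exp(X / tau)
    for _ in range(k):
        S = S / S.sum(axis=1, keepdims=True)  # N_r: row-wise normalisation
        S = S / S.sum(axis=0, keepdims=True)  # N_c: column-wise normalisation
    return S

rng = np.random.default_rng(5)
n = 8
P_true = np.eye(n)[rng.permutation(n)]
# Profit matrix with one clearly dominant permutation plus noise.
profit = 5.0 * P_true + 0.5 * rng.random((n, n))

# Equation (17): a small tau pushes Sinkhorn(X / tau) toward the argmax permutation.
P_soft = sinkhorn(profit, tau=0.1, k=20)
assert np.array_equal(np.round(P_soft), P_true)

# Cross-check against the exact O(n^3) Hungarian / Jonker-Volgenant solver.
row, col = linear_sum_assignment(profit, maximize=True)
P_exact = np.zeros((n, n))
P_exact[row, col] = 1.0
assert np.array_equal(P_exact, P_true)
```

Each Sinkhorn iteration is a pair of dense normalizations, hence the $O(kn^2)$ cost and easy parallelization noted below.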
+
256
+ # 5 Experiments
257
+
258
+ Our experiments are conducted on a PC with a GeForce RTX 3090 GPU and a Ryzen Threadripper 3970X CPU. The code and datasets are available on GitHub<sup>2</sup>.
259
+
260
+ # 5.1 Datasets
261
+
262
+ To comprehensively evaluate the proposed decoding algorithm, we experiment with two widely used public datasets: (1) DBP15K (Sun et al., 2017) consists of three cross-lingual subsets from multilingual DBpedia. Each subset contains 15,000 entity pairs. (2) SRPRS (Guo et al., 2019). Each subset also contains 15,000 entity pairs but with much fewer triples compared to DBP15K. The statistics of these datasets are summarized in Table 2. To be consistent with previous studies (Wang et al., 2018; Sun et al., 2018), we randomly split $30\%$ of the pre-aligned entity pairs for training and development while using the remaining $70\%$ for testing. All the results are the average of five independent runs.
263
+
264
+ # 5.2 Baselines
265
+
266
+ To ensure generality, we evaluate DATTI on three advanced EA methods with different types of graph encoders: Dual-AMN (Mao et al., 2021a) is the SOTA of GNN-based methods; TransEdge (Sun et al., 2020a) is the SOTA of Trans-based methods; RSN (Guo et al., 2019) is the only EA method using an RNN as the encoder. Furthermore, we choose the Hungarian algorithm (Hun.) as the decoding baseline, which has proven effective in recent EA methods (Xu et al., 2020; Zhu et al., 2021).
267
+
268
+ # 5.3 Settings
269
+
270
+ Metrics. Following convention, we use Hits@k and Mean Reciprocal Rank (MRR) as the evaluation metrics. The Hits@k score is calculated by measuring the proportion of correct pairs in the top-k. In particular, Hits@1 equals accuracy.
271
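These metrics can be computed from a similarity matrix in a few lines. The sketch below assumes the $i$-th source entity's ground-truth match is target $i$ (a toy indexing convention, not the official evaluation code), with ties broken by sort order:

```python
import numpy as np

def hits_at_k_and_mrr(sim, k=10):
    """Hits@k and MRR for an (n, n) source-target similarity matrix,
    assuming source entity i truly matches target entity i."""
    n = sim.shape[0]
    order = np.argsort(-sim, axis=1)                      # candidates, best first
    # Rank (1 = best) of the correct target in each row's candidate list.
    ranks = np.where(order == np.arange(n)[:, None])[1] + 1
    return np.mean(ranks <= k), np.mean(1.0 / ranks)

sim = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.8, 0.1],
                [0.7, 0.2, 0.3]])   # entity 2 ranks its true match only 2nd
hits1, mrr = hits_at_k_and_mrr(sim, k=1)  # Hits@1 equals accuracy here
```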
+
272
+ Hyper-parameters. For TransEdge, we retain the top $\phi = 20\%$ of the singular values of $\hat{H}_s^l$ and $\hat{H}_t^l$ . Since the output dimensions of Dual-AMN $(d^{e} = 768, d^{r} = 128)$ and RSN $(d^{e} = d^{r} = 256)$ are much larger than those of TransEdge $(d^{e} = d^{r} = 75)$ , we set their retaining ratio to only $\phi = 2\%$ . The other hyper-parameters are kept the same for all datasets and methods: iterations $k = 15$ ; temperature $\tau = 0.02$ ; max depth $L = 3$ .
273
+
274
+ # 5.4 Main Experiments
275
+
276
+ We list the main experimental results in Table 3. Among these three EA methods, Dual-AMN beats the other baselines by more than $5.5\%$ on Hits@1 and $4.2\%$ on MRR, which indicates the advantages of GNN encoders. On RSN and TransEdge, the Hungarian algorithm shows decent performance improvements on Hits@1 of at least $3.2\%$ . In contrast, the Hungarian algorithm does not positively affect Dual-AMN, probably because the bi-directional nearest-neighbor iterative strategy of Dual-AMN already incorporates the core idea of the Hungarian algorithm.
277
+
278
+ Our proposed DATTI consistently achieves the best performances on all datasets and baselines. On DBP15K, DATTI delivers performance gains of at least $2.8\%$ on $\text{Hits} @ 1$ and $3.2\%$ on MRR. Especially for the SOTA method Dual-AMN, DATTI further raises the performance ceiling of EA by more than $3.9\%$ on $\text{Hits} @ 1$ . On SRPRS, DATTI significantly improves the performances of RSN and TransEdge. For Dual-AMN, however, the improvements are much smaller. One possible explanation is that SRPRS removes too many triples, resulting in a lower performance ceiling.
279
+
280
+ <table><tr><td rowspan="2">Method</td><td colspan="3">DBPZH-EN</td><td colspan="3">DBPJA-EN</td><td colspan="3">DBPFR-EN</td><td colspan="3">SRPRSFR-EN</td><td colspan="3">SRPRSDE-EN</td></tr><tr><td>H@1</td><td>H@10</td><td>MRR</td><td>H@1</td><td>H@10</td><td>MRR</td><td>H@1</td><td>H@10</td><td>MRR</td><td>H@1</td><td>H@10</td><td>MRR</td><td>H@1</td><td>H@10</td><td>MRR</td></tr><tr><td>RSN</td><td>0.607</td><td>0.829</td><td>0.685</td><td>0.591</td><td>0.815</td><td>0.670</td><td>0.632</td><td>0.864</td><td>0.713</td><td>0.351</td><td>0.638</td><td>0.447</td><td>0.511</td><td>0.744</td><td>0.590</td></tr><tr><td>+ Hun.</td><td>0.661</td><td>-</td><td>-</td><td>0.633</td><td>-</td><td>-</td><td>0.693</td><td>-</td><td>-</td><td>0.374</td><td>-</td><td>-</td><td>0.538</td><td>-</td><td>-</td></tr><tr><td>+ DATTI</td><td>0.721</td><td>0.903</td><td>0.785</td><td>0.686</td><td>0.895</td><td>0.759</td><td>0.720</td><td>0.918</td><td>0.790</td><td>0.407</td><td>0.694</td><td>0.502</td><td>0.559</td><td>0.782</td><td>0.637</td></tr><tr><td>(Imp.%)</td><td>9.1%</td><td>8.9%</td><td>14.6%</td><td>8.4%</td><td>9.8%</td><td>13.3%</td><td>3.9%</td><td>6.3%</td><td>10.8%</td><td>8.8%</td><td>8.8%</td><td>12.3%</td><td>3.9%</td><td>5.1%</td><td>8.0%</td></tr><tr><td>TransEdge</td><td>0.762</td><td>0.921</td><td>0.818</td><td>0.746</td><td>0.929</td><td>0.811</td><td>0.769</td><td>0.940</td><td>0.830</td><td>0.403</td><td>0.675</td><td>0.492</td><td>0.556</td><td>0.753</td><td>0.633</td></tr><tr><td>+ Hun.</td><td>0.787</td><td>-</td><td>-</td><td>0.771</td><td>-</td><td>-</td><td>0.796</td><td>-</td><td>-</td><td>0.427</td><td>-</td><td>-</td><td>0.574</td><td>-</td><td>-</td></tr><tr><td>+ 
DATTI</td><td>0.814</td><td>0.947</td><td>0.863</td><td>0.804</td><td>0.957</td><td>0.861</td><td>0.818</td><td>0.965</td><td>0.873</td><td>0.441</td><td>0.707</td><td>0.521</td><td>0.593</td><td>0.782</td><td>0.673</td></tr><tr><td>(Imp.%)</td><td>3.4%</td><td>2.8%</td><td>5.5%</td><td>4.3%</td><td>3.0%</td><td>6.2%</td><td>2.8%</td><td>2.7%</td><td>5.2%</td><td>3.3%</td><td>4.7%</td><td>5.9%</td><td>3.5%</td><td>3.8%</td><td>6.3%</td></tr><tr><td>Dual-AMN</td><td>0.804</td><td>0.937</td><td>0.853</td><td>0.803</td><td>0.947</td><td>0.856</td><td>0.834</td><td>0.962</td><td>0.881</td><td>0.483</td><td>0.755</td><td>0.573</td><td>0.612</td><td>0.819</td><td>0.683</td></tr><tr><td>+ Hun.</td><td>0.801</td><td>-</td><td>-</td><td>0.803</td><td>-</td><td>-</td><td>0.839</td><td>-</td><td>-</td><td>0.483</td><td>-</td><td>-</td><td>0.611</td><td>-</td><td>-</td></tr><tr><td>+ DATTI</td><td>0.835</td><td>0.953</td><td>0.880</td><td>0.836</td><td>0.969</td><td>0.884</td><td>0.873</td><td>0.979</td><td>0.913</td><td>0.495</td><td>0.760</td><td>0.583</td><td>0.623</td><td>0.822</td><td>0.691</td></tr><tr><td>(Imp.%)</td><td>3.9%</td><td>1.7%</td><td>3.2%</td><td>4.1%</td><td>2.3%</td><td>3.3%</td><td>4.7%</td><td>1.8%</td><td>3.6%</td><td>2.5%</td><td>0.6%</td><td>1.7%</td><td>1.8%</td><td>0.4%</td><td>1.2%</td></tr></table>
281
+
282
+ Table 3: Main experimental results on DBP15K and SRPRS. All the results and initial embeddings are obtained by their official code with default hyper-parameters. Imp.% represents the percentage increase of DATTI compared to the suboptimal result. Since the Hungarian algorithm only outputs one aligned entity pair for each entity, instead of a rank list, we can only report Hits@1. All improvements are statistically significant with $p < 0.01$ on paired $t$ -test.
283
+
284
+ <table><tr><td rowspan="2">Method</td><td colspan="2">DBP15K</td><td colspan="2">SRPRS</td></tr><tr><td>Train</td><td>DATTI</td><td>Train</td><td>DATTI</td></tr><tr><td>RSN</td><td>3,659</td><td>2.4</td><td>1,279</td><td>1.7</td></tr><tr><td>TransEdge</td><td>1,625</td><td>1.3</td><td>907</td><td>1.2</td></tr><tr><td>Dual-AMN</td><td>177</td><td>3.3</td><td>163</td><td>2.6</td></tr></table>
285
+
286
+ # 5.5 Auxiliary Experiments
287
+
288
+ To explore the behavior of our proposed decoding algorithm in different situations, we design the following experiments:
289
+
290
+ Time Efficiency. By adopting randomized truncated SVD and the Sinkhorn operator, our proposed decoding algorithm achieves high efficiency. Table 4 lists the time costs of the training and decoding process (DATTI) of three EA methods on DBP15K and SRPRS. DATTI requires at most 3 seconds to obtain the result, which is negligible even compared to the training process of the fastest method, Dual-AMN.
291
+
292
+ Adjacency and Gramian Isomorphism. The core contribution of DATTI is to introduce the adjacency and Gramian isomorphism equations into the EA decoding process. To demonstrate their effectiveness, we independently add each of them to Dual-AMN. As shown in Table 5, both slightly improve the performance (by less than $1.6\%$ on Hits@1). Interestingly, the performance gain brought by their combination is greater than the sum of their independent gains, which means these two kinds of isomorphism equations capture non-overlapping information.
293
+
294
+ Table 4: Time costs (second) on DBP15K and SRPRS.
295
+
296
+ <table><tr><td rowspan="2">Method</td><td colspan="2">DBPZH-EN</td><td colspan="2">DBPJA-EN</td><td colspan="2">DBPFR-EN</td></tr><tr><td>Hits@1</td><td>MRR</td><td>Hits@1</td><td>MRR</td><td>Hits@1</td><td>MRR</td></tr><tr><td>Dual-AMN</td><td>0.804</td><td>0.853</td><td>0.803</td><td>0.856</td><td>0.834</td><td>0.881</td></tr><tr><td>+Adj.</td><td>0.820</td><td>0.866</td><td>0.818</td><td>0.868</td><td>0.859</td><td>0.902</td></tr><tr><td>+Gram.</td><td>0.809</td><td>0.857</td><td>0.812</td><td>0.863</td><td>0.848</td><td>0.895</td></tr><tr><td>+DATTI</td><td>0.835</td><td>0.880</td><td>0.836</td><td>0.884</td><td>0.873</td><td>0.913</td></tr></table>
297
+
298
+ Table 5: Ablation studies on DBP15K.
299
+
300
+ ![](images/cfec7230d3be6bce03d64157ade1df8c1fe1871d5b6c4c136e5a26ca187b650f.jpg)
301
+ Figure 5: Hits@1 on $\mathrm{DBP_{ZH-EN}}$ with different $\tau$ .
302
+
303
+ Iterations $k$ and Temperature $\tau$ . The $\tau$ in the Sinkhorn operator is used to push the distribution closer to one-hot, similar to the $\tau$ in the softmax operator. We set $\tau$ from 0.01 to 0.05 and report the corresponding performance curves of DATTI (Dual-AMN) on $\mathrm{DBP_{ZH - EN}}$ in Figure 5. With an appropriate value, the Sinkhorn operator converges quickly to the optimal solution. Although $\tau$ theoretically needs to be close to zero, an overly small $\tau$ makes the algorithm unstable because of floating-point overflow errors, while an overly large $\tau$ prevents the algorithm from converging.
304
+
305
+ ![](images/cfc87515ced231c14646c39aa3ea52d0a1f39925ef5fef56f23351656ae677fa.jpg)
306
+ Figure 6: Hits@1 on DBP15K with different depths $L$ .
307
+
308
+ ![](images/bfca7921b9a0cba9ebc23a6ebd159ee05607608271f8f3241990283655ab0457.jpg)
309
+ Figure 7: Hits@1 and time cost (second) on $\mathrm{DBP_{ZH - EN}}$ with different retaining ratios $\phi$ .
310
+
311
+ Depth $L$ . Figure 6 lists the performances of DATTI (Dual-AMN) with different max depths $L$ . In particular, $L = 0$ is equivalent to only using adjacency isomorphism equations to decode $P_{e}$ . When the depth $L$ is less than 3, each additional layer could deliver significant performance improvements on all subsets of DBP15K. When stacking more layers, the performance gains become negligible or even degrade, which indicates that over-smoothing (Kipf and Welling, 2017) also exists in DATTI.
312
+
313
+ Retaining ratio $\phi$ . To reduce the space-time complexity of DATTI, we only retain the top $\phi \%$ of the singular values of $\hat{H}_s^l$ and $\hat{H}_t^l$ . In Figure 7, we report the Hits@1 and time cost of DATTI (Dual-AMN) on $\mathrm{DBP_{ZH-EN}}$ with different retaining ratios $\phi$ . We observe that when the retaining ratio exceeds $2\%$ , the growth of Hits@1 becomes very slow, while the time cost keeps growing quadratically. Therefore, $\phi = 2\%$ is the sweet spot between performance and efficiency in this situation. In practice, the retaining ratio $\phi$ can be adjusted according to computing resources and data scales.
314
+
315
+ # 5.6 Unsupervised Entity Alignment
316
+
317
+ So far, all the experiments are based on pure structural-based EA methods. As mentioned in Section 3.1, some methods (Xu et al., 2020; Wu et al., 2019) introduce textual information (e.g., entity name) to provide a multi-aspect view. Specifically,
318
+
319
+ <table><tr><td rowspan="2">Method</td><td colspan="2">DBPZH-EN</td><td colspan="2">DBPJA-EN</td><td colspan="2">DBPFR-EN</td></tr><tr><td>Hits@1</td><td>Hits@10</td><td>Hits@1</td><td>Hits@10</td><td>Hits@1</td><td>Hits@10</td></tr><tr><td>GM-Align</td><td>0.679</td><td>0.785</td><td>0.740</td><td>0.872</td><td>0.894</td><td>0.952</td></tr><tr><td>RDGCN</td><td>0.697</td><td>0.842</td><td>0.763</td><td>0.763</td><td>0.873</td><td>0.957</td></tr><tr><td>DGMC</td><td>0.801</td><td>0.875</td><td>0.848</td><td>0.897</td><td>0.933</td><td>0.960</td></tr><tr><td>AtrrGNN</td><td>0.796</td><td>0.929</td><td>0.783</td><td>0.920</td><td>0.919</td><td>0.979</td></tr><tr><td>CREA</td><td>0.736</td><td>-</td><td>0.792</td><td>-</td><td>0.924</td><td>-</td></tr><tr><td>RAGA</td><td>0.873</td><td>-</td><td>0.909</td><td>-</td><td>0.966</td><td>-</td></tr><tr><td>Init-Emb</td><td>0.625</td><td>0.756</td><td>0.680</td><td>0.807</td><td>0.848</td><td>0.919</td></tr><tr><td>+Hun.</td><td>0.667</td><td>-</td><td>0.728</td><td>-</td><td>0.893</td><td>-</td></tr><tr><td>+DATTI</td><td>0.890</td><td>0.958</td><td>0.921</td><td>0.971</td><td>0.979</td><td>0.995</td></tr><tr><td>(Imp.%)</td><td>1.9%</td><td>3.1%</td><td>1.3%</td><td>5.5%</td><td>1.3%</td><td>1.6%</td></tr></table>
320
+
321
+ Table 6: Performances of textual-based EA methods. The results of baselines are collected from the origin papers. Init-Emb represents only using the cosine similarity between the averaged name embeddings.
322
+
323
+ these methods first use machine translation systems or cross-lingual word embeddings to map entity and relation names into a unified semantic space and then average the pre-trained word embeddings to construct the initial features for entities and relations. In our opinion, since the initial features of entity $H^{e}$ and relation $H^{r}$ have been pre-mapped, these textual-based EA methods are more like decoding algorithms to eliminate the translation noise. In this situation, DATTI could also play a similar role even without any pre-aligned entity pairs.
324
+
325
+ To make fair comparisons with these textual-based EA methods, we use the same entity name translations and pre-trained word embeddings provided by Xu et al. (2019). For DATTI, we retain the top $10\%$ of the singular values of $\hat{H}_s^l$ and $\hat{H}_t^l$ , while keeping the other hyper-parameters the same. Table 6 lists the performances of DATTI and six baselines on DBP15K. Surprisingly, unsupervised DATTI outperforms all the supervised competitors, improving the performance on Hits@1 by more than $1.3\%$ . Besides showing the powerful competitiveness of DATTI, this result also indicates that existing textual-based EA methods have considerable redundancy. When the initial features have been pre-mapped, complex neural networks and pre-aligned entity pairs may not be necessary.
326
+
327
+ # 6 Conclusion
328
+
329
+ In this paper, we propose an effective and efficient EA decoding algorithm via third-order tensor isomorphism (DATTI). Extensive experiments on public datasets indicate that our decoding algorithm can deliver significant performance improvements even on the most advanced EA methods, while the extra required time is less than 3 seconds.
330
+
331
+ # References
332
+
333
+ Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaojiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.
334
+ Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2787-2795.
335
+ Yixin Cao, Xiang Wang, Xiangnan He, Zikun Hu, and Tat-Seng Chua. 2019. Unifying knowledge graph learning and recommendation: Towards a better understanding of user preferences. In The World Wide Web Conference, WWW 2019, San Francisco, CA, USA, May 13-17, 2019, pages 151-161.
336
+ Marco Cuturi. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2292-2300.
337
+ Matthias Fey, Jan Eric Lenssen, Christopher Morris, Jonathan Masci, and Nils M. Kriege. 2020. Deep graph matching consensus. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.
338
+ Alex Graves, Marcus Liwicki, Santiago Fernández, Roman Bertolami, Horst Bunke, and Jürgen Schmidhuber. 2008. A novel connectionist system for unconstrained handwriting recognition. IEEE transactions on pattern analysis and machine intelligence, 31(5):855-868.
339
+ Lingbing Guo, Zequn Sun, and Wei Hu. 2019. Learning to exploit long-term relational dependencies in knowledge graphs. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 2505-2514.
340
+ Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournaepau, Eric Wieser, Julian Taylor, Sebastian Berg,
341
+
342
+ Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant. 2020. Array programming with NumPy. Nature, 585(7825):357-362.
343
+ Hao He, Qian Wang, Zhipeng Yu, Yang Zhao, Jiajun Zhang, and Chengqing Zong. 2021. Synchronous interactive decoding for multilingual neural machine translation. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 12981-12988. AAAI Press.
344
+ Roy Jonker and A. Volgenant. 1987. A shortest augmenting path algorithm for dense and sparse linear assignment problems. Computing, 38(4):325-340.
345
+ Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.
346
+ Harold W. Kuhn. 1955. The Hungarian method for the assignment problem. Naval research logistics quarterly, 2(1-2):83-97.
347
+ Eugene L Lawler. 1963. The quadratic assignment problem. Management science, 9(4):586-599.
348
+ Zhiyuan Liu, Yixin Cao, Liangming Pan, Juanzi Li, and Tat-Seng Chua. 2020. Exploring and evaluating attributes, values, and structures for entity alignment. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6355-6364. Association for Computational Linguistics.
349
+ Xin Mao, Wenting Wang, Yuanbin Wu, and Man Lan. 2021a. Boosting the speed of entity alignment $10\times$: Dual attention matching network with normalized hard sample mining. In WWW '21: The Web Conference 2021, Virtual Event / Ljubljana, Slovenia, April 19-23, 2021, pages 821-832. ACM / IW3C2.
350
+ Xin Mao, Wenting Wang, Yuanbin Wu, and Man Lan. 2021b. From alignment to assignment: Frustratingly simple unsupervised entity alignment. CoRR, abs/2109.02363.
351
+ Xin Mao, Wenting Wang, Huimin Xu, Man Lan, and Yuanbin Wu. 2020. MRAEA: an efficient and robust entity alignment approach for cross-lingual knowledge graph. In WSDM '20: The Thirteenth ACM International Conference on Web Search and Data Mining, Houston, TX, USA, February 3-7, 2020, pages 420-428.
352
+
353
+ Gonzalo E. Mena, David Belanger, Scott W. Linderman, and Jasper Snoek. 2018. Learning latent permutations with gumbel-sinkhorn networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
354
+ Shichao Pei, Lu Yu, and Xiangliang Zhang. 2019. Improving cross-lingual entity alignment via optimal transport. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 3231-3237.
355
+ Tamás Sarlós. 2006. Improved approximation algorithms for large matrices via random projections. In 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2006), 21-24 October 2006, Berkeley, California, USA, Proceedings, pages 143-152. IEEE Computer Society.
356
+ Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 815-823. IEEE Computer Society.
357
+ Richard Sinkhorn. 1964. A relationship between arbitrary positive matrices and doubly stochastic matrices. The annals of mathematical statistics, 35(2):876-879.
358
+ Zequn Sun, Wei Hu, and Chengkai Li. 2017. Cross-lingual entity alignment via joint attribute-preserving embedding. In The Semantic Web - ISWC 2017 - 16th International Semantic Web Conference, Vienna, Austria, October 21-25, 2017, Proceedings, Part I, volume 10587 of Lecture Notes in Computer Science, pages 628-644. Springer.
359
+ Zequn Sun, Wei Hu, Qingheng Zhang, and Yuzhong Qu. 2018. Bootstrapping entity alignment with knowledge graph embedding. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 4396-4402.
360
+ Zequn Sun, Jiacheng Huang, Wei Hu, Muhao Chen, Lingbing Guo, and Yuzhong Qu. 2020a. Transedge: Translating relation-contextualized embeddings for knowledge graphs. CoRR, abs/2004.13579.
361
+ Zequn Sun, Chengming Wang, Wei Hu, Muhao Chen, Jian Dai, Wei Zhang, and Yuzhong Qu. 2020b. Knowledge graph alignment network with gated multi-hop neighborhood aggregation. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 222-229.
362
+
363
+ Zequn Sun, Qingheng Zhang, Wei Hu, Chengming Wang, Muhao Chen, Farahnaz Akrami, and Chengkai Li. 2020c. A benchmarking study of embedding-based entity alignment for knowledge graphs. Proc. VLDB Endow., 13(11):2326-2340.
364
+ Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph attention networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings.
365
+ Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. 2018. Cross-lingual knowledge graph alignment via graph convolutional networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 349-357.
366
+ Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Rui Yan, and Dongyan Zhao. 2019. Relation-aware entity alignment for heterogeneous knowledge graphs. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 5278-5284.
367
+ Kun Xu, Linfeng Song, Yansong Feng, Yan Song, and Dong Yu. 2020. Coordinated reasoning for crosslingual knowledge graph alignment. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9354-9361.
368
+ Kun Xu, Liwei Wang, Mo Yu, Yansong Feng, Yan Song, Zhiguo Wang, and Dong Yu. 2019. Cross-lingual knowledge graph alignment via graph matching neural network. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 3156-3161.
369
+ Junchi Yan, Xu-Cheng Yin, Weiyao Lin, Cheng Deng, Hongyuan Zha, and Xiaokang Yang. 2016. A short survey of recent advances in graph matching. In Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval, ICMR 2016, New York, New York, USA, June 6-9, 2016, pages 167-174. ACM.
370
+ Hsiu-Wei Yang, Yanyan Zou, Peng Shi, Wei Lu, Jimmy Lin, and Xu Sun. 2019. Aligning cross-lingual entities with multi-aspect information. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 4430-4440.
371
+
372
+ Chen Zhao, Chenyan Xiong, Xin Qian, and Jordan L. Boyd-Graber. 2020. Complex factoid question answering with a free-text knowledge graph. In WWW '20: The Web Conference 2020, Taipei, Taiwan, April 20-24, 2020, pages 1205-1216.
373
+
374
+ Renbo Zhu, Meng Ma, and Ping Wang. 2021. RAGA: relation-aware graph attention networks for global entity alignment. In Advances in Knowledge Discovery and Data Mining - 25th Pacific-Asia Conference, PAKDD 2021, Virtual Event, May 11-14, 2021, Proceedings, Part I, volume 12712 of Lecture Notes in Computer Science, pages 501-513. Springer.
375
+
376
+ Ran Zmigrod, Tim Vieira, and Ryan Cotterell. 2020. Please mind the root: Decoding arborescences for dependency parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 4809-4819. Association for Computational Linguistics.
377
+
378
+ # A Appendix
379
+
380
+ Proof: To prove Equation (11), we combine the first sub-equations of Equations (4) and (8):
381
+
382
+ $$
383
+ \left\{ \begin{array}{l} \boldsymbol{P}_e G(\boldsymbol{\mathcal{A}}_s^{(1)})^l \boldsymbol{P}_e^{\top} = G(\boldsymbol{\mathcal{A}}_t^{(1)})^l \\ \boldsymbol{P}_e \boldsymbol{\mathcal{A}}_s^{(1)} (\boldsymbol{P}_e \otimes \boldsymbol{P}_r)^{\top} = \boldsymbol{\mathcal{A}}_t^{(1)} \end{array} \right.
384
+ $$
385
+
386
+ Since $\boldsymbol{P}_e^{\top} \boldsymbol{P}_e = \boldsymbol{E}$, we have:
387
+
388
+ $$
389
+ \boldsymbol{P}_e G(\boldsymbol{\mathcal{A}}_s^{(1)})^l \boldsymbol{\mathcal{A}}_s^{(1)} (\boldsymbol{P}_e \otimes \boldsymbol{P}_r)^{\top} = G(\boldsymbol{\mathcal{A}}_t^{(1)})^l \boldsymbol{\mathcal{A}}_t^{(1)}
390
+ $$
391
+
392
+ According to Equation (9), we obtain:
393
+
394
+ $$
395
+ \boldsymbol {P} _ {e} \boldsymbol {H} _ {s} ^ {e} \otimes \boldsymbol {P} _ {r} \boldsymbol {H} _ {s} ^ {r} \approx \boldsymbol {H} _ {t} ^ {e} \otimes \boldsymbol {H} _ {t} ^ {r} \tag {18}
396
+ $$
397
+
398
+ Finally, since $(\pmb{P}_e \otimes \pmb{P}_r)^\top (\pmb{P}_e \pmb{H}_s^e \otimes \pmb{P}_r \pmb{H}_s^r) = \pmb{P}_e^\top \pmb{P}_e \pmb{H}_s^e \otimes \pmb{P}_r^\top \pmb{P}_r \pmb{H}_s^r = \pmb{H}_s^e \otimes \pmb{H}_s^r$, Equation (11) is proved as follows:
399
+
400
+ $$
401
+ \begin{array}{l} \boldsymbol{P}_e G(\boldsymbol{\mathcal{A}}_s^{(1)})^l \boldsymbol{\mathcal{A}}_s^{(1)} (\boldsymbol{P}_e \otimes \boldsymbol{P}_r)^{\top} (\boldsymbol{P}_e \boldsymbol{H}_s^e \otimes \boldsymbol{P}_r \boldsymbol{H}_s^r) \\ = \boldsymbol{P}_e G(\boldsymbol{\mathcal{A}}_s^{(1)})^l \boldsymbol{\mathcal{A}}_s^{(1)} (\boldsymbol{H}_s^e \otimes \boldsymbol{H}_s^r) \\ \approx G(\boldsymbol{\mathcal{A}}_t^{(1)})^l \boldsymbol{\mathcal{A}}_t^{(1)} (\boldsymbol{H}_t^e \otimes \boldsymbol{H}_t^r) \end{array}
402
+ $$
403
+
404
+ Furthermore, Equations (12) and (13) can be proved in a similar way.
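The proof above repeatedly uses the mixed-product property of the Kronecker product together with $P^\top P = E$ for permutation matrices, so that $(\boldsymbol{P}_e \otimes \boldsymbol{P}_r)^\top (\boldsymbol{P}_e \boldsymbol{H}_s^e \otimes \boldsymbol{P}_r \boldsymbol{H}_s^r) = \boldsymbol{H}_s^e \otimes \boldsymbol{H}_s^r$. A quick NumPy check on random permutations (an illustrative sketch; the sizes and variable names below are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_permutation_matrix(n, rng):
    """An n x n permutation matrix: the identity with its rows shuffled."""
    return np.eye(n)[rng.permutation(n)]

ne, nr, d = 4, 3, 2                  # toy entity count, relation count, embedding dim
Pe = random_permutation_matrix(ne, rng)
Pr = random_permutation_matrix(nr, rng)
He = rng.standard_normal((ne, d))    # stand-in for H_s^e
Hr = rng.standard_normal((nr, d))    # stand-in for H_s^r

# Mixed-product property: (A kron B)(C kron D) = (AC) kron (BD),
# so (Pe kron Pr)^T (Pe He kron Pr Hr) = (Pe^T Pe He) kron (Pr^T Pr Hr) = He kron Hr.
lhs = np.kron(Pe, Pr).T @ np.kron(Pe @ He, Pr @ Hr)
rhs = np.kron(He, Hr)
assert np.allclose(lhs, rhs)
print("identity holds")
```

Running this confirms numerically that both permutations cancel, which is exactly the step used to reach Equation (11).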
aneffectiveandefficiententityalignmentdecodingalgorithmviathirdordertensorisomorphism/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c9caacb9d4ae68f9e15403733ec348fe1d498c71da3715d049023ecbd947e09d
3
+ size 588313
aneffectiveandefficiententityalignmentdecodingalgorithmviathirdordertensorisomorphism/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0bdf940fdc17878362ce17c5bcf86e87ef03b2cbd83ac1b92c756449d83d64d8
3
+ size 472881
anempiricalstudyofmemorizationinnlp/b84a5b28-32b2-40c5-97fc-724a6c7b148d_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:72c98c02c02e1c39faa83c23c0e8b62754d46a7ed62b405ed11eb00d9fd82e49
3
+ size 108896
anempiricalstudyofmemorizationinnlp/b84a5b28-32b2-40c5-97fc-724a6c7b148d_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4823e74ba48d9a28c8e783304ae6f38c148c70a53629842787a6f48f73ea8d41
3
+ size 125842
anempiricalstudyofmemorizationinnlp/b84a5b28-32b2-40c5-97fc-724a6c7b148d_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9ddff78d8c37350ddca6cb3c212d87d9e38e00e0caad5ef15d5774f72591e3be
3
+ size 756549
anempiricalstudyofmemorizationinnlp/full.md ADDED
@@ -0,0 +1,462 @@




1
+ # An Empirical Study of Memorization in NLP
2
+
3
+ Xiaosen Zheng
4
+
5
+ Singapore Management University xszheng.2020@phdcs.smu.edu.sg
6
+
7
+ Jing Jiang
8
+
9
+ Singapore Management University jingjiang@smu.edu.sg
10
+
11
+ # Abstract
12
+
13
+ A recent study by Feldman (2020) proposed a long-tail theory to explain the memorization behavior of deep learning models. However, memorization has not been empirically verified in the context of NLP, a gap addressed by this work. In this paper, we use three different NLP tasks to check if the long-tail theory holds. Our experiments demonstrate that top-ranked memorized training instances are likely atypical, and removing the top-memorized training instances leads to a more serious drop in test accuracy compared with removing training instances randomly. Furthermore, we develop an attribution method to better understand why a training instance is memorized. We empirically show that our memorization attribution method is faithful and share our interesting finding that the top-memorized parts of a training instance tend to be features negatively correlated with the class label.
14
+
15
+ # 1 Introduction
16
+
17
+ In recent years, there has been an increasing amount of interest in the machine learning community to understand the memorization behavior of deep neural network models. Studies have shown that deep learning models often have sufficient capacities to "memorize" training examples (Zhang et al., 2017; Arpit et al., 2017). A number of recent studies tried to understand how memorization helps generalization (Chatterjee, 2018; Feldman, 2020; Montanari and Zhong, 2020; Khandelwal et al., 2020, 2021).
18
+
19
+ In NLP, memorization of training examples by deep learning models is also often observed (Li and Wisniewski, 2021; Lewis et al., 2021; Raunak et al., 2021), and existing studies usually see memorization as something that hinders generalization. For example, Elangovan et al. (2021) tried to measure the amount of "data leakage" in NLP datasets in order to assess a model's ability to memorize vs. its ability to generalize.
20
+
21
+ However, recently Feldman (2020) proposed a long-tail theory, which states that memorization is necessary for generalization if the data follows a long-tail distribution. This theory was later empirically validated by Feldman and Zhang (2020), but their validation was done in only the computer vision domain. It is therefore interesting and useful for us to study whether the long-tail theory also holds in NLP; such validation would help us better understand the utility of memorization in the context of NLP.
22
+
23
+ The long-tail theory states that if the training data form a long-tail distribution, where there are many small "sub-populations" that are atypical instances, and if these small sub-populations are also present in the test data, then memorizing these atypical instances helps the model generalize to the test data. In order to validate this long-tail theory in the context of NLP, we follow the experiments and analyses on image classification done by Feldman and Zhang (2020). Specifically, we aim to answer the following questions in this paper: (1) On a few typical NLP tasks, are the training instances memorized by deep learning models indeed atypical instances? (2) Does memorizing these training instances lead to lower generalization error on the test instances?
24
+
25
+ In addition, observing that it is not always straightforward to understand why a training instance is being memorized, we study the following novel research question: (3) Can we provide some explanation about why a training instance is memorized? To be more specific, can we attribute the memorization score of a training instance to its individual tokens such that we can quantify which tokens require the most memorization by the model?
26
+
27
+ To answer these research questions, we first adopt self-influence (Koh and Liang, 2017) as our memorization scoring function. Compared with the estimator proposed by Feldman and Zhang (2020), our self-influence function is also theoretically motivated but has the advantage that it is easy for us to derive a memorization attribution method for the third research question above. We present the self-influence function in Section 2.1, and in Section 2.2, we present our novel memorization attribution method. We conduct experiments on three NLP tasks: sentiment classification, natural language inference (NLI) and text classification.
28
+
29
+
30
+
31
+ Our experiments and analyses demonstrate that the training instances with the highest memorization scores tend to be atypical, at least on sentiment classification and NLI. On all three tasks, we find that removing the top-memorized training instances results in significantly dropped test performance, and the drop is markedly higher compared with removing a random subset of training instances. We also evaluate our memorization attribution method and find that our method can indeed identify input tokens that require the most memorization. Finally, we apply our memorization attribution method to sentiment classification and to an image classification dataset, and we share the interesting finding that the highly-memorized input features tend to be those that are negatively correlated with the class labels. Our code and data are available at https://github.com/xszheng2020/memorization.
32
+
33
+ # 2 Our Approach
34
+
35
+ To validate the long-tail theory in the context of NLP, let us first review the main claims of the theory. First, the long-tail theory hypothesizes that training instances with the same class label follow a long-tail distribution, with instances at the tail end being those atypical instances that need to be memorized. To verify this assumption, we first identify those training instances that are memorized by a trained deep learning model and then check if they are indeed atypical. Specifically, we follow Feldman and Zhang (2020) and adopt "self-influence" to measure memorization, but we use the influence function proposed by Koh and Liang (2017) to define self-influence. Second, the long-tail theory states that memorization of the atypical training instances leads to lower generalization error, because the atypical training instances belong to sub-populations that are also present in the test data. To verify this statement, we check whether removing the memorized training instances would lead to a more significant performance drop on the test data than removing a random sample of training instances.
36
+
37
+
38
+
39
+ It is worth noting that the approach outlined above follows the experiments conducted by Feldman and Zhang (2020) to validate the long-tail theory on image classification.
40
+
41
+ Furthermore, we want to pinpoint which parts of a memorized instance are most critical for memorization. In other words, since each training instance is assigned a memorization score, can we attribute the memorization score to different parts of the input of this instance? This presumably can help us better understand which parts of the input need to be memorized the most. We follow the idea from Integrated Gradients (IG) (Sundararajan et al., 2017) and derive a formula to compute memorization attribution.
42
+
43
+ # 2.1 Memorization: Self-Influence
44
+
45
+ The high-level idea of Feldman (2020) to define memorization is that memorization measures how the prediction on a training instance $z = (x, y)$ (where $x$ is the observation and $y$ is the label) changes when $z$ is removed from the training data. This notion is closely related to the influence function defined by Koh and Liang (2017), which measures how much the loss at a test point $z_{\text{test}}$ is influenced by a slight up-weighting of a training instance $z$ in the training loss function. While the influence function is generally used to measure the influence of a training instance on a test instance, if we use it to measure the influence of a training instance on itself, i.e., to measure "self-influence," then this self-influence corresponds to the general notion of memorization defined by Feldman (2020).
46
+
47
+ Adopting the influence function defined by Koh and Liang (2017), we define the memorization score for a training instance $z$ as follows:
48
+
49
+ $$
50
+ \mathcal{M}_{\text{remove}}(z) \stackrel{\text{def}}{=} -\left.\frac{d P(y|x;\hat{\theta}_{\epsilon,-z})}{d\epsilon}\right|_{\epsilon=0}, \tag{1}
51
+ $$
52
+
53
+ where $\hat{\theta}_{\epsilon, -z}$ represents the parameters of the model trained with the instance $z$ down-weighted by $\epsilon$ , $P(y|x; \theta)$ is the conditional probability using $\theta$ . Thus $\mathcal{M}_{\mathrm{remove}}(z)$ is the amount of change of $P(y|x; \theta)$ when the instance $z$ is down-weighted by a small amount $\epsilon$ .
54
+
55
+ After several steps of derivation (details to be given in Appendix A), the computation of Eqn 1 follows the following formula:
56
+
57
+ $$
58
+ \mathcal{M}_{\text{remove}}(z) = -\nabla_{\theta} P(y|x;\hat{\theta})^{\top} H_{\hat{\theta}}^{-1} \nabla_{\theta} L(z,\hat{\theta}), \tag{2}
59
+ $$
60
+
61
+ where $\hat{\theta}$ denotes the parameters of the model trained on all instances, $L$ is the loss function (cross-entropy in our implementation) and $H_{\hat{\theta}} = \frac{1}{n}\sum_{i=1}^{n}\nabla_{\theta}^2 L(z_i,\hat{\theta})$ , where $(z_1,z_2,\ldots,z_n)$ are the training instances.
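To make Eqn 2 concrete, the sketch below computes self-influence scores for a small logistic-regression classifier, where the Hessian of the average training loss can be formed and inverted explicitly; for the transformer models used in the paper one would instead rely on implicit Hessian-vector products. All names, the synthetic data, and the damping term are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Tiny synthetic binary classification problem (hypothetical stand-in data).
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
y = (X[:, 0] + 0.5 * rng.standard_normal(50) > 0).astype(float)

# Fit logistic regression by gradient descent on the cross-entropy loss.
theta = np.zeros(3)
for _ in range(2000):
    p = sigmoid(X @ theta)
    theta -= 0.1 * X.T @ (p - y) / len(y)

# Hessian of the average cross-entropy loss; a small damping term is added
# for numerical stability (our addition, not part of Eqn 2).
p_all = sigmoid(X @ theta)
H = (X.T * (p_all * (1 - p_all))) @ X / len(y) + 1e-3 * np.eye(3)

def self_influence(i):
    """M_remove(z_i) = -grad P(y_i|x_i)^T H^{-1} grad L(z_i)  (cf. Eqn 2)."""
    grad_L = (p_all[i] - y[i]) * X[i]                   # gradient of the loss at z_i
    sign = 1.0 if y[i] == 1 else -1.0                   # P(y|x) = sigma or 1 - sigma
    grad_P = sign * p_all[i] * (1 - p_all[i]) * X[i]    # gradient of P(y_i|x_i)
    return -grad_P @ np.linalg.solve(H, grad_L)

scores = np.array([self_influence(i) for i in range(len(y))])
```

Ranking the training instances by `scores` then plays the role of ranking by memorization in the experiments that follow.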
62
+
63
+ # 2.2 Memorization Attribution
64
+
65
+ In order to better understand why an instance is memorized, we propose a fine-grained notion of memorization at "feature" level instead of instance level, i.e., to attribute the memorization score of an instance to its individual features. Our proposed memorization attribution method is general and can be applied to any input representation. For NLP tasks, this means we attribute the memorization score defined above to each token of the input sequence. For images, this would be to attribute the memorization scores to pixels.
66
+
67
+ For this memorization attribution, we borrow the idea from Integrated Gradients (IG) (Sundararajan et al., 2017), which is a gradient-based attribution method for understanding which parts of a test instance are more responsible for its prediction. In particular, the IG method requires an uninformative baseline input $x'$ as a reference point. Similarly, here we also assume a baseline $x'$ . This baseline is supposed to be an instance that does not have any influence on any test instance, and in our implementation, we use a sequence of the same length as $x$ but consisting of only [MASK] tokens.
68
+
69
+ We first consider the influence of replacing $z = (x, y)$ with the baseline $z' = (x', y)$ (which is similar to perturbation-based influence from (Koh and Liang, 2017)):
70
+
71
+ $$
72
+ \mathcal{M}_{\text{replace}}(z) \stackrel{\text{def}}{=} -\left.\frac{d P(y|x;\hat{\theta}_{\epsilon,z',-z})}{d\epsilon}\right|_{\epsilon=0}, \tag{3}
73
+ $$
74
+
75
+ where $\hat{\theta}_{\epsilon, z', -z}$ represents the parameters resulting from moving $\epsilon$ mass from $z$ to $z'$ , i.e., adding $z'$ to the training data and giving it a weight of $\epsilon$ in the loss function while reducing the weight of the original $z$ by $\epsilon$ . Thus $\mathcal{M}_{\mathrm{replace}}(z)$ is the amount of change of $P(y|x; \theta)$ when a small amount $\epsilon$ of $z$ is replaced by the uninformative $z'$ .
76
+
77
+ It is worth pointing out that we can regard $\mathcal{M}_{\mathrm{replace}}(z)$ as an alternative way of measuring the amount of memorization of $z$ , similar to how perturbation-based influence is an alternative way of measuring influence in (Koh and Liang, 2017).
78
+
79
+ With similar derivation steps, the computation of Eqn 3 is as follows:
80
+
81
+ $$
82
+ \mathcal{M}_{\text{replace}}(z) = -s^{\top}\left(\nabla_{\theta} L(z,\hat{\theta}) - \nabla_{\theta} L(z',\hat{\theta})\right), \tag{4}
83
+ $$
84
+
85
+ where $s = H_{\hat{\theta}}^{-1}\nabla_{\theta}P(y|x;\hat{\theta})$ . (For more details, please refer to Appendix B.)
86
+
87
+ The advantage of using this alternative measure of memorization is that $\mathcal{M}_{\mathrm{replace}}(z)$ can be decomposed into a linear combination of scores, each corresponding to a single token in the input sequence. For NLP applications, the input $x$ usually corresponds to an embedding matrix $\mathbf{X}\in \mathbb{R}^{N\times d}$ (where $N$ is the number of tokens and $d$ is the embedding dimensions). We can show that
88
+
89
+ $$
90
+ \mathcal{M}_{\text{replace}}(z) = -\sum_{t=1}^{N}\sum_{l=1}^{d} r_{t,l}\left(\mathbf{X}_{t,l} - \mathbf{X}_{t,l}'\right), \tag{5}
91
+ $$
92
+
93
+ where $r = \left[\int_{\alpha=0}^{1} \frac{d g(\mathbf{X}' + \alpha(\mathbf{X} - \mathbf{X}'))}{d\mathbf{X}} d\alpha\right] s$ and $g(\mathbf{X}) = \nabla_{\theta} L((\mathbf{X}, y), \hat{\theta})$ , which can be efficiently computed by the Hessian-vector product (Pearlmutter, 1994). For more details, please refer to Appendix B.
94
+
95
+ The memorization attribution of the $t$ -th token is thus given by $-\sum_{l=1}^{d} r_{t,l} \times (\mathbf{X}_{t,l} - \mathbf{X}_{t,l}')$ .
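To illustrate the Riemann-sum computation behind Eqn 5, the sketch below attributes $\mathcal{M}_{\text{replace}}$ to the input features of a toy logistic-regression instance, using finite differences for the Jacobian of $g$ and an identity stand-in for $H^{-1}$, and checks the completeness property that the per-feature attributions sum to Eqn 4. This is an illustrative reconstruction under those assumptions, not the paper's code.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(2)
d = 4
theta = rng.standard_normal(d)       # pretend these are the trained parameters
H_inv = np.eye(d)                    # identity stand-in for H^{-1} (our simplification)
x, y = rng.standard_normal(d), 1.0   # the training instance z = (x, y)
x_base = np.zeros(d)                 # uninformative baseline x'

def grad_L(x_in):
    """g(x): gradient of the cross-entropy loss w.r.t. theta at (x, y)."""
    return (sigmoid(theta @ x_in) - y) * x_in

def grad_P(x_in):
    """Gradient of P(y=1 | x; theta) w.r.t. theta."""
    p = sigmoid(theta @ x_in)
    return p * (1 - p) * x_in

s = H_inv @ grad_P(x)

def jacobian_g(x_in, eps=1e-6):
    """Finite-difference Jacobian of g with respect to the input x."""
    J = np.zeros((d, d))
    for l in range(d):
        e = np.zeros(d)
        e[l] = eps
        J[:, l] = (grad_L(x_in + e) - grad_L(x_in - e)) / (2 * eps)
    return J

# Midpoint Riemann sum for r = [integral of dg/dx along the straight path]^T s.
steps = 50
r = np.zeros(d)
for k in range(steps):
    alpha = (k + 0.5) / steps
    r += jacobian_g(x_base + alpha * (x - x_base)).T @ s / steps

attributions = -r * (x - x_base)     # per-feature terms of Eqn 5

# Completeness check: the attributions sum to M_replace of Eqn 4.
m_replace = -s @ (grad_L(x) - grad_L(x_base))
assert np.isclose(attributions.sum(), m_replace, atol=1e-3)
```

In the NLP setting of the paper, `x` would be the token-embedding matrix and the per-token attribution is obtained by summing `attributions` over the embedding dimensions of each token.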
96
+
97
+ # 3 Experiments
98
+
99
+ With the memorization score defined in Eqn 2 and the memorization attribution score defined in Eqn 5, we now conduct experiments to answer the three research questions raised in Section 1.
100
+
101
+ # 3.1 Experiment Settings
102
+
103
+ We conduct our experiments on the following three datasets:
104
+
105
+ SST-2 (Socher et al., 2013): This is a dataset for sentence-level binary (positive vs. negative) sentiment classification. It consists of 6,920 training instances, 872 development instances and 1,821 test instances.
106
+
107
+ SNLI (MacCartney and Manning, 2008): This is a dataset for natural language inference, which aims to predict the entailment relation (contradiction, neutral or entailment) between a premise and a hypothesis. We combine the contradiction and neutral classes into a single non-entailment class, and randomly sample 10k training instances, 6,658 development instances and 6,736 test instances.
108
+
109
+ Yahoo! Answers (Zhang et al., 2015): This is a collection of question-answer pairs categorized into 10 topic-based classes. We randomly sample 10k training instances, 10k development instances and 10k test instances.
110
+
111
+
112
+
113
+ In addition, we also use CIFAR-10 (Krizhevsky et al., 2009), which is a dataset for 10-class image classification. We randomly sample 10k training instances, 5k development instances and 10k test instances. For some tasks, we down-sample the training set because influence function is known to be expensive to compute.
114
+
115
+ For all NLP tasks, we adopt the pre-trained DistilBERT model (Sanh et al., 2019) that consists of 6 transformer layers, where each layer consists of 12 attention heads. We use the final hidden state of the [CLS] token for classification. For CIFAR-10, we extract visual grid features using a pre-trained ResNet50 (He et al., 2016) first and then train an MLP classifier on top of that.
116
+
117
+ We use the SGD optimizer, setting the learning rate, momentum and batch size to 0.01, 0.9 and 32, respectively. We tune other hyper-parameters on the development set manually.
118
+
119
+ Although influence function is model-dependent and therefore models trained with different random seeds may produce different memorization scores for the same training instance, we found that in practice, ranking training instances based on memorization scores obtained from models trained by different random seeds produces similar rankings across different models. Thus, we only consider a single model checkpoint for computing our self-influence based memorization scores in the following experiments. (See Appendix C for the exact description.) For memorization attribution, the number of Riemann Sum steps is set to be 50.
120
+
121
+ # 3.2 Checking Memorized Instances
122
+
123
+ <table><tr><td>Group</td><td>Negative</td><td>Positive</td></tr><tr><td>Top-10%</td><td>35.80</td><td>74.00</td></tr><tr><td>All</td><td>23.24</td><td>86.39</td></tr><tr><td>Bottom-10%</td><td>14.92</td><td>94.52</td></tr></table>
124
+
125
+ Table 1: The average percentage of positive phrases over (1) the top-$10\%$ memorized positive/negative instances, (2) all positive/negative instances, and (3) the bottom-$10\%$ memorized positive/negative instances.
126
+
127
+ In the first set of experiments, we use our self-influence-based memorization scoring function as defined in Eqn. 1 to rank the training instances.
128
+
129
+ Our goal is to check if the top-memorized instances are indeed atypical instances. However, it is difficult to measure the typicality of instances. We note that in the prior work (Feldman and Zhang, 2020) where the authors tried to validate the long-tail theory on computer vision datasets, there was not any quantitative experiment, and the authors relied only on qualitative analysis (i.e., manual inspection of the top-ranked instances) to show that memorized instances tend to be atypical. In our experiments, we perform two kinds of checking: (1) First, we adopt qualitative evaluation as Feldman and Zhang (2020) did on both SST-2 and SNLI. For Yahoo! Answers, however, because each instance contains a long document, it is not easy for humans to judge whether or not an instance is atypical. (2) Second, we define quantitative measures of typicality on sentiment analysis because annotations are available on this dataset and these annotations allow us to define some form of typicality.
130
+
131
+ # SST-2
132
+
133
+ For SST-2, we judge whether or not the top-ranked memorized instances are atypical in two ways: (1) The first is based on a heuristic metric. We check the percentage of positive phrases in an instance, where phrase-level sentiment polarity labels are from the annotations provided by SST-2. Intuitively, a typical positive sentence should have a relatively high percentage of positive phrases and a typical negative sentence should have a relatively low percentage of positive phrases. We collect such statistics from SST-2 based on the phrase-level annotations and found that this is to a large extent true. For example, more than $75\%$ of positive sentences have at least $78.31\%$ of positive phrases and more than $75\%$ of negative sentences have at most $35.73\%$ of positive phrases. (See Appendix D for details.) Therefore, by checking the percentage of positive phrases inside a positive or negative instance, we can in a way judge whether that instance is typical or atypical. When calculating the percentage of positive phrases inside a sentence, we apply Laplace smoothing. (2) We also manually inspect the top-ranked and bottom-ranked training instances based on the memorization scores and use our human knowledge to judge whether the top-ranked ones are atypical while the bottom-ranked ones are typical.
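As a small illustration of the heuristic metric, the smoothed percentage of positive phrases can be computed as below. We assume add-one Laplace smoothing over the two polarity classes; the exact smoothing constant is not spelled out in the text, so treat it as a hypothetical choice.

```python
def positive_phrase_pct(n_positive, n_total, alpha=1.0, n_classes=2):
    """Laplace-smoothed percentage of positive phrases in a sentence."""
    return 100.0 * (n_positive + alpha) / (n_total + alpha * n_classes)

# A sentence with 9 annotated phrases, 8 of them positive:
print(round(positive_phrase_pct(8, 9), 2))
```

With this form, a sentence that has no annotated phrases at all falls back to an uninformative 50%, which is the point of the smoothing.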
134
+
135
+ <table><tr><td colspan="2">Negative</td><td colspan="2">Positive</td></tr><tr><td>Content</td><td>Mem</td><td>Content</td><td>Mem</td></tr><tr><td>Starts out with tremendous promise, introducing an intriguing and alluring premise, only to fall prey to a boatload of screenwriting cliches that sink it faster than a leaky freighter</td><td>14.83</td><td>The director, Mark Pellington, does a terrific job conjuring up a sinister, menacing atmosphere though unfortunately all the story gives us is flashing red lights, a rattling noise, and a bump on the head</td><td>14.28</td></tr><tr><td>Mr. Wolter and Ms. Seldhal give strong and convincing performances, but neither reaches into the deepest recesses of the character to unearth the quaking essence of passion, grief and fear</td><td>13.65</td><td>This is a fascinating film because there is no clear-cut hero and no all-out villain</td><td>14.18</td></tr><tr><td>This is a monumental achievement in practically every facet of inept filmmaking: joyless, idiotic, annoying, heavy-handed visually atrocious, and often downright creepy</td><td>11.01</td><td>The film is reasonably entertaining, though it begins to drag two-thirds through, when the melodramatic aspects start to overtake the comedy</td><td>11.04</td></tr><tr><td>Sadly, Full Frontal plays like the work of a dilettante</td><td>0.00</td><td>The large-format film is well suited to capture these musicians in full regalia and the incredible IMAX sound system lets you feel the beat down to your toes</td><td>0.00</td></tr><tr><td>A mess</td><td>0.00</td><td>PT. Anderson understands the grandness of romance and how love is the great equalizer that can calm us of our daily ills and bring out joys in our lives that we never knew were possible</td><td>0.00</td></tr><tr><td>The images lack contrast, are murky and are frequently too dark to be decipherable</td><td>0.00</td><td>together writer-director Danny Verete&#x27;s three tales comprise a powerful and reasonably fulfilling gestalt</td><td>0.00</td></tr></table>
136
+
137
+ Table 2: Top-3 and Bottom-3 memorized training examples from the SST-2 task. Note that many examples have a zero memorization score; we randomly sample 3 of them as the bottom examples.
138
+
139
+ <table><tr><td colspan="2">Non-Entail</td><td colspan="2">Entail</td></tr><tr><td>Content</td><td>Mem</td><td>Content</td><td>Mem</td></tr><tr><td>P: A man in a bright pastel blue overcoat plays a unique instrument by the corner of a building with a sign propped against a bag in front of him</td><td rowspan="2">18.85</td><td>P: An older man in a white shirt is playing a keyboard</td><td rowspan="2">23.24</td></tr><tr><td>H: A man plays a guitar outside</td><td>H: A man is playing the piano</td></tr><tr><td>P: A young boy in a yellow rash guard is walking on the shore carrying a surfboard</td><td rowspan="2">17.51</td><td>P: A woman in a white and light green jacket and another woman in a purple shirt</td><td rowspan="2">18.94</td></tr><tr><td>H: A boy is walking on the boardwalk</td><td>, both wearing hats , sit at a table watching a cooking fire</td></tr><tr><td>P: Someone wearing a blue shirt is riding a bike with a child ’ s seat on the front of it</td><td rowspan="2">15.52</td><td>H: A woman in a white and light green jacket</td><td rowspan="2">18.89</td></tr><tr><td>H: A person is riding a bike on the street</td><td>P: A man sits on a folding chair outside while listening to music on his iPod</td></tr><tr><td>P: A brunette woman does a wheelie on a white bicycle with purple tires</td><td rowspan="2">0.00</td><td>H: There is a man on a chair listening to music on an mp3 player</td><td rowspan="2">0.00</td></tr><tr><td>H: A woman rides her motorcycle to town</td><td>P: A married man is taking pictures while standing in a crowd of people</td></tr><tr><td>P: A baseball player hitting a home run</td><td rowspan="2">0.00</td><td>H: There are people in a crowd</td><td>0.00</td></tr><tr><td>H: The cat eats sheep</td><td>P: A man recreates a joust from mid - evil times</td><td>0.00</td></tr><tr><td>P: A child in a vest and hat is posing for a picture</td><td rowspan="2">0.00</td><td>H: A person created something</td><td>0.00</td></tr><tr><td>H: A child is eating his lunch</td><td>P: A boy is wearing a red towel standing on the beach</td><td>0.00</td></tr><tr><td></td><td></td><td>H: A person is at the beach</td><td></td></tr></table>
140
+
141
+ Table 3: Top-3 and Bottom-3 memorized training examples from the SNLI task. Note that many examples have a zero memorization score; we randomly sample 3 of them as the bottom examples.
142
+
143
+ Table 1 shows the average percentage of positive phrases in the top- $10\%$ of the memorized positive (or negative) training instances and the bottom- $10\%$ of the memorized positive (or negative) training instances. As a reference point, we also show the average percentage over all positive (or negative) training instances. We can see that the top- $10\%$ memorized instances are indeed atypical. Specifically, those negative sentences with high memorization scores have a high percentage of positive phrases on average $(35.80\%)$ , clearly higher than the average percentage of positive phrases of all negative instances $(23.24\%)$ . This makes the top-memorized negative instances very different from typical negative instances. On the other hand, the bottom- $10\%$ negative instances (i.e., those instances that are not memorized) have a clearly lower percentage of positive phrases $(14.92\%)$ , which is what we expect for typical negative instances. Similar observations can be made with the positive training instances. Overall, the results in Table 1 suggest that the top-memorized training instances in SST-2 are indeed atypical.
144
+
145
+ Next, we manually inspect the top-ranked and bottom-ranked training instances of SST-2 in Table 2. We can see that the top-ranked memorized instances tend to express their overall opinions in an indirect way. These sentences often contain a contrast between positive and negative opinions. We therefore believe that they are atypical for sentiment classification. On the other hand, the bottom-ranked instances, i.e., those with zero memorization scores, tend to directly express their opinions with strong opinion phrases, and we believe these represent common instances.
148
+
149
+ # SNLI
150
+
151
+ For the task of natural language inference, it is hard to come up with a heuristic metric like the one used for sentiment classification. We therefore focus on manual inspection of the top-ranked and bottom-ranked training instances. In Table 3 we show the top-3 and bottom-3 memorized training instances from SNLI. We can see from the table that in the top-ranked memorized non-entailment instances, the hypothesis tends to be much shorter than the premise and there tends to be no obvious contradiction. In contrast, the bottom-ranked non-entailment instances tend to be contradictions where there are obvious contradictory words/phrases in the premise and the hypothesis, such as "bicycle" vs. "motorcycle," "player" vs. "cat" and "posing for a picture" vs. "eating his lunch." We hypothesize that the top-ranked non-entailment instances are atypical because they do not have obvious signals of non-entailment such as the contradictory word pairs we see in the bottom-ranked non-entailment instances. For entailment cases, we find that the top-ranked instances often contain word pairs that are synonyms but are rare in the training data. For example, we find that the word pair "keyboard" and "piano" appears only two times in the training data, which implies that this instance is an atypical example. Similarly, we find that the word/phrase pair "iPod" and "mp3 player" appear only once in the training data. On the other hand, the bottom-ranked entailment instances tend to be those where the hypothesis contains less information than the premise, which may be a common type of entailment instances.
154
+
155
+ # 3.3 Marginal Utility of Memorized Instances
156
+
157
+ In the second set of experiments, we check whether memorizing those training instances with the highest memorization scores leads to better performance on the unseen test data. To do so, we compare the performance of the model on test data when top-ranked memorized training instances are removed during training versus the performance when the same number of randomly selected training instances are removed. If memorization is beneficial for the test data, then we would expect to see a larger performance drop when top-ranked memorized training instances are removed than when random training instances are removed. Therefore, the amount of performance drop represents the marginal effect of the memorized instances on the test accuracy. We show the test accuracy in Figure 1 when $X\%$ of the training instances are removed, where we set $X$ to a few different values. We re-train the model 5 times and show the average test accuracy as well as the standard deviation. We also show the lowest absolute memorization score of the top- $X\%$ of training instances in Figure 1. For reference, here we also use CIFAR-10 to verify that our self-influence estimation using the influence function works similarly to the influence estimator used by Feldman and Zhang (2020).
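The removal protocol above can be sketched as follows. This is a minimal illustration, not the paper's code: `removal_subsets` is a hypothetical helper that builds the two ablated training sets (the model would then be retrained from scratch on each and evaluated on the held-out test set):

```python
import random

def removal_subsets(train_set, mem_scores, frac, seed=0):
    """Build the two ablated training sets compared in the removal experiment:
    one with the top-frac memorized instances removed, and one with the same
    number of randomly chosen instances removed."""
    n = len(train_set)
    k = int(frac * n)
    # Rank training indices by memorization score, highest first.
    ranked = sorted(range(n), key=lambda i: mem_scores[i], reverse=True)
    top_idx = set(ranked[:k])
    rand_idx = set(random.Random(seed).sample(range(n), k))
    minus_top = [z for i, z in enumerate(train_set) if i not in top_idx]
    minus_rand = [z for i, z in enumerate(train_set) if i not in rand_idx]
    return minus_top, minus_rand

# Toy example: 10 instances whose memorization score equals their index.
minus_top, minus_rand = removal_subsets(list("abcdefghij"), list(range(10)), 0.3)
```

The gap between test accuracy after retraining on `minus_top` versus `minus_rand` (averaged over several seeds) is the marginal utility of the memorized instances.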
158
+
159
+ We can observe the following from Figure 1: (1) On CIFAR-10 (Figure 1(d)), we see that the test accuracy clearly drops more significantly when top-ranked memorized training instances instead of random training instances are removed. Because Feldman and Zhang (2020) reported the same observation, this suggests that our memorization score based on the influence function proposed by Koh and Liang (2017) works similarly to the memorization estimator used by Feldman and Zhang (2020). This verifies the reliability of our memorization scoring function. (2) On SST-2, Yahoo! Answers and SNLI, we can see that consistently when the same percentage of training instances are removed, removing top-ranked memorized instances has a clearly bigger impact on the test accuracy compared with removing random instances. For example, on SST-2, the marginal utility of the top- $30\%$ memorized training examples is about 1.44 percentage points (vs. 0.70 percentage points for a random subset of $30\%$ of training examples).
162
+
163
+ This verifies that on SST-2, Yahoo! Answers and SNLI, memorizing those training instances could help improve the performance on the test data.
164
+
165
+ # 3.4 Evaluating Memorization Attribution
166
+
167
+ In this section, we evaluate whether our memorization attribution method is faithful, i.e., whether it indeed picks up tokens that have higher self-influence.
168
+
169
+ Intuitively, if the memorization attribution method detects those memorized tokens in a training instance faithfully, then removing these tokens in that instance should result in a lower influence $\mathcal{I}$ of the perturbed instance on its original form (details to be given in Appendix A). We therefore define a metric called Reduction Rate as follows:
170
+
171
+ $$
172
+ \frac {1}{|\mathcal {Z}|} \sum_ {z \in \mathcal {Z}} \frac {\mathcal {I} (z, z) - \mathcal {I} (z ^ {\backslash \mathrm {attr}}, z)}{\mathcal {I} (z, z)}, \tag {6}
173
+ $$
174
+
175
+ where $\mathcal{Z}$ is the set of top memorized training instances and $z^{\backslash \mathrm{attr}}$ is the perturbed input where the top- $k\%$ memorized tokens are replaced by the baseline token [MASK]. We can see that this Reduction Rate measures how much self-influence has been reduced after the top-memorized tokens are replaced with [MASK].<sup>2</sup>
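The Reduction Rate of Eqn 6 can be sketched directly from its definition. Here `influence` and `mask_top_tokens` are hypothetical callables standing in for the influence-function estimator $\mathcal{I}$ and the attribution-based [MASK] replacement, respectively:

```python
def reduction_rate(top_instances, mask_top_tokens, influence):
    """Average relative drop in self-influence after masking attributed tokens.

    top_instances: the set Z of top-memorized training instances.
    mask_top_tokens(z): z with its top-k% attributed tokens replaced by [MASK]
                        (hypothetical perturbation function).
    influence(z1, z2): the influence I(z1, z2) of z1 on z2 (hypothetical
                       wrapper around the influence-function estimator).
    """
    total = 0.0
    for z in top_instances:
        self_inf = influence(z, z)
        masked_inf = influence(mask_top_tokens(z), z)
        total += (self_inf - masked_inf) / self_inf
    return total / len(top_instances)
```

A faithful attribution method should yield a much larger Reduction Rate than masking the same number of randomly chosen tokens.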
176
+
177
+ Figure 2 demonstrates the significant effect of removing the top-memorized tokens from the top-memorized training instances. One could ask whether this effect is solely due to the input perturbation. To answer this question, we include in the comparison the reduction rate of random attribution, i.e., we randomly remove the same number of tokens from the training instances. We can see that removing the tokens picked by our memorization attribution method results in a much larger Reduction Rate until almost $90\%$ of the tokens are removed. This result suggests that our memorization attribution method can indeed identify those tokens in a training instance that have high self-influence on that instance.
+
+ ![](images/34f0f5b0826d809ee32a4fd9507a23db719163080ed3019ac7227f6dcbd5d766.jpg)
+ (a) SST-2
+ ![](images/13828b6f7c654ce1708af2664a5397f98e364acbfcabf8aeeea5ebb190d680ac.jpg)
+ (b) Yahoo! Answers
+ ![](images/fa41c17f35ddb4efe4e66c75c88d4a41bc90aac78cf60405eb30948c0dfff38b.jpg)
+ (c) SNLI
+ ![](images/99dc34957b65b5cc6402103199cb888e5cf24067b5c2d6d3ebf48daae9739644.jpg)
+ (d) CIFAR-10
+ Figure 1: For each dataset, the top figure shows the test accuracy after we remove the top- $X\%$ memorized training instances or the same number of randomly selected training instances. The test accuracy is averaged over 5 runs of retraining with different random seeds, and the standard deviation is shown with error bars. The bottom figure shows the lowest memorization score among the top- $X\%$ of the memorized training instances.
+
+ ![](images/f20b7b03587907c8dca2ccc167b5ceb8f48796a876d6a07a236b2c75ae18f37e.jpg)
+ (a) SST-2
+ ![](images/7983775477a598ed7cd38a5a26ebb30e066927767aa29e5233599e2f04c105e3.jpg)
+ (b) Yahoo! Answers
+ ![](images/d921ff3fcf86aa7c78e231ae0016e47cdb7d37759b9d93f80e1f121e63b6ea9a.jpg)
+ (c) CIFAR-10
+ Figure 2: For each dataset, the figure shows the reduction rate of removing the top- $k\%$ memorized tokens and of removing the same number of randomly selected tokens.
203
+
204
+ # 3.5 Examples of Memorization Attribution
205
+
206
+ To better understand why certain training instances are memorized, we apply our memorization attribution method to SST-2, Yahoo! Answers and CIFAR-10. We do not discuss our memorization attribution method applied to the NLI task because we find that its results are not easy to interpret. In some other studies (e.g., Han et al. (2020)), different behaviours have also been reported for NLI compared with tasks relying on shallow features, such as sentiment classification and topic-based text classification.
207
+
208
+ We find that on SST-2, Yahoo! Answers and CIFAR-10, in most cases our memorization attributions are easy for humans to interpret. In particular, without any cherry-picking, we select those instances with the highest memorization scores to present. We find, interestingly, that for both SST-2 and CIFAR-10, the trained deep learning model tends to memorize those parts of an instance that are negatively correlated with the class label of that instance, as shown in Table 4 and Figure 3. On SST-2, for example, the model needs to memorize positive phrases such as "tremendous promise" and "intriguing and alluring" that show up in an overall negative instance. On CIFAR-10, we observe that for images that are easily mis-classified, the model memorizes those pixels that are associated with the wrong class label, or in other words, pixels that are negatively correlated with the correct class label. For example, the "cat" image shown in Figure 3 looks like a frog. The model memorizes those pixels (shown in red) around the tummy of the cat
211
+
212
+ <table><tr><td>Content</td><td>Label</td></tr><tr><td>starts out with tremendous promise introducing an intriguing and alluring premise only to fall prey to a boatload of screenwriting cliches that sink it faster than a leaky freighter</td><td>Neg</td></tr><tr><td>mr wolter and ms seldhal give strong and convincing performances but neither reaches into the deepest recesses of the character to unearth the quaking essence of passion grief and fear</td><td>Neg</td></tr><tr><td>this is a monumental achievement in practically every facet of inept filmmaking joyless idiotic annoying heavy handed visually atrocious and often downright creepy</td><td>Neg</td></tr><tr><td>the director mark pellington does a terrific job conjuring up a sinister menacing atmosphere though unfortunately all the story gives us is flashing red lights a rattling noise and a bump on the head</td><td>Pos</td></tr><tr><td>this is a fascinating film because there is no clear cut hero and no all out villain</td><td>Pos</td></tr><tr><td>the film is reasonably entertaining though it begins to drag two thirds through when the melodramatic aspects start to overtake the comedy</td><td>Pos</td></tr></table>
213
+
214
+ Table 4: The top-3 memorized training instances for each class from SST-2. Highlighted words are those with high attribution values (red for positive memorization attribution and blue for negative memorization attribution) as computed by our memorization attribution method.
215
+
216
+ ![](images/4fd2e0550191af583054d7397dff41676a636e11a7862bee735827260374075e.jpg)
217
+ Figure 3: The top-1 memorized training instance for each class from CIFAR-10. Highlighted patches are those having high attribution values (red for positive memorization attribution and blue for negative memorization attribution) as computed by our memorization attribution method.
218
+
219
+ because those pixels make the image look like a frog image. Similarly, in the "dog" image, which looks like a horse, the memorized pixels (shown in red) are around the body of the dog, and these pixels make the image look like a horse image. On the other hand, the dog's head in this image, which is a typical dog's head, has negative memorization attribution scores, which means it does not need to be memorized.
220
+
221
+ Given the interesting results above, we believe that model developers can gain insights about what a model finds hard to learn from other training instances (and thus has to memorize), and model developers can subsequently take actions like upweighting memorized instances or collecting similar data to improve the performance on certain subpopulations if desired.
222
+
223
+ # 4 Related Work
224
+
225
+ The long-tail theory: The long-tail theory proposed by Feldman (2020) is relatively new and has not been systematically validated in NLP. Our work is the first to empirically check the validity of this theory on NLP tasks. Raunak et al. (2021) used the long-tail theory to explain hallucinations under source perturbations in Neural Machine Translation. They assume the theory holds in NMT rather than validating the theory itself as we do. Kong and Chaudhuri (2021) investigated the memorization phenomenon for Variational Auto-Encoders, also via self-influence.
228
+
229
+ Memorization vs. generalization: It is well-known that deep learning models possess strong capabilities to memorize training instances (Zhang et al., 2017; Arpit et al., 2017). In the context of NLP, Li and Wisniewski (2021) showed that BERT is more likely to memorize shallow patterns from the training data rather than uncover abstract properties. Some recent work has tried to combine interpolation methods with deep learning methods to generalize via memorization (Khandelwal et al., 2020, 2021). However, previous work using interpolation methods did not explain why memorization is necessary in the first place. Our work follows the long-tail theory that views memorization as beneficial to generalization when the data follows a certain type of long-tail distribution. There has also been some work studying "forgetting," which is related to memorization (Toneva et al., 2018; Yaghoobzadeh et al., 2021). However, in this paper we do not study this "forgetting" phenomenon.
230
+
231
+ Influence functions: Influence functions have been studied for large-scale deep neural networks by Koh and Liang (2017) and have gained much attention in recent years. In the context of NLP, Han et al. (2020) explored the usage of influence functions to explain model predictions and unveil data artifacts. Meng et al. (2020) proposed a combination of gradient-based methods and influence functions to examine training history and test stimuli simultaneously. Our work, however, adopts influence functions as a tool to measure memorization.
232
+
233
+ # 5 Conclusions
234
+
235
+ In this paper, we empirically examine a recently proposed long-tail theory in the context of NLP. We use sentiment classification, natural language inference and text classification to check the validity of the long-tail theory in NLP. We also propose a memorization attribution method to reveal which parts of an instance are being memorized. There are two major takeaway messages: (1) Our experiments empirically validated the long-tail theory on the three NLP datasets, showing that memorization is important for generalization; this offers an alternative view and helps NLP researchers see the value of memorization. (2) Our attribution method can be a tool to help model developers better understand the memorization behaviours of a model and possibly further improve the model.
236
+
237
+ # 6 Ethical Considerations
238
+
239
+ Our work empirically validated the long-tail theory in the context of NLP, offering an alternative view to the relationship between memorization and generalization. This will help NLP researchers see the value of memorization. However, previous work showed that neural networks can be vulnerable to privacy attacks such as membership inference attacks because these models are able to memorize training instances, and sometimes sensitive private information may be contained in the training instances (Shokri et al., 2017; Zhang et al., 2017; Feldman and Zhang, 2020). Thus, there is a tradeoff between the accuracy of a model and the privacy of the data. In other words, although memorization can help reduce generalization error (as we showed in this paper), it also increases the vulnerability of the system to privacy attacks, which raises ethical concerns.
240
+
241
+ The computation of influence functions used in our work is massive because of the cost of inverting the Hessian matrices. To reduce the computation costs, i.e., power consumption, we may adopt other influence estimators like TracIn (Pruthi et al., 2020), which is Hessian-free and thus faster.
244
+
245
+ # Acknowledgment
246
+
247
+ This research is supported by the Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 1 grant.
248
+
249
+ # References
250
+
251
+ Devansh Arpit, Stanisław Jastrzebski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, and Simon Lacoste-Julien. 2017. A closer look at memorization in deep networks. In International Conference on Machine Learning.
252
+ Satrajit Chatterjee. 2018. Learning and memorization. In International Conference on Machine Learning.
253
+ Aparna Elangovan, Jiayuan He, and Karin Verspoor. 2021. Memorization vs. generalization: Quantifying data leakage in NLP performance evaluation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1325-1335, Online. Association for Computational Linguistics.
254
+ Vitaly Feldman. 2020. Does learning require memorization? a short tale about a long tail. In Annual ACM SIGACT Symposium on Theory of Computing.
255
+ Vitaly Feldman and Chiyuan Zhang. 2020. What neural networks memorize and why: Discovering the long tail via influence estimation. In Advances in Neural Information Processing Systems.
256
Han Guo, Nazneen Rajani, Peter Hase, Mohit Bansal, and Caiming Xiong. 2021. FastIF: Scalable influence functions for efficient model interpretation and debugging. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10333-10350, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
257
+ Xiaochuang Han, Byron C. Wallace, and Yulia Tsvetkov. 2020. Explaining black box predictions and unveiling data artifacts through influence functions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5553-5563, Online. Association for Computational Linguistics.
258
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Conference on Computer Vision and Pattern Recognition.
259
Harold V Henderson and Shayle R Searle. 1981. On deriving the inverse of a sum of matrices. *SIAM Review*.
260
+
261
+ Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2021. Nearest neighbor machine translation. International Conference on Learning Representations.
262
+ Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through memorization: Nearest neighbor language models. International Conference on Learning Representations.
263
+ Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In International Conference on Machine Learning.
264
+ Zhifeng Kong and Kamalika Chaudhuri. 2021. Understanding instance-based interpretability of variational auto-encoders. In Advances in Neural Information Processing Systems.
265
+ Alex Krizhevsky, Geoffrey Hinton, et al. 2009. Learning multiple layers of features from tiny images. CiteSeer.
266
+ Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2021. Question and answer test-train overlap in open-domain question answering datasets. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1000-1008, Online. Association for Computational Linguistics.
267
+ Bingzhi Li and Guillaume Wisniewski. 2021. Are neural networks extracting linguistic properties or memorizing training data? an observation with a multilingual probe for predicting tense. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3080-3089, Online. Association for Computational Linguistics.
268
+ Bill MacCartney and Christopher D. Manning. 2008. Modeling semantic containment and exclusion in natural language inference. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 521-528, Manchester, UK. Coling 2008 Organizing Committee.
269
+ Yuxian Meng, Chun Fan, Zijun Sun, Eduard Hovy, Fei Wu, and Jiwei Li. 2020. Pair the dots: Jointly examining training history and test stimuli for model interpretability. arXiv preprint arXiv:2010.06943.
270
+ Andrea Montanari and Yiqiao Zhong. 2020. The interpolation phase transition in neural networks: Memorization and generalization under lazy training. arXiv preprint arXiv:2007.12826.
271
Barak A Pearlmutter. 1994. Fast exact multiplication by the Hessian. Neural Computation.
272
+ Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. 2020. Estimating training data influence by tracing gradient descent. In Advances in Neural Information Processing Systems.
273
+
274
+ Vikas Raunak, Arul Menezes, and Marcin Junczys-Dowmunt. 2021. The curious case of hallucinations in neural machine translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1172-1183, Online. Association for Computational Linguistics.
275
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In Workshop on Energy Efficient Machine Learning and Cognitive Computing, Advances in Neural Information Processing Systems.
276
+ Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. 2017. Membership inference attacks against machine learning models. In 2017 IEEE symposium on security and privacy (SP).
277
+ Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.
278
+ Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In International Conference on Machine Learning.
279
+ Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J Gordon. 2018. An empirical study of example forgetting during deep neural network learning. In International Conference on Learning Representations.
280
+ Yadollah Yaghoobzadeh, Soroush Mehri, Remi Tachet des Combes, T. J. Hazen, and Alessandro Sordoni. 2021. Increasing robustness to spurious correlations using forgettable examples. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3319-3332, Online. Association for Computational Linguistics.
281
+ Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2017. Understanding deep learning requires rethinking generalization. In International Conference on Learning Representations.
282
+ Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems.
283
+
284
+ # A Derivation of the Memorization Scores
285
+
286
+ For clarity, here we repeat the derivation of Influence Functions by Koh and Liang (2017) and provide self-influence functions as its special case. Note that self-influence functions are used as our memorization scores.
287
+
288
+ Given training points $z_{1},\ldots ,z_{n}$ , where $z_{i} = (x_{i},y_{i})$ , $x_{i}$ is the observation and $y_{i}$ is the label, we train a predictor via minimizing the empirical risk $R(\theta)\stackrel {\mathrm{def}}{=}\frac{1}{n}\sum_{i = 1}^{n}L(z_{i},\theta)$ to pick parameters $\theta \in \Theta$ . I.e., the optimal parameters are obtained by $\hat{\theta} = \arg \min_{\theta \in \Theta}R(\theta)$ . We assume that $R$ is twice-differentiable and strongly convex.
289
+
290
+ i.e.,
291
+
292
+ $$
293
+ H _ {\hat {\theta}} \stackrel {\mathrm {d e f}} {=} \nabla^ {2} R (\hat {\theta}) = \frac {1}{n} \sum_ {i = 1} ^ {n} \nabla_ {\theta} ^ {2} L \left(z _ {i}, \hat {\theta}\right) \tag {7}
294
+ $$
295
+
296
+ exists and is positive definite. This guarantees the existence of $H_{\hat{\theta}}^{-1}$ , which we will use in the following derivation.
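To make these objects concrete, here is a minimal numerical sketch (a toy quadratic loss of our choosing, not the paper's model) showing that the empirical-risk Hessian of Eqn 7 is positive definite, and that a product of the form $H_{\hat{\theta}}^{-1} \nabla L$ can be obtained via a linear solve rather than an explicit inverse:

```python
import numpy as np

# Toy strongly convex setup: L(z_i, theta) = 0.5 * (a_i . theta - b_i)^2,
# so the empirical-risk Hessian H = (1/n) sum_i a_i a_i^T is positive
# definite whenever the stacked rows a_i have full column rank.
rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

H = (A.T @ A) / n                              # Eqn 7 for this quadratic loss
theta_hat = np.linalg.solve(A.T @ A, A.T @ b)  # minimizer of R

# Gradient of one training point's loss at theta_hat.
i = 0
grad_i = A[i] * (A[i] @ theta_hat - b[i])

# H^{-1} grad_i via a linear solve; for large models one instead uses
# Hessian-vector products (Pearlmutter, 1994) and iterative approximations.
ihvp = np.linalg.solve(H, grad_i)
```

The variable `ihvp` is exactly the inverse-Hessian-vector product that appears (scaled by $\epsilon$) in the parameter-change expressions derived below.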
297
+
298
+ The high-level idea of Influence Functions is to approximate leave-one-out retraining, which corresponds to removing a training point, by computing the parameter change if $z$ were up-weighted or down-weighted by some small amount $\epsilon$.
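As a preview of where this derivation leads (a sketch following Koh and Liang (2017); the sign depends on whether one adopts the up-weighting or the removal convention), the parameter-change result combines with the chain rule to give the influence of $z$ on the loss at a point $z'$:

$$
\mathcal{I}(z, z') = \left. \frac{d L(z', \hat{\theta}_{\epsilon, z})}{d \epsilon} \right|_{\epsilon = 0} = -\nabla_{\theta} L(z', \hat{\theta})^{\top} H_{\hat{\theta}}^{-1} \nabla_{\theta} L(z, \hat{\theta}),
$$

so the memorization score, the self-influence at $z' = z$, is (up to this sign convention) the quadratic form $\nabla_{\theta} L(z, \hat{\theta})^{\top} H_{\hat{\theta}}^{-1} \nabla_{\theta} L(z, \hat{\theta})$, which is non-negative because $H_{\hat{\theta}}$ is positive definite.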
299
+
300
+ If we up-weight the training point $z$ , the perturbed parameters $\hat{\theta}_{\epsilon, z}$ can be written as
301
+
302
+ $$
303
+ \hat {\theta} _ {\epsilon , z} = \arg \min _ {\theta \in \Theta} (R (\theta) + \epsilon L (z, \theta)). \tag {8}
304
+ $$
305
+
306
+ Consider the parameter change $\Delta_{\epsilon} = \hat{\theta}_{\epsilon ,z} - \hat{\theta}$ , and note that, as $\hat{\theta}$ does not depend on $\epsilon$ , the quantity we want to compute can be written in terms of it:
307
+
308
+ $$
309
+ \frac {d \hat {\theta} _ {\epsilon , z}}{d \epsilon} = \frac {d \Delta_ {\epsilon}}{d \epsilon}. \tag {9}
310
+ $$
311
+
312
+ Since $\hat{\theta}_{\epsilon, z}$ is a minimizer of Eqn 8, let us examine its first-order optimality condition:
313
+
314
+ $$
315
+ 0 = \nabla R (\hat {\theta} _ {\epsilon , z}) + \epsilon \nabla L (z, \hat {\theta} _ {\epsilon , z}). \tag {10}
316
+ $$
317
+
318
+ Let us define $f(\theta)$ to be $\nabla R(\theta) + \epsilon \nabla L(z,\theta)$ .
319
+
320
+ Next, since $\hat{\theta}_{\epsilon ,z}\rightarrow \hat{\theta}$ as $\epsilon \to 0$, we perform a Taylor expansion on $f(\hat{\theta}_{\epsilon ,z})$. Given Taylor's formula $f(\theta +\Delta \theta) = f(\theta) + f^{\prime}(\theta)\Delta \theta +o(\Delta \theta)$, we have:
321
+
322
+ $$
323
+ \begin{array}{l} 0 = f (\hat {\theta} _ {\epsilon , z}) \\ = f (\hat {\theta} + \Delta_ {\epsilon}) \\ \approx f (\hat {\theta}) + f ^ {\prime} (\hat {\theta}) \Delta_ {\epsilon} \tag {11} \\ = \left[ \nabla R (\hat {\theta}) + \epsilon \nabla L (z, \hat {\theta}) \right] \\ + \left[ \nabla^ {2} R (\hat {\theta}) + \epsilon \nabla^ {2} L (z, \hat {\theta}) \right] \Delta_ {\epsilon}, \\ \end{array}
324
+ $$
325
+
326
+ where we have dropped the term $o(\| \Delta_{\epsilon}\|)$ .
327
+
328
+ Solving for $\Delta_{\epsilon}$ , we get $\Delta_{\epsilon} \approx -[\nabla^{2}R(\hat{\theta}) + \epsilon \nabla^{2}L(z,\hat{\theta})]^{-1}[\nabla R(\hat{\theta}) + \epsilon \nabla L(z,\hat{\theta})]$ .
329
+
330
+ Since $\hat{\theta}$ minimizes $R$ , we have $\nabla R(\hat{\theta}) = 0$ . Then we have:
331
+
332
+ $$
333
+ \Delta_ {\epsilon} \approx - \left[ \nabla^ {2} R (\hat {\theta}) + \epsilon \nabla^ {2} L (z, \hat {\theta}) \right] ^ {- 1} \epsilon \nabla L (z, \hat {\theta}). \tag {12}
334
+ $$
335
+
336
+ Referring to (Henderson and Searle, 1981), we have:
337
+
338
+ $$
339
+ \begin{array}{l} (A + B) ^ {- 1} = (I + A ^ {- 1} B) ^ {- 1} A ^ {- 1} \\ = A ^ {- 1} - A ^ {- 1} B \left(I + A ^ {- 1} B\right) ^ {- 1} A ^ {- 1} \\ = A ^ {- 1} - A ^ {- 1} B (A + B) ^ {- 1}, \tag {13} \\ \end{array}
340
+ $$
341
+
342
+ which only requires $A$ and $A + B$ to be nonsingular matrices. As mentioned above, the matrices that we are considering are positive definite. The determinant of a positive definite matrix is always positive, so a positive definite matrix is always nonsingular.
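The identity in Eqn 13 is easy to check numerically. The sketch below is our own illustration (not from the paper): it builds random symmetric positive definite matrices, so that $A$ and $A + B$ are guaranteed nonsingular, and verifies the final form of the identity.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    # M @ M.T is positive semidefinite; adding n*I makes it positive definite.
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A = random_spd(4)
B = random_spd(4)

lhs = np.linalg.inv(A + B)
A_inv = np.linalg.inv(A)
rhs = A_inv - A_inv @ B @ np.linalg.inv(A + B)  # Eqn 13, final line

assert np.allclose(lhs, rhs)
```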
343
+
344
+ Substituting $A = \nabla^2 R(\hat{\theta})$ and $B = \epsilon \nabla^{2}L(z,\hat{\theta})$ and dropping $o(\epsilon)$ terms, we have
345
+
346
+ $$
347
+ \Delta_ {\epsilon} \approx - \nabla^ {2} R (\hat {\theta}) ^ {- 1} \nabla L (z, \hat {\theta}) \epsilon . \tag {14}
348
+ $$
349
+
350
+ Combining with Eqn 7 and Eqn 9, we conclude that:
351
+
352
+ $$
353
+ \left. \frac {d \hat {\theta} _ {\epsilon , z}}{d \epsilon} \right| _ {\epsilon = 0} = - H _ {\hat {\theta}} ^ {- 1} \nabla L (z, \hat {\theta}). \tag {15}
354
+ $$
355
+
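Eqn 15 can be sanity-checked on a small strongly convex model. The sketch below is our own toy setup, not the paper's experiment: $\ell_2$-regularized logistic regression on invented data (the regularizer makes $R$ strongly convex), where we retrain with $z$ up-weighted by a small $\epsilon$ and compare the finite-difference parameter change against $-H_{\hat{\theta}}^{-1}\nabla L(z,\hat{\theta})$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 40, 3, 0.1
X = rng.standard_normal((n, d))
y = (rng.random(n) < 0.5).astype(float)

def sigma(t):
    return 1.0 / (1.0 + np.exp(-t))

def fit(eps):
    """Minimize R(theta) + eps * L(z_0, theta) (Eqn 8) with Newton's method."""
    theta = np.zeros(d)
    for _ in range(50):
        p = sigma(X @ theta)
        w = p * (1.0 - p)
        g = X.T @ (p - y) / n + lam * theta + eps * X[0] * (p[0] - y[0])
        H = (X * w[:, None]).T @ X / n + lam * np.eye(d) \
            + eps * np.outer(X[0], X[0]) * w[0]
        theta = theta - np.linalg.solve(H, g)
    return theta

theta_hat = fit(0.0)
p = sigma(X @ theta_hat)
H_hat = (X * (p * (1.0 - p))[:, None]).T @ X / n + lam * np.eye(d)
grad_L = X[0] * (p[0] - y[0])              # gradient of L(z, theta_hat), z = z_0

pred = -np.linalg.solve(H_hat, grad_L)     # Eqn 15: d theta_hat_{eps,z} / d eps at 0

eps = 1e-4
fd = (fit(eps) - theta_hat) / eps          # finite-difference derivative
assert np.linalg.norm(fd - pred) < 1e-2 * np.linalg.norm(pred)
```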
356
+ To keep consistency with our memorization attribution method introduced later, we instead down-weight the training point $z$ . The perturbed parameters $\hat{\theta}_{\epsilon, -z}$ can be written as
357
+
358
+ $$
359
+ \hat {\theta} _ {\epsilon , - z} = \arg \min _ {\theta \in \Theta} (R (\theta) - \epsilon L (z, \theta)). \tag {16}
360
+ $$
361
+
362
+ It is easy to see that
363
+
364
+ $$
365
+ \left. \frac {d \hat {\theta} _ {\epsilon , - z}}{d \epsilon} \right| _ {\epsilon = 0} = H _ {\hat {\theta}} ^ {- 1} \nabla L (z, \hat {\theta}). \tag {17}
366
+ $$
367
+
368
+ Next, we apply the chain rule to measure how down-weighting $z$ changes functions of $\hat{\theta}$ .
369
+
370
+ $$
371
+ \begin{array}{l} \mathcal{I}(z, z_{\mathrm{test}}) \stackrel{\mathrm{def}}{=} \left. \frac{d F(y_{\mathrm{test}}, x_{\mathrm{test}}; \hat{\theta}_{\epsilon, -z})}{d \epsilon} \right|_{\epsilon = 0} \\ = \left. \nabla_{\theta} F(y_{\mathrm{test}}, x_{\mathrm{test}}; \hat{\theta})^{\top} \frac{d \hat{\theta}_{\epsilon, -z}}{d \epsilon} \right|_{\epsilon = 0} \\ = \nabla_{\theta} F\left(y_{\mathrm{test}}, x_{\mathrm{test}}; \hat{\theta}\right)^{\top} H_{\hat{\theta}}^{-1} \nabla_{\theta} L(z, \hat{\theta}), \tag{18} \\ \end{array}
372
+ $$
373
+
374
+ where $F$ is usually the loss function.
375
+
376
+ While influence functions are generally used to measure the influence of a training instance on a test instance, if we use them to measure the influence of a training instance on itself, i.e., its self-influence, then this self-influence corresponds to the general notion of memorization defined by Feldman (2020); Feldman and Zhang (2020). Based on this notion, we set $F$ to the negative estimated conditional probability $-P(y|x;\theta)$ and define the memorization score for a training instance $z$ as follows:
377
+
378
+ $$
379
+ \begin{array}{l} \mathcal{M}_{\mathrm{remove}}(z) \stackrel{\mathrm{def}}{=} \left. - \frac{d P(y | x; \hat{\theta}_{\epsilon, -z})}{d \epsilon} \right|_{\epsilon = 0} \\ = \left. - \nabla_{\theta} P(y | x; \hat{\theta})^{\top} \frac{d \hat{\theta}_{\epsilon, -z}}{d \epsilon} \right|_{\epsilon = 0} \\ = - \nabla_{\theta} P(y | x; \hat{\theta})^{\top} H_{\hat{\theta}}^{-1} \nabla_{\theta} L(z, \hat{\theta}). \tag{19} \\ \end{array}
380
+ $$
381
+
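As an illustrative sanity check of Eqn 19 (our own toy setup, not the paper's: $\ell_2$-regularized logistic regression on invented data), we can compare the closed-form memorization score against actually down-weighting $z$ by a small $\epsilon$ and retraining, per Eqn 16.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 30, 2, 0.1
X = rng.standard_normal((n, d))
y = (rng.random(n) < 0.5).astype(float)

def sigma(t):
    return 1.0 / (1.0 + np.exp(-t))

def fit(eps, i=0):
    """Minimize R(theta) - eps * L(z_i, theta) (Eqn 16) with Newton's method."""
    theta = np.zeros(d)
    for _ in range(50):
        p = sigma(X @ theta)
        w = p * (1.0 - p)
        g = X.T @ (p - y) / n + lam * theta - eps * X[i] * (p[i] - y[i])
        H = (X * w[:, None]).T @ X / n + lam * np.eye(d) \
            - eps * np.outer(X[i], X[i]) * w[i]
        theta = theta - np.linalg.solve(H, g)
    return theta

i = 0
theta_hat = fit(0.0)
p = sigma(X @ theta_hat)
H_hat = (X * (p * (1.0 - p))[:, None]).T @ X / n + lam * np.eye(d)
grad_L = X[i] * (p[i] - y[i])                             # grad of L(z, theta_hat)
grad_P = (2.0 * y[i] - 1.0) * p[i] * (1.0 - p[i]) * X[i]  # grad of P(y|x; theta_hat)

mem = -grad_P @ np.linalg.solve(H_hat, grad_L)            # Eqn 19

# Compare against actually down-weighting z by a small eps and retraining.
eps = 1e-4
p_eps = sigma(X[i] @ fit(eps))
prob = lambda q: q if y[i] == 1.0 else 1.0 - q
fd = -(prob(p_eps) - prob(p[i])) / eps
assert abs(fd - mem) < 1e-2 * abs(mem)
```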
382
+ # B Derivation of Memorization Attribution
383
+
384
+ In order to better understand why an instance is memorized, we propose a fine-grained notion of memorization at "feature" level instead of instance level, i.e., to attribute the memorization score of an instance to its individual features.
385
+
386
+ To conduct attribution, a natural requirement is to introduce a baseline. Thus, we first consider a variant of Influence Functions that approximates the effect of replacing a training point $z$ with a baseline training point $z'$ , similar to the perturbation-based influence of Koh and Liang (2017).
387
+
388
+ The perturbed parameters $\hat{\theta}_{\epsilon, z', -z}$ can be written as:
389
+
390
+ $$
391
+ \hat {\theta} _ {\epsilon , z ^ {\prime}, - z} = \arg \min _ {\theta \in \Theta} \left(R (\theta) + \epsilon L \left(z ^ {\prime}, \theta\right) - \epsilon L (z, \theta)\right). \tag {20}
392
+ $$
393
+
394
+ Similar to the derivation shown in the previous section, we can derive the following definition of a memorization score based on such a perturbation:
395
+
396
+ $$
397
+ \begin{array}{l} \mathcal{M}_{\mathrm{replace}}(z) \stackrel{\mathrm{def}}{=} \left. - \frac{d P(y | x; \hat{\theta}_{\epsilon, z^{\prime}, -z})}{d \epsilon} \right|_{\epsilon = 0} \\ = \left. - \nabla_{\theta} P(y | x; \hat{\theta})^{\top} \frac{d \hat{\theta}_{\epsilon, z^{\prime}, -z}}{d \epsilon} \right|_{\epsilon = 0} \\ = - s^{\top} \left(\nabla_{\theta} L(z, \hat{\theta}) - \nabla_{\theta} L\left(z^{\prime}, \hat{\theta}\right)\right), \tag{21} \\ \end{array}
398
+ $$
399
+
400
+ where $s = H_{\hat{\theta}}^{-1}\nabla_{\theta}P(y|x;\hat{\theta})$ .
401
+
402
+ We now show that $\mathcal{M}_{\mathrm{replace}}(z)$ can be decomposed into a linear combination of scores, each corresponding to a single token in the input sequence. For NLP applications, the input $x$ usually corresponds to an embedding matrix $\mathbf{X} \in \mathbb{R}^{N \times d}$ (where $N$ is the number of tokens and $d$ is the embedding dimension). Let us denote $\nabla_{\theta} L((\cdot, y), \hat{\theta})$ as $g(\cdot)$ and consider the path integral along a straight line between $\mathbf{X}$ and $\mathbf{X}'$ , yielding
403
+
404
+ $$
405
+ g (\mathbf {X}) - g \left(\mathbf {X} ^ {\prime}\right) = H ^ {\prime} \left(\mathbf {X} - \mathbf {X} ^ {\prime}\right), \tag {22}
406
+ $$
407
+
408
+ where $H^{\prime} = \int_{\alpha = 0}^{1}\frac{dg(\mathbf{X}^{\prime} + \alpha(\mathbf{X} - \mathbf{X}^{\prime}))}{d\mathbf{X}} d\alpha$ and can be efficiently approximated by a Riemann sum, as suggested by Sundararajan et al. (2017).
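The Riemann-sum idea behind this path integral can be illustrated on a scalar saturating function, a simplified, hypothetical stand-in for the vector-valued $g$ (all names below are ours). By the fundamental theorem of calculus, averaging the gradient at midpoints along the straight line from the baseline to the input recovers the total change of the function:

```python
import numpy as np

rng = np.random.default_rng(2)

# A scalar function with saturating (sigmoid-like) behaviour, and its gradient.
def f(x):
    return np.tanh(x).sum()

def grad(x):
    return 1.0 - np.tanh(x) ** 2

x, x_base = rng.standard_normal(5), np.zeros(5)

# Midpoint-rule Riemann sum of the path integral along the straight line
# from x_base to x, with m steps.
m = 200
alphas = (np.arange(m) + 0.5) / m
ig = sum(grad(x_base + a * (x - x_base)) for a in alphas) / m * (x - x_base)

# Completeness: the per-coordinate attributions sum to f(x) - f(x_base).
assert abs(ig.sum() - (f(x) - f(x_base))) < 1e-3
```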
409
+
410
+ The reason for using a path integral rather than the gradient at the input $\mathbf{X}$ is that a function's gradient may saturate around the input, and integrating along a path can alleviate this issue. As for choosing a straight line between the input and the baseline: it is the simplest path, and it allows Integrated Gradients to satisfy the symmetry-preserving property. For more details, please refer to the original paper on IG (Sundararajan et al., 2017).
411
+
412
+ Substituting Eqn (22) into Eqn (21), we get
413
+
414
+ $$
415
+ \begin{array}{l} \mathcal {M} _ {\text {r e p l a c e}} (z) = - s ^ {\top} \left(g (\mathbf {X}) - g \left(\mathbf {X} ^ {\prime}\right)\right) \\ = - s ^ {\top} H ^ {\prime} \left(\mathbf {X} - \mathbf {X} ^ {\prime}\right) \\ = - r ^ {\top} \left(\mathbf {X} - \mathbf {X} ^ {\prime}\right) \tag {23} \\ = - \sum_ {t = 1} ^ {N} \sum_ {l = 1} ^ {d} r _ {t, l} (\mathbf {X} _ {t, l} - \mathbf {X} _ {t, l} ^ {\prime}), \\ \end{array}
416
+ $$
417
+
418
+ where $r = H^{\prime}s$ , which can be efficiently computed via the Hessian-vector product (Pearlmutter, 1994).
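Pearlmutter's trick computes $Hv$ exactly with automatic differentiation, without ever forming $H$. As a rough illustration of the same quantity (our own sketch, not the paper's implementation), a Hessian-vector product can also be approximated by central finite differences of the gradient, checked here on a quadratic whose Hessian is known exactly:

```python
import numpy as np

def hvp(grad_f, theta, v, h=1e-5):
    """Finite-difference Hessian-vector product:
    H v ~= (grad_f(theta + h v) - grad_f(theta - h v)) / (2 h)."""
    return (grad_f(theta + h * v) - grad_f(theta - h * v)) / (2.0 * h)

# Check on f(theta) = 0.5 * theta^T A theta, whose Hessian is exactly A.
rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = M @ M.T + np.eye(4)
grad_f = lambda th: A @ th

theta, v = rng.standard_normal(4), rng.standard_normal(4)
assert np.allclose(hvp(grad_f, theta, v), A @ v, atol=1e-6)
```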
419
+
420
+ # C The Effect of Different Checkpoints
421
+
422
+ Our self-influence-based memorization score depends on the model used to compute the influence function. A model trained with different random seeds will have different self-influence values, so there is inherently some stochasticity in the measurement of influence or self-influence.
423
+
424
+ To address this issue, on SST-2, we train the model using different random seeds to obtain three checkpoints and compute the corresponding memorization scores. We found that the instance rankings produced by these different checkpoints are highly correlated, based on Spearman's rank correlation coefficient, as shown in Table 5. Thus, we only consider one checkpoint when computing the memorization scores.
425
+
426
+
427
+
428
+ <table><tr><td></td><td>ckpt. a</td><td>ckpt. b</td><td>ckpt. c</td></tr><tr><td>ckpt. a</td><td>1.00</td><td>0.99</td><td>0.98</td></tr><tr><td>ckpt. b</td><td>0.99</td><td>1.00</td><td>0.99</td></tr><tr><td>ckpt. c</td><td>0.98</td><td>0.99</td><td>1.00</td></tr></table>
429
+
430
+ Table 5: Spearman's rank correlation coefficients between different rankings of the training instances produced by different checkpoints of the trained model on SST-2.
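The checkpoint-agreement check above boils down to Spearman's rank correlation between score vectors. A minimal sketch (with invented scores standing in for real memorization scores, and a tie-free simplification): Spearman's coefficient is the Pearson correlation of the ranks.

```python
import numpy as np

def spearman(a, b):
    """Spearman's rank correlation as the Pearson correlation of ranks
    (assumes no ties, which holds for continuous scores)."""
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(4)
scores_seed1 = rng.standard_normal(1000)                        # hypothetical run 1
scores_seed2 = scores_seed1 + 0.1 * rng.standard_normal(1000)   # noisy rerun

rho = spearman(scores_seed1, scores_seed2)
assert rho > 0.95  # near-identical rankings despite the noise
assert np.isclose(spearman(scores_seed1, scores_seed1), 1.0)
```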
431
+
432
+ # D The Distribution of Positive Phrase Fraction
433
+
434
+ For the task of sentiment classification, i.e., the experiments on SST-2, we hypothesize that a typical positive sentence should have a relatively high percentage of positive phrases and a typical negative sentence should have a relatively low percentage of positive phrases. Note that here we consider phrase-level sentiment instead of word-level sentiment because we want to take into account the negation phenomena such as the phrase "not bad" expressing a positive sentiment although the word "bad" contains a negative sentiment. To support our hypothesis, we conduct the following quantitative experiment.
435
+
436
+ Given the phrase-level sentiment annotations provided by the SST-2 dataset (Socher et al., 2013), for every instance $z$ , we count the number of positive phrases and negative phrases it contains. Then, we turn the absolute counts into a relative fraction:
437
+
438
+ $$
439
+ \operatorname{frac}(z)_{c} = \frac{\operatorname{count}(z)_{c} + k}{\sum_{c^{\prime} \in \{\mathrm{neg}, \mathrm{pos}\}} \left(\operatorname{count}(z)_{c^{\prime}} + k\right)}, \tag{24}
440
+ $$
441
+
442
+ where $\mathrm{count}(z)_c$ is the number of phrases with sentiment $c$ in instance $z$ , and add- $k$ smoothing is used to avoid division by zero. Here $k$ is set to 0.01.
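Eqn 24 is straightforward to implement; below is a minimal sketch (the helper name `frac` and the phrase counts are our own invented illustration):

```python
def frac(counts, c, k=0.01):
    """Add-k smoothed fraction of phrases with sentiment c (Eqn 24)."""
    return (counts[c] + k) / sum(counts[cls] + k for cls in ("neg", "pos"))

counts = {"pos": 3, "neg": 1}  # hypothetical phrase counts for one instance
assert abs(frac(counts, "pos") - 3.01 / 4.02) < 1e-12

# Smoothing avoids 0/0 for instances with no annotated phrases.
assert abs(frac({"pos": 0, "neg": 0}, "pos") - 0.5) < 1e-9
```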
443
+
444
+ We plot the distributions of positive phrase fractions for both positive and negative instances. The results, shown in Figure 4, demonstrate that if we use the positive fraction to characterize the SST-2 data, then SST-2 instances of each class follow a long-tailed distribution. In the main body of our paper, we show that the top-memorized positive and negative instances
445
+
446
+ ![](images/f09079bc2f7060c25d3969caa2b8983b4d14c9ab42c295883ff9c96f95a7e732.jpg)
447
448
+ (a) Negative Class
449
+
450
+ ![](images/808276a32e652d2d8aacd105521e78c8997275b38b24290b2cb0ed1947c106b8.jpg)
451
+ (b) Positive Class
452
+ Figure 4: The distribution of positive phrase fraction on SST-2.
453
+
454
+ likely lie in the tail end of the two distributions, judging by their average positive fraction value.
455
+
456
+ # E Examples of Memorization Attribution
457
+
458
+ Some examples of Memorization Attribution on Yahoo! Answers are shown in Table 6. In particular, without any cherry-picking, we present the instances with the highest memorization scores. We can observe that on Yahoo! Answers, in most cases, the model tends to memorize the atypical parts of an instance. For example, the model needs to memorize the word "business" that shows up in an instance labeled as "Health" and the word "sports" in the "Education & Reference" instance. However, one might wonder why words like "football" and "field" received high memorization scores for the example in "Sports". Although we are not certain, we hypothesize that this is because the span "football field" is atypical for the "Sports" category: we find that this span shows up in only 2 out of the 1000 "Sports" instances in our training set.
459
+
460
+ <table><tr><td>Content</td><td>Label</td></tr><tr><td>why are americans . . . ? ; why are americans so obsessed with saying &quot; god bless america &quot; . i mean there is no other country in the world that says that . why must god bless them when they have been involved in nearly every war to date . i &#x27; m not trying to insult them or anything but why do they do it ? ; we are a nation under god , we was founded from it . . . it is our of respect of leader of our country before us , and the great leader in heaven god . .</td><td>Society &amp; Culture</td></tr><tr><td>is it possible for seven 375 pound men to stand on top of a bus and pee while it races down the hi - way ? ; they would be belted in of course for safety reasons , so the formula is seven 375 pound men , seat - belted on top of a bus , peeing at 75 miles per hour , into a head - wind of 10 mph , at a 30 degree angle , what is the end velocity of the pee ? ? ; first of all it wont look too good . . . thats a lot of pee ! ! ! next , they must have on water proff clothing , it will</td><td>Science &amp; Mathematics</td></tr><tr><td>what would you do ? ; i have an opportunity to take over a business in the womans health field , with a solid cash flow but part of the deal means i must take over an additional location that has a negative cash flow . i have enough money to pay for the business and a little left over to satisfy a shortfall in operating cash flow of just the one . i did not factor anything in for the second location with a negative operating cash flow . the crunch is that i can not have one without the other . the important thing is to know that i am only short operating capitol for one location . . . should</td><td>Health</td></tr><tr><td>my hs son plays two hs sports - hardly find time for h / w - i want to send him to prep school to impress his grades ; i want him to have a high sat / act as well high gpa to go in to college . i hear that prep boarding schools can be expensive . 
i need help ! ; i know this will sound cold and uncaring but really it &#x27; s not . if he &#x27; s having probns with sports and keeping grades up . . . take away the sports privileges . school work should be his main focus , then sports . my son is in a</td><td>Education &amp; Reference</td></tr><tr><td>out of all the schools in ngeria that have computers , how many have internet access ? ; i &#x27; m looking into some overseas development ideas . do you know roughly what percentage have internet access ( most or just a few ) ? ; my school in ngeria had internet service , it is the best school i have seen till today . . .</td><td>Computers &amp; Internet</td></tr><tr><td>where does the term grid iron originate and how did it get applied to a football field . ? ; what is the original meaning of gridiron . who applied it to a football field and why ? ; hi there . . . here is the answer i found from the word detective site : the use of &quot; gridiron &quot; as a metaphor for the football field , and , by extension , to the game itself , dates back to 1897 . the original &quot; gridirons &quot; were just that : grids made of iron , used to cook fish or meat over an open fire . early</td><td>Sports</td></tr><tr><td>what kind of math would i need to be a real estate appraiser ? ; what kind of math would i need to be a real estate appraiser ? the job as says needs strong math skills ; geometry ( area of a circle , rectangle , triangle , volume of a rectangle , etc . . . ) plus percentages , percentage of change . some minor algebra to find the unknown vairable in the percentage calcs . that ’ s about it .</td><td>Business &amp; Finance</td></tr><tr><td>does anyone know any electro bands ? ; does anyone know of any good electro bands such as metric and robots in disguise ? ; hmm . . . how about : particle lotus pnuma trio soulive brother ’ s past look for these bands and lots of others at : http : // www . archive . 
org / details / etree</td><td>Entertainment &amp; Music</td></tr><tr><td>how can make a guy know that i like him ? ; there ’ s this guy that takes a class with me . he ’ s really nice and we talk every day . i like wrestling and he does too and we talk about that until the class starts after that , i don ’ t see him anymore until we have the class again . what should do to make him notice that i like him ? help pleasee ! ! ! ; well u should try to stop him in the hall and try to say hi also when u see him try to flirt a little just make sure its not too much a</td><td>Family &amp; Relationships</td></tr><tr><td>can anyone tell me the address . . . ? ; to reach the dixie chicks by ? this is a serious question , so please don ’ t post whether or not you support them about their comments on bush . all i need is the address . thank you ! ; hello , i was not able to find an actual address , but i did find their website where you can sign up for their mailing list and i did find this information as well : the dixie chicks have very recently changed management . i do not yet have a new address for fan mail . once one is available , i will post</td><td>Politics &amp; Government</td></tr></table>
461
+
462
+ Table 6: The top-1 memorized training instances for each class from Yahoo! Answers. Highlighted words are those with high attribution values (red for positive memorization attribution and blue for negative memorization attribution) as computed by our memorization attribution method.
anempiricalstudyofmemorizationinnlp/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:74b805e997d75db39457d6dec88e02f1a56f40c52a1ade186258fd540de9f10e
3
+ size 1060373
anempiricalstudyofmemorizationinnlp/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a636bfec42dc4b799391be9928f87bd1fe154ea623adcc53a00d9bb67dcd2957
3
+ size 502115
anempiricalstudyonexplanationsinoutofdomainsettings/7d4b955f-0de5-4d22-a55b-3902668e1fc8_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b015587b863f570b17bfa0e6225ed2c58fbd8de77fb0681000ee0c090f791164
3
+ size 143915
anempiricalstudyonexplanationsinoutofdomainsettings/7d4b955f-0de5-4d22-a55b-3902668e1fc8_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:96e9320e5a016e315726bcd334e6b882fc66f05f58ca15145ec56ad643d3e17a
3
+ size 163023
anempiricalstudyonexplanationsinoutofdomainsettings/7d4b955f-0de5-4d22-a55b-3902668e1fc8_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1e089a2a6dbad086d464eb941f365e4c6a05274f53cab8d60e9af7a8b59e6b5e
3
+ size 1104986
anempiricalstudyonexplanationsinoutofdomainsettings/full.md ADDED
The diff for this file is too large to render. See raw diff
 
anempiricalstudyonexplanationsinoutofdomainsettings/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c9003dd0cd2846320136aa4b2691a125107080279569f1e8687beeb6fea2cdc6
3
+ size 1945039
anempiricalstudyonexplanationsinoutofdomainsettings/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e28ef7487c66eb8160f07a215412f9620b946bb5c187562fc6fe1b25f61823b6
3
+ size 522584
anempiricalsurveyoftheeffectivenessofdebiasingtechniquesforpretrainedlanguagemodels/3ed55c1d-5c4c-412e-8ac1-a38332226435_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bca6d6abbb52194927c8545e0119e5fa379a05866a6c8da1fc8afcd0f85429b6
3
+ size 132787
anempiricalsurveyoftheeffectivenessofdebiasingtechniquesforpretrainedlanguagemodels/3ed55c1d-5c4c-412e-8ac1-a38332226435_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:68ab0e436e25803063e8410f28334693cc742384dc7a55e372fc262adc0ca779
3
+ size 152953
anempiricalsurveyoftheeffectivenessofdebiasingtechniquesforpretrainedlanguagemodels/3ed55c1d-5c4c-412e-8ac1-a38332226435_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3c6259b5f9d0137621fa92f5f4e156a6cd28449ec8cc1155ed3c28724a55a49b
3
+ size 401728
anempiricalsurveyoftheeffectivenessofdebiasingtechniquesforpretrainedlanguagemodels/full.md ADDED
@@ -0,0 +1,423 @@
 
 
 
 
1
+ # An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models
2
+
3
+ Nicholas Meade<sup>1</sup> Elinor Poole-Dayan<sup>1</sup> Siva Reddy<sup>1,2</sup>
4
+
5
+ <sup>1</sup>Mila and McGill University
6
+
7
+ $^{2}$ Facebook CIFAR AI Chair
8
+
9
+ {nicholas.meade, elinor.poole-dayan, siva.reddy}@mila.quebec
10
+
11
+ # Abstract
12
+
13
+ Recent work has shown pre-trained language models capture social biases from the large amounts of text they are trained on. This has attracted attention to developing techniques that mitigate such biases. In this work, we perform an empirical survey of five recently proposed bias mitigation techniques: Counterfactual Data Augmentation (CDA), Dropout, Iterative Nullspace Projection, Self-Debias, and SentenceDebias. We quantify the effectiveness of each technique using three intrinsic bias benchmarks while also measuring the impact of these techniques on a model's language modeling ability, as well as its performance on downstream NLU tasks. We experimentally find that: (1) Self-Debias is the strongest debiasing technique, obtaining improved scores on all bias benchmarks; (2) Current debiasing techniques perform less consistently when mitigating non-gender biases; And (3) improvements on bias benchmarks such as StereoSet and CrowS-Pairs by using debiasing strategies are often accompanied by a decrease in language modeling ability, making it difficult to determine whether the bias mitigation was effective. $^{1}$
14
+
15
+ # 1 Introduction
16
+
17
+ Large pre-trained language models have proven effective across a variety of tasks in natural language processing, often obtaining state of the art performance (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020). These models are typically trained on large amounts of text, originating from unmoderated sources, such as the internet. While the performance of these pre-trained models is remarkable, recent work has shown that they capture social biases from the data they are trained on (May et al. 2019; Kurita et al. 2019; Webster et al. 2020; Nangia et al. 2020; Nadeem
18
+
19
+ et al. 2021, inter alia). Because of these findings, an increasing amount of research has focused on developing techniques to mitigate these biases (Liang et al., 2020; Ravfogel et al., 2020; Webster et al., 2020; Kaneko and Bollegala, 2021; Schick et al., 2021; Lauscher et al., 2021). However, the proposed techniques are often not investigated thoroughly. For instance, much work focuses only on mitigating gender bias despite pre-trained language models being plagued by other social biases (e.g., racial or religious bias). Additionally, the impact that debiasing has on both downstream task performance, as well as language modeling ability, is often not well explored.
20
+
21
+ In this paper, we perform an empirical survey of the effectiveness of five recently proposed debiasing techniques for pre-trained language models: Counterfactual Data Augmentation (CDA; Zmigrod et al. 2019; Webster et al. 2020), Dropout (Webster et al., 2020), Iterative Nullspace Projection (INLP; Ravfogel et al. 2020), Self-Debias (Schick et al., 2021), and SentenceDebias (Liang et al., 2020). Following the taxonomy described by Blodgett et al. (2020), our work studies the effectiveness of these techniques in mitigating representational biases from pre-trained language models. More specifically, we investigate mitigating gender, racial, and religious biases in three masked language models (BERT, ALBERT, and RoBERTa) and an autoregressive language model (GPT-2). We also explore how debiasing impacts a model's language modeling ability, as well as a model's performance on downstream natural language understanding (NU) tasks.
22
+
23
+ Concretely, our paper aims to answer the following research questions:
24
+
25
+ Q1 Which technique is most effective in mitigating bias?
26
+
27
+ Q2 Do these techniques worsen a model's language modeling ability?
28
+ Q3 Do these techniques worsen a model's ability to perform downstream NLU tasks?
29
+
30
+ To answer Q1 (\$4), we evaluate debiased models against three intrinsic bias benchmarks: the Sentence Encoder Association Test (SEAT; May et al. 2019), StereoSet (Nadeem et al., 2021), and Crowdsourced Stereotype Pairs (CrowS-Pairs; Nangia et al. 2020). Generally, we found Self-Debias to be the strongest bias mitigation technique. To answer Q2 (\$5) and Q3 (\$6), we evaluate debiased models against WikiText-2 (Merit et al., 2017) and the General Language Understanding Evaluation (GLUE; Wang and Cho 2019) benchmark. We found debiasing tends to worsen a model's language modeling ability. However, our results suggest that debiasing has little impact on a model's ability to perform downstream NLU tasks.
31
+
32
+ # 2 Techniques for Measuring Bias
33
+
34
+ We begin by describing the three intrinsic bias benchmarks we use to evaluate our debiasing techniques. We select these benchmarks as they can be used to measure not only gender bias, but also racial and religious bias in language models.
35
+
36
+ Sentence Encoder Association Test (SEAT). We use SEAT (May et al., 2019) as our first intrinsic bias benchmark. SEAT is an extension of the Word Embedding Association Test (WEAT; Caliskan et al. 2017) to sentence-level representations. Below, we first describe WEAT.
37
+
38
+ WEAT makes use of four sets of words: two sets of bias attribute words and two sets of target words. The attribute word sets characterize a type of bias. For example, the attribute word sets $\{man, he, him, \ldots\}$ and $\{woman, she, her, \ldots\}$ could be used for gender bias. The target word sets characterize particular concepts. For example, the target word sets $\{family, child, parent, \ldots\}$ and $\{work, office, profession, \ldots\}$ could be used to characterize the concepts of family and career, respectively. WEAT evaluates whether the representations for words from one particular attribute word set tend to be more closely associated with the representations for words from one particular target word set. For instance, if the representations for the female attribute words listed above tended to be more closely associated with the representations for the family target words, this may be
39
+
40
+ indicative of bias within the word representations.
41
+
42
+ Formally, let $A$ and $B$ denote the sets of attribute words and let $X$ and $Y$ denote the sets of target words. The SEAT test statistic is
43
+
44
+ $$
45
+ s (X, Y, A, B) = \sum_ {x \in X} s (x, A, B) - \sum_ {y \in Y} s (y, A, B)
46
+ $$
47
+
48
+ where for a particular word $w$ , $s(w, A, B)$ is defined as the difference between $w$ 's mean cosine similarity with the words from $A$ and $w$ 's mean cosine similarity with the words from $B$
49
+
50
+ $$
51
+ s (w, A, B) = \frac {1}{| A |} \sum_ {a \in A} \cos (w, a) - \frac {1}{| B |} \sum_ {b \in B} \cos (w, b).
52
+ $$
53
+
54
+ They report an effect size given by
55
+
56
+ $$
57
+ d = \frac {\mu (\{s (x , A , B) \} _ {x \in X}) - \mu (\{s (y , A , B) \} _ {y \in Y})}{\sigma (\{s (t , X , Y) \} _ {t \in A \cup B})}
58
+ $$
59
+
60
+ where $\mu$ denotes the mean and $\sigma$ denotes the standard deviation. Here, an effect size closer to zero is indicative of a smaller degree of bias in the representations.
61
+
62
+ To create a sentence-level version of WEAT (referred to as SEAT), May et al. (2019) substitute the attribute words and target words from WEAT into synthetic sentence templates (e.g., "this is a [WORD]") to create a collection of sentences. Now, given sets of sentences containing attribute and target words, the WEAT test statistic can be computed using sentence-level representations obtained from a pre-trained language model.
63
+
64
+ We refer readers to Appendix A for a list of the SEAT tests we use to measure each type of bias in our work. We report the effect size for each SEAT test we evaluate.
65
+
66
+ StereoSet. As our second intrinsic bias benchmark, we use StereoSet (Nadeem et al., 2021), a crowdsourced dataset for measuring four types of stereotypical bias in language models. Each StereoSet example consists of a context sentence, for example "our housekeeper is [MASK]", and a set of three candidate associations (completions) for that sentence—one being stereotypical, another being anti-stereotypical, and a third being
67
+
68
+ unrelated. $^{4}$ Using the example above, a stereotypical association might be “our housekeeper is Mexican”, an anti-stereotypical association might be “our housekeeper is American”, and an unrelated association might be “our housekeeper is computer”. To quantify how biased a language model is, we score the stereotypical association and the anti-stereotypical association for each example under a model. We then compute the percentage of examples for which a model prefers the stereotypical association as opposed to the anti-stereotypical association. We define this percentage as the stereotype score of a model.
69
+
70
+ StereoSet also provides a measure of a model's language modeling ability. For each example in the dataset, we also score the unrelated association. We then measure the percentage of examples for which a model prefers a meaningful association (either the stereotypical association or the anti-stereotypical association) as opposed to the unrelated association. We define this percentage as the language modeling score of a model.
71
+
72
+ We evaluate our debiased models against the StereoSet test set. We evaluate debiased models for each domain against their respective StereoSet test set split (e.g., gender debiased models are evaluated against the gender bias examples).
73
+
74
+ Crowdsourced Stereotype Pairs (CrowS-Pairs). We use CrowS-Pairs (Nangia et al., 2020) as our third intrinsic bias benchmark. CrowS-Pairs is a crowdsourced dataset that consists of pairs of minimally distant sentences—that is, sentences that differ only with respect to a small number of tokens. The first sentence in each pair reflects a stereotype about a historically disadvantaged group in the United States. For example, the sentence "people who live in trailers are alcoholics" reflects a possible socioeconomic stereotype. The second sentence in each pair then violates the stereotype introduced in the first sentence. For example, the sentence "people who live in mansions are alcoholics" violates, or in a sense, is the anti-stereotypical version of the first sentence.
75
+
76
+ We quantify how biased a language model is by measuring how frequently a model prefers the stereotypical sentence in each pair over the anti-stereotypical sentence. Nangia et al. (2020) originally proposed using pseudo-likelihood-based scoring (Salazar et al., 2020) for CrowS-Pairs; however, recent work has suggested that pseudo-likelihood-based scoring may be subject to model calibration issues (Desai and Durrett, 2020; Jiang et al., 2020). Thus, we score each pair of sentences using masked token probabilities in a similar fashion to StereoSet. For each pair of sentences, we score the stereotypical sentence by computing the masked token probability of the tokens unique to the stereotypical sentence. In the example above, we would compute the masked token probability of trailers. We score each anti-stereotypical sentence in a similar fashion. If multiple tokens are unique to a given sentence, we compute the average masked token probability by masking each differing token individually. We define the stereotype score of a model to be the percentage of examples for which a model assigns a higher masked token probability to the stereotypical sentence as opposed to the anti-stereotypical sentence.
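The pair-scoring just described can be sketched as follows. The probabilities are hypothetical; a real implementation would obtain each one from a masked language model by masking the differing token and reading off the probability of the true token.

```python
def sentence_score(diff_token_probs):
    """Average masked-token probability over the tokens unique to a sentence
    (each differing token is masked individually)."""
    return sum(diff_token_probs) / len(diff_token_probs)

def crows_stereotype_score(pairs):
    """Percentage of pairs whose stereotypical sentence receives the higher
    average masked-token probability."""
    wins = sum(
        1 for stereo, anti in pairs if sentence_score(stereo) > sentence_score(anti)
    )
    return 100.0 * wins / len(pairs)

# One pair: hypothetical probabilities for "trailers" vs. "mansions" in
# "people who live in [MASK] are alcoholics".
pairs = [([0.03], [0.007])]
print(crows_stereotype_score(pairs))  # -> 100.0
```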
79
+
80
+ # 3 Debiasing Techniques
81
+
82
+ Below, we describe the five debiasing techniques we evaluate in this work. We refer readers to Appendix C for additional experimental details on each debiasing technique.
83
+
84
+ Counterfactual Data Augmentation (CDA). CDA (Zmigrod et al., 2019; Dinan et al., 2020a; Webster et al., 2020; Barikeri et al., 2021) is a data-based debiasing strategy often used to mitigate gender bias. Roughly, CDA involves re-balancing a corpus by swapping bias attribute words (e.g., he/she) in a dataset. For example, to help mitigate gender bias, the sentence "the doctor went to the room and he grabbed the syringe" could be augmented to "the doctor went to the room and she grabbed the syringe". The re-balanced corpus is then often used for further training to debias a model. While CDA has been mainly used for gender debiasing, we also evaluate its effectiveness for other types of biases. For instance, we create CDA data for mitigating religious bias by swapping religious terms in a corpus, say church with mosque, to generate counterfactual examples.
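A minimal word-level sketch of the augmentation. The swap lists here are tiny illustrative stand-ins; real CDA uses much larger curated word lists and handles ambiguous mappings (e.g., "her" maps to either "his" or "him") with more care.

```python
# Tiny illustrative swap lists (not the actual lists used in the paper).
SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "his": "her",
    "church": "mosque", "mosque": "church",
}

def counterfactual(sentence):
    """Swap each bias attribute word for its counterpart."""
    return " ".join(SWAPS.get(tok, tok) for tok in sentence.split())

print(counterfactual("the doctor went to the room and he grabbed the syringe"))
# -> "the doctor went to the room and she grabbed the syringe"
```

Two-sided CDA keeps both the original and the augmented sentence in the corpus, so the additional pre-training phase sees a balanced distribution of bias attribute words.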
85
+
86
+ We experiment with debiasing pre-trained language models by performing an additional phase of pre-training on counterfactually augmented sentences from English Wikipedia.
87
+
88
+ DROPOUT. Webster et al. (2020) investigate using dropout regularization (Srivastava et al., 2014) as a bias mitigation technique. They increase the dropout parameters for BERT and ALBERT's attention weights and hidden activations and perform an additional phase of pre-training. Experimentally, they find increased dropout regularization reduces gender bias within these models. They hypothesize that dropout's interruption of the attention mechanisms within BERT and ALBERT helps prevent them from learning undesirable associations between words. We extend this study to other types of biases. Similar to CDA, we perform an additional phase of pre-training on sentences from English Wikipedia using increased dropout regularization.
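The mechanism itself is the standard inverted dropout of Srivastava et al. (2014); the sketch below applies it to a toy row of attention weights (the values and the dropout probability are illustrative, not the settings tuned by Webster et al.):

```python
import random

def dropout(values, p, rng):
    """Standard inverted dropout: zero each entry with probability p and
    rescale survivors by 1/(1 - p) so the expected value is unchanged."""
    scale = 1.0 / (1.0 - p)
    return [v * scale if rng.random() >= p else 0.0 for v in values]

# A toy row of attention weights.
row = [0.1, 0.2, 0.3, 0.4]
print(dropout(row, p=0.5, rng=random.Random(0)))  # -> [0.2, 0.4, 0.0, 0.0] with this seed
```

Zeroing random attention entries during pre-training prevents any single word-to-word association from being relied upon consistently, which is the hypothesized source of the debiasing effect.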
89
+
90
+ SELF-DEBIAS. Schick et al. (2021) propose a post-hoc debiasing technique that leverages a model's internal knowledge to discourage it from generating biased text.
91
+
92
+ Informally, Schick et al. (2021) propose using hand-crafted prompts to first encourage a model to generate toxic text. For example, generation from an autoregressive model could be prompted with "The following text discriminates against people because of their gender." A second, non-discriminative continuation can then be generated from the model, with the probabilities of tokens deemed likely under the first, toxic generation scaled down.
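A simplified sketch of the rescaling step. The exponential scaling function and the $\lambda$ parameter follow the spirit of Schick et al. (2021), but the token distributions below are hypothetical:

```python
import math

def self_debias(p_default, p_prompted, lam=50.0):
    """Rescale a next-token distribution: tokens made *more* likely by the
    bias-encouraging prompt are penalized exponentially, then the distribution
    is renormalized (a simplified form of the Schick et al. scaling)."""
    scaled = {}
    for tok, p in p_default.items():
        delta = p_prompted.get(tok, 0.0) - p
        alpha = 1.0 if delta <= 0.0 else math.exp(-lam * delta)
        scaled[tok] = p * alpha
    z = sum(scaled.values())
    return {tok: v / z for tok, v in scaled.items()}

# Hypothetical next-token distributions with and without the toxic prompt.
p_default = {"kind": 0.40, "slur": 0.10, "there": 0.50}
p_prompted = {"kind": 0.05, "slur": 0.60, "there": 0.35}
debiased = self_debias(p_default, p_prompted)  # "slur" is suppressed
```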
93
+
94
+ Importantly, since Self-Debias is a post-hoc text generation debiasing procedure, it does not alter a model's internal representations or its parameters. Thus, Self-Debias cannot be used as a bias mitigation strategy for downstream NLU tasks (e.g., GLUE). Additionally, since SEAT measures bias in a model's representations and Self-Debias does not alter a model's internal representations, we cannot evaluate Self-Debias against SEAT.
95
+
96
+ SENTENCEDEBIAS. Liang et al. (2020) extend Hard-Debias, a word embedding debiasing technique proposed by Bolukbasi et al. (2016), to sentence representations. SentenceDebias is a projection-based debiasing technique that requires the estimation of a linear subspace for a particular type of bias. Sentence representations can be debiased by projecting onto the estimated bias subspace and subtracting the resulting projection from the original sentence representation.
97
+
98
+ Liang et al. (2020) use a three-step procedure for computing a bias subspace. First, they define a list of bias attribute words (e.g., he/she). Second, they contextualize the bias attribute words into sentences. This is done by finding occurrences of the bias attribute words in sentences within a text corpus. For each sentence found during this contextualization step, CDA is applied to generate a pair of sentences that differ only with respect to the bias attribute word. Finally, they estimate the bias subspace. For each of the sentences obtained during the contextualization step, a corresponding representation can be obtained from a pre-trained model. Principal Component Analysis (PCA; Abdi and Williams 2010) is then used to estimate the principal directions of variation of the resulting set of representations. The first $K$ principal components can be taken to define the bias subspace.
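A sketch of the estimation and projection steps on toy vectors. For brevity, the PCA step is replaced by the normalized mean of the pair-difference vectors, which coincides with the first principal component when the differences vary along a single direction; a real implementation would run PCA and keep $K$ components.

```python
def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def bias_direction(pairs):
    """Estimate a 1-D bias subspace from CDA sentence-pair representations
    (mean-difference simplification of the PCA step)."""
    dim = len(pairs[0][0])
    diff = [0.0] * dim
    for a, b in pairs:
        for i in range(dim):
            diff[i] += a[i] - b[i]
    return normalize(diff)

def debias(h, v):
    """Subtract the projection of representation h onto bias direction v."""
    coef = sum(hi * vi for hi, vi in zip(h, v))
    return [hi - coef * vi for hi, vi in zip(h, v)]

# Toy counterfactual-pair representations whose differences lie on one axis.
pairs = [([1.0, 2.0], [1.0, 0.0]), ([3.0, 1.0], [3.0, -1.0])]
v = bias_direction(pairs)           # -> [0.0, 1.0]
h_debiased = debias([5.0, 7.0], v)  # bias component removed
```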
101
+
102
+ Iterative Nullspace Projection (INLP). Ravfogel et al. (2020) propose INLP, a projection-based debiasing technique similar to SentenceDebias. Roughly, INLP debiases a model's representations by training a linear classifier to predict a protected attribute (e.g., gender) that should be removed from the representations. Representations can then be debiased by projecting them into the nullspace of the learnt classifier's weight matrix, effectively removing from the representation all of the information the classifier used to predict the protected attribute. This process can be applied iteratively, with a fresh classifier each round, to further debias the representations.
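A single INLP round can be sketched as follows. The classifier weights below are hypothetical, and a full implementation would retrain a classifier on the projected representations at each iteration:

```python
def nullspace_project(x, w):
    """Project representation x onto the nullspace of a single linear
    classifier with weight vector w, i.e., remove x's component along w."""
    ww = sum(wi * wi for wi in w)
    coef = sum(xi * wi for xi, wi in zip(x, w)) / ww
    return [xi - coef * wi for xi, wi in zip(x, w)]

# Hypothetical protected-attribute classifier weights and a representation.
w = [2.0, 0.0, 1.0]
x = [1.0, 3.0, 2.0]
x_clean = nullspace_project(x, w)
# The classifier's logit on the projected representation is (near) zero,
# so it can no longer predict the protected attribute from x_clean.
logit = sum(a * b for a, b in zip(x_clean, w))
```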
103
+
104
+ In our experiments, we create a classification dataset for INLP by finding occurrences of bias attribute words (e.g., he/she) in English Wikipedia. For example, for gender bias, we classify each sentence from English Wikipedia into one of three classes depending upon whether a sentence contains a male word, a female word, or no gendered words.
105
+
106
+ # 4 Which Technique is Most Effective in Mitigating Bias?
107
+
108
+ To investigate which technique is most effective in mitigating bias (Q1), we evaluate debiased BERT, ALBERT, RoBERTa, and GPT-2 models against SEAT, StereoSet, and CrowS-Pairs. We present BERT and GPT-2 results in the main paper and refer readers to Appendix E for results for the other models. We use the base uncased BERT model and the small GPT-2 model in our experiments.
109
+
110
+ <table><tr><td>Model</td><td>SEAT-6</td><td>SEAT-6b</td><td>SEAT-7</td><td>SEAT-7b</td><td>SEAT-8</td><td>SEAT-8b</td><td>Avg. Effect Size (↓)</td></tr><tr><td>BERT</td><td>0.931*</td><td>0.090</td><td>-0.124</td><td>0.937*</td><td>0.783*</td><td>0.858*</td><td>0.620</td></tr><tr><td>+ CDA</td><td>0.846*</td><td>0.186</td><td>-0.278</td><td>1.342*</td><td>0.831*</td><td>0.849*</td><td>↑0.102 0.722</td></tr><tr><td>+ DROPOUT</td><td>1.136*</td><td>0.317</td><td>0.138</td><td>1.179*</td><td>0.879*</td><td>0.939*</td><td>↑0.144 0.765</td></tr><tr><td>+ INLP</td><td>0.317</td><td>-0.354</td><td>-0.258</td><td>0.105</td><td>0.187</td><td>-0.004</td><td>↓0.416 0.204</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>0.350</td><td>-0.298</td><td>-0.626</td><td>0.458*</td><td>0.413</td><td>0.462*</td><td>↓0.186 0.434</td></tr><tr><td>GPT-2</td><td>0.138</td><td>0.003</td><td>-0.023</td><td>0.002</td><td>-0.224</td><td>-0.287</td><td>0.113</td></tr><tr><td>+ CDA</td><td>0.161</td><td>-0.034</td><td>0.898*</td><td>0.874*</td><td>0.516*</td><td>0.396</td><td>↑0.367 0.480</td></tr><tr><td>+ DROPOUT</td><td>0.167</td><td>-0.040</td><td>0.866*</td><td>0.873*</td><td>0.527*</td><td>0.384</td><td>↑0.363 0.476</td></tr><tr><td>+ INLP</td><td>0.106</td><td>-0.029</td><td>-0.033</td><td>-0.015</td><td>-0.236</td><td>-0.295</td><td>↑0.006 0.119</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>0.086</td><td>-0.075</td><td>-0.307</td><td>-0.068</td><td>0.306</td><td>-0.667</td><td>↑0.138 0.251</td></tr></table>
111
+
112
+ Table 1: SEAT effect sizes for gender debiased BERT and GPT-2 models. Effect sizes closer to 0 are indicative of less biased model representations. Statistically significant effect sizes at $p < 0.01$ are denoted by * . The final column reports the average absolute effect size across all six gender SEAT tests for each debiased model.
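The effect sizes reported here are inherited from WEAT (Caliskan et al., 2017): the difference in mean association between the two target sets, normalized by the standard deviation of the associations. A toy-vector sketch (sample standard deviation is used, as in common implementations):

```python
import math

def cos_sim(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def assoc(w, A, B):
    """Mean cosine similarity to attribute set A minus that to attribute set B."""
    return (sum(cos_sim(w, a) for a in A) / len(A)
            - sum(cos_sim(w, b) for b in B) / len(B))

def effect_size(X, Y, A, B):
    """WEAT/SEAT effect size: normalized difference in mean association."""
    s_all = [assoc(w, A, B) for w in X + Y]
    mean_x = sum(assoc(x, A, B) for x in X) / len(X)
    mean_y = sum(assoc(y, A, B) for y in Y) / len(Y)
    mu = sum(s_all) / len(s_all)
    sd = math.sqrt(sum((s - mu) ** 2 for s in s_all) / (len(s_all) - 1))
    return (mean_x - mean_y) / sd

# Toy 2-D "sentence representations": target sets X, Y and attribute sets A, B.
X, Y = [(1.0, 0.0)], [(0.0, 1.0)]
A, B = [(1.0, 0.1)], [(0.1, 1.0)]
d = effect_size(X, Y, A, B)  # positive: X leans toward A, Y toward B
```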
113
+
114
+ <table><tr><td>Model</td><td colspan="2">Avg. Effect Size (↓)</td></tr><tr><td>Race</td><td></td><td></td></tr><tr><td>BERT</td><td colspan="2">0.620</td></tr><tr><td>+ CDA</td><td>↓0.051</td><td>0.569</td></tr><tr><td>+ DROPOUT</td><td>↓0.067</td><td>0.554</td></tr><tr><td>+ INLP</td><td>↑0.019</td><td>0.639</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓0.008</td><td>0.612</td></tr><tr><td>GPT-2</td><td colspan="2">0.448</td></tr><tr><td>+ CDA</td><td>↓0.309</td><td>0.139</td></tr><tr><td>+ DROPOUT</td><td>↓0.285</td><td>0.162</td></tr><tr><td>+ INLP</td><td>↓0.001</td><td>0.447</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓0.026</td><td>0.421</td></tr><tr><td>Religion</td><td></td><td></td></tr><tr><td>BERT</td><td colspan="2">0.492</td></tr><tr><td>+ CDA</td><td>↓0.152</td><td>0.339</td></tr><tr><td>+ DROPOUT</td><td>↓0.115</td><td>0.377</td></tr><tr><td>+ INLP</td><td>↓0.031</td><td>0.460</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓0.053</td><td>0.439</td></tr><tr><td>GPT-2</td><td colspan="2">0.376</td></tr><tr><td>+ CDA</td><td>↓0.238</td><td>0.138</td></tr><tr><td>+ DROPOUT</td><td>↓0.243</td><td>0.134</td></tr><tr><td>+ INLP</td><td>↓0.001</td><td>0.375</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↑0.170</td><td>0.547</td></tr></table>
115
+
116
+ Table 2: SEAT average absolute effect sizes for race and religion debiased BERT and GPT-2 models. Average absolute effect sizes closer to 0 are indicative of less biased model representations.
117
+
118
+ SEAT Results. In Table 1, we report results for gender debiased BERT and GPT-2 models on SEAT.
119
+
120
+ For BERT, we find two of our four debiased models obtain lower average absolute effect sizes than the baseline model. In particular, INLP performs best on average across all six SEAT tests. Notably, INLP and SentenceDebias both obtain lower average absolute effect sizes than the baseline model while the CDA and Dropout models do not. Intuitively, this may be due to INLP and SentenceDebias taking a more aggressive approach to debiasing by attempting to remove all gender information from a model's representations.
123
+
124
+ For GPT-2, our results are less encouraging. We find all of the debiased models obtain higher average absolute effect sizes than the baseline model. However, we note that SEAT fails to detect any statistically significant bias in the baseline model in any of the six SEAT tests to begin with. We argue, alongside others (Kurita et al., 2019; May et al., 2019), that SEAT's failure to detect bias in GPT-2 brings into question its reliability as a bias benchmark. For our gender debiased ALBERT and RoBERTa models, we observed similar trends in performance to BERT.
125
+
126
+ We also use SEAT to evaluate racial and religious bias in our models. In Table 2, we report average absolute effect sizes for race and religion debiased BERT and GPT-2 models. We find most of our race and religion debiased BERT and GPT-2 models obtain lower average absolute effect sizes than their respective baseline models. These trends were less consistent in our ALBERT and RoBERTa models.
127
+
128
+ StereoSet Results. In Table 3, we report StereoSet results for BERT and GPT-2.
129
+
130
+ For BERT, four of the five gender debiased models obtain lower stereotype scores than the baseline model. However, the race debiased models do not perform as consistently well. We note that for race, only two of the five debiased models obtain lower stereotype scores than the baseline model. Encouragingly, we find four of the five religion debiased BERT models obtain reduced stereotype scores. We observed similar trends to BERT in our ALBERT and RoBERTa results.
131
+
132
+ For GPT-2, the gender debiased models do not perform as consistently well. Notably, we observe that the CDA model obtains a higher stereotype score than the baseline model.
133
+
134
+ <table><tr><td>Model</td><td>Stereotype Score (%)</td></tr><tr><td colspan="2">Gender</td></tr><tr><td>BERT</td><td>60.28</td></tr><tr><td>+ CDA</td><td>↓0.67 59.61</td></tr><tr><td>+ DROPOUT</td><td>↑0.38 60.66</td></tr><tr><td>+ INLP</td><td>↓3.03 57.25</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓0.94 59.34</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓0.91 59.37</td></tr><tr><td>GPT-2</td><td>62.65</td></tr><tr><td>+ CDA</td><td>↑1.37 64.02</td></tr><tr><td>+ DROPOUT</td><td>↑0.71 63.35</td></tr><tr><td>+ INLP</td><td>↓2.48 60.17</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓1.81 60.84</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓6.59 56.05</td></tr><tr><td colspan="2">Race</td></tr><tr><td>BERT</td><td>57.03</td></tr><tr><td>+ CDA</td><td>↓0.30 56.73</td></tr><tr><td>+ DROPOUT</td><td>↑0.04 57.07</td></tr><tr><td>+ INLP</td><td>↑0.26 57.29</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓2.73 54.30</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↑0.75 57.78</td></tr><tr><td>GPT-2</td><td>58.90</td></tr><tr><td>+ CDA</td><td>↓1.59 57.31</td></tr><tr><td>+ DROPOUT</td><td>↓1.41 57.50</td></tr><tr><td>+ INLP</td><td>↑0.06 58.96</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓1.58 57.33</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓2.47 56.43</td></tr><tr><td colspan="2">Religion</td></tr><tr><td>BERT</td><td>59.70</td></tr><tr><td>+ CDA</td><td>↓1.33 58.37</td></tr><tr><td>+ DROPOUT</td><td>↓0.57 59.13</td></tr><tr><td>+ INLP</td><td>↑0.61 60.31</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓2.44 57.26</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓0.97 58.73</td></tr><tr><td>GPT-2</td><td>63.26</td></tr><tr><td>+ CDA</td><td>↑0.29 63.55</td></tr><tr><td>+ DROPOUT</td><td>↑0.91 64.17</td></tr><tr><td>+ INLP</td><td>↑0.69 63.95</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓2.81 60.45</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓3.64 59.62</td></tr></table>
135
+
136
+ Table 3: StereoSet stereotype scores for gender, race, and religion debiased BERT and GPT-2 models. Stereotype scores closer to $50\%$ indicate less biased model behaviour. Results are on the StereoSet test set. A random model (which chooses the stereotypical candidate and the anti-stereotypical candidate for each example with equal probability) obtains a stereotype score of $50\%$ in expectation.
139
+
140
+ One encouraging trend in our results is the consistently strong performance of Self-Debias. Across all three bias domains, the Self-Debias BERT and GPT-2 models always obtain reduced stereotype scores. Similarly, five of the six Self-Debias ALBERT and RoBERTa models obtain reduced stereotype scores. These results suggest that Self-Debias is a reliable debiasing technique.
143
+
144
+ CrowS-Pairs Results. In Table 4, we report CrowS-Pairs results for BERT and GPT-2. Similar to StereoSet, we observe that the Self-Debias BERT, ALBERT, RoBERTa, and GPT-2 models consistently obtain improved stereotype scores across all three bias domains.
145
+
146
+ We also observe a large degree of variability in the performance of our debiasing techniques on CrowS-Pairs. For example, the GPT-2 religion SentenceDebias model obtains a stereotype score of 35.24, an absolute difference of 27.62 points relative to the baseline model's score. We hypothesize that this large degree of variability is due to the small size of CrowS-Pairs (it is roughly a quarter of the size of the StereoSet test set). In particular, there are only 105 religion examples in the CrowS-Pairs dataset. Furthermore, Aribandi et al. (2021) demonstrated the relative instability of the performance of pre-trained language models, such as BERT, on CrowS-Pairs (and StereoSet) across different pre-training runs. Thus, we caution readers against drawing too many conclusions from StereoSet and CrowS-Pairs results alone.
147
+
148
+ Do SEAT, StereoSet, and CrowS-Pairs Reliably Measure Bias? SEAT, StereoSet, and CrowS-Pairs alone may not reliably measure bias in language models. To illustrate why this is the case, consider a random language model being evaluated against StereoSet. It randomly selects either the stereotypical or anti-stereotypical association for each example. Thus, in expectation, this model obtains a perfect stereotype score of $50\%$ , although it is a bad language model. This highlights that a debiased model may obtain reduced stereotype scores by just becoming a worse language model. Motivated by this discussion, we now investigate how debiasing impacts language modeling performance.
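The expectation argument is easy to check numerically: a "model" that picks the stereotypical candidate with probability 0.5 lands at a stereotype score of about $50\%$ regardless of its (non-existent) language modeling ability.

```python
import random

rng = random.Random(42)
n = 100_000
# Count how often the random "model" prefers the stereotypical candidate.
wins = sum(1 for _ in range(n) if rng.random() < 0.5)
score = 100.0 * wins / n
print(score)  # close to 50.0
```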
149
+
150
+ # 5 How Does Debiasing Impact Language Modeling?
151
+
152
+ To investigate how debiasing impacts language modeling (Q2), we measure perplexities before and after debiasing each of our models on WikiText-2 (Merity et al., 2017). We also compute StereoSet language modeling scores for each of our debiased models. We discuss our findings below.
153
+
154
+ WikiText-2 and StereoSet Results. Following a similar setup to Schick et al. (2021), we use $10\%$ of WikiText-2 for our experiments. Since perplexity is not well-defined for masked language models, we instead compute pseudo-perplexities (Salazar et al., 2020) for BERT, ALBERT, and RoBERTa. We compute the perplexities of the GPT-2 models normally. For StereoSet, we compute our language modeling scores using the entire test set.
155
+
156
+ <table><tr><td>Model</td><td>Stereotype Score (%)</td></tr><tr><td colspan="2">Gender</td></tr><tr><td>BERT</td><td>57.25</td></tr><tr><td>+ CDA</td><td>↓1.14 56.11</td></tr><tr><td>+ DROPOUT</td><td>↓1.91 55.34</td></tr><tr><td>+ INLP</td><td>↓6.10 51.15</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓4.96 52.29</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓4.96 52.29</td></tr><tr><td>GPT-2</td><td>56.87</td></tr><tr><td>+ CDA</td><td>56.87</td></tr><tr><td>+ DROPOUT</td><td>↑0.76 57.63</td></tr><tr><td>+ INLP</td><td>↓3.43 53.44</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓0.76 56.11</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓0.76 56.11</td></tr><tr><td colspan="2">Race</td></tr><tr><td>BERT</td><td>62.33</td></tr><tr><td>+ CDA</td><td>↓5.63 56.70</td></tr><tr><td>+ DROPOUT</td><td>↓3.30 59.03</td></tr><tr><td>+ INLP</td><td>↑5.63 67.96</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓5.63 56.70</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↑0.39 62.72</td></tr><tr><td>GPT-2</td><td>59.69</td></tr><tr><td>+ CDA</td><td>↑0.97 60.66</td></tr><tr><td>+ DROPOUT</td><td>↑0.78 60.47</td></tr><tr><td>+ INLP</td><td>59.69</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓6.40 53.29</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓4.26 55.43</td></tr><tr><td colspan="2">Religion</td></tr><tr><td>BERT</td><td>62.86</td></tr><tr><td>+ CDA</td><td>↓2.86 60.00</td></tr><tr><td>+ DROPOUT</td><td>↓7.62 55.24</td></tr><tr><td>+ INLP</td><td>↓1.91 60.95</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓6.67 56.19</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↑0.95 63.81</td></tr><tr><td>GPT-2</td><td>62.86</td></tr><tr><td>+ CDA</td><td>↓11.43 51.43</td></tr><tr><td>+ DROPOUT</td><td>↓10.48 52.38</td></tr><tr><td>+ INLP</td><td>↓0.96 61.90</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓4.76 58.10</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↑1.90 35.24</td></tr></table>
157
+
158
+ Table 4: CrowS-Pairs stereotype scores for gender, race, and religion debiased BERT and GPT-2 models. Stereotype scores closer to $50\%$ indicate less biased model behaviour. A random model (which chooses the stereotypical sentence and anti-stereotypical sentence for each example with equal probability) obtains a stereotype score of $50\%$.
159
+
160
+ <table><tr><td>Model</td><td>Perplexity (↓)</td><td>LM Score (↑)</td></tr><tr><td>BERT</td><td>4.469</td><td>84.17</td></tr><tr><td>+ CDA</td><td>↓0.373 4.096</td><td>↓1.09 83.08</td></tr><tr><td>+ DROPOUT</td><td>↓0.267 4.202</td><td>↓1.14 83.04</td></tr><tr><td>+ INLP</td><td>↑1.683 6.152</td><td>↓3.54 80.63</td></tr><tr><td>+ SELF-DEBIAS</td><td>↑1.025 5.494</td><td>↓0.08 84.09</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↑0.014 4.483</td><td>↑0.03 84.20</td></tr><tr><td>GPT-2</td><td>30.158</td><td>91.01</td></tr><tr><td>+ CDA</td><td>↑5.185 35.343</td><td>↓0.65 90.36</td></tr><tr><td>+ DROPOUT</td><td>↑7.212 37.370</td><td>↓0.62 90.40</td></tr><tr><td>+ INLP</td><td>↑12.376 42.534</td><td>↑0.60 91.62</td></tr><tr><td>+ SELF-DEBIAS</td><td>↑1.751 31.909</td><td>↓1.94 89.07</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↑35.335 65.493</td><td>↓3.59 87.43</td></tr></table>
161
+
162
+ Table 5: Perplexities and StereoSet language modeling scores (LM Score) for gender debiased BERT and GPT-2 models. We compute the perplexities using $10\%$ of WikiText-2. For BERT, we compute pseudo-perplexities. For GPT-2, we compute perplexities normally. We compute the StereoSet language modeling scores using all examples from the StereoSet test set.
163
+
164
+ In Table 5, we report our results for gender debiased BERT and GPT-2 models. We first note the strong negative correlation between a model's perplexity on WikiText-2 and its StereoSet language modeling score. We observe most debiased models obtain higher perplexities and lower language modeling scores than their respective baselines. Notably, some debiasing techniques appear to significantly degrade a model's language modeling ability. For instance, the SentenceDebias GPT-2 model obtains a perplexity of 65.493, more than twice that of the baseline GPT-2 model. However, there are some exceptions to this trend: the CDA and Dropout BERT models both obtain lower perplexities than the baseline BERT model. We hypothesize that this may be due to the additional training these models received on English Wikipedia.
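The pseudo-perplexity used for the masked models can be sketched as follows. The per-token probabilities would come from a masked language model by masking each position in turn; the values here are hypothetical.

```python
import math

def pseudo_perplexity(token_probs):
    """Pseudo-perplexity (Salazar et al., 2020): exponentiate the average
    negative log masked-token probability. token_probs[i] is the probability
    the masked LM assigns to the true token at position i when that position
    is masked."""
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

print(pseudo_perplexity([0.5, 0.25]))  # -> 2.8284... (= sqrt(8))
```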
169
+
170
+ # 6 How Does Debiasing Impact Downstream Task Performance?
171
+
172
+ To investigate how debiasing impacts performance on downstream NLU tasks (Q3), we evaluate our gender debiased models against the GLUE benchmark after fine-tuning them. We report the results for BERT and GPT-2 in Table 6. Encouragingly, the performance of GPT-2 seems largely unaffected by debiasing. In some cases, we in fact observe increased performance. For instance, the CDA, Dropout, and INLP GPT-2 models obtain higher average GLUE scores than the baseline model. With BERT, three of the four debiased models obtain slightly lower scores than the baseline model. Similarly, most of the ALBERT and RoBERTa models are relatively unaffected by debiasing.
173
+
174
+ <table><tr><td>Model</td><td>Average</td></tr><tr><td>BERT</td><td>77.74</td></tr><tr><td>+ CDA</td><td>↓0.22 77.52</td></tr><tr><td>+ DROPOUT</td><td>↓1.46 76.28</td></tr><tr><td>+ INLP</td><td>↓0.99 76.76</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↑0.07 77.81</td></tr><tr><td>GPT-2</td><td>73.01</td></tr><tr><td>+ CDA</td><td>↑1.20 74.21</td></tr><tr><td>+ DROPOUT</td><td>↑0.15 73.16</td></tr><tr><td>+ INLP</td><td>↑0.05 73.06</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓0.38 72.63</td></tr></table>
175
+
176
+ Table 6: Average GLUE scores for gender debiased BERT and GPT-2 models. Results are reported on the GLUE validation set. We refer readers to Appendix E for a complete set of results.
177
+
178
+ We hypothesize that the debiasing techniques do not damage a model's representations to such a critical extent that the models are unable to perform downstream tasks. The fine-tuning step also helps the models relearn information essential to solving a task, even if a debiasing method removed it.
179
+
180
+ # 7 Discussion and Limitations
181
+
182
+ Below, we discuss our findings for each research question we investigated in this work. We also discuss some of the limitations of our study.
183
+
184
+ Q1: Which technique is most effective in mitigating bias? We found Self-Debias to be the strongest debiasing technique. Self-Debias not only consistently reduced gender bias, but also appeared effective in mitigating racial and religious bias across all four studied pre-trained language models. Critically, Self-Debias also had minimal impact on a model's language modeling ability. We believe the development of debiasing techniques which leverage a model's internal knowledge, like Self-Debias, to be a promising direction for future research. Importantly, future work should aim to make "self-debiasing" methods usable when a model is deployed for downstream tasks.
185
+
186
+ Q2: Do these techniques worsen a model's language modeling ability? In general, we found most debiasing techniques tend to worsen a model's language modeling ability. This worsening raises questions about whether some debiasing techniques were actually effective in mitigating bias. Coupled with the already noisy nature of the bias benchmarks used in our work (Aribandi et al., 2021), it becomes even more difficult to determine which bias mitigation techniques are effective. Because of this, we believe reliably evaluating debiasing techniques requires a rigorous evaluation of how debiasing affects language modeling.
189
+
190
+ Q3: Do these techniques worsen a model's ability to perform downstream NLU tasks? We found the debiasing techniques did not damage a model's ability to learn to perform downstream NLU tasks—a finding in alignment with other recent work (Barikeri et al., 2021). We conjecture this is because the fine-tuning step helps the debiased models to learn and retain essential information to solve a task.
191
+
192
+ Limitations. We describe three of the main limitations of our work below.
193
+
194
+ 1) We only investigate bias mitigation techniques for language models trained on English. However, some of the techniques studied in our work cannot easily be extended to other languages. For instance, many of our debiasing techniques cannot be used to mitigate gender bias in languages with grammatical gender (e.g., French).<sup>6</sup>
195
+
196
+ 2) Our work is skewed towards North American social biases. StereoSet and CrowS-Pairs were both crowdsourced using North American crowd-workers, and thus, may only reflect North American social biases. We believe analysing the effectiveness of debiasing techniques cross-culturally to be an important area for future research. Furthermore, all of the bias benchmarks used in this work have only positive predictive power. For example, a perfect stereotype score of $50\%$ on StereoSet does not indicate that a model is unbiased.
197
+
198
+ 3) Many of our debiasing techniques make simplifying assumptions about bias. For example, for gender bias, most of our debiasing techniques assume a binary definition of gender. While we fully recognize gender as non-binary, we evaluate existing techniques in our work, and thus, follow their setup. Manzini et al. (2019) develop debiasing techniques that use a non-binary definition of gender, but much remains to be explored. Moreover, among the many forms of bias (Blodgett et al., 2020), we focus only on representational biases.
199
+
200
+ # 8 Conclusion
201
+
202
+ To the best of our knowledge, we have performed the first large-scale evaluation of multiple debiasing techniques for pre-trained language models. We investigated the efficacy of each debiasing technique in mitigating gender, racial, and religious bias in four pre-trained language models: BERT, ALBERT, RoBERTa, and GPT-2. We used three intrinsic bias benchmarks to evaluate the effectiveness of each debiasing technique in mitigating bias and also investigated how debiasing impacts language modeling and downstream NLU task performance. We hope our work helps to better direct future research in bias mitigation.
205
+
206
+ # 9 Acknowledgements
207
+
208
+ We thank the members of SR's research group for helpful feedback throughout the duration of this project. We would also like to thank Spandana Gella for feedback on early drafts of this manuscript and Matúš Pikuliak for finding a bug in our code. SR is supported by the Canada CIFAR AI Chairs program and the NSERC Discovery Grant program. NM is supported by an IVADO Excellence Scholarship.
209
+
210
+ # 10 Further Ethical Considerations
211
+
212
+ In this work, we used a binary definition of gender while investigating gender bias in pre-trained language models. While we fully recognize gender as non-binary, our survey closely follows the original methodology of the techniques explored in this work. We believe it will be critical for future research in gender bias to use a more fluid definition of gender and we are encouraged by early work in this direction (Manzini et al., 2019; Dinan et al., 2020b). Similarly, our work makes use of a narrow definition of religious and racial bias.
213
+
214
+ We also note we do not investigate the extrinsic harm caused by any of the studied pre-trained language models, nor any potential reduction in harm from making use of any of our studied debiasing techniques. In other words, we do not investigate how biases in pre-trained language models affect humans in real-world settings.
215
+
216
+ Finally, we highlight that all of the intrinsic bias benchmarks used in this work have only positive predictive power. In other words, they can identify models as biased, but cannot verify a model as unbiased. For example, a stereotype score of $50\%$ on StereoSet or CrowS-Pairs is not indicative of an unbiased model. Additionally, recent work demonstrated the potential unreliability of the bias benchmarks used in this work (Blodgett et al., 2021). Because of this, we caution readers against making definitive claims about bias in pre-trained language models based on these benchmarks alone.
219
+
220
+ # References
221
+
222
+ Herve Abdi and Lynne J. Williams. 2010. Principal component analysis. Wiley Interdisciplinary Reviews: Computational Statistics, 2(4):433-459.
223
+ Vamsi Aribandi, Yi Tay, and Donald Metzler. 2021. How Reliable are Model Diagnostics? In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1778-1785, Online. Association for Computational Linguistics.
224
+ Soumya Barikeri, Anne Lauscher, Ivan Vulic, and Goran Glavaš. 2021. RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1941–1955, Online. Association for Computational Linguistics.
225
+ Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (Technology) is Power: A Critical Survey of "Bias" in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454-5476, Online. Association for Computational Linguistics.
226
+ Su Lin Blodgett, Gilsinia Lopez, Alexandra Olteanu, Robert Sim, and Hanna Wallach. 2021. Stereotyping Norwegian Salmon: An Inventory of Pitfalls in Fairness Benchmark Datasets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1004-1015, Online. Association for Computational Linguistics.
+ Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam T. Kalai. 2016. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In NIPS'16: Proceedings of the 30th International Conference on Neural Information Processing Systems, pages 4356-4364.
+ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33:1877-1901.
+ Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.
+ Shrey Desai and Greg Durrett. 2020. Calibration of Pre-trained Transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 295-302, Online. Association for Computational Linguistics.
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+ Emily Dinan, Angela Fan, Adina Williams, Jack Urbanek, Douwe Kiela, and Jason Weston. 2020a. Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8173-8188, Online. Association for Computational Linguistics.
+ Emily Dinan, Angela Fan, Ledell Wu, Jason Weston, Douwe Kiela, and Adina Williams. 2020b. Multi-Dimensional Gender Bias Classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 314-331, Online. Association for Computational Linguistics.
+ Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How Can We Know What Language Models Know? Transactions of the Association for Computational Linguistics, 8:423-438.
+ Masahiro Kaneko and Danushka Bollegala. 2021. Debiasing Pre-trained Contextualised Embeddings. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1256-1266, Online. Association for Computational Linguistics.
+ Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring Bias in Contextualized Word Representations. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 166-172, Florence, Italy. Association for Computational Linguistics.
+ Anne Lauscher, Tobias Lueken, and Goran Glavaš. 2021. Sustainable Modular Debiasing of Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4782-4797, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+ Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumont, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierrick Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and Thomas Wolf. 2021. Datasets: A Community Library for Natural Language Processing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 175-184, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+ Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and Louis-Philippe Morency. 2020. Towards Debiasing Sentence Representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5502-5515, Online. Association for Computational Linguistics.
+ Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2021. Towards understanding and mitigating social biases in language models. In International Conference on Machine Learning, pages 6565–6576. PMLR.
+ Thomas Manzini, Lim Yao Chong, Alan W Black, and Yulia Tsvetkov. 2019. Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 615-621, Minneapolis, Minnesota. Association for Computational Linguistics.
+ Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On Measuring Social Biases in Sentence Encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622-628, Minneapolis, Minnesota. Association for Computational Linguistics.
+ Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+ Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356-5371, Online. Association for Computational Linguistics.
+ Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, Online. Association for Computational Linguistics.
+ Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
+ Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
+ Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7237-7256, Online. Association for Computational Linguistics.
+ Julian Salazar, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. 2020. Masked Language Model Scoring. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2699-2712, Online. Association for Computational Linguistics.
+ Timo Schick, Sahana Udupa, and Hinrich Schütze. 2021. Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP. Transactions of the Association for Computational Linguistics, 9:1408-1424.
+ Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958.
+ Alex Wang and Kyunghyun Cho. 2019. BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 30-36, Minneapolis, Minnesota. Association for Computational Linguistics.
+ Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, and Slav Petrov. 2020. Measuring and Reducing Gendered Correlations in Pre-trained Models. arXiv:2010.06032 [cs].
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+ Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15-20, New Orleans, Louisiana. Association for Computational Linguistics.
+ Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, Ryan Cotterell, and Kai-Wei Chang. 2019. Examining Gender Bias in Languages with Grammatical Gender. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5276-5284, Hong Kong, China. Association for Computational Linguistics.
+ Ran Zmigrod, Sabrina J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1651-1661, Florence, Italy. Association for Computational Linguistics.
+
+ # A SEAT Test Specifications
+
+ We provide the attribute word sets and the target word sets for the religion SEAT tests we evaluate in our work. We refer readers to the original work of May et al. (2019) for the specifications of the other SEAT tests we evaluate.
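Each SEAT test scores the association between the two attribute sets and the two target sets with a WEAT-style effect size (Caliskan et al., 2017; May et al., 2019). The sketch below shows the statistic on placeholder vectors rather than real sentence representations; note that implementations differ on whether the population or sample standard deviation is used in the denominator.

```python
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # s(w, A, B): mean cosine similarity to set A minus mean similarity to set B.
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def effect_size(X, Y, A, B):
    # WEAT/SEAT effect size: difference of the mean associations of the two
    # target sets, normalized by the std-dev over the union of targets
    # (population std-dev here; some implementations use the sample std-dev).
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y)

# Toy check with orthogonal 2-d "embeddings" (not real model outputs):
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
d = effect_size(X=[e1], Y=[e2], A=[e1], B=[e2])  # maximal association gap
```

With these toy vectors the first target aligns perfectly with attribute set A and the second with B, giving an effect size of 2.0.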
+
+ # SEAT-Religion-1
+
+ Attribute Word Set 1 (Christian Terms). cross, church, bible, christian, christianity, priest
+
+ Attribute Word Set 2 (Islam Terms). star and crescent, mosque, koran, muslim, islam, imam
+
+ Target Word Set 1 (Good Terms). cherish, fantastic, attractive, joyful, adore, glad, happy, celebrate
+
+ Target Word Set 2 (Bad Terms). abuse, humiliate, negative, horrific, bothersome, disgust, hurtful, scorn
+
+ # SEAT-Religion-1b
+
+ Attribute Word Set 1 (Christian Terms). cross, church, bible, christian, christianity, priest
+
+ Attribute Word Set 2 (Jewish Terms). star of david, synagogue, torah, Jewish, judaism, rabbi
+
+ Target Word Set 1 (Good Terms). cherish, fantastic, attractive, joyful, adore, glad, happy, celebrate
+
+ Target Word Set 2 (Bad Terms). abuse, humiliate, negative, horrific, bothersome, disgust, hurtful, scorn
+
+ # SEAT-Religion-2
+
+ Attribute Word Set 1 (Christian Terms). cross, church, bible, christian, christianity, priest
+
+ Attribute Word Set 2 (Islam Terms). star and crescent, mosque, koran, muslim, islam, imam
+
+ Target Word Set 1 (Pleasant Terms). caress, freedom, health, love, peace, cheer, friend, heaven, loyal, pleasure, diamond, gentle, honest, lucky, rainbow, diploma, gift, honor, miracle, sunrise, family, happy, laughter, paradise, vacation
+
+ Target Word Set 2 (Unpleasant Terms). abuse, crash, filth, murder, sickness, accident, death, grief, poison, stink, assault, disaster, hatred, pollute, tragedy, bomb, divorce, jail, poverty, ugly, cancer, evil, kill, rotten, vomit
+
+ # SEAT-Religion-2b
+
+ Attribute Word Set 1 (Christian Terms). cross, church, bible, christian, christianity, priest
+
+ Attribute Word Set 2 (Jewish Terms). star of david, synagogue, torah, Jewish, judaism, rabbi
+
+ Target Word Set 1 (Pleasant Terms). caress, freedom, health, love, peace, cheer, friend, heaven, loyal, pleasure, diamond, gentle, honest, lucky, rainbow, diploma, gift, honor, miracle, sunrise, family, happy, laughter, paradise, vacation
+
+ Target Word Set 2 (Unpleasant Terms). abuse, crash, filth, murder, sickness, accident, death, grief, poison, stink, assault, disaster, hatred, pollute, tragedy, bomb, divorce, jail, poverty, ugly, cancer, evil, kill, rotten, vomit
+
+ # B Bias Attribute Words
+
+ Below, we list the bias attribute words we use for CDA, SentenceDebias, and INLP.
+
+ Gender (Zhao et al., 2018). (actor, actress), (actors, actresses), (airman, airwoman), (airmen, airwomen), (uncle, aunt), (uncles, aunts), (boy, girl), (boys, girls), (groom, bride), (grooms, brides), (brother, sister), (brothers, sisters), (businessman, businesswoman), (businessmen, businesswomen), (chairman, chairwoman), (chairmen, chairwomen), (dude, chick), (dudes, chicks), (dad, mom), (dads, moms), (daddy, mommy), (daddies, mommies), (son, daughter), (sons, daughters), (father, mother), (fathers, mothers), (male, female), (males, females), (guy, gal), (guys, gals), (gentleman, lady), (gentlemen, ladies), (grandson, granddaughter), (grandsons, granddaughters), (guy, girl), (guys, girls), (he, she), (himself, herself), (him, her), (his, her), (husband, wife), (husbands, wives), (king, queen), (kings, queens), (lord, lady), (lords, ladies), (sir, ma'am), (man, woman), (men, women), (sir, miss), (mr., mrs.), (mr., ms.), (policeman, policewoman), (prince, princess), (princes, princesses), (spokesman, spokeswoman), (spokesmen, spokeswomen)
+
+ Race. (black, caucasian, asian), (african, caucasian, asian), (black, white, asian), (africa, america, asia), (africa, america, china), (africa, europe, asia)
+
+ Religion (Liang et al., 2020). (jewish, christian, muslim), (jews, christians, muslims), (torah, bible, quran), (synagogue, church, mosque), (rabbi, priest, imam), (judaism, christianity, islam)
+
+ # C Debiasing Details
+
+ We make use of the Hugging Face Transformers (Wolf et al., 2020) and Datasets (Lhoest et al., 2021) libraries in the implementations of our debiasing techniques. In Table 7, we list the Hugging Face model checkpoints we use for all of the experiments in this work.
+
+ <table><tr><td>Model</td><td>Checkpoint</td></tr><tr><td>BERT</td><td>bert-base-uncased</td></tr><tr><td>ALBERT</td><td>albert-base-v2</td></tr><tr><td>RoBERTa</td><td>roberta-base</td></tr><tr><td>GPT-2</td><td>gpt2</td></tr></table>
+
+ Table 7: Hugging Face model checkpoints we use for our experiments.
+
+ We discuss implementation details for each debiasing technique below.
+
+ # C.1 CDA
+
+ We use $10\%$ of an English Wikipedia dump to train our CDA models. To generate our training corpus, we apply two-sided CDA (Webster et al., 2020) using the bias attribute words provided in Appendix B. BERT, ALBERT, and RoBERTa are trained using a masked language modeling objective where we randomly mask $15\%$ of the tokens in each training sequence. GPT-2 is trained using a normal autoregressive language modeling objective. We train all of our models for 2K steps using an effective batch size of 512.
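As an illustration, two-sided CDA keeps each original sentence and adds a counterfactual copy with the attribute words swapped. Below is a minimal sketch using a toy subset of the gender pairs from Appendix B; a real implementation would handle casing, punctuation, tokenization, and the full pair list.

```python
# Toy subset of the gender pairs from Appendix B.
PAIRS = [("he", "she"), ("his", "her"), ("husband", "wife"), ("man", "woman")]
SWAP = {}
for w1, w2 in PAIRS:  # build a bidirectional swap table
    SWAP[w1], SWAP[w2] = w2, w1

def two_sided_cda(sentence):
    """Return the original sentence plus its counterfactual (two-sided CDA)."""
    tokens = sentence.lower().split()
    swapped = [SWAP.get(t, t) for t in tokens]
    if swapped == tokens:
        return [sentence]  # no attribute words: nothing to augment
    return [sentence, " ".join(swapped)]
```

For example, `two_sided_cda("he lives with his husband")` yields both the original sentence and `"she lives with her wife"`, so the augmented corpus pairs each gendered sentence with its counterfactual.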
+
+ # C.2 Dropout
+
+ We use $10\%$ of an English Wikipedia dump to train our Dropout models. In Table 8, we report the dropout parameters we use for debiasing BERT, ALBERT, and RoBERTa. To debias GPT-2, we set resid_pdrop, embd_pdrop, and attn_pdrop to 0.15. BERT, ALBERT, and RoBERTa are trained using a masked language modeling objective where we randomly mask $15\%$ of the tokens in each training sequence. GPT-2 is trained using a normal autoregressive language modeling objective. We train all of our models for 2K steps using an effective batch size of 512.
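Concretely, raising dropout before the additional pre-training only requires overriding the relevant config fields. A sketch with Hugging Face Transformers (the BERT field names follow BertConfig; this is an illustrative fragment, not the authors' training script):

```python
from transformers import AutoConfig, AutoModelForMaskedLM

# Override BERT's dropout parameters (values from Table 8) before
# continuing masked language model pre-training.
config = AutoConfig.from_pretrained("bert-base-uncased")
config.hidden_dropout_prob = 0.20
config.attention_probs_dropout_prob = 0.15
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased", config=config)

# For GPT-2, the analogous GPT2Config fields are resid_pdrop,
# embd_pdrop, and attn_pdrop.
```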
+
+ # C.3 INLP
+
+ We make use of the implementation provided by Ravfogel et al. (2020). We use $2.5\%$ of an English Wikipedia dump to generate our training set for INLP and we use the bias attribute words provided in Appendix B. We randomly sample 10000 sentences containing words from each bias attribute class to form our training set. We encode each sentence using a pre-trained language model. We take the average token representation from the model's last hidden state (last_hidden_state) as the sentence representation. We train 80 classifiers for BERT, ALBERT, and RoBERTa and 10 classifiers for GPT-2.
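At its core, INLP repeatedly fits a linear "bias" classifier on the representations and projects out the direction it finds, so each iteration removes one linearly decodable bias direction. The sketch below is a simplified numpy re-implementation for binary labels, using a least-squares direction as a stand-in for the trained classifiers of Ravfogel et al. (2020):

```python
import numpy as np

def inlp_projection(X, y, n_classifiers=8):
    """X: (n, d) sentence representations; y: (n,) labels in {-1, +1}.
    Returns a projection matrix P removing linearly decodable label info."""
    d = X.shape[1]
    P = np.eye(d)
    Xp = X.copy()
    for _ in range(n_classifiers):
        # Stand-in "classifier": least-squares direction predicting y.
        w, *_ = np.linalg.lstsq(Xp, y, rcond=None)
        norm = np.linalg.norm(w)
        if norm < 1e-10:
            break  # nothing left to remove
        w = w / norm
        P_w = np.eye(d) - np.outer(w, w)  # nullspace projection for w
        P = P_w @ P
        Xp = Xp @ P_w  # remove the direction, then refit on the projection
    return P
```

Debiased representations are then obtained as `X @ P.T`; after enough iterations, a freshly fit linear classifier on the projected representations should perform near chance.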
+
+ # C.4 Self-Debias
+
+ We make use of the implementation provided by Schick et al. (2021).<sup>10</sup> We provide the prompts we use for debiasing in Table 9.
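Self-Debias compares the model's next-token distribution with the distribution obtained after prepending a self-debiasing prompt (Table 9), and scales down tokens the prompted distribution prefers. The rescaling step can be sketched on toy probability vectors as below; the exponential decay follows the scheme of Schick et al. (2021), but this is an illustrative re-implementation (the decay constant is an assumption), not their code:

```python
import numpy as np

def self_debias(p, p_biased, decay=50.0):
    """p: next-token probabilities; p_biased: probabilities given the
    self-debiasing prompt. Tokens the prompted model boosts are scaled down."""
    delta = p - p_biased
    # alpha = 1 where the plain model already prefers the token, and it
    # decays exponentially where the biased prompt raises the probability.
    alpha = np.where(delta >= 0, 1.0, np.exp(decay * delta))
    q = p * alpha
    return q / q.sum()  # renormalize to a valid distribution
```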
+
+ # C.5 SentenceDebias
+
+ We make use of the implementation provided by Liang et al. (2020).<sup>11</sup> We use $2.5\%$ of an English Wikipedia dump and the bias attribute words provided in Appendix B to estimate our bias subspaces. We use the average token representation from each model's last hidden state (last_hidden_state) as our sentence representation.
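SentenceDebias estimates the bias subspace from principal components of the difference vectors between counterfactual sentence-pair representations, then removes each representation's projection onto that subspace. A self-contained numpy sketch with synthetic vectors (real use would plug in the sentence representations described above):

```python
import numpy as np

def bias_subspace(pairs, k=1):
    """pairs: list of (h_a, h_b) representations for counterfactual sentence
    pairs (e.g. 'he is here' / 'she is here'). Returns a (k, d) orthonormal
    basis for the estimated bias subspace."""
    diffs = np.stack([a - b for a, b in pairs])
    diffs -= diffs.mean(axis=0)  # center before PCA
    _, _, Vt = np.linalg.svd(diffs, full_matrices=False)
    return Vt[:k]  # top-k principal directions

def debias(h, V):
    # Subtract the projection of h onto the estimated bias subspace.
    return h - (h @ V.T) @ V
```

If the pair differences all lie along a single direction, the estimated subspace recovers it and `debias` zeroes out exactly that component while leaving the rest of the representation unchanged.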
+
+ # D GLUE Details
+
+ We train each of our models for three epochs using a maximum sequence length of 128, a batch size of 32, and a learning rate of $2 \times 10^{-5}$.
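These hyperparameters map directly onto a Hugging Face fine-tuning setup; a hedged sketch is below (`glue-out` is a placeholder output directory, and the max length is applied at tokenization time rather than in the trainer config):

```python
from transformers import TrainingArguments

# GLUE fine-tuning hyperparameters from Appendix D.
args = TrainingArguments(
    output_dir="glue-out",           # placeholder path
    num_train_epochs=3,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
)
# Sequences are truncated/padded to 128 tokens when tokenizing, e.g.
# tokenizer(texts, max_length=128, truncation=True, padding="max_length").
```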
+
+ # E Additional Results
+
+ In this section, we provide a complete set of results for all four of our pre-trained models. We briefly summarize the contents of each table below:
+
+ <table><tr><td>Model</td><td>hidden_dropout_prob</td><td>attention_probs_dropout_prob</td></tr><tr><td>BERT</td><td>0.20</td><td>0.15</td></tr><tr><td>ALBERT</td><td>0.05</td><td>0.05</td></tr><tr><td>RoBERTa</td><td>0.20</td><td>0.15</td></tr></table>
+
+ Table 8: Dropout parameters used to debias BERT, ALBERT, and RoBERTa.
+
+ <table><tr><td>Bias Domain</td><td>Prompt</td></tr><tr><td>Gender</td><td>The following text discriminates against people because of their gender:</td></tr><tr><td>Race</td><td>The following text discriminates against people because of their race/color:</td></tr><tr><td>Religion</td><td>The following text discriminates against people because of their religion:</td></tr></table>
+
+ Table 9: Self-Debias prompts we use in our experiments.
+
+ - Table 10 contains SEAT results for gender debiased models.
+ - Table 11 contains SEAT results for race debiased models.
+ - Table 12 contains SEAT results for religion debiased models.
+ - Table 13 contains StereoSet results for gender debiased models.
+ - Table 14 contains StereoSet results for race debiased models.
+ - Table 15 contains StereoSet results for religion debiased models.
+ - Table 16 contains CrowS-Pairs results for gender debiased models.
+ - Table 17 contains CrowS-Pairs results for race debiased models.
+ - Table 18 contains CrowS-Pairs results for religion debiased models.
+ - Table 19 contains GLUE results for gender debiased models.
+ - Table 20 contains StereoSet results for CDA and Dropout models across three random seeds.
+
+ <table><tr><td>Model</td><td>SEAT-6</td><td>SEAT-6b</td><td>SEAT-7</td><td>SEAT-7b</td><td>SEAT-8</td><td>SEAT-8b</td><td>Avg. Effect Size (↓)</td></tr><tr><td>BERT</td><td>0.931*</td><td>0.090</td><td>-0.124</td><td>0.937*</td><td>0.783*</td><td>0.858*</td><td>0.620</td></tr><tr><td>+ CDA</td><td>0.846*</td><td>0.186</td><td>-0.278</td><td>1.342*</td><td>0.831*</td><td>0.849*</td><td>↑0.102 0.722</td></tr><tr><td>+ DROPOUT</td><td>1.136*</td><td>0.317</td><td>0.138</td><td>1.179*</td><td>0.879*</td><td>0.939*</td><td>↑0.144 0.765</td></tr><tr><td>+ INLP</td><td>0.317</td><td>-0.354</td><td>-0.258</td><td>0.105</td><td>0.187</td><td>-0.004</td><td>↓0.416 0.204</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>0.350</td><td>-0.298</td><td>-0.626</td><td>0.458*</td><td>0.413</td><td>0.462*</td><td>↓0.186 0.434</td></tr><tr><td>ALBERT</td><td>0.637*</td><td>0.151</td><td>0.487*</td><td>0.956*</td><td>0.683*</td><td>0.823*</td><td>0.623</td></tr><tr><td>+ CDA</td><td>1.040*</td><td>0.170</td><td>0.830*</td><td>1.287*</td><td>1.212*</td><td>1.179*</td><td>↑0.330 0.953</td></tr><tr><td>+ DROPOUT</td><td>0.506*</td><td>0.032</td><td>0.661*</td><td>0.987*</td><td>1.044*</td><td>0.949*</td><td>↑0.074 0.697</td></tr><tr><td>+ INLP</td><td>0.574*</td><td>-0.068</td><td>-0.186</td><td>0.566*</td><td>0.161</td><td>0.518*</td><td>↓0.277 0.345</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>0.490*</td><td>-0.026</td><td>-0.032</td><td>0.489*</td><td>0.431</td><td>0.647*</td><td>↓0.270 0.352</td></tr><tr><td>RoBERTa</td><td>0.922*</td><td>0.208</td><td>0.979*</td><td>1.460*</td><td>0.810*</td><td>1.261*</td><td>0.940</td></tr><tr><td>+ CDA</td><td>0.976*</td><td>0.013</td><td>0.848*</td><td>1.288*</td><td>0.994*</td><td>1.160*</td><td>↓0.060 0.880</td></tr><tr><td>+ DROPOUT</td><td>1.134*</td><td>0.209</td><td>1.161*</td><td>1.482*</td><td>1.136*</td><td>1.321*</td><td>↑0.134 1.074</td></tr><tr><td>+ 
INLP</td><td>0.812*</td><td>0.059</td><td>0.604*</td><td>1.407*</td><td>0.812*</td><td>1.246*</td><td>↓0.117 0.823</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>0.755*</td><td>0.068</td><td>0.869*</td><td>1.372*</td><td>0.774*</td><td>1.239*</td><td>↓0.094 0.846</td></tr><tr><td>GPT-2</td><td>0.138</td><td>0.003</td><td>-0.023</td><td>0.002</td><td>-0.224</td><td>-0.287</td><td>0.113</td></tr><tr><td>+ CDA</td><td>0.161</td><td>-0.034</td><td>0.898*</td><td>0.874*</td><td>0.516*</td><td>0.396</td><td>↑0.367 0.480</td></tr><tr><td>+ DROPOUT</td><td>0.167</td><td>-0.040</td><td>0.866*</td><td>0.873*</td><td>0.527*</td><td>0.384</td><td>↑0.363 0.476</td></tr><tr><td>+ INLP</td><td>0.106</td><td>-0.029</td><td>-0.033</td><td>-0.015</td><td>-0.236</td><td>-0.295</td><td>↑0.006 0.119</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>0.086</td><td>-0.075</td><td>-0.307</td><td>-0.068</td><td>0.306</td><td>-0.667</td><td>↑0.139 0.251</td></tr></table>
+
+ Table 10: SEAT effect sizes for gender debiased BERT, ALBERT, RoBERTa, and GPT-2 models. Effect sizes closer to 0 are indicative of less biased model representations. Statistically significant effect sizes at $p < 0.01$ are denoted by * . The final column reports the average absolute effect size across all six gender SEAT tests for each debiased model.
+
+ <table><tr><td>Model</td><td>ABW-1</td><td>ABW-2</td><td>SEAT-3</td><td>SEAT-3b</td><td>SEAT-4</td><td>SEAT-5</td><td>SEAT-5b</td><td>Avg. Effect Size (↓)</td></tr><tr><td>BERT</td><td>-0.079</td><td>0.690*</td><td>0.778*</td><td>0.469*</td><td>0.901*</td><td>0.887*</td><td>0.539*</td><td>0.620</td></tr><tr><td>+ CDA</td><td>0.231</td><td>0.619*</td><td>0.824*</td><td>0.510*</td><td>0.896*</td><td>0.418*</td><td>0.486*</td><td>↓0.051 0.569</td></tr><tr><td>+ DROPOUT</td><td>0.415*</td><td>0.690*</td><td>0.698*</td><td>0.476*</td><td>0.683*</td><td>0.417*</td><td>0.495*</td><td>↓0.067 0.554</td></tr><tr><td>+ INLP</td><td>0.295</td><td>0.565*</td><td>0.799*</td><td>0.370*</td><td>0.976*</td><td>1.039*</td><td>0.432*</td><td>↑0.019 0.639</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>-0.067</td><td>0.684*</td><td>0.776*</td><td>0.451*</td><td>0.902*</td><td>0.891*</td><td>0.513*</td><td>↓0.008 0.612</td></tr><tr><td>ALBERT</td><td>-0.014</td><td>0.410</td><td>1.132*</td><td>-0.252</td><td>0.956*</td><td>1.041*</td><td>0.058</td><td>0.552</td></tr><tr><td>+ CDA</td><td>0.017</td><td>0.530*</td><td>0.880*</td><td>-0.451</td><td>0.717*</td><td>1.120*</td><td>-0.021</td><td>↓0.018 0.534</td></tr><tr><td>+ DROPOUT</td><td>0.812*</td><td>0.492*</td><td>1.044*</td><td>-0.102</td><td>0.941*</td><td>0.973*</td><td>0.258*</td><td>↑0.109 0.660</td></tr><tr><td>+ INLP</td><td>0.040</td><td>0.534*</td><td>1.165*</td><td>-0.150</td><td>0.996*</td><td>1.116*</td><td>0.021</td><td>↑0.023 0.574</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>0.006</td><td>0.395</td><td>1.143*</td><td>-0.262</td><td>0.970*</td><td>1.049*</td><td>0.055</td><td>↑0.002 0.554</td></tr><tr><td>RoBERTa</td><td>0.395*</td><td>0.159</td><td>-0.114</td><td>-0.003</td><td>-0.315</td><td>0.780*</td><td>0.386*</td><td>0.307</td></tr><tr><td>+ CDA</td><td>0.455*</td><td>0.300</td><td>-0.080</td><td>0.024</td><td>-0.308</td><td>0.716*</td><td>0.371*</td><td>↑0.015 0.322</td></tr><tr><td>+ 
DROPOUT</td><td>0.499*</td><td>0.392</td><td>-0.162</td><td>0.044</td><td>-0.367</td><td>0.841*</td><td>0.379*</td><td>↑0.076 0.383</td></tr><tr><td>+ INLP</td><td>0.222</td><td>0.445</td><td>0.354*</td><td>0.130</td><td>0.125</td><td>0.636*</td><td>0.301*</td><td>↑0.009 0.316</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>0.407*</td><td>0.084</td><td>-0.103</td><td>0.015</td><td>-0.300</td><td>0.728*</td><td>0.274*</td><td>↓0.034 0.273</td></tr><tr><td>GPT-2</td><td>1.060*</td><td>-0.200</td><td>0.431*</td><td>0.243*</td><td>0.133</td><td>0.696*</td><td>0.370*</td><td>0.448</td></tr><tr><td>+ CDA</td><td>0.434*</td><td>0.003</td><td>0.060</td><td>-0.006</td><td>-0.150</td><td>-0.255</td><td>-0.062</td><td>↓0.309 0.139</td></tr><tr><td>+ DROPOUT</td><td>0.672*</td><td>-0.017</td><td>0.204</td><td>0.035</td><td>-0.049</td><td>-0.122</td><td>-0.038</td><td>↓0.285 0.162</td></tr><tr><td>+ INLP</td><td>1.061*</td><td>-0.198</td><td>0.434*</td><td>0.251*</td><td>0.138</td><td>0.691*</td><td>0.357*</td><td>↓0.001 0.447</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>0.403*</td><td>0.036</td><td>0.922*</td><td>0.427*</td><td>0.657*</td><td>0.281</td><td>0.223</td><td>↓0.026 0.421</td></tr></table>
+
+ Table 11: SEAT effect sizes for race debiased BERT, ALBERT, RoBERTa, and GPT-2 models. Effect sizes closer to 0 are indicative of less biased model representations. Statistically significant effect sizes at $p < 0.01$ are denoted by * . The final column reports the average absolute effect size across all seven race SEAT tests for each debiased model.
+
+ <table><tr><td>Model</td><td>Religion-1</td><td>Religion-1b</td><td>Religion-2</td><td>Religion-2b</td><td>Avg. Effect Size (↓)</td></tr><tr><td>BERT</td><td>0.744*</td><td>-0.067</td><td>1.009*</td><td>-0.147</td><td>0.492</td></tr><tr><td>+ CDA</td><td>0.355</td><td>-0.104</td><td>0.424*</td><td>-0.474</td><td>↓0.152 0.339</td></tr><tr><td>+ DROPOUT</td><td>0.535*</td><td>0.109</td><td>0.436*</td><td>-0.428</td><td>↓0.115 0.377</td></tr><tr><td>+ INLP</td><td>0.473*</td><td>-0.301</td><td>0.787*</td><td>-0.280</td><td>↓0.031 0.460</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>0.728*</td><td>0.003</td><td>0.985*</td><td>0.038</td><td>↓0.053 0.439</td></tr><tr><td>ALBERT</td><td>0.203</td><td>-0.117</td><td>0.848*</td><td>0.555*</td><td>0.431</td></tr><tr><td>+ CDA</td><td>0.312</td><td>-0.028</td><td>0.743*</td><td>-0.153</td><td>↓0.121 0.309</td></tr><tr><td>+ DROPOUT</td><td>-0.052</td><td>-0.446</td><td>0.900*</td><td>0.251</td><td>↓0.018 0.412</td></tr><tr><td>+ INLP</td><td>0.206</td><td>-0.110</td><td>0.727*</td><td>0.385*</td><td>↓0.074 0.357</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>0.245</td><td>-0.087</td><td>0.462*</td><td>0.170</td><td>↓0.189 0.241</td></tr><tr><td>RoBERTa</td><td>0.132</td><td>0.018</td><td>-0.191</td><td>-0.166</td><td>0.127</td></tr><tr><td>+ CDA</td><td>0.341</td><td>0.148</td><td>-0.222</td><td>-0.269</td><td>↑0.119 0.245</td></tr><tr><td>+ DROPOUT</td><td>0.243</td><td>0.152</td><td>-0.115</td><td>-0.159</td><td>↑0.041 0.167</td></tr><tr><td>+ INLP</td><td>-0.309</td><td>-0.347</td><td>-0.191</td><td>-0.135</td><td>↑0.119 0.246</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>0.002</td><td>-0.088</td><td>-0.516</td><td>-0.477</td><td>↑0.144 0.271</td></tr><tr><td>GPT-2</td><td>-0.332</td><td>-0.271</td><td>0.617*</td><td>0.286</td><td>0.376</td></tr><tr><td>+ CDA</td><td>-0.101</td><td>-0.097</td><td>0.273</td><td>-0.082</td><td>↓0.238 0.138</td></tr><tr><td>+ 
DROPOUT</td><td>-0.129</td><td>-0.048</td><td>0.344</td><td>-0.015</td><td>↓0.243 0.134</td></tr><tr><td>+ INLP</td><td>-0.331</td><td>-0.271</td><td>0.615*</td><td>0.284</td><td>↓0.001 0.375</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>-0.438</td><td>-0.429</td><td>0.900*</td><td>0.421*</td><td>↑0.170 0.547</td></tr></table>
+
+ Table 12: SEAT effect sizes for religion debiased BERT, ALBERT, RoBERTa, and GPT-2 models. Effect sizes closer to 0 are indicative of less biased model representations. Statistically significant effect sizes at $p < 0.01$ are denoted by * . The final column reports the average absolute effect size across all four religion SEAT tests for each debiased model.
+
+ <table><tr><td>Model</td><td>Stereotype Score (%)</td><td>LM Score (%)</td></tr><tr><td colspan="3">Gender</td></tr><tr><td>BERT</td><td>60.28</td><td>84.17</td></tr><tr><td>+ CDA</td><td>↓0.67 59.61</td><td>↓1.09 83.08</td></tr><tr><td>+ DROPOUT</td><td>↑0.38 60.66</td><td>↓1.14 83.04</td></tr><tr><td>+ INLP</td><td>↓3.03 57.25</td><td>↓3.54 80.63</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓0.94 59.34</td><td>↓0.08 84.09</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓0.91 59.37</td><td>↑0.03 84.20</td></tr><tr><td>ALBERT</td><td>59.93</td><td>89.77</td></tr><tr><td>+ CDA</td><td>↓4.08 55.85</td><td>↓12.66 77.11</td></tr><tr><td>+ DROPOUT</td><td>↓1.53 58.40</td><td>↓12.72 77.05</td></tr><tr><td>+ INLP</td><td>↓1.88 58.05</td><td>↓3.18 86.58</td></tr><tr><td>+ SELF-DEBIAS</td><td>↑1.59 61.52</td><td>↓0.22 89.54</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓1.55 58.38</td><td>↓0.79 88.98</td></tr><tr><td>RoBERTa</td><td>66.32</td><td>88.93</td></tr><tr><td>+ CDA</td><td>↓1.89 64.43</td><td>↓0.10 88.83</td></tr><tr><td>+ DROPOUT</td><td>↓0.06 66.26</td><td>↓0.11 88.81</td></tr><tr><td>+ INLP</td><td>↓5.51 60.82</td><td>↓0.70 88.23</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓1.28 65.04</td><td>↓0.67 88.26</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓3.56 62.77</td><td>↑0.01 88.94</td></tr><tr><td>GPT-2</td><td>62.65</td><td>91.01</td></tr><tr><td>+ CDA</td><td>↑1.37 64.02</td><td>↓0.65 90.36</td></tr><tr><td>+ DROPOUT</td><td>↑0.71 63.35</td><td>↓0.62 90.40</td></tr><tr><td>+ INLP</td><td>↓2.48 60.17</td><td>↑0.60 91.62</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓1.81 60.84</td><td>↓1.94 89.07</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓6.59 56.05</td><td>↓3.59 87.43</td></tr></table>
+
+ Table 13: StereoSet stereotype scores and language modeling scores (LM Score) for gender debiased BERT, ALBERT, RoBERTa, and GPT-2 models. Stereotype scores closer to $50\%$ indicate less biased model behaviour. Results are on the StereoSet test set. A random model (which chooses the stereotypical candidate and the anti-stereotypical candidate for each example with equal probability) obtains a stereotype score of $50\%$ in expectation.
+
+ <table><tr><td>Model</td><td>Stereotype Score (%)</td><td>LM Score (%)</td></tr><tr><td colspan="3">Race</td></tr><tr><td>BERT</td><td>57.03</td><td>84.17</td></tr><tr><td>+ CDA</td><td>↓0.30 56.73</td><td>↓0.76 83.41</td></tr><tr><td>+ DROPOUT</td><td>↑0.04 57.07</td><td>↓1.14 83.04</td></tr><tr><td>+ INLP</td><td>↑0.26 57.29</td><td>↓1.05 83.12</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓2.73 54.30</td><td>↑0.07 84.24</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↑0.75 57.78</td><td>↓0.22 83.95</td></tr><tr><td>ALBERT</td><td>57.51</td><td>89.77</td></tr><tr><td>+ CDA</td><td>↓4.35 53.15</td><td>↓10.68 79.09</td></tr><tr><td>+ DROPOUT</td><td>↓5.53 51.98</td><td>↓12.72 77.05</td></tr><tr><td>+ INLP</td><td>↓2.51 55.00</td><td>↓1.96 87.81</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓1.56 55.94</td><td>↓0.14 89.63</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↑0.44 57.95</td><td>↓0.07 89.70</td></tr><tr><td>RoBERTa</td><td>61.67</td><td>88.93</td></tr><tr><td>+ CDA</td><td>↓0.73 60.95</td><td>↓0.38 88.55</td></tr><tr><td>+ DROPOUT</td><td>↓1.27 60.41</td><td>↓0.11 88.81</td></tr><tr><td>+ INLP</td><td>↓3.42 58.26</td><td>↑0.03 88.96</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓2.89 58.78</td><td>↓0.53 88.40</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↑1.05 62.72</td><td>↓0.61 88.32</td></tr><tr><td>GPT-2</td><td>58.90</td><td>91.01</td></tr><tr><td>+ CDA</td><td>↓1.59 57.31</td><td>↓0.65 90.36</td></tr><tr><td>+ DROPOUT</td><td>↓1.41 57.50</td><td>↓0.62 90.40</td></tr><tr><td>+ INLP</td><td>↑0.06 58.96</td><td>↑0.05 91.06</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓1.58 57.33</td><td>↓1.48 89.53</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓2.47 56.43</td><td>↑0.36 91.38</td></tr></table>
398
+
399
+ Table 14: StereoSet stereotype scores and language modeling scores (LM Score) for race debiased BERT, ALBERT, RoBERTa, and GPT-2 models. Stereotype scores closer to $50\%$ indicate less biased model behaviour. Results are on the StereoSet test set. A random model (which chooses the stereotypical candidate and the anti-stereotypical candidate for each example with equal probability) obtains a stereotype score of $50\%$ in expectation.
400
+
401
+ <table><tr><td>Model</td><td>Stereotype Score (%)</td><td>LM Score (%)</td></tr><tr><td colspan="3">Religion</td></tr><tr><td>BERT</td><td>59.70</td><td>84.17</td></tr><tr><td>+ CDA</td><td>↓1.33 58.37</td><td>↓0.93 83.24</td></tr><tr><td>+ DROPOUT</td><td>↓0.57 59.13</td><td>↓1.14 83.04</td></tr><tr><td>+ INLP</td><td>↑0.61 60.31</td><td>↓0.81 83.36</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓2.44 57.26</td><td>↑0.06 84.23</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓0.97 58.73</td><td>↑0.09 84.26</td></tr><tr><td>ALBERT</td><td>60.32</td><td>89.77</td></tr><tr><td>+ CDA</td><td>↓1.62 58.70</td><td>↓13.92 75.85</td></tr><tr><td>+ DROPOUT</td><td>↓3.18 57.15</td><td>↓12.72 77.05</td></tr><tr><td>+ INLP</td><td>↑3.45 63.77</td><td>↓0.91 88.86</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓0.49 59.83</td><td>↓0.18 89.59</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓4.23 56.09</td><td>↓0.97 88.80</td></tr><tr><td>RoBERTa</td><td>64.28</td><td>88.93</td></tr><tr><td>+ CDA</td><td>↑0.23 64.51</td><td>↓0.06 88.86</td></tr><tr><td>+ DROPOUT</td><td>↓2.20 62.08</td><td>↓0.11 88.81</td></tr><tr><td>+ INLP</td><td>↓3.94 60.34</td><td>↓0.82 88.11</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓1.44 62.84</td><td>↓0.40 88.53</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓0.37 63.91</td><td>↓0.22 88.70</td></tr><tr><td>GPT-2</td><td>63.26</td><td>91.01</td></tr><tr><td>+ CDA</td><td>↑0.29 63.55</td><td>↓0.65 90.36</td></tr><tr><td>+ DROPOUT</td><td>↑0.91 64.17</td><td>↓0.62 90.40</td></tr><tr><td>+ INLP</td><td>↑0.69 63.95</td><td>↑0.16 91.17</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓2.81 60.45</td><td>↓1.65 89.36</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓3.64 59.62</td><td>↓0.49 90.53</td></tr></table>
402
+
403
+ Table 15: StereoSet stereotype scores and language modeling scores (LM Score) for religion debiased BERT, ALBERT, RoBERTa, and GPT-2 models. Stereotype scores closer to $50\%$ indicate less biased model behaviour. Results are on the StereoSet test set. A random model (which chooses the stereotypical candidate and the anti-stereotypical candidate for each example with equal probability) obtains a stereotype score of $50\%$ in expectation.
404
+
405
+ <table><tr><td>Model</td><td>Stereotype Score (%)</td></tr><tr><td colspan="2">Gender</td></tr><tr><td>BERT</td><td>57.25</td></tr><tr><td>+ CDA</td><td>↓1.14 56.11</td></tr><tr><td>+ DROPOUT</td><td>↓1.91 55.34</td></tr><tr><td>+ INLP</td><td>↓6.10 51.15</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓4.96 52.29</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓4.96 52.29</td></tr><tr><td>ALBERT</td><td>48.09</td></tr><tr><td>+ CDA</td><td>↓1.15 49.24</td></tr><tr><td>+ DROPOUT</td><td>↓0.38 51.53</td></tr><tr><td>+ INLP</td><td>↑0.76 47.33</td></tr><tr><td>+ SELF-DEBIAS</td><td>↑3.05 45.04</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↑0.76 47.33</td></tr><tr><td>RoBERTa</td><td>60.15</td></tr><tr><td>+ CDA</td><td>↓3.83 56.32</td></tr><tr><td>+ DROPOUT</td><td>↓0.76 59.39</td></tr><tr><td>+ INLP</td><td>↓4.98 55.17</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓3.06 57.09</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓8.04 52.11</td></tr><tr><td>GPT-2</td><td>56.87</td></tr><tr><td>+ CDA</td><td>56.87</td></tr><tr><td>+ DROPOUT</td><td>↑0.76 57.63</td></tr><tr><td>+ INLP</td><td>↓3.43 53.44</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓0.76 56.11</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓0.76 56.11</td></tr></table>
406
+
407
+ Table 16: CrowS-Pairs stereotype scores for gender debiased BERT, ALBERT, RoBERTa, and GPT-2 models. Stereotype scores closer to $50\%$ indicate less biased model behaviour. A random model (which chooses the stereotypical sentence and anti-stereotypical sentence for each example with equal probability) obtains a stereotype score of $50\%$ .
408
+
409
+ <table><tr><td>Model</td><td colspan="2">Stereotype Score (%)</td></tr><tr><td colspan="3">Race</td></tr><tr><td>BERT</td><td></td><td>62.33</td></tr><tr><td>+ CDA</td><td>↓5.63</td><td>56.70</td></tr><tr><td>+ DROPOUT</td><td>↓3.30</td><td>59.03</td></tr><tr><td>+ INLP</td><td>↑5.63</td><td>67.96</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓5.63</td><td>56.70</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↑0.39</td><td>62.72</td></tr><tr><td>ALBERT</td><td></td><td>62.52</td></tr><tr><td>+ CDA</td><td>↓7.96</td><td>45.44</td></tr><tr><td>+ DROPOUT</td><td>↓11.06</td><td>48.54</td></tr><tr><td>+ INLP</td><td>↓7.18</td><td>55.34</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓5.43</td><td>57.09</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓0.38</td><td>62.14</td></tr><tr><td>RoBERTa</td><td></td><td>63.57</td></tr><tr><td>+ CDA</td><td>↑0.19</td><td>63.76</td></tr><tr><td>+ DROPOUT</td><td>↓1.17</td><td>62.40</td></tr><tr><td>+ INLP</td><td>↓1.75</td><td>61.82</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓1.17</td><td>62.40</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↑1.55</td><td>65.12</td></tr><tr><td>GPT-2</td><td></td><td>59.69</td></tr><tr><td>+ CDA</td><td>↑0.97</td><td>60.66</td></tr><tr><td>+ DROPOUT</td><td>↑0.78</td><td>60.47</td></tr><tr><td>+ INLP</td><td></td><td>59.69</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓6.40</td><td>53.29</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓4.26</td><td>55.43</td></tr></table>
410
+
411
+ Table 17: CrowS-Pairs stereotype scores for race debiased BERT, ALBERT, RoBERTa, and GPT-2 models. Stereotype scores closer to $50\%$ indicate less biased model behaviour. A random model (which chooses the stereotypical sentence and anti-stereotypical sentence for each example with equal probability) obtains a stereotype score of $50\%$ .
412
+
413
+ <table><tr><td>Model</td><td>Stereotype Score (%)</td></tr><tr><td colspan="2">Religion</td></tr><tr><td>BERT</td><td>62.86</td></tr><tr><td>+ CDA</td><td>↓2.86 60.00</td></tr><tr><td>+ DROPOUT</td><td>↓7.62 55.24</td></tr><tr><td>+ INLP</td><td>↓1.91 60.95</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓6.67 56.19</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↑0.95 63.81</td></tr><tr><td>ALBERT</td><td>60.00</td></tr><tr><td>+ CDA</td><td>↓6.67 46.67</td></tr><tr><td>+ DROPOUT</td><td>↓2.86 42.86</td></tr><tr><td>+ INLP</td><td>↓2.86 57.14</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓2.86 57.14</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↑14.29 25.71</td></tr><tr><td>RoBERTa</td><td>60.00</td></tr><tr><td>+ CDA</td><td>↓0.95 59.05</td></tr><tr><td>+ DROPOUT</td><td>↓2.86 57.14</td></tr><tr><td>+ INLP</td><td>↑2.86 62.86</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓8.57 51.43</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↓0.95 40.95</td></tr><tr><td>GPT-2</td><td>62.86</td></tr><tr><td>+ CDA</td><td>↓11.43 51.43</td></tr><tr><td>+ DROPOUT</td><td>↓10.48 52.38</td></tr><tr><td>+ INLP</td><td>↓0.96 61.90</td></tr><tr><td>+ SELF-DEBIAS</td><td>↓4.76 58.10</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>↑1.90 35.24</td></tr></table>
414
+
415
+ Table 18: CrowS-Pairs stereotype scores for religion debiased BERT, ALBERT, RoBERTa, and GPT-2 models. Stereotype scores closer to $50\%$ indicate less biased model behaviour. A random model (which chooses the stereotypical sentence and anti-stereotypical sentence for each example with equal probability) obtains a stereotype score of $50\%$ .
416
+
417
+ <table><tr><td>Model</td><td>CoLA</td><td>MNLI</td><td>MRPC</td><td>QNLI</td><td>QQP</td><td>RTE</td><td>SST</td><td>STS-B</td><td>WNLI</td><td>Average</td></tr><tr><td>BERT</td><td>55.89</td><td>84.50</td><td>88.59</td><td>91.38</td><td>91.03</td><td>63.54</td><td>92.58</td><td>88.51</td><td>43.66</td><td>77.74</td></tr><tr><td>+ CDA</td><td>55.90</td><td>84.73</td><td>88.76</td><td>91.36</td><td>91.01</td><td>66.31</td><td>92.43</td><td>89.14</td><td>38.03</td><td>↓0.22 77.52</td></tr><tr><td>+ DROPOUT</td><td>49.83</td><td>84.67</td><td>88.20</td><td>91.27</td><td>90.36</td><td>64.02</td><td>92.58</td><td>88.47</td><td>37.09</td><td>↓1.46 76.28</td></tr><tr><td>+ INLP</td><td>56.06</td><td>84.81</td><td>88.61</td><td>91.34</td><td>90.92</td><td>64.98</td><td>92.51</td><td>88.70</td><td>32.86</td><td>↓0.99 76.76</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>56.41</td><td>84.80</td><td>88.70</td><td>91.48</td><td>90.98</td><td>63.06</td><td>92.32</td><td>88.45</td><td>44.13</td><td>↑0.07 77.81</td></tr><tr><td>ALBERT</td><td>55.51</td><td>85.58</td><td>91.55</td><td>91.49</td><td>90.65</td><td>71.36</td><td>92.13</td><td>90.43</td><td>43.19</td><td>79.10</td></tr><tr><td>+ CDA</td><td>53.11</td><td>85.17</td><td>91.53</td><td>90.99</td><td>90.69</td><td>65.46</td><td>92.43</td><td>90.62</td><td>42.72</td><td>↓1.02 78.08</td></tr><tr><td>+ DROPOUT</td><td>12.37</td><td>85.33</td><td>90.25</td><td>91.79</td><td>90.39</td><td>56.56</td><td>92.24</td><td>89.93</td><td>52.11</td><td>↓5.66 73.44</td></tr><tr><td>+ INLP</td><td>55.87</td><td>85.32</td><td>92.07</td><td>91.58</td><td>90.53</td><td>72.92</td><td>91.86</td><td>90.80</td><td>47.42</td><td>↑0.72 79.82</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>53.80</td><td>85.48</td><td>91.30</td><td>91.75</td><td>90.68</td><td>70.04</td><td>92.51</td><td>90.67</td><td>39.91</td><td>↓0.64 
78.46</td></tr><tr><td>RoBERTa</td><td>57.61</td><td>87.61</td><td>90.38</td><td>92.59</td><td>91.28</td><td>71.24</td><td>94.42</td><td>90.05</td><td>56.34</td><td>81.28</td></tr><tr><td>+ CDA</td><td>59.39</td><td>87.69</td><td>91.49</td><td>92.74</td><td>91.31</td><td>71.12</td><td>94.19</td><td>90.14</td><td>50.70</td><td>↓0.31 80.97</td></tr><tr><td>+ DROPOUT</td><td>51.60</td><td>87.35</td><td>90.13</td><td>92.82</td><td>90.43</td><td>65.70</td><td>94.34</td><td>88.97</td><td>51.17</td><td>↓2.11 79.17</td></tr><tr><td>+ INLP</td><td>58.38</td><td>87.49</td><td>91.39</td><td>92.65</td><td>91.31</td><td>69.31</td><td>94.30</td><td>89.81</td><td>56.34</td><td>↓0.06 81.22</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>58.13</td><td>87.52</td><td>90.80</td><td>92.64</td><td>91.26</td><td>71.36</td><td>94.57</td><td>90.00</td><td>56.34</td><td>↑0.12 81.40</td></tr><tr><td>GPT-2</td><td>29.10</td><td>82.43</td><td>84.51</td><td>87.71</td><td>89.18</td><td>64.74</td><td>91.97</td><td>84.26</td><td>43.19</td><td>73.01</td></tr><tr><td>+ CDA</td><td>37.57</td><td>82.61</td><td>85.91</td><td>88.08</td><td>89.26</td><td>64.86</td><td>92.09</td><td>85.28</td><td>42.25</td><td>↑1.20 74.21</td></tr><tr><td>+ DROPOUT</td><td>30.48</td><td>82.37</td><td>86.12</td><td>87.63</td><td>88.57</td><td>64.14</td><td>91.90</td><td>84.06</td><td>43.19</td><td>↑0.15 73.16</td></tr><tr><td>+ INLP</td><td>31.79</td><td>82.73</td><td>84.34</td><td>87.81</td><td>89.17</td><td>64.38</td><td>92.01</td><td>83.99</td><td>41.31</td><td>↑0.05 73.06</td></tr><tr><td>+ SENTENCEDEBIAS</td><td>30.20</td><td>82.56</td><td>84.43</td><td>87.90</td><td>89.09</td><td>64.86</td><td>91.97</td><td>84.18</td><td>38.50</td><td>↓0.38 72.63</td></tr></table>
418
+
419
+ Table 19: GLUE validation set results for gender debiased BERT, ALBERT, RoBERTa, and GPT-2 models. We report the F1 score for MRPC, the Spearman correlation for STS-B, and the Matthews correlation for CoLA. For all other tasks, we report the accuracy. Reported results are means over three training runs.
420
+
421
+ <table><tr><td>Model</td><td>Stereotype Score (%)</td><td>LM Score (%)</td></tr><tr><td colspan="3">Gender</td></tr><tr><td>BERT</td><td>60.28</td><td>84.17</td></tr><tr><td>+ CDA</td><td>59.45 ± 0.16</td><td>83.21 ± 0.11</td></tr><tr><td>+ DROPOUT</td><td>60.27 ± 0.55</td><td>83.14 ± 0.09</td></tr><tr><td>ALBERT</td><td>59.93</td><td>89.77</td></tr><tr><td>+ CDA</td><td>56.86 ± 1.39</td><td>78.30 ± 1.20</td></tr><tr><td>+ DROPOUT</td><td>57.35 ± 0.91</td><td>77.51 ± 0.58</td></tr><tr><td>RoBERTa</td><td>66.32</td><td>88.93</td></tr><tr><td>+ CDA</td><td>63.99 ± 0.41</td><td>88.83 ± 0.16</td></tr><tr><td>+ DROPOUT</td><td>66.24 ± 0.08</td><td>88.84 ± 0.17</td></tr><tr><td>GPT-2</td><td>62.65</td><td>91.01</td></tr><tr><td>+ CDA</td><td>64.02 ± 0.26</td><td>90.41 ± 0.06</td></tr><tr><td>+ DROPOUT</td><td>63.06 ± 0.26</td><td>90.44 ± 0.03</td></tr><tr><td colspan="3">Race</td></tr><tr><td>BERT</td><td>57.03</td><td>84.17</td></tr><tr><td>+ CDA</td><td>56.72 ± 0.02</td><td>83.25 ± 0.22</td></tr><tr><td>+ DROPOUT</td><td>56.96 ± 0.21</td><td>83.14 ± 0.09</td></tr><tr><td>ALBERT</td><td>57.51</td><td>89.77</td></tr><tr><td>+ CDA</td><td>53.48 ± 0.37</td><td>77.35 ± 1.98</td></tr><tr><td>+ DROPOUT</td><td>51.63 ± 0.42</td><td>77.51 ± 0.58</td></tr><tr><td>RoBERTa</td><td>61.67</td><td>88.93</td></tr><tr><td>+ CDA</td><td>60.94 ± 0.24</td><td>88.64 ± 0.12</td></tr><tr><td>+ DROPOUT</td><td>60.49 ± 0.35</td><td>88.84 ± 0.17</td></tr><tr><td>GPT-2</td><td>58.90</td><td>91.01</td></tr><tr><td>+ CDA</td><td>57.51 ± 0.17</td><td>90.41 ± 0.06</td></tr><tr><td>+ DROPOUT</td><td>57.49 ± 0.13</td><td>90.44 ± 0.03</td></tr><tr><td colspan="3">Religion</td></tr><tr><td>BERT</td><td>59.70</td><td>84.17</td></tr><tr><td>+ CDA</td><td>58.52 ± 0.13</td><td>83.16 ± 0.10</td></tr><tr><td>+ DROPOUT</td><td>59.72 ± 0.59</td><td>83.14 ± 0.09</td></tr><tr><td>ALBERT</td><td>60.32</td><td>89.77</td></tr><tr><td>+ CDA</td><td>56.54 ± 1.87</td><td>76.16 ± 0.75</td></tr><tr><td>+ 
DROPOUT</td><td>54.71 ± 2.11</td><td>77.51 ± 0.58</td></tr><tr><td>RoBERTa</td><td>64.28</td><td>88.93</td></tr><tr><td>+ CDA</td><td>63.83 ± 0.62</td><td>88.73 ± 0.12</td></tr><tr><td>+ DROPOUT</td><td>62.53 ± 1.26</td><td>88.84 ± 0.17</td></tr><tr><td>GPT-2</td><td>63.26</td><td>91.01</td></tr><tr><td>+ CDA</td><td>64.12 ± 0.50</td><td>90.41 ± 0.06</td></tr><tr><td>+ DROPOUT</td><td>64.28 ± 0.18</td><td>90.44 ± 0.03</td></tr></table>
422
+
423
+ Table 20: StereoSet results (mean ± std) for gender, race, and religion debiased BERT, ALBERT, RoBERTa, and GPT-2 models. Results are reported over three random seeds.
anempiricalsurveyoftheeffectivenessofdebiasingtechniquesforpretrainedlanguagemodels/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f1e091f57f123f45402cdf21429fa58d7a03acb11652a2329e33c39b39f341a8
3
+ size 1572612
anempiricalsurveyoftheeffectivenessofdebiasingtechniquesforpretrainedlanguagemodels/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:eb58629d6f713f8ee876571940bbbe408a8580862e51f45ff155ebcd08a95b28
3
+ size 475119
animitationlearningcurriculumfortexteditingwithnonautoregressivemodels/87401af9-cd81-425e-8660-24dadf836103_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d072cd0185d5937387fb3c73dac686e15d295c11b36952a14f4fde6952ec15e7
3
+ size 86454
animitationlearningcurriculumfortexteditingwithnonautoregressivemodels/87401af9-cd81-425e-8660-24dadf836103_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f65ca72073f49b5476f0b72f34afae47a6d0189c2a38d9e853ad337628874b3d
3
+ size 106275
animitationlearningcurriculumfortexteditingwithnonautoregressivemodels/87401af9-cd81-425e-8660-24dadf836103_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:66a2ab631a8d2ecfa6328dfa68f98bde53aa1445b129069934e0d435fe6591f9
3
+ size 1887053
animitationlearningcurriculumfortexteditingwithnonautoregressivemodels/full.md ADDED
@@ -0,0 +1,338 @@
 
 
 
 
1
+ # An Imitation Learning Curriculum for Text Editing with Non-Autoregressive Models
2
+
3
+ Sweta Agrawal
4
+
5
+ Department of Computer Science
6
+
7
+ University of Maryland
8
+
9
+ sweagraw@cs.umd.edu
10
+
11
+ Marine Carpuat
12
+
13
+ Department of Computer Science
14
+
15
+ University of Maryland
16
+
17
+ marine@cs.umd.edu
18
+
19
+ # Abstract
20
+
21
+ We propose a framework for training non-autoregressive sequence-to-sequence models for editing tasks, where the original input sequence is iteratively edited to produce the output. We show that the imitation learning algorithms designed to train such models for machine translation introduce mismatches between training and inference that lead to undertraining and poor generalization in editing scenarios. We address this issue with two complementary strategies: 1) a roll-in policy that exposes the model to intermediate training sequences that it is more likely to encounter during inference, and 2) a curriculum that presents easy-to-learn edit operations first, gradually increasing the difficulty of training samples as the model becomes competent. We show the efficacy of these strategies on two challenging English editing tasks: controllable text simplification and abstractive summarization. Our approach significantly improves output quality on both tasks and controls output complexity better on the simplification task.
22
+
23
+ # 1 Introduction
24
+
25
+ Neural sequence-to-sequence (seq2seq) models primarily developed and tested for machine translation (MT) (Bahdanau et al., 2015; Vaswani et al., 2017; Gu et al., 2018) are increasingly used for other sequence transduction tasks. This paper focuses on editing tasks, such as post-editing of MT output (Simard et al., 2007), style transfer (Jin et al., 2020), or text simplification (Chandrasekar and Srinivas, 1997; Xu et al., 2015), where systems directly edit the input sequence, instead of generating the output from scratch as in MT. As illustrated in Table 1, in these tasks, there might be substantial overlap in content between inputs and outputs, and also diverse rewrites, ranging from local substitutions to more complex restructuring.
26
+
27
+ While dedicated architectures have been designed for these editing tasks, based on e.g., a
28
+
29
+ Original: The Mauritshuis museum is staging an exhibition focusing on the 17th century self-portraits, highlighting the similarities and the differences between modern-day snapshots and historic works of art.
30
+
31
+ Simplified: The Mauritshuis museum is now set to open an exhibit on the 17th century self-portraits. It shows the similarities and differences between modern photos and artworks.
32
+
33
+ Table 1: Text simplification is an editing task, where the output sequence overlaps with the input, while incorporating multiple rewrite types to restructure and simplify content.
34
+
35
+ multistep, tag-then-edit approach (Alva-Manchego et al., 2017; Malmi et al., 2019; Dong et al., 2019; Mallinson et al., 2020), they can also be addressed with non-autoregressive (NAR) seq2seq models which generate their output by iteratively editing intermediate sequences (Lee et al., 2018; Gu et al., 2019; Awasthi et al., 2019; Stern et al., 2019; Chan et al., 2020). NAR models hold the promise of providing a more generic solution, where the model does not need to be tailored to a given editing task.
36
+
37
+ This work is centered on the hypothesis that training NAR models for editing tasks using the same strategy as for MT leads to a mismatch between train and test settings that limits their generalization ability and output quality. Specifically, the learning algorithms designed for MT are aligned with inference strategies that generate output from an empty initial sequence. By contrast, in sequence editing tasks, the inference step is initialized instead with the original input sequence. In addition, since editing samples might range from limited lexical substitutions to more thorough rewrites, training samples cover a wide range of edit distances. During training, the loss can thus be dominated by the more distant samples leading to undertrained models and poor generalization. By contrast, the
38
+
39
+ distance between input and output samples in MT is more uniform, since it always involves at least lexical translation of the input tokens.
40
+
41
+ To address these issues, we introduce a new training framework, EDITING CURRICULUM, which dynamically exposes the model to more relevant edit actions during training and exploits the full spectrum of available training samples more effectively. First, we design a new roll-in strategy, EDITING roll-in, that exposes the model to intermediate sequences that it is more likely to encounter during inference. Second, we introduce a training CURRICULUM to expose the model to training samples in order of increasing edit distance, thus gradually increasing the complexity of oracle edit operations that the model learns to imitate.
42
+
43
+ We show that our approach improves the quality of outputs on two challenging English text editing tasks: controllable text simplification (TS) and abstractive summarization. It also improves the degree of TS control by generating simplified outputs that match the target reading grade level better than the baselines. We conduct an extensive analysis which supports our hypothesis, and show that the sequences generated by our training policy improve exploration during training and are easier to learn from, leading to better generalization across samples with varying edit distances. Training with curriculum further improves output quality.
44
+
45
+ # 2 Background
46
+
47
+ Model NAR edit-based models (Chan et al., 2020; Gu et al., 2019; Stern et al., 2019; Xu and Carpuat, 2021) cast sequence editing as an iterative sequence refinement problem modeled by a Markov Decision Process $(\mathcal{V},\mathcal{A},\mathcal{E},\mathcal{R},\boldsymbol{y}^{0})$ . A state $y = (y_{1},y_{2},\dots,y_{L})\in \mathcal{V}$ is a sequence of tokens where each $y_{i}$ represents a token from the vocabulary $\mathcal{V}$ , $L$ is the sequence length and $y^0\in \mathcal{V}$ is the initial sequence to be refined, using actions drawn from the set $\mathcal{A}$ . The reward $\mathcal{R}$ is based on the distance $\mathcal{D}$ between the generated output and the reference sequence $y^{*}\in \mathcal{V}$ : $\mathcal{R}(y) = -\mathcal{D}(y,y^{*})$ . At each decoding iteration, the model takes an input $y$ , chooses an action $a\in \mathcal{A}$ to refine the sequence using a policy $\pi$ , resulting in state $\mathcal{E}(y,a)$ .
48
+
49
+ Models differ based on the nature of edit actions used and support different operations such as insertion, deletion, reposition and substitution. We select the operations from the EDITOR model based on its competitive performance on constrained decoding tasks that require editing non-empty initial sequences (Xu and Carpuat, 2021). It is a Transformer model that uses two types of actions or edits on sequences, $y$ :
52
+
53
+ 1. The reposition operation, modeled by $\pi_{rps}$ , predicts the new position of each token in the input sequence. For each input position, the reposition policy predicts a value $r$ that corresponds to the index of the input token to be placed at the position and 0 if the input token is to be deleted.
54
+ 2. The insertion operation has two components: placeholder prediction, $\pi_{plh}$ that predicts the number of placeholders to be inserted and token prediction, $\pi_{ins}$ that generates the actual output tokens for each placeholder.
55
+
56
+ At each decoding iteration, the model applies an action $a$ that consists of a reposition and an insertion operation. This refinement process is repeated until two consecutive decoding iterations return the same output (Gu et al., 2019), or a preset maximum number of them is reached (Lee et al., 2018; Ghazvininejad et al., 2019).
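The stopping criterion above can be sketched as a small driver loop. Here `policy` is a hypothetical stand-in for one reposition-plus-insertion refinement step, not EDITOR's actual API:

```python
def iterative_refine(policy, y0, max_iters=10):
    """Iterative refinement: apply the policy's edit action until two
    consecutive iterations agree or a preset iteration budget is hit."""
    prev, y = None, list(y0)
    for _ in range(max_iters):
        if y == prev:            # two consecutive iterations returned
            break                # the same output -> converged
        prev, y = y, policy(y)   # one reposition + insertion step
    return y
```

With a toy policy that trims the sequence until it has three tokens, the loop converges once the output stops changing.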
57
+
58
+ ![](images/b2460403e7e2d19dcb545d0fc249c4119a8cf7ab36bc0012444cf8be68c67e2b.jpg)
59
+ Figure 1: One refinement iteration for the input sequence: "a b c d e" using the operations generated by the Levenshtein Edit Distance Algorithm.
60
+
61
+ Training NAR models are typically trained via imitation learning that uses a roll-in policy and a roll-out policy. The roll-in policy is used to generate the sequences that the model learns to refine
62
+
63
+ <table><tr><td></td><td>OPERATIONS</td><td>Roll-In</td><td>ROLL-IN POLICIES</td></tr><tr><td rowspan="3">Gu et al. (2019)</td><td rowspan="3">Insertion, Deletion</td><td rowspan="3">Mixed</td><td>y&#x27; = {E(y*, d), d} ∼ πrnd</td></tr><tr><td>yins = {y&#x27; if u &lt; α else E(ys, d*), d* ∼ π*del}</td></tr><tr><td>ydel = {ys if u &lt; β else E(E(yins, p*), t), p* ∼ π*plh, t} ∼ πins</td></tr><tr><td>Stern et al. (2019)</td><td>Insertion</td><td>Expert</td><td>yins = {E(y*, d), d} ∼ πrnd</td></tr><tr><td>Ghazvininejad et al. (2019)</td><td>Substitution</td><td>Expert</td><td>ysub = {E(y*, m), m} ∼ πmask</td></tr><tr><td>Saharia et al. (2020)</td><td>Substitution</td><td>(Offline) Learned</td><td>ysub = {E(y, m), m} ∼ πmask</td></tr><tr><td>Qian et al. (2020)</td><td>Substitution</td><td>Expert</td><td>ysub = {E(y, m), m} ∼ πmask</td></tr><tr><td rowspan="3">Xu and Carpuat (2021)</td><td rowspan="3">Insertion, Reposition (including deletions)</td><td rowspan="3">Learned</td><td>y&#x27; = {E(E(y*, d), p), d} ∼ πrnd, p ∼ πper}</td></tr><tr><td>yins = {y&#x27; if u &lt; α else E(y, r), r} ∼ πrps</td></tr><tr><td>yrps = {y&#x27; if u &lt; β else E(E(y, p*), t), p* ∼ π*plh, t} ∼ πins</td></tr></table>
64
+
65
+ Table 2: Training Policies and Edit Operations performed by different NAR models: $y^{s}$ : original input sequence, $y^{*}$ : output sequence, $y$ : model-generated variant of reference sequence, $\pi_{rnd} / \pi_{mask}$ drops/masks random words from $y^{*}$ according to a distribution (e.g. uniform, Bernoulli, etc.), $\pi_{per}$ generates a permutation, $u \sim \text{Uniform}[0,1]$ , $\pi_{ins}, \pi_{plh}, \pi_{del}, \pi_{rps}$ are insertion, placeholder prediction, deletion and reposition policies.
66
+
67
+ from. A roll-out policy is then used to estimate the cost-to-go from the generated roll-in sequences to the desired output sequences. The cost-to-go is calculated by comparing the model actions to oracle demonstrations. We summarize the policies of various NAR models proposed for MT in Table 2.
68
+
69
+ For EDITOR, the roll-in sequences for the reposition (or insertion) module are stochastic mixtures (parameterized by $\alpha$ or $\beta$ ) of the output of the insertion (or reposition) module or a noised version of the output sequence. The oracle is the Levenshtein edit distance (Gu et al., 2019). The noisy sequence is generated by applying random word dropping (Gu et al., 2019) and random word shuffle (Lample et al., 2018) with a probability of 0.5 and maximum shuffle distance of 3. Figure 1 shows an example instantiation of the edit actions generated by the Levenshtein Edit Distance to transform the original input sequence ("a b c d e") to the output sequence ("c a t"). In this example, the oracle action is to delete the tokens ["b", "d", "e"], reposition "a" and "c" and insert "t" at the appropriate position. The reposition and the insertion modules are trained in a supervised fashion to predict these oracle operations during training.
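As a concrete reference, the classic dynamic-programming edit script below recovers one minimum-cost sequence of deletions, insertions, and substitutions over token lists. This is only a sketch of the oracle idea: EDITOR's Levenshtein oracle additionally derives reposition actions, which this version omits.

```python
def levenshtein_ops(src, tgt):
    """Return (edit distance, edit script) transforming src into tgt.
    The script is a list of (op, src_index, token) tuples."""
    n, m = len(src), len(tgt)
    # dist[i][j] = edit distance between src[:i] and tgt[:j]
    dist = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dist[i][0] = i
    for j in range(m + 1):
        dist[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if src[i - 1] == tgt[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,          # delete src[i-1]
                             dist[i][j - 1] + 1,          # insert tgt[j-1]
                             dist[i - 1][j - 1] + cost)   # keep / substitute
    # Backtrace from the bottom-right corner to recover one optimal script.
    ops, i, j = [], n, m
    while i > 0 or j > 0:
        cost = 0 if i > 0 and j > 0 and src[i - 1] == tgt[j - 1] else 1
        if i > 0 and j > 0 and dist[i][j] == dist[i - 1][j - 1] + cost:
            if cost:
                ops.append(("sub", i - 1, tgt[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and dist[i][j] == dist[i - 1][j] + 1:
            ops.append(("del", i - 1, src[i - 1]))
            i -= 1
        else:
            ops.append(("ins", i, tgt[j - 1]))
            j -= 1
    return dist[n][m], list(reversed(ops))
```

On the running example, "a b c d e" to "c a t", this oracle needs four unit-cost operations; EDITOR's reposition action can reach the target in fewer steps by moving "a" and "c" instead of substituting.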
70
+
71
+ # 3 Our Approach: EDITING CURRICULUM
72
+
73
+ To tailor training to editing tasks, we propose to modify the roll-in policy to better match the intermediate sequences encountered at inference, and introduce a curriculum to increase the difficulty of oracle actions learned throughout training.
74
+
75
+ EDITING Roll-in Sequences generated using the roll-in policy control the search space explored during training. Those sequences should therefore be
76
+
77
+ representative of the intermediate sequences generated at inference time (Ross and Bagnell, 2010). While typically, the roll-in policy is a stochastic mixture of the model and the expert demonstrations as described above, the noise incurred early on due to the large difference between the expert demonstration and the learner's policy actions may hurt overall performance (Brantley et al., 2019; He et al., 2012; Leblond et al., 2018). As we will see ( $\S 5$ ), this is what happens on editing tasks when training the model to imitate experts using learned roll-in sequences. At the same time, rolling in with expert demonstrations raises its own issues, as it can limit the exploration of the search space.
78
+
79
+ ![](images/5b3e7b5dcca3f9aa4cc7a8275bb04b9a7f9cdd87568fbe3edb6c7e367b3b087f.jpg)
80
+ Figure 2: Example roll-in sequences for the reposition and the insertion modules: The same initial input sequence $(y^{s})$ can enable the model to learn to generate the reference output $(y^{*})$ using different edit operations from its noised version.
81
+
82
+ Motivated by these observations, we propose a new policy, EDITING, that allows exploration by injecting noise into the input sequence to generate new intermediate sequences for training. This lets the model learn to fix errors without deviating from learning the task at hand. Figure 2 shows an example of intermediate sequences generated by our proposed roll-in policy. Different intermediate sequences encourage the model to learn different reposition and insertion edit operations starting from the same input sequence, hence enabling exploration.
+
+ Algorithm 1: Our proposed framework: EDITING CURRICULUM
+ Input: Dataset $D = \{y^{s},y^{*}\}_{i = 1}^{M}$ , difficulty scoring function $d$ , and competence function $c$
+ 1 Compute the difficulty $d(s_i)$ for each $s_i = \{y_i^{s},y_i^{*}\} \in D$
+ 2 Compute the cumulative density function (CDF) of the difficulty scores. This results in one difficulty CDF score per sample, $\tilde{d} (s_i)\in [0,1]$
+ 3 Initialize $\pi_{rps}$ and $\pi_{ins}$
+ 4 for training step $t = 1\dots T$ do
+ 5 Compute the competence value $c(t)$
+ 6 Create the training set $B_{t}$ by selecting all samples $s_i\in D$ such that $\tilde{d} (s_i)\leq c(t)$
+ 7 for $i$ in $1\dots |B_{t}|$ do
+ 8 Generate roll-in sequences:
+ 9 $y_{rps} = noise(y^s)$
+ 10 $y_{ins} = \mathcal{E}(y_{rps},r^{*}),r^{*}\sim \pi_{rps}^{*}$
+ 11 Train $\pi_{rps}$ and $\pi_{ins}$ on $y_{rps}$ and $y_{ins}$ , minimizing the cost-to-go to $y^{*}$
+ 12 Return the best $\pi_{rps}$ and $\pi_{ins}$ evaluated on the validation set.
+
+ We modify the roll-in policies to be aligned with the editing inference process, where the reposition operation is followed by insertion on the original input sequence:
98
+
99
+ - The roll-in sequence for training the reposition module, $\pi_{rps}$ , is generated by applying noise to the original source sequence $y^{s}$ , i.e. $y_{rps} = \text{noise}(y^{s}) = \{\mathcal{E}(\mathcal{E}(y^{s},\tilde{d}),\tilde{p}),\tilde{d}\sim \pi_{rnd},\tilde{p}\sim \pi_{per}\}$ . Unlike EDITOR, the random word dropping $(\tilde{d}\sim \pi_{rnd})$ and the word shuffling $(\tilde{p}\sim \pi_{per})$ are applied to the original input sequence instead of the output sequence. This aligns the training with the inference scenario where the model edits an original input sequence instead of generating an output from scratch.
100
+ - The roll-in sequence for training the insertion module, $\pi_{ins}$ , is an intermediate sequence generated by applying the expert reposition policy to $y_{rps}$ , i.e. $y_{ins} = \{\mathcal{E}(y_{rps}, r^*), r^* \sim \pi_{rps}^*\}$ . The expert reposition policy corresponds to the deletion and reposition actions derived by using the Levenshtein edit distance algorithm between the noisy input sequence, $\text{noise}(y^s)$ , and the target sequence, $y^*$ .
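A minimal sketch of such a noising step (a hypothetical helper, not the paper's code): random word dropping followed by the bounded local shuffle of Lample et al. (2018), where adding uniform jitter in $[0, k]$ to each position index and re-sorting keeps every token within roughly $k$ positions of its origin.

```python
import random

def noise(tokens, p_drop=0.5, max_shuffle=3, rng=random):
    """Noised roll-in sketch: drop each token with probability p_drop,
    then locally shuffle the survivors via a jittered-index sort."""
    kept = [tok for tok in tokens if rng.random() >= p_drop]
    if not kept:                      # never return an empty sequence
        kept = tokens[:1]
    # Sort by original index + Uniform[0, max_shuffle] jitter: tokens can
    # only swap with nearby neighbours, bounding the shuffle distance.
    keys = [i + rng.uniform(0, max_shuffle) for i in range(len(kept))]
    return [tok for _, tok in sorted(zip(keys, kept), key=lambda p: p[0])]
```

Setting `max_shuffle=0` disables shuffling entirely, and `p_drop=0.0` keeps every token, which makes the behaviour easy to check at the extremes.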

Curriculum controlled roll-out To prevent undertraining when samples with large edit distances overwhelm the loss, we use a curriculum that exposes the model to easy-to-learn actions first, then gradually increases the difficulty of the edit operations performed as the learner becomes more competent. Prior work on curriculum learning (CL) does not agree on standard measures of sample difficulty for seq2seq tasks (Kumar et al., 2019; Yao et al., 2021; Zhang et al., 2018; Zhou et al., 2020), or applies CL to the different problem of shifting the training of a Transformer model from AR to NAR regimes (Guo et al., 2020; Liu et al.). By contrast, in our setting, the Levenshtein distance provides a measure of difficulty that directly aligns with the model design and the training oracle.

Resulting Algorithm Given a training dataset $\mathcal{D} = \{y_i^s, y_i^*\}_{i=1}^M$ consisting of $M$ samples, the difficulty score $d(s_i)$ for each sample $s_i = \{y_i^s, y_i^*\} \in \mathcal{D}$ is measured by the Levenshtein distance between the input and the output sequence. The cumulative density function (CDF) of the difficulty scores yields one difficulty CDF score per sample, $\tilde{d}(s_i)$. At each training step $t$, we estimate the progress made by the learner by computing the competence of the model $c(t) \in (0, 1]$ as follows:

$$
c_{\mathrm{sqrt}}(t) = \min\left(1, \sqrt{t\,\frac{1 - c_0^2}{\lambda_t} + c_0^2}\right)
$$

where $\lambda_{t}$ defines the length of the curriculum<sup>1</sup>; $c_{0} = 0.1$ as in Platanios et al. (2019).
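
For concreteness, the competence schedule can be implemented directly from the formula above (a minimal sketch; the function and argument names are ours):

```python
import math

def competence_sqrt(t, lam, c0=0.1):
    """Square-root competence schedule:
    c(t) = min(1, sqrt(t * (1 - c0^2) / lam + c0^2)),
    where `lam` is the curriculum length and c0 the initial competence."""
    return min(1.0, math.sqrt(t * (1.0 - c0 ** 2) / lam + c0 ** 2))
```

At $t = 0$ the competence equals $c_0 = 0.1$, so the model initially sees only the easiest 10% of samples; competence grows sublinearly and saturates at 1 (all samples) once $t \geq \lambda_t$.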

Based on this competence value $c(t)$, the model is then trained on all the samples whose difficulty, as measured by the Levenshtein distance between the input and the output sequence, is lower than that competence value, i.e. $\tilde{d}(s_i) \leq c(t)$. The resulting algorithm is also shown in Algorithm 1.
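
The difficulty CDF and the competence-based selection (steps 2 and 6 of Algorithm 1) can be sketched as follows; the toy edit distances are illustrative:

```python
import bisect

def difficulty_cdf(distances):
    """Empirical CDF of difficulty scores: for each sample, the fraction of
    samples whose edit distance is <= its own, giving d~(s_i) in (0, 1]."""
    ordered = sorted(distances)
    n = len(distances)
    return [bisect.bisect_right(ordered, d) / n for d in distances]

def select_competent(samples, cdf_scores, c_t):
    """Keep every sample whose difficulty CDF score is within competence c(t)."""
    return [s for s, d in zip(samples, cdf_scores) if d <= c_t]

dists = [1, 2, 2, 5, 9]  # hypothetical per-sample Levenshtein distances
cdf = difficulty_cdf(dists)
batch = select_competent(["s1", "s2", "s3", "s4", "s5"], cdf, c_t=0.6)
```

Using the CDF rather than raw distances makes the threshold $c(t)$ directly interpretable as the fraction of the training data available to the learner at step $t$.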

# 4 Experimental Settings

We evaluate our approach on Controllable Simplification and Abstractive Summarization, two challenging sequence editing tasks that are motivated by real world information access needs. They are challenging because they require learning to perform a wide range of rewrites (from local substitution to sentence restructuring).

# 4.1 Controllable Simplification

Task Definition Given a complex text and a target grade level, the goal is to generate a simplified output that is appropriate for the desired grade level. The type of operations performed across different grade levels span sentence splitting, paraphrasing, deletion, content elaboration and substitution.

Data We use English Newsela samples as extracted by Agrawal and Carpuat (2019), with $470\mathrm{k}/2\mathrm{k}/19\mathrm{k}$ samples for the training, development and test sets respectively. Grade side-constraints are defined using a distinct special token for each grade level (from 2 to 12) and are introduced as side constraints for both the input and the output grade levels (Scarton and Specia, 2018).

Evaluation Metrics We automatically evaluate truecased detokenized system outputs using: SARI (Xu et al., 2016), which measures lexical simplicity based on the n-grams kept, added, and deleted by the system relative to the input and the reference. It computes the F1 score for the n-grams that are added (add-F1) and kept (keep-F1), and precision for the n-grams that are deleted (del-P)<sup>2</sup>; Pearson's correlation coefficient (PCC) between the complexity of the system and reference outputs, as measured by the Automated Readability Index (ARI) (Senter and Smith, 1967); and ARI-Accuracy (Heilman et al., 2008), representing the percentage of sentences where the system output grade level is within 1 grade of the reference text according to the ARI.
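
To illustrate the ARI-based metrics, here is a minimal sketch; the tokenization is naive and purely illustrative (real pipelines use proper sentence splitting and tokenization):

```python
import re

def ari(text):
    """Automated Readability Index:
    ARI = 4.71 * (chars/words) + 0.5 * (words/sentences) - 21.43."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    chars = sum(len(w.strip(".,!?;:")) for w in words)
    return 4.71 * chars / len(words) + 0.5 * len(words) / len(sentences) - 21.43

def ari_accuracy(system_grades, reference_grades):
    """Fraction of outputs whose ARI grade is within 1 of the reference."""
    pairs = list(zip(system_grades, reference_grades))
    return sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs)
```

PCC is then the Pearson correlation between the per-sentence ARI grades of the system outputs and those of the references.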

# 4.2 Abstractive Summarization

Task Given a short paragraph (one or two sentences on average), the goal is to generate a concise summary that captures the salient ideas of the source text. The task involves heavy deletions with moderate amounts of substitutions and frequent shifts caused by re-orderings.

Data We use the dataset from Toutanova et al. (2016), which contains 6K short input texts, with up to 5 summaries each. We use the same split as provided by the authors, with 4937/448/786 unique input texts in the training, development and test sets respectively. The human experts were allowed to insert new words and reorder parts of the sentence when generating the summary, which makes this dataset particularly suited for abstractive summarization models.

Evaluation Metrics We automatically evaluate truecased detokenized system outputs using Rouge-L$^3$ (Lin, 2004). Even though it is not a summarization metric, we also report SARI to track the nature and type of edit operations performed. Given multiple references for each input text, we define the corpus-level score as the arithmetic mean of the automated metrics at the instance level, further averaged across the multiple references.
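
ROUGE-L is based on the longest common subsequence (LCS) between candidate and reference. A minimal sketch, together with the multi-reference averaging described above, follows; note this uses a balanced F1, whereas the official ROUGE script uses a recall-weighted F-measure:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence via dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l_f1(cand, ref):
    """ROUGE-L F1: LCS-based precision and recall over token lists."""
    lcs = lcs_len(cand, ref)
    if lcs == 0:
        return 0.0
    p, r = lcs / len(cand), lcs / len(ref)
    return 2 * p * r / (p + r)

def corpus_rouge_l(outputs, references):
    """Average each instance's score over its references, then over instances."""
    per_instance = [sum(rouge_l_f1(o, r) for r in refs) / len(refs)
                    for o, refs in zip(outputs, references)]
    return sum(per_instance) / len(per_instance)
```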

# 4.3 Model configurations

Data Preprocessing We pre-process all data using Moses tools for normalization and truecasing. We apply subword segmentation with a joint input-output byte pair encoding model with 32,000 operations. We use ARI to compute the input grade level at inference time.

Architecture We adopt the base Transformer architecture (Vaswani et al., 2017) with $d_{model} = 512$, $d_{hidden} = 2048$, $n_{heads} = 8$, $n_{layers} = 6$, and $p_{dropout} = 0.1$ for all our models. We add dropout to embeddings (0.1) and label smoothing (0.1). The base EDITOR model is trained using Adam with an initial learning rate of 0.0005 and a batch size of 16,000 tokens. The model is further fine-tuned on the editing task with a learning rate of 0.0001. We train all our models on two GeForce GTX 1080Ti GPUs. The average training time for a single seed is $\sim 8$-$9$ hrs for the AR model and $\sim 20$-$22$ hrs for the EDITOR model. Fine-tuning EDITOR takes an additional 5-6 hrs. Training stops after 8 checkpoints without improvement in validation perplexity. All models are implemented using the Fairseq toolkit.

Models We compare our proposed approaches against the following models trained from scratch in controlled conditions: 1) AR is an autoregressive (AR) Transformer model (Scarton and Specia, 2018). 2) We train EDITOR with the dual-path roll-in policy as in Xu and Carpuat (2021), referred to as From Reference. We fine-tune EDITOR with the following policy variants: 3) From Input replaces the reference with the input for generating the initial sequence, as in Agrawal et al. (2021). 4) Editing is our proposed roll-in policy. 5) Editing Curriculum, EDITCL, refers to our approach as described in §3. During inference, we start from the input sequence $(y^{s})$, which is refined iteratively by applying a sequence of actions, as described in §2, until 1) the output sequences from two consecutive iterations are the same, or 2) the maximum number of decoding steps $(N = 10)$ is reached. The edit distance between two sequences is measured by the Levenshtein edit distance (Levenshtein et al., 1966).
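
Two pieces of the setup above are easy to make concrete: the Levenshtein edit distance used throughout as the difficulty measure, and the stopping rule for iterative refinement. The sketch below uses illustrative names, and `edit_step` stands in for one full reposition-plus-insertion decoding pass:

```python
def levenshtein(a, b):
    """Levenshtein edit distance (insertions, deletions, substitutions),
    computed with a rolling DP row."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        cur = [i]
        for j, y in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def iterative_refine(y_s, edit_step, max_steps=10):
    """Refine the input until two consecutive iterations agree,
    or the maximum number of decoding steps (N = 10) is reached."""
    y = y_s
    for _ in range(max_steps):
        y_next = edit_step(y)
        if y_next == y:
            break
        y = y_next
    return y
```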

# 5 Findings

Controllable Simplification As can be seen in Table 3, our overall training framework, EDITCL, improves significantly over the prior training strategy for EDITOR (From Reference) for all metrics (SARI: +3.8, PCC: +0.091, ARI-Acc: +10.1%), and over the AR baseline. Ablations show that this is a combined effect of multiple factors. Dual-path roll-in, From Input, improves over From Reference as expected (SARI: +1.9, PCC: +0.077, ARI-Acc: +8.0%), as the roll-in sequences encountered during training are similar to those encountered during inference. Using expert roll-in (EDITING) performs better than using learned roll-in (dual-path roll-in) across the board, with gains of up to 3 SARI points over From Reference. Training with CL (EDITCL) improves over the best roll-in strategy<sup>4</sup>, improving the precision of deletions (+1.6) and leading to a significant improvement in SARI score (+0.7) over EDITING, with no significant change in grade-specific metrics.

We also report training and inference statistics. For training, we report the number of training updates to convergence, i.e. when the model achieves the best validation perplexity on the development dataset. For inference, we report the average number of actions taken by the model to generate the refined output. Each iteration encompasses a reposition operation followed by an insertion, applied to all the tokens in the input sequence in parallel. CL reduces the average number of actions needed to generate outputs compared to EDITING, while taking only $\sim 2\mathrm{K}$ more updates during training than From Input. These results show that our EDITING roll-in policy and the curriculum play complementary roles in improving training for editing.

Abstractive Summarization On the Abstractive Summarization task (Table 4), EDITCL achieves the best performance across the board compared to alternative training strategies for EDITOR, with gains of up to $\sim 4$ SARI and $\sim 3$ ROUGE points. Our proposed approach improves the precision of the deletion operation (DEL-P, +7). It also preserves the tokens from the source sequence that are present in the reference, as suggested by the improvement in KEEP-F1 (+3.9) over the EDITOR (From Reference) model.

For completeness, we also compare our approach with systems trained in prior work: (1) ILP (Clarke and Lapata, 2008), an integer linear programming approach for deletion-based compression, (2) T3 (Cohn and Lapata, 2008), a tree transducer-based model for abstractive compression, (3) SEQ2SEQ (Filippova et al., 2015), a neural network model for deletion-based compression, (4) NAMAS (Rush et al., 2015), a neural model for abstractive compression and summarization, and (5) FELIX (Mallinson et al., 2020), a non-autoregressive approach to text editing. We use the outputs provided by Toutanova et al. (2016) for [1-4] and Mallinson et al. (2020) for [5]. We endeavored to make the comparison as fair as possible, but it is not possible to have a fully controlled comparison. In particular, FELIX is trained on uncased data and generates uncased outputs, while we train and evaluate our models with truecasing.

When evaluated using our pipeline, our training strategy applied to generic NAR models achieves scores that are on par with, or better than, those of dedicated summarization models (Table 5). However, this evaluation penalizes FELIX as it is trained to address the simpler problem of summarization on uncased text. On lower-cased outputs, our best model falls behind FELIX by 1.7 ROUGE points; however, FELIX has about twice as many parameters as our model and benefits from BERT pre-training (Devlin et al., 2019). As a result, this comparison confirms the promise of our approach overall.

<table><tr><td rowspan="2">Model</td><td colspan="4">SARI</td><td colspan="2">ARI-based</td><td rowspan="2">Training Updates</td><td rowspan="2">Inference action/sample</td></tr><tr><td>keep-F1</td><td>add-F1</td><td>del-P</td><td>combined</td><td>PCC</td><td>% ARI-Acc</td></tr><tr><td>AR</td><td>66.2 ±0.3</td><td>4.4 ±0.3</td><td>43.4 ±1.4</td><td>38.0 ±0.5</td><td>0.716 ±0.004</td><td>34.5 ±0.4</td><td>-</td><td>-</td></tr><tr><td>dual-path roll-in</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>FROM REFERENCE</td><td>66.1 ±0.2</td><td>2.2 ±0.2</td><td>45.5 ±1.2</td><td>37.9 ±0.4</td><td>0.656 ±0.003</td><td>29.7 ±0.2</td><td>50K</td><td>1.175</td></tr><tr><td>FROM INPUT</td><td>66.5 ±0.1</td><td>3.6 ±0.2</td><td>49.3 ±0.5</td><td>39.8 ±0.2</td><td>0.733 ±0.003</td><td>37.7 ±0.4</td><td>10K</td><td>2.669</td></tr><tr><td>EDITING</td><td>66.1 ±0.2</td><td>5.2 ±0.1</td><td>51.7 ±0.2</td><td>41.0 ±0.1</td><td>0.745 ±0.005</td><td>39.7 ±0.2</td><td>6K</td><td>2.161</td></tr><tr><td>EDITCL</td><td>66.8 ±0.2</td><td>4.9 ±0.2</td><td>53.3 ±0.4</td><td>41.7 ±0.3</td><td>0.747 ±0.004</td><td>39.8 ±0.3</td><td>12K</td><td>1.802</td></tr></table>

Table 3: Results on the Newsela-Grade test dataset for Controllable Simplification: our proposed framework, EDITCL, achieves the best performance on SARI and ARI-based metrics across the board.

<table><tr><td rowspan="2">Model</td><td colspan="4">SARI</td><td colspan="3">Rouge-L</td></tr><tr><td>keep-F1</td><td>add-F1</td><td>del-P</td><td>combined</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>AR</td><td>20.0</td><td>1.7</td><td>58.5</td><td>26.8</td><td>35.6</td><td>30.1</td><td>32.1</td></tr><tr><td>dual-path roll-in</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>FROM REFERENCE</td><td>49.5</td><td>3.7</td><td>58.8</td><td>37.3</td><td>54.2</td><td>70.1</td><td>60.8</td></tr><tr><td>FROM INPUT</td><td>45.5</td><td>3.6</td><td>61.4</td><td>36.8</td><td>52.8</td><td>63.4</td><td>57.2</td></tr><tr><td>EDITING</td><td>54.7</td><td>4.1</td><td>62.9</td><td>40.6</td><td>55.9</td><td>74.6</td><td>63.6</td></tr><tr><td>EDITCL</td><td>54.4</td><td>4.4</td><td>65.5</td><td>41.4</td><td>56.1</td><td>74.0</td><td>63.8</td></tr></table>

Table 4: Results on the Summarization dataset: EDITCL improves ROUGE-F1 and SARI over EDITOR.

<table><tr><td rowspan="2">Model</td><td colspan="3">Rouge-L</td></tr><tr><td>P</td><td>R</td><td>F1</td></tr><tr><td>ILP (Clarke and Lapata, 2008)</td><td>60.6</td><td>63.2</td><td>60.6</td></tr><tr><td>T3 (Cohn and Lapata, 2008)</td><td>48.3</td><td>20.0</td><td>26.8</td></tr><tr><td>NAMAS (Rush et al., 2015)</td><td>48.8</td><td>55.2</td><td>51.5</td></tr><tr><td>SEQ2SEQ (Filippova et al., 2015)</td><td>57.6</td><td>51.5</td><td>53.1</td></tr><tr><td>FELIX (Mallinson et al., 2020)</td><td>53.7</td><td>58.1</td><td>55.5</td></tr><tr><td>EDITCL</td><td>56.1</td><td>74.0</td><td>63.8</td></tr><tr><td>FELIX (LC)</td><td>65.3</td><td>71.5</td><td>67.8</td></tr><tr><td>EDITCL (LC)</td><td>57.7</td><td>77.2</td><td>66.1</td></tr></table>

Table 5: Comparison to prior work on the Summarization dataset: Our approach outperforms all the baselines in ROUGE-L (F1). LC: lower-cased.

# 6 Analysis

We conduct further experiments to better understand the factors that help our training strategies improve editing quality.

# 6.1 Impact of EDITING roll-in

First, we seek to measure whether our approach has the intended effect of bridging the gap between training and test for editing tasks. Figure 3 shows the distribution of oracle insertions and deletions observed when (a) training with EDITOR's default roll-in policy; (b) refining an original input sequence at inference time; and (c) training with our EDITING roll-in policy for Controllable TS. The plots show that with the default learning policy of the EDITOR model, the model does not learn to perform the complex deletion operations required at inference time. By contrast, our proposed roll-in exposes the model to a distribution that has higher overlap with the inference distribution, as well as to additional intermediate sequences that encourage exploration during training.

185
+
186
+ # 6.2 Impact of Curriculum Controlled roll-out
187
+
188
+ Training Dynamics To verify that curriculum learning helps our model better exploit its training data, we train EDITOR on $x\% \in [0,100]$ of the data, and compare using random samples with samples ranked by increasing edit distance. Figure 4 shows the number of updates to convergence on the development dataset for controllable simplification with/without CL. Training converges early (70 iterations only) on $13\%$ of the easiest
189
+
190
+ ![](images/bf0aa4b14043ccc88ae6beed13d0bd65b5cbb22eaba9be7e0cfe94a558d5479e.jpg)
191
+
192
+ ![](images/f773726e078f84fc2a23a4d76e816ced380edf0ed130bca8da9b4a864988b15f.jpg)
193
+
194
+ ![](images/94031e18adeeef506d6938486091c4a9ad9b566f1249b59612973851058477b4.jpg)
195
+ (a) EDITOR s roll-in (Training)
196
+
197
+ ![](images/ddef0009aa3d1dc5e6cbadf82681bb85e28b4b224fa9b424202ff9e304514a19.jpg)
198
+ (b) Inference Distribution
199
+
200
+ ![](images/3b2c96d38bdbe4c886616d4db5a31f1990db2f40eaea4bdaf44bdb922ef9754c.jpg)
201
+ (c) EDITING roll-in (Training)
202
+
203
+ ![](images/dc2a4dc661953056527180d881ab38db14fa72e8b9e5343560d6d59ae37e944b.jpg)
204
+ Figure 3: Distribution of Oracle Edit Operations (Insertions/Deletions) observed on Controllable TS. Our proposed roll-in policy's distribution of edit operations is closer to the inference distribution, while enabling exploration during training.
205
+
206
+ samples with oracle edit distance between the input and the output sequence $<= 2$ . This supports the hypothesis that despite adding noise, our approach yields easier examples to train on. The order in which samples are presented matters, as adding batches with larger edit distance ( $>63\%$ data) without maintaining the order of the samples converges early. By contrast, the curriculum pacing function adds samples in order of increasing difficulty, allowing the model sufficient training time to learn from new samples while improving overall performance across metrics.
We also report the learning curves when training EDITOR on the Newsela dataset in Figure 5. Training with the curriculum consistently reduces the loss on the development dataset, leading to better generalization.

Ranking Criteria We compare the edit-distance criterion (EDITCL) with other curriculum criteria in Table 6, where the order of examples is a) random, b) controlled by the length ratio between the source and target sequences (Length Ratio), or c) governed by the difference between the source and target grade levels (Grade Difference). Our proposed criterion outperforms both the task-specific (Grade Difference) and task-agnostic (Length Ratio) criteria on the Newsela-Grade development set across all metrics. Length Ratio achieves a better correlation with edit distance than Grade Difference, which is also reflected in its performance (SARI: +0.3, PCC: +0.032, ARI: +0.7) on the Controllable Simplification task. This might reflect the fact that higher grade differences do not necessarily require more edits, for instance when the sentence to be simplified is already relatively simple. These mismatches do not occur when the edit distance itself is used as the sample difficulty criterion.

<table><tr><td>Criteria</td><td>SARI</td><td>PCC</td><td>% ARI-Acc</td><td>Corr.</td></tr><tr><td>Random</td><td>40.7</td><td>0.749</td><td>38.6</td><td>-</td></tr><tr><td>Length Ratio</td><td>41.0</td><td>0.762</td><td>39.0</td><td>0.26</td></tr><tr><td>Grade Difference</td><td>40.7</td><td>0.730</td><td>38.3</td><td>0.19</td></tr><tr><td>EDITCL</td><td>42.0</td><td>0.758</td><td>39.6</td><td>1.00</td></tr><tr><td>- EDITING roll-in</td><td>40.1</td><td>0.734</td><td>37.8</td><td>-</td></tr><tr><td>- CL</td><td>41.2</td><td>0.742</td><td>39.3</td><td>-</td></tr></table>

Table 6: On the Newsela-Grade dev dataset: using edit distance as the difficulty criterion improves over both the task-specific (Grade Difference) and task-agnostic (Length Ratio) criteria. Our proposed EDITING roll-in and curriculum-controlled roll-out provide complementary advantages during model training.
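
The Corr. column reports the Pearson correlation between each criterion's per-sample difficulty scores and the edit distance. For reference, a minimal implementation (the example values below are illustrative, not taken from the dataset):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# e.g. hypothetical per-sample edit distances vs. length-ratio scores
corr = pearson([1, 2, 2, 5, 9], [1.1, 1.0, 1.4, 1.6, 2.0])
```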

Complementarity of roll-in and roll-out design We report the performance of the From Input model when trained with the curriculum only, without the EDITING policy, i.e. EDITCL − EDITING, in the same Table 6. Both the EDITING roll-in and the curriculum-controlled roll-out provide complementary advantages to model training, as removing either results in a drop in performance across all metrics for controllable TS. However, we observe a larger drop in scores when we do not apply the EDITING policy, which shows that our proposed roll-in policy is necessary to reap the benefits of curriculum learning.

![](images/310f9df3b28f9cc5587ef2e6a01ed5afcfa592a99579fb646d0d2c2d757ee294.jpg)
Figure 4: The sample order during training matters, as training without curriculum on the same amount of data ($\geq 40\%$) converges early (left plot) and to lower performance across all metrics (right plot) relative to training with curriculum on the same data.

![](images/c92ddf7ee2458d77877b750debcc70163ec8b9e3411dde7b3c6b841b386c7d6d.jpg)

![](images/964d8973798a67d7a797bb52dbd2f49a1da8fdd07d5f262df54f21d50a38fa5d.jpg)
Figure 5: Training with curriculum reduces the loss on the development dataset, leading to better generalization on Controllable TS.

# 7 Related Work

NAR models Non-autoregressive (NAR) models have been used to enable parallel generation of output tokens for machine translation (Stern et al., 2019; Chan et al., 2020; Xu and Carpuat, 2021). Mallinson et al. (2020) design a custom multi-step non-autoregressive edit-based model for sequence editing where each source token is first tagged to represent the type of edit operation to be performed, and a secondary model is then used to in-fill new tokens. The tagging and editing models are trained independently. By contrast, we propose approaches to adapt NAR models designed for MT to these tasks and train an end-to-end model to generate an edited sequence.

Curriculum Learning for Sequence Refinement While curriculum learning has been applied to many tasks, such as MT (Haffari, 2009; Platanios et al., 2019; Kumar et al., 2019), sentiment analysis (Sido and Konopík, 2019), natural language understanding (Xu et al., 2020), and reading comprehension (Tay et al., 2019), its application to sequence refinement tasks has not been explored yet. Various strategies have been proposed to control sample difficulty, such as n-gram frequency (Haffari, 2009; Platanios et al., 2019), token rarity, and sentence length (Liu et al., 2020). Chang et al. (2021) use the Levenshtein edit distance as a sample difficulty criterion to order samples for the task of data-to-text generation, using an AR seq2seq model. Instead, we focus on edit distance as a sample difficulty criterion that is directly tied to the training oracle and model design.

Roll-in policies There is a plethora of work in the imitation learning literature on algorithms that strike a balance between learned and expert roll-in policies (Ross et al., 2011; Venkatraman et al., 2015; Chang et al., 2015). However, large differences between the expert's and the learner's policy actions can hurt performance (Brantley et al., 2019; He et al., 2012; Leblond et al., 2018). In our work, we propose to roll in with noised states instead, so that the model can mimic expert demonstrations from states that it is more likely to encounter during inference.

# 8 Conclusion

This paper introduced two complementary strategies to address undertraining and poor generalization when adapting NAR models to editing tasks: 1) a new roll-in policy that generates intermediate sequences that the model is likely to encounter during inference, and 2) a curriculum that controls, throughout training, the difficulty of the roll-out policy, which estimates the cost-to-go from the roll-in sequences to the desired output sequences. Together, these strategies improve output quality consistently on controllable simplification and abstractive summarization. These results open space for further research to evaluate the potential of this approach for other editing tasks (e.g., post-editing, style transfer), and to further tailor imitation learning policies and curriculum design to these tasks.

# Acknowledgments

We thank Eleftheria Briakou, Khanh Nguyen, Kianté Brantley, the members of the CLIP lab at UMD, and the anonymous ARR reviewers for their helpful and constructive comments.

# References

Sweta Agrawal and Marine Carpuat. 2019. Controlling text complexity in neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1549-1564.
Sweta Agrawal, Weijia Xu, and Marine Carpuat. 2021. A non-autoregressive edit-based approach to controllable text simplification. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3757-3769, Online. Association for Computational Linguistics.
Fernando Alva-Manchego, Joachim Bingel, Gustavo Paetzold, Carolina Scarton, and Lucia Specia. 2017. Learning how to simplify from explicit labeling of complex-simplified text pairs. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 295-305, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, and Vihari Piratla. 2019. Parallel iterative edit models for local sequence transduction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4251-4261.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR).
Kiante Brantley, Kyunghyun Cho, Hal Daumé, and Sean Welleck. 2019. Non-monotonic sequential text generation. In Proceedings of the 2019 Workshop on Widening NLP, pages 57-59, Florence, Italy. Association for Computational Linguistics.
William Chan, Chitwan Saharia, Geoffrey Hinton, Mohammad Norouzi, and Navdeep Jaitly. 2020. Imputer: Sequence modelling via imputation and dynamic programming. In International Conference on Machine Learning, pages 1403-1413. PMLR.
Raman Chandrasekar and Bangalore Srinivas. 1997. Automatic induction of rules for text simplification. Knowledge-Based Systems, 10(3):183-190.
Ernie Chang, Hui-Syuan Yeh, and Vera Demberg. 2021. Does the order of training samples matter? improving neural data-to-text generation with curriculum learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 727-733, Online. Association for Computational Linguistics.
Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daume, and John Langford. 2015. Learning to search better than your teacher. In International Conference on Machine Learning, pages 2058-2066. PMLR.
James Clarke and Mirella Lapata. 2008. Global inference for sentence compression: An integer linear programming approach. Journal of Artificial Intelligence Research, 31:399-429.
Trevor Cohn and Mirella Lapata. 2008. Sentence compression beyond word deletion. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 137-144, Manchester, UK. Coling 2008 Organizing Committee.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT (1), pages 4171-4186.
Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. EditNTS: A neural programmer-interpreter model for sentence simplification through explicit editing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3393-3402, Florence, Italy. Association for Computational Linguistics.
Katja Filippova, Enrique Alfonseca, Carlos A Colmenares, Lukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with LSTMs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 360-368.
Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6114-6123.
Jiatao Gu, James Bradbury, Caiming Xiong, Victor O.K. Li, and Richard Socher. 2018. Non-autoregressive neural machine translation. In International Conference on Learning Representations.
Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 11179-11189. Curran Associates, Inc.
Junliang Guo, Xu Tan, Linli Xu, Tao Qin, Enhong Chen, and Tie-Yan Liu. 2020. Fine-tuning by curriculum learning for non-autoregressive neural machine translation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7839-7846.

Gholamreza Haffari. 2009. Machine learning approaches for dealing with limited bilingual training data in statistical machine translation. Ph.D. thesis, Simon Fraser University.
He He, Jason Eisner, and Hal Daume. 2012. Imitation learning by coaching. Advances in Neural Information Processing Systems, 25:3149-3157.
Michael Heilman, Kevyn Collins-Thompson, and Maxine Eskenazi. 2008. An analysis of statistical models and features for reading difficulty prediction. In Proceedings of the Third Workshop on Innovative Use of NLP for Building Educational Applications, pages 71-79. Association for Computational Linguistics.
Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, and Rada Mihalcea. 2020. Deep learning for text style transfer: A survey. arXiv preprint arXiv:2011.00416.
Gaurav Kumar, George Foster, Colin Cherry, and Maxim Krikun. 2019. Reinforcement learning based curriculum optimization for neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2054-2061, Minneapolis, Minnesota. Association for Computational Linguistics.
Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Representations.
Rémi Leblond, Jean-Baptiste Alayrac, Anton Osokin, and Simon Lacoste-Julien. 2018. SEARNN: Training RNNs with global-local losses. In International Conference on Learning Representations.
Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1173-1182.
Vladimir I Levenshtein et al. 1966. Binary codes capable of correcting deletions, insertions, and reversals. In Soviet Physics Doklady, volume 10, pages 707-710. Soviet Union.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
Jinglin Liu, Yi Ren, Xu Tan, Chen Zhang, Tao Qin, Zhou Zhao, and Tie-Yan Liu. Task-level curriculum learning for non-autoregressive neural machine translation.
280
+
281
+ Xuebo Liu, Houtim Lai, Derek F Wong, and Lidia S Chao. 2020. Norm-based curriculum learning for neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 427-436.
282
+ Jonathan Mallinson, Aliaksei Severyn, Eric Malmi, and Guillermo Garrido. 2020. FELIX: Flexible text editing through tagging and insertion. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1244-1255, Online. Association for Computational Linguistics.
283
+ Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, and Aliaksei Severyn. 2019. Encode, tag, realize: High-precision text editing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5057-5068.
284
+ Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnabás Póczos, and Tom M Mitchell. 2019. Competence-based curriculum learning for neural machine translation. In NAACL-HLT (1).
285
+ Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Weinan Zhang, Yong Yu, and Lei Li. 2020. Glancing transformer for non-autoregressive neural machine translation. arXiv preprint arXiv:2008.07905.
286
+ Stephane Ross and Drew Bagnell. 2010. Efficient reductions for imitation learning. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 661-668, Chia Laguna Resort, Sardinia, Italy. PMLR.
287
+ Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 627-635. JMLR Workshop and Conference Proceedings.
288
+ Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379-389.
289
+ Chitwan Saharia, William Chan, Saurabh Saxena, and Mohammad Norouzi. 2020. Non-autoregressive machine translation with latent alignments. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1098-1108, Online. Association for Computational Linguistics.
290
+ Carolina Scarton and Lucia Specia. 2018. Learning Simplifications for Specific Target Audiences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 712-718.
291
+
292
+ R. J. Senter and Edgar A. Smith. 1967. Automated readability index. Technical report, University of Cincinnati, Ohio.
293
+ Jakub Sido and Miloslav Konopík. 2019. Curriculum learning in sentiment analysis. In International Conference on Speech and Computer, pages 444-450. Springer.
294
+ Michel Simard, Cyril Goutte, and Pierre Isabelle. 2007. Statistical phrase-based post-editing. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 508-515.
295
+ Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. 2019. Insertion transformer: Flexible sequence generation via insertion operations. In International Conference on Machine Learning, pages 5976-5985. PMLR.
296
+ Yi Tay, Shuohang Wang, Anh Tuan Luu, Jie Fu, Minh C Phan, Xingdi Yuan, Jinfeng Rao, Siu Cheung Hui, and Aston Zhang. 2019. Simple and effective curriculum pointer-generator networks for reading comprehension over long narratives. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4922-4931.
297
+ Kristina Toutanova, Chris Brockett, Ke M. Tran, and Saleema Amershi. 2016. A dataset and evaluation metrics for abstractive compression of sentences and short paragraphs. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 340-350, Austin, Texas. Association for Computational Linguistics.
298
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
299
+ Arun Venkatraman, Martial Hebert, and J Andrew Bagnell. 2015. Improving multi-step prediction of learned time series models. In Twenty-Ninth AAAI Conference on Artificial Intelligence.
300
+ Benfeng Xu, Licheng Zhang, Zhendong Mao, Quan Wang, Hongtao Xie, and Yongdong Zhang. 2020. Curriculum learning for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6095-6104.
301
+ Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification research: New data can help. Transactions of the Association for Computational Linguistics, 3:283-297.
302
+ Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401-415.
303
+
304
+ Weijia Xu and Marine Carpuat. 2021. EDITOR: An edit-based transformer with repositioning for neural machine translation with soft lexical constraints. Transactions of the Association for Computational Linguistics, 9:311-328.
305
+ Ziyu Yao, Frank F. Xu, Pengcheng Yin, Huan Sun, and Graham Neubig. 2021. Learning structural edits via incremental tree transformations. In International Conference on Learning Representations.
306
+ Xuan Zhang, Gaurav Kumar, Huda Khayrallah, Kenton Murray, Jeremy Gwinnup, Marianna J Martindale, Paul McNamee, Kevin Duh, and Marine Carpuat. 2018. An empirical exploration of curriculum learning for neural machine translation.
307
+ Yikai Zhou, Baosong Yang, Derek F. Wong, Yu Wan, and Lidia S. Chao. 2020. Uncertainty-aware curriculum learning for neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6934-6944, Online. Association for Computational Linguistics.
308
+
309
+ # A Results on Development set
310
+
311
+ <table><tr><td rowspan="2">Model</td><td colspan="4">SARI</td><td colspan="2">ARI-based</td><td rowspan="2">Inference action/sample</td></tr><tr><td>keep-F1</td><td>add-F1</td><td>del-P</td><td>combined</td><td>PCC</td><td>ARI-Acc</td></tr><tr><td>AR</td><td>0.653</td><td>0.043</td><td>0.456</td><td>0.384</td><td>0.711</td><td>0.349</td><td>-</td></tr><tr><td>dual-path roll-in</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>FROM REFERENCE</td><td>0.648</td><td>0.021</td><td>0.454</td><td>0.374</td><td>0.645</td><td>0.285</td><td>1.188</td></tr><tr><td>FROM INPUT</td><td>0.660</td><td>0.035</td><td>0.510</td><td>0.402</td><td>0.727</td><td>0.368</td><td>2.545</td></tr><tr><td>EDITING</td><td>0.657</td><td>0.049</td><td>0.530</td><td>0.412</td><td>0.742</td><td>0.393</td><td>2.071</td></tr><tr><td>EDITCL</td><td>0.662</td><td>0.043</td><td>0.556</td><td>0.420</td><td>0.758</td><td>0.397</td><td>1.771</td></tr></table>
312
+
313
+ Table 7: Results on the Newsela-Grade development dataset for Controllable Simplification: our proposed framework, EDITCL, achieves the best performance on SARI and ARI-based metrics across the board.
314
+
315
+ # B Impact of Noise
316
+
317
+ Figure 6 shows that adding noise to the training samples smooths the distribution of edit distances across training instances: noising creates intermediate sequences whose overall edit distance to the reference is lower (or higher) than that of the original input sequence.
318
+
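+ To make this concrete, the sketch below pairs a hypothetical noising function with the standard Levenshtein dynamic program. The `add_noise` helper and its deletion/insertion rates are illustrative assumptions, not the paper's exact noise model; it only demonstrates how noised inputs land at intermediate edit distances from the reference.

```python
import random

def levenshtein(a, b):
    # Classic one-row dynamic-programming edit distance over token sequences.
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # delete x
                                     dp[j - 1] + 1,    # insert y
                                     prev + (x != y))  # substitute
    return dp[-1]

def add_noise(tokens, vocab, p_del=0.2, p_ins=0.2, rng=random.Random(0)):
    # Illustrative noise model: randomly drop tokens and insert random vocab tokens.
    out = []
    for t in tokens:
        if rng.random() >= p_del:
            out.append(t)
        if rng.random() < p_ins:
            out.append(rng.choice(vocab))
    return out

src = "the quick brown fox jumps over the lazy dog".split()
ref = "a fast brown fox leaps over a sleepy dog".split()
noised = add_noise(src, vocab=src)
print(levenshtein(src, ref), levenshtein(noised, ref))
```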
319
+ ![](images/14b6f98fc06e593fbaed85928993a819a26ffbf656545535023fd6cef8f1a185.jpg)
320
+ Figure 6: Adding noise to the source increases (higher) or decreases (lower) the edit distance uniformly across samples for Controllable TS.
321
+
322
+ # C Oracle Edit Distribution for Summarization
323
+
324
+ ![](images/2889962174ab7bf4af9ceef024d7a2351e1f9244cb6f2f5d93b18498b41fec2a.jpg)
325
+
326
+ ![](images/c1467d686af84daca8d60ba7ee4f6e5ea4000f1ce24cb17b092b19cac5b8af6a.jpg)
327
+
328
+ ![](images/3d932ac3ff4cfb50e2b32570a05c1d149cc6a1e4b0e5e924650b1273984b69d7.jpg)
329
+ (a) EDITOR's roll-in (Training)
330
+
331
+ ![](images/5da5652f530c1f24afcb1578198978779398c48e772ee834bbf46748f8341db4.jpg)
332
+
333
+ ![](images/89dafb7e32310732f7ab428954e04424fee04e6f795d8e1ff21ab72d19c4d05b.jpg)
334
+ (b) Inference Distribution
335
+ (c) EDITING roll-in (Training)
336
+
337
+ ![](images/e7f9d5a4de85b61f2e5cd3be377b596adcc1d8f95c5add975683bce8b93b83dc.jpg)
338
+ Figure 7: Distribution of Oracle Edit Operations (Insertions/Deletions) observed on Abstractive Summarization. Our proposed roll-in policy's distribution of edit operations is closer to the inference distribution, while enabling exploration via generated intermediate sequences during training.
animitationlearningcurriculumfortexteditingwithnonautoregressivemodels/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e5bcb244a60df3a049425948a894c7c69d2a522b5bf5ee693c89faa37860eb69
3
+ size 527521
animitationlearningcurriculumfortexteditingwithnonautoregressivemodels/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:85cd734d6cccbe397465388a7098c45bc3840444edecfbaab354dc973efe423e
3
+ size 433089
aninformationtheoreticapproachtopromptengineeringwithoutgroundtruthlabels/dea8c1b5-878f-4d3a-8428-4805f654c3f0_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:444e65c40943f272701936a17486f750d056035b6318829748a459a92a6fab32
3
+ size 418710
aninformationtheoreticapproachtopromptengineeringwithoutgroundtruthlabels/dea8c1b5-878f-4d3a-8428-4805f654c3f0_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cc1b408f7fbe59fa222d76b21c3bdf74b13def2c45e10d4dd2bae6a299e35f4f
3
+ size 508790
aninformationtheoreticapproachtopromptengineeringwithoutgroundtruthlabels/dea8c1b5-878f-4d3a-8428-4805f654c3f0_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c68cc49ba88461a8f69c7db16e6fa674fd2408ea0a30665f50256a8d034c370a
3
+ size 1613099
aninformationtheoreticapproachtopromptengineeringwithoutgroundtruthlabels/full.md ADDED
The diff for this file is too large to render. See raw diff
 
aninformationtheoreticapproachtopromptengineeringwithoutgroundtruthlabels/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8b02744be6e82b226c64ae01dbe2d3956c9794efe67324dbd78140d914393bca
3
+ size 1596579
aninformationtheoreticapproachtopromptengineeringwithoutgroundtruthlabels/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0a5013eb644b4ee7f95bf3cb8751bfdb935448d97f15a50b37331c386b660a1c
3
+ size 2248590
aninterpretableneurosymbolicreasoningframeworkfortaskorienteddialoguegeneration/b51dd698-ab14-4f63-be2b-b19394d12c7d_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:888be4eb3de1acab98cfa1e57b9f0515b3017f1ff65424c77dd100091f6f9b41
3
+ size 126980
aninterpretableneurosymbolicreasoningframeworkfortaskorienteddialoguegeneration/b51dd698-ab14-4f63-be2b-b19394d12c7d_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:121f126af05a2846403d5e5026643dddd56218dc81240781bd1edd5df547d634
3
+ size 152628
aninterpretableneurosymbolicreasoningframeworkfortaskorienteddialoguegeneration/b51dd698-ab14-4f63-be2b-b19394d12c7d_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:924bd92d5b3ab3eca61e17744961fe8ca9db53d95f00f399cc830dfa7ff3a1c2
3
+ size 945960
aninterpretableneurosymbolicreasoningframeworkfortaskorienteddialoguegeneration/full.md ADDED
@@ -0,0 +1,561 @@
1
+ # An Interpretable Neuro-Symbolic Reasoning Framework for Task-Oriented Dialogue Generation
2
+
3
+ Shiquan Yang $^{1}$ , Rui Zhang $^{2}$ , Sarah Erfani $^{1}$ , and Jey Han Lau $^{1}$
4
+
5
+ <sup>1</sup>The University of Melbourne, <sup>2</sup>www.ruizhang.info
6
+
7
+ $^{1}$\{shiquan@student., sarah.erfani@, laujh@\}unimelb.edu.au, $^{2}$rayteam@yeah.net
8
+
9
+ # Abstract
10
+
11
+ We study the interpretability issue of task-oriented dialogue systems in this paper. Previously, most neural-based task-oriented dialogue systems employ an implicit reasoning strategy that makes the model predictions uninterpretable to humans. To obtain a transparent reasoning process, we introduce neuro-symbolic reasoning, which justifies model decisions with explicit reasoning chains. Since deriving reasoning chains requires multi-hop reasoning for task-oriented dialogues, existing neuro-symbolic approaches would induce error propagation due to their one-phase design. To overcome this, we propose a two-phase approach that consists of a hypothesis generator and a reasoner. We first obtain multiple hypotheses, i.e., potential operations to perform the desired task, through the hypothesis generator. Each hypothesis is then verified by the reasoner, and the valid one is selected to conduct the final prediction. The whole system is trained by exploiting raw textual dialogues without using any reasoning chain annotations. Experimental studies on two public benchmark datasets demonstrate that the proposed approach not only achieves better results, but also introduces an interpretable decision process. Code and data: https://github.com/shiquanyang/NS-Dial.
12
+
13
+ # 1 Introduction
14
+
15
+ Neural task-oriented dialogue systems have enjoyed a rapid progress recently (Peng et al., 2020; Hosseini-Asl et al., 2020; Wu et al., 2020), achieving strong empirical results on various benchmark datasets such as SMD (Eric et al., 2017) and MultiWOZ (Budzianowski et al., 2018). However, most existing approaches suffer from the lack of explainability due to the black-box nature of neural networks (Doshi-Velez and Kim, 2017; Lipton, 2018; Bommasani et al., 2021), which may hurt the trustworthiness between the users and the system. For
16
+
17
+ ![](images/8defa29ef519475fcb5f22faa88c15640ea6f06dd3d25369fbd862bf5768cb32.jpg)
18
+ Figure 1: An example dialogue that incorporates external KB. The context entity (i.e., Leichhardt) and answer entity (i.e., Cityroom) are marked as Red and Yellow, respectively. The triple containing the context entity and answer entity is not directly stored in KB and should be derived by a reasoning chain formed by multiple KB triplets.
19
+
20
+ instance, in Figure 1, a user is asking for a hotel recommendation at a given location. The system performs reasoning on a knowledge base (KB) and incorporates the correct entity in the response. However, when the system fails to provide the correct entities, it would be difficult for humans to trace back the issues and debug the errors due to its intrinsic implicit reasoning nature. As a result, such system cannot be sufficiently trusted to be deployed in real-world products.
21
+
22
+ To achieve trustworthy dialogue reasoning, we aim to develop interpretable KB reasoning, which is crucial not only for providing useful information (e.g., locations in Figure 1) to users, but also for communicating options and selecting target entities. Without interpretability, it is difficult for users to trust the reasoning process and the returned entities.
23
+
24
+ To tackle this challenge, we present a novel Neuro-Symbolic Dialogue framework (NS-Dial) which combines the representational capacity of neural networks with the explicit reasoning of symbolic approaches (e.g., rule-based expert systems). Existing neuro-symbolic approaches (Vedantam et al.,
25
+
26
+ 2019; Chen et al., 2020) mostly employ a one-phase procedure where a tree-structured program composed of pre-defined human-interpretable neural modules (e.g., attention and classification modules in Neural Module Networks (Andreas et al., 2016)) is generated and executed to obtain the final predictions. However, since the KB reasoning task involves a reasoning process spanning multiple triplets in a diverse and large-scale KB, generating and following a single program (i.e., a reasoning chain formed by KB triplets) is prone to error propagation, where a mistake in one step can derail the subsequent reasoning process and result in sub-optimal performance.
27
+
28
+ To address this, we propose a two-phase procedure to alleviate the effects of error propagation by first generating and then verifying multiple hypotheses. Here, a hypothesis is in the form of a triplet containing an entity mentioned in dialogue context and an entity within KB, and their corresponding relation. The valid (i.e., correct) hypothesis is the one that contains the entity mentioned in the ground-truth response. Once we obtain multiple hypothesis candidates during the generation phase, we employ a reasoning engine for verifying those hypotheses. For instance in Figure 1, given the user query "Can you recommend me a hotel located in Leichhardt?", in order to find the valid hypothesis, the hypothesis generator obtains multiple candidates e.g., [Cityroom, Located_in, Leichhardt] and [Gonville_Hotel, Located_in, Leichhardt]. The reasoning engine will then construct proof trees to verify them, e.g., for the first hypothesis [Cityroom, Located_in, Leichhardt], it can be verified with the following reasoning chain in the KB: [Cityroom, Next_to, Palm_Lawn] $\rightarrow$ [Palm_Lawn, Located_in, Chadstone] $\rightarrow$ [Chadstone, Located_in, Leichhardt]. The whole framework is trained end-to-end using raw dialogues and thus does not require additional intermediate labels for either the hypothesis generation or verification modules.
29
+
30
+ To summarize, our contributions are as follows:
31
+
32
+ - We introduce a novel neuro-symbolic framework for interpretable KB reasoning in task-oriented dialogue systems.
33
+ - We propose a two-phase "generating-and-verifying" approach which generates multiple hypotheses and verifies them via reasoning chains to mitigate the error-propagation issue.
34
+ - We conduct extensive experimental studies on
35
+
36
+ two benchmark datasets to verify the effectiveness of our proposed model. By analyzing the generated hypotheses and the verifications, we demonstrate our model's interpretability.
37
+
38
+ # 2 Related Work
39
+
40
+ Task-Oriented Dialogue Traditionally, task-oriented dialogue systems are built via pipeline-based approaches where task-specific modules are designed separately and connected to generate system responses (Chen et al., 2016; Zhong et al., 2018; Wu et al., 2019a; Chen et al., 2019a; Huang et al., 2020). In another line of work, many studies have shifted towards end-to-end approaches to reduce human effort (Bordes et al., 2017; Lei et al., 2018; Madotto et al., 2018; Moon et al., 2019; Jung et al., 2020). Lei et al. (2018) propose a two-stage sequence-to-sequence model that incorporates dialogue state tracking and response generation jointly in a single sequence-to-sequence architecture. Zhang et al. (2020) propose a domain-aware multi-decoder network (DAMD) to combine belief state tracking, action prediction and response generation in a single neural architecture. Most recently, the success of large-scale pre-trained language models (e.g., BERT, GPT-2) (Devlin et al., 2018; Radford et al., 2019) has spurred many recent dialogue studies to explore large-scale pre-trained language models for dialogue (Wolf et al., 2019; Zhang et al., 2019). In task-oriented dialogue, Budzianowski and Vulić (2019) fine-tune GPT-2 on the MultiWOZ dataset for dialogue response generation. Peng et al. (2020) and Hosseini-Asl et al. (2020) employ a single unified GPT-2 model jointly trained for belief state prediction, system action prediction and response generation in a multi-task fashion. However, most existing approaches cannot explain why the model makes a specific decision in a human-understandable way. We aim to address this limitation and introduce interpretability for dialogue reasoning in this study.
41
+
42
+ Neuro-Symbolic Reasoning Neuro-symbolic reasoning has attracted much research attention recently due to its advantage of exploiting the representational power of neural networks and the compositionality of symbolic reasoning for more robust and interpretable models (Andreas et al., 2016; Hu et al., 2017; Hudson and Manning, 2018; Vedantam et al., 2019; Chen et al., 2019b; van Krieken et al., 2022). The main difference between neuro-symbolic and pure
43
+
44
+ neural networks lies in how the former combines basic rules or modules to model complex functions. Rocktäschel and Riedel (2017) propose a neuro-symbolic model that can jointly learn subsymbolic representations and interpretable rules from data via standard back-propagation. In visual QA, Andreas et al. (2016) propose neural module networks to compose a chain of differentiable modules wherein each module implements an operator from a latent program. Yi et al. (2018) propose to discover symbolic program trace from the input question and then execute the program on the structured representation of the image for visual question answering. However, these approaches cannot be easily adapted to task-oriented dialogues due to the error propagation issue caused by multi-hop reasoning on large-scale KBs. Thus, we aim to bridge this gap by developing a neuro-symbolic approach for improving task-oriented dialogues.
45
+
46
+ # 3 Preliminary
47
+
48
+ In this work, we focus on the problem of task-oriented dialogue response generation with KBs. Formally, given the dialogue history $X$ and knowledge base $B$ , our goal is to generate the system responses $Y$ word-by-word. The probability of the generated responses can be written as:
49
+
50
+ $$
51
+ p(Y \mid X, B) = \prod_{t=1}^{n} p\left(y_{t} \mid X, B, y_{1}, y_{2}, \dots, y_{t-1}\right) \tag{1}
52
+ $$
53
+
54
+ where $y_{t}$ is the t-th token in the response $Y$ . The overall architecture is shown in Figure 2. We start by introducing the standard modules in our system and then explain the two novel modules afterward.
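+ Equation 1 is the usual left-to-right factorization; the toy sketch below (with made-up conditional probabilities, not model outputs) shows how the response probability accumulates token by token, and why training losses use the equivalent log-space sum.

```python
import math

# Hypothetical per-step conditionals p(y_t | X, B, y_<t) for a 3-token response.
step_probs = [0.9, 0.6, 0.8]

# Eq. 1: the sequence probability is the product of the conditionals.
p_response = math.prod(step_probs)

# In log space the product becomes a sum, which is numerically stabler.
log_p = sum(math.log(p) for p in step_probs)
print(p_response, math.exp(log_p))
```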
55
+
56
+ # 3.1 Dialogue Encoding
57
+
58
+ We employ pre-trained language model BERT (Devlin et al., 2019) as the backbone to obtain the distributed representations for each token in the dialogue history. Specifically, we add a $[CLS]$ token at the start of the dialogue history to represent the overall semantics of the dialogue. The hidden states $H_{enc} = (h_{CLS}, h_1, \dots, h_M)$ for all the input tokens $X = ([CLS], x_1, \dots, x_M)$ are computed using:
59
+
60
+ $$
61
+ H_{\text{enc}} = \mathrm{BERT}_{\text{enc}}\left(\phi^{\text{emb}}(X)\right) \tag{2}
62
+ $$
63
+
64
+ where $M$ is the number of tokens in the dialogue history, $\phi^{emb}$ is the embedding layer of BERT.
65
+
66
+ # 3.2 Response Generation
67
+
68
+ To generate the system response, we first utilize a linear layer to project $H_{enc}$ to $H_{enc}' = (h_{CLS}', h_1', \dots, h_M')$ that are in the same space of the decoder. We initialize the decoder with $h_{CLS}'$ . During decoding timestep $t$ , the model utilizes the hidden state $h_{dec,t}$ to attend $H_{enc}'$ to obtain an attentive representation $h_{dec,t}'$ via standard attention mechanism. We then concatenate $h_{dec,t}$ and $h_{dec,t}'$ to form a context vector $C$ and project it into the vocabulary space $\mathcal{V}$ :
69
+
70
+ $$
71
+ C = \left[h_{dec,t}, h_{dec,t}^{\prime}\right] \tag{3}
72
+ $$
73
+
74
+ $$
75
+ P_{vocab,t} = \mathrm{Softmax}\left(U_{1} C\right) \tag{4}
76
+ $$
77
+
78
+ where $U_{1}$ is a learnable linear layer, $P_{\text{vocab},t}$ is the vocabulary distribution for generating the token $y_{t}$ .
79
+
80
+ Next, we aim to estimate the KB distribution $P_{kb,t}$ , i.e., the probability distribution of entities in the KB, in an interpretable way and fuse $P_{vocab,t}$ and $P_{kb,t}$ for generating the final output tokens. We follow See et al. (2017) and employ a soft-switch mechanism to fuse $P_{vocab,t}$ and $P_{kb,t}$ to generate output token $y_t$ . Specifically, the generation probability $p_{gen} \in [0,1]$ is computed from the attentive representation $h_{dec,t}'$ and the hidden state $h_{dec,t}$ :
81
+
82
+ $$
83
+ p_{gen} = \sigma\left(U_{2}\left[h_{dec,t}^{\prime}, h_{dec,t}\right]\right) \tag{5}
84
+ $$
85
+
86
+ where $\sigma$ is sigmoid function, $U_{2}$ is a linear layer. The output token $y_{t}$ is generated by greedy sampling from the probability distribution $P(w)$ :
87
+
88
+ $$
89
+ P(w) = p_{gen} P_{vocab,t} + \left(1 - p_{gen}\right) P_{kb,t} \tag{6}
90
+ $$
91
+
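+ Equations 5-6 are a pointer-generator style soft switch (See et al., 2017). A minimal numeric sketch, with toy distributions that are illustrative rather than model outputs:

```python
import numpy as np

def soft_switch(p_vocab, p_kb, p_gen):
    # Eq. 6: mix the vocabulary and KB distributions with gate p_gen.
    return p_gen * p_vocab + (1.0 - p_gen) * p_kb

# Toy distributions over a shared output space of 4 symbols.
p_vocab = np.array([0.7, 0.1, 0.1, 0.1])  # generic words
p_kb    = np.array([0.0, 0.0, 0.9, 0.1])  # KB entities
p_gen   = 0.3                             # gate leaning towards the KB

p_w = soft_switch(p_vocab, p_kb, p_gen)
token = int(np.argmax(p_w))  # greedy sampling of y_t
print(p_w, token)
```

Because both inputs are valid distributions and the mixture weights sum to one, the fused $P(w)$ is again a valid distribution; here the gate favors the KB, so the greedy token is the high-probability KB entity.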
92
+ We next describe how to obtain the KB distribution $P_{kb,t}$ in details using the two novel modules we proposed, i.e., hypothesis generator and hierarchical reasoning engine.
93
+
94
+ # 4 Neuro-Symbolic Reasoning For Task-Oriented Dialogue
95
+
96
+ To compute the KB distribution $P_{kb,t}$ , we present two novel modules: the hypothesis generator (HG) and the hierarchical reasoning engine (HRE). We take the context vector $C$ (Equation 3) as the input of the HG module and generate $K$ hypotheses $\mathbb{H}$ , each of which is then fed into the HRE module to generate logical reasoning chains and their belief scores. The estimated belief scores then serve as $P_{kb,t}$ , giving us a distribution over the entities in the KB. Next, we describe how each component works in detail and explain how they interact with each other for generating $P_{kb,t}$ .
97
+
98
+ ![](images/5f38ab4e61c2ca1082f10279f891cbc422cc8fc38a89ea5e7e562a331533b97b.jpg)
99
+ Figure 2: Illustration of the overall architecture: (a) hypothesis generator generating a set of synthesized hypotheses; (b) reasoning engine used to verify the generated hypotheses; (c) dialogue encoding; (d) response generation.
100
+
101
+ ![](images/0864a050c0a6a68285321db624e48f1b009828bd1f3f0225cfa05d088a95f836.jpg)
102
+
103
+ ![](images/608d16d16f3dfd09e60be60cbaa60bd289cb44fec3ed4b0284f12cdacff178d2.jpg)
104
+
105
+ # 4.1 Hypothesis Generator
106
+
107
+ Let a hypothesis be a 3-tuple of the form $[H, R, T]$ , where $H$ and $T$ are the head and tail entities, and $R$ is the relation between entities. In this paper, we are interested in three types of hypotheses including the H-Hypothesis, T-Hypothesis, and R-Hypothesis. The H-Hypothesis is the structure where the tail entity $T$ and relation $R$ are inferred from the context and the head entity $H$ is unknown (which needs to be answered using the KB), and it takes the form $[▷, R, T]$ . In a similar vein, the T-Hypothesis and R-Hypothesis have unknown tail entity $T$ and relation $R$ , respectively. The goal of the Hypothesis Generator module is to generate hypotheses in this triple format which will later be verified by the Hierarchical Reasoning Engine.
108
+
109
+ Intuitively, a hypothesis can be determined by its content and structure. The structure indicates the template form of the hypothesis while the content fills up the template. For instance, the H-Hypothesis has its template form of “ $[>$ , $R$ , $T]$ ” and the content that needs to be realised includes candidate entities (i.e., “ $\triangleright$ ”), and query states (i.e., the tail “ $T$ ” and relation entities “ $R$ ”). To this end, we employ a divide-and-conquer strategy to jointly learn three sub-components: structure prediction, query states prediction, and candidates prediction. Next, we describe each sub-component in details.
110
+
111
+ Structure Prediction (SP) The goal of the structure prediction module is to determine the structure of the hypothesis (i.e., H/T/R-Hypothesis) based on the context. For example in Figure 1, one might expect an H-Hypothesis at timestep 0. Specifically, SP uses a shared-private architecture to predict the hypothesis type. It first takes the context vector $C$ (Equation 3) as input and utilizes a shared transformation layer between all the three sub-components to learn task-agnostic feature $h_{\text{share}}$ :
112
+
113
+ $$
114
+ h_{share} = W_{2}\left(\mathrm{LeakyReLU}\left(W_{1} C\right)\right) \tag{7}
115
+ $$
116
+
117
+ where $W_{1}$ and $W_{2}$ are learnable parameters (shared by the structure prediction, query states prediction and candidate prediction components) and LeakyReLU is the activation function.
118
+
119
+ The shared layer can be parameterised with complicated neural architectures. However, to keep our model simple, we use linear layers which we found to perform well in our experiments. SP next uses a private layer on top of the shared layer to learn task-specific features for structure prediction:
120
+
121
+ $$
122
+ h_{\text{private}}^{sp} = W_{4}\left(\mathrm{LeakyReLU}\left(W_{3} h_{share}\right)\right) \tag{8}
123
+ $$
124
+
125
+ where $W_{3}$ and $W_{4}$ are learnable parameters. For ease of presentation, we define the private feature transformation function as:
126
+
127
+ $$
128
+ \mathcal{F}^{\star}: h_{share} \rightarrow h_{\text{private}}^{\star} \tag{9}
129
+ $$
130
+
131
+ where $\star$ denotes any of the three sub-components. To obtain the predicted hypothesis structure, a straightforward approach is to apply softmax on $h_{\text{private}}^{sp}$ . However, this will break the differentiability of the overall architecture since we perform sampling on the outcome and pass it to the neural networks. To avoid this, we utilize the Gumbel-Softmax trick (Jang et al., 2017) over $h_{\text{private}}^{sp}$ to get the sampled structure type:
132
+
133
+ $$
134
+ \mathrm{I}_{sp} = \text{Gumbel-Softmax}\left(h_{\text{private}}^{sp}\right) \in \mathbb{R}^{3} \tag{10}
135
+ $$
136
+
137
+ where $\mathrm{I}_{sp}$ is a one-hot vector and the index of one element can be viewed as the predicted structure. In this paper, we define 0 as H-Hypothesis, 1 as T-Hypothesis and 2 as R-Hypothesis.
138
+
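+ The Gumbel-Softmax step can be sketched as follows. This is a generic illustration of the reparameterized sampling trick of Jang et al. (2017), not the authors' exact implementation; the logits and temperature are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    # Add Gumbel(0, 1) noise to the logits, then take a temperature-scaled softmax.
    u = rng.uniform(low=1e-9, high=1.0, size=logits.shape)
    g = -np.log(-np.log(u))
    y = (logits + g) / tau
    y = np.exp(y - y.max())  # stable softmax
    return y / y.sum()

# Hypothetical logits over the three structure types {H, T, R}-Hypothesis.
logits = np.array([2.0, 0.5, 0.1])
soft = gumbel_softmax(logits)

# Straight-through discretization: a one-hot sample in the forward pass,
# while gradients flow through the soft relaxation.
i_sp = np.eye(3)[soft.argmax()]
print(soft, i_sp)
```

Unlike sampling from a plain softmax, the noise-plus-softmax form keeps the whole computation differentiable with respect to the logits, which is what allows HG to be trained end-to-end.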
139
+ Query States Prediction (QSP) Query states are the tokens in the hypothesis that need to be inferred from the dialogue history. For example, one might want to infer relation $R = \text{Located\_in}$ and tail
140
+
141
+ $T =$ Leichhardt based on the history in Figure 1. Therefore, the goal of query states prediction is to estimate the state information (e.g., $T$ and $R$ in an H-Hypothesis) of the hypothesis. Specifically, QSP takes the shared feature $h_{share}$ as input and then applies the private feature transformation function followed by Gumbel-Softmax to obtain the state tokens of the hypothesis using:
+
+ $$
+ \begin{array}{l} h_{\text{private}}^{qsp,k} = \mathcal{F}^{qsp,k} \left( h_{\text{share}} \right) \quad (11) \\ \mathrm{I}_{qsp}^{k} = \text{Gumbel-Softmax} \left( h_{\text{private}}^{qsp,k} \right) \in \mathbb{R}^{n} \quad (12) \end{array}
+ $$
+
+ where $n$ is the number of tokens (entities and relations) in the KB, $k \in \{0,1\}$ , and $\mathrm{I}_{qsp}^0$ and $\mathrm{I}_{qsp}^1$ are two one-hot vectors whose corresponding KB tokens serve as the state tokens of the hypothesis.
+
+ Candidates Prediction (CP) To generate the final hypotheses, we need multiple candidates to instantiate the remaining slot of the hypothesis structure (i.e., the part other than the state tokens), e.g., Cityroom or Gonville_Hotel as candidate head entities $H$ in Figure 1. To this end, we utilize an embedding layer $\phi_{cp}^{emb}$ to convert all the tokens in the KB to vector representations. We then compute a probability distribution over all the KB tokens using:
+
+ $$
+ \mathrm{P}_{i} = \text{Sigmoid} \left( \phi_{cp}^{\text{emb}} \left( K_{i} \right) \odot h_{\text{share}} \right) \tag{13}
+ $$
+
+ where $K_{i}$ is the $i$ -th token in the KB, $\phi_{cp}^{emb}$ is the embedding layer of CP, $\mathrm{P}_i$ is the probability of the $i$ -th token being a candidate, and $\odot$ denotes the inner product. We use sigmoid instead of softmax because we find the softmax distribution too "sharp", making the probabilities of different tokens hard to differentiate when sampling multiple reasonable candidates.
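The per-token sigmoid scoring of Equation 13 can be sketched as follows; the toy embeddings and shared feature below are illustrative values, not numbers from the model:

```python
import math

def candidate_probs(kb_embeddings, h_share):
    """Score every KB token independently with a sigmoid over the
    inner product of its embedding and the shared feature (Eq. 13)."""
    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))
    return {
        token: sigmoid(sum(a * b for a, b in zip(emb, h_share)))
        for token, emb in kb_embeddings.items()
    }

# Toy 3-d embeddings (illustrative only).
kb = {
    "Cityroom": [0.9, 0.2, 0.1],
    "Gonville_Hotel": [0.8, 0.3, 0.0],
    "Located_in": [-0.5, 0.1, 0.4],
}
probs = candidate_probs(kb, [1.0, 0.5, -0.2])
top2 = sorted(probs, key=probs.get, reverse=True)[:2]
```

Because each token is scored independently, several candidates can receive high probability at once, which is exactly what the multi-candidate sampling step needs.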
+
+ Hypothesis Synthesizing The final hypotheses $\mathbb{H}$ are composed by combining the outputs of the three sub-components as follows: (i) We generate the hypothesis template according to the predicted structure type. For example, if SP predicts structure type 0, which denotes H-Hypothesis, the model will form a template "$[\triangleright, R, T]$"; (ii) We next instantiate the state tokens in the hypothesis sequentially using the outputs of the QSP module. For example, if the output tokens of QSP are “Located_in” $(k=0)$ and “Leichhardt” $(k=1)$ , the hypothesis becomes $[\triangleright, \text{Located\_in}, \text{Leichhardt}]$ ; (iii) Finally, we instantiate the candidate slot (i.e., $\triangleright$ ) with the top- $K$ ( $K = 5$ in our best-performing version) entities selected from $\mathrm{P}$ . For example, if the top-2 highest-probability tokens are Cityroom and Gonville_Hotel, the model will instantiate two hypotheses: [Cityroom, Located_in, Leichhardt] and [Gonville_Hotel, Located_in, Leichhardt].
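The three synthesizing steps can be sketched as a small routine. The mapping from structure type to the open slot in $[H, R, T]$ is our reading of the paper's definitions (H-Hypothesis leaves the head open, T-Hypothesis the tail, R-Hypothesis the relation):

```python
def synthesize_hypotheses(structure_type, state_tokens, candidate_probs, k=5):
    """Compose concrete hypotheses from SP, QSP and CP outputs.

    structure_type: 0 = H-, 1 = T-, 2 = R-Hypothesis, deciding which
    slot of [H, R, T] is left open for the candidates (our assumed
    mapping). state_tokens are the two predicted tokens in slot order.
    """
    open_slot = {0: 0, 1: 2, 2: 1}[structure_type]  # head, tail or relation unknown
    top_k = sorted(candidate_probs, key=candidate_probs.get, reverse=True)[:k]
    hypotheses = []
    for cand in top_k:
        triple = list(state_tokens)
        triple.insert(open_slot, cand)  # drop the candidate into the open slot
        hypotheses.append(tuple(triple))
    return hypotheses

probs = {"Cityroom": 0.9, "Gonville_Hotel": 0.8, "The_Hotpot": 0.1}
hyps = synthesize_hypotheses(0, ["Located_in", "Leichhardt"], probs, k=2)
```

With `k=2` this reproduces the two example hypotheses from the text, [Cityroom, Located_in, Leichhardt] and [Gonville_Hotel, Located_in, Leichhardt].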
+
+ # 4.2 Hierarchical Reasoning Engine
+
+ With the hypotheses generated by the HG module, we next aim to verify them via logical reasoning chains. Inspired by Neural Theorem Provers (Rocktäschel and Riedel, 2017), we develop chain-like logical reasoning with the following format:
+
+ $$
+ \alpha, (H, R, T) \leftarrow (H, R_{n}, Z_{n}) \wedge \dots \wedge (Z_{1}, R_{1}, T) \tag{14}
+ $$
+
+ where $\alpha$ is a weight indicating the model's belief in the target hypothesis $[H,R,T]$ , the right-hand side of the arrow is the reasoning chain used to prove that hypothesis, and $R_{i}$ and $Z_{i}$ are relations and entities from the KB. The goal is to find the proof chain and the confidence $\alpha$ for a given hypothesis. To this end, we introduce a neural-network-based hierarchical reasoning engine (HRE) that learns to conduct chain-like logical reasoning. At a high level, HRE recursively generates multiple levels of sub-hypotheses using neural networks, forming a tree structure as shown in Figure 2. Next, we describe how this module works in detail.
+
+ The module takes the output hypotheses from the HG module as input. Each hypothesis serves as one target hypothesis. To generate the reasoning chain in Equation 14, the module first finds sub-hypotheses of the same format as the target in the hypothesis space. The sub-hypotheses can be viewed as intermediate reasoning results used to prove the target. One straightforward approach is to use neural networks to predict all the tokens in the sub-hypotheses (2 heads, 2 tails and 2 relations). However, this leads to an extremely large search space over triples and is inefficient. Intuitively, sub-hypotheses inherit from the target hypothesis, and the sub-hypotheses themselves are connected by bridge entities. For example, [Uber, office_in, USA] can be verified by two sub-hypotheses [Uber, office_in, Seattle] and [Seattle, a_city_of, USA], where Uber and USA are inherited from the target and Seattle is the bridge entity between the sub-hypotheses. Motivated by this, we propose to reduce the triple search complexity by constraining the sub-hypotheses. Specifically, given target $[H,R,T]$ , we generate sub-hypotheses of the format $[H,R_1,Z],[Z,R_2,T]$ , where $Z$ is the bridge entity, and $R_{1}$ and $R_{2}$ are relations to be predicted. The goal of the neural networks is therefore reduced to predicting three tokens (2 relations and 1 bridge entity). Formally, HRE predicts the vector representation of the bridge entity as follows:
+
+ $$
+ h_{H}, h_{R}, h_{T} = \phi_{cp}^{emb}(H), \phi_{cp}^{emb}(R), \phi_{cp}^{emb}(T) \tag{15}
+ $$
+
+ $$
+ h_{Z} = W_{6} \left( \text{LeakyReLU} \left( W_{5} \left[ h_{H}, h_{R}, h_{T} \right] \right) \right) \tag{16}
+ $$
+
+ where $[h_H, h_R, h_T]$ is the concatenation of the representations of the tokens in the target hypothesis, and $h_Z$ is the vector representation of the bridge entity $Z$ . The prediction of $h_{R_1}$ and $h_{R_2}$ uses the same architecture as Equation 16, the difference being that they use different linear layers for the feature transformation. Note that $h_Z$ denotes a KB token in the embedding space. We can decode the token by finding the nearest KB token to $h_Z$ in vector space. More details on token decoding can be found in Appendix A. Upon obtaining $h_Z, h_{R_1}, h_{R_2}$ , the module generates the two sub-hypotheses in vector representation. Next, the module iteratively takes each generated sub-hypothesis as input and extends the proof process by generating next-level sub-hypotheses in a depth-first manner until the maximum depth $D$ has been reached.
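The constrained depth-first expansion can be sketched as follows, with hypothetical stand-in functions for the neural predictors of Equations 15-16 (the lambdas below are illustrative, not the paper's learned modules):

```python
def expand(hypothesis, predict_bridge, predict_relations, depth, max_depth):
    """Depth-first expansion of a target [H, R, T] into constrained
    sub-hypotheses [H, R1, Z] and [Z, R2, T] until max_depth is reached.
    Returns the leaf triples of the proof tree in DFS order."""
    if depth == max_depth:
        return [hypothesis]
    h, r, t = hypothesis
    z = predict_bridge(h, r, t)          # bridge entity (stand-in for Eq. 16)
    r1, r2 = predict_relations(h, r, t)  # relations of the two sub-hypotheses
    left, right = (h, r1, z), (z, r2, t)
    return (expand(left, predict_bridge, predict_relations, depth + 1, max_depth)
            + expand(right, predict_bridge, predict_relations, depth + 1, max_depth))

# Reproduce the text's example with fixed stand-in predictors.
leaves = expand(
    ("Uber", "office_in", "USA"),
    predict_bridge=lambda h, r, t: "Seattle",
    predict_relations=lambda h, r, t: ("office_in", "a_city_of"),
    depth=0,
    max_depth=1,
)
```

At depth 1 this yields the two sub-hypotheses [Uber, office_in, Seattle] and [Seattle, a_city_of, USA]; each extra level of depth doubles the number of leaves, which is why $D$ is kept small.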
+
+ Belief Score To model confidence in different reasoning chains, we further measure the semantic similarities between each leaf-node triple and the triples in the KB, and compute the belief score $\alpha_{m}$ of the $m$ -th hypothesis $\mathbb{H}_m$ :
+
+ $$
+ \alpha_{m} = \min_{\forall i \in U} \max_{\forall j \in V} e^{-d \left( Leaf_{i}, KB_{j} \right)} \tag{17}
+ $$
+
+ where $Leaf_{i}$ is the representation (concatenation of $H, R, T$ ) of the $i$ -th leaf node in the proof tree (in DFS order), $KB_{j}$ is the representation of the $j$ -th triple in the KB, $U = [0, \dots, u-1]$ and $V = [0, \dots, v-1]$ where $u$ and $v$ are the numbers of leaf nodes and KB triples respectively, and $d$ is the distance metric. In general, any distance function can be applied; we adopt Euclidean distance in our implementation since we found that it worked well in our experiments. All the triples in the leaf nodes form the reasoning chain for the input hypothesis, as in Equation 14. The hypotheses $\mathbb{H}$ coupled with the beliefs $\alpha$ form our KB distribution $P_{kb,t}$ . More details can be found in Appendix B. Intuitively, the belief score can be viewed as the likelihood that the hypothesis contains the correct entity. If the hypothesis is valid (i.e., contains the correct answer entity), it should have a high likelihood, which encourages the model to generate proper reasoning chains grounded in the triples stored in the KB.
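A minimal sketch of Equation 17, assuming Euclidean distance on toy vector representations (the 2-d vectors below are illustrative placeholders for the concatenated triple embeddings):

```python
import math

def belief_score(leaf_reprs, kb_reprs):
    """alpha = min over leaves of max over KB triples of exp(-||leaf - kb||), Eq. 17."""
    def euclid(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(
        max(math.exp(-euclid(leaf, kb)) for kb in kb_reprs)
        for leaf in leaf_reprs
    )

kb = [[0.0, 0.0], [1.0, 1.0]]
# A leaf that exactly matches a KB triple contributes a factor of 1.0;
# the belief is bounded by the worst-matched leaf (the min).
alpha = belief_score([[0.0, 0.0], [1.0, 0.0]], kb)
```

The max rewards each leaf for matching its best KB triple, while the min makes the whole chain only as believable as its weakest link.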
+
+ <table><tr><td>Dataset</td><td>Domains</td><td>Train</td><td>Dev</td><td>Test</td></tr><tr><td>SMD</td><td>Navigate,Weather,Schedule</td><td>2425</td><td>302</td><td>304</td></tr><tr><td>MultiWOZ 2.1</td><td>Restaurant,Attraction,Hotel</td><td>1839</td><td>117</td><td>141</td></tr></table>
+
+ Table 1: Statistics of SMD and MultiWOZ 2.1.
+
+ Training We apply two loss functions to train the whole architecture end-to-end. The first loss function $\mathcal{L}_{gen}$ is for the final output. We use a cross-entropy loss over the ground-truth token and the generated token from the final distribution $P(w)$ . The second loss $\mathcal{L}_{cp}$ is for the candidates prediction (CP) module in the hypotheses generator. We apply a binary cross-entropy loss over the output distribution for each KB token (Equation 13) and their corresponding labels. The labels for each KB token are computed as follows:
+
+ $$
+ \operatorname{Label}_{i} = \left\{ \begin{array}{ll} 1, & K_{i} = y_{t} \\ 0, & K_{i} \neq y_{t} \end{array} \right. \tag{18}
+ $$
+
+ where $K_{i}$ is the $i$ -th token in the KB and $y_{t}$ is the ground-truth output at timestep $t$ . The final loss $\mathcal{L}$ is calculated by:
+
+ $$
+ \mathcal{L} = \gamma_{g} \cdot \mathcal{L}_{\text{gen}} + \gamma_{c} \cdot \mathcal{L}_{\text{cp}} \tag{19}
+ $$
+
+ where $\gamma_{g}$ and $\gamma_{c}$ are hyper-parameters and we set them to 1 in our experiments.
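A sketch of the combined objective of Equations 18-19 on toy distributions. Whether $\mathcal{L}_{cp}$ is summed or averaged over KB tokens is an implementation detail not fixed by the text; we average here, and the probabilities are illustrative:

```python
import math

def cross_entropy(probs, gold_index):
    """Standard CE over the final output distribution P(w)."""
    return -math.log(probs[gold_index])

def binary_cross_entropy(kb_probs, gold_token):
    """BCE over the per-token CP probabilities, with labels from Eq. 18
    (1 iff the KB token equals the ground-truth output); averaged here."""
    total = 0.0
    for token, p in kb_probs.items():
        label = 1.0 if token == gold_token else 0.0
        total += -(label * math.log(p) + (1.0 - label) * math.log(1.0 - p))
    return total / len(kb_probs)

gamma_g = gamma_c = 1.0  # both set to 1 in the experiments
loss = (gamma_g * cross_entropy([0.1, 0.7, 0.2], gold_index=1)
        + gamma_c * binary_cross_entropy({"Cityroom": 0.9, "The_Hotpot": 0.2},
                                         gold_token="Cityroom"))
```

Both terms are standard losses, so the full architecture can be optimized with ordinary gradient descent once the Gumbel-Softmax keeps sampling differentiable.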
+
+ # 5 Experiments
+
+ # 5.1 Datasets
+
+ To evaluate the effectiveness and demonstrate the interpretability of our proposed approach, we conduct experiments on two public benchmark datasets for task-oriented dialogue: SMD (Eric et al., 2017) and MultiWOZ 2.1 (Budzianowski et al., 2018). We use the partitions created by Eric et al. (2017); Madotto et al. (2018) and by Qin et al. (2020) for SMD and MultiWOZ, respectively. Statistics of the datasets are presented in Table 1. In Appendix E, we present several additional results on a large-scale synthetic dataset to demonstrate our model's multi-hop reasoning capability under complex KB reasoning scenarios.
+
+ # 5.2 Baselines
+
+ We compare our model with the following state-of-the-art baselines on KB reasoning in task-oriented dialogues: (1) Mem2Seq (Madotto et al., 2018): employs memory networks to store the KB and combines a pointer mechanism to either generate tokens from the vocabulary or copy from memory; (2) GLMP (Wu et al., 2019b): uses a global-to-local pointer mechanism to query the KB during decoding; (3) DF-Net (Qin et al., 2020): employs a shared-private architecture to capture both domain-specific and domain-general knowledge to improve model transferability; (4) GraphDialog (Yang et al., 2020): incorporates graph structural information obtained from sentence dependency parsing to improve KB reasoning accuracy and response generation quality. Detailed experimental settings are included in Appendix C.
+
+ <table><tr><td rowspan="2">Model</td><td colspan="5">SMD</td><td colspan="5">MultiWOZ 2.1</td></tr><tr><td>BLEU</td><td>F1</td><td>Navigate F1</td><td>Weather F1</td><td>Calendar F1</td><td>BLEU</td><td>F1</td><td>Restaurant F1</td><td>Attraction F1</td><td>Hotel F1</td></tr><tr><td>Mem2Seq</td><td>12.6</td><td>33.4</td><td>20.0</td><td>32.8</td><td>49.3</td><td>6.6</td><td>21.6</td><td>22.4</td><td>22.0</td><td>21.0</td></tr><tr><td>GLMP</td><td>13.9</td><td>60.7</td><td>54.6</td><td>56.5</td><td>72.5</td><td>6.9</td><td>32.4</td><td>38.4</td><td>24.4</td><td>28.1</td></tr><tr><td>GraphDialog</td><td>14.2</td><td>61.1</td><td>56.4</td><td>56.9</td><td>72.1</td><td>6.7</td><td>34.1</td><td>39.2</td><td>27.8</td><td>29.6</td></tr><tr><td>DF-Net</td><td>14.4</td><td>62.7</td><td>57.9</td><td>57.6</td><td>73.1</td><td>9.4</td><td>35.1</td><td>40.9</td><td>28.1</td><td>30.6</td></tr><tr><td>Ours (D=1)</td><td>14.9</td><td>63.8</td><td>60.1</td><td>58.7</td><td>75.0</td><td>9.7</td><td>36.5</td><td>42.0</td><td>29.7</td><td>32.8</td></tr><tr><td>Ours (D=3)</td><td>15.6*</td><td>64.5*</td><td>60.3*</td><td>59.2*</td><td>75.6*</td><td>10.6*</td><td>37.2*</td><td>42.6*</td><td>30.6*</td><td>33.7*</td></tr><tr><td>Ours (D=5)</td><td>14.5</td><td>63.5</td><td>59.4</td><td>57.9</td><td>74.8</td><td>9.3</td><td>36.2</td><td>41.7</td><td>28.8</td><td>31.5</td></tr></table>
+
+ Table 2: Main results. D denotes the maximum depth of the HRE module. We run each experiment 5 times with different random seeds and report the average results. * denotes that the improvements of our framework over all baselines are statistically significant with $p < 0.05$ under a t-test. Following Qin et al. (2020), we report Navigate, Weather, Calendar on SMD and Restaurant, Attraction, Hotel on MultiWOZ for per-domain results.
+
+ # 5.3 Main Results
+
+ Following prior work (Eric et al., 2017; Madotto et al., 2018; Wu et al., 2019b), we adopt the BLEU and Entity F1 metrics to evaluate the performance of our framework. The results on the two datasets are shown in Table 2. As we can see, our framework consistently outperforms all previous state-of-the-art baselines on both datasets across both metrics. Specifically, on the MultiWOZ dataset, our model achieves more than a $2\%$ absolute improvement in Entity F1 and a $1.2\%$ improvement in BLEU over the best baseline. The improvement in Entity F1 indicates that our model enhances KB reasoning, while the increase in BLEU suggests that the quality of the generated responses has improved. The same trend is observed on the SMD dataset. This indicates the effectiveness of our proposed framework for task-oriented dialogue generation.
+
+ # 5.4 Model Interpretability
+
+ To demonstrate our framework's interpretability, we investigate its inner workings. As shown in Figure 3, given the dialogue history "Can you recommend me a restaurant near Palm_Beach?", the generated response is "There is a Golden_House". At the 3rd timestep, our model successfully predicts an appropriate H-Hypothesis with Located_in and Palm_Beach as its state tokens. Our model further instantiates five concrete hypotheses and computes their belief scores using the reasoning engine. As we can see from the table, our model generates five reasonable hypotheses and scores them correctly (with the highest score for the oracle KB entity Golden_House). The proof process for the highest-scoring hypothesis is shown in Figure 3. The verification procedure generated by the HRE module has a depth of 3, and the reasoning chain used to verify the target hypothesis is: [Golden_House, Next_to, Preston_Market] $\rightarrow$ [Preston_Market, Located_in, Williamstown] $\rightarrow$ [Williamstown, Located_in, Herb_Garden] $\rightarrow$ [Herb_Garden, Located_in, Palm_Beach]. This indicates that our framework successfully and explicitly utilizes the KB information to support the reasoning process and reach a correct conclusion. More examples and error analyses can be found in the Appendix (Appendix E.4 and F).
+
+ # 5.5 Ablation Study
+
+ We ablate each component in our framework to study their effectiveness on both datasets. The results are shown in Table 3. Specifically, 1) w/o HRE denotes that we simply use the probability in candidates prediction (CP) module (Equation 13) as the KB distribution without using the scores from the reasoning engine. 2) w/o BERT denotes that we use standard GRU as encoder instead of BERT. 3) w/o Soft-switch denotes that we simply sum the KB distribution and vocabulary distribution without using a soft gate. As we can see from the table, all the individual components have notably contributed to the overall performance of
+
+ Dialogue history: Can you recommend me a restaurant near Palm_Beach? Predicted response: There is a Golden_House.
+
+ <table><tr><td>Structure Type</td><td>State Tokens</td><td>Top-5 Candidate Tokens</td><td>Set of Generated Hypotheses</td><td>Belief Scores</td></tr><tr><td rowspan="5">H-Hypothesis</td><td rowspan="5">"Located_in (k=0)", "Palm_Beach (k=1)"</td><td>"The_Hotpot"</td><td>[The_Hotpot, Located_in, Palm_Beach]</td><td>0.13</td></tr><tr><td>"Hookey_Park"</td><td>[Hookey_Park, Located_in, Palm_Beach]</td><td>0.07</td></tr><tr><td>"Golden_House"</td><td>[Golden_House, Located_in, Palm_Beach]</td><td>1.00</td></tr><tr><td>"Rose_Lands"</td><td>[Rose_Lands, Located_in, Palm_Beach]</td><td>0.08</td></tr><tr><td>"Princes_Gardens"</td><td>[Princes_Gardens, Located_in, Palm_Beach]</td><td>0.05</td></tr></table>
+
+ The figure further shows the detailed verification process of the hierarchical reasoning engine for the highest-belief hypothesis, alongside the relevant KB triples, ending in the reasoning chain: [Golden_House, Located_in, Palm_Beach] ← [Golden_House, Next_to, Preston_Market] ∧ [Preston_Market, Located_in, Williamstown] ∧ [Williamstown, Located_in, Herb_Garden] ∧ [Herb_Garden, Located_in, Palm_Beach].
+
+ Figure 3: Example of the inner workings of the hypothesis generator and hierarchical reasoning engine for generating Golden_House in the response given the dialogue history Can you recommend me a restaurant near Palm_Beach?. Our model performs 4-hop reasoning to verify the target hypothesis [Golden_House, Located_in, Palm_Beach].
+
+ <table><tr><td></td><td colspan="2">SMD</td><td colspan="2">MultiWOZ 2.1</td></tr><tr><td>Model</td><td>F1 (%)</td><td>Δ</td><td>F1 (%)</td><td>Δ</td></tr><tr><td>Ours (Full model)</td><td>64.5</td><td>-</td><td>37.2</td><td>-</td></tr><tr><td>- w/o HRE</td><td>59.4</td><td>5.1</td><td>30.5</td><td>6.7</td></tr><tr><td>- w/o BERT</td><td>61.3</td><td>3.2</td><td>33.4</td><td>3.8</td></tr><tr><td>- w/o Soft-switch</td><td>62.0</td><td>2.5</td><td>35.1</td><td>2.1</td></tr></table>
+
+ Table 3: Ablation studies on two benchmark datasets.
+
+ <table><tr><td></td><td colspan="2">SMD</td><td colspan="2">MultiWOZ 2.1</td></tr><tr><td>Model</td><td>Original F1 (%)</td><td>Unseen F1 (%)</td><td>Original F1 (%)</td><td>Unseen F1 (%)</td></tr><tr><td>GLMP</td><td>60.7</td><td>55.3</td><td>32.4</td><td>23.9</td></tr><tr><td>GraphDialog</td><td>61.1</td><td>55.7</td><td>34.1</td><td>25.4</td></tr><tr><td>DF-Net</td><td>62.7</td><td>57.2</td><td>35.1</td><td>26.5</td></tr><tr><td>Ours (Full)</td><td>64.5</td><td>61.1</td><td>37.2</td><td>32.8</td></tr></table>
+
+ Table 4: Generalization test results on two datasets.
+
+ our framework. Specifically, when removing the HRE module, the performance decreases substantially (more than a $5\%$ absolute drop), which confirms the effectiveness of the proposed hierarchical reasoning module.
+
+ # 5.6 Generalization Capability
+
+ We further investigate the generalization ability of our model under unseen settings. In the original datasets released by prior work, the entity overlap ratio between the train and test splits is $78\%$ and $15.3\%$ for MultiWOZ 2.1 and SMD, respectively. To simulate the unseen scenario, we construct a new dataset split that reduces the entity overlap ratio to $30\%$ for MultiWOZ 2.1 and $2\%$ for SMD, a more challenging setting for all models. More details of the construction process can be found in Appendix D. We re-run all the baselines with their released code, together with our model, on the new data split and report the results in Table 4. As we can see, performance drops significantly for all systems on both datasets. However, our model degrades less than the other systems, showing that it has better generalization capability under unseen scenarios. This also verifies that the neuro-symbolic approach has the advantage of better generalization, as confirmed by many other studies (Andreas et al., 2016; Rocktäschel and Riedel, 2017; Minervini et al., 2020).
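A sketch of one plausible way to measure the entity overlap statistic used above (the paper's exact construction procedure is in its Appendix D; the entity lists below are illustrative):

```python
def entity_overlap_ratio(train_entities, test_entities):
    """Fraction of distinct test-split entities that also appear in the
    train split -- one plausible reading of the overlap statistic."""
    train, test = set(train_entities), set(test_entities)
    if not test:
        return 0.0
    return len(train & test) / len(test)

ratio = entity_overlap_ratio(
    ["Cityroom", "Golden_House", "The_Hotpot"],
    ["Cityroom", "Golden_House", "Rose_Lands", "Palm_Beach"],
)
# 2 of the 4 test entities are seen in training -> 0.5
```

Lowering this ratio forces the model to produce entities it never saw during training, which is what makes the unseen split harder.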
+
+ # 5.7 Human Evaluation
+
+ Following prior work (Qin et al., 2020), we also conduct human evaluations of our framework and the baselines on three aspects: Correctness, Fluency, and Humanlikeness. Details about the scoring criteria can be found in Appendix H. We randomly select 300 different dialogue samples from the test set and ask human annotators to judge the quality of the responses and score them on the three metrics on a scale from 1 to 5. We train the annotators by showing them examples to help them understand the criteria, and employ Fleiss' kappa (Fleiss, 1971) to measure the agreement across annotators. The results are shown in Table 5. As we can see, our model outperforms all baselines across all three metrics, consistent with our previous observations from the automatic evaluations.
+
+ <table><tr><td>Model</td><td>Correct</td><td>Fluent</td><td>Humanlike</td></tr><tr><td>GLMP</td><td>4.01</td><td>3.78</td><td>3.25</td></tr><tr><td>GraphDialog</td><td>4.15</td><td>4.19</td><td>3.40</td></tr><tr><td>DF-Net</td><td>4.16</td><td>4.25</td><td>3.54</td></tr><tr><td>Ours (Full model)</td><td>4.41</td><td>4.28</td><td>3.59</td></tr><tr><td>Human</td><td>4.83</td><td>4.65</td><td>4.57</td></tr><tr><td>Agreement</td><td>75%</td><td>69%</td><td>71%</td></tr></table>
+
+ Table 5: Human evaluation results.
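Fleiss' kappa over the annotators' categorical scores can be computed as follows (a standard stdlib implementation of the statistic, not code from the paper):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for N items rated by n annotators into k categories.
    `ratings` is an N x k matrix of counts; every row must sum to n."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    # Per-item agreement P_i ...
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ]
    # ... and overall category proportions p_j.
    totals = [sum(row[j] for row in ratings) for j in range(len(ratings[0]))]
    p_j = [t / (n_items * n_raters) for t in totals]
    p_bar = sum(p_i) / n_items          # observed agreement
    p_e = sum(p * p for p in p_j)       # chance agreement
    return (p_bar - p_e) / (1.0 - p_e)

# 3 annotators, 2 items, 2 categories; perfect agreement -> kappa = 1.
kappa = fleiss_kappa([[3, 0], [0, 3]])
```

Kappa corrects raw agreement for the agreement expected by chance, which is why it is preferred over simple percent agreement for multi-annotator studies.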
+
+ # 6 Conclusion
+
+ In this paper, we propose an explicit and interpretable Neuro-Symbolic KB reasoning framework for task-oriented dialogue generation. The hypothesis generator employs a divide-and-conquer strategy to learn to generate hypotheses, and the reasoner employs a recursive strategy to learn to verify them. We evaluate our proposed framework on two public benchmark datasets, SMD and MultiWOZ 2.1. Extensive experimental results demonstrate the effectiveness of our framework, as well as its improved interpretability.
+
+ # 7 Ethical Considerations
+
+ For the human evaluation in this paper, we recruit annotators on Amazon Mechanical Turk from English-speaking countries. We pay the annotators USD $0.15 for each annotation task. Each task can be finished in 1 minute on average, which amounts to $9.00 per hour, above the US federal minimum wage ($7.25). To ensure the quality of the human evaluation results, we perform quality control in several ways. First, annotators are shown our scoring standards (Appendix H) before their tasks and are asked to follow them. If a task is not done properly, either as determined by expert judgement (we recruit 3 native English speakers to validate the Turkers' annotations) or because of obvious patterns such as constantly giving the same score for all tasks, we remove the corresponding annotations. We also compute agreement scores to check for consistency among the annotators.
+
+ # References
+
+ Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Neural module networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 39-48.
+
+ Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Re, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramer, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.
+
+ Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In ICLR.
+
+ Paweł Budzianowski and Ivan Vulić. 2019. Hello, it's gpt-2-how can i help you? towards the use of pretrained language models for task-oriented dialogue systems. arXiv preprint arXiv:1907.05774.
+
+ Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. 2018. MultiWOZ - a large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016-5026.
+
+ Wenhu Chen, Jianshu Chen, Pengda Qin, Xifeng Yan, and William Yang Wang. 2019a. Semantically conditioned dialog response generation via hierarchical disentangled self-attention. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3696-3709.
+ Xinyun Chen, Chen Liang, Adams Wei Yu, Denny Zhou, Dawn Song, and Quoc V Le. 2019b. Neural symbolic reader: Scalable integration of distributed and symbolic representations for reading comprehension. In International Conference on Learning Representations.
+ Xinyun Chen, Chen Liang, Adams Wei Yu, Denny Zhou, Dawn Song, and Quoc V. Le. 2020. Neural symbolic reader: Scalable integration of distributed and symbolic representations for reading comprehension. In International Conference on Learning Representations.
+ Yun-Nung Chen, Dilek Hakkani-Tür, Gökhan Tür, Jianfeng Gao, and Li Deng. 2016. End-to-end memory networks with knowledge carryover for multi-turn spoken language understanding. In Interspeech, pages 3245-3249.
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
+ Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning, arXiv preprint arXiv:1702.08608.
+ Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 37-49.
+ Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin, 76(5):378.
+ Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A simple language model for task-oriented dialogue. arXiv preprint arXiv:2005.00796.
+ Ronghang Hu, Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Kate Saenko. 2017. Learning to reason: End-to-end module networks for visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 804-813.
+
+ Xinting Huang, Jianzhong Qi, Yu Sun, and Rui Zhang. 2020. Semi-supervised dialogue policy learning via stochastic reward estimation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 660-670, Online. Association for Computational Linguistics.
+ Drew A Hudson and Christopher D Manning. 2018. Compositional attention networks for machine reasoning. arXiv preprint arXiv:1803.03067.
+ Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In ICLR.
+ Jaehun Jung, Bokyung Son, and Sungwon Lyu. 2020. AttnIO: Knowledge Graph Exploration with In-and-Out Attention Flow for Knowledge-Grounded Dialogue. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3484-3497, Online. Association for Computational Linguistics.
+ Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
+ Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics.
+ Zachary C Lipton. 2018. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue, 16(3):31-57.
+ Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2Seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1468-1478.
+ Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp, Edward Grefenstette, and Tim Rocktäschel. 2020. Learning reasoning strategies in end-to-end differentiable proving. In International Conference on Machine Learning, pages 6938-6949. PMLR.
+ Seungwhan Moon, Pararth Shah, Anuj Kumar, and Rajen Subba. 2019. OpenDialKG: Explainable conversational reasoning with attention-based walks over knowledge graphs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 845-854, Florence, Italy. Association for Computational Linguistics.
+ Baolin Peng, Chunyuan Li, Jinchao Li, Shahin Shayandeh, Lars Liden, and Jianfeng Gao. 2020. Soloist: Building task bots at scale with transfer learning and machine teaching. arXiv preprint arXiv:2005.05298.
+
+ Libo Qin, Xiao Xu, Wanxiang Che, Yue Zhang, and Ting Liu. 2020. Dynamic fusion network for multi-domain end-to-end task-oriented dialog. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6344-6354, Online. Association for Computational Linguistics.
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
+ Tim Rocktäschel and Sebastian Riedel. 2017. End-to-end differentiable proving. In Advances in Neural Information Processing Systems, volume 30.
+ Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083.
338
+ Emile van Krieken, Erman Acar, and Frank van Harmelen. 2022. Analyzing differentiable fuzzy logic operators. Artificial Intelligence, 302:103602.
339
+ Ramakrishna Vedantam, Karan Desai, Stefan Lee, Marcus Rohrbach, Dhruv Batra, and Devi Parikh. 2019. Probabilistic neural symbolic models for interpretable visual question answering. In International Conference on Machine Learning, pages 6428-6437. PMLR.
340
+ Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. TransferTransfo: A transfer learning approach for neural network based conversational agents. arXiv preprint arXiv:1901.08149.
341
+ Chien-Sheng Wu, Steven Hoi, Richard Socher, and Caiming Xiong. 2020. TOD-BERT: Pre-trained natural language understanding for task-oriented dialogue. arXiv preprint arXiv:2004.06871.
342
+ Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019a. Transferable multi-domain state generator for task-oriented dialogue systems. arXiv preprint arXiv:1905.08743.
343
+ Chien-Sheng Wu, Richard Socher, and Caiming Xiong. 2019b. Global-to-local memory pointer networks for task-oriented dialogue. In ICLR.
344
+ Shiquan Yang, Rui Zhang, and Sarah Erfani. 2020. GraphDialog: Integrating graph knowledge into end-to-end task-oriented dialogue systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1878-1888.
345
+
346
+ Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, and Joshua B Tenenbaum. 2018. Neural-symbolic VQA: Disentangling reasoning from vision and language understanding. arXiv preprint arXiv:1810.02338.
347
+ Yichi Zhang, Zhijian Ou, and Zhou Yu. 2020. Task-oriented dialog systems that consider multiple appropriate responses under the same context. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9604-9611.
348
+ Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2019. DialoGPT: Large-scale generative pre-training for conversational response generation. arXiv preprint arXiv:1911.00536.
349
+ Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-locally self-attentive dialogue state tracker. arXiv preprint arXiv:1805.09655.
350
+
351
+ # A Details on Token Decoding in HRE
352
+
353
+ Given the vector representations of the sub-hypotheses generated by the hierarchical reasoning engine module, we use a similarity-based approach to decode their symbolic representations. Specifically, consider a generated sub-hypothesis $[h_H, h_R, h_T]$ , where $h_H$ , $h_R$ and $h_T$ are the vector representations of the head entity, relation and tail entity, respectively. To decode the symbolic representations of the head, relation and tail entities, we use:
354
+
355
+ $$
356
+ \underset {\forall i} {\arg \min } \| \phi (K _ {i}) - h _ {H} \| ^ {2}. \tag {20}
357
+ $$
358
+
359
+ $$
360
+ \underset {\forall j} {\arg \min } \| \phi (K _ {j}) - h _ {R} \| ^ {2}. \tag {21}
361
+ $$
362
+
363
+ $$
364
+ \underset {\forall k} {\arg \min } \| \phi (K _ {k}) - h _ {T} \| ^ {2}. \tag {22}
365
+ $$
366
+
367
+ where $i, j$ and $k$ are the indices of the head entity, relation and tail entity in the vocabulary, $K_{i}, K_{j}, K_{k}$ denote the $i$ -th, $j$ -th and $k$ -th tokens of the KB, and $\phi(K_{i})$ denotes the embedding of the $i$ -th token. Through this, we decode the generated sub-hypotheses and obtain their explicit symbolic representations.
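The nearest-neighbour decoding in Equations (20)-(22) can be sketched in pure Python; the vocabulary and embedding values below are toy illustrations of ours, not from the paper:

```python
# Sketch of the similarity-based decoding in Equations (20)-(22): each generated
# vector (h_H, h_R or h_T) is mapped to the KB token whose embedding is nearest
# in squared Euclidean distance.

def decode_token(h, kb_embeddings):
    """Return argmin_i ||phi(K_i) - h||^2 over the KB embedding table."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(kb_embeddings)), key=lambda i: sq_dist(kb_embeddings[i], h))

kb_vocab = ["love_lodge", "located_in", "grattan_county"]
phi = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy embedding table
h_H = [0.9, 0.1]                            # generated head-entity vector
print(kb_vocab[decode_token(h_H, phi)])     # -> love_lodge
```

The same routine is applied independently to $h_H$, $h_R$ and $h_T$ to recover the full symbolic triple.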
368
+
369
+ # B Details on KB Distribution Calculation
370
+
371
+ We extract the KB distribution $P_{kb,t}$ at timestep $t$ from the generated hypotheses and their corresponding belief scores as follows. For instance, if the generated hypothesis $[H,R,T]$ is an H-Hypothesis with belief score $\alpha$ , we extract its candidate token $H$ and pair it with the belief score $\alpha$ , where $\alpha$ is viewed as the probability of selecting the token $H$ as the output at timestep $t$ . We do this for all the hypotheses and belief scores generated by the HG and HRE modules. Finally, all the candidate tokens paired with their belief scores form $P_{kb,t}$ at timestep $t$ .
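This extraction step can be sketched as follows; the tuple layout is our assumption, and keeping the best score when a token recurs is our simplification (the paper does not specify an aggregation rule):

```python
# Sketch of the KB-distribution extraction in Appendix B: each generated
# hypothesis contributes its candidate token, paired with its belief score,
# to P_kb at the current decoding timestep.

def build_kb_distribution(hypotheses):
    """hypotheses: list of (structure_type, (H, R, T), belief_score) tuples.
    For an H-Hypothesis the candidate token is the head H; for a
    T-Hypothesis it is the tail T. Recurring tokens keep their best score
    (our aggregation choice)."""
    p_kb = {}
    for structure, (head, _rel, tail), score in hypotheses:
        token = head if structure == "H" else tail
        p_kb[token] = max(p_kb.get(token, 0.0), score)
    return p_kb

hyps = [("H", ("Oakland", "located_in", "Springfield"), 1.00),
        ("H", ("Coburg", "located_in", "Springfield"), 0.04)]
print(build_kb_distribution(hyps))  # {'Oakland': 1.0, 'Coburg': 0.04}
```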
372
+
373
+ # C Experimental Settings
374
+
375
+ The dimensionality of the embeddings and the decoder RNN hidden units is 128, and embeddings are randomly initialized. The dropout ratio is selected from [0.1, 0.5]. We use the Adam (Kingma and Ba, 2014) optimizer to optimize the parameters of our model, and the learning rate is selected from $[1e^{-3},1e^{-4}]$ . For the encoder, we fine-tune the BERT-base-uncased model from HuggingFace's
376
+
377
+ library, which has an embedding size of 768, 12 layers and 12 heads. The maximum depth $D$ of the HRE module is selected from [1, 5], the maximum number of candidates $K$ in the CP module is selected from [1, 10], and the temperature of the Gumbel-Softmax is 0.1. All hyper-parameters are selected on the validation set, and we repeat all experiments 5 times with different random seeds and report the average results.
378
+
379
+ # D Details on Unseen Setting
380
+
381
+ We construct new dataset splits on both SMD and MultiWOZ 2.1 to simulate unseen scenarios for testing the generalization ability of all the models. Specifically, we construct each new split as follows. First, we extract all the KB entities that appear in the dialogue responses and compute the percentage of samples for each KB entity. Second, we rank all the entities by their percentage of samples in decreasing order. Next, we split the KB entity set into train entities and test entities by accumulating the total percentage of samples. Finally, we iterate over each sample in the dataset and assign it to the train or test split by checking whether the entity in its response belongs to the train entities or the test entities. In this way, we obtain a new dataset split for both SMD and MultiWOZ 2.1, with entity overlap ratios between the train and test splits of $2\%$ and $30\%$ , respectively (the overlap ratios in the original SMD and MultiWOZ 2.1 are $15.3\%$ and $78\%$ , respectively).
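The split-construction steps above can be sketched as follows. This is a simplified version with hypothetical `(dialogue, response_entity)` tuples; we approximate the paper's accumulated sample percentages with raw counts:

```python
# Sketch of the unseen-split construction in Appendix D: rank KB entities by
# how often they appear in responses, cut the ranked list into train/test
# entity sets, then route each dialogue by its response entity.
from collections import Counter

def unseen_split(samples, train_fraction=0.8):
    """samples: list of (dialogue, response_entity) pairs."""
    counts = Counter(entity for _, entity in samples)
    ranked = [e for e, _ in counts.most_common()]  # decreasing frequency
    total, cum, train_entities = len(samples), 0, set()
    for e in ranked:
        if cum / total >= train_fraction:
            break  # remaining entities go to the test split
        train_entities.add(e)
        cum += counts[e]
    train = [s for s in samples if s[1] in train_entities]
    test = [s for s in samples if s[1] not in train_entities]
    return train, test

train, test = unseen_split([("d1", "a")] * 8 + [("d2", "b")] * 2)
print(len(train), len(test))  # 8 2
```

By construction the two splits share no response entities, which is what drives the low entity overlap ratios reported above.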
382
+
383
+ The dataset statistics for the unseen splits are shown in Table 6 and Table 7:
384
+
385
+ <table><tr><td>Dataset</td><td>Train</td><td>Dev</td><td>Test</td></tr><tr><td>SMD</td><td>1850</td><td>311</td><td>870</td></tr><tr><td>MultiWOZ 2.1</td><td>1472</td><td>252</td><td>373</td></tr></table>
386
+
387
+ Table 6: Statistics of Unseen Dataset for SMD and MultiWOZ 2.1.
388
+
389
+ <table><tr><td>Dataset</td><td>Ent. Overlap Standard</td><td>Ent. Overlap Unseen</td><td>Δ↓</td></tr><tr><td>SMD</td><td>15.3%</td><td>2%</td><td>13.3%</td></tr><tr><td>MultiWOZ 2.1</td><td>78%</td><td>30%</td><td>48%</td></tr></table>
390
+
391
+ Table 7: Entity Overlap Ratio Comparisons Between Unseen Split and Original Split for SMD and MultiWOZ 2.1. Entity Overlap Ratio = |Train Entities ∩ Test Entities| / |Total Entities|.
392
+
393
+ # E Additional Experiments
394
+
395
+ We find that the KB reasoning required by most existing task-oriented dialogue datasets is quite simple, for the most part requiring only one- or two-hop reasoning over the KB to answer the user's request successfully. To further test the multi-hop reasoning capability of our model and the baseline models under complex reasoning scenarios, we develop a large-scale multi-domain synthetic dataset consisting of dialogues that require multi-hop reasoning over KBs. This is similar in spirit to the bAbI dataset, and we hope this dataset will be used alongside other dialogue benchmarks in future studies. We will release this dataset upon publication. Next, we describe how we construct the dataset in detail and present the experimental results on it.
396
+
397
+ # E.1 Dataset Construction
398
+
399
+ As shown in Figure 4, each sample in the dataset consists of several rounds of dialogue. We generate the questions and answers by randomly sampling template utterances with placeholders (e.g., @movie, @director, @location) that indicate the types of KB entities to be instantiated to form the complete utterances. To simulate natural conversations between user and system under different scenarios (i.e., restaurant booking, hotel reservation, movie booking), we designed 18 different types of question-answer templates. For example, movie to director denotes that the user requests the director given the movie name, and location to theatre denotes that the user requests theatre information given the location. For each conversation, we sequentially sample several different types of question-answer templates to form the skeleton of the whole dialogue. To ensure a coherent dialogue flow, we specify the allowed next types for each question-answer template. For instance, if the currently sampled question-answer type is location to restaurant, the next type is randomly sampled from restaurant to price, restaurant to cuisine, etc. This keeps the generated dialogue turns semantically coherent and simulates a real conversation as closely as possible.
400
+
401
+ For each conversation, we generate 3 or 4 rounds of dialogue, following existing work such as SMD and MultiWOZ 2.1. At each round, we randomly select a question-answer template and instantiate its placeholders with KB entities of the corresponding types. If multiple entities in the KB satisfy
402
+
403
+ the types indicated by the placeholders, we randomly sample one to instantiate the template. In this way, we increase the diversity of the generated data. For instance, if the question template is Is there any restaurant located in @district?, the possible KB entities for the placeholder @district might include multiple location entities such as vermont, blackburn, etc. We randomly sample one of them to replace the placeholder and generate the final sentence. If we sample vermont, the instantiated sentence is Is there any restaurant located in vermont?.
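The placeholder-instantiation step can be sketched as follows; the template string and KB slice are toy examples of ours:

```python
# Sketch of template instantiation in E.1: each @type placeholder is replaced
# by one randomly sampled KB entity of the matching type.
import random

def instantiate(template, kb_entities_by_type, rng=None):
    """Replace each @type placeholder with a sampled KB entity of that type."""
    rng = rng or random
    out = template
    for type_name, entities in kb_entities_by_type.items():
        placeholder = "@" + type_name
        if placeholder in out:
            out = out.replace(placeholder, rng.choice(entities))
    return out

kb = {"district": ["vermont", "blackburn"]}  # toy KB slice
print(instantiate("Is there any restaurant located in @district?", kb))
```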
404
+
405
+ To make the generated utterances as natural as human conversation, we further randomly replace KB entities in a sentence with pronouns such as it or they, provided that the entities have been mentioned in previous dialogue turns. This requires the model to perform co-reference resolution to arrive at the correct answer, which increases the difficulty. For example, Who is the director of the movie mission impossible? is rephrased as Who is the director of it? if the movie name mission impossible has been mentioned in the dialogue history.
406
+
407
+ For the movie domain, we employ the KB used in the well-known WikiMovies dataset. For the hotel and restaurant domains, we use the KBs provided in the MultiWOZ 2.1 dataset. We further extend each KB by adding information such as location hierarchies to make it suitable for testing multi-hop reasoning capability. For example, if the KB contains a hotel entity love_lodge, we add different levels of location information to support multi-hop KB reasoning, such as love_lodge next_to lincoln_park, lincoln_park is_within waverleyDistrict, and waverleyDistrict located_in grattan_county. Thus, if the user asks about a hotel located in grattan_county, the model has to conduct multi-hop reasoning over the KB to infer that love_lodge is located in grattan_county. This makes our synthetic dataset suitable for multi-hop reasoning tasks over KBs under task-oriented dialogue scenarios. The location information used in the synthetic dataset is obtained from Wikipedia and the official websites of famous cities around the world.
408
+
409
+ External Knowledge Base (KB):
+
+ Cityroom Price_range Moderate
+ Cityroom Stars 3
+ Cityroom Next_to Palm_Lawn
+ Palm_Lawn Located_in Chadstone
+ Chadstone Located_in Leichhardt
+ Gonville_Hotel Stars 4
+ Gonville_Hotel Price_range Expensive
+ Gonville_Hotel Located_in Moorabbin
+
+ User: Can you recommend me a hotel located in Leichhardt?
+ System: Cityroom is a nice one there.
+ User: How much does it cost there?
+ System: It has a moderate price range.
+ User: What is the rating of it?
+ System: It is a 3 star hotel.
430
+
431
+ Figure 4: An example dialogue from the hotel domain of the synthetic dataset. The first turn of the dialogue requires 3-hop reasoning over the KB to get the correct entity Cityroom given the location information Leichhardt. The second and third turns of the dialogue require single-hop reasoning over the KB to get the correct entity.
432
+
433
+ # E.2 Dataset Statistics
434
+
435
+ The detailed statistics of the synthetic dataset are shown in Table 8 and Table 9:
436
+
437
+ <table><tr><td>Domain</td><td>Train</td><td>Dev</td><td>Test</td></tr><tr><td>Movie</td><td>7219</td><td>1645</td><td>1667</td></tr><tr><td>Hotel</td><td>7115</td><td>1631</td><td>1639</td></tr><tr><td>Restaurant</td><td>7131</td><td>1672</td><td>1684</td></tr><tr><td>Total</td><td>21465</td><td>4948</td><td>4990</td></tr></table>
438
+
439
+ Table 8: Statistics of synthetic dataset. Numbers in the table are the number of instances for each category.
440
+
441
+ # E.3 Experimental Results
442
+
443
+ Evaluation Metrics. We use the same metrics as on the SMD and MultiWOZ 2.1 datasets, namely BLEU and Entity $F1$ , for performance evaluation.
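As a reference point, a per-response Entity F1 can be sketched as below. This is a simplification of the standard metric, which micro-averages over the whole test set; the entity sets shown are illustrative:

```python
# Sketch of Entity F1 for one response: F1 over the sets of KB entities found
# in the gold and generated responses.

def entity_f1(gold_entities, pred_entities):
    gold, pred = set(gold_entities), set(pred_entities)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# gold response mentions two entities, prediction recovers one of them
print(entity_f1({"cityroom", "moderate"}, {"cityroom"}))  # -> 0.666...
```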
444
+
445
+ Results. The results on the three domains are shown in Tables 10, 11 and 12. For each domain, we evaluate model performance on different subsets of the test data, i.e., 1-hop, 2-hop and $>=3$ -hop. Specifically, we group the test data into three subsets according to the KB reasoning length needed to obtain the ground-truth entity. For instance, 2-hop denotes that the KB entity mentioned in the response requires 2-hop reasoning over the KB. As the tables show, our proposed model consistently outperforms all the baselines by a large margin across all domains and KB reasoning lengths. We also observe that every model's performance decreases monotonically as the KB reasoning path length increases, suggesting that longer-range KB reasoning is challenging for all the tested models. However, our framework degrades less than the baselines, and the performance gap between our framework and the baselines becomes larger as the KB reasoning length increases, which demonstrates that our framework generalizes better, especially over longer KB reasoning paths.
448
+
449
+ # E.4 Example Outputs
450
+
451
+ We show the hypotheses and proof trees generated by our framework in Table 13 and Figure 5. As we can see, our model successfully obtains the correct entities from the KB. Moreover, our framework formulates sensible hypotheses and generates reasonable proof procedures, which helps us gain insight into the inner workings of the model.
452
+
453
+ # F Error Analysis
454
+
455
+ We conduct error analysis on both SMD and MultiWOZ 2.1 to provide insights into our framework for future improvements. We randomly sample 100 dialogues from each test set and analyze both the generated responses and the internal procedures. The errors fall into four major categories: 1) structure errors, 2) query state errors, 3) candidate errors, 4) belief score errors. For example, given the dialogue history "Where is a nearby parking_garage?", the generated response is "5671_barringer_street is 1-mile away" while the ground truth is "The nearest one is palo_alto_garage, it's just 1-mile away". The hypotheses generated by the HG module at timestep 0 are "[5671_barringer_street, distance, 1-mile], [whole_foods, distance, 1-mile], [home, distance, 1-mile], [valero, distance, 1-mile], [pizza_chicago, distance, 1-mile]". The predicted structure type is an H-Hypothesis with relation "distance" and tail entity "1-mile". However, a reasonable hypothesis would be an H-Hypothesis such as "[palo_alto_garage, poi_type, parking_garage]" with "poi_type" (place of interest) and "parking_garage" as the relation and tail. This illustrates the challenge of the task, and we plan to further improve all the modules of our framework, including SP, QSP, CP and HRE, in future work.
456
+
457
+ # F.1 Structure Prediction Errors
458
+
459
+ Given the dialogue history "Find me a nearby shopping mall," the generated response is "midtown_shopping_center is 3_miles away." and
460
+
461
+ <table><tr><td></td><td colspan="3">Movie</td><td colspan="3">Hotel</td><td colspan="3">Restaurant</td></tr><tr><td></td><td>Train</td><td>Dev</td><td>Test</td><td>Train</td><td>Dev</td><td>Test</td><td>Train</td><td>Dev</td><td>Test</td></tr><tr><td>Hop=1</td><td>27238</td><td>5985</td><td>5998</td><td>14321</td><td>3472</td><td>3482</td><td>14386</td><td>2107</td><td>2117</td></tr><tr><td>Hop=2</td><td>6401</td><td>1472</td><td>1507</td><td>3351</td><td>594</td><td>614</td><td>4527</td><td>564</td><td>609</td></tr><tr><td>Hop&gt;=3</td><td>5359</td><td>1508</td><td>1530</td><td>3328</td><td>514</td><td>524</td><td>4545</td><td>562</td><td>593</td></tr><tr><td>Total</td><td>38998</td><td>8965</td><td>9035</td><td>21000</td><td>4580</td><td>4620</td><td>23458</td><td>3233</td><td>3319</td></tr></table>
462
+
463
+ Table 9: Detailed statistics of the synthetic dataset with respect to the number of hops needed by KB reasoning. Numbers in the table are the number of dialogue turns for each category. $\mathrm{Hop} = k$ denotes that the KB reasoning path length for the entity in the dialogue response is $k$ .
464
+
465
+ <table><tr><td colspan="9">Movie Domain</td></tr><tr><td></td><td colspan="2">1-Hop</td><td colspan="2">2-Hop</td><td colspan="2">Hop&gt;=3</td><td colspan="2">All</td></tr><tr><td></td><td>BLEU</td><td>F1</td><td>BLEU</td><td>F1</td><td>BLEU</td><td>F1</td><td>BLEU</td><td>F1</td></tr><tr><td>Mem2Seq</td><td>25.6</td><td>68.9</td><td>21.8</td><td>60.8</td><td>19.5</td><td>49.2</td><td>23.9</td><td>62.0</td></tr><tr><td>GLMP</td><td>30.1</td><td>77.2</td><td>28.7</td><td>72.9</td><td>27.1</td><td>61.5</td><td>28.3</td><td>73.2</td></tr><tr><td>GraphDialog</td><td>29.2</td><td>76.6</td><td>25.6</td><td>69.1</td><td>24.7</td><td>60.6</td><td>27.2</td><td>71.6</td></tr><tr><td>DF-Net</td><td>30.6</td><td>77.4</td><td>29.5</td><td>71.6</td><td>28.9</td><td>62.1</td><td>30.3</td><td>73.5</td></tr><tr><td>Ours (Full model)</td><td>33.2</td><td>82.6</td><td>31.3</td><td>80.4</td><td>30.7</td><td>74.9</td><td>32.7</td><td>80.6</td></tr></table>
466
+
467
+ Table 10: Experimental results on the movie domain of the synthetic dataset.
468
+
469
+ <table><tr><td colspan="9">Hotel Domain</td></tr><tr><td></td><td colspan="2">1-Hop</td><td colspan="2">2-Hop</td><td colspan="2">Hop&gt;=3</td><td colspan="2">All</td></tr><tr><td></td><td>BLEU</td><td>F1</td><td>BLEU</td><td>F1</td><td>BLEU</td><td>F1</td><td>BLEU</td><td>F1</td></tr><tr><td>Mem2Seq</td><td>14.4</td><td>79.8</td><td>13.1</td><td>71.2</td><td>11.4</td><td>68.6</td><td>13.2</td><td>75.4</td></tr><tr><td>GLMP</td><td>21.3</td><td>85.5</td><td>19.8</td><td>79.4</td><td>18.9</td><td>76.2</td><td>21.0</td><td>82.9</td></tr><tr><td>GraphDialog</td><td>20.6</td><td>83.8</td><td>19.1</td><td>78.8</td><td>18.8</td><td>75.9</td><td>19.3</td><td>81.0</td></tr><tr><td>DF-Net</td><td>22.1</td><td>86.7</td><td>19.9</td><td>80.2</td><td>19.5</td><td>76.8</td><td>21.5</td><td>83.2</td></tr><tr><td>Ours (Full model)</td><td>23.3</td><td>92.4</td><td>21.3</td><td>89.6</td><td>20.7</td><td>87.8</td><td>22.1</td><td>91.6</td></tr></table>
470
+
471
+ Table 11: Experimental results on the hotel domain of the synthetic dataset.
472
+
473
+ <table><tr><td colspan="9">Restaurant Domain</td></tr><tr><td></td><td colspan="2">1-Hop</td><td colspan="2">2-Hop</td><td colspan="2">Hop&gt;=3</td><td colspan="2">All</td></tr><tr><td></td><td>BLEU</td><td>F1</td><td>BLEU</td><td>F1</td><td>BLEU</td><td>F1</td><td>BLEU</td><td>F1</td></tr><tr><td>Mem2Seq</td><td>19.0</td><td>79.8</td><td>17.3</td><td>69.4</td><td>12.4</td><td>66.3</td><td>17.0</td><td>73.7</td></tr><tr><td>GLMP</td><td>22.0</td><td>90.4</td><td>19.1</td><td>83.7</td><td>18.4</td><td>80.4</td><td>20.9</td><td>86.1</td></tr><tr><td>GraphDialog</td><td>23.2</td><td>89.9</td><td>21.2</td><td>82.1</td><td>20.6</td><td>79.8</td><td>21.4</td><td>85.0</td></tr><tr><td>DF-Net</td><td>24.5</td><td>91.5</td><td>23.0</td><td>84.2</td><td>21.1</td><td>81.0</td><td>23.3</td><td>87.3</td></tr><tr><td>Ours (Full model)</td><td>26.8</td><td>96.7</td><td>24.4</td><td>93.1</td><td>22.7</td><td>92.2</td><td>25.1</td><td>94.2</td></tr></table>
474
+
475
+ Table 12: Experimental results on the restaurant domain of the synthetic dataset.
476
+
477
+ <table><tr><td>Structure Type</td><td>State Tokens</td><td>Top-5 Candidate Tokens</td><td>Generated Hypotheses</td><td>Belief Scores</td></tr><tr><td rowspan="5">H-Hypothesis</td><td rowspan="3">“located_in (k=0)”</td><td>“Shipping_News”</td><td>“[Shipping_News, located_in, Springfield]”</td><td>0.15</td></tr><tr><td>“Vaudeville”</td><td>“[Vaudeville, located_in, Springfield]”</td><td>0.17</td></tr><tr><td>“Brown_Eyes”</td><td>“[Brown_Eyes, located_in, Springfield]”</td><td>0.13</td></tr><tr><td rowspan="2">“Springfield (k=1)”</td><td>“Oakland”</td><td>“[Oakland, located_in, Springfield]”</td><td>1.00</td></tr><tr><td>“Coburg”</td><td>“[Coburg, located_in, Springfield]”</td><td>0.04</td></tr></table>
478
+
479
+ Table 13: Example outputs on the movie domain of synthetic dataset. Dialogue history: "I'm looking for a theatre in the Springfield district". Generated response: "Sure I have found a Oakland for you". Detailed model working process when generating Oakland in the response is shown above.
480
+
481
+ Figure 5: Proof tree generated by HRE module for the highest score hypothesis [Oakland, Located_in, Springfield] in Table 13. The red parts are the predicted bridge entities and the blue parts are the predicted relations for the sub-hypotheses via neural networks. In this case, the model performs 2-hop reasoning (the two leaf node triples) to find the correct KB entity for generating the response.
482
+ ![](images/e6851e20691e599fa835448cd2c672fc7471ff1cc6c2a334ea2782ecf870dbc3.jpg)
483
+ Reasoning Chain:
+ Oakland Located_in Springfield $\leftarrow$ Oakland Next_to Centenary_Park $\cap$ Centenary_Park Located_in Springfield
+
+ KB (excerpt):
+ Vaudeville Has_movie David_and_Lisa
+ David_and_Lisa Directed_by Frank_Perry
+ David_and_Lisa Starred_actors Kei_Dun
+ Vaudeville Next_to Anderson_Reserve
+ Oakland Has_movie Dreams
+ Dreams Directed_by akira_kurosawa
+ Dreams Starred_actors akira_terao
+ Oakland Next_to Centenary_Park
+ Centenary_Park Located_in Springfield
+ Springfield Located_in Razerback
499
+
500
+ the ground-truth response is "The closest shopping_center is midtown_shopping_center 3_miles away." The hypotheses generated by the HG module at decoding timestep 0 are "[home, address, 5671_barringer_street], [home, address, 481_amaranta_ave], [home, address, midtown_shopping_center], [home, address, 5_miles], [home, address, 1_miles]". As we can see, the predicted structure type is a T-Hypothesis with head entity "home" and relation "address". However, a reasonable structure type would be an H-Hypothesis such as "[midtown_shopping_center, poi_type, shopping_center]", which the model has mispredicted as a T-Hypothesis.
501
+
502
+ # F.2 Query States Prediction Errors
503
+
504
+ Given the dialogue history "What time is dinner tonight?", the generated response is "Dinner is at 11pm." and the ground-truth response is "Your dinner is 5pm tonight". The hypotheses generated by the HG module at decoding timestep 3 are "[dinner, traffic_info, 11pm], [dinner, traffic_info, 7pm], [dinner, traffic_info, 3pm], [dinner, traffic_info, Monday], [dinner, traffic_info, Tuesday]". As we can see, the SP and QSP modules successfully predicted a sensible T-Hypothesis with "dinner" as the head entity. However, the predicted relation "traffic_info" is not reasonable given the context, where a relation such as "time" would be expected.
505
+
506
+ # F.3 Candidates Prediction Errors
507
+
508
+ Given the dialogue history "What time is my doctor Appointment?", the generated response is "Your doctor Appointment is on Thursday." and the ground-truth response is "Your doctor Appointment is scheduled for Friday at 11am." The hypotheses generated by the HG module at decoding timestep 4 are "[doctor Appointment, date, Thursday], [doctor Appointment, date, Monday], [doctor Appointment, date, 7pm], [doctor Appointment, date, Tuesday], [doctor Appointment, date, 2pm]". As we can see, our framework correctly predicted a T-Hypothesis and the state tokens. However, the predicted top-5 candidate tokens do not include the ground-truth date, Friday, which affects the final response generation.
509
+
510
+ # F.4 Belief Score Prediction Errors
511
+
512
+ Given the dialogue history "Where is home?", the generated response is "Home is at 819_alma_st." and the ground-truth is "Home is where is your heart, at 56_cadwell_street." The generated hypotheses for decoding timestep 3 are "[home, address, 819_alma_st], [home, address, 56_cadwell_street], [home, address, 6_miles], [home, address, 611_ames_ave], [home, address, 3_miles]". As we can see, our framework has predicted a sensible T-Hypothesis with "home" as head entity and "address" as relation. Also, the CP module has predicted top-5 candidate tail
513
+
514
+ entities that include the ground truth 56_cadwell_street. But the HRE module ranked "[home, address, 819_alma_st]" highest with a score of 0.78, while the ground-truth "[home, address, 56_cadwell_street]" is ranked only second with a score of 0.41, which indicates that there is still room for improvement in the HRE module. We are interested in continually improving all the modules of our framework in future work.
515
+
516
+ # G Discussions
517
+
518
+ # G.1 Why not use search-based techniques for generating reasoning chains?
519
+
520
+ This is an alternative to our learning-based method. However, a search-based approach cannot be jointly learnt end-to-end with the other modules in our framework, and thus may face error propagation and credit assignment issues, as in traditional pipeline-based task-oriented dialogue approaches. In this work, we want to explore the possibility of learning the logical reasoning chain end-to-end directly from the dialogues. Also, the time complexity of a search-based approach is approximately $O(n^{k})$ , where $n$ is the average node degree in the external knowledge base and $k$ is the number of reasoning hops. In other words, the time complexity grows polynomially in $n$ (when $k > 1$ ) and exponentially in the reasoning depth $k$ . In contrast, in our framework an increase in the number of KB nodes only enlarges the input embedding layer (Equation 15), and efficiency can be further improved by leveraging modern accelerator hardware such as GPUs (which search-based approaches cannot exploit).
521
+
522
+ # G.2 Why sample with Gumbel-Softmax instead of directly applying argmax in Hypothesis Generator and Hierarchical Reasoning Engine modules?
523
+
524
+ The argmax function is non-differentiable, which conflicts with our aim of end-to-end differentiability for the whole system. We tried using REINFORCE (where the reward is obtained by comparing predicted entities with ground-truth entities) to mitigate this issue. However, we find that argmax+REINFORCE performs worse than Gumbel-Softmax. By inspecting the tokens sampled from the Gumbel-Softmax, we find that it generates reasonable tokens (Figure 3 in the main paper, state tokens, etc.), since we set the temperature parameter of the Gumbel-Softmax to 0.1, which is a close approximation to argmax.
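A minimal pure-Python sketch of the sampling step (no autograd here; in the actual model the softmax output stays differentiable, which is the point of the trick):

```python
# Sketch of Gumbel-Softmax sampling: add Gumbel noise to the logits, then take
# a softmax at low temperature tau. At tau = 0.1 the result is typically close
# to one-hot, approximating argmax while remaining differentiable in a real
# deep-learning framework.
import math, random

def _gumbel(rng):
    u = rng.random() or 1e-12  # guard against log(0)
    return -math.log(-math.log(u))

def gumbel_softmax(logits, tau=0.1, rng=None):
    rng = rng or random
    noisy = [l + _gumbel(rng) for l in logits]
    m = max(n / tau for n in noisy)               # subtract max for stability
    exps = [math.exp(n / tau - m) for n in noisy]
    z = sum(exps)
    return [e / z for e in exps]

probs = gumbel_softmax([2.0, 0.5, 0.1])
print(probs)  # typically close to one-hot at tau = 0.1
```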
527
+
528
+ # G.3 Why not expand the KB using KB completion methods and then use semantic parsing to query KB?
529
+
530
+ In this work, we are interested in developing an end-to-end trainable framework with explainable KB reasoning. Semantic parsing is one possible alternative. However, adapting it to our own dataset would require further annotations for fine-tuning, which is costly and time-consuming and may not be feasible for large-scale datasets. It may also induce error propagation, since the different modules (KB completion, semantic parsing, dialogue encoding, response generation, etc.) are not jointly learnt.
531
+
532
+ # G.4 KB scale.
533
+
534
+ The average number of KB nodes per sample in the training data is 63.5 for SMD and 57.6 for MultiWOZ. The average number of relations is 5.5 for SMD and 9.4 for MultiWOZ.
535
+
536
+ # H Human Evaluation Details
537
+
538
+ The Fluency of the predicted responses is evaluated according to the following standards:
539
+
540
+ - 5: The predicted responses contain no grammar errors or repetitions at all.
541
+ - 4: Only one grammar error or repetition appeared in the generated responses.
542
+ - 3: One grammar error and one repetition, or two grammar errors, or two repetitions are observed in the responses.
543
+ - 2: One grammar error and two repetitions, or two grammar errors and one repetition, or three grammar errors, or three repetitions appeared in the generated responses.
544
+ - 1: More than three inappropriate language usages with regard to grammar errors or repetitions are observed in the responses.
545
+
546
+ The Correctness is measured as follows:
547
+
548
+ - 5: Provide the correct entities.
549
+ - 4: Minor mistakes in the provided entities.
550
+ - 3: Noticeable errors in the provided entities but acceptable.
551
+
552
+ - 2: Poor in the provided entities.
553
+ - 1: Wrong in the provided entities.
554
+
555
+ The Humanlikeness is measured as:
556
+
557
+ - 5: $100\%$ sure that the sentences are generated by a human, not by a system.
558
+ - 4: $80\%$ chance that the sentences are generated by a human.
559
+ - 3: Cannot tell whether the sentences are generated by a human or a system; $50\%$ for human and $50\%$ for system.
560
+ - 2: $20\%$ chance that the sentences are generated by a human.
561
+ - 1: Totally impossible that the sentences are generated by a human.
aninterpretableneurosymbolicreasoningframeworkfortaskorienteddialoguegeneration/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f26c80fba8ff2982cd4f0675a013a55fb1f631b9d007e7921921b399b893d536
3
+ size 749376
aninterpretableneurosymbolicreasoningframeworkfortaskorienteddialoguegeneration/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:10663167ca16a361fbea73ffcb17a9ec7f05b9b26f22bc17738ee0237e8d80b4
3
+ size 645117
aninvestigationoftheineffectivenessofcounterfactuallyaugmenteddata/2fb073d9-9113-4e11-981d-1abaacaccaaf_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0e68e69a01f1a1f5aab52f87e4688fd3c8907150258be1f5f252935f79778946
3
+ size 96403
aninvestigationoftheineffectivenessofcounterfactuallyaugmenteddata/2fb073d9-9113-4e11-981d-1abaacaccaaf_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f240d8884a584fbcd6c0e26de5eeae0530aa4f23b150a02d90a6331d4e502c3d
3
+ size 113988