Add Batch bd5a6d10-e133-49af-8e99-20ca1620d962
Browse files. This view is limited to 50 files because it contains too many changes. See the raw diff.
- acopyaugmentedgenerativemodelforopendomainquestionanswering/984865e2-7d9a-4782-9928-7feb6ef0dcf6_content_list.json +3 -0
- acopyaugmentedgenerativemodelforopendomainquestionanswering/984865e2-7d9a-4782-9928-7feb6ef0dcf6_model.json +3 -0
- acopyaugmentedgenerativemodelforopendomainquestionanswering/984865e2-7d9a-4782-9928-7feb6ef0dcf6_origin.pdf +3 -0
- acopyaugmentedgenerativemodelforopendomainquestionanswering/full.md +207 -0
- acopyaugmentedgenerativemodelforopendomainquestionanswering/images.zip +3 -0
- acopyaugmentedgenerativemodelforopendomainquestionanswering/layout.json +3 -0
- aflexiblemultitaskmodelforbertserving/88aeb2d7-8861-4873-9e05-2226dccca80a_content_list.json +3 -0
- aflexiblemultitaskmodelforbertserving/88aeb2d7-8861-4873-9e05-2226dccca80a_model.json +3 -0
- aflexiblemultitaskmodelforbertserving/88aeb2d7-8861-4873-9e05-2226dccca80a_origin.pdf +3 -0
- aflexiblemultitaskmodelforbertserving/full.md +306 -0
- aflexiblemultitaskmodelforbertserving/images.zip +3 -0
- aflexiblemultitaskmodelforbertserving/layout.json +3 -0
- arecipeforarbitrarytextstyletransferwithlargelanguagemodels/b1029a19-5f12-4d77-b5f4-81acfde01e3e_content_list.json +3 -0
- arecipeforarbitrarytextstyletransferwithlargelanguagemodels/b1029a19-5f12-4d77-b5f4-81acfde01e3e_model.json +3 -0
- arecipeforarbitrarytextstyletransferwithlargelanguagemodels/b1029a19-5f12-4d77-b5f4-81acfde01e3e_origin.pdf +3 -0
- arecipeforarbitrarytextstyletransferwithlargelanguagemodels/full.md +293 -0
- arecipeforarbitrarytextstyletransferwithlargelanguagemodels/images.zip +3 -0
- arecipeforarbitrarytextstyletransferwithlargelanguagemodels/layout.json +3 -0
- bert2berttowardsreusablepretrainedlanguagemodels/636dad0a-afca-4eaa-b6f9-f18130783a34_content_list.json +3 -0
- bert2berttowardsreusablepretrainedlanguagemodels/636dad0a-afca-4eaa-b6f9-f18130783a34_model.json +3 -0
- bert2berttowardsreusablepretrainedlanguagemodels/636dad0a-afca-4eaa-b6f9-f18130783a34_origin.pdf +3 -0
- bert2berttowardsreusablepretrainedlanguagemodels/full.md +516 -0
- bert2berttowardsreusablepretrainedlanguagemodels/images.zip +3 -0
- bert2berttowardsreusablepretrainedlanguagemodels/layout.json +3 -0
- caspicausalawaresafepolicyimprovementfortaskorienteddialogue/51a7f5a6-79cf-43e6-8346-26446f52a346_content_list.json +3 -0
- caspicausalawaresafepolicyimprovementfortaskorienteddialogue/51a7f5a6-79cf-43e6-8346-26446f52a346_model.json +3 -0
- caspicausalawaresafepolicyimprovementfortaskorienteddialogue/51a7f5a6-79cf-43e6-8346-26446f52a346_origin.pdf +3 -0
- caspicausalawaresafepolicyimprovementfortaskorienteddialogue/full.md +358 -0
- caspicausalawaresafepolicyimprovementfortaskorienteddialogue/images.zip +3 -0
- caspicausalawaresafepolicyimprovementfortaskorienteddialogue/layout.json +3 -0
- ecareanewdatasetforexploringexplainablecausalreasoning/8e503919-6886-4a5c-a4ba-5a0878d1b23e_content_list.json +3 -0
- ecareanewdatasetforexploringexplainablecausalreasoning/8e503919-6886-4a5c-a4ba-5a0878d1b23e_model.json +3 -0
- ecareanewdatasetforexploringexplainablecausalreasoning/8e503919-6886-4a5c-a4ba-5a0878d1b23e_origin.pdf +3 -0
- ecareanewdatasetforexploringexplainablecausalreasoning/full.md +506 -0
- ecareanewdatasetforexploringexplainablecausalreasoning/images.zip +3 -0
- ecareanewdatasetforexploringexplainablecausalreasoning/layout.json +3 -0
- epicemployingproverbsincontextasabenchmarkforabstractlanguageunderstanding/08052a61-a4c0-407e-9392-c211280a8fe8_content_list.json +3 -0
- epicemployingproverbsincontextasabenchmarkforabstractlanguageunderstanding/08052a61-a4c0-407e-9392-c211280a8fe8_model.json +3 -0
- epicemployingproverbsincontextasabenchmarkforabstractlanguageunderstanding/08052a61-a4c0-407e-9392-c211280a8fe8_origin.pdf +3 -0
- epicemployingproverbsincontextasabenchmarkforabstractlanguageunderstanding/full.md +512 -0
- epicemployingproverbsincontextasabenchmarkforabstractlanguageunderstanding/images.zip +3 -0
- epicemployingproverbsincontextasabenchmarkforabstractlanguageunderstanding/layout.json +3 -0
- formerinfinitememorytransformer/62019f22-fdab-4ef6-bef4-1d98fbdc9319_content_list.json +3 -0
- formerinfinitememorytransformer/62019f22-fdab-4ef6-bef4-1d98fbdc9319_model.json +3 -0
- formerinfinitememorytransformer/62019f22-fdab-4ef6-bef4-1d98fbdc9319_origin.pdf +3 -0
- formerinfinitememorytransformer/full.md +610 -0
- formerinfinitememorytransformer/images.zip +3 -0
- formerinfinitememorytransformer/layout.json +3 -0
- latentglatglancingatlatentvariablesforparalleltextgeneration/168a7c40-1c1e-4732-a855-b20fd9de45fa_content_list.json +3 -0
- latentglatglancingatlatentvariablesforparalleltextgeneration/168a7c40-1c1e-4732-a855-b20fd9de45fa_model.json +3 -0
acopyaugmentedgenerativemodelforopendomainquestionanswering/984865e2-7d9a-4782-9928-7feb6ef0dcf6_content_list.json
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a68660816ed7600af4d8cea79e7a5039e788cf3e9873ee19b8a1cfc75dba7455
+size 48598

acopyaugmentedgenerativemodelforopendomainquestionanswering/984865e2-7d9a-4782-9928-7feb6ef0dcf6_model.json
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:09ef2fb92f0f29e9e54bfe4cd088cd463ccf8e1c2ca9375034c6dc66077069ee
+size 57724

acopyaugmentedgenerativemodelforopendomainquestionanswering/984865e2-7d9a-4782-9928-7feb6ef0dcf6_origin.pdf
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d16d0cbf35d6f8d5223e3076e1b69ddd3b4b1562a03a5402d550642836a7f0df
+size 351413
acopyaugmentedgenerativemodelforopendomainquestionanswering/full.md
ADDED

@@ -0,0 +1,207 @@
# A Copy-Augmented Generative Model for Open-Domain Question Answering

Shuang Liu$^{1*}$, Dong Wang$^{2}$, Xiaoguang Li$^{1}$, Minghui Huang$^{2}$, Meizhen Ding$^{2}$

$^{1}$ Huawei Noah's Ark Lab

$^{2}$ AI Application Research Center (AARC)

Huawei Technologies Co., Ltd

{liushuang30, wangdong153}@huawei.com

# Abstract

Open-domain question answering is a challenging task with a wide variety of practical applications. Existing modern approaches mostly follow a standard two-stage paradigm: retriever then reader. In this article, we focus on improving the effectiveness of the reader module and propose a novel copy-augmented generative approach that integrates the merits of both extractive and generative readers. In particular, our model is built upon the powerful generative model FiD (Izacard and Grave, 2021b). We enhance the original generative reader by incorporating a pointer network to encourage the model to copy words directly from the retrieved passages. We conduct experiments on two benchmark datasets, NaturalQuestions and TriviaQA, and the empirical results demonstrate the performance gains of our proposed approach.

# 1 Introduction

Open-domain question answering (ODQA) focuses on providing highly precise answers to natural language questions from a large collection of unstructured text data (Voorhees, 1999). Since the pioneering work of DrQA (Chen et al., 2017), modern approaches to ODQA commonly adopt a simple two-stage retriever-reader pipeline, which first retrieves a relatively small number of supporting passages (Karpukhin et al., 2020; Min et al., 2021b; Yamada et al., 2021), after which the reader identifies the answer.
The reader models can be broadly categorized into two classes: extractive (Chen et al., 2017; Asai et al., 2020; Karpukhin et al., 2020) and generative (Izacard and Grave, 2021b; Lewis et al., 2020b; Wu et al., 2021). Recently, benefiting from the powerful ability of large-scale pre-trained encoder-decoder language models (Lewis et al., 2020a; Raffel et al., 2019) and the capability of aggregating information from multiple passages (Izacard and Grave, 2021b), generative approaches have in general achieved better performance than extractive methods.

Question: where was a hologram for the king filmed?

Passages (Truncated): title: A Hologram for the King (film) context: Production was set to begin in first quarter of 2014. Principal photography commenced on March 6, 2014 in Morocco. Filming also took place in Hurghada in Egypt, as well as in Berlin and Düsseldorf in Germany. Shooting wrapped in June 2014.

Answer: Hurghada in Egypt, Berlin and Düsseldorf in Germany

FiD: Dubai in Germany

FiD-PGN: Hurghada in Egypt

Question: who has the most trophies in la liga?

Passages (Truncated): title: La Liga context: A total of 62 teams have competed in La Liga since its inception. Nine teams have been crowned champions, with Real Madrid winning the title a record 33 times and Barcelona 25 times.

Answer: Real Madrid

FiD: 33

FiD-PGN: Real Madrid

Table 1: Comparison of answers generated by FiD and our approach. The orange text represents supportive sentences.

Compared to extractive models, generative models generate text more freely, which often makes them suffer from producing hallucinated text that is factually inaccurate or inconsistent with the input. This problem has been studied in tasks like text summarization (Maynez et al., 2020) and machine translation (Zhou et al., 2021). We found that the phenomenon also occurs in ODQA. As shown in Table 1, the answer "Dubai in Germany" produced by the generative model FiD (Izacard and Grave, 2021b) is factually incorrect, and the answer "33" in the second example is not coherent with the question. Yet in both cases, the ground-truth answers are present in the retrieved passages. Thus, we hypothesize that if we constrain the produced words to the input text, the generated answer will be more faithful.
Inspired by the work of See et al. (2017), we enhance the generative model with a pointer network (Vinyals et al., 2015), which enables the model to copy text directly from the retrieved passages while retaining the ability to generate new words when the true answers are not explicitly present in the input. To be more specific, our model, the fusion-in-decoder pointer-generator network (FiD-PGN), is built upon the state-of-the-art model FiD. We reuse the encoder-decoder attention scores as the copy distribution to reduce the computational cost. Compared to FiD, we achieve comparable or even better accuracy on the NaturalQuestions (NQ) (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017) benchmarks, with fewer passages used in training. Our experimental results show the effectiveness and efficiency of our model.

Figure 1: The overall architecture of our proposed model. We add a linear layer to calculate the generation probability, which decides the weights of generating words from the vocabulary or copying from the source passages.
# 2 Related Work

# 2.1 Open-Domain Question Answering

In this era of data explosion, ODQA offers a way to rapidly and accurately fulfill users' information needs, and hence has recently received significant attention from both industry and academia (Min et al., 2021a). Following the work of DrQA (Chen et al., 2017), most recent works build a two-stage retriever-reader system to tackle the problem. The retriever aims at retrieving passages supportive of the given question from a large document corpus. The reader aims to find the answer to the question in the passages retrieved in the first stage. The early work of Chen et al. (2017) adopts a BiLSTM architecture with various lexical and semantic features from the question and passages as inputs. Later, with the emergence of large-scale pre-trained language models, readers based on pre-trained models such as BERT (Devlin et al., 2019) and T5 (Raffel et al., 2019) have become a common approach (Yang et al., 2019; Izacard and Grave, 2021b; Karpukhin et al., 2020).

# 2.2 Generative Readers

Compared to extractive models, which extract spans from the retrieved passages, generative models are able to produce new words beyond the retrieved passages, and thus provide a more flexible modeling framework. Min et al. (2020) and Lewis et al. (2020b) concatenate the given question with the top retrieved passages and feed the concatenation to the BART model (Lewis et al., 2020a). Izacard and Grave (2021b) separately encode the question with each top retrieved passage, then take the concatenation of the encoder outputs as input to the decoder. Their method provides a way to better aggregate evidence from multiple passages and improves performance significantly. FiD-KD (Izacard and Grave, 2021a) is an extension of the FiD model that increases the accuracy of passage retrieval by iteratively training the dense retriever with the guidance of the FiD reader.

# 2.3 Pointer-Generator Network

The Pointer-Generator Network (See et al., 2017) is an extension of the sequence-to-sequence model that integrates a copy mechanism (Vinyals et al., 2015) into the generator. At each decoding step, the model is able to either directly copy a word from the input or generate one, each with a certain probability, and thus can be viewed as a combination of extractive and generative approaches. It has been frequently used in natural language tasks like summarization (Gu et al., 2016; See et al., 2017; Gehrmann et al., 2018) and neural machine translation (Luong et al., 2015; Gu et al., 2018), but its application to ODQA has been less explored.
# 3 Method

Our model follows the standard two-stage retriever-reader framework, with a focus on enhancing the reader module built upon the FiD reader. We adopt the retrieval results of FiD-KD, where a dense retriever similar to DPR (Karpukhin et al., 2020) is used. A pointer network is integrated into the FiD reader to facilitate copying words from the retrieved passages. The overall reader architecture is depicted in Figure 1.

Reader Encoder. The reader encoder of our model is identical to that of the FiD reader. We first concatenate the given question $q$ with each retrieved passage $p_i$ as $x_i = [q; p_i]$. Next, we pass each $x_i$ individually to the reader encoder, i.e., the encoder of a T5 or BART model, and obtain the hidden representations $h_i = (h_{i,1}, h_{i,2}, \ldots, h_{i,n})$ of the question-passage pair, where $h_{i,j} \in \mathbb{R}^d$ and $d$ is the model dimension. Finally, we concatenate all the hidden representations of the top-$k$ passages $\{h_1, \ldots, h_k\}$ as input to the decoder.
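To make this concrete, the following is a minimal sketch of the encoding step using the Hugging Face `transformers` T5 encoder; the input template, function name and truncation length are illustrative assumptions, not the authors' released code.

```python
import torch
from transformers import T5EncoderModel, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
encoder = T5EncoderModel.from_pretrained("t5-base")

@torch.no_grad()
def encode_question_passages(question, passages):
    # x_i = [q; p_i]: pair the question with each of the top-k passages.
    inputs = [f"question: {question} context: {p}" for p in passages]
    batch = tokenizer(inputs, padding="max_length", truncation=True,
                      max_length=256, return_tensors="pt")
    # Each question-passage pair is encoded independently: (k, n, d).
    h = encoder(**batch).last_hidden_state
    # Concatenate the k hidden sequences along the length axis so that the
    # decoder can jointly attend over all passages: (1, k * n, d).
    return h.reshape(1, -1, h.size(-1))
```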
Reader Decoder. Our approach differs from the FiD reader mainly in the decoder module, by adding a pointer network. Specifically, at each decoding step $t$, let $e_t \in \mathbb{R}^d$ be the embedding vector of the input token at this step, and denote by $s_t^L \in \mathbb{R}^d$ the output representation of the last layer $L$ of the transformer decoder. The probability of generation is then given as follows,

$$
p_{\mathrm{gen}} = \sigma\left(w_e^T e_t + w_s^T s_t^L + b\right) \tag{1}
$$

where $w_e \in \mathbb{R}^d$, $w_s \in \mathbb{R}^d$ and $b \in \mathbb{R}$ are all learnable parameters and $\sigma(\cdot)$ denotes the sigmoid function. Accordingly, the probability of copying is $1 - p_{\mathrm{gen}}$.
Next, let $\mathcal{V}$ denote the vocabulary of the generative model and $|\mathcal{V}|$ its size. At step $t$, the probability distribution over the vocabulary for word generation is computed as

$$
P_{\mathrm{vocab}} = \operatorname{softmax}\left(W_E s_t^L\right) \tag{2}
$$

where $W_E \in \mathbb{R}^{|\mathcal{V}| \times d}$ is a learnable weight matrix.
Benefiting from the encoder-decoder attention layer in the transformer architecture, we directly use the cross-attention scores $\alpha_t^L$ of the last decoder layer $L$ over the source tokens as the copy distribution for the target token $y_t$. The probability of selecting $y_t$ from the source sequence is then calculated as

$$
P_{\mathrm{ctx}}(y_t) = \sum_{j:\, x_{1:k,j} = y_t} \alpha_{t,j}^{L} \tag{3}
$$

where $x_{1:k}$ denotes the concatenation of the top-$k$ retrieved passages, $x_{1:k,j}$ is the $j$-th token of $x_{1:k}$, and $\alpha_{t,j}^{L}$ is the $j$-th element of $\alpha_{t}^{L}$. If $y_t$ is not present in the top-$k$ retrieved passages, $P_{\mathrm{ctx}}(y_t)$ is zero.
Finally, putting all of the above together, the target token $y_t$ can either be generated from the vocabulary, with probability $p_{\mathrm{gen}}$, or copied from the source passages. The final prediction probability is defined as

$$
P(y_t) = p_{\mathrm{gen}} P_{\mathrm{vocab}}(y_t) + \left(1 - p_{\mathrm{gen}}\right) P_{\mathrm{ctx}}(y_t). \tag{4}
$$
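As a minimal PyTorch sketch of Eqs. (1)-(4) for one decoding step: the module and tensor names are our assumptions, and the cross-attention scores are assumed to be already averaged over heads.

```python
import torch
import torch.nn as nn

class PointerGeneratorHead(nn.Module):
    """Copy-augmented output layer of the decoder, following Eqs. (1)-(4)."""

    def __init__(self, d_model, vocab_size):
        super().__init__()
        # Eq. (1): w_e, w_s and b, i.e. 2 * d_model + 1 = 1537 extra
        # parameters for d_model = 768, matching the count reported in §4.3.
        self.gate = nn.Linear(2 * d_model, 1)
        # Eq. (2): output projection W_E.
        self.W_E = nn.Linear(d_model, vocab_size, bias=False)

    def forward(self, e_t, s_t, attn, src_ids):
        # e_t, s_t: (batch, d_model); attn: (batch, src_len) last-layer
        # cross-attention; src_ids: (batch, src_len) token ids of the
        # concatenated top-k passages (dtype torch.long).
        p_gen = torch.sigmoid(self.gate(torch.cat([e_t, s_t], dim=-1)))  # Eq. (1)
        p_vocab = torch.softmax(self.W_E(s_t), dim=-1)                   # Eq. (2)
        # Eq. (3): attention mass is summed over every source position that
        # holds the same token id; absent tokens keep probability zero.
        p_ctx = torch.zeros_like(p_vocab).scatter_add_(1, src_ids, attn)
        return p_gen * p_vocab + (1.0 - p_gen) * p_ctx                   # Eq. (4)
```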
# 4 Experiments

# 4.1 Datasets

We evaluate the performance of our approach on two standard ODQA datasets, NQ and TriviaQA. The NQ dataset comprises real queries that users issued to the Google search engine, along with answers. The TriviaQA dataset consists of question-answer pairs collected from trivia and quiz-league websites. Summary statistics are listed in Table 2. It can be seen that TriviaQA has a longer average question length than NQ, indicating that questions in TriviaQA are relatively more complex. We use the data released in the FiD repository<sup>1</sup>, containing question-answer pairs and the top-100 passages retrieved by FiD-KD.

<table><tr><td>Statistics</td><td>NQ</td><td>TriviaQA</td></tr><tr><td>Train</td><td>79,168</td><td>78,785</td></tr><tr><td>Validation</td><td>8,757</td><td>8,837</td></tr><tr><td>Test</td><td>3,610</td><td>11,313</td></tr><tr><td>Avg. Qlen</td><td>9.3</td><td>16.9</td></tr><tr><td>Avg. Alen</td><td>2.4</td><td>2.2</td></tr></table>

Table 2: Summary statistics of the two datasets. Avg. Qlen and Avg. Alen denote the average number of tokens per question and answer, respectively.
<table><tr><td>Model</td><td>Reader Size</td><td>Top-k</td><td>NQ</td><td>TriviaQA</td></tr><tr><td>DPR (BERT-base) (Karpukhin et al., 2020)</td><td>110M</td><td>24</td><td>41.5</td><td>57.9</td></tr><tr><td>RAG-Seq (BART-large) (Lewis et al., 2020b)</td><td>406M</td><td>50</td><td>44.5</td><td>56.8</td></tr><tr><td>FiD (T5-base) (Izacard and Grave, 2021b)</td><td>220M</td><td>100</td><td>48.2</td><td>65.0</td></tr><tr><td>FiD-KD (T5-base) (Izacard and Grave, 2021a)</td><td>220M</td><td>100</td><td>49.6</td><td>68.8</td></tr><tr><td>FiD-KD (Our implementation)</td><td>220M</td><td>25</td><td>48.5</td><td>67.5</td></tr><tr><td>FiD-PGN</td><td>220M</td><td>25</td><td>51.4</td><td>68.4</td></tr></table>

Table 3: Exact match (EM) scores on NQ and TriviaQA test sets. Top-$k$ indicates the number of retrieved passages used during reader training. The performance of the SOTA model is shown in bold and that of the second-best model is underlined.
# 4.2 Implementation Details

We follow the experimental settings of FiD. Our model is initialized with a pre-trained T5-base model and trained using the AdamW (Loshchilov and Hutter, 2017) optimizer with a learning rate of $10^{-4}$ and linear scheduling with 15k total steps and 1k warm-up steps. Moreover, we train our model using the top-25 retrieved passages for each question and set the batch size to 64 due to computational limitations. All experiments are run on eight Nvidia V100 32GB GPUs.
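A sketch of this training setup, assuming a Hugging Face-style reader (`model`) whose forward pass returns a loss, and a `train_loader` yielding keyword batches; both names are placeholders.

```python
import torch
from transformers import get_linear_schedule_with_warmup

def train(model, train_loader, total_steps=15_000, warmup_steps=1_000):
    # AdamW with lr 1e-4 and a linear warmup/decay schedule, as described.
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    scheduler = get_linear_schedule_with_warmup(
        optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps)
    for step, batch in enumerate(train_loader, start=1):  # batch size 64
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
        if step == total_steps:
            break
```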
# 4.3 Results

Table 3 shows the experimental results of our model and other approaches on the test sets, evaluated with the standard exact match (EM) score (Rajpurkar et al., 2016). For a fair comparison, we retrained the FiD reader on the top-25 retrieved passages to match our experimental settings.
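For reference, EM counts a prediction as correct only if, after light normalization, it matches one of the gold answers exactly; the normalization below follows the common SQuAD evaluation script and is our assumption here.

```python
import re
import string

def normalize(text):
    """Lowercase; strip punctuation, articles and extra whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold_answers):
    return float(any(normalize(prediction) == normalize(g) for g in gold_answers))

# e.g. exact_match("Real Madrid", ["Real Madrid", "Real Madrid CF"]) -> 1.0
```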
As shown in Table 3, our model outperforms FiD-KD on both the NQ and TriviaQA datasets under the same setting. This demonstrates that the pointer network helps to generate answers more accurately. It is worth noting that, compared with FiD-KD trained on the top-100 retrieved passages, our model achieves comparable or even better results with only a quarter of the input data and without introducing many parameters (only 1,537 extra parameters are added), indicating the efficiency of our model.
# 5 Analysis

Generation Probability. To further investigate the effect of the pointer module, we examine the generation probability during training. As shown in Figure 2, the generation probability $p_{\mathrm{gen}}$ on TriviaQA is consistently higher than on NQ. Note that a higher generation probability means that more tokens are produced from the vocabulary instead of being copied from the input. We conjecture that this phenomenon is caused by the different question types. As stated in Rogers et al. (2021), Trivia questions are more like probing questions. Compared to the information-seeking questions in NQ, probing questions tend to require more complex reasoning, which makes it difficult to directly extract relevant tokens from the input texts. Moreover, this observation is also consistent with the result that the improvement of our model over the FiD reader is smaller on TriviaQA than on NQ (0.9 vs. 2.9 EM, respectively).

Figure 2: Generation probability $p_{\mathrm{gen}}$ over training steps on NQ and TriviaQA.
Test-Train Overlap Evaluation. The study of test-train overlap (Lewis et al., 2021) provides valuable insight into a model's question answering behavior. We evaluate our model on the same test data splits as in Lewis et al. (2021). Table 4 reports the results with respect to three kinds of test-train overlap. Our approach improves most over the FiD reader in the "No Overlap" category, the most challenging setting, indicating better generalization in question answering.
Training with Varying Numbers of Passages. Figure 3 shows the performance of our model and the FiD reader with respect to the number of retrieved passages used in training. We train both models with the top-$k$ passages ($k \in \{1, 5, 10, 25\}$) and evaluate on the development sets with the same number of passages. We can observe that the matching scores of both models increase with the number of passages used in training, consistent with the finding in Izacard and Grave (2021b) that the sequence-to-sequence model is capable of gathering information across multiple retrieved passages. Moreover, the two models show comparable performance when the number of training passages is small, but when more passages are included, our model outperforms FiD, especially on the NQ dataset.

<table><tr><td>Dataset</td><td>Overlap Type</td><td>FiD</td><td>FiD-PGN</td><td>Δ</td></tr><tr><td rowspan="4">NQ</td><td>Total</td><td>48.5</td><td>51.4</td><td>2.9</td></tr><tr><td>Question Overlap</td><td>73.5</td><td>75.9</td><td>2.4</td></tr><tr><td>Answer Overlap Only</td><td>41.0</td><td>45.1</td><td>4.1</td></tr><tr><td>No Overlap</td><td>28.8</td><td>38.4</td><td>9.6</td></tr><tr><td rowspan="4">TriviaQA</td><td>Total</td><td>67.5</td><td>68.4</td><td>0.9</td></tr><tr><td>Question Overlap</td><td>88.4</td><td>89.6</td><td>1.2</td></tr><tr><td>Answer Overlap Only</td><td>66.9</td><td>68.4</td><td>1.5</td></tr><tr><td>No Overlap</td><td>41.5</td><td>43.4</td><td>1.9</td></tr></table>

Table 4: Test-train overlap evaluation on NQ and TriviaQA test sets. Exact match (EM) scores are reported.

Figure 3: The variation of performance with the number of retrieved passages used in reader training. Exact match (EM) scores are measured on the development sets of NQ and TriviaQA.
# 6 Conclusion

In this article, we propose FiD-PGN, a novel approach for the reader module of ODQA under the standard retriever-reader framework. Specifically, we integrate a pointer network into the FiD reader to allow the model to directly select words from the retrieved passages. Experimental results show that our model outperforms FiD-KD on two benchmark datasets under the same setting, demonstrating the advantages of our method.

# References
Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. 2020. Learning to retrieve reasoning paths over wikipedia graph for question answering. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.

Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870-1879, Vancouver, Canada. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098-4109, Brussels, Belgium. Association for Computational Linguistics.

Jiatao Gu, James Bradbury, Caiming Xiong, Victor O. K. Li, and Richard Socher. 2018. Non-autoregressive neural machine translation. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.

Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631-1640, Berlin, Germany. Association for Computational Linguistics.

Gautier Izacard and Edouard Grave. 2021a. Distilling knowledge from reader to retriever for question answering. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.

Gautier Izacard and Edouard Grave. 2021b. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874-880, Online. Association for Computational Linguistics.

Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611, Vancouver, Canada. Association for Computational Linguistics.

Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452-466.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.

Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2021. Question and answer test-train overlap in open-domain question answering datasets. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1000-1008, Online. Association for Computational Linguistics.

Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020b. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.

Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in Adam. ArXiv preprint, abs/1711.05101.

Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 11-19, Beijing, China. Association for Computational Linguistics.

Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906-1919, Online. Association for Computational Linguistics.

Sewon Min, Jordan L. Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, Patrick S. H. Lewis, Yuxiang Wu, Heinrich Küttler, Linqing Liu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel, Sohee Yang, Minjoon Seo, Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Edouard Grave, Ikuya Yamada, Sonse Shimaoka, Masatoshi Suzuki, Shumpei Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Sejr Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Wen-tau Yih. 2021a. NeurIPS 2020 EfficientQA competition: Systems, analyses and lessons learned. ArXiv preprint, abs/2101.00133.

Sewon Min, Kenton Lee, Ming-Wei Chang, Kristina Toutanova, and Hannaneh Hajishirzi. 2021b. Joint passage ranking for diverse multi-answer retrieval. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6997-7008, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. AmbigQA: Answering ambiguous open-domain questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5783-5797, Online. Association for Computational Linguistics.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. ArXiv preprint, abs/1910.10683.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.

Anna Rogers, Matt Gardner, and Isabelle Augenstein. 2021. QA dataset explosion: A taxonomy of NLP resources for question answering and reading comprehension. ArXiv preprint, abs/2107.12708.

Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada. Association for Computational Linguistics.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2692-2700.

Ellen M. Voorhees. 1999. The TREC-8 question answering track report. In Proceedings of The Eighth Text Retrieval Conference, TREC 1999, Gaithersburg, Maryland, USA, November 17-19, 1999, volume 500-246 of NIST Special Publication. National Institute of Standards and Technology (NIST).

Yuxiang Wu, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2021. Training adaptive computation for open-domain question answering with computational constraints. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 447-453, Online. Association for Computational Linguistics.

Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi. 2021. Efficient passage retrieval with hashing for open-domain question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 979-986, Online. Association for Computational Linguistics.

Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with BERTserini. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 72-77, Minneapolis, Minnesota. Association for Computational Linguistics.

Chunting Zhou, Graham Neubig, Jiatao Gu, Mona Diab, Francisco Guzmán, Luke Zettlemoyer, and Marjan Ghazvininejad. 2021. Detecting hallucinated content in conditional neural sequence generation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1393-1404, Online. Association for Computational Linguistics.
acopyaugmentedgenerativemodelforopendomainquestionanswering/images.zip
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4645833141bce30b55ffd956989c79824a9fc442312cfb36bd75865a0a9d0f0f
+size 196623

acopyaugmentedgenerativemodelforopendomainquestionanswering/layout.json
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bbc23d8658cced3d50e5cbeb04dcfff8285b3c3fbbc1dd30074486daf9b54c0c
+size 225145
aflexiblemultitaskmodelforbertserving/88aeb2d7-8861-4873-9e05-2226dccca80a_content_list.json
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cffe872d03b0177f42de7fe4b3644b1fdb77d27a37b70f0be39af4b0d4d4b328
+size 78281

aflexiblemultitaskmodelforbertserving/88aeb2d7-8861-4873-9e05-2226dccca80a_model.json
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:48062fa3c0124c3ee7b337e43b56030d790b534c6e0c3af2defd90fbf8e8260d
+size 96598

aflexiblemultitaskmodelforbertserving/88aeb2d7-8861-4873-9e05-2226dccca80a_origin.pdf
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7253ca912927f4c54187c88b22f5d5b7c7f2fe003017b33b9976ec7deb67fe3b
+size 2448520
aflexiblemultitaskmodelforbertserving/full.md
ADDED

@@ -0,0 +1,306 @@
# A Flexible Multi-Task Model for BERT Serving

Tianwen Wei*, Jianwei Qi*, Shenghuan He

Xiaomi, XiaoAI Team

{weitianwen,qijianwei,heshenghuan}@xiaomi.com

# Abstract

We present an efficient BERT-based multi-task (MT) framework that is particularly suitable for iterative and incremental development of the tasks. The proposed framework is based on the idea of partial fine-tuning, i.e. only fine-tuning some top layers of BERT while keeping the other layers frozen. For each task, we independently train a single-task (ST) model using partial fine-tuning. Then we compress the task-specific layers in each ST model using knowledge distillation. Those compressed ST models are finally merged into one MT model so that the frozen layers of the former are shared across the tasks. We exemplify our approach on eight GLUE tasks, demonstrating that it is able to achieve $99.6\%$ of the performance of the full fine-tuning method while reducing up to two thirds of its overhead.

# 1 Introduction

In this work we explore strategies for BERT (Devlin et al., 2019) serving for multiple tasks under the following two constraints: 1) Memory and computational resources are limited. On edge devices such as mobile phones, this is usually a hard constraint. On local GPU stations and Cloud-based servers, the constraint is not as hard, but it is still desirable to reduce the computation overhead to cut the serving cost. 2) The tasks are expected to be modular and are subject to frequent updates. When one task is updated, the system should be able to quickly adapt to the modification such that the other tasks are not affected. This is a typical situation for applications (e.g. an AI assistant) under iterative and incremental development.
In principle, there are two strategies for BERT serving: single-task serving and multi-task serving. In single-task serving, one independent single-task model is trained and deployed for each task. Typically, those models are obtained by fine-tuning a copy of the pre-trained BERT and are completely different from each other. Single-task serving has the advantage of being flexible and modular, as there is no dependency between the task models. The downside is its inefficiency in terms of both memory usage and computation, as neither parameters nor computation are shared or reused across the tasks. In multi-task serving, one single multi-task model is trained and deployed for all tasks. This model is typically trained with multi-task learning (MTL) (Caruana, 1997; Ruder, 2017). Compared to its single-task counterpart, multi-task serving is much more computationally efficient and incurs much less memory usage thanks to its sharing mechanism. However, it has the disadvantage that any modification made to one task usually affects the other tasks.

The main contribution of this work is a framework for BERT serving that simultaneously achieves the flexibility of single-task serving and the efficiency of multi-task serving. Our method is based on the idea of partial fine-tuning, i.e. only fine-tuning some topmost layers of BERT depending on the task and keeping the remaining bottom layers frozen. The fine-tuned layers are task-specific and can be updated on a per-task basis. The frozen layers at the bottom, which play the role of a feature extractor, can be shared across the tasks.
# 2 Related Work

The standard practice for using BERT is fine-tuning, i.e. the entirety of the model parameters is adjusted on the training corpus of the downstream task so that the model is adapted to that specific task (Devlin et al., 2019). There is also an alternative feature-based approach, used by ELMo (Peters et al., 2018). In the latter approach, the pre-trained model is regarded as a feature extractor with frozen parameters. During the learning of a downstream task, one feeds a fixed or learnable combination of the model's intermediate representations as input to the task-specific module, and only the parameters of the latter are updated. It has been shown that the fine-tuning approach is generally superior to the feature-based approach for BERT in terms of task performance (Devlin et al., 2019; Peters et al., 2019).

<table><tr><td>L</td><td>QNLI</td><td>RTE</td><td>QQP</td><td>MNLI</td><td>SST-2</td><td>MRPC</td><td>CoLA</td><td>STS-B</td></tr><tr><td>1</td><td>85.9</td><td>60.3</td><td>86.1</td><td>77.1</td><td>91.6</td><td>77.2</td><td>38.7</td><td>84.8</td></tr><tr><td>2</td><td>88.3</td><td>63.5</td><td>88.3</td><td>80.8</td><td>91.9</td><td>80.6</td><td>40.0</td><td>86.1</td></tr><tr><td>3</td><td>89.9</td><td>65.3</td><td>89.0</td><td>82.5</td><td>91.2</td><td>84.6</td><td>45.3</td><td>87.3</td></tr><tr><td>4</td><td>90.7</td><td>69.0</td><td>89.7</td><td>83.3</td><td>92.0</td><td>84.3</td><td>48.6</td><td>88.2</td></tr><tr><td>5</td><td>91.0</td><td>71.5</td><td>90.1</td><td>84.0</td><td>92.2</td><td>89.7</td><td>51.3</td><td>88.3</td></tr><tr><td>6</td><td>91.2</td><td>71.1</td><td>90.3</td><td>84.2</td><td>93.1</td><td>86.8</td><td>53.1</td><td>86.4</td></tr><tr><td>7</td><td>91.3</td><td>70.0</td><td>90.5</td><td>83.9</td><td>93.0</td><td>87.5</td><td>51.5</td><td>88.6</td></tr><tr><td>8</td><td>91.5</td><td>70.8</td><td>90.6</td><td>84.5</td><td>92.8</td><td>88.0</td><td>55.2</td><td>88.9</td></tr><tr><td>9</td><td>91.6</td><td>70.8</td><td>90.7</td><td>84.0</td><td>92.5</td><td>87.7</td><td>54.7</td><td>88.8</td></tr><tr><td>10</td><td>91.7</td><td>69.7</td><td>91.1</td><td>84.5</td><td>93.0</td><td>87.3</td><td>55.0</td><td>88.7</td></tr><tr><td>11</td><td>91.7</td><td>70.4</td><td>91.1</td><td>84.5</td><td>93.1</td><td>88.2</td><td>54.7</td><td>89.1</td></tr><tr><td>12</td><td>91.6</td><td>69.7</td><td>91.1</td><td>84.6</td><td>93.4</td><td>88.2</td><td>54.7</td><td>88.8</td></tr></table>

Table 1: Dev results on GLUE datasets obtained with partial fine-tuning. The parameter $L$ indicates the number of fine-tuned transformer layers. For each dataset and for each value of $L$, we always run the experiment 5 times with different initializations and report the maximum dev result obtained. The best result in each column is highlighted in bold face. Shaded numbers indicate that they attain $99\%$ of the best result of the column. It can be seen that although fine-tuning more layers generally leads to better performance, the benefit of doing so suffers from diminishing returns. Perhaps surprisingly, for RTE, MRPC and CoLA it is partial fine-tuning with roughly half of the layers frozen that gives the best results.

A natural middle ground between these two approaches is partial fine-tuning, i.e. only fine-tuning some topmost layers of BERT while keeping the remaining bottom layers frozen. This approach has been studied in (Houlsby et al., 2019; Merchant et al., 2020), where the authors observed that fine-tuning only the top layers can almost achieve the performance of full fine-tuning on several GLUE tasks. Partial fine-tuning essentially regards the bottom layers of BERT as a feature extractor. Freezing the weights of the bottom layers is a sensible idea, as previous studies show that the mid-layer representations produced by BERT are the most transferable, whereas the top-layer representations are more task-oriented (Wang et al., 2019a; Tenney et al., 2019b,a; Liu et al., 2019a; Merchant et al., 2020). Notably, Merchant et al. (2020) showed that fine-tuning primarily affects the weights of the top layers, while the weights of the bottom layers do not change much. Liu et al. (2019a) showed that it is possible to achieve state-of-the-art results on a number of probing tasks with linear models trained on frozen mid-layer representations of BERT.
# 3 Method

In what follows, we denote by $\mathcal{T}$ the set of all target tasks. We always use the 12-layer uncased version of BERT as the pre-trained language model<sup>1</sup>. The proposed framework features a pipeline (Fig. 1) that consists of three steps: 1) single-task partial fine-tuning; 2) single-task knowledge distillation; 3) model merging. We give details of these steps below.

# 3.1 Single Task Partial Fine-Tuning

In the first step, we partially fine-tune an independent copy of BERT for each task. The exact number of layers $L$ to fine-tune is a hyper-parameter and may vary across the tasks. We propose to experiment for each task with different values of $L$ within the range $N_{\mathrm{min}} \leqslant L \leqslant N_{\mathrm{max}}$, and select the one that gives the best validation performance. The purpose of imposing the search range $[N_{\mathrm{min}}, N_{\mathrm{max}}]$ is to guarantee a minimum degree of parameter sharing. In the subsequent experiments on GLUE tasks (see Section 4.3), we set $N_{\mathrm{min}} = 4$ and $N_{\mathrm{max}} = 10$. A sketch of the freezing step is given below.
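This is a minimal sketch of the freezing scheme, assuming the Hugging Face `transformers` BERT implementation; the helper name is ours, not the paper's.

```python
from transformers import BertForSequenceClassification

def partial_finetune_setup(num_finetuned_layers):
    """Keep the bottom 12 - L encoder layers (and embeddings) frozen; only
    the top L layers, the pooler and the classifier receive gradients."""
    model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
    for param in model.bert.parameters():
        param.requires_grad = False          # freeze embeddings + all layers
    for layer in model.bert.encoder.layer[-num_finetuned_layers:]:
        for param in layer.parameters():
            param.requires_grad = True       # unfreeze the task-specific top
    for param in model.bert.pooler.parameters():
        param.requires_grad = True
    return model

# Search the range [N_min, N_max] = [4, 10] and keep the best dev score:
# candidates = {L: partial_finetune_setup(L) for L in range(4, 11)}
```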
This step produces a collection of single-task models, as depicted in Fig. 1(a). We shall refer to them as single-task teacher models, as they are to be knowledge-distilled to further reduce the memory and computation overhead.

# 3.2 Single Task Knowledge Distillation
Since there is no interaction between the tasks, knowledge distillation (KD) can be carried out separately for each task. In principle, any of the existing KD methods for BERT (Wang et al., 2020; Aguilar et al., 2020; Sun et al., 2019a; Jiao et al., 2020; Xu et al., 2020a) suits our needs. In preliminary experiments we found that, as long as the student model is properly initialized, vanilla knowledge distillation (Hinton et al., 2015) can be as performant as the more sophisticated methods.

Assume that the teacher model for task $\tau \in \mathcal{T}$ contains $L^{(\tau)}$ fine-tuned layers at the top and $12 - L^{(\tau)}$ frozen layers at the bottom. Our goal is to compress the former into a smaller $l^{(\tau)}$-layer module. The proposed initialization scheme is very simple: we initialize the student model with the weights from the corresponding layers of the teacher. More precisely, let $N_s$ denote the number of layers (including both frozen and task-specific layers) in the student, where $N_s < 12$. We propose to initialize the student from the bottommost $N_s$ layers of the teacher. A similar approach was used in (Sanh et al., 2019), where the student is initialized by taking one layer out of two from the teacher. The value of $l^{(\tau)}$, i.e. the number of task-specific layers in the student model for task $\tau$, determines the final memory and computation overhead for that task.

Figure 1: Pipeline of the proposed method. (a) For each task we train separately a task-specific model with partial fine-tuning, i.e. only the weights of some topmost layers (blue and red blocks) of the pre-trained model are updated while the rest are kept frozen (gray blocks). (b) We perform knowledge distillation independently for each task on the task-specific layers of the teacher models. (c) The student models are merged into one MT model so that the frozen layers of the former can be shared.
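The following sketch shows the initialization and the vanilla distillation loss under stated assumptions (Hugging Face-style BERT modules; the temperature value is our choice, not the paper's).

```python
import copy
import torch.nn.functional as F

def init_student_from_teacher(teacher, num_student_layers):
    """Initialize an N_s-layer student from the bottommost N_s layers of the
    teacher; embeddings and the classification head are copied as well."""
    student = copy.deepcopy(teacher)
    student.bert.encoder.layer = copy.deepcopy(
        teacher.bert.encoder.layer[:num_student_layers])
    student.config.num_hidden_layers = num_student_layers
    return student

def vanilla_kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Hinton-style distillation: KL divergence between softened outputs."""
    t = temperature
    return t * t * F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean")
```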
# 3.3 Model Merging

In the final step, we merge the single-task student models into one multi-task model (Fig. 1(c)) so that the parameters and the computations carried out in the frozen layers can be shared. To achieve this, it suffices to load the weights from multiple model checkpoints into one computation graph, as in the sketch below.
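A sketch of the merged serving model: one frozen trunk shared by all tasks plus one branch per task; all module names are illustrative.

```python
import torch.nn as nn

class MergedMultiTaskModel(nn.Module):
    """Shared frozen trunk plus per-task branches, assembled by loading the
    weights of the single-task student checkpoints into one graph."""

    def __init__(self, shared_trunk, task_branches):
        super().__init__()
        self.trunk = shared_trunk                      # frozen bottom layers
        self.trunk.requires_grad_(False)
        self.branches = nn.ModuleDict(task_branches)   # task-specific tops

    def forward(self, task_name, features):
        hidden = self.trunk(features)    # computed once, reused by any task
        return self.branches[task_name](hidden)
```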
# 4 Experiments

In this section, we compare the performance and efficiency of our model with various baselines on eight GLUE tasks (Wang et al., 2019b). More details on these tasks can be found in Appendix A.

# 4.1 Metrics

The performance metric for the GLUE tasks is accuracy, except for CoLA and STS-B: we use Matthews correlation for CoLA and Pearson correlation for STS-B.

To measure parameter and computational efficiency, we count the total number of transformer layers needed to perform inference for all eight tasks. For the models studied in our experiments, the actual memory usage and computational overhead are approximately linear in this number. It is labeled "overhead" in the header of Table 2.

# 4.2 Baselines
| 73 |
+
The baseline models/methods can be divided into 4 categories:
|
| 74 |
+
|
| 75 |
+
Single-task without $KD$ . There is only one method in this category, i.e. the standard practice of single task full fine-tuning that creates a separate model for each task.
|
| 76 |
+
|
| 77 |
+
Single-task with KD. The methods in this category create a separate model for each task, but a certain knowledge distillation method is applied to compress each task model into a 6-layer one. The KD methods include (Hinton et al., 2015; Xu et al., 2020b; Sanh et al., 2019; Turc et al., 2019; Sun et al., 2019b; Jiao et al., 2020; Wang et al., 2020).
|
| 78 |
+
|
| 79 |
+
Multi-task learning. This category includes two versions of MT-DNN (Liu et al., 2019b, 2020), both of which produce one single multi-task model. 1) MT-DNN (full) is jointly trained for all eight tasks.
|
| 80 |
+
|
| 81 |
+
<table><tr><td></td><td>QNLI</td><td>RTE</td><td>QQP</td><td>MNLI</td><td>SST-2</td><td>MRPC</td><td>CoLA</td><td>STS-B</td><td>Avg.</td><td>Layers</td><td>Overhead</td></tr><tr><td>Full fine-tuning</td><td>91.6</td><td>69.7</td><td>91.1</td><td>84.6</td><td>93.4</td><td>88.2</td><td>54.7</td><td>88.8</td><td>82.8</td><td>12 × 8</td><td>96 (100%)</td></tr><tr><td>DistillBERT[b]</td><td>89.2</td><td>59.9</td><td>88.5</td><td>82.2</td><td>91.3</td><td>87.5</td><td>51.3</td><td>86.9</td><td>79.6</td><td>6 × 8</td><td>48 (50.0%)</td></tr><tr><td>Vanilla-KD[c]</td><td>88.0</td><td>64.9</td><td>88.1</td><td>80.1</td><td>90.5</td><td>86.2</td><td>45.1</td><td>84.9</td><td>78.5</td><td>6 × 8</td><td>48 (50.0%)</td></tr><tr><td>PD-BERT[d]</td><td>89.0</td><td>66.7</td><td>89.1</td><td>83.0</td><td>91.1</td><td>87.2</td><td>-</td><td>-</td><td>-</td><td>6 × 8</td><td>48 (50.0%)</td></tr><tr><td>BERT-PKD[e]</td><td>88.4</td><td>66.5</td><td>88.4</td><td>81.3</td><td>91.3</td><td>85.7</td><td>45.5</td><td>86.2</td><td>79.2</td><td>6 × 8</td><td>48 (50.0%)</td></tr><tr><td>BERT-of-Theseus[f]</td><td>89.5</td><td>68.2</td><td>89.6</td><td>82.3</td><td>91.5</td><td>89.0</td><td>51.1</td><td>88.7</td><td>81.2</td><td>6 × 8</td><td>48 (50.0%)</td></tr><tr><td>TinyBERT[g]</td><td>90.5</td><td>72.2</td><td>90.6</td><td>83.5</td><td>91.6</td><td>88.4</td><td>42.8</td><td>-</td><td>-</td><td>6 × 8</td><td>48 (50.0%)</td></tr><tr><td>MiniLM[h]</td><td>88.4</td><td>66.5</td><td>88.4</td><td>81.3</td><td>91.3</td><td>85.7</td><td>45.5</td><td>86.2</td><td>79.2</td><td>6 × 8</td><td>48 (50.0%)</td></tr><tr><td>MT-DNN (full)[j]</td><td>91.1</td><td>80.9</td><td>87.6</td><td>84.4</td><td>93.5</td><td>87.4</td><td>51.3</td><td>86.8</td><td>82.9</td><td>12 × 1</td><td>12 (12.5%)</td></tr><tr><td>MT-DNN (LOO)[k]</td><td>69.7</td><td>60.6</td><td>66.5</td><td>56.7</td><td>79.2</td><td>74.2</td><td>10.2</td><td>72.9</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Ours (KD-1)</td><td>86.4</td><td>66.1</td><td>91.0</td><td>77.5</td><td>90.7</td><td>85.1</td><td>36.4</td><td>88.3</td><td>77.4</td><td>7 + 1 × 8</td><td>15 (15.6%)</td></tr><tr><td>Ours (KD-2)</td><td>88.6</td><td>64.6</td><td>91.3</td><td>81.7</td><td>92.7</td><td>86.3</td><td>44.0</td><td>88.6</td><td>79.7</td><td>7 + 2 × 8</td><td>23 (24.0%)</td></tr><tr><td>Ours (KD-3)</td><td>90.2</td><td>66.8</td><td>91.2</td><td>82.9</td><td>92.7</td><td>88.0</td><td>50.0</td><td>88.9</td><td>81.3</td><td>7 + 3 × 8</td><td>31 (32.3%)</td></tr><tr><td rowspan="2">Ours (w/o KD)</td><td>91.7</td><td>71.5</td><td>91.1</td><td>84.5</td><td>93.1</td><td>89.7</td><td>55.2</td><td>88.9</td><td rowspan="2">83.2</td><td rowspan="2">7 + 60</td><td rowspan="2">67 (69.8%)</td></tr><tr><td>(2,10)</td><td>(7,5)</td><td>(2,10)</td><td>(4,8)</td><td>(6,6)</td><td>(7,5)</td><td>(4,8)</td><td>(4,8)</td></tr><tr><td rowspan="2">Ours (mixed)</td><td>90.2</td><td>71.5</td><td>91.0</td><td>82.9</td><td>92.7</td><td>88.0</td><td>55.2</td><td>88.3</td><td rowspan="2">82.5</td><td rowspan="2">7 + 26</td><td rowspan="2">33 (34.3%)</td></tr><tr><td>(2,3)</td><td>(7,5)</td><td>(2,1)</td><td>(4,3)</td><td>(6,2)</td><td>(7,3)</td><td>(4,8)</td><td>(4,1)</td></tr></table>
Table 2: A comparison of performance and overhead between our approach and various baselines (see §4.2 for more details). The performance is evaluated on the dev set. To obtain the results labeled as “Ours”, we always run the experiment 5 times with different initializations and report the maximum. The best result in each column is highlighted in bold face. Shaded numbers indicate that they attain $99\%$ of the Full fine-tuning baseline. Results of $[b]$ are from (Sanh et al., 2019); $[c] - [f]$ are from (Xu et al., 2020b); $[g] - [h]$ are from (Wang et al., 2020); $[j] - [k]$ are reproduced by us with the toolkit from (Liu et al., 2020). Round bracket $(x, y)$ indicates that the underlying task model before merging consists of $x$ frozen layers and $y$ task-specific layers (fine-tuned or knowledge-distilled). In the “Layers” column, notation $7 + 2 \times 8$ implies that in the final multi-task model there are 7 shared frozen layers and 2 task-specific layers for each of the 8 tasks.
It corresponds to the ideal scenario where all tasks are known in advance. 2) MT-DNN (LOO), where "LOO" stands for "leave-one-out", corresponds to the scenario where one of the eight tasks is not known in advance. The model is jointly pre-trained on the 7 available tasks. Then an output layer for the "unknown" task is trained with the pre-trained weights frozen.
Flexible multi-task. This category contains our models under various efficiency constraints. Ours (w/o KD) means that no knowledge distillation is applied to the task models; the number of fine-tuned layers for each task is selected according to the criterion described in Section 3.1. Ours (KD-n) means that knowledge distillation is applied such that the student model for each task contains exactly $n$ task-specific layers. For Ours (mixed), we determine the number of task-specific layers for each task based on the marginal benefit (in terms of the task performance metric) of adding more layers to that task. More precisely, for each task we keep adding task-specific layers as long as the marginal benefit of doing so is no less than a pre-determined threshold $c$. In Table 2, we report the result for $c = 1.0$; results with other values of $c$ can be found in Appendix D.
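
As a concrete illustration of this selection rule, the loop below sketches how the task-specific layer counts in Ours (mixed) could be chosen. It is a minimal sketch, not the paper's released code; `dev_score(task, n)` is a hypothetical lookup of the dev-set metric obtained when a task module has `n` task-specific layers.

```python
# A minimal sketch of the marginal-benefit rule behind Ours (mixed).
# dev_score(task, n) is a hypothetical callable returning the dev-set
# metric of `task` with n task-specific layers (cf. Figure 2 / Table 2).

def allocate_layers(tasks, dev_score, c=1.0, max_layers=8):
    """Grow each task module while the marginal gain is at least c."""
    allocation = {}
    for task in tasks:
        n = 1  # start from a single task-specific layer
        while n < max_layers and dev_score(task, n + 1) - dev_score(task, n) >= c:
            n += 1
        allocation[task] = n
    return allocation
```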
# 4.3 Results
The results are summarized in Table 2. From the table it can be seen that the proposed method Ours (mixed) outperforms all KD methods while being more efficient. Compared to the single-task full fine-tuning baseline, our method cuts the total overhead by roughly two thirds while achieving $99.6\%$ of its performance.
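
As a quick arithmetic check of these two figures against Table 2 (a restatement of the reported numbers, not a new result):

$$
\frac{82.5}{82.8} \approx 99.6\%, \qquad 1 - \frac{33}{96} \approx 65.6\% \approx \frac{2}{3}.
$$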
We observe that MT-DNN (full) achieves the best average performance with the lowest overhead. However, its performance superiority primarily comes from one big boost on a single task (RTE) rather than consistent improvements on all tasks. In fact, we see that MT-DNN (full) suffers performance degradation on QQP and STS-B due to task interference, a known problem for MTL (Caruana, 1997; Bingel and Søgaard, 2017; Alonso and Plank, 2017; Wu et al., 2020). From our perspective, the biggest disadvantage of MT-DNN is that it assumes full knowledge of all target tasks in advance. From the results of MT-DNN (LOO), we observe that MT-DNN has difficulty in handling new tasks if the model is not allowed to be retrained.
# 5 Discussions
# 5.1 Advantages
One major advantage of the proposed architecture is its flexibility. First, different tasks may be fed with representations from different layers of BERT, which encapsulate different levels of linguistic information (Liu et al., 2019a). This flexibility benefits both task performance and efficiency. For instance, on QQP we achieve an accuracy of 91.0, outperforming all KD baselines with merely one task-specific layer (connected to the 2nd layer of the frozen backbone model). Second, our architecture explicitly allows for allocating uneven resources to different tasks. We have redistributed the resources among the tasks in Ours (mixed), resulting in both greater performance and efficiency. Third, our framework does not compromise the modular design of the system. The model can be straightforwardly updated on a per-task basis.
# 5.2 Limitations
The major limitation of our approach is that for each downstream task it requires approximately 10x more training time for the hyper-parameter search compared to the conventional approach. Although the cost is arguably manageable in practice, i.e. typically 2 or 3 days per task on a single Nvidia Tesla V100 GPU, the additional computational load should not be overlooked.
Another limitation is that although the overall computation overhead is reduced, the serving latency of our model deteriorates as the number of tasks grows, and may eventually be worse than that of the single task baseline. This is due to the fact that during inference one cannot get the output of any one task until the model has finished computing for all tasks. In this regard, our approach may not be appropriate for those applications that demand exceptionally low serving latency, e.g. below $10\mathrm{ms}$ . Nevertheless, we report in Appendix E an industrial use case where our multi-task model serves 21 tasks while achieving a latency as low as $32\mathrm{ms}$ (99th percentile).
# 5.3 Comparison with Adaptor-Based Approaches
The adaptor-based approaches (Houlsby et al., 2019; Pfeiffer et al., 2020) belong to another category of fine-tuning approaches that are also parameter-efficient. Basically, the adaptor-based approaches introduce one trainable task-specific "adaptor" module for each downstream task. This module is generally lightweight, containing only a few parameters, and is inserted between (or within) layers of the backbone model (e.g. BERT). However, even though the parameters of the backbone model can be shared across the tasks, the computation for inference cannot, because the internal data flow in each task model is modified by the task-specific adaptor. Therefore, the adaptor-based approaches are not computationally efficient, and one needs to perform a separate full forward pass for each task. Since both parameter and computation efficiency are what we aim to achieve, the adaptor-based approaches are not comparable to our method.
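
To make the contrast concrete, the sketch below (ours, not the paper's implementation; the layer size and the two example tasks are illustrative) shows how a shared frozen backbone lets all tasks reuse a single forward pass, which an in-backbone adaptor would break:

```python
import torch
import torch.nn as nn

hidden = 16
backbone = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(7)])  # shared, frozen
heads = {"qqp": nn.Linear(hidden, 2), "rte": nn.Linear(hidden, 2)}       # task-specific
tap = {"qqp": 2, "rte": 7}  # which frozen layer each task reads from (cf. Table 2)

x = torch.randn(1, hidden)
with torch.no_grad():
    acts, h = [], x
    for layer in backbone:   # ONE pass through the backbone serves every task,
        h = layer(h)         # because no task alters the internal data flow
        acts.append(h)

logits = {t: heads[t](acts[tap[t] - 1]) for t in heads}
# An adaptor inserted inside `backbone` would make `acts` task-dependent,
# forcing a separate full forward pass per task.
```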
# 6 Conclusion
We have presented our framework that is designed to provide efficient and flexible BERT-based multitask serving. We have demonstrated on eight GLUE datasets that the proposed method achieves both strong performance and efficiency. We release our code<sup>2</sup> and hope that it can facilitate BERT serving in cost-sensitive applications.
# References
Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin Yao, Xing Fan, and Chenlei Guo. 2020. Knowledge distillation from internal representations. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7350-7357. AAAI Press.

Héctor Martínez Alonso and Barbara Plank. 2017. When is multitask learning effective? Semantic sequence prediction under varying data conditions. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 44-53, Valencia, Spain. Association for Computational Linguistics.

Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth PASCAL recognizing textual entailment challenge. In TAC.

Joachim Bingel and Anders Søgaard. 2017. Identifying beneficial task relations for multi-task learning in deep neural networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 164-169, Valencia, Spain. Association for Computational Linguistics.

Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41-75.

Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.

Z. Chen, H. Zhang, X. Zhang, and L. Zhao. 2018. Quora question pairs.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).

Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop.

Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2790-2799. PMLR.

Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. TinyBERT: Distilling BERT for natural language understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4163-4174, Online. Association for Computational Linguistics.

Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The Winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning.

Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073-1094, Minneapolis, Minnesota. Association for Computational Linguistics.

Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019b. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496, Florence, Italy. Association for Computational Linguistics.

Xiaodong Liu, Yu Wang, Jianshu Ji, Hao Cheng, Xueyun Zhu, Emmanuel Awa, Pengcheng He, Weizhu Chen, Hoifung Poon, Guihong Cao, and Jianfeng Gao. 2020. The Microsoft toolkit of multi-task deep neural networks for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 118-126, Online. Association for Computational Linguistics.

Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, and Ian Tenney. 2020. What happens to BERT embeddings during fine-tuning? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 33-44, Online. Association for Computational Linguistics.

Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.

Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? Adapting pretrained representations to diverse tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 7-14, Florence, Italy. Association for Computational Linguistics.

Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020. AdapterHub: A framework for adapting transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 46-54, Online. Association for Computational Linguistics.

Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text.

Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. CoRR, abs/1706.05098.

Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. CoRR, abs/1910.01108.

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.

Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019a. Patient knowledge distillation for BERT model compression. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4323-4332, Hong Kong, China. Association for Computational Linguistics.

Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019b. Patient knowledge distillation for BERT model compression. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4323-4332, Hong Kong, China. Association for Computational Linguistics.

Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601, Florence, Italy. Association for Computational Linguistics.

Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? Probing for sentence structure in contextualized word representations. In International Conference on Learning Representations.

Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: On the importance of pre-training compact models.

Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, and Samuel R. Bowman. 2019a. Can you tell me how to get past sesame street? Sentence-level pretraining beyond language modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4465-4476, Florence, Italy. Association for Computational Linguistics.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019.

Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.

Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2018. Corpus of linguistic acceptability.

Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.

Sen Wu, Hongyang R. Zhang, and Christopher Ré. 2020. Understanding and improving information transfer in multi-task learning. arXiv preprint arXiv:2005.00944.

Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, and Ming Zhou. 2020a. BERT-of-Theseus: Compressing BERT by progressive module replacing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7859-7869, Online. Association for Computational Linguistics.

Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, and Ming Zhou. 2020b. BERT-of-Theseus: Compressing BERT by progressive module replacing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7859-7869, Online. Association for Computational Linguistics.

# Supplemental Materials
# A Details on the GLUE tasks
The GLUE benchmark includes the following datasets:
- QNLI (Question Natural Language Inference). The dataset is derived from (Rajpurkar et al., 2016). This is a binary classification task where an example is of the form (question, sentence) and the goal is to predict whether the sentence contains the correct answer to the question (Wang et al., 2018).
- RTE (Recognizing Textual Entailment). A binary entailment task similar to MNLI but with much less training data (Bentivogli et al., 2009).
- QQP (Quora Question Pairs). A binary classification task where the goal is to determine if two questions asked on Quora are semantically equivalent (Chen et al., 2018).
- MNLI (Multi-Genre Natural Language Inference). Given a pair of sentences, the goal is to predict whether the second sentence is an entailment, contradiction or neutral with respect to the first one (Williams et al., 2018).
- SST-2 (The Stanford Sentiment Treebank). A binary single-sentence classification task where the goal is to predict the sentiment (positive or negative) of the movie reviews (Socher et al., 2013).
- MRPC (Microsoft Research Paraphrase Corpus). A binary classification task where the goal is to predict whether two sentences are semantically equivalent (Dolan and Brockett, 2005).
- CoLA (The Corpus of Linguistic Acceptability). A binary single-sentence classification task where the goal is to predict whether an English sentence is linguistically "acceptable" or not (Warstadt et al., 2018).
- STS-B (The Semantic Textual Similarity Benchmark). A regression task where the goal is to predict how similar two sentences are in semantic meaning, as measured by a score from 1 to 5 (Cer et al., 2017).
- WNLI (Winograd NLI). The dataset is derived from (Levesque et al., 2012). We exclude this task in our experiments following the practice of (Devlin et al., 2019; Radford et al., 2018).
<table><tr><td>Dataset</td><td>Train</td><td>Dev</td></tr><tr><td>QNLI</td><td>108k</td><td>5.4k</td></tr><tr><td>RTE</td><td>2.5k</td><td>0.3k</td></tr><tr><td>QQP</td><td>363k</td><td>40k</td></tr><tr><td>MNLI</td><td>392k</td><td>9.8k</td></tr><tr><td>SST-2</td><td>67k</td><td>0.8k</td></tr><tr><td>MRPC</td><td>3.5k</td><td>0.4k</td></tr><tr><td>CoLA</td><td>8.5k</td><td>1.0k</td></tr><tr><td>STS-B</td><td>5.7k</td><td>1.5k</td></tr></table>
Table 3: Number of examples for training and development in GLUE datasets.
# B Hyper-parameters
The approach presented in this work introduces two new hyper-parameters for each task $\tau \in \mathcal{T}$, namely the number of fine-tuned layers $L^{(\tau)}$ for the teacher and the number of knowledge-distilled layers $l^{(\tau)}$ for the student. If resources permit, these two hyper-parameters should be tuned separately for each task. As introduced in Section 3.1, we suggest constraining $L^{(\tau)}$ within the range $4 \leq L^{(\tau)} \leq 10$. As for $l^{(\tau)}$, which determines the eventual task-specific overhead, we impose $l^{(\tau)} \leq 3$. Since we always determine $L^{(\tau)}$ first, we do not need to experiment with every combination of $(L^{(\tau)}, l^{(\tau)})$. Combining these together, our approach requires approximately 10x (7 choices for $L$ plus 3 for $l$) more training time compared to the conventional full fine-tuning approach.
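
The resulting two-stage search can be sketched as follows; `train_and_eval` is a hypothetical helper standing in for one fine-tuning (or distillation) run plus a dev-set evaluation, and is not part of the paper's code.

```python
# A minimal sketch of the per-task hyper-parameter search described above.

def search_task_hparams(task, train_and_eval):
    # Stage 1: pick the number of fine-tuned teacher layers L in [4, 10].
    best_L = max(range(4, 11), key=lambda L: train_and_eval(task, n_teacher=L))
    # Stage 2: with L fixed, pick the number of distilled student layers l <= 3.
    best_l = max(range(1, 4),
                 key=lambda l: train_and_eval(task, n_teacher=best_L, n_student=l))
    return best_L, best_l  # 7 + 3 = 10 runs, hence roughly 10x full fine-tuning
```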
The conventional hyper-parameters (e.g. learning rate, mini-batch size, etc.) used in our experiments are summarized in Table 4.
# C Detailed Experiment Results
In the box plots of Figure 2 we report the performance of the student models initialized from pre-trained BERT and from the teacher. It can be clearly seen that the latter initialization scheme generally outperforms the former. Besides, we also observe that although increasing the number of task-specific layers improves the performance, the marginal benefit of doing so varies across tasks.
<table><tr><td>Hyper-parameter</td><td>Value</td></tr><tr><td>Learning rate</td><td>2e-5</td></tr><tr><td>Batch size</td><td>32</td></tr><tr><td>Epochs</td><td>3, 4, 5</td></tr><tr><td>Optimizer</td><td>Adam</td></tr><tr><td>Weight decay rate</td><td>0.01</td></tr><tr><td>β1</td><td>0.9</td></tr><tr><td>β2</td><td>0.999</td></tr><tr><td>ε</td><td>1e-6</td></tr></table>
Table 4: Hyper-parameters used in our experiments. We mainly followed the practice of (Devlin et al., 2019).
Notably, for QQP and STS-B the student models with only one task-specific layer are able to attain $99\%$ of the performance of their teacher.
# D Performance-Efficiency Trade-off
In Table 5, we report the performance of our method with various values of $c$, where $c$ is defined as the minimal marginal benefit (in terms of the task performance metric) that every task-specific layer should bring (see Section 4.2).
# E Industrial Application
We have implemented our framework for utterance understanding in XiaoAI, a monolingual (Chinese) commercial AI assistant developed by XiaoMi. Our flexible multi-task model forms the bulk of the utterance understanding system, which processes over 100 million user queries per day with a peak throughput of nearly 4000 queries-per-second (QPS).
For each user query, the utterance understanding system performs various tasks, including emotion recognition, incoherence detection, domain classification, intent classification, named entity recognition, slot filling, etc. Due to the large workload, these tasks are developed and maintained by a number of different teams. As the AI assistant itself is under iterative/incremental development, its utterance understanding system undergoes frequent updates<sup>3</sup>:
- Update of training corpus, e.g. when new training samples become available or some mislabeled samples are corrected or removed.
- Redefinition of existing tasks. For instance, when a more fine-grained intent classification is needed, we may need to redefine existing intent labels or introduce new labels.
- Introduction of new tasks. This may happen when the AI assistant needs to upgrade its skillsets so as to perform new tasks (e.g. recognize a new set of instructions, play verbal games with kids, etc.).
- Removal of obsolete tasks. Sometimes a task is superseded by another task, or simply deprecated due to commercial considerations. Those tasks need to be removed from the system.
One imperative feature for the system is modular design, i.e. the tasks should be independent of each other so that any modification made to one task does not affect the other tasks. Clearly, a conventional multi-task system does not meet our needs, as multi-task training breaks modularity.
Before the introduction of BERT, our utterance understanding system was based on single-task serving, i.e. a separate model was deployed for each task. As those models were relatively lightweight (e.g. TextCNN, LSTM), overhead was not an issue. However, with the introduction of BERT, the cost of single-task serving became a valid concern, as each task model (a unique 12-layer fine-tuned BERT) requires two Nvidia Tesla V100 GPUs for stable serving that meets the latency requirement.
With the primary objective of reducing cost, we have implemented the proposed flexible multitask model in our utterance understanding system, which provides serving for a total of 21 downstream tasks. Overall, there are 40 transformer layers of which 8 are shared frozen layers (on average 1.5 task-specific layers per task). Using only 5 Nvidia Tesla V100 GPUs, we are able to achieve<sup>4</sup> a P99 latency of $32\mathrm{ms}$ under a peak throughput of 4000 QPS. Compared with single-task serving for 21 tasks which would require 42 GPUs, we estimate that our system reduces the total serving cost by up to $88\%$ .
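
The quoted saving follows directly from the GPU counts above (an arithmetic restatement only):

$$
21 \text{ tasks} \times 2 \text{ GPUs} = 42 \text{ GPUs}, \qquad \frac{42 - 5}{42} \approx 88\%.
$$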

|
| 229 |
+
|
| 230 |
+

|
| 231 |
+
|
| 232 |
+

|
| 233 |
+
|
| 234 |
+

|
| 235 |
+
|
| 236 |
+

|
| 237 |
+
|
| 238 |
+

|
| 239 |
+
|
| 240 |
+

|
| 241 |
+
Figure 2: A comparison of the task performance between vanilla initialization (initialize from pre-trained BERT) and teacher initialization as described in Section 3.2 for $n \in \{1,2,3\}$ , where $n$ is the number of task-specific layers in the student model.
<table><tr><td></td><td>QNLI</td><td>RTE</td><td>QQP</td><td>MNLI</td><td>SST-2</td><td>MRPC</td><td>CoLA</td><td>STS-B</td><td>Avg.</td><td>Layers</td><td>Overhead</td></tr><tr><td>Full fine-tuning</td><td>91.6</td><td>69.7</td><td>91.1</td><td>84.6</td><td>93.4</td><td>88.2</td><td>54.7</td><td>88.8</td><td>82.8</td><td>12 × 8</td><td>96 (100%)</td></tr><tr><td>Ours (KD-1)</td><td>86.4</td><td>66.1</td><td>91.0</td><td>77.5</td><td>90.7</td><td>85.1</td><td>36.4</td><td>88.3</td><td>77.4</td><td>7 + 1 × 8</td><td>15 (15.6%)</td></tr><tr><td>Ours (KD-2)</td><td>88.6</td><td>64.6</td><td>91.3</td><td>81.7</td><td>92.7</td><td>86.3</td><td>44.0</td><td>88.6</td><td>79.7</td><td>7 + 2 × 8</td><td>23 (24.0%)</td></tr><tr><td>Ours (KD-3)</td><td>90.2</td><td>66.8</td><td>91.2</td><td>82.9</td><td>92.7</td><td>88.0</td><td>50.0</td><td>88.9</td><td>81.3</td><td>7 + 3 × 8</td><td>31 (32.3%)</td></tr><tr><td rowspan="2">Ours (c = 1.0)</td><td>90.2</td><td>71.5</td><td>91.0</td><td>82.9</td><td>92.7</td><td>88.0</td><td>55.2</td><td>88.3</td><td rowspan="2">82.5</td><td rowspan="2">7 + 26</td><td rowspan="2">33 (34.3%)</td></tr><tr><td>(2,3)</td><td>(7,5)</td><td>(2,1)</td><td>(4,3)</td><td>(6,2)</td><td>(7,3)</td><td>(4,8)</td><td>(4,1)</td></tr><tr><td rowspan="2">Ours (c = 2.0)</td><td>88.6</td><td>66.1</td><td>91.0</td><td>81.7</td><td>92.7</td><td>85.1</td><td>50.0</td><td>88.3</td><td rowspan="2">80.4</td><td rowspan="2">7 + 13</td><td rowspan="2">20 (20.2%)</td></tr><tr><td>(2,2)</td><td>(7,1)</td><td>(2,1)</td><td>(4,2)</td><td>(6,2)</td><td>(7,1)</td><td>(4,3)</td><td>(4,1)</td></tr><tr><td rowspan="2">Ours (c = 3.0)</td><td>86.4</td><td>66.1</td><td>91.0</td><td>81.7</td><td>90.7</td><td>85.1</td><td>50.0</td><td>88.3</td><td rowspan="2">79.9</td><td rowspan="2">7 + 11</td><td rowspan="2">18 (18.8%)</td></tr><tr><td>(2,1)</td><td>(7,1)</td><td>(2,1)</td><td>(4,2)</td><td>(6,1)</td><td>(7,1)</td><td>(4,3)</td><td>(4,1)</td></tr><tr><td rowspan="2">Ours (w/o KD)</td><td>91.7</td><td>71.5</td><td>91.1</td><td>84.5</td><td>93.1</td><td>89.7</td><td>55.2</td><td>88.9</td><td rowspan="2">83.2</td><td rowspan="2">7 + 60</td><td rowspan="2">67 (69.8%)</td></tr><tr><td>(2,10)</td><td>(7,5)</td><td>(2,10)</td><td>(4,8)</td><td>(6,6)</td><td>(7,5)</td><td>(4,8)</td><td>(4,8)</td></tr></table>
Table 5: Results with various values of $c$ . This parameter controls the performance-efficiency trade-off of the overall multi-task model, in the sense that we allow the growth of an existing task module by one more task-specific layer only if that would bring a performance gain greater than $c$ .
# Responsible NLP Research Checklist
# A. For every submission
A1. Did you discuss the limitations of your work? Yes, it is explicitly discussed in Section 5.2.
A2. Did you discuss any potential risks of your work?
No, we believe that there is no potential risk.
A3. Do the abstract and introduction summarize the paper's main claims?
Yes, we confirm so.
# B. Did you use or create scientific artifacts?
Yes, we used the GLUE datasets in Section 4.
B1. Did you cite the creators of artifacts you used? Yes, the GLUE paper is cited in Section 4. The individual datasets in GLUE are cited in Appendix A.
B2. Did you discuss the license or terms for use and/or distribution of any artifacts?
No. Since those artifacts are popular in the NLP community, we merely followed the common practice of using them. We do not believe that our usage violates the license or terms of use, or is potentially risky in any way we can imagine.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?

No. The justification is the same as for question B2.
B4. Did you discuss the steps taken to check whether the data that was collected/used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No. The justification is the same as for question B2.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Yes, it is provided in Appendix A.
B6. Did you report relevant statistics like the number of examples, details of train/test/dev splits, etc. for the data that you used/created?
Yes, it is provided in Appendix A.
# C. Did you run computational experiments?
Yes, in Section 4.
C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?
No, we did not report the number of parameters in the models used, as it can be easily inferred from Table 2. The total computational budget was discussed in Section 5.2.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Yes, it is provided in Section 4 and Appendix B.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Yes, we explicitly stated in the caption of Table 1 and Table 2 that our results are the maximum over 5 independent runs. Detailed results are also reported in Appendix C.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?
We did reuse the WordPiece implementation from BERT's repository https://github.com/google-research/bert for tokenization. We did not report this as we consider it a trivial matter.
# D. Did you use human annotators (e.g., crowdworkers) or research with human subjects?
No, we did not use any human annotators, nor did we research with human subjects.
aflexiblemultitaskmodelforbertserving/images.zip
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7abc6b724945f4eeaf718a3f29722ced02d39072b410a15cc1bfb83144704934
size 486320

aflexiblemultitaskmodelforbertserving/layout.json
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e1e0ae5e7030f6968339305338c61e945b324ae0aacb5ad4be5888ff9f01c736
size 358500

arecipeforarbitrarytextstyletransferwithlargelanguagemodels/b1029a19-5f12-4d77-b5f4-81acfde01e3e_content_list.json
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4e11c58d24dcef44b0baa4ea9c1efb82074488cac5a05d724ac0650b195649c0
size 79733

arecipeforarbitrarytextstyletransferwithlargelanguagemodels/b1029a19-5f12-4d77-b5f4-81acfde01e3e_model.json
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7c130668a76bf2e1a39040a83de1b9d3c8a8df57f427b1293e966249f482c35e
size 93853

arecipeforarbitrarytextstyletransferwithlargelanguagemodels/b1029a19-5f12-4d77-b5f4-81acfde01e3e_origin.pdf
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f7f58e81e69167d8e9198151a53048ba3dea69fba728b98f33629dec6269eaef
size 688944

arecipeforarbitrarytextstyletransferwithlargelanguagemodels/full.md
ADDED

@@ -0,0 +1,293 @@

# A Recipe For Arbitrary Text Style Transfer with Large Language Models
Emily Reif $^{1*}$ Daphne Ippolito $^{1,2*}$ Ann Yuan $^{1}$ Andy Coenen $^{1}$ Chris Callison-Burch $^{2}$ Jason Wei $^{1}$

$^{1}$ Google Research $^{2}$ University of Pennsylvania
{ereif, annyuan, andycoenen, jasonwei}@google.com
{daphnei, ccb}@seas.upenn.edu
# Abstract
In this paper, we leverage large language models (LMs) to perform zero-shot text style transfer. We present a prompting method that we call augmented zero-shot learning, which frames style transfer as a sentence rewriting task and requires only a natural language instruction, without model fine-tuning or exemplars in the target style. Augmented zero-shot learning is simple and demonstrates promising results not just on standard style transfer tasks such as sentiment, but also on natural language transformations such as "make this melodramatic" or "insert a metaphor."
# 1 Introduction
Text style transfer is the task of rewriting text to incorporate additional or alternative stylistic elements while preserving the overall semantics and structure. Although style transfer has garnered increased interest due to the success of deep learning, these approaches usually require a substantial amount of labeled training examples, either as parallel text data (Zhu et al., 2010; Rao and Tetreault, 2018) or non-parallel text data of a single style (Li et al., 2018; Jin et al., 2019; Liu et al., 2020; Krishna et al., 2020). Even bleeding-edge approaches that tackle the challenging problem of label-free style transfer are limited in that they require at least several exemplar sentences that dictate a given target style (Xu et al., 2020; Riley et al., 2021). Hence, recent survey papers have identified a need for new methods that both reduce the training data requirements and expand the scope of styles supported (Jin et al., 2020; Hu et al., 2020).
In this work, we present augmented zero-shot learning, a prompting method that allows large language models to perform text style transfer to arbitrary styles, without any exemplars in the target style. Our method builds on prior work showing

Figure 1: Zero-shot, few-shot, and augmented zero-shot prompts for style transfer. The boldface text is the zero-shot prompt, and the plain text is the additional priming sequence. The full prompts used in this paper are shown in Table 7. We encourage readers to examine the outputs of our model at https://bit.ly/3fLDuci.
that sufficiently large LMs such as GPT-3 can perform various tasks ranging from classification to translation, simply by choosing a clever prompt for the model to continue (Brown et al., 2020; Branwen, 2020). Using a single prompt that provides several demonstrations of sentences being "rewritten" to meet a desired condition, language models can extrapolate and rewrite text in unseen styles. We are thus able to perform style transfer to arbitrary styles such as "make this sentence more comic" or "include the word balloon."
Augmented zero-shot learning is simple and facilitates the application of style transfer to a wider range of styles than existing work. Our contributions are the following.
1. We propose a recipe for style transfer using large LMs that is label-free, training-free, and intuitively controllable.
2. Via human evaluation, we find that our method achieves strong performance on both standard and non-standard style transfer tasks. We also compare our approach for sentiment transfer with prior methods using automatic evaluation.
3. We explore real-world desired style transfers generated from users of a text editing UI that implements our method.
# 2 Augmented zero-shot prompting
Although large LMs are trained only for continuation, recent work has shown that they can perform a variety of NLP tasks by expressing the task as a prompt that encourages the model to output the desired answer as the continuation (Puri and Catanzaro, 2019; Weller et al., 2020; Brown et al., 2020; Schick and Schütze, 2021, inter alia; see Liu et al. (2021a) for a survey). The simplest approach, zero-shot prompting, directly uses natural language to ask the large LM to perform a task, as shown in Figure 1a. Zero-shot prompting, however, can be prone to failure modes such as not returning well-formatted or logical outputs (see §6). Few-shot prompting, as shown in Figure 1b, has been shown to achieve higher performance, but requires exemplars for the exact task that we want the model to perform. Such few-shot examples can be easily obtained if the desired style transformation is known ahead of time, but this ultimately limits style transfer to a set of pre-specified style tasks.
To remove the need for these labeled exemplars for each style transfer task, we propose augmented zero-shot learning, a method for performing multitask style transfer using a single set of exemplars. Instead of prompting the model with exemplars specific to the exact style transfer task we wish to perform, we prompt the model with examples of a variety of sentence rewriting operations, as shown in Figure 1c. This intuition is inspired by Reynolds and McDonell (2021)'s observation that successful prompts constrain the behavior of the large LM away from failure modes; in our case, we aim to preserve the flexibility of a zero-shot prompt while encouraging the model to produce outputs of a specific template. We keep the format of the exemplars constant and insert the
<table><tr><td>Original</td><td>There, in the middle of the street, stood an old man in a weatherbeaten brown coat.</td></tr><tr><td>More descriptive</td><td>The man's brown trenchcoat was coated with snow, and as he leaned against the telephone booth, he cast a long shadow across the road in the late afternoon sun.</td></tr><tr><td>Include a metaphor</td><td>There, in the middle of the street, stood an old man whose face resembled the weatherworn bark of a gnarled tree.</td></tr><tr><td>Include the word “park”</td><td>There, in the middle of Central Park, stood an old man in a weatherbeaten brown coat.</td></tr><tr><td>Include the word “balloon”</td><td>There, in the middle of the street, stood an old man with several colourful balloons tied to the straps of his coat.</td></tr><tr><td>More comic</td><td>There, in the middle of the street, stood an old man in a raggedy brown coat that was much too big for him.</td></tr><tr><td>More melodramatic</td><td>There, in the middle of the road, stood a grizzled old man, the light of life faded from his sunken eyes.</td></tr></table>
Table 1: Example style transfer outputs from augmented zero-shot learning for non-standard styles.
desired sentence transformation into the same format. In this way, the augmented zero-shot formulation supports arbitrary sentence rewriting tasks without the need to write any task-specific exemplars. Thus, it works for a wide range of styles, including modifying the text to be "more melodramatic," "insert a metaphor," or "include the word balloon."
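
A minimal sketch of how such a prompt can be assembled is shown below. The "Here is some text: ... Here is a rewrite of the text, which is ...:" template follows the wording quoted in this paper, but the two exemplars are illustrative stand-ins rather than the released prompt (see Table 7 in the Appendix for that).

```python
# Illustrative exemplars of generic rewriting operations (assumed, not the
# paper's exact prompt). The same fixed set is reused for every target style.
EXEMPLARS = [
    ("That is an ugly dress.", "more positive",
     "That dress has a unique look to it!"),
    ("The meeting is at noon.", "more formal",
     "The meeting will commence at twelve o'clock."),
]

def augmented_zero_shot_prompt(sentence: str, style: str) -> str:
    parts = [
        f"Here is some text: {{{src}}} Here is a rewrite of the text, "
        f"which is {tgt}: {{{out}}}"
        for src, tgt, out in EXEMPLARS
    ]
    # The arbitrary target style is slotted into the same template with no
    # exemplar of its own; the LM continues from the opening brace.
    parts.append(
        f"Here is some text: {{{sentence}}} Here is a rewrite of the text, "
        f"which is {style}: {{"
    )
    return " ".join(parts)

print(augmented_zero_shot_prompt("The sky was blue.", "more melodramatic"))
```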
# 3 Experimental Setup
Style transfer tasks. We consider six style transfer tasks that we deem non-standard, listed in Table 1. These styles were chosen to be representative of the most frequent style adjustments made by users of an AI-assisted text editor that employs our method (discussed further in §5). As source sentences, we use 50 sentences randomly drawn from the Reddit Writing Prompts validation set (Fan et al., 2018), excluding those that already clearly exhibited one of the styles or were ungrammatical/incoherent. We use human evaluation for these styles, since not all styles have readily available classifiers.
We also evaluate our method on two standard style transfer tasks: sentiment and formality. We use the Yelp polarity dataset (Zhang et al., 2015) for sentiment and Grammarly's Yahoo Answers Formality Corpus (GYAFC) dataset for formality (Rao and Tetreault, 2018). These datasets allow us to evaluate performance of augmented zero-shot learning in the context of prior supervised methods which have been used on these tasks.
Model. Augmented zero-shot learning requires a large language model. We primarily use LaMDA, a left-to-right decoder-only transformer language model (Vaswani et al., 2017) with a non-embedding parameter count of 137B (Thoppilan et al., 2022). The pre-trained LaMDA model, which we refer to as $LLM$ , was trained on a corpus comprising 1.95B public web documents, including forum and dialog data and Wikipedia. The dataset was tokenized into 2.49T BPE tokens with a SentencePiece vocabulary size of 32K (Kudo and Richardson, 2018). We also use $LLM-Dialog$ , the final LaMDA model which was finetuned on a curated, high-quality subset of data identified to be in a conversational format. Decoding was done with top- $k = 40$ . To show that the success of augmented zero-shot learning is not restricted to these two large LMs, we also perform experiments with GPT-3 (Table 8). For GPT-3, decoding was done with nucleus sampling using $p = 0.6$ (Holtzman et al., 2019).
The prompts used for $LLM$ and GPT-3 are shown in Figure 1. For $LLM$ -Dialog, the prompt was instead formulated as a conversation between one agent who is requesting rewrites and another who is performing the rewrites. See Table 7 in the Appendix for the full non-abbreviated prompts.
# 4 Results
# 4.1 Non-Standard Styles
For our six non-standard styles, we asked six professional raters to assess <input sentence, target style, output sentence> tuples. These raters are fluent in English, live in India, and work full time labeling and evaluating data. To decrease inter-rater discrepancy and ensure that our instructions were clear, we had an initial calibration session where they test-rated a small portion of the data (around 10 datapoints which were then omitted from the results) and asked us any clarifying questions. For each style, we compare outputs from our method plus the three baselines for 50 sentences.
Each tuple was scored by three raters (3,600 ratings total) on the following three axes which are standard to textual style transfer (Mir et al., 2019): (1) transfer strength (the amount that the output actually matches the target style), (2) semantic preservation (whether the underlying meaning of the output text, aside from style, matches that of the input), and (3) fluency (whether the text is coherent and could have been written by a proficient English speaker). Following Sakaguchi and Van Durme


Figure 2: Human evaluation of style transfer for six atypical styles. Our method is rated comparably to the human-written ground truth. Error bars show Standard Error of the Mean. Evaluation of fluency is shown in Figure 4 in the Appendix.
(2018), transfer strength and semantic preservation were rated on a scale from 1-100. A screenshot of the evaluation UI is shown in Figure 5 in the Appendix. Note that the guidelines for semantic preservation are not standardized in prior literature (Briakou et al., 2021); while some evaluations are strict that the outputs cannot contain any more information than the inputs, we asked the annotators not to penalize meaning transformations that are necessary for the specified style transformation. We use LLM-Dialog and compare it with three other methods: (1) zero-shot (a baseline), (2) paraphrase (our normal augmented zero-shot prompt, but with the target style of "paraphrased", as a control), and (3) human (ground-truth transformations written by the authors).
Figure 2 shows these results. We found that the outputs of our method were rated almost as highly as the human-written ground truth for all three evaluations. The zero-shot baseline performed the worst in all categories: $25.4\%$ of the time, it did not return a valid response at all (see §6), compared with $0.6\%$ for augmented zero-shot. The strong performance of the paraphrase baseline at fluency and semantic similarity shows that large LMs are capable of generating high-quality text that remains true to the input sentence's meaning. Overall, the average length of the input sentences was 66 characters, whereas the average length of augmented zero-shot outputs was 107 characters. For context, human paraphrase outputs were 82 characters.
For a subset of the tasks, some automatic evaluation was also possible. We found that the "balloon" and "park" transformations successfully inserted the target word $85\%$ of the time. For "more descriptive" and "include a metaphor" the transformed text was, as expected, longer than the original (by $252\%$ and $146\%$ respectively, compared with $165\%$ and $146\%$ for human baselines).
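
Both checks are trivially automatable; a sketch of the kind we would use (ours, for illustration only):

```python
# Simple automatic checks matching the word-inclusion and length analyses above.
def word_included(output: str, target_word: str) -> bool:
    return target_word.lower() in output.lower()

def length_ratio(output: str, source: str) -> float:
    return len(output) / len(source)  # character-level length ratio
```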
# 4.2 Standard Styles
To better contextualize the performance of our method against prior methods, we also generated outputs for two standard style transfer tasks: sentiment and formality. Figure 3 shows human evaluations (same setup as before) for our outputs as well as the outputs from two popular prior style transfer methods, Unsup MT (Prabhumoye et al., 2018) and Dual RL (Luo et al., 2019). The outputs from our method were rated comparably to both human-generated responses and the two prior methods, using the same rating setup as for the non-standard styles: six outputs and baselines for four styles across 50 sentences, each rated independently by three raters, for 3,000 ratings in total.
Furthermore, following Li et al. (2018) and Sudhakar et al. (2019), we perform automatic evaluation for sentiment style transfer since there are classifiers available for these styles. We note that although automatic evaluations can diverge from human ratings, they can still be a good proxy as we could not perform human evaluation against every prior method due to time and resource constraints. We automatically evaluate (1) transfer strength using a sentiment classifier from HuggingFace Transformers (Wolf et al., 2020), (2) semantic similarity to human examples provided by Luo et al. (2019) via BLEU score, and (3) fluency via perplexity, as measured by GPT-2 (117M).
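
A rough sketch of these three automatic metrics using generic off-the-shelf tools follows; the paper does not name the exact checkpoints, so the default models below (and the use of `sacrebleu` for BLEU) are our assumptions.

```python
import torch
import sacrebleu
from transformers import pipeline, GPT2LMHeadModel, GPT2TokenizerFast

sentiment = pipeline("sentiment-analysis")         # (1) transfer strength

def corpus_bleu(outputs, human_references):        # (2) semantic similarity
    return sacrebleu.corpus_bleu(outputs, [human_references]).score

tok = GPT2TokenizerFast.from_pretrained("gpt2")    # GPT-2 (117M)
lm = GPT2LMHeadModel.from_pretrained("gpt2")

def perplexity(text: str) -> float:                # (3) fluency
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss            # mean token negative log-likelihood
    return torch.exp(loss).item()
```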
Table 2 shows these automatic evaluations, with four main takeaways. First, augmented zero-shot prompting achieves high accuracy and low perplexity compared with baselines. The BLEU scores, however, are low, which we believe is because it tends to add additional information to generated sentences (see Appendix B for a deeper analysis). Second, we apply augmented zero-shot learning to GPT-3 175B; these results indicate that augmented zero-shot learning generalizes to another large language model. Third, we vary model size for GPT-3 models, finding that larger size greatly improves style transfer. Fourth, for LLM and LLM-dialog, we find that augmented zero-shot learning substantially outperforms vanilla zero-shot learning and almost reaches the accuracy of five-shot learning.

Figure 3: Human evaluation of sentiment and formality transfer. Our method is rated comparably to human-written ground truth as well as prior methods. Error bars show Standard Error of the Mean. Unsup. MT is Prabhumoye et al. (2018); Dual RL is Luo et al. (2019).
# 5 Potential of Arbitrary Styles
One promising application of augmented zero-shot learning is an AI-powered writing assistant that can allow writers to transform their text in arbitrary ways that the writer defines and controls. As a qualitative case study to explore what arbitrary rewrite styles may be requested, we built an AI-assisted story-writing editor with a "rewrite as" feature that uses our augmented zero-shot method. Our editor has a freeform text box for users to specify how they would like a selection of their story to be rewritten (see Figure 6 in the Appendix). We asked 30 people from a creative writing group to use our UI to write a 100-300 word story, collecting 333 rewrite requests in total. Table 3 shows a subset of these, which were as diverse as asking for the text "to be about mining" or "to be less diabolical."
# 6 Limitations and Failure Modes
This section details several qualitative limitations with our method.
Unparsable answers A frequent problem that arises when using large LMs for other NLP tasks is that their outputs cannot be automatically parsed into usable answers. For example, when given a prompt like "Here is some text: that is an ugly dress. Here is a rewrite of the text, which is more positive," LLM-Dialog might return something like "Sounds like you are a great writer!" Similar error modes exist for LLM, which might output something like "Here are more writing tips and tricks."
<table><tr><td></td><td>Acc</td><td>BLEU</td><td>PPL</td></tr><tr><td colspan="4">SUPERVISED METHODS</td></tr><tr><td>Cross-alignment (Shen et al., 2017)</td><td>73.4</td><td>17.6</td><td>812</td></tr><tr><td>Backtrans (Prabhumoye et al., 2018)</td><td>90.5</td><td>5.1</td><td>424</td></tr><tr><td>Multidecoder (Fu et al., 2018)</td><td>50.3</td><td>27.7</td><td>1,703</td></tr><tr><td>Delete-only (Li et al., 2018)</td><td>81.4</td><td>28.6</td><td>606</td></tr><tr><td>Delete-retrieve (Li et al., 2018)</td><td>86.2</td><td>31.1</td><td>948</td></tr><tr><td>Unpaired RL (Xu et al., 2018)</td><td>52.2</td><td>37.2</td><td>2,750</td></tr><tr><td>Dual RL (Luo et al., 2019)</td><td>85.9</td><td>55.1</td><td>982</td></tr><tr><td>Style transformer (Dai et al., 2019)</td><td>82.1</td><td>55.2</td><td>935</td></tr><tr><td colspan="4">INFERENCE-ONLY METHODS</td></tr><tr><td>GPT-3 ada, aug zero-shot</td><td>31.5</td><td>39.0</td><td>283</td></tr><tr><td>GPT-3 curie, aug zero-shot</td><td>53.0</td><td>48.3</td><td>207</td></tr><tr><td>GPT-3 davinci, aug zero-shot</td><td>74.1</td><td>43.8</td><td>231</td></tr><tr><td>LLM: zero-shot</td><td>69.7</td><td>28.6</td><td>397</td></tr><tr><td>five-shot</td><td>83.2</td><td>19.8</td><td>240</td></tr><tr><td>aug zero-shot</td><td>79.6</td><td>16.1</td><td>173</td></tr><tr><td>LLM-dialog: zero-shot</td><td>59.1</td><td>17.6</td><td>138</td></tr><tr><td>five-shot</td><td>94.3</td><td>13.6</td><td>126</td></tr><tr><td>aug zero-shot</td><td>90.6</td><td>10.4</td><td>79</td></tr></table>
Table 2: Comparing augmented zero-shot prompting with supervised style transfer methods on the Yelp sentiment style transfer dataset using automatic evaluation. Acc: accuracy; PPL: perplexity. The inference-only table shows our method applied to 3 different sizes of GPT-3, plus our own LLM.
to be a little less angsty $\bullet$ to be about mining $\bullet$ to be better written $\bullet$ to be less diabolical $\bullet$ to be more absurd $\bullet$ to be more adventurous $\bullet$ to be more Dickensian $\bullet$ to be more emotional $\bullet$ to be more magical $\bullet$ to be more melodramatic $\bullet$ to be more philosophical $\bullet$ to be more revolutionary $\bullet$ to be more surprising $\bullet$ to be more suspenseful $\bullet$ to be more technical $\bullet$ to be more whimsical $\bullet$ to be warmer $\bullet$ to fit better grammatically with the rest of the story $\bullet$ to make more sense
Table 3: Requests in the form of "Rewrite this..." made by real users to a large LM-powered text editor. For the full set of unique requests, see Table 5 in the Appendix.
Other times, the response contains correct information, but it cannot be automatically parsed (e.g., "a good rewrite might be to say that the dress is pretty."). In hindsight, these outputs make sense: most of the training data of large LMs is not well-formatted pairs of inputs and outputs (Reynolds and McDonell, 2021). See §A for how we dealt with these issues.
Hallucinations Large LMs are known to hallucinate text content; we saw this happen frequently for style transfer. While this is an advantage in some contexts like creative writing, it is undesirable for applications like summarization.
Inherent style trends We also noticed that even our "paraphrase" baseline, where the model was simply asked to rewrite the input sentence, was rated highly for style strength for a few styles, including "more formal" and "more melodramatic". This implies that our method's generations generally trend toward these styles. A direction for future work would be to see what styles and qualities of text our method (and large LMs in general) are inherently more likely to produce.
Less reliable than trained methods For style transfer tasks that have available training data, prior methods that train or finetune on that data are inherently more reliable at producing text that resembles their training data. This can be observed in the lower BLEU scores our method achieves compared with trained methods, despite comparable transfer accuracy (Section B). Thus, augmented zero-shot learning offers less fine-grained control over the properties of the style-transferred text than methods that see task-specific training data.
Large LM safety concerns Large LMs themselves come with their own host of difficulties, barriers to entry, and potential safety concerns as discussed by Bender et al. (2021), which are also valid for this style transfer method. However, we also think that this method can be a useful tool in exploring and exposing the safety and boundaries of these models themselves: what happens if we try to force the large LM to make a text "more racist", "more sexist", or "more incendiary"? It is important to keep pushing these models to their boundaries to see where they fail and where problems arise, and specific use cases that show a broader range of the model's capabilities also show a broader range of its failure modes.

# 7 Conclusions

We introduced augmented zero-shot learning, which we find shows strikingly promising performance considering its simplicity. This prompting paradigm moves the needle in text style transfer by expanding the range of possible styles beyond the currently limited set of styles for which annotated data exists. More broadly, we also hope that the strategy of prompting a large LM with non-task-specific examples can inspire new inference-only methods for other NLP tasks.

# References

Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, pages 610–623, New York, NY, USA. Association for Computing Machinery.
Gwern Branwen. 2020. GPT-3 creative fiction.

Eleftheria Briakou, Sweta Agrawal, Ke Zhang, Joel R. Tetreault, and Marine Carpuat. 2021. A review of human evaluation for style transfer. CoRR, abs/2106.04747.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. CoRR, abs/2005.14165.

Ning Dai, Jianze Liang, Xipeng Qiu, and Xuanjing Huang. 2019. Style transformer: Unpaired text style transfer without disentangled latent representation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5997–6007, Florence, Italy. Association for Computational Linguistics.

Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898, Melbourne, Australia. Association for Computational Linguistics.

Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In Proceedings of the AAAI Conference on Artificial Intelligence.

Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. In International Conference on Learning Representations.

Zhiqiang Hu, Roy Ka-Wei Lee, and Charu C. Aggarwal. 2020. Text style transfer: A review and experiment evaluation. CoRR, abs/2010.12742.

Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, and Rada Mihalcea. 2020. Deep learning for text style transfer: A survey. CoRR, abs/2011.00416.

Zhijing Jin, Di Jin, Jonas Mueller, Nicholas Matthews, and Enrico Santus. 2019. IMaT: Unsupervised text attribute transfer via iterative matching and translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3097–3109, Hong Kong, China. Association for Computational Linguistics.

Kalpesh Krishna, John Wieting, and Mohit Iyyer. 2020. Reformulating unsupervised style transfer as paraphrase generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 737–762, Online. Association for Computational Linguistics.

Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. CoRR, abs/1808.06226.

Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to sentiment and style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1865–1874, New Orleans, Louisiana. Association for Computational Linguistics.

Dayiheng Liu, Jie Fu, Yidan Zhang, Chris Pal, and Jiancheng Lv. 2020. Revision in continuous space: Unsupervised text style transfer without adversarial learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8376–8383.

Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021a. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586.

Ruibo Liu, Chenyan Jia, and Soroush Vosoughi. 2021b. A transformer-based framework for neutralizing and reversing the political polarity of news articles. Proc. ACM Hum.-Comput. Interact., 5(CSCW1).

Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Xu Sun, and Zhifang Sui. 2019. A dual reinforcement learning framework for unsupervised text style transfer. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10–16, 2019, pages 5116–5122. ijcai.org.

Aman Madaan, Amrith Setlur, Tanmay Parekh, Barnabas Poczos, Graham Neubig, Yiming Yang, Ruslan Salakhutdinov, Alan W Black, and Shrimai Prabhumoye. 2020. Politeness transfer: A tag and generate approach. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1869–1881, Online. Association for Computational Linguistics.

Remi Mir, Bjarke Felbo, Nick Obradovich, and Iyad Rahwan. 2019. Evaluating style transfer for text. CoRR, abs/1904.02295.

Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W Black. 2018. Style transfer through back-translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 866–876, Melbourne, Australia. Association for Computational Linguistics.

Raul Puri and Bryan Catanzaro. 2019. Zero-shot text classification with generative language models. arXiv preprint arXiv:1912.10165.

Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 129–140, New Orleans, Louisiana. Association for Computational Linguistics.

Laria Reynolds and Kyle McDonell. 2021. Prompt programming for large language models: Beyond the few-shot paradigm.

Parker Riley, Noah Constant, Mandy Guo, Girish Kumar, David C. Uthus, and Zarana Parekh. 2021. TextSETTR: Label-free text style extraction and tunable targeted restyling. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).

Keisuke Sakaguchi and Benjamin Van Durme. 2018. Efficient online scalar annotation with bounded support. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 208–218, Melbourne, Australia. Association for Computational Linguistics.

Timo Schick and Hinrich Schütze. 2021. It's not just size that matters: Small language models are also few-shot learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339–2352, Online. Association for Computational Linguistics.

Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.

Akhilesh Sudhakar, Bhargav Upadhyay, and Arjun Maheswaran. 2019. "Transforming" delete, retrieve, generate approach for controlled text style transfer. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3269–3279, Hong Kong, China. Association for Computational Linguistics.

Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. LaMDA: Language models for dialog applications. arXiv preprint arXiv:2201.08239.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR, abs/1706.03762.

Orion Weller, Nicholas Lourie, Matt Gardner, and Matthew E. Peters. 2020. Learning from task descriptions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1361–1375, Online. Association for Computational Linguistics.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.

Jingjing Xu, Xu Sun, Qi Zeng, Xiaodong Zhang, Xuancheng Ren, Houfeng Wang, and Wenjie Li. 2018. Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 979–988, Melbourne, Australia. Association for Computational Linguistics.

Peng Xu, Yanshuai Cao, and Jackie Chi Kit Cheung. 2020. On variational learning of controllable representations for text without supervision. In Proceedings of the International Conference on Machine Learning (ICML), abs/1905.11975.

Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Proceedings of the Conference on Neural Information Processing Systems.

Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010), pages 1353–1361, Beijing, China. Coling 2010 Organizing Committee.

# Appendix

# A Prompt Selection

A promising new area of prompt engineering has arisen to address the failure modes discussed above, specifically the invalid or unparsable answers. Reynolds and McDonell (2021) find that prompting a model for a task is more akin to locating an already-learned task than truly learning a new one. Moreover, they emphasize that prompt engineering is mostly about avoiding various failure cases such as those described above. In this work, we use delimiters ("{" and "}") to help avoid these types of errors, giving scores of zero when there was no valid response with such delimiters. Other delimiters could be used (e.g., quotes, "(" and ")", "<" and ">", or newlines with a colon, as used by GPT-3). We chose curly braces as they were 1) likely to occur in the training data as delimiters in other contexts and 2) not frequently part of the input sentence itself. We also use a second-person prompt template for the dialog model, which yielded better results as it was more similar to the training data. Exploring these options more quantitatively would be an interesting direction for future work. Because the performance of prompting can vary depending on the exact language of the prompt (Reynolds and McDonell, 2021), we compare four variations of prompts for sentiment: "more positive/negative," "happier/sadder," "more optimistic/pessimistic," and "more cheerful/miserable." As shown in Table 4 in the Appendix, performance differed across the four prompts, but we found them comparable.
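
To make the delimiter-based parsing concrete, here is a minimal sketch of how the rewrite could be extracted from a raw model continuation; the function name and the zero-score convention for unparsable outputs are our assumptions for illustration, not released code from this work:

```python
import re
from typing import Optional

def extract_rewrite(model_output: str) -> Optional[str]:
    """Return the first {...}-delimited span of the model output,
    or None if no well-formed delimited span is found."""
    match = re.search(r"\{(.*?)\}", model_output, flags=re.DOTALL)
    return match.group(1).strip() if match else None

# Outputs without a parsable {...} span count as failures (score zero).
print(extract_rewrite("{This dress is lovely!} Here is some text ..."))
print(extract_rewrite("Sounds like you are a great writer!"))  # -> None
```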
<table><tr><td>Model / prompt wording</td><td>Acc</td><td>BLEU</td><td>PPL</td></tr><tr><td>LLM</td><td></td><td></td><td></td></tr><tr><td>“more positive/negative”</td><td>76.3</td><td>14.8</td><td>180</td></tr><tr><td>“happier/sadder”</td><td>62.6</td><td>15.5</td><td>173</td></tr><tr><td>“more optimistic/pessimistic”</td><td>69.7</td><td>14.1</td><td>143</td></tr><tr><td>“more cheerful/miserable”</td><td>74.5</td><td>15.7</td><td>186</td></tr><tr><td>LLM-Dialog</td><td></td><td></td><td></td></tr><tr><td>“more positive/negative”</td><td>90.5</td><td>10.4</td><td>79</td></tr><tr><td>“happier/sadder”</td><td>85.9</td><td>9.6</td><td>90</td></tr><tr><td>“more optimistic/pessimistic”</td><td>85.8</td><td>10.2</td><td>79</td></tr><tr><td>“more cheerful/miserable”</td><td>88.8</td><td>11.4</td><td>93</td></tr></table>
Table 4: Comparing variations of augmented zero-shot learning prompt wording for sentiment style transfer.

# B Low BLEU for LLM Outputs

As we saw in Table 2, the outputs of our model had low BLEU scores with respect to human-generated outputs, while simultaneously having high semantic similarity in human evaluations. Based on qualitative examination of outputs, we believe this is because model outputs, despite having high semantic similarity with the source sentence, often used different language from the human annotations. For instance, for transferring the sentiment of "ever since joes has changed hands it's just gotten worse and worse" to positive sentiment, our augmented zero-shot learning model outputted "the establishment has continued to provide excellent service, improving steadily since its change of ownership." This output has low BLEU with respect to the human reference, which is simply "ever since joes has changed hands it's just gotten better and better."

into paragraphs $\bullet$ to be a bit clearer $\bullet$ to be a little less angsty $\bullet$ to be a word for a song $\bullet$ to be about mining $\bullet$ to be about vegetables $\bullet$ to be better written $\bullet$ to be less descriptive $\bullet$ to be less diabolical $\bullet$ to be more absurd $\bullet$ to be more adventurous $\bullet$ to be more angry $\bullet$ to be more cheerful $\bullet$ to be more descriptive $\bullet$ to be more Dickensian $\bullet$ to be more emotional $\bullet$ to be more fancy $\bullet$ to be more flowery $\bullet$ to be more interesting $\bullet$ to be more joyful $\bullet$ to be more magical $\bullet$ to be more melodramatic $\bullet$ to be more philosophical $\bullet$ to be more revolutionary $\bullet$ to be more scary $\bullet$ to be more subtle $\bullet$ to be more surprising $\bullet$ to be more suspenseful $\bullet$ to be more technical $\bullet$ to be more violent $\bullet$ to be more whimsical $\bullet$ to be warmer $\bullet$ to fit better grammatically with the rest of the story $\bullet$ to make more sense $\bullet$ to use a more interesting word $\bullet$ with a few words

Table 5: Full results for requests in the form of "Rewrite this..." made by users to a large LM-powered text editor.
Though we do not see this as an inherent problem, increasing BLEU for the purposes of comparison can be done easily via candidate selection, since our model returns sixteen possible continuations. In applications where we prefer model outputs to have high lexical similarity to the source sentence, we can select the candidate of the sixteen with the highest BLEU score against the original source sentence. We find that this candidate selection step substantially improves the BLEU score with the ground-truth target sentences, as we show in Table 8.
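
A minimal sketch of this candidate selection step, assuming the sacrebleu package for sentence-level BLEU (the function name is ours; the paper does not specify an implementation):

```python
import sacrebleu

def select_candidate(source: str, candidates: list) -> str:
    """Of the candidates returned by the model (sixteen in this setup),
    pick the one with the highest sentence-level BLEU against the
    original source sentence."""
    return max(
        candidates,
        key=lambda c: sacrebleu.sentence_bleu(c, [source]).score,
    )

# Example usage with two hypothetical candidates:
best = select_candidate(
    "ever since joes has changed hands it's just gotten worse and worse",
    ["the establishment has improved steadily since its change of ownership",
     "ever since joes has changed hands it's just gotten better and better"],
)
print(best)  # the lexically closer candidate wins
```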

# C Further Related Work

Style transfer has gained increasing attention in the NLP landscape, with neural models trained to perform style transfer for styles including sentiment, formality, politeness, gender, and political slant (Prabhumoye et al., 2018; Madaan et al., 2020; Liu et al., 2021b). We briefly summarize the primary approaches to style transfer here, and refer the interested reader to (Jin et al., 2020) or (Hu et al., 2020) for a survey.

<table><tr><td>Input</td><td>Model suggestion</td></tr><tr><td>You honor me," said Bruno. As he spoke, he began to laugh. His laugh was merry sounding and melodic. He took a deep breath. When he spoke again, his voice was low and strong. "I take this great task on and with my brothers shall lead us on to victory! Rewrite it to be about mining.</td><td>You honor me, said Bruno. As he spoke, he began to laugh. His laugh was deep and gravely, as if from the bowels of the earth. He took a deep breath. When he spoke again, his voice was deep and dark. "I take this great task on and with my brothers shall delve into the mines!"</td></tr><tr><td>I reckon,' smiled Toe, 'you'd have to say the same for icicles, wouldn't you? Rewrite it to be more interesting.</td><td>'I reckon,' smiled Toe, 'you'd have to say the same for icicles, wouldn't you?' These words were barely out of his mouth before the others realized exactly what they meant.</td></tr><tr><td>with its southern perfume Rewrite it to be more descriptive</td><td>A mixture of sweet honeysuckles and jasmine with a hint of tobacco and warm mesquite smoke. It was his southern perfume, with a hint of a memory of simpler times.</td></tr><tr><td>But Sally remained unmoved-and lonely. Rewrite it to be more emotional</td><td>Her tears rolled and rolled down her cheek, mixing with the saltwater around her, until the water was a rich mixture of the two. She wanted her poem to be cheerful like it used to be, but couldn't find the right words.</td></tr></table>

Table 6: Examples of users' arbitrary style transfer requests for which the model suggestion was accepted.
Most text style transfer approaches fall into two categories. Early approaches tend to require parallel text data (Zhu et al., 2010; Rao and Tetreault, 2018), where every input in the source style has a corresponding output in the target style. Though this formulation elegantly fits the standard encoder-decoder paradigm, the availability of a parallel text corpus is a stringent requirement. Hence, recent text style transfer approaches have instead used non-parallel monostyle data (no one-to-one mapping between instances in the source and target styles). Such methods include latent representation manipulation (Liu et al., 2020), prototype-based text editing (Li et al., 2018), and pseudo-parallel corpus construction (Jin et al., 2019). However, even non-parallel monostyle data can be hard to collect for arbitrary styles. As such, surveys have called for more research on approaches that expand the scope of supported styles and reduce the training data requirements for style transfer systems (Jin et al., 2020; Hu et al., 2020).
Several new methods tackle the challenging problem of label-free style transfer, which does not require a full corpus of labeled data, but rather just a few exemplars that define a style. Xu et al. (2020) use variational autoencoders for unsupervised learning of controllable representations for text. Riley et al. (2021) extract a style vector from a set of target texts and use this vector to condition the decoder to perform style transfer to a target style. These approaches share our goal of expanding the scope of possible style transfers, but they differ in two main ways. First, they require a fully specialized model, whereas our method can be applied out of the box with a model like GPT-3. This can be either a strength or a weakness, depending on the availability of such a model. Second, they require exemplars to define a style rather than a plain-text description.

Figure 4: Human evaluation of fluency for style transfer for six atypical styles. Error bars show standard error of the mean.

# Augmented Zero-shot Prompt: LLM

Here is some text: {When the doctor asked Linda to take the medicine, he smiled and gave her a lollipop.}. Here is a rewrite of the text, which is more scary. {When the doctor told Linda to take the medicine, there had been a malicious gleam in her eye that Linda didn't like at all.} Here is some text: {they asked loudly, over the sound of the train.}. Here is a rewrite of the text, which is more intense. {they yelled aggressively, over the clanging of the train.} Here is some text: {When Mohammed left the theatre, it was already dark out}. Here is a rewrite of the text, which is more about the movie itself. {The movie was longer than Mohammed had expected, and despite the excellent ratings he was a bit disappointed when he left the theatre.} Here is some text: {next to the path}. Here is a rewrite of the text, which is about France. {next to la Siene} Here is some text: {The man stood outside the grocery store, ringing the bell.}. Here is a rewrite of the text, which is about clowns. {The man stood outside the circus, holding a bunch of balloons.} Here is some text: {the bell ringing}. Here is a rewrite of the text, which is more flowery. {the peales of the jangling bell} Here is some text: {against the tree}. Here is a rewrite of the text, which is include the word "snow". {against the snow-covered bark of the tree} Here is some text: {That is an ugly dress}. Here is a rewrite of the text, which is more positive."

# Augmented Zero-shot Prompt: LLM-dialog

> Here is some text: {When the doctor asked Linda to take the medicine, he smiled and gave her a lollipop.}. Rewrite it to be more scary.

> {When the doctor told Linda to take the medicine, there had been a malicious gleam in her eye that Linda didn't like at all.}

> Here is some text: {they asked loudly, over the sound of the train.}. Rewrite it to be more intense.

> {they yelled aggressively, over the clanging of the train.}

> Here is some text: {When Mohammed left the theatre, it was already dark out}. Rewrite it to be more about the movie itself.

> {The movie was longer than Mohammed had expected, and despite the excellent ratings he was a bit disappointed when he left the theatre.}

> Here is some text: {next to the path}. Rewrite it to be about France.

> {next to la Siene}

> Here is some text: {The man stood outside the grocery store, ringing the bell.}. Rewrite it to be about clowns.

> {The man stood outside the circus, holding a bunch of balloons.}

> Here is some text: {the bell ringing}. Rewrite it to be more flowery.

> {the peals of the jangling bell}

> Here is some text: {against the tree}. Rewrite it to be include the word "snow".

> {against the snow-covered bark of the tree}

> Here is some text: {That is an ugly dress}. Rewrite it to be more positive."
Table 7: In black, we show the exact augmented zero-shot prompts used in our experiments, for LLM and GPT-3 (top), and for LLM-Dialog (bottom). As shown, for LLM-Dialog, we replaced "Here is a rewrite of the text, which is" with "Rewrite it to be". Each line starting with ">" above was passed in as an individual dialog turn. The blue shows how an input text and goal style are concatenated to the few-shot prompt in order to produce final model output. Note that we can achieve high accuracy even though the prompt formulation resulted in some minor grammatical errors for some styles (e.g., "rewrite it to be include the word 'snow'"). Text versions of these prompts can be downloaded at https://bit.ly/3fLDuci.
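
For illustration, the following sketch shows how such a prompt could be assembled programmatically from the fixed exemplars above plus a new input and style. The exemplar list is abridged and the helper name is ours; this is a sketch of the template shown above, not the authors' released code:

```python
# Non-task-specific exemplars from the prompt above (abridged here).
EXEMPLARS = [
    ("they asked loudly, over the sound of the train.",
     "more intense",
     "they yelled aggressively, over the clanging of the train."),
    ("the bell ringing",
     "more flowery",
     "the peales of the jangling bell"),
]

def build_prompt(input_text: str, style: str) -> str:
    """Concatenate the fixed exemplars with the new input and target
    style, using the 'Here is some text: {...}' template; the model's
    continuation is then parsed for a {...}-delimited rewrite."""
    parts = [
        f"Here is some text: {{{src}}}. Here is a rewrite of the text, "
        f"which is {sty}. {{{tgt}}}"
        for src, sty, tgt in EXEMPLARS
    ]
    parts.append(
        f"Here is some text: {{{input_text}}}. Here is a rewrite of the "
        f"text, which is {style}."
    )
    return " ".join(parts)

print(build_prompt("That is an ugly dress", "more positive"))
```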
<table><tr><td></td><td>Acc</td><td>BLEU</td><td>PPL</td></tr><tr><td colspan="4">LLM-128B</td></tr><tr><td>Zero-shot</td><td>69.7</td><td>28.6</td><td>397</td></tr><tr><td>+ cand. select.</td><td>31.4</td><td>61.5</td><td>354</td></tr><tr><td>Five-shot</td><td>83.2</td><td>19.8</td><td>240</td></tr><tr><td>+ cand. select.</td><td>61.5</td><td>55.6</td><td>306</td></tr><tr><td>Augmented zero-shot</td><td>79.6</td><td>16.1</td><td>173</td></tr><tr><td>+ cand. select.</td><td>65.0</td><td>49.3</td><td>292</td></tr><tr><td colspan="4">LLM-128B-dialog</td></tr><tr><td>Zero-shot</td><td>59.1</td><td>17.6</td><td>138</td></tr><tr><td>+ cand. select.</td><td>46.8</td><td>24.2</td><td>166</td></tr><tr><td>Five-shot</td><td>94.3</td><td>13.6</td><td>126</td></tr><tr><td>+ cand. select.</td><td>81.3</td><td>47.6</td><td>345</td></tr><tr><td>Augmented zero-shot</td><td>90.6</td><td>10.4</td><td>79</td></tr><tr><td>+ cand. select.</td><td>73.7</td><td>40.6</td><td>184</td></tr></table>

Table 8: Sentiment style transfer results with candidate selection (cand. select.). Candidate selection means that, of the sixteen examples returned by our model, we choose the one with the highest BLEU with the source sentence.
Instructions: In this task, your goal is to identify whether a desired transformation has been successfully applied to a sentence, without changing the overall meaning of the sentence. Each question contains a sentence marked "original sentence," a desired transformation, and an output sentence where the transformation has been applied.
Each of these questions relates to the same original text and desired transform, but each has a different output transformed sentence. Please rate each transformed sentence along the following three axes:
1) Transferred Style Strength: Does the transformed text have the applied style/transform compared to the original text? For example, if the original text is "I went to the store" and the style is "more angry":
<table><tr><td>example</td><td>score</td><td>reasoning</td></tr><tr><td>"The store is where I went"</td><td>0</td><td>The transformed text is no more angry than the original text.</td></tr><tr><td>"I went to the stupid store"</td><td>50</td><td>The transformed text somewhat relates to the style.</td></tr><tr><td>"When I went to the store, I couldn't believe how rude the storekeeper was to me!"</td><td>100</td><td>The text is clearly more angry.</td></tr></table>
2) Meaning: Does the transformed sentence still have the same overall meaning as the original? It is OK if extra information is added, as long as it doesn't change the underlying people, events, and objects described in the sentence. You should also not penalize for meaning transformations which are necessary for the specified transformation. For example, if the original text is "I love this store" and the style is "more angry":
<table><tr><td>example</td><td>score</td><td>reasoning</td></tr><tr><td>"it is raining today"</td><td>0</td><td>The transformed text is about something totally different. It would be hard to tell that the texts are related at all.</td></tr><tr><td>"they were out of chicken at the store"</td><td>50</td><td>The transformed text is mostly related to the original; some modifications of the meaning have been made, but they are not egregious.</td></tr><tr><td>"I adore the store." or "The store was really horrible; it took forever to do my shopping."</td><td>100</td><td>The text talks about the same concepts as the original, just with different or more words.</td></tr></table>
3) Fluency: Is this sentence fluent English, and does it make sense?
<table><tr><td>example</td><td>score</td><td>reasoning</td></tr><tr><td>"who said that? I thought we were going to go together!"</td><td>Yes</td><td>This text makes sense</td></tr><tr><td>"who, she said it up to me and to me together!"</td><td>No</td><td>The text is incoherent</td></tr></table>
Figure 5: The rating UI used for human evaluation. The user may be shown a number of blue squares at once with the same original text and different outputs.
Figure 6: Screenshot of the AI-assisted editor with the "Rewrite as" feature. The editor shows the input text, a freeform rewrite request ("rewrite the text: to be more melodramatic"), and controls to cycle through the returned choices (1/14).

Input text: "I first met Dean not long after my wife and I split up. I had just gotten over a serious illness that I won't bother to talk about, except that it had something to do with the miserably weary split-up and my feeling that everything was dead."

Example "more melodramatic" rewrites shown in the editor:

- I first laid eyes on Dean not long after my wife and I had split - a bitter parting - I had recovered from a most serious illness, I was a bitter and weary man, everything was cold and dead within me. That is, until Dean came along.
- and after i got over my illness, i felt dead on the inside, like something from a Greek Tragedy. So when Dean came by and asked if I was in, i felt the life come back into my heart, as if i'd been waiting for him and we'd spent an entire lifetime with our eyes on each other.
- I first met Dean not long after my wife left me. I was just getting over a serious illness that had something to do with the miserably weary and painful split-up, and my feeling that everything was dead inside. And the sadness. So much sadness.
- It was a dark and stormy night when I first met Dean.
- The long, lonely walk home along the dark road is never far from my mind. As the black dog howls in the distance, I remember the bleak, dark mood that pervaded everything during that terrible time. I remember, too, the first time I met Dean...
<table><tr><td>Style</td><td>Inputs</td><td>Aug. Zero</td><td>Zero</td><td>Human</td><td>Paraphrase</td></tr><tr><td>more comic</td><td>75</td><td>116</td><td>63</td><td>97</td><td>87</td></tr><tr><td>more melodramatic</td><td>75</td><td>124</td><td>88</td><td>116</td><td>87</td></tr><tr><td>include the word “park”</td><td>75</td><td>124</td><td>72</td><td>94</td><td>87</td></tr><tr><td>include the word “balloon”</td><td>75</td><td>135</td><td>86</td><td>98</td><td>87</td></tr><tr><td>include a metaphor</td><td>75</td><td>110</td><td>74</td><td>110</td><td>87</td></tr><tr><td>more descriptive</td><td>75</td><td>190</td><td>105</td><td>124</td><td>87</td></tr><tr><td>Overall</td><td>75</td><td>133</td><td>81</td><td>107</td><td>87</td></tr></table>
Table 9: The mean length in characters of the inputs and outputs for our six atypical styles.

arecipeforarbitrarytextstyletransferwithlargelanguagemodels/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:718f777b41bf5e9a427985aca2fc26eb5c2d67cd9f74b96de49448ba5a88e8f8
size 731864

arecipeforarbitrarytextstyletransferwithlargelanguagemodels/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a286495769de40c81dbb6f9847e734ff921c1ed7d58a4d1a584fd7a86941356
size 346002

bert2berttowardsreusablepretrainedlanguagemodels/636dad0a-afca-4eaa-b6f9-f18130783a34_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5c80a15a9627b04b201339bdddaf418bd01be13677427d075cc79554b145d4b3
size 107721

bert2berttowardsreusablepretrainedlanguagemodels/636dad0a-afca-4eaa-b6f9-f18130783a34_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4cfc2f4dbb7cbdfe04de4bc74bb8c4fc6fef0bb386e570a2d9898bd028192816
size 127637

bert2berttowardsreusablepretrainedlanguagemodels/636dad0a-afca-4eaa-b6f9-f18130783a34_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8f85da47ddd9c41170558f6d7d5af31c840fff84121297a892e3116c04ea3204
size 1688253

bert2berttowardsreusablepretrainedlanguagemodels/full.md
ADDED
@@ -0,0 +1,516 @@
# bert2BERT: Towards Reusable Pretrained Language Models

Cheng Chen$^{1\dagger}$, Yichun Yin$^{2}$, Lifeng Shang$^{2}$, Xin Jiang$^{2}$, Yujia Qin$^{1}$, Fengyu Wang$^{1}$, Zhi Wang$^{3,4\dagger}$, Xiao Chen$^{2}$, Zhiyuan Liu$^{1}$, Qun Liu$^{2}$

$^{1}$Department of Computer Science and Technology, Tsinghua University; $^{2}$Huawei Noah's Ark Lab; $^{3}$Tsinghua Shenzhen International Graduate School; $^{4}$Peng Cheng Laboratory

{c-chen19, qyj20, wangfy20}@mails.tsinghua.edu.cn

{yinyichun, shang.lifeng, jiang.xin, chen.xiao2, qun.liu}@huawei.com

wangzhi@sz.tsinghua.edu.cn, liuzy@tsinghua.edu.cn
# Abstract
In recent years, researchers have tended to pre-train ever-larger language models to explore the upper limit of deep models. However, large language model pre-training costs intensive computational resources, and most of the models are trained from scratch without reusing the existing pre-trained models, which is wasteful. In this paper, we propose bert2BERT<sup>1</sup>, which can effectively transfer the knowledge of an existing smaller pre-trained model to a large model through parameter initialization and significantly improve the pre-training efficiency of the large model. Specifically, we extend the previous function-preserving method (Chen et al., 2016), proposed in computer vision, to Transformer-based language models, and further improve it by proposing a novel method, advanced knowledge initialization, for the large model's initialization. In addition, a two-stage learning method is proposed to further accelerate the pre-training. We conduct extensive experiments on representative PLMs (e.g., BERT and GPT) and demonstrate that (1) our method can save a significant amount of training cost compared with baselines including learning from scratch, StackBERT (Gong et al., 2019) and MSLT (Yang et al., 2020); (2) our method is generic and applicable to different types of pre-trained models. In particular, bert2BERT saves about $45\%$ and $47\%$ of the computational cost of pre-training $\mathrm{BERT}_{\mathrm{BASE}}$ and $\mathrm{GPT}_{\mathrm{BASE}}$ by reusing models of almost half their sizes.

# 1 Introduction

Pre-trained language models (PLMs), such as BERT (Devlin et al., 2019), GPT (Radford et al., 2018, 2019; Brown et al., 2020), ELECTRA (Clark et al., 2020), XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019), have achieved great
success in natural language processing (NLP). However, the pre-training process of large PLMs can be extremely computationally expensive and produces huge carbon footprints. For example, GPT-3 uses $3.1\mathrm{E}+6$ GPU hours for training, at an estimated cost of $\$4.6$ million², consuming a lot of computing resources. Therefore, how to reduce the training cost of PLMs is of great importance to Green AI (Schwartz et al., 2020).

Figure 1: Loss curves of bert2BERT and baselines. StackBERT (Gong et al., 2019) is based on the progressive training setting. More details are shown in Table 2.
Recently, there is a trend of training extremely large models to explore the upper limits of PLMs. For example, large pre-trained models, including GPT-3 (Brown et al., 2020) (175B), PanGu-α (Zeng et al., 2021) (200B) and Switch Transformers (Fedus et al., 2021) (1571B), have proved promising in language understanding and generation. However, these models are all pre-trained from scratch independently, without utilizing the knowledge of smaller ones that have already been trained. On the other hand, our empirical studies show that pre-trained models of different scales can share similar knowledge; for example, in Figure 2, the attention patterns of the two PLMs with different sizes are similar.

Figure 2: The comparisons of attention patterns between small and large PLMs. The upper ones are the attention patterns of the BERT<sub>BASE</sub> model, whose architecture is {L=12, D=768}, and the lower ones are the attention patterns of one small BERT model, whose architecture is {L=12, D=512}. We find that there are a large number of similar attention patterns in the same layer of the two models, indicating the possibility of reusing the parameters of trained small PLMs to speed up the pre-training of large PLMs. The attention maps of PLMs with different layers are also similar, which is visualized in previous work (Gong et al., 2019; Yang et al., 2020).

To save the training cost of large models, we propose the bert2BERT method, which can efficiently transfer the learned knowledge of the smaller model to the large model. bert2BERT consists of two components: (1) For parameter initialization, we first extend function-preserving training (Chen et al., 2016) to PLMs by duplicating and stacking the parameters of the existing smaller PLM, which we call function-preserving initialization (FPI). FPI ensures that the initialized large model has almost the same behavior as the small model, so that the large model has a good starting point for later optimization. We also find that duplicating the weights of the upper layer to the current layer can further accelerate the convergence of the large model, which we call advanced knowledge initialization (AKI). Although AKI somewhat violates the principle of function preserving, we find that empirically it still provides a good starting point, as shown in Table 1, which leads to a faster convergence rate and higher training efficiency. (2) A two-stage training strategy is then applied to the large model to accelerate the training process.

To demonstrate the superiority of our method, we conduct extensive experiments on two representative PLMs, BERT and GPT, with different source model sizes. The results show that: (1) our method can save a significant amount of computation in pre-training compared to the traditional way of learning from scratch and progressive stacking methods such as StackBERT (Gong et al., 2019) and MSLT (Yang et al., 2020); (2) our method is model-agnostic and can be applied to a wide range of Transformer-based PLMs. One typical example is that, when using a small pre-trained model with half the size of $\mathrm{BERT}_{\mathrm{BASE}}$ for initialization, bert2BERT saves $45\%$ of the computation cost of the original $\mathrm{BERT}_{\mathrm{BASE}}$ pre-training.

In general, our contributions are summarized as follows: (1) We explore a new direction for efficient pre-training by reusing the trained parameters of small models to initialize the large model; (2) We successfully extend the function-preserving method (Chen et al., 2016) to BERT and further propose advanced knowledge initialization, which can effectively transfer the knowledge of the trained small model to the big model and improve the pre-training efficiency; (3) The proposed method outperforms other training methods and achieves a $45\%$ computation reduction on $\mathrm{BERT}_{\mathrm{BASE}}$; (4) Our method is generic, effective for both BERT and GPT models, and has great potential to become an energy-efficient solution for pre-training super large-scale language models.

# 2 Related Work

Efficient Pre-training in NLP. The efficiency of pre-training has been explored by previous work. Some works (Gong et al., 2019; Yang et al., 2020; Gu et al., 2021) propose progressive learning to accelerate the pre-training, motivated by the fact that different layers share some similar knowledge (e.g., attention patterns). They start by pre-training a small model with fewer Transformer layers, and then iteratively expand the model by stacking the already-trained layers on the top. Another line of work proposes to "back distill" the knowledge of small models into large models, which is termed knowledge inheritance (Qin et al., 2021). Some works focus on data efficiency (Wu et al., 2021) and take notes for rare words during the pre-training process to help the model understand them when they next occur. ELECTRA (Clark et al., 2020) proposes a task of replaced token detection to predict whether each token in the input was replaced or not, which improves the pre-training efficiency. Our method is orthogonal to this kind of work, and the combination of ELECTRA and bert2BERT could achieve better efficiency. In addition, there are several other orthogonal techniques for efficient pre-training: mixed-precision training (Shoeybi et al., 2019), large batch optimization (You et al., 2020), model architecture innovation (Lan et al., 2020), the layer dropping technique (Zhang and He, 2020), etc.
Reusable Neural Network. Reusable neural network, a topic related to transfer learning (Pan and Yang, 2010), is introduced to accelerate the model training in computer vision. One classical work is Net2Net (Chen et al., 2016), which first proposes the concept of the function-preserving transformation to make neural networks reusable. However, Net2Net randomly selects the neurons to be split. To handle this problem, some works (Wu et al., 2019, 2020b; Wang et al., 2019b; Wu et al., 2020a) leverage a functional steepest descent idea to decide the optimal subset of neurons to be split. The pruning technique (Han et al., 2015) is also introduced for reusable neural networks (Feng and Panda, 2020). In this paper, we study the reusable pre-trained language model and propose a new method, bert2BERT, to accelerate the pre-training of BERT and GPT.

# 3 Preliminary

BERT consists of one embedding layer and multiple Transformer (Vaswani et al., 2017) layers.

# 3.1 Embedding Layer

The embedding layer first maps the tokens in a sentence into vectors with an embedding matrix $\mathbf{W}^E$ . Then one normalization layer is employed to produce the initial hidden states $\mathbf{H}_0$ .

# 3.2 Transformer Layer

The hidden states are iteratively processed by multiple Transformer layers as follows:

$$
\boldsymbol{H}_{l} = \operatorname{Transformer}_{l}\left(\boldsymbol{H}_{l-1}\right), \quad l \in [1, L] \tag{1}
$$
where $L$ denotes the number of Transformer layers, each including a multi-head attention (MHA) and a feed-forward network (FFN).
MHA. It is composed of multiple parallel self-attention heads. The hidden states of the previous layer are fed into each head and then the outputs of all heads are summed to obtain the final output as follows:

$$
\boldsymbol{Q}_{i}, \boldsymbol{K}_{i}, \boldsymbol{V}_{i} = \boldsymbol{H}_{l-1}\boldsymbol{W}_{l,i}^{Q}, \boldsymbol{H}_{l-1}\boldsymbol{W}_{l,i}^{K}, \boldsymbol{H}_{l-1}\boldsymbol{W}_{l,i}^{V},
$$

$$
\boldsymbol{H}_{l,i}^{\mathrm{HEAD}} = \mathrm{softmax}\left(\frac{\boldsymbol{Q}_{i}\boldsymbol{K}_{i}^{T}}{\sqrt{d_{k}}}\right)\boldsymbol{V}_{i}\boldsymbol{W}_{l,i}^{O},
$$

$$
\operatorname{MHA}\left(\boldsymbol{H}_{l-1}\right) = \sum_{i=1}^{a} \boldsymbol{H}_{l,i}^{\mathrm{HEAD}},
$$

$$
\boldsymbol{H}_{l}^{\mathrm{MHA}} = \operatorname{LayerNorm}\left(\boldsymbol{H}_{l-1} + \operatorname{MHA}\left(\boldsymbol{H}_{l-1}\right)\right). \tag{2}
$$
$\pmb{H}_{l-1}$ is linearly projected to queries $(Q_{i})$ , keys $(K_{i})$ and values $(V_{i})$ using $\pmb{W}_{l,i}^{Q}, \pmb{W}_{l,i}^{K}, \pmb{W}_{l,i}^{V}$ respectively. $\pmb{H}_{l,i}^{\mathrm{HEAD}}$ indicates the context-aware vector which is obtained by the scaled dot-product of queries and keys in the $i$ -th attention head. $a$ represents the number of self-attention heads. $d_{k}$ is the head dimension acting as the scaling factor.
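
As a worked illustration of the per-head formulation in Eq. 2, where each head has its own output matrix $\boldsymbol{W}_{l,i}^{O}$ and the head outputs are summed (equivalent to the usual concatenate-then-project form), here is a minimal NumPy sketch. Shapes and names are our assumptions for illustration; the residual connection and LayerNorm of Eq. 2 are omitted:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mha(H, Wq, Wk, Wv, Wo):
    """Sum of per-head outputs, each projected by its own W^O_{l,i}.
    H: (n, D); Wq/Wk/Wv: (a, D, d_k); Wo: (a, d_k, D)."""
    a, _, d_k = Wq.shape
    out = np.zeros_like(H)
    for i in range(a):
        Q, K, V = H @ Wq[i], H @ Wk[i], H @ Wv[i]   # (n, d_k) each
        attn = softmax(Q @ K.T / np.sqrt(d_k))      # (n, n) attention map
        out += attn @ V @ Wo[i]                     # head output, (n, D)
    return out
```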
FFN. It consists of two linear layers and one GeLU activation function (Hendrycks and Gimpel, 2016), that is:

$$
\boldsymbol{H}_{l}^{\mathrm{FFN}} = \mathrm{GeLU}(\boldsymbol{H}_{l}^{\mathrm{MHA}}\boldsymbol{W}_{l}^{1} + \boldsymbol{b}_{l}^{1})\boldsymbol{W}_{l}^{2} + \boldsymbol{b}_{l}^{2},
$$

$$
\boldsymbol{H}_{l} = \operatorname{LayerNorm}\left(\boldsymbol{H}_{l}^{\mathrm{MHA}} + \boldsymbol{H}_{l}^{\mathrm{FFN}}\right). \tag{3}
$$
Layer Normalization. Both the modules of MHA and FFN have one layer normalization (Ba et al., 2016) that stabilizes the dynamics of the hidden state in the Transformer. Formally, it is written as:

$$
\operatorname{LayerNorm}(\boldsymbol{H}) = \left(\frac{\boldsymbol{H} - \mu_{\boldsymbol{H}}}{\sigma_{\boldsymbol{H}}}\right) \odot \boldsymbol{W}^{LN} + \boldsymbol{b}^{LN}, \tag{4}
$$
where $\odot$ means the element-wise multiplication. The statistics $\mu_{\boldsymbol{H}}$ and $\sigma_{\boldsymbol{H}}$ are the mean and standard deviation of the hidden states $\boldsymbol{H}$, respectively.
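
For reference, Eq. 4 corresponds to the following minimal NumPy sketch (a small eps is added in the denominator for numerical stability, which Eq. 4 omits; the names are ours):

```python
import numpy as np

def layer_norm(H, W_ln, b_ln, eps=1e-12):
    """Eq. 4: normalize each hidden vector to zero mean and unit
    variance over the hidden dimension, then apply the learned
    element-wise affine transform (W_ln, b_ln of shape (D,))."""
    mu = H.mean(axis=-1, keepdims=True)
    sigma = H.std(axis=-1, keepdims=True)
    return (H - mu) / (sigma + eps) * W_ln + b_ln
```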

# 4 Methodology

# 4.1 Problem Statement

We aim to accelerate the pre-training of a target model $\mathcal{T}(L^t, D^t)$ by transferring the knowledge of an existing pre-trained source model $\mathcal{S}(L^{s}, D^{s})$, where $L^{s|t}$ denotes the number of Transformer layers and $D^{s|t}$ denotes the model width (i.e., hidden size), satisfying $L^{s}\leq L^{t}$ and $D^{s}\leq D^{t}$. Formally, our problem is two-fold: (1) how to perform an effective parameter initialization for $\mathcal{T}$ by reusing the trained parameters of $\mathcal{S}$, and (2) how to efficiently train the initialized $\mathcal{T}$, so that $\mathcal{T}$ can have a faster convergence rate in pre-training.

# 4.2 Overview

Targeting the above problems, bert2BERT first initializes the target model $\mathcal{T}$ with the parameters of the existing model $S$ by the width-wise expansion $(D^{s} \rightarrow D^{t})$ and depth-wise expansion $(L^{s} \rightarrow L^{t})$ . Through this expansion, the knowledge contained in the parameters of the source model is directly transferred to the target model. Then we further pre-train the initialized target model with a two-stage pre-training method. The overall workflow is illustrated in Section 4.5.

Essentially, the width-wise expansion can be decomposed into expansions of parameter matrices (or vectors). As illustrated in Figure 3, the matrix expansion enlarges $\boldsymbol{W} \in \mathbb{R}^{d_{\mathrm{in}}^{w} \times d_{\mathrm{out}}^{w}}$ of $\mathcal{S}$ to $\boldsymbol{U} \in \mathbb{R}^{d_{\mathrm{in}}^{u} \times d_{\mathrm{out}}^{u}}$ of $\mathcal{T}$ by two kinds of operations: in-dimension and out-dimension expansion.
In the following sections, we first introduce two strategies of width-wise expansion: function-preserving and advanced knowledge initialization. Then, we introduce the depth-wise expansion and detail the two-stage pre-training process.

# 4.3 Width-wise Expansion

For clarity, we introduce two index mapping functions, $g_{\mathrm{in}}$ and $g_{\mathrm{out}}$: $g_{\mathrm{in}}(i)$ means that the $i$-th in-dimension of $\boldsymbol{U}$ reuses the $g_{\mathrm{in}}(i)$-th in-dimension parameters of $\boldsymbol{W}$, and $g_{\mathrm{out}}(j)$ means that the $j$-th out-dimension of $\boldsymbol{U}$ reuses the $g_{\mathrm{out}}(j)$-th out-dimension parameters of $\boldsymbol{W}$. Both of our methods are defined via these two mapping functions. $\boldsymbol{W}_{(i,j)}$ denotes a parameter element, where $i$ and $j$ refer to the $i$-th in-dimension index and the $j$-th out-dimension index, respectively. As shown in Figure 3, the $i$-th in-dimension parameters of $\boldsymbol{W}$ are the parameters of the $i$-th input neuron of $\boldsymbol{W}$, or the $i$-th column of $\boldsymbol{W}$.

# 4.3.1 Function Preserving Initialization

Function preserving initialization (FPI) (Chen et al., 2016) aims to make the initialized target model have the same function as the source model, which means that given the same input, the initialized target model has the same output as the source model. In this paper, we extend FPI on a different architecture, Transformer-based pre-trained language model. We give an example in Figure 3 to illustrate
|
| 128 |
+
|
| 129 |
+

|
| 130 |
+
Figure 3: Overview of the function preserving initialization (FPI). Given the same input $\{x_1, x_2\}$, FPI ensures that the initialized target model has the same output $\{y_1, y_2\}$ as the source model. The first and second steps expand the in-dimension and out-dimension of the parameter matrix according to the mapping functions $g_{in}$ and $g_{out}$ respectively. After we expand the matrix $\pmb{W}$ into $\pmb{U}$, we apply the in-dimension expansion to the upper parameter matrix again to keep the output $\{y_1, y_2\}$ the same as the original one. From the view of neurons, FPI copies the corresponding input and output neurons to expand the neural network.
|
| 131 |
+
|
| 132 |
+
FPI. Formally, the mapping functions are defined as follows:
|
| 133 |
+
|
| 134 |
+
$$
|
| 135 |
+
g_{\mathrm{in}}(i) = \left\{ \begin{array}{ll} i & i \in [1, d_{\mathrm{in}}^{w}] \\ f(\{1, 2, \dots, d_{\mathrm{in}}^{w}\}) & i \in \left(d_{\mathrm{in}}^{w}, d_{\mathrm{in}}^{u}\right], \end{array} \right. \tag{5}
|
| 136 |
+
$$
|
| 137 |
+
|
| 138 |
+
$$
|
| 139 |
+
g_{\mathrm{out}}(j) = \left\{ \begin{array}{ll} j & j \in [1, d_{\mathrm{out}}^{w}] \\ f(\{1, 2, \dots, d_{\mathrm{out}}^{w}\}) & j \in \left(d_{\mathrm{out}}^{w}, d_{\mathrm{out}}^{u}\right], \end{array} \right. \tag{6}
|
| 140 |
+
$$
|
| 141 |
+
|
| 142 |
+
where $f(\cdot)$ is uniform sampling. We denote the weight expansion as $\pmb{U} = \mathrm{EXPN}(\pmb{W};g_{\mathrm{in}},g_{\mathrm{out}})$ , which includes in-dimension expansion (Eq. 7) and out-dimension expansion (Eq. 8):
|
| 143 |
+
|
| 144 |
+
$$
|
| 145 |
+
C_{g_{\mathrm{in}}(i)} = \sum_{i'=1}^{d_{\mathrm{in}}^{u}} \mathbb{I}\left(g_{\mathrm{in}}(i') = g_{\mathrm{in}}(i)\right) \tag{7}
|
| 146 |
+
$$
|
| 147 |
+
|
| 148 |
+
$$
|
| 149 |
+
\widetilde{\boldsymbol{U}}_{(i,*)} = \frac{1}{C_{g_{\mathrm{in}}(i)}} \boldsymbol{W}_{(g_{\mathrm{in}}(i),*)},
|
| 150 |
+
$$
|
| 151 |
+
|
| 152 |
+
$$
|
| 153 |
+
\boldsymbol{U}_{(*,j)} = \widetilde{\boldsymbol{U}}_{(*,g_{\mathrm{out}}(j))}, \tag{8}
|
| 154 |
+
$$
|
| 155 |
+
|
| 156 |
+
where $\mathbb{I}(\cdot)$ is an indicator function and $C_{g_{\mathrm{in}}(i)}$ is the count of $g_{\mathrm{in}}(i)$ among the values of $g_{\mathrm{in}}(\cdot)$, used to re-scale the original parameters and thereby retain the function-preserving property.
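To make the expansion concrete, the following NumPy sketch implements Eqs. 5-8 for a single weight matrix (an illustrative sketch only; the helper names `make_mapping` and `expn` are ours, not from the paper). The final assertion checks the function-preserving property of one linear layer: feeding the duplicated input neurons through the expanded matrix reproduces (duplicates of) the original outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mapping(d_src, d_tgt):
    """Mapping function of Eqs. 5-6: identity on the first d_src indices,
    uniform sampling f({1, ..., d_src}) for the newly added indices."""
    g = np.arange(d_tgt)
    g[d_src:] = rng.integers(0, d_src, size=d_tgt - d_src)
    return g

def expn(W, g_in, g_out):
    """Weight expansion U = EXPN(W; g_in, g_out) of Eqs. 7-8."""
    counts = np.bincount(g_in, minlength=W.shape[0])  # C_{g_in(i)} of Eq. 7
    U_tilde = W[g_in, :] / counts[g_in][:, None]      # in-dimension expansion
    return U_tilde[:, g_out]                          # out-dimension expansion (Eq. 8)

# Expand a 512x512 source weight to 768x768 (D^s = 512 -> D^t = 768).
W = rng.normal(size=(512, 512))
g_in, g_out = make_mapping(512, 768), make_mapping(512, 768)
U = expn(W, g_in, g_out)

# Function-preserving check: the expanded input x[g_in] passed through U
# yields the original output x @ W, duplicated according to g_out.
x = rng.normal(size=512)
assert np.allclose(x[g_in] @ U, (x @ W)[g_out])
```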
|
| 157 |
+
|
| 158 |
+
Expansion for All Modules. We apply FPI for all modules of BERT via matrix expansion $\mathrm{EXPN}(\cdot)$ . Specifically, for the embedding matrix $W^{E}$ , we only conduct the out-dimension expansion:
|
| 159 |
+
|
| 160 |
+
$$
|
| 161 |
+
\boldsymbol{U}^{E}_{(*,j)} = \boldsymbol{W}^{E}_{(*,g^{e}_{\mathrm{out}}(j))}. \tag{9}
|
| 162 |
+
$$
|
| 163 |
+
|
| 164 |
+
The MHA module can be decomposed into multiple parallel self-attention heads, so we conduct a head-wise expansion for this module, which means
|
| 165 |
+
|
| 166 |
+
increasing the number of attention heads. The head-wise expansion is formulated as:
|
| 167 |
+
|
| 168 |
+
$$
|
| 169 |
+
\boldsymbol{U}^{Q|K|V|O} = \mathrm{EXPN}\left(\boldsymbol{W}^{Q|K|V|O}; g^{q|k|v|o}_{\mathrm{in}}, g^{q|k|v|o}_{\mathrm{out}}\right). \tag{10}
|
| 170 |
+
$$
|
| 171 |
+
|
| 172 |
+
Specifically, the head-wise expansion means that we reuse the head-group parameters to construct the new matrices. The $i$-th head group in the $l$-th layer contains $\mathbf{W}_{l,i}^{Q}|\mathbf{W}_{l,i}^{K}|\mathbf{W}_{l,i}^{V}|\mathbf{W}_{l,i}^{O}$ in Eq. 2, and the out-dimension expansion for $\mathbf{W}_{l,i}^{Q}|\mathbf{W}_{l,i}^{K}|\mathbf{W}_{l,i}^{V}$ is:
|
| 173 |
+
|
| 174 |
+
$$
|
| 175 |
+
g^{q|k|v}_{\mathrm{out}}(j) = \left\{ \begin{array}{ll} j & j \in [1, a^{s}] \\ f(\{1, 2, \dots, a^{s}\}) & j \in \left(a^{s}, a^{t}\right], \end{array} \right. \tag{11}
|
| 176 |
+
$$
|
| 177 |
+
|
| 178 |
+
where $j$ is the head index and $a^{s|t}$ denote the numbers of attention heads of the source and target models, respectively. The module has three constraints: $\{g_{\mathrm{out}}^e = g_{\mathrm{in}}^{q|k|v}; g_{\mathrm{out}}^{q|k|v} = g_{\mathrm{in}}^o; g_{\mathrm{in}}^{q|k|v} = g_{\mathrm{out}}^o\}$, with the first two constraints ensuring hidden-dimension consistency (Wen et al., 2018; Chen et al., 2021) and the third enforcing the residual connection (Eq. 2).
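Since new heads copy whole head groups, the head-level mapping of Eq. 11 has to be lifted to column indices before it can be plugged into the matrix expansion above. A small sketch under the same NumPy setup as before (the helper name `head_mapping` is ours):

```python
def head_mapping(a_s, a_t, head_dim):
    """Eq. 11 lifted from head indices to column indices of W^{Q|K|V}."""
    g_head = np.arange(a_t)                              # identity for j <= a^s
    g_head[a_s:] = rng.integers(0, a_s, size=a_t - a_s)  # f({1, ..., a^s})
    # head h owns the contiguous column block [h*head_dim, (h+1)*head_dim)
    return np.concatenate(
        [np.arange(h * head_dim, (h + 1) * head_dim) for h in g_head])

g_out_qkv = head_mapping(a_s=8, a_t=12, head_dim=64)     # 512 -> 768 columns
```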
|
| 179 |
+
|
| 180 |
+
For the FFN module, we perform the expansion on the parameter matrices $W^{1|2}$ (Eq. 3) as follows:
|
| 181 |
+
|
| 182 |
+
$$
|
| 183 |
+
\boldsymbol{U}^{1|2} = \mathrm{EXPN}\left(\boldsymbol{W}^{1|2}; g^{1|2}_{\mathrm{in}}, g^{1|2}_{\mathrm{out}}\right). \tag{12}
|
| 184 |
+
$$
|
| 185 |
+
|
| 186 |
+
Similar to the MHA module, the mapping functions of FFN also have three constraints: $\{g_{\mathrm{out}}^o = g_{\mathrm{in}}^1;$ $g_{\mathrm{out}}^{1} = g_{\mathrm{in}}^{2}; g_{\mathrm{in}}^{1} = g_{\mathrm{out}}^{2}\}$.
|
| 187 |
+
|
| 188 |
+
For layer normalization, we take that of the FFN module as an example; its expansion is formulated as:
|
| 189 |
+
|
| 190 |
+
$$
|
| 191 |
+
\boldsymbol{U}^{LN}_{j} = \boldsymbol{W}^{LN}_{g^{2}_{\mathrm{out}}(j)}. \tag{13}
|
| 192 |
+
$$
|
| 193 |
+
|
| 194 |
+
Note that in layer normalization (Eq. 4), the mean $\mu$ and variance $\sigma$ are calculated from the hidden representations $H$. Thus, the expansion of this parameter inevitably induces a gap and prevents the target model from strictly following the function preserving principle. However, we empirically find that the gap is so small that it hardly affects the initialization and convergence of the target model, so we ignore this discrepancy.
|
| 195 |
+
|
| 196 |
+
We have validated the effectiveness of the adapted FPI in different settings in Table 1. The results show that the initialized model $\mathcal{T}$ achieves almost the same loss as $S$ , demonstrating that FPI successfully retains the knowledge of the small model when performing parameter expansion.
|
| 197 |
+
|
| 198 |
+
# 4.3.2 Advanced Knowledge Initialization
|
| 199 |
+
|
| 200 |
+
To further improve the convergence rate of pre-training the target model, we propose the advanced knowledge initialization (AKI), which expands new
|
| 201 |
+
|
| 202 |
+
<table><tr><td>Method</td><td>S(12,384)</td><td>S(12,512)</td></tr><tr><td>Original</td><td>1.89</td><td>1.67</td></tr><tr><td>Rand</td><td>10.40</td><td>10.42</td></tr><tr><td>DirectCopy</td><td>9.05</td><td>6.45</td></tr><tr><td>FPI</td><td>1.89</td><td>1.70</td></tr><tr><td>AKI</td><td>2.08</td><td>1.96</td></tr></table>
|
| 203 |
+
|
| 204 |
+
Table 1: The comparison of MLM losses between FPI and baselines. "Original" refers to the MLM losses of the pre-trained source models $\mathcal{S}$. "Rand" refers to the MLM losses of randomly initialized target models. "DirectCopy" refers to a naive method that directly copies the source model into the target model and randomly initializes the unfilled part. "FPI" represents the function-preserving method, and "AKI" represents the advanced knowledge initialization method. We expand both models to the target model $T(12,768)$ and find that FPI gives the target model losses similar to those of the trained source models; the loss gap between FPI and Original is caused by layer normalization.
|
| 205 |
+
|
| 206 |
+
matrices based not only on the parameters of the same layer but also on the parameters of the upper layer in the source model. The intuition is based on previous findings (Jawahar et al., 2019; Clark et al., 2019) that adjacent Transformer layers have similar functionality, which ensures that this expansion will not damage the knowledge contained in the parameters of the current layer. Moreover, the knowledge from adjacent layers can break the symmetry (Chen et al., 2016) that appears in FPI, which has been demonstrated to be beneficial. We give an illustrative example in Figure 4 and formulate AKI as:
|
| 207 |
+
|
| 208 |
+
$$
|
| 209 |
+
\boldsymbol{U}^{l} = \mathrm{EXPN}\left(\boldsymbol{W}^{l}, \boldsymbol{W}^{l+1}; g^{l|l+1}_{\mathrm{in}}, g^{l}_{\mathrm{out}}\right). \tag{14}
|
| 210 |
+
$$
|
| 211 |
+
|
| 212 |
+
Specifically, we first perform the in-dimension expansion for $\pmb{W}^{l|l + 1}$. Here we take $\pmb{W}^l$ as an example:
|
| 213 |
+
|
| 214 |
+
$$
|
| 215 |
+
C_{g^{l}_{\mathrm{in}}(i)} = \sum_{i'=1}^{d^{u}_{\mathrm{in}}} \mathbb{I}\left(g^{l}_{\mathrm{in}}(i') = g^{l}_{\mathrm{in}}(i)\right) \tag{15}
|
| 216 |
+
$$
|
| 217 |
+
|
| 218 |
+
$$
|
| 219 |
+
\widetilde{\boldsymbol{U}}^{l}_{(i,*)} = \frac{1}{C_{g^{l}_{\mathrm{in}}(i)}} \boldsymbol{W}^{l}_{(g^{l}_{\mathrm{in}}(i),*)}.
|
| 220 |
+
$$
|
| 221 |
+
|
| 222 |
+
This is analogous to Eq. 7. We then stack the expanded matrices $\widetilde{\boldsymbol{U}}^l$ and $\widetilde{\boldsymbol{U}}^{l + 1}$ to construct the final matrix:
|
| 223 |
+
|
| 224 |
+
$$
|
| 225 |
+
\boldsymbol{U}^{l}_{(*,j)} = \left\{ \begin{array}{ll} \widetilde{\boldsymbol{U}}^{l}_{(*,j)} & j \in [1, d^{w}_{\mathrm{out}}] \\ \widetilde{\boldsymbol{U}}^{l+1}_{(*,g^{l}_{\mathrm{out}}(j))} & j \in \left(d^{w}_{\mathrm{out}}, d^{u}_{\mathrm{out}}\right]. \end{array} \right. \tag{16}
|
| 226 |
+
$$
|
| 227 |
+
|
| 228 |
+
We directly copy the expanded $\widetilde{U}^l$ as the top part of the new matrix and place parameters sampled from $\widetilde{U}^{l + 1}$ at the bottom of the new matrix.
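A minimal sketch of this construction (Eqs. 15-16), reusing the in-dimension step of the FPI sketch above; for simplicity it uses one shared $g_{\mathrm{in}}$ for both layers, and the helper name `aki_expand` is ours:

```python
def aki_expand(W_l, W_l1, g_in, g_out):
    """AKI: in-dimension-expand W^l and W^{l+1}, keep the widened W^l
    columns, and sample the extra columns from the widened W^{l+1}."""
    def in_expand(W):
        counts = np.bincount(g_in, minlength=W.shape[0])
        return W[g_in, :] / counts[g_in][:, None]        # Eq. 15
    U_l, U_l1 = in_expand(W_l), in_expand(W_l1)
    d_out_src = W_l.shape[1]
    extra = U_l1[:, g_out[d_out_src:]]                   # sampled upper-layer columns
    return np.concatenate([U_l, extra], axis=1)          # Eq. 16

U_layer = aki_expand(W, rng.normal(size=(512, 512)), g_in, g_out)
```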
|
| 229 |
+
|
| 230 |
+
We aggregate upper-layer information into the new matrix based on two intuitions: (1) it breaks the FPI symmetry that hinders model convergence (Chen et al.,
|
| 231 |
+
|
| 232 |
+

|
| 233 |
+
Figure 4: Overview of AKI. It first performs the in-dimension expansion on the matrices of both the current and upper layers. Then it uses the widened matrix of the current layer as the top part of the new matrix and samples rows of the widened matrix of the upper layer as the bottom part of the new matrix.
|
| 234 |
+
|
| 235 |
+
2016). For example, FPI makes the attention patterns within the same layer repeat; this redundancy is what we call symmetry. (2) Upper-layer information can serve as similar but higher-level knowledge that guides the model to converge faster. We display the attention patterns of the target model initialized by AKI in Appendix E and find that the target model maintains the attention patterns of both the current and upper layers very well.
|
| 236 |
+
|
| 237 |
+
Expansion for All Modules. For the embedding matrix, we only perform the out-dimension expansion, as in Eq. 9 of FPI. Both the MHA and FFN modules perform the matrix expansion following the operations defined in Eq. 15 and Eq. 16. The constraints on the mapping functions follow the FPI setting.
|
| 238 |
+
|
| 239 |
+
Empirically, we find that AKI outperforms FPI, while performance is worse if we build the new matrix from the matrix of the lower layer (i.e., low-level knowledge). How to construct the optimal initialization for the target model from the parameters of different layers remains an open question, and we leave it as future work.
|
| 240 |
+
|
| 241 |
+
For more details, we give a clear illustration of the FPI and AKI process in Appendix F.
|
| 242 |
+
|
| 243 |
+
# 4.4 Depth-wise Expansion
|
| 244 |
+
|
| 245 |
+
After the width-wise expansion, we obtain a widened model with the same width as the target model. To bridge the depth gap, we perform the depth-wise expansion to increase the model depth to that of the target model. We illustrate this process in Algorithm 1; the main idea is to iteratively stack the widened model until its depth equals that of the target model (Gong et al., 2019).
|
| 246 |
+
|
| 247 |
+
# 4.5 Two-stage Pre-training
|
| 248 |
+
|
| 249 |
+
To further improve the pre-training efficiency of the initialized target model, we propose a two-stage training method: (1) train sub-models with different
|
| 250 |
+
|
| 251 |
+
# Algorithm 1 Target Model Initialization
|
| 252 |
+
|
| 253 |
+
Input: the target model $\mathcal{T}(L^t,D^t)$ and the source model $S(L^{s},D^{s})$
|
| 254 |
+
|
| 255 |
+
1: $\mathcal{T}_1(L^s,D^t)\gets$ do AKI or FPI with $\mathcal{S}(L^s,D^s)$
|
| 256 |
+
2: $k\gets \lfloor L^t /L^s\rfloor$
|
| 257 |
+
3: for $t = 2 \rightarrow k$ do
|
| 258 |
+
4: $\mathcal{T}_t(L^s\cdot t,D^t)\gets$ stack $\mathcal{T}_1$ on top of $\mathcal{T}_{t - 1}$
|
| 259 |
+
5: end for
|
| 260 |
+
6: $\mathcal{T}\gets$ stack the top $L^t - L^s\cdot k$ layers of $\mathcal{T}_1$ on top of $\mathcal{T}_k$
|
| 261 |
+
|
| 262 |
+
Output: the initialized model $\mathcal{T}(L^t,D^t)$
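A sketch of the stacking logic, under the assumption that a model is represented as a plain list of (deep-copyable) Transformer-layer objects; the function name is ours:

```python
import copy

def depth_expand(widened_layers, L_t):
    """Algorithm 1, steps 2-6: stack the widened L^s-layer block
    k = floor(L^t / L^s) times, then add its top L^t - L^s * k layers."""
    L_s = len(widened_layers)
    k = L_t // L_s
    layers = [copy.deepcopy(layer) for _ in range(k) for layer in widened_layers]
    rem = L_t - L_s * k
    if rem > 0:
        layers += [copy.deepcopy(layer) for layer in widened_layers[-rem:]]
    return layers
```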
|
| 263 |
+
|
| 264 |
+
# Algorithm 2 Two-stage Pre-training
|
| 265 |
+
|
| 266 |
+
Input: the initialized model $\mathcal{T}$, a large-scale unsupervised dataset $\mathcal{D}$, the number of sub-model training epochs $E_{b}$, the total number of training epochs $E$, and the layer number $l_{b}$.
|
| 267 |
+
|
| 268 |
+
1: Construct sub-models with layer numbers $\{l_b, 2 \cdot l_b, \ldots, L^t\}$.
|
| 269 |
+
|
| 270 |
+
2: for $e = 1 \rightarrow E_b$ do
|
| 271 |
+
|
| 272 |
+
3: for batch in $\mathcal{D}$ do
|
| 273 |
+
|
| 274 |
+
4: $\mathcal{T}^{\prime}\gets$ sample one sub-model.
|
| 275 |
+
|
| 276 |
+
5: Perform forward and backward of $\mathcal{T}'$ .
|
| 277 |
+
|
| 278 |
+
6: Update only top $l_{b}$ layers of $\mathcal{T}'$ .
|
| 279 |
+
|
| 280 |
+
7: end for
|
| 281 |
+
|
| 282 |
+
8: end for
|
| 283 |
+
|
| 284 |
+
9: for $e = E_b \to E$ do
|
| 285 |
+
|
| 286 |
+
10: for batch in $\mathcal{D}$ do
|
| 287 |
+
|
| 288 |
+
11: Perform forward and backward of $\mathcal{T}$.
|
| 289 |
+
|
| 290 |
+
12: Update the whole model $\mathcal{T}$.
|
| 291 |
+
|
| 292 |
+
13: end for
|
| 293 |
+
|
| 294 |
+
14: end for
|
| 295 |
+
|
| 296 |
+
Output: the pre-trained model $\mathcal{T}$
|
| 297 |
+
|
| 298 |
+
layers in a random manner to make the complete model converge at a low cost. These sub-models are built from the bottom Transformer layers of the initialized target model and share one classification layer. At each optimization step, we randomly sample one sub-model and update only its top Transformer layers and the shared classification layer. (2) After the sub-structure training, we further perform traditional full-model training. The details of our method are displayed in Algorithm 2.
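To make the sub-model stage concrete, here is a PyTorch-flavoured sketch of one optimization step (steps 3-6 of Algorithm 2); `model.embed`, `model.layers`, `model.cls`, and `mlm_loss` are assumed interfaces, not the paper's actual code:

```python
import random

def sub_model_step(model, batch, optimizer, mlm_loss, l_b, L_t):
    depths = list(range(l_b, L_t + 1, l_b))  # sub-model depths {l_b, 2*l_b, ..., L^t}
    depth = random.choice(depths)            # step 4: sample one sub-model
    # Freeze everything, then unfreeze the top l_b layers of the sampled
    # sub-model plus the classification layer shared by all sub-models.
    for p in model.parameters():
        p.requires_grad_(False)
    for block in list(model.layers[depth - l_b:depth]) + [model.cls]:
        for p in block.parameters():
            p.requires_grad_(True)
    h = model.embed(batch)                   # step 5: forward through only
    for layer in model.layers[:depth]:       # the bottom `depth` layers
        h = layer(h)
    loss = mlm_loss(model.cls(h), batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                         # step 6: only unfrozen parameters move
```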
|
| 299 |
+
|
| 300 |
+
# 5 Experiment
|
| 301 |
+
|
| 302 |
+
# 5.1 Experimental Setup
|
| 303 |
+
|
| 304 |
+
Pre-training Details. We use the English Wikipedia and Toronto Book Corpus (Zhu et al., 2015) as the pre-training data. The pre-training settings are: peak learning rate of 1e-4, warmup
|
| 305 |
+
|
| 306 |
+
<table><tr><td>Model</td><td>FLOPs (×1e19)</td><td>Ratio (Saving)</td><td>Loss (MLM)</td><td>SQuADv1.1 (F1)</td><td>SST-2 (Acc)</td><td>MNLI (Acc)</td><td>MRPC (Acc)</td><td>CoLA (Mcc)</td><td>QNLI (Acc)</td><td>QQP (Acc)</td><td>STS-B (Acc)</td><td>Avg.</td></tr><tr><td>BERTBASE (Google)</td><td>-</td><td>-</td><td>-</td><td>88.4(0.1)</td><td>93.6(0.2)</td><td>84.7(0.1)</td><td>87.9(0.9)</td><td>59.6(1.5)</td><td>91.6(0.1)</td><td>91.4(0.1)</td><td>89.6(0.5)</td><td>85.8(0.1)</td></tr><tr><td>BERTBASE † (Ours)</td><td>7.3</td><td>0%</td><td>1.437</td><td>89.6(0.1)</td><td>92.7(0.2)</td><td>84.6(0.2)</td><td>88.6(0.5)</td><td>57.3(4.0)</td><td>90.6(0.7)</td><td>90.6(0.1)</td><td>89.9(0.3)</td><td>85.5(0.5)</td></tr></table>
|
| 307 |
+
|
| 308 |
+
Progressive Training
|
| 309 |
+
|
| 310 |
+
<table><tr><td>MSLT†</td><td>6.5</td><td>10.7%</td><td>1.436</td><td>90.4(0.2)</td><td>92.9(0.2)</td><td>85.1(0.2)</td><td>87.9(2.1)</td><td>55.6(4.1)</td><td>90.7(0.2)</td><td>90.6(0.2)</td><td>88.2(0.6)</td><td>85.2(0.7)</td></tr><tr><td>StackBERT†</td><td>5.5</td><td>24.3%</td><td>1.433</td><td>90.4(0.2)</td><td>92.6(0.4)</td><td>85.3(0.1)</td><td>88.2(1.0)</td><td>63.2(0.9)</td><td>91.0(0.4)</td><td>91.0(0.1)</td><td>86.7(0.7)</td><td>86.0(0.2)</td></tr></table>
|
| 311 |
+
|
| 312 |
+
bert2BERT: $S(12,512)\to \mathcal{T}(12,768)$
|
| 313 |
+
|
| 314 |
+
<table><tr><td>DirectCopy</td><td>6.4</td><td>12.2%</td><td>1.436</td><td>89.8(0.2)</td><td>92.9(0.3)</td><td>84.7(0.2)</td><td>86.2(0.6)</td><td>62.2(0.7)</td><td>90.2(0.6)</td><td>90.4(0.1)</td><td>89.2(0.1)</td><td>85.7(0.1)</td></tr><tr><td>FPI</td><td>5.1</td><td>30.4%</td><td>1.436</td><td>90.0(0.2)</td><td>92.6(0.4)</td><td>85.2(0.1)</td><td>87.1(0.5)</td><td>61.5(0.9)</td><td>90.9(0.6)</td><td>90.8(0.2)</td><td>89.7(0.2)</td><td>86.0(0.1)</td></tr><tr><td>AKI</td><td>4.5</td><td>38.4%</td><td>1.434</td><td>90.4(0.1)</td><td>92.5(0.4)</td><td>85.3(0.4)</td><td>87.8(0.9)</td><td>61.0(1.4)</td><td>91.2(0.2)</td><td>90.5(0.1)</td><td>89.5(0.2)</td><td>86.0(0.2)</td></tr><tr><td>bert2BERT</td><td>4.0</td><td>45.2%</td><td>1.433</td><td>90.0(0.2)</td><td>92.9(0.1)</td><td>85.1(0.1)</td><td>87.7(0.7)</td><td>60.0(1.2)</td><td>90.5(0.8)</td><td>90.4(0.1)</td><td>89.2(0.2)</td><td>85.7(0.4)</td></tr></table>
|
| 315 |
+
|
| 316 |
+
Table 2: Comparison between bert2BERT and baselines. We report the mean (and standard deviation) performance over 3 runs on the dev set. Here, bert2BERT denotes the combination of AKI and two-stage pre-training. FPI and AKI denote function-preserving initialization and advanced knowledge initialization, respectively. $\dagger$ marks re-implemented results: $\mathrm{BERT}_{\mathrm{BASE}}$ and StackBERT achieve results similar to the original papers, while the MSLT result differs from the original paper, possibly due to different training settings (e.g., the original paper uses the LAMB optimizer (You et al., 2020) and only trains on the corpus with a max sequence length of 128).
|
| 317 |
+
|
| 318 |
+
steps of $10\mathrm{k}$ , training epochs of $E = 40$ , batch size of 512, sub-model training epochs of $E_{b} = 5$ , layer number of $l_{b} = 3$ . Unless otherwise noted, all methods including bert2BERT and baselines use the same pre-training settings for fair comparisons. In the settings of bert2BERT, the target model has a $\mathrm{BERT}_{\mathrm{BASE}}$ architecture of $\mathcal{T}(12,768)$ and the source model has an architecture of $\mathcal{S}(12,512)$ .
|
| 319 |
+
|
| 320 |
+
Fine-tuning Details. For the evaluation, we use tasks from GLUE benchmark (Wang et al., 2019a) and SQuADv1.1 (Rajpurkar et al., 2016). We report F1 for SQuADv1.1, Matthews correlation coefficient (Mcc) for CoLA (Warstadt et al., 2019) and accuracy (Acc) for other tasks. For the GLUE tasks fine-tuning, we set the batch size to 32, choose the learning rate from $\{5\mathrm{e} - 6,1\mathrm{e} - 5,2\mathrm{e} - 5,3\mathrm{e} - 5\}$ and epochs from $\{4,5,10\}$ . For the SQuADv1.1 fine-tuning, we set the batch size to 16, the learning rate to $3\mathrm{e} - 5$ and the number of training epochs to 4. All results are the average of 3 runs on the dev set.
|
| 321 |
+
|
| 322 |
+
Baselines. We first introduce a naive bert2BERT baseline, DirectCopy, which directly copies the small model into the target model and randomly initializes the unfilled parameters. StackBERT (Gong et al., 2019) and MSLT (Yang et al., 2020) are also included as baselines; both are trained in a progressive manner. Following the original setting for StackBERT, we first train a 3-layer BERT for 5 epochs, stack it twice into a 6-layer BERT, and then train it for 7 epochs. In the final step, we stack the 6-layer model into $\mathrm{BERT}_{\mathrm{BASE}}$ and train it for a further 28 epochs. For MSLT, we first perform 4-stage training. In each stage, we add the top 3 layers of the already-trained model to the top of the model and then pre-train the new model by partially updating
|
| 323 |
+
|
| 324 |
+
the top 3 layers. Each stage of the partial training process lasts 8 epochs. Finally, we perform a further 20 full-model training epochs$^4$ to achieve the same loss as $\mathrm{BERT}_{\mathrm{BASE}}$ trained from scratch. The baselines are trained with the same optimizer, training steps, and warmup steps as bert2BERT.
|
| 325 |
+
|
| 326 |
+
# 5.2 Results and Analysis
|
| 327 |
+
|
| 328 |
+
We demonstrate the effectiveness of the proposed method on the SQuAD and GLUE benchmarks. The results are shown in Table 2, and the loss curves are presented in Figure 1 and Appendix A. The results show that: (1) DirectCopy saves only $12.2\%$ of the computational cost, which indicates that naively copying the trained parameters of the source model into the target model is not effective; (2) our proposed methods, FPI and AKI, achieve better performance than the baselines; although AKI does not follow function preserving and thus has a higher loss than FPI at the start of training, it achieves a faster convergence rate by using advanced knowledge and breaking the symmetry; (3) by performing the two-stage pre-training on the target model initialized by AKI, we save $45.2\%$ of the computational cost. Note that the source model has half the parameters of the target model (54M vs. 110M). The loss of bert2BERT in Figure 1 is high during the sub-model training stage because it represents the average loss over all sub-models. We also compare the attention patterns of the target models initialized by DirectCopy, FPI, and AKI; the attention patterns and their discussion are displayed in Appendix E.
|
| 329 |
+
|
| 330 |
+
bert2BERT with Smaller Source Models. We also evaluate bert2BERT in settings where the source models $S(6, 512)$, $S(8, 512)$, and $S(10, 512)$ are significantly smaller than the target model (35M/42M/48M vs. 110M). The results are shown in Table 3 and the loss curves are displayed in Appendix B. We observe that DirectCopy with $S(6, 512)$ achieves no efficiency improvement over the original pre-training, which indicates that a significant size gap between the source and target models greatly reduces the benefit of DirectCopy. Compared with DirectCopy, our proposed method reduces the computation cost by $23.3\%$, which again demonstrates the effectiveness of bert2BERT. The results show that the smaller the size gap between the source and target models, the greater the cost savings of bert2BERT. We also note that it is more challenging to speed up the target model from a small source model such as $S(6, 512)$; we encourage future work to explore transferring knowledge from even smaller source models to improve the pre-training efficiency of the target model.
|
| 331 |
+
|
| 332 |
+
<table><tr><td>Settings</td><td>Model</td><td>FLOPs (×1e19)</td><td>Ratio (Saving)</td><td>Loss (MLM)</td><td>Avg.</td></tr><tr><td rowspan="2">S(6,512)</td><td>DirectCopy</td><td>7.3</td><td>0%</td><td>1.440</td><td>89.1</td></tr><tr><td>bert2BERT</td><td>5.6</td><td>23.3%</td><td>1.435</td><td>89.3</td></tr><tr><td>S(8,512)</td><td>bert2BERT</td><td>4.6</td><td>36.8%</td><td>1.435</td><td>89.2</td></tr><tr><td>S(10,512)</td><td>bert2BERT</td><td>4.2</td><td>42.7%</td><td>1.434</td><td>89.1</td></tr></table>
|
| 333 |
+
|
| 334 |
+
Table 3: bert2BERT with smaller source models. Avg. denotes the average score over SST-2/MNLI/SQuADv1.1.
|
| 335 |
+
|
| 336 |
+
Effect of Sub-model Training Epochs. Our training procedure includes two stages: sub-model training and full-model training. Here, we study the effect of the number of sub-model training epochs by running bert2BERT with $E_{b} \in \{0, 5, 10, 20\}$. The results are presented in Table 4 and the loss curves are displayed in Appendix C. We observe that our method achieves the best efficiency when the number of epochs is set to 5, while a larger or smaller number has a negative impact.
|
| 337 |
+
|
| 338 |
+
<table><tr><td>Model</td><td>FLOPs (×1e19)</td><td>Ratio (Saving)</td><td>Loss (MLM)</td><td>Avg.</td></tr><tr><td colspan="5">bert2BERT: S(12, 512) → T(12, 768)</td></tr><tr><td>bert2BERT (Eb=0)</td><td>4.5</td><td>38.4%</td><td>1.434</td><td>89.4</td></tr><tr><td>bert2BERT (Eb=5)</td><td>4.0</td><td>45.2%</td><td>1.433</td><td>89.3</td></tr><tr><td>bert2BERT (Eb=10)</td><td>4.1</td><td>43.9%</td><td>1.436</td><td>89.3</td></tr><tr><td>bert2BERT (Eb=20)</td><td>5.4</td><td>25.4%</td><td>1.448</td><td>89.1</td></tr></table>

Table 4: Effect of sub-model training epochs. Avg. denotes the average score over SST-2/MNLI/SQuADv1.1.
|
| 339 |
+
|
| 340 |
+
# 5.3 Application on GPT
|
| 341 |
+
|
| 342 |
+
Datasets. To demonstrate that our method is generic, we follow the BERT setting and use the English Wikipedia and Book Corpus for GPT training. For evaluation, we use the WikiText-2, PTB, and WikiText-103 datasets and evaluate the models in the zero-shot setting, without fine-tuning on the training sets.
|
| 343 |
+
|
| 344 |
+
Implementation Details. We use the architecture $\{L = 12, D = 768\}$ for the GPT target model and pre-train it with a learning rate of 1e-4 for 20 training epochs. For bert2BERT, we use a source model with architecture $\{L = 12, D = 512\}$, initialize the target model with AKI, and pre-train it with full-model training $(E_b = 0)$.
|
| 345 |
+
|
| 346 |
+
Results and Analysis. We compare the original pre-training method with bert2BERT; the results are shown in Table 5 and Appendix D. We observe that the proposed method saves $47\%$ of the computation cost of GPT pre-training, exhibiting a similar trend to BERT pre-training. Although GPT and BERT have different architectures (e.g., post-LN vs. pre-LN (Xiong et al., 2020)) and are pre-trained on different tasks, bert2BERT saves a significant amount of training cost on both models, which shows that the proposed method is generic and effective for different kinds of PLMs.
|
| 347 |
+
|
| 348 |
+
|
| 349 |
+
|
| 350 |
+
<table><tr><td>Model</td><td>FLOPs (× 1e19)</td><td>PTB (w/o FT)</td><td>WikiText-2 (w/o FT)</td><td>WikiText103 (w/o FT)</td></tr><tr><td colspan="5">bert2BERT: S(12, 512) → T(12, 768)</td></tr><tr><td>GPT</td><td>4.9</td><td>133.8</td><td>47.0</td><td>53.5</td></tr><tr><td>bert2BERT</td><td>2.6 (47% ↓)</td><td>132.1</td><td>47.9</td><td>53.0</td></tr></table>
|
| 351 |
+
|
| 352 |
+
Table 5: Experiments on GPT. We report perplexity on these tasks. "w/o FT" means that the pre-trained model is directly evaluated on the test set without fine-tuning on the training set.
|
| 353 |
+
|
| 354 |
+
# 5.4 Application on T5
|
| 355 |
+
|
| 356 |
+
Datasets. To demonstrate that our method can be used to train larger models, we use the Baidu Wikipedia, Sougou Wikipedia, and Zhihu corpora to train a T5 model (Raffel et al., 2020). For evaluation, we use the Original Chinese Natural Language Inference (OCNLI) dataset (Hu et al., 2020).
|
| 357 |
+
|
| 358 |
+
Implementation Details. Since the bert2BERT method applies to both BERT and GPT, it can also be used for the T5 model, which consists of an encoder and a decoder. The target T5 model's architecture is $\{L_{e} = 12, L_{d} = 12, D = 1024, A = 16\}$, where
|
| 359 |
+
|
| 360 |
+
$L_{e}$ and $L_{d}$ denote the numbers of encoder and decoder Transformer layers respectively, $D$ denotes the hidden size, and $A$ the number of attention heads. We pre-train it with a learning rate of 1e-4 and a batch size of 1024. For bert2BERT, we use a source model with architecture $\{L_e = 12$, $L_{d} = 12$, $D = 256$, $A = 4\}$, initialize the target model with FPI, and pre-train it with full-model training $(E_b = 0)$. Note that the scale gap between the source and target models is over 10 times (31M vs. 360M), which is a challenging setting.
|
| 361 |
+
|
| 362 |
+
Results and Analysis. We compare the original pre-training method with the bert2BERT method on the T5 model; the results are shown in Table 6. We observe that the proposed method saves at least $25\%$ of the computation cost of T5 pre-training, demonstrating its effectiveness on larger models.
|
| 363 |
+
|
| 364 |
+
<table><tr><td>Model</td><td>FLOPs (× 1e20)</td><td>Loss (MLM)</td><td>OCNLI (Acc)</td></tr><tr><td colspan="4">bert2BERT: S(12, 12, 256, 4) → T(12, 12, 1024, 16)</td></tr><tr><td>T5</td><td>1.6</td><td>1.90</td><td>72.03</td></tr><tr><td>bert2BERT</td><td>1.2 (25% ↓)</td><td>1.90</td><td>72.75</td></tr></table>
|
| 365 |
+
|
| 366 |
+
Table 6: Experiments on the T5 model.
|
| 367 |
+
|
| 368 |
+
# 6 Conclusion and Future Work
|
| 369 |
+
|
| 370 |
+
This paper proposes an efficient pre-training method, bert2BERT, which reuses the parameters of a small trained model as the initialization of a large model. We apply the proposed method to BERT and GPT under different settings of model sizes. The extensive results show that bert2BERT is generic to Transformer-based models and saves a significant amount of computation cost. Moreover, the detailed analysis shows that our techniques, function-preserving initialization, advanced knowledge initialization, and two-stage pre-training, are all effective. In the future, we will apply bert2BERT to training super large-scale language models (e.g., using a 10B source model to train a 100B target model) and extend its scope to other PLMs such as ELECTRA (Clark et al., 2020) and BART (Lewis et al., 2020).
|
| 371 |
+
|
| 372 |
+
# Acknowledgements
|
| 373 |
+
|
| 374 |
+
This work is supported in part by NSFC (Grant No. 61872215), and Shenzhen Science and Technology Program (Grant No. RCYX20200714114523079). We would like to thank Yifeng Liu, Binbin Deng, Ziliang Yang, Jiaxin Shi for their support of this work.
|
| 375 |
+
|
| 376 |
+
# References
|
| 377 |
+
|
| 378 |
+
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. ArXiv preprint, abs/1607.06450.
|
| 379 |
+
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
|
| 380 |
+
Cheng Chen, Yichun Yin, Lifeng Shang, Zhi Wang, Xin Jiang, Xiao Chen, and Qun Liu. 2021. Extract then distill: Efficient and effective task-agnostic BERT distillation. In Artificial Neural Networks and Machine Learning - ICANN 2021 - 30th International Conference on Artificial Neural Networks, Bratislava, Slovakia, September 14-17, 2021, Proceedings, Part III, volume 12893 of Lecture Notes in Computer Science, pages 570-581. Springer.
|
| 381 |
+
Tianqi Chen, Ian J. Goodfellow, and Jonathon Shlens. 2016. Net2net: Accelerating learning via knowledge transfer. In ICLR.
|
| 382 |
+
Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286, Florence, Italy. Association for Computational Linguistics.
|
| 383 |
+
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
|
| 384 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 385 |
+
William Fedus, Barret Zoph, and Noam Shazeer. 2021. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. CoRR.
|
| 386 |
+
|
| 387 |
+
A. Feng and P. Panda. 2020. Energy-efficient and robust cumulative training with net2net transformation. In IJCNN.
|
| 388 |
+
Linyuan Gong, Di He, Zhuohan Li, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2019. Efficient training of BERT by progressively stacking. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 2337-2346. PMLR.
|
| 389 |
+
Xiaotao Gu, Liyuan Liu, Hongkun Yu, Jing Li, Chen Chen, and Jiawei Han. 2021. On the transformer growth for progressive BERT training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5174-5180, Online. Association for Computational Linguistics.
|
| 390 |
+
Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for efficient neural network. Advances in neural information processing systems, 28.
|
| 391 |
+
Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). ArXiv preprint, abs/1606.08415.
|
| 392 |
+
Hai Hu, Kyle Richardson, Liang Xu, Lu Li, Sandra Kübler, and Lawrence S. Moss. 2020. OCNLI: original chinese natural language inference. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 3512-3526. Association for Computational Linguistics.
|
| 393 |
+
Ganesh Jawahar, Benoit Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In ACL, pages 3651-3657, Florence, Italy. Association for Computational Linguistics.
|
| 394 |
+
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
|
| 395 |
+
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL, pages 7871-7880, Online. Association for Computational Linguistics.
|
| 396 |
+
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR.
|
| 397 |
+
|
| 398 |
+
Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. TKDE.
|
| 399 |
+
Yujia Qin, Yankai Lin, Jing Yi, Jiajie Zhang, Xu Han, Zhengyan Zhang, YuSheng Su, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2021. Knowledge inheritance for pre-trained language models. CoRR.
|
| 400 |
+
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
|
| 401 |
+
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
|
| 402 |
+
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1-140:67.
|
| 403 |
+
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
|
| 404 |
+
Roy Schwartz, Jesse Dodge, Noah A Smith, and Oren Etzioni. 2020. Green ai. Communications of the ACM, 63(12):54-63.
|
| 405 |
+
Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-lm: Training multi-billion parameter language models using model parallelism. CoRR.
|
| 406 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
|
| 407 |
+
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
|
| 408 |
+
Dilin Wang, Meng Li, Lemeng Wu, Vikas Chandra, and Qiang Liu. 2019b. Energy-aware neural architecture optimization with fast splitting steepest descent. CoRR.
|
| 409 |
+
Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.
|
| 410 |
+
|
| 411 |
+
Wei Wen, Yuxiong He, Samyam Rajbhandari, Minjia Zhang, Wenhan Wang, Fang Liu, Bin Hu, Yiran Chen, and Hai Li. 2018. Learning intrinsic sparse structures within long short-term memory. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
|
| 412 |
+
Lemeng Wu, Bo Liu, Peter Stone, and Qiang Liu. 2020a. Firefly neural architecture descent: a general approach for growing neural networks. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
|
| 413 |
+
Lemeng Wu, Dilin Wang, and Qiang Liu. 2019. Splitting steepest descent for growing neural architectures. In NeurIPS, pages 10655-10665.
|
| 414 |
+
Lemeng Wu, Mao Ye, Qi Lei, Jason D. Lee, and Qiang Liu. 2020b. Steepest descent neural architecture optimization: Escaping local optimum with signed neural splitting. CoRR.
|
| 415 |
+
Qiyu Wu, Chen Xing, Yatao Li, Guolin Ke, Di He, and Tie-Yan Liu. 2021. Taking notes on the fly helps language pre-training. In ICLR.
|
| 416 |
+
Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tie-Yan Liu. 2020. On layer normalization in the transformer architecture. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 10524-10533. PMLR.
|
| 417 |
+
Cheng Yang, Shengnan Wang, Chao Yang, Yuechuan Li, Ru He, and Jingqiao Zhang. 2020. Progressively stacking 2.0: A multi-stage layerwise training method for bert training speedup. arXiv preprint arXiv:2011.13635.
|
| 418 |
+
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS, pages 5754-5764.
|
| 419 |
+
Yang You, Jing Li, Sashank J. Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. 2020. Large batch optimization for deep learning: Training BERT in 76 minutes. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
|
| 420 |
+
Wei Zeng, Xiaozhe Ren, Teng Su, Hui Wang, Yi Liao, Zhiwei Wang, Xin Jiang, ZhenZhang Yang, Kaisheng Wang, Xiaoda Zhang, Chen Li, Ziyan Gong, Yifan Yao, Xinjing Huang, Jun Wang, Jianfeng Yu, Qi Guo, Yue Yu, Yan Zhang, Jin Wang, Hengtao Tao, Dasen Yan, Zexuan Yi, Fang Peng, Fangqing Jiang, Han Zhang, Lingfeng Deng, Yehong Zhang,
|
| 421 |
+
|
| 422 |
+
Zhe Lin, Chao Zhang, Shaojie Zhang, Mingyue Guo, Shanzhi Gu, Gaojun Fan, Yaowei Wang, Xuefeng Jin, Qun Liu, and Yonghong Tian. 2021. PanGu-α: Large-scale autoregressive pretrained Chinese language models with auto-parallel computation.
|
| 423 |
+
Minjia Zhang and Yuxiong He. 2020. Accelerating training of transformer-based language models with progressive layer dropping. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
|
| 424 |
+
Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 19-27. IEEE Computer Society.
|
| 425 |
+
|
| 426 |
+
# A Ablation Study of bert2BERT
|
| 427 |
+
|
| 428 |
+

|
| 429 |
+
Figure 5: Ablation study of bert2BERT. bert2BERT means the combination of AKI and two-stage pretraining.
|
| 430 |
+
|
| 431 |
+
The ablation study of bert2BERT is displayed in Figure 5. From the figure, we observe that: (1) all of the proposed methods are better than the original pre-training method and DirectCopy; (2) although AKI has a worse initialization than FPI, it achieves a faster convergence rate; (3) the two-stage pre-training further reduces the cost from $61.6\%$ to $54.8\%$ of the original pre-training cost; (4) the FPI curve trends upward at the beginning. We conjecture that this is due to the symmetry introduced by FPI, and the model needs some optimization time to break it.
|
| 432 |
+
|
| 433 |
+
# B bert2BERT with smaller source model
|
| 434 |
+
|
| 435 |
+

|
| 436 |
+
Figure 6: Loss curves of bert2BERT and baselines with smaller source models.
|
| 437 |
+
|
| 438 |
+
We test bert2BERT with different source models; the loss curves are presented in Figure 6.
|
| 439 |
+
|
| 440 |
+
# C Effect of sub-model training epochs
|
| 441 |
+
|
| 442 |
+
We study the effect of the number of sub-model training epochs on pre-training efficiency. The loss curves are presented in Figure 7. Note that with $E_{b} = 20$, the model does not reach the loss (1.437) of the baseline BERT<sub>BASE</sub> within the 40 training epochs.
|
| 443 |
+
|
| 444 |
+
# D Application on GPT
|
| 445 |
+
|
| 446 |
+
The loss curve of our method in the GPT application is displayed in Figure 8.
|
| 447 |
+
|
| 448 |
+

|
| 449 |
+
Figure 7: Loss curves of bert2BERT with different sub-model training epochs.
|
| 450 |
+
|
| 451 |
+

|
| 452 |
+
Figure 8: Pre-training loss curves of GPT.
|
| 453 |
+
|
| 454 |
+
# E Comparisons of Attention Patterns
|
| 455 |
+
|
| 456 |
+
We take the source model $S(4,256)$ and the target model $\mathcal{T}(4,512)$ as an example to analyze the attention patterns of DirectCopy (Figure 10), FPI (Figure 11), and AKI (Figure 12).
|
| 457 |
+
|
| 458 |
+
We display the attention patterns of the source model $S(4,256)$ in Figure 9. Compared with the source model, we observe that the newly added attention patterns of DirectCopy are messy, and the randomly initialized parameters destroy the attention patterns of the source model. The proposed FPI method gives the new model the same attention patterns as the source model; thus the knowledge of the source model is preserved. However, FPI always induces symmetric attention patterns within the same layer, and this symmetry hinders convergence. To handle this problem, the AKI method reuses the parameters of the upper layer (advanced knowledge) to break the symmetry while making the knowledge within the same layer richer. With AKI, the attention patterns of the upper layer are also maintained well in the target model; for example, as shown in Figure 12, the newly added attention patterns of the 1st layer of the target model are similar to those of the 2nd layer of the source model.
|
| 459 |
+
|
| 460 |
+
# F Illustration of FPI and AKI process
|
| 461 |
+
|
| 462 |
+
We illustrate the process of FPI and AKI in Figure 13 and 14 respectively.
|
| 463 |
+
|
| 464 |
+

|
| 465 |
+
|
| 466 |
+

|
| 467 |
+
|
| 468 |
+

|
| 469 |
+
|
| 470 |
+

|
| 471 |
+
|
| 472 |
+

|
| 473 |
+
|
| 474 |
+

|
| 475 |
+
|
| 476 |
+

|
| 477 |
+
|
| 478 |
+

|
| 479 |
+
|
| 480 |
+

|
| 481 |
+
|
| 482 |
+

|
| 483 |
+
|
| 484 |
+

|
| 485 |
+
|
| 486 |
+

|
| 487 |
+
|
| 488 |
+

|
| 489 |
+
Figure 9: Attention patterns of the source model $S(4,256)$ , which has 4 attention heads in each layer.
|
| 490 |
+
|
| 491 |
+

|
| 492 |
+
|
| 493 |
+

|
| 494 |
+
|
| 495 |
+

|
| 496 |
+
|
| 497 |
+

|
| 498 |
+
|
| 499 |
+

|
| 500 |
+
|
| 501 |
+

|
| 502 |
+
|
| 503 |
+

|
| 504 |
+
Figure 10: Attention patterns of the target model $\mathcal{T}(4,512)$ based on the baseline DirectCopy method. The first 4 attention patterns (H0-H3) in each row correspond to the source model's attention patterns, and the last 4 attention patterns (H4-H7) are newly added.
|
| 505 |
+
|
| 506 |
+

|
| 507 |
+
Figure 11: Attention patterns of the target model $\mathcal{T}(4,512)$ based on our FPI method. The last 4 attention patterns (H4-H7) in each row are obtained by FPI expansion.
|
| 508 |
+
|
| 509 |
+

|
| 510 |
+
Figure 12: Attention patterns of the target model $\mathcal{T}(4,512)$ based on our AKI method. The last 4 attention patterns (H4-H7) in each row are obtained by AKI expansion.
|
| 511 |
+
|
| 512 |
+

|
| 513 |
+
Figure 13: FPI process. We use FPI to widen a source model of width 2 into a target model of width 3. In this example, the source and target models have 2 and 3 attention heads respectively, with a head dimension of 1. To simplify the illustration, we reduce the number of neurons in the FFN layer. Note that since the MLM classifier of BERT is a transposition of the embedding layer (they share a parameter matrix), in step 1 we expand the MLM classifier by re-scaling the parameter values of the LN layer below it instead of following Eq. 7.
|
| 514 |
+
|
| 515 |
+

|
| 516 |
+
Figure 14: AKI process. We omit its first step because it is the same as FPI's. The main difference is that in step 2, AKI copies some attention heads of the MHA and some parameters of the FFN from the $(l + 1)$-th layer, instead of copying only the $l$-th layer, to construct the new $l$-th Transformer layer.
|
bert2berttowardsreusablepretrainedlanguagemodels/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2f1b12851c8fe9385e0ea43bcf08c02536a3f1cf87dc0b60caa7a8ecb8d14379
|
| 3 |
+
size 1196853
|
bert2berttowardsreusablepretrainedlanguagemodels/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:851bfcf5143511273998f936683632fdff3227cb889e08cd165f17b296122303
|
| 3 |
+
size 617258
|
caspicausalawaresafepolicyimprovementfortaskorienteddialogue/51a7f5a6-79cf-43e6-8346-26446f52a346_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:13f47ce5038b1c8b7a587ef27d2f0bb97b239dc443cff7e42b8fdc202caba3e1
|
| 3 |
+
size 76750
|
caspicausalawaresafepolicyimprovementfortaskorienteddialogue/51a7f5a6-79cf-43e6-8346-26446f52a346_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f1f902efb185f6786c5514c59e6fe93b5dc27e269a4a118df69ebd950acbd13b
|
| 3 |
+
size 88513
|
caspicausalawaresafepolicyimprovementfortaskorienteddialogue/51a7f5a6-79cf-43e6-8346-26446f52a346_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f058c557f805e7cdbb749a052dfbede95ebc3e020a9ca15a17451ca343c4cc67
|
| 3 |
+
size 1238968
|
caspicausalawaresafepolicyimprovementfortaskorienteddialogue/full.md
ADDED
|
@@ -0,0 +1,358 @@
|
| 1 |
+
# [CASPI] Causal-aware Safe Policy Improvement for Task-oriented Dialogue
|
| 2 |
+
|
| 3 |
+
Govardana Sachithanandam Ramachandran, Kazuma Hashimoto*, Caiming Xiong
|
| 4 |
+
Salesforce Research
|
| 5 |
+
|
| 6 |
+
gramachandran@salesforce.com
|
| 7 |
+
|
| 8 |
+
hassy@logos.t.u-tokyo.ac.jp
|
| 9 |
+
|
| 10 |
+
cxiong@salesforce.com
|
| 11 |
+
|
| 12 |
+
# Abstract
|
| 13 |
+
|
| 14 |
+
The recent success of reinforcement learning (RL) in solving complex tasks is often attributed to its capacity to explore and exploit an environment. Sample efficiency is usually not an issue for tasks with cheap simulators to sample data online. On the other hand, task-oriented dialogues (ToD) are usually learnt from offline data collected via human demonstrations, and collecting diverse demonstrations and annotating them is expensive. Unfortunately, RL policies trained on off-policy data are prone to issues of bias and generalization, which are further exacerbated by the stochasticity of human responses and the non-Markovian nature of the annotated belief state of a dialogue management system. To this end, we propose a batch-RL framework for ToD policy learning: Causal-aware Safe Policy Improvement (CASPI). CASPI includes a mechanism to learn a fine-grained reward that captures the intention behind human responses and also offers a guarantee on the dialogue policy's performance against a baseline. We demonstrate the effectiveness of this framework on the end-to-end dialogue task of the MultiWOZ2.0 dataset. The proposed method outperforms the current state of the art. Furthermore, we demonstrate sample efficiency: our method, trained on only $20\%$ of the data, is comparable to the current state-of-the-art method trained on $100\%$ of the data on two out of three evaluation metrics.
|
| 15 |
+
|
| 16 |
+
# 1 Introduction
|
| 17 |
+
|
| 18 |
+
Offline task-oriented dialogue (ToD) systems involve solving the disparate tasks of belief-state tracking, dialogue policy management, and response generation. Of these tasks, in this work we focus on dialogue policy management to improve the end-to-end performance of ToD. The need for sample
|
| 19 |
+
|
| 20 |
+
efficiency is key for learning offline task-oriented dialogue systems, as access to data is finite and expensive. Recent advancements in off-policy reinforcement learning methods that use offline data instead of a simulator have proven sample efficient (Thomas and Brunskill, 2016). The effective use of these techniques is hindered by the nature of ToD. For instance, bias correction in off-policy methods usually requires estimating the behaviour policy for a given state of the Markov Decision Process (MDP). In ToD, the per-turn annotated belief state does not capture the true state of the MDP; examples of such annotated belief states are shown in Fig. 1. Latent state information, such as prosody and the richness of natural language, induces stochasticity in the agent's responses. In addition to these shortcomings, the direct use of automatic evaluation metrics as rewards for policy learning is not desirable, since these metrics are often computed for the entire dialogue and not per turn; such rewards are therefore sparse and under-specified (Wang et al., 2020). Use of under-specified rewards often leads to a policy that suffers from high variance (Agarwal et al., 2019). Alternatively, imitation-learning-based methods fall short of reasoning about the outcome, as demonstrated in Fig. 1: Turns #3 and #2 are rich in semantic information, and Turn #3 is key to the success of the booking process, while Turn #4 contributes least to a successful outcome. Though the turns have varying levels of importance, each turn is treated equally in imitation learning. In the worst case, turns like Turn #4 appear more often than Turns #2 and #3 in a ToD dataset, thereby taking a greater share of the gradient budget.
|
| 21 |
+
|
| 22 |
+
We address the aforementioned shortcomings with the following key contributions:
|
| 23 |
+
|
| 24 |
+
1. We introduce pairwise causal reward learning to learn a fine-grained per-turn reward that reasons about the intention behind human utterances.
|
| 25 |
+
|
| 26 |
+
2. We propose a safe policy improvement method
|
| 27 |
+
|
| 28 |
+

|
| 29 |
+
Figure 1: A typical task-oriented dialogue conversation in the MultiWOZ2.0 dataset.
|
| 30 |
+
|
| 31 |
+
for the task-oriented dialogue setting that guarantees performance against a baseline.
|
| 32 |
+
|
| 33 |
+
Using these two methods, we demonstrate both performance and sample efficiency.
|
| 34 |
+
|
| 35 |
+
# 2 Related Works
|
| 36 |
+
|
| 37 |
+
With the release of the multi-domain, multi-turn MultiWOZ2.0 dataset (Budzianowski et al., 2018a), there has been a flurry of recent work: Zhang et al. (2019) use data augmentation, while Rastogi et al. (2019) and Hosseini-Asl et al. (2020) frame dialogue policy learning as a language modeling task. Among the works that use reinforcement learning, Mehri et al. (2019) use supervised learning to bootstrap followed by RL fine-tuning, whereas Zhao et al. (2019) use policy gradient on a latent action space instead of handcrafted ones. Jaques et al. (2019) and Wang et al. (2020) use batch RL for dialogue policy learning; Wang et al. (2020) are the first to argue that the direct use of automated evaluation metrics as rewards is under-specified for ToD policy learning. Recently there has also been a proliferation of systems based on large pre-trained language models, such as Hosseini-Asl et al. (2020), Lin et al. (2020), and Chen et al. (2019). More details contrasting the merits and limitations of these methods can be found in Sec. A.1.
|
| 38 |
+
|
| 39 |
+
The line of inverse RL used in this work can be traced back to Ziebart et al. (2008), who propose that roll-outs from expert demonstrations should have rewards exponentially higher than any other arbitrary
|
| 40 |
+
|
| 41 |
+
roll-outs. This method requires a normalizing constant that integrates over roll-outs, which is challenging to compute. Christiano et al. (2017) and Thananjeyan et al. (2020) propose relative comparisons of two roll-outs, thereby eliminating the need for the normalizing constant, and demonstrate this in the online setting.
|
| 42 |
+
|
| 43 |
+
# 3 Method

# 3.1 Preliminaries

We model task-oriented dialogue as a Markov decision process (MDP) (Sutton and Barto, 2018) with a set of states $S$ and actions $A$. The agent at time step $t$ with state $s_t$ performs a composite action $a_t$ as per a target policy $\pi_e(a_t|s_t)$ on the environment. The environment is defined by transition probabilities $P(s_{t+1}|s_t, a_t)$, a latent reward function $R(s_t, a_t, g)$, a discount factor $\gamma \in [0,1]$ and the goal of the dialogue $g$. The objective of the target policy $\pi_e$ is to maximize the discounted sum of future rewards on the MDP, given by the state-action value function $Q^{\pi_e}(s_t, a_t) = \mathbb{E}_{a_t \sim \pi_e, s_t \sim P}[\sum_{t'=t}^{T} \gamma^{t'-t} R(s_{t'}, a_{t'}, g)]$.

In offline Batch-RL, the agent does not get to interact with the environment; instead we are provided with offline data $D$ logged by human agents performing actions based on a latent stochastic behaviour policy $\pi_{b}$. A roll-out of a dialogue $\tau^i\in D$ is composed of $\tau^i = ((o_0^i,a_0^i),\dots,(o_{T - 1}^i,a_{T - 1}^i))$. Here $o_t$ is the observation at turn $t$, composed of $o_t = (b_t,u_t^u,u_{t - 1}^a)$, where $b_{t}$ is the belief state of the agent at turn $t$, and $u_{t}^{u}$ and $u_{t - 1}^{a}$ are the user and agent utterances at turns $t$ and $t - 1$ respectively.
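To make the data layout concrete, here is a minimal Python sketch of the roll-out structure described above; the class and field names are illustrative assumptions, not part of the original work.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    belief_state: str      # b_t: annotated belief state at turn t
    user_utterance: str    # u_t^u: user utterance at turn t
    agent_utterance: str   # u_{t-1}^a: agent utterance at the previous turn

@dataclass
class Turn:
    observation: Observation
    action: str            # a_t: composite (delexicalized) system action

@dataclass
class Rollout:
    goal: str              # g: goal of the dialogue
    turns: List[Turn]      # ((o_0, a_0), ..., (o_{T-1}, a_{T-1}))
```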
# 3.2 Safe policy improvement

Batch-RL entails training a target policy $\pi_e$ on roll-outs generated by a latent behaviour policy $\pi_b$. Directly optimizing on roll-outs generated by a policy other than the target policy leads to large bias in the value function estimate, poor generalization, and sample inefficiency (Thomas and Brunskill, 2016). Safe policy improvement ensures that the new policy's performance is bounded with respect to a baseline policy. This is expressed as:
$$
Pr(V^{\pi_e} \geq V^{\pi_b} - \zeta) \geq 1 - \delta,
$$
where $V^{\pi_e}$ and $V^{\pi_b}$ are the value functions of the target and behaviour policy respectively. Here $1 - \delta$ and $\zeta$ are the high-probability and approximation metaparameters respectively. Schulman et al. (2015) provide such an update mechanism, (1), whose errors are bounded as long as the constraints of (1) are met, where $D_{KL}(\cdot||\cdot)$ is the KL divergence and $\eta$ is a hyper-parameter.

Figure 2: Stochasticity, i.e., the number of distinct dialogue acts for each delexicalized belief state in the MultiWoz2.0 dataset
$$
\begin{array}{l}
L_{sto}(\theta) = \min - \mathbb{E}_{\substack{s_t \sim P^{\pi_{bs}} \\ a_t \sim \pi_{bs}}} \left[ \frac{\pi_e(a_t | s_t; \theta)}{\pi_{bs}(a_t | s_t)} Q^{\pi_{bs}}(s_t, a_t) \right] \\
\text{s.t. } \mathbb{E}_{s_t \sim P^{\pi_{bs}}} \left[ D_{KL}\left(\pi_{bs}(\cdot | s_t) \,\|\, \pi_e(\cdot | s_t)\right) \right] \leq \eta \tag{1}
\end{array}
$$
Schulman et al. (2015) originally formulated (1) for online learning as a trust region for policy updates, using the policy before the gradient update as the baseline policy, $\pi_{bs}(a_t|b_t;\theta_{old})$. In this work we adapt it to the offline setting and use the behaviour policy $\pi_b$ as the baseline policy. This update rule requires access to the behaviour policy $\pi_b(a_t|s_t)$, which is intractable to estimate, and learnt estimates may be biased. Using such a behaviour policy to perform bias correction by importance sampling (Precup, 2000) might lead to a worse policy. Instead we estimate the behaviour policy conditioned only on the annotated belief state $b_t$, rather than the true state $s_t$ in (1), which results in a stochastic behaviour policy. This stochasticity of dialogue acts vis-à-vis annotated belief states can be observed in Fig. 2. We also estimate the Q-function of the behaviour policy, $Q^{\pi_b}(b_t,a_t)$, using the learnt reward $R(s_{t},a_{t},g)$; more on the learnt reward in Sec. 3.3.

The belief state $b_{t}$ is part of the observation $o_{t}$; hence we posit that, given more evidence of the observation $o_t$ (beside $b_t$), the mode of the policy collapses to a near-deterministic action. To factor this into policy learning, we add a second loss:
$$
L_{det}(\theta) = \min - \mathbb{E}_{(o_t, a_t) \sim D} \left[ G(\tau, t) \log \pi_e(a_t \mid o_t; \theta) \right] \tag{2}
$$
where the return $G(\tau, t) = \sum_{t' = t}^{T} \gamma^{t' - t} R(s_{t'}, a_{t'}, g)$ is the discounted sum of future rewards for roll-out $\tau$ with goal $g$. The policy optimization loss function is then given by:
$$
L(\theta) = \alpha L_{sto}(\theta) + (1 - \alpha) L_{det}(\theta) \tag{3}
$$
We achieve this by performing two forward passes of the policy network $\pi_e(a_t|o_t;\theta)$: first with only the belief state $b_{t}$ as input, and second with the entire observation, i.e., $o_{t} := (b_{t},u_{t}^{u},u_{t-1}^{a})$, as input. We then use the corresponding action distributions $\pi_e(a_t|b_t;\theta)$ and $\pi_e(a_t|o_t;\theta)$ in loss functions (1) and (2) respectively.
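The two-pass computation of the combined objective (3) could look like the following PyTorch-style sketch. The function signature, tensor shapes, and the relaxation of the KL constraint in (1) into an additive penalty are our own assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def caspi_policy_loss(policy, belief, observation, action,
                      q_bs, pi_bs_dist, returns, alpha, kl_weight):
    """Sketch of L(theta) = alpha * L_sto + (1 - alpha) * L_det (Eq. 3).

    belief:      tensor encoding b_t only                  [B, d]
    observation: tensor encoding the full o_t              [B, d]
    action:      logged action indices a_t                 [B]
    q_bs:        estimated Q^{pi_bs}(b_t, a_t)             [B]
    pi_bs_dist:  estimated behaviour policy pi_bs(.|b_t)   [B, A]
    returns:     discounted returns G(tau, t)              [B]
    """
    # First pass: condition only on the belief state (stochastic branch, Eq. 1).
    log_pi_b = F.log_softmax(policy(belief), dim=-1)
    pi_e_at = log_pi_b.gather(-1, action.unsqueeze(-1)).squeeze(-1).exp()
    pi_bs_at = pi_bs_dist.gather(-1, action.unsqueeze(-1)).squeeze(-1)
    l_sto = -(pi_e_at / pi_bs_at * q_bs).mean()
    # KL(pi_bs || pi_e) constraint, relaxed here into a penalty term.
    kl = (pi_bs_dist * (pi_bs_dist.clamp_min(1e-8).log() - log_pi_b)).sum(-1).mean()
    l_sto = l_sto + kl_weight * kl

    # Second pass: condition on the full observation (deterministic branch, Eq. 2).
    log_pi_o = F.log_softmax(policy(observation), dim=-1)
    log_pi_at = log_pi_o.gather(-1, action.unsqueeze(-1)).squeeze(-1)
    l_det = -(returns * log_pi_at).mean()

    return alpha * l_sto + (1 - alpha) * l_det
```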
# 3.3 Pairwise causal reward learning

# Algorithm 1 CASPI

Input: Dialogue dataset $D$ and evaluation metric $M(\cdot)$

Sub-sample K folds of train and val sets $\{(D_T, D_V)_1, \dots, (D_T, D_V)_K \mid (D_T, D_V) \sim D\}$

for $\forall (D_T,D_V)$ do

Learn ToD in a supervised setting by optimizing the objective:

$$
\min - \mathbb{E}_{(s_t, a_t) \sim D_T} \log \left(\pi_m(a_t \mid s_t)\right)
$$

for $\forall$ epoch do

Using $\pi_{m}(a_{t}|s_{t})$, predict actions on the val set $D_V$ and add them to the dataset $D_P$, along with the corresponding metric score $M(\tau)$, for pairwise causal reward learning:

$$
D_P = D_P \cup (\tau, M(\tau)) \mid \tau \sim \pi_m
$$

end for

end for

repeat

Sample a pair of roll-outs $(\tau^1, \tau^2) \sim D_P$

Learn $R(\cdot)$ by optimizing objective (4)

until convergence, using data $D_P$

repeat

Optimize policy $\pi_{e}$ using objective (3)

until convergence, using data $D$
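A compact Python sketch of Algorithm 1 follows. All helper functions (kfold_split, init_policy, train_supervised_step, predict_rollout, init_reward_model, train_reward_step, train_policy_step, converged) are hypothetical placeholders for the corresponding steps, not APIs from the paper.

```python
import random

def caspi_training(D, M, K=10, epochs=10):
    """Sketch of Algorithm 1 (CASPI) with hypothetical helper functions."""
    D_P = []  # roll-outs paired with their metric scores
    for D_T, D_V in kfold_split(D, K):
        pi_m = init_policy()
        for _ in range(epochs):
            train_supervised_step(pi_m, D_T)      # maximize log pi_m(a_t | s_t)
            # While pi_m is still learning, its predictions on the val fold act
            # as near-optimal counterfactuals for reward learning.
            for dialogue in D_V:
                tau = predict_rollout(pi_m, dialogue)
                D_P.append((tau, M(tau)))
    R = init_reward_model()
    while not converged(R):
        tau1, tau2 = random.sample(D_P, 2)        # pair of scored roll-outs
        train_reward_step(R, tau1, tau2)          # objective (4)
    pi_e = init_policy()
    while not converged(pi_e):
        train_policy_step(pi_e, D, R)             # objective (3)
    return pi_e
```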
The policy optimization objective introduced in the previous section requires access to a per-timestep reward $R(s_{t},a_{t},g)$. To this end, we provide a mechanism to learn a reward that is causally reasoned on the intention of the human demonstrator. Usually ToD systems are evaluated using dialogue-level automatic evaluation metrics $M(\cdot)$. Given the large state-action space of the dialogue management system, such dialogue-level feedback is under-specified for effective policy learning (Wang et al., 2020). Details about the choice of evaluation metric $M(\cdot)$ are covered in Sec. 4.4.2.

Figure 3: Process flow of pairwise causal reward learning
To address this under-specified feedback, we adapt the preference learning introduced by Christiano et al. (2017) from an online to an offline setting, to learn a fine-grained per-dialogue-turn (i.e., per-timestep $t$) reward $R(s_{t},a_{t},g)$. Consider a pair of roll-outs $\tau^1,\tau^2\in D$ whose actions for each state are sampled from a pair of different policies $\pi_m^1$ and $\pi_m^2$ respectively. Let $\tau^1\succ \tau^2$ represent a preference for roll-out $\tau^1$ over roll-out $\tau^2$. This preference holds when the sums of per-turn rewards of the two roll-outs satisfy $\sum_{t = 0}^{T}R(s_{t},a_{t},g|(s_{t},a_{t})\in \tau^{1}) > \sum_{t = 0}^{T}R(s_{t},a_{t},g|(s_{t},a_{t})\in \tau^{2})$. For brevity, henceforth we refer to $\sum_{t = 0}^{T}R(s_{t},a_{t},g|(s_{t},a_{t})\in \tau)$ as $R(\tau)$. The preferential probability of one roll-out over another can then be represented by:
$$
P[\tau^1 \succ \tau^2] = \frac{\phi(R(\tau^1))}{\phi(R(\tau^1)) + \phi(R(\tau^2))}
$$
Here $\phi(\cdot)$ can either be $\exp(\cdot)$ or the identity $\mathbb{1}(\cdot)$; in our experiments, the latter works best. We optimize the reward $R(s_{t},a_{t},g)$ by minimizing the binary cross-entropy loss between the preference probability and the normalized metric score $\mu(\tau)$ of a pair of roll-outs:
$$
\begin{array}{l}
L(\theta) = \min - \mathbb{E}_{\tau^1 \sim \pi_m^1, \tau^2 \sim \pi_m^2} [ \mu(\tau^1) \log P[\tau^1 \succ \tau^2] \\
\quad + \mu(\tau^2) \log P[\tau^2 \succ \tau^1] ] \tag{4}
\end{array}
$$
where

$$
\mu(\tau^1) = \frac{M(\tau^1)}{M(\tau^1) + M(\tau^2)} \tag{5}
$$
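Under these definitions, the reward-learning objective might be computed as in the following sketch, using $\phi$ as the identity by default and the fact that $\mu(\tau^2) = 1 - \mu(\tau^1)$ and $P[\tau^2 \succ \tau^1] = 1 - P[\tau^1 \succ \tau^2]$; names and shapes are our assumptions.

```python
import torch

def pairwise_reward_loss(R_tau1, R_tau2, M_tau1, M_tau2, use_exp=False):
    """Sketch of the pairwise causal reward learning objective (Eq. 4 and 5).
    R_tau*: summed per-turn predicted rewards R(tau) for each roll-out  [B]
    M_tau*: dialogue-level metric scores M(tau) for each roll-out       [B]
    """
    phi = torch.exp if use_exp else (lambda x: x)       # phi(.): exp or identity
    p12 = phi(R_tau1) / (phi(R_tau1) + phi(R_tau2))     # P[tau1 > tau2]
    mu1 = M_tau1 / (M_tau1 + M_tau2)                    # normalized metric (Eq. 5)
    # Binary cross-entropy between preference probability and soft target mu.
    return -(mu1 * torch.log(p12) + (1 - mu1) * torch.log(1 - p12)).mean()
```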
We observe that the dialogue roll-outs are generated by an expert latent policy; the data (dialogue roll-outs) are distributed as per the optimal latent policy and transition probabilities. We propose that the predictions made by a policy while it is learning to maximize the likelihood of the data form a good curriculum for exploring the state-action space for pairwise reward learning. This is a key insight of this work.
We formalize this insight into the method depicted in Fig. 3 and Algo. 1. The (train) dataset is sub-sampled into $K$-fold train & val sets. $K$ baseline policies are trained to fit the data distribution generated by experts using a cross-entropy loss, i.e., supervised learning. While fitting the data distribution, the still-learning $K$ policies are used to predict on their corresponding val folds at every epoch of training. Each of these predictions is then scored by a chosen dialogue-level metric $M(\cdot)$. On convergence of this supervised learning process, pairs of dialogue predictions generated by the above process, along with their corresponding metric scores, are used to train the fine-grained reward $R(s_{t}, a_{t}, g)$ using objective (4).
The use of K-fold sub-sampling, the $K$ baseline policies $\pi_{m}$, and actions sampled from these $K$ policies while they are still learning helps generate counterfactual examples in the action space. These counterfactual actions, close to the optimal policy, along with the goal of the dialogue, help us learn the subtle nuances of the fine-grained reward function $R(s_{t},a_{t},g)$ in the region of the action space that matters most.
# 4 Experimental Settings

# 4.1 Model

# 4.1.1 CASPI(.)
The reward $R(s_{t}, a_{t}, g)$ learnt using CASPI is akin to a sample weight for each dialogue turn, helping to redistribute the gradient budget among dialogue turns based on their contribution to the overall success of the ToD.
$$
\theta := \theta - R(s_t, a_t, g) \nabla \pi_{\text{blackbox}}(a_t | s_t; \theta) \tag{6}
$$
Hence we believe our pairwise causal reward learning, and the associated improvement in sample efficiency, are independent of model architecture. To this end we choose two ToD methods at the extremes of the model architecture spectrum: 1) one uses a lightweight custom model, and 2) the other uses a large, standard, pre-trained out-of-the-box universal language model.
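Viewed this way, a single gradient step might look like the following sketch; the `policy` argument and the use of a cross-entropy surrogate for the black-box likelihood are illustrative assumptions on our part.

```python
import torch.nn.functional as F

def reward_weighted_update(policy, optimizer, state, action, reward):
    """Sketch of the reward-as-sample-weight view in Eq. 6: the learnt per-turn
    reward scales each turn's share of the gradient budget."""
    logits = policy(state)
    # Per-turn negative log-likelihood, weighted by the learnt reward R(s_t, a_t, g).
    nll = F.cross_entropy(logits, action, reduction="none")
    loss = (reward * nll).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```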
# 4.1.2 CASPI(DAMD)

In this setting, we use the neural model proposed by Zhang et al. (2019). DAMD is composed of three seq2seq generative models using GRUs, one each for the belief state, dialogue act and response generation modules. An attention layer attends over the outputs of the seq2seq models with the context vector of the previous turn for a copy-over mechanism. The outputs of this attention layer are used as representations for predicting the series of tokens for the respective modules. For more details on the model architecture and parameter settings, refer to Zhang et al. (2019). In this setting we use both the stochastic loss $L_{sto}$ and the deterministic loss $L_{det}$ on the dialogue act. For DST and response generation, we retain the cross-entropy loss as is from DAMD (Zhang et al., 2019).
# 4.1.3 CASPI(MinTL)

At the other extreme of model complexity, we use the task-oriented dialogue model MinTL (Lin et al., 2020). MinTL uses a large pretrained language model, BART (Lewis et al., 2019). BART is a standard encoder-decoder transformer with a bidirectional encoder and an autoregressive decoder. It is pre-trained on the task of denoising corrupted documents, using a cross-entropy loss between the decoder output and the original document. For more details of the model architecture and parameter settings, we refer the reader to Lin et al. (2020) and Lewis et al. (2019).

MinTL doesn't explicitly predict dialogue acts. Hence we only use the deterministic loss $L_{det}$, applied directly on the generated response; for DST we retain the loss as is from MinTL (Lin et al., 2020).
# 4.1.4 Pairwise Causal Learning Network

For the K-model training of pairwise causal reward learning illustrated in Fig. 3, we chose the DAMD (Zhang et al., 2019) model for its lightweight architecture. In all our experiments, we use $K = 10$.

For the pairwise causal reward learning network, we use three single bi-LSTM layers, one each to encode the goal, the belief state, and either the dialogue act or the response sequence at each dialogue turn of the sampled roll-out pairs $\tau^1$ and $\tau^2$. The three encoded representations are concatenated and fed through a couple of feed-forward layers before making a bounded reward prediction $R(s_{t},a_{t},g)\in [0,1]$ for each turn using a sigmoid function. The per-turn rewards are summed to form a global reward $R(\tau)$ for the roll-out $\tau$. Using a pair of dialogue rewards $R(\tau^{1})$ and $R(\tau^{2})$, we compute the probabilistic preference between the roll-outs, $P[\tau^1\succ \tau^2]$, either by standard normalization or a softmax function. The output is optimized using the binary cross-entropy loss described in Eq. 4. This architecture is illustrated in Fig. 10.
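A minimal sketch of such a reward network follows; the embedding and hidden dimensions, pooling choices and layer sizes are our assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class PairwiseRewardNet(nn.Module):
    """Sketch: three bi-LSTM encoders (goal, belief state, act/response)
    feeding feed-forward layers and a sigmoid head for a per-turn reward."""

    def __init__(self, vocab_size, emb_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.enc_goal = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.enc_belief = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.enc_act = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(6 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),  # bounded per-turn reward in [0, 1]
        )

    def _encode(self, lstm, tokens):
        _, (h, _) = lstm(self.emb(tokens))       # h: [2, B, hidden]
        return torch.cat([h[0], h[1]], dim=-1)   # concat both directions

    def forward(self, goal, belief, act):
        # goal/belief/act: [B, seq_len] token ids for one dialogue turn
        z = torch.cat([self._encode(self.enc_goal, goal),
                       self._encode(self.enc_belief, belief),
                       self._encode(self.enc_act, act)], dim=-1)
        return self.ffn(z).squeeze(-1)           # R(s_t, a_t, g) per turn
```

Summing these per-turn outputs over a roll-out yields $R(\tau)$, which plugs directly into the pairwise loss sketched earlier.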
# 4.2 Dataset

We evaluate our proposed method on the Multi-domain Wizard-of-Oz (MultiWoz) (Budzianowski et al., 2018a) dataset. It is a large-scale multi-domain, task-oriented dataset generated by human-to-human conversations, where one participant plays the role of the user while the other plays the agent. The conversations are between a tourist and a clerk at an information center, and span 7 domains: attraction, hospital, hotel, police, restaurant, taxi and train. Each dialogue is generated by users with a defined goal, which may cover 1-5 domains, with a maximum of 13 turns per conversation. The dataset has 10,438 dialogues, split into 8,438 dialogues for the training set and 1,000 dialogues each for the validation and test sets.
# 4.3 Preprocessing

We represent DB results as one-hot vectors, as proposed by Budzianowski et al. (2018b). To reduce surface-level variability in the responses, we use the domain-adaptive delexicalization preprocessing proposed in Wen et al. (2016). As proposed in Zhang et al. (2019), we generate delexicalized responses with placeholders for specific values, which can be filled with information from the DST and the database.
# 4.4 Metrics

# 4.4.1 Evaluation

We evaluate the performance of our method on the end-to-end dialogue modeling task of MultiWoz2.0 (Budzianowski et al., 2018a), using the three evaluation metrics proposed by Budzianowski et al. (2018a): 1) inform rate - the fraction of dialogues in which the system has provided the correct entity, 2) success rate - the fraction of dialogues in which the system has answered all the requested information, and 3) BLEU (Papineni et al., 2002) - which measures the fluency of the generated responses. We also report the combined score $(\text{Inform} + \text{Success}) \times 0.5 + \text{BLEU}$ proposed by Mehri et al. (2019). All CASPI numbers reported in this work are medians of 5 runs with different seeds.
# 4.4.2 Training

For the metric $M$ used in pairwise causal reward learning, we use the following:
$$
M := \text{Inform} + \text{Success} + \lambda \times \text{BLEU} \tag{7}
$$
This is very similar to the combined score used in evaluation; the two are equivalent when $\lambda = 2$. We introduce the hyperparameter $\lambda$ to normalize the achievable scale of BLEU. We observe that the success rate, if used as is, results in a non-Markovian and stochastic per-turn reward function, because the reward of the current state depends on the performance of future states. Hence, we also use a soft version of the metric, $M_{soft}$, where the success rate measures the fraction of requested information provided in a dialogue; we refer to the original metric that uses the discrete variant of the success rate as $M_{hard}$. The action in the reward function $R(s_t, a_t, g)$ can be either the dialogue act or the generated response; we refer to the corresponding variants of the metrics as $M(act)$ and $M(resp)$. To demonstrate the versatility of our method in adapting to different metrics, we use all the discussed variants.
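Under our reading of the text, the soft and hard variants of the training metric could be computed as in the following sketch; the argument names and the exact slot-counting scheme are assumptions.

```python
def metric_M(inform, requested, provided, bleu, lam=2.0, soft=True):
    """Sketch of the training metric M (Eq. 7).
    inform:    1.0 if the correct entity was provided, else 0.0
    requested: number of information slots the user asked for
    provided:  number of those slots the system actually answered
    """
    frac = provided / max(requested, 1)
    # M_soft uses the fraction of requested info; M_hard the all-or-nothing version.
    success = frac if soft else float(frac == 1.0)
    return inform + success + lam * bleu
```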
# 5 Result

We compare both adaptations of our method, CASPI(DAMD) and CASPI(MinTL), on the end-to-end dialogue task defined by MultiWoz2.0 (Budzianowski et al., 2018a). The results are tabulated in Table 1. CASPI(DAMD), with its lightweight model architecture and no pretraining on any external corpus, outperforms all other previous methods except LAVA (Lubis et al., 2020), including methods that use large pretrained language models such as Hosseini-Asl et al. (2020), Peng et al. (2020) and Lin et al. (2020). This shows that using CASPI to shepherd the gradient update process, via sample weights for each dialogue turn, leads to a model that is well aligned with the true objective of the task. CASPI(MinTL), with its robust pretrained model, outperforms CASPI(DAMD) and LAVA (Lubis et al., 2020) by a large margin. This demonstrates the ease of adapting existing methods to CASPI.
# 5.1 Sample Efficiency

Inverse reinforcement learning coupled with off-policy policy learning and evaluation is proven to be sample efficient (Thomas and Brunskill, 2016). We argue that CASPI is competitive with other sample-efficiency techniques, such as the data augmentation and transfer learning performed by Zhang et al. (2019) and Lin et al. (2020) respectively. To test this hypothesis, we evaluate our method against baselines in a low-sample-complexity regime. For the experimental setup, we adopt the low-resource testing strategy from Lin et al. (2020): we train our model on $5\%$, $10\%$, and $20\%$ of the training data and compare with other baselines on the end-to-end dialogue task. Table 2 lists the results. CASPI(MinTL) trained on only $20\%$ of the data outperforms the previous state-of-the-art methods, LAVA (Lubis et al., 2020) and MinTL (Lin et al., 2020), trained on $100\%$ of the data, on two of the three performance metrics. This shows that having the right reward function to guide the budget of the gradient update process toward the true objective is important in extremely low-resource settings.
# 5.2 Human Evaluation

Automatic evaluation metrics have their own biases. The true objective of ToD is the human experience while interacting with the dialogue system, which automatic evaluation metrics may fail to capture. To this end we conduct a human evaluation of the quality of the generated responses. We define quality by the following criteria:

1) Appropriateness: Are the generated responses appropriate for the given context in the dialogue turn?
2) Fluency: Are the generated responses coherent and comprehensible?

<table><tr><td>Model</td><td>Pre-trained model</td><td>Inform %</td><td>Success %</td><td>BLEU</td><td>Combined Score</td></tr><tr><td>DAMD</td><td>No</td><td>72.79</td><td>60.45</td><td>16.93</td><td>83.55</td></tr><tr><td>DAMD + multi-action</td><td>No</td><td>76.33</td><td>64.35</td><td>17.96</td><td>88.30</td></tr><tr><td>SimpleTOD</td><td>Yes</td><td>84.4</td><td>70.10</td><td>15.01</td><td>92.26</td></tr><tr><td>SOLOIST</td><td>Yes</td><td>85.5</td><td>72.90</td><td>16.54</td><td>95.74</td></tr><tr><td>MinTL-BART</td><td>Yes</td><td>84.88</td><td>74.91</td><td>17.89</td><td>97.79</td></tr><tr><td>LAVA</td><td>Yes</td><td>91.80</td><td>81.80</td><td>12.03</td><td>98.47</td></tr><tr><td>CASPI(DAMD), Msoft(act)</td><td>No</td><td>89.1</td><td>76.1</td><td>18.08</td><td>100.68</td></tr><tr><td>CASPI(MinTL), Msoft(act)</td><td>Yes</td><td>94.59</td><td>85.59</td><td>17.96</td><td>108.05</td></tr><tr><td>CASPI(MinTL), Mhard(act)</td><td>Yes</td><td>93.79</td><td>84.88</td><td>17.47</td><td>106.81</td></tr></table>

Table 1: Comparison of results for the end-to-end task of MultiWoz2.0.

<table><tr><td rowspan="2">Model</td><td colspan="3">5%</td><td colspan="3">10%</td><td colspan="3">20%</td></tr><tr><td>Inform</td><td>Success</td><td>BLEU</td><td>Inform</td><td>Success</td><td>BLEU</td><td>Inform</td><td>Success</td><td>BLEU</td></tr><tr><td>MD-Sequicity</td><td>49.40</td><td>19.70</td><td>10.30</td><td>58.10</td><td>34.70</td><td>11.40</td><td>64.40</td><td>42.10</td><td>13.00</td></tr><tr><td>DAMD</td><td>56.60</td><td>24.50</td><td>10.60</td><td>62.00</td><td>39.40</td><td>14.50</td><td>68.30</td><td>42.90</td><td>11.80</td></tr><tr><td>MinTL</td><td>75.48</td><td>60.96</td><td>13.98</td><td>78.08</td><td>66.87</td><td>15.46</td><td>82.48</td><td>68.57</td><td>13.00</td></tr><tr><td>CASPI(MinTL),Msoft(resp)</td><td>87.69</td><td>71.17</td><td>13.51</td><td>82.08</td><td>72.27</td><td>14.10</td><td>89.39</td><td>78.58</td><td>15.16</td></tr><tr><td>CASPI(MinTL),Mhard(resp)</td><td>89.69</td><td>69.47</td><td>13.33</td><td>92.59</td><td>78.58</td><td>14.48</td><td>94.19</td><td>83.28</td><td>13.65</td></tr></table>

Table 2: Comparison of results for the end-to-end task of MultiWoz2.0 in the low-resource setting.
A dialogue turn in the test set is randomly picked. The human evaluators were shown the context leading up to the turn. The predictions for the turn by different methods were anonymized and displayed to the evaluators, as illustrated in Fig. 4. The human evaluators were asked to give a score between 1 and 5 for appropriateness and fluency, with a score of 5 being the best and 1 the worst. 100 randomly selected dialogue turns were presented to 10 participants, and we report the mean and variance of the scores. We compare our model's performance against MinTL (Lin et al., 2020), SimpleTOD (Hosseini-Asl et al., 2020), LAVA (Lubis et al., 2020) and DAMD (Zhang et al., 2019). Fig. 5 shows the results of the evaluation. CASPI(MinTL) outperforms all other models in appropriateness score, while the fluency scores of CASPI(MinTL), MinTL and SimpleTOD are comparable to each other. It is worth noting that though LAVA (Lubis et al., 2020) performs well on automatic evaluation metrics, it performs poorly in human evaluation. We suspect the policy learnt by Lubis et al. (2020) exploits gaps in the reward function; in the case of LAVA, the success rate is used as the reward function. In our analysis, a low BLEU score is a good indicator that the learnt policy indulges in reward hacking, which LAVA (Lubis et al., 2020) exhibits. More on reward hacking in Sec. 5.4.2.
# 5.3 Human in the loop training

In the previous section we argued that automatic dialogue evaluation metrics are biased and do not truly reflect the human objective, yet in our method we use these very same evaluation metrics to learn the reward $R(s_{t},a_{t},g)$. To bridge this gap, we performed the following human-in-the-loop (HITL) experiment. We first trained a pair of CASPI(MinTL) models with different seeds on 5% of the MultiWoz2.0 dataset. We then used this pair of models to predict on 0.5% of the MultiWoz2.0 train data (40 dialogues) and had a human score these pairs of generated responses relative to each other. We then trained the reward $R(s_{t},a_{t},g)$ using pairwise causal reward learning as described in Sec. 3.3, where the examples of each mini-batch are randomly sampled either from the human-scored examples or from those scored by the automatic evaluation metric, as shown in Fig. 6. We then trained a fresh CASPI(MinTL) model on the original 5% of data with the learnt $R(s_{t},a_{t},g)$. We performed a human evaluation on 24 dialogues using 3 participants; Fig. 7 shows the performance.
<table><tr><td>user</td><td>i need to find a [value_type] that has free parking .</td></tr><tr><td>agent</td><td>sure i can help with that . what area will you be staying in ?</td></tr><tr><td>user</td><td>in the [value_area] , i more specifically am looking for a [value_type] with a [valuestars] star rating in that area .</td></tr><tr><td>agent-gt</td><td>[value_name] is a [valuestars] star [value_type] located in the [value_area] has both free parking and wifi . what days are you interested in staying there ? can i book for you ?</td></tr><tr><td>MinTL</td><td>i have [value_choice] [value_type] in the [value_area] with a [valuestars] star rating . it is [value_name] . would you like me to book it for you ?</td></tr><tr><td>CASPI(MinTL)</td><td>i have [value_choice] [value_type] in the [value_area] . [value_name] is [value_price] and [valuestars] stars . would you like me to book it ?</td></tr><tr><td>DAMD</td><td>i have [value_choice] options for you . is there a certain area you are looking for ?</td></tr><tr><td>SimpleTOD</td><td>there are no guesthouses with a [value_count] star rating in the [value_area], but there are [value_count] star guesthouses with free parking in the [value_pricerange] to [value_pricerange] price range</td></tr><tr><td>LAVA</td><td>the [hotel_name] is located at [hotel_address] , postcode [hotel_postcode] . the phone number is [hotel_phone] .</td></tr></table>

Figure 4: Example of generated responses by different ToD models

Figure 5: Human evaluation on the criteria: Appropriateness and Fluency
Though CASPI(MinTL) using just $5\%$ of the data outperforms DAMD trained on $100\%$ of the data on 2 out of the 3 automatic evaluation metrics shown in Tables 1 and 2, it performs poorly on the human appropriateness score. With the HITL scores in the reward learning, we see a boost in performance on both human evaluation criteria, appropriateness and fluency: the $5\%$-data CASPI(MinTL)'s human appropriateness score is now comparable to that of $100\%$-data DAMD. This shows the versatility of pairwise causal reward learning: given enough expressiveness of the neural network used, it can generalize to unknown dialogue evaluation criteria.
Figure 6: Mixed human-in-the-loop and automatic evaluation metric scores for pairwise causal reward learning

Figure 7: Human evaluation of human-in-the-loop training of CASPI(MinTL) on $5\%$ of the MultiWoz2.0 dataset

# 5.4 Analysis

# 5.4.1 Rewards

In this section we qualitatively analyze the results of pairwise causal reward learning. Fig. 8 shows the same conversation between a tourist and an information-center agent introduced earlier, now with the learnt reward $R(s_{t},a_{t},g)$ against each turn. We observe that Turn #3 receives the highest reward; retrospectively, we realize the transaction happens in this turn, which is crucial and has to be risk-averse for the success of the dialogue. Turn #2 gets the next-best reward, as it captures crucial information needed for the transaction in Turn #3. Turn #4 gets a reward an order of magnitude lower than Turns #3 and #2 because, other than nicety, it doesn't contribute much to the success of the conversation. It should be noted that responses like Turn #4 appear in almost all conversations, and in supervised learning these turns would receive the highest share of the gradient budget. The learnt reward redistributes the gradient budget based on each turn's contribution to the success of the dialogue objective.
# 5.4.2 Type of agents

In this section we analyze the types of behaviour CASPI agents sometimes exhibit, especially when trained in the low-sample regime.
Figure 8: Example of learnt reward

Figure 9: Example of agent behaviour in the low-sample regime

Greedy agent: In certain domains, the agent has a tendency to book a service before it has gathered all the required information, or before the user has requested or agreed to book the service. The first example in Fig. 9 demonstrates this behaviour: the user has requested a taxi, and before enough information, such as the destination or time of departure, is gathered, the agent books the taxi. This happens because there are gaps in the automatic evaluation metrics. A low BLEU score combined with relatively high inform and success rates might indicate greedy agent behaviour; other reasons for a low BLEU score include a lack of diversity in the responses or malformed responses.
Cautious agent: The agent tends to be cautious, providing long-winded replies packed with more information than needed. The agent tends to do this to avoid the risk of losing reward by missing any requested information. This behaviour is demonstrated in the second example in Fig. 9.

These subtle behaviours demonstrate gaps in the automatic evaluation metrics, which can be weeded out using the human-in-the-loop learning described in Sec. 5.3.
# 6 Conclusion

In this work we introduced a fine-grained reward learning process that uses under-specified metrics and expert demonstrations to efficiently learn task-oriented dialogue. We demonstrated the efficacy of our method on the MultiWoz2.0 dataset, with results comparable to the existing state-of-the-art method using only $20\%$ of the data. We believe the method is generic and can be extended to other NLP tasks.
# References

Rishabh Agarwal, Chen Liang, Dale Schuurmans, and Mohammad Norouzi. 2019. Learning to generalize from sparse and underspecified rewards. arXiv preprint arXiv:1902.07198.

Pawel Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Inigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. 2018a. MultiWOZ - a large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. arXiv preprint arXiv:1810.00278.

Pawel Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Inigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. 2018b. MultiWOZ - a large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. arXiv preprint arXiv:1810.00278.

Wenhu Chen, Jianshu Chen, Pengda Qin, Xifeng Yan, and William Yang Wang. 2019. Semantically conditioned dialog response generation via hierarchical disentangled self-attention. arXiv preprint arXiv:1905.12866.

Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, pages 4299-4307.

Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A simple language model for task-oriented dialogue. arXiv preprint arXiv:2005.00796.

Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, and Rosalind Picard. 2019. Way off-policy batch deep reinforcement learning of implicit human preferences in dialog. arXiv preprint arXiv:1907.00456.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.

Zhaojiang Lin, Andrea Madotto, Genta Indra Winata, and Pascale Fung. 2020. MinTL: Minimalist transfer learning for task-oriented dialogue systems. arXiv preprint arXiv:2009.12005.

Nurul Lubis, Christian Geishauser, Michael Heck, Hsien-chin Lin, Marco Moresi, Carel van Niekerk, and Milica Gašić. 2020. LAVA: Latent action spaces via variational auto-encoding for dialogue policy optimization. arXiv preprint arXiv:2011.09378.

Shikib Mehri, Tejas Srinivasan, and Maxine Eskenazi. 2019. Structured fusion networks for dialog. arXiv preprint arXiv:1907.10016.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.

Baolin Peng, Chunyuan Li, Jinchao Li, Shahin Shayandeh, Lars Liden, and Jianfeng Gao. 2020. SOLOIST: Few-shot task-oriented dialog with a single pretrained auto-regressive model. arXiv preprint arXiv:2005.05298.

Doina Precup. 2000. Eligibility traces for off-policy policy evaluation. Computer Science Department Faculty Publication Series, page 80.

Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2019. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. arXiv preprint arXiv:1909.05855.

John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. 2015. Trust region policy optimization. In International Conference on Machine Learning, pages 1889-1897.

Richard S Sutton and Andrew G Barto. 2018. Reinforcement learning: An introduction. MIT Press.

Brijen Thananjeyan, Ashwin Balakrishna, Ugo Rosolia, Felix Li, Rowan McAllister, Joseph E Gonzalez, Sergey Levine, Francesco Borrelli, and Ken Goldberg. 2020. Safety augmented value estimation from demonstrations (SAVED): Safe deep model-based RL for sparse cost robotic tasks. IEEE Robotics and Automation Letters, 5(2):3612-3619.

Philip Thomas and Emma Brunskill. 2016. Data-efficient off-policy policy evaluation for reinforcement learning. In International Conference on Machine Learning, pages 2139-2148.

Jianhong Wang, Yuan Zhang, Tae-Kyun Kim, and Yunjie Gu. 2020. Modelling hierarchical structure between dialogue policy and natural language generator with option framework for task-oriented dialogue system. arXiv preprint arXiv:2006.06814.

Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2016. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562.

Yichi Zhang, Zhijian Ou, and Zhou Yu. 2019. Task-oriented dialog systems that consider multiple appropriate responses under the same context. arXiv preprint arXiv:1911.10484.

Tiancheng Zhao, Kaige Xie, and Maxine Eskenazi. 2019. Rethinking action spaces for reinforcement learning in end-to-end dialog agents with latent variable models. arXiv preprint arXiv:1902.08858.

Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. 2008. Maximum entropy inverse reinforcement learning.
# A Appendix

# A.1 Baselines
DAMD: Introduced by Zhang et al. (2019), this is a domain-aware multi-decoder network. The method also exploits the stochastic nature of dialogue acts through a data-augmentation technique called multi-action data augmentation; DAMD with this augmentation is denoted here as DAMD + multi-action.
HDSA (Chen et al., 2019) proposes a hierarchical graph representation for dialogue acts. It uses a pre-trained 12-layer BERT model (Devlin et al., 2019) to represent dialogue acts. The predicted dialogue act is transformed into the hierarchical graph structure using a disentangled self-attention model, a 3-layer self-attention model (Vaswani et al., 2017).
SOLOIST (Peng et al., 2020) and SimpleTOD (Hosseini-Asl et al., 2020) use pretrained GPT-2-based methods. These methods are trained on turn-level data, without generated belief states and system acts in the dialog history.
MinTL-BART (Lin et al., 2020) introduces the Levenshtein belief spans framework, which predicts only the incremental change in dialogue state per turn. It leverages pretrained T5 and BART (Lewis et al., 2019) as backbones for the model architecture.
LAVA (Lubis et al., 2020) reduces the action space of the policy in end-to-end ToD by using the latent space of a variational model with an informed prior. The work uses variational distributions, via pretraining, to obtain an informed prior, and uses autoencoding as an auxiliary task to capture generative factors of dialogue responses.
HDNO, proposed by Wang et al. (2020), is a dialogue policy learning method for the context-to-response generation task of MultiWoz2.0 (Budzianowski et al., 2018a). It exploits the hierarchical nature of the dialogue act and response generation tasks by proposing an option-based framework of hierarchical RL, with a variational model to learn a latent dialogue act that corresponds to the natural language response. Unlike our method, HDNO, though it highlights the risk of sparsity when metric functions such as the success rate are used as reward functions, resorts to shaping a proxy reward function: it uses a Markov language model as the proxy reward, learnt independently of the metric function. Our method refrains from reward shaping and is independent of the nature of any under-specified metric function. Since we learn fine-grained turn-specific credit assignment, our solution can adapt to other metric functions as long as the pairwise reward network is rich enough to factorize them.
# A.2 Pairwise causal reward learning network architecture

Figure 10: Pairwise causal reward learning network architecture
caspicausalawaresafepolicyimprovementfortaskorienteddialogue/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:812c834e48d6eca349467fec49c23e0dc2803613436548e492e8fcc7bbceeb77
size 595850

caspicausalawaresafepolicyimprovementfortaskorienteddialogue/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:67acebfca9d719a039a946a7ef7c214090cbb35d3f18c92027ee531acd06ffd1
size 395653

ecareanewdatasetforexploringexplainablecausalreasoning/8e503919-6886-4a5c-a4ba-5a0878d1b23e_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0d9ae3492a4d3d54a4a094c6e047b10d3ab178fc9b8933cad12cf528fa065cea
size 110055

ecareanewdatasetforexploringexplainablecausalreasoning/8e503919-6886-4a5c-a4ba-5a0878d1b23e_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:445665a361eeeb29c242625ff3e56f7d5ba5cd04276b9e1ebe649489e94d41de
size 131984

ecareanewdatasetforexploringexplainablecausalreasoning/8e503919-6886-4a5c-a4ba-5a0878d1b23e_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ef3d03f6889e4fbc8c186f141c401af6a2e9aaf9ac849da209dce58e735805a4
size 657810
ecareanewdatasetforexploringexplainablecausalreasoning/full.md
ADDED
@@ -0,0 +1,506 @@
# e-CARE: a New Dataset for Exploring Explainable Causal Reasoning

Li Du, Xiao Ding*, Kai Xiong, Ting Liu, and Bing Qin

Research Center for Social Computing and Information Retrieval

Harbin Institute of Technology, China

{ldu,xding,kxiong,tliu,qinb}@ir.hit.edu.cn

# Abstract
Understanding causality has vital importance for various Natural Language Processing (NLP) applications. Beyond the labeled instances, conceptual explanations of the causality can provide deep understanding of the causal facts to facilitate the causal reasoning process. However, such explanation information still remains absent in existing causal reasoning resources. In this paper, we fill this gap by presenting a human-annotated explainable CAusal REasoning dataset (e-CARE), which contains over 21K causal reasoning questions, together with natural-language-formed explanations of the causal questions. Experimental results show that generating valid explanations for causal facts still remains especially challenging for the state-of-the-art models, and that the explanation information can be helpful for promoting the accuracy and stability of causal reasoning models.

# 1 Introduction

Causal reasoning is one of the most central cognitive abilities of human beings (Waldmann and Hagmayer, 2013; Jonassen et al., 2008), which enables one to understand the observed facts and predict the future. However, although recent causal reasoning models have achieved impressive performance on certain hand-crafted datasets, there still remains a considerable gap compared to human performance, as they cannot achieve stable performance across different datasets and are susceptible to adversarial attacks (McCoy et al., 2019; Poliak et al., 2018; Gururangan et al., 2018).

Figure 1: Conceptual explanations of observed causality can be helpful for understanding unseen causal facts.

One key factor leading to such a drastic contrast is that present causal reasoning models only learn to induce empirical causal patterns that are predictive of the label, while human beings seek deep and conceptual understanding of the causality to explain the observed causal facts. The conceptual explanations can not only serve as a touchstone to examine whether the underlying causal mechanism has been thoroughly understood, but can also in turn support the causal reasoning process. As illustrated in Figure 1, observing the causal fact $C_1$: adding rock into hydrochloric acid causes $E_1$: rock dissolved, one may further ask why such a causal relationship exists and reach the plausible conceptual explanation that Acid is corrosive, which goes beyond the isolated facts and reaches the conceptual nature to reveal the principle of the causal mechanism.
However, despite the critical importance of conceptual explanations in causal reasoning, there is still a lack of such an explainable causal reasoning dataset. To fill this gap, we contribute an explainable CAusal REasoning dataset (e-CARE), together with a new causal explanation generation task and a novel Causal Explanation Quality (CEQ) evaluation metric.

The e-CARE dataset is constructed by crowdsourcing and contains over 21K multiple-choice causal reasoning questions, which makes e-CARE the largest human-annotated commonsense causal reasoning dataset to the best of our knowledge. In addition to the causal reasoning question itself, e-CARE also provides a free-text-formed conceptual explanation for each causal question to explain why the causation exists. On this basis, we propose a new causal explanation generation task that requires models not only to choose the correct causal fact but also to generate the explanation for the choice. In addition, to directly measure the quality of generated explanations, we propose a novel causal explanation quality evaluation metric (namely, the CEQ score). Compared to conventional text generation evaluation metrics such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004), which mainly evaluate the textual or semantic similarity between generated explanations and golden annotations, the CEQ score focuses on evaluating how much promotion an explanation can bring to understanding the causal mechanism. The dataset is publicly available at https://github.com/Waste-Wood/e-CARE/.
Experimental results demonstrate that the causal questions of e-CARE are still challenging for state-of-the-art (SOTA) pretrained language models, indicating the effectiveness of the e-CARE dataset in evaluating the causal learning ability of models. In addition, the explanation signal received in the training process can enhance the performance and the stability of the reasoning model, while the SOTA baselines still have trouble explaining the causal facts at a conceptual level. These analyses highlight the importance of conceptual explanations in causal reasoning, and suggest an avenue for future research.

# 2 Related Work

# 2.1 Commonsense Causal Reasoning Datasets
Existing commonsense causal reasoning corpora differ in their annotation guidelines and how they are constructed: (1) whether the corpus is automatically constructed or built by human annotation; (2) whether the annotation unit of the corpus is word-level, phrase-level, or sentence-level.

To obtain abundant causal knowledge, a natural way is extracting causal knowledge using heuristic rules from large-scale open-domain web text corpora (Luo et al., 2016; Li et al., 2020; Sap et al., 2019). However, reporting bias may challenge both the coverage and quality of the extracted causal knowledge.

Different from automatic construction, human annotation can endow datasets with higher precision. A line of work focuses on providing word-level causality knowledge (Girju et al., 2007; Mostafazadeh et al., 2016; Do et al., 2011; Hendrickx et al., 2019). However, a word is not a complete semantic unit, which may limit the integrity of causal expressions and lead to ambiguity. To address this issue, other datasets are constructed to provide phrase-level (Caselli and Vossen, 2017; Bethard and Martin, 2008; Mirza et al., 2014; Dunietz et al., 2017) and sentence-level (Ning et al., 2019; Roemmele et al., 2011) causal knowledge. Among these datasets, COPA (Roemmele et al., 2011) has become a widely adopted benchmark. Nevertheless, the size of COPA is rather limited, which may result in overfitting and arouse concerns about the confidence of the results.

<table><tr><td>Dataset</td><td>Annotation Unit</td><td>Size</td><td>Expl.</td></tr><tr><td colspan="4">Automatically-Built Datasets</td></tr><tr><td>CausalNet (Luo et al., 2016)</td><td>W</td><td>11M</td><td>N</td></tr><tr><td>CausalBank (Li et al., 2020)</td><td>P</td><td>314M</td><td>N</td></tr><tr><td colspan="4">Human-Annotated Datasets</td></tr><tr><td>SemEval-2007 T4 (Girju et al., 2007)</td><td>W</td><td>220</td><td>N</td></tr><tr><td>CaTeRS (Mostafazadeh et al., 2016)</td><td>W</td><td>488</td><td>N</td></tr><tr><td>EventCausalityData (Do et al., 2011)</td><td>W</td><td>580</td><td>N</td></tr><tr><td>SemEval-2010 T8 (Hendrickx et al., 2019)</td><td>W</td><td>1,003</td><td>N</td></tr><tr><td>ESC (Caselli and Vossen, 2017)</td><td>P</td><td>117</td><td>N</td></tr><tr><td>T-CBank (Bethard and Martin, 2008)</td><td>P</td><td>271</td><td>N</td></tr><tr><td>CausalTimeBank (Mirza et al., 2014)</td><td>P</td><td>318</td><td>N</td></tr><tr><td>BECauSE 2.0 (Dunietz et al., 2017)</td><td>P</td><td>1,803</td><td>N</td></tr><tr><td>TCR (Ning et al., 2019)</td><td>S</td><td>172</td><td>N</td></tr><tr><td>COPA (Roemmele et al., 2011)</td><td>S</td><td>1,000</td><td>N</td></tr><tr><td>e-CARE</td><td>S</td><td>21K</td><td>Y</td></tr></table>

Table 1: A list of previous commonsense causal reasoning datasets. In the column "Annotation Unit", "W", "P" and "S" are abbreviations of word, phrase and sentence, respectively. "Expl." is the abbreviation of "Explanation".
In this paper, we introduce an explainable CAusal REasoning dataset (e-CARE). As shown in Table 1, to the best of our knowledge, e-CARE is the largest human-annotated causal reasoning dataset. With more than 21,000 instances, the e-CARE dataset can serve as a more reliable benchmark. Furthermore, compared to previous work, e-CARE provides additional explanation information, which plays a critical role in learning the underlying mechanism of causal knowledge.

# 2.2 Explainable Textual Inference

Recently, an increasing number of datasets have been proposed to address the explainability of textual inference tasks, such as textual entailment inference (Camburu et al., 2018), question answering (QA) (DeYoung et al., 2019; Perez et al., 2019) and multi-hop QA (Ye et al., 2020). The form and content of the explanations vary with the nature of the specific tasks.

The QA task requires a model to answer a question based on evidence within given texts. Therefore, the explanation for this task should describe where and how an answer can be found (Wiegreffe and Marasovic, 2021). The explanations can have various forms, including answer-bearing sentences (Perez et al., 2019), structured information connecting the question and answer (Hancock et al., 2018; Ye et al., 2020), or even human-annotated free-formed sentences (Camburu et al., 2018; Rajani et al., 2019). In contrast, the multi-hop QA task requires the model to infer the correct answer through multiple reasoning steps. Hence, the explanation for this task needs to provide the specific reasoning paths (Wiegreffe and Marasovic, 2021; Jhamtani and Clark, 2020).
Our work is quite different from previous work. We notice that all of these previous work only offer explanations that explain a specific question. Whereas we aim at providing a conceptual understanding of the causality, which has the potential to explain a set of related causal observations, rather than only explain a specific causal fact.
|
| 63 |
+
|
| 64 |
+
# 3 e-CARE: an Explainable Causal Reasoning Dataset
|
| 65 |
+
|
| 66 |
+
e-CARE contains a total of 21,324 instances, corresponding to 13,048 unique explanations. This also makes e-CARE the largest human-annotated commonsense causal reasoning benchmark. The corpus-level statistics of the e-CARE dataset are shown in Table 2.
|
| 67 |
+
|
| 68 |
+
As shown in Table 3, each instance of the e-CARE dataset consists of two components: (1) a multiple-choice causal reasoning question, composed of a premise and two hypotheses, where one of the hypotheses can form a valid causal fact with the premise; (2) a conceptual explanation of the essential condition that enables the existence of the causal fact. For example, as Table 3 shows, the explanation points out the property of copper that makes the causal fact hold, namely that copper is a good thermal conductor, so that holding copper over fire will make one's fingers feel burnt immediately. The appendix provides more discussion of the explanations within e-CARE. On this basis, we introduce two tasks:
Causal Reasoning Task We formulate the causal
Table 2: Corpus level statistics of the e-CARE dataset. Uniq. Explanations refer to the explanations that only correspond to a single causal fact.
<table><tr><td>Premise: Tom holds a copper block by hand and heats it on fire.<br>Ask-for: Effect<br>Hypothesis 1: His fingers feel burnt immediately. (√)<br>Hypothesis 2: The copper block keeps the same. (×)</td></tr><tr><td>Explanation: Copper is a good thermal conductor.</td></tr></table>
Table 3: An instance from the e-CARE dataset.
reasoning task as a multiple-choice task: given a premise event, one needs to choose the more plausible hypothesis from two candidates, so that the premise and the correct hypothesis form a valid causal fact.
Explanation Generation Task This task requires the model to generate a free-text explanation for a given causal fact (composed of a premise and the corresponding correct hypothesis).
# 3.1 Data Annotation
To construct the e-CARE dataset, we start by collecting statements that describe conceptual understandings of world knowledge. Then, given a statement, we ask different annotators to generate causal facts that can be explained by the statement, and build causal questions based on these causal facts. We proceed this way because we hope to provide conceptual explanations with greater generality, which can explain a set of correlated causal facts rather than apply only to a single isolated causal fact. Moreover, the statements can serve as clues to help the annotators come up with causal facts.
Collecting Potential Explanations Two key issues arise in collecting statements as potential explanations: (1) what kind of statements can serve as potential conceptual explanations of causal facts; (2) where to find appropriate statements.
For the first question, Jonassen and Ionas (2008) concluded that, in general, the explanation of causality mainly describes three categories of information: (1) the nature or attributes of the objects involved in the causal facts; (2) forces or actions that cause changes and drive transient motions; (3) the goals, intentions, motives or purposes of the causal agents. In addition, to serve as the conceptual explanation of a causal fact, a statement should apply to a category of objects or people, rather than focus only on a specific object or person (Sembugamoorthy and Chandrasekaran, 1986).
Following these principles, we notice that there are already several available knowledge bases containing statements about such generic world knowledge, including ConceptNet (Speer and Havasi, 2013), WordNet (Fellbaum, 2010), Atomic (Sap et al., 2019) and GenericsKB (Bhakthavatsalam et al., 2020). However, ConceptNet and WordNet are structured knowledge graphs, containing only triplet-structured statements with a limited number of predicates. The scope of Atomic is limited to the activities of human beings. Compared to these knowledge bases, GenericsKB is an open-domain, large-scale knowledge base containing rich generic world knowledge described in free-form text. Therefore, we collect the statements from GenericsKB to ensure the coverage and diversity of the potential explanations.
Specifically, we filter out statements in GenericsKB with low reliability, as well as statements that may violate the above-mentioned three principles. More details are provided in the Appendix. Thereafter, a total of 19,746 statements remain to form a potential explanation set, which is then provided to the annotators to generate the causal questions.
Annotating Causal Reasoning Questions Given the potential explanation set, annotators were recruited to generate corresponding causal questions. Specifically, a causal question is generated in two steps:
First, an annotator was presented with a statement as a potential explanation, and was instructed to write a causal fact (composed of a cause and an effect), so that the causal fact can be interpreted by the given statement. In this step, a key issue is controlling the quality of generated causal facts. Thus we demonstrated illustrative examples to guide the annotators to avoid the following mistakes:
(1) The created cause and effect are not in a valid causal relationship;
(2) The created causal fact cannot be explained by the provided statement;
(3) There are factual errors or imaginary contents in the created causal facts.
In the causal fact generation process, each statement is randomly distributed to 1-3 annotators, so that we can find statements that can explain multiple causal facts. Note that, in this process, we do not assume all statements are necessarily valid explanations. In other words, we do not require the annotators to generate a causal fact for every given statement; instead, we leave this to the annotators' judgment. In this way, unreliable statements can be further excluded to improve the quality of our dataset.
<table><tr><td>Model</td><td>Dev</td><td>Test</td></tr><tr><td>Random</td><td>50.1</td><td>50.1</td></tr><tr><td>GPT2 (Radford et al., 2019)</td><td>57.17</td><td>56.30</td></tr><tr><td>RoBERTa (Liu et al., 2019)</td><td>58.38</td><td>56.42</td></tr><tr><td>BERT (Devlin et al., 2019)</td><td>56.19</td><td>54.45</td></tr></table>
Table 4: Model's accuracy $(\%)$ of choosing the correct hypothesis without the premise.
After the generation of causal facts, an ask-for indicator $a \in \{\text{cause}, \text{effect}\}$ was randomly generated, where $a = \text{cause}$ ($\text{effect}$) means that the cause (effect) event is the hypothesis and the effect (cause) event is the premise of the causal question, respectively. Then, given the ask-for indicator, and to keep the grammar and writing style consistent, the same annotator was prompted to write a distractor cause (effect) as the implausible hypothesis. In this process, the annotators were instructed to make the implausible hypothesis as close as possible to the true hypothesis, while avoiding uninformative distractors (such as simply adding a "not" to the true hypothesis).
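To make this mapping concrete, the following is a minimal sketch of how the ask-for indicator turns an annotated causal fact into a causal question; the field names are illustrative assumptions rather than the dataset's actual schema.

```python
import random

def build_question(cause, effect, distractor):
    """Turn an annotated causal fact into a two-choice causal question."""
    ask_for = random.choice(["cause", "effect"])
    if ask_for == "cause":
        premise, correct = effect, cause   # the hypothesis slot asks for the cause
    else:
        premise, correct = cause, effect   # the hypothesis slot asks for the effect
    return {"premise": premise, "ask-for": ask_for,
            "correct_hypothesis": correct, "implausible_hypothesis": distractor}
```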
# 3.2 Refinement and Analysis of the e-CARE Dataset
A significant challenge in dataset construction is avoiding the introduction of superficial cues (Gururangan et al., 2018; Poliak et al., 2018), i.e., unintentional features that leak the label information. To address this issue, following Bhagavatula et al. (2019) and Sakaguchi et al. (2020), we employ an adversarial filtering algorithm to replace implausible hypotheses that can easily be distinguished from the correct hypotheses using such superficial cues. More details about the adversarial filtering are provided in the Appendix. As Table 4 shows, after the adversarial filtering, without the premise, SOTA pretrained language models can hardly distinguish between the two candidate hypotheses, which indicates that to predict the correct label, a model must understand the causal relationship between the premise and hypothesis, rather than rely only on superficial cues within the two hypotheses.
After the refinement, we evaluate the quality of the annotated causal questions and collected explanations through crowdsourcing. We assess the quality of causal questions by testing whether human raters agree on the answers to the causal questions. Specifically, we randomly sampled 200 causal questions from e-CARE and
enlisted 10 annotators to answer the causal questions. In this process, each causal question was evaluated by three annotators. When answering the causal questions, the raters were allowed to choose an additional option "None of the above" if neither hypothesis was deemed plausible. The human annotators achieve a $92\%$ accuracy with a high agreement (Cohen's $\kappa = 0.935$) (Cohen, 1960).
To validate the quality of the explanations, we enlisted volunteers to determine whether the explanations can explain the corresponding causal facts. In total, 200 causal facts with corresponding explanations were sampled and distributed to 10 volunteers, and each explanation was evaluated by three volunteers. After the evaluation, on average $89.5\%$ of the explanations were deemed valid (Cohen's $\kappa = 0.832$), demonstrating the quality of the explanations in e-CARE.
# 4 Causal Explanation Quality (CEQ) Score
A number of automatic scores have been proposed to evaluate the quality of generated explanations, such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004). However, these metrics evaluate the quality of generated explanations only by comparing the textual or semantic similarity between the generated explanations and the gold annotations. In contrast, an ideal causal explanation quality metric should directly measure whether the causal fact is appropriately explained by the explanation.
Hence, we propose a novel causal explanation quality evaluation metric (namely, the CEQ score) as a step towards directly measuring the quality of generated explanations. We devise the CEQ score based on the consideration that a better explanation should provide more information for understanding the causality, so that a prediction model can more accurately estimate the reasonableness of the causal fact. Previous literature characterized such reasonableness as the causal strength of the given causal fact (Roemmele et al., 2011; Luo et al., 2016), where the causal strength is a score in $[0, 1]$. Hence, in theory, for a valid causal fact, its causal strength should be equal to 1, and an explanation should help to increase its estimated causal strength towards the ground-truth value of 1.
Therefore, we can evaluate the quality of a generated explanation by measuring the increase of causal strength brought by the explanation. Specifically, let $C$, $E$, and $X$ denote the cause, the effect and the generated explanation, respectively. Formally, the CEQ score is defined as:
$$
\mathrm{CEQ} = \Delta_{\mathrm{cs}} = \mathrm{cs}(C, E \mid X) - \mathrm{cs}(C, E), \tag{1}
$$
where $\mathrm{cs}(C, E)$ is the original causal strength between $C$ and $E$, and $\mathrm{cs}(C, E \mid X)$ is the causal strength after the additional explanation information is taken into account. The explanation-enhanced causal strength $\mathrm{cs}(C, E \mid X)$ is defined as:
$$
\mathrm{cs}(C, E \mid X) = \max\left[\mathrm{cs}(C + X,\, E),\ \mathrm{cs}(C,\, E + X)\right], \tag{2}
$$
where "+" denotes string concatenation. Therefore, the CEQ score is positively related to the increase of causal strength between $C$ and $E$ after the explanation $X$ is taken into account.
In this paper, we employ the widely adopted model-agnostic method proposed by Luo et al. (2016) to calculate the causal strength. Its model-agnostic nature enables us to avoid reliance on particular models and to keep the evaluation fair. Specifically, the phrase-level causal strength is derived by synthesizing word-level causality:
$$
\mathrm{cs}(C_A, E_B) = \frac{1}{N_{C_A} + N_{E_B}} \sum_{w_i \in C_A,\, w_j \in E_B} \mathrm{cs}(w_i, w_j), \tag{3}
$$
where $(C_A, E_B)$ is an arbitrary causal fact; $N_{C_A}$ and $N_{E_B}$ are the numbers of words within $C_A$ and $E_B$, respectively; and $\mathrm{cs}(w_i, w_j)$ is the causal strength between words $w_i$ and $w_j$, which is estimated from a large corpus as:
$$
\mathrm{cs}(w_i, w_j) = \frac{\mathrm{Count}(w_i, w_j)}{\mathrm{Count}(w_i)\,\mathrm{Count}(w_j)^{\alpha}}, \tag{4}
$$
where $\alpha$ is a penalty coefficient and Luo et al. (2016) empirically set $\alpha = 0.66$ .
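For concreteness, the following Python sketch implements Eqs. 1-4, assuming word- and pair-level co-occurrence counts have been precomputed from some large corpus; whitespace tokenization and the `Counter`-based count containers are illustrative assumptions, not the authors' released implementation.

```python
from collections import Counter

ALPHA = 0.66  # penalty coefficient from Luo et al. (2016)

def word_cs(w_i, w_j, pair_counts: Counter, word_counts: Counter):
    """Word-level causal strength, Eq. 4."""
    denom = word_counts[w_i] * (word_counts[w_j] ** ALPHA)
    return pair_counts[(w_i, w_j)] / denom if denom else 0.0

def phrase_cs(cause, effect, pair_counts, word_counts):
    """Phrase-level causal strength, Eq. 3: normalized sum of word-level scores."""
    c_words, e_words = cause.split(), effect.split()
    total = sum(word_cs(wi, wj, pair_counts, word_counts)
                for wi in c_words for wj in e_words)
    return total / (len(c_words) + len(e_words))

def ceq(cause, effect, explanation, pair_counts, word_counts):
    """CEQ score, Eqs. 1-2: gain in causal strength after adding the explanation."""
    base = phrase_cs(cause, effect, pair_counts, word_counts)
    enhanced = max(
        phrase_cs(cause + " " + explanation, effect, pair_counts, word_counts),
        phrase_cs(cause, effect + " " + explanation, pair_counts, word_counts),
    )
    return enhanced - base
```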
# 5 Experiments and Results
We examine the performance of state-of-the-art pretrained language models on the causal reasoning task and the explanation generation task. Furthermore, we investigate the specific role of explanations in causal reasoning by: (1) a predict-and-generate experiment, which requires models to conduct the causal reasoning task and generate corresponding explanations simultaneously; (2) a stability analysis using adversarial attacks.
<table><tr><td>Model</td><td>AVG-BLEU</td><td>ROUGE-1</td><td>PPL</td><td>CEQ</td><td>Human Evaluation (%)</td></tr><tr><td>GRU-Seq2Seq</td><td>18.66</td><td>21.32</td><td>33.71</td><td>0.024</td><td>0</td></tr><tr><td>GPT2 (Radford et al., 2019)</td><td>32.04</td><td>31.47</td><td>7.14</td><td>0.105</td><td>20.0</td></tr><tr><td>Human Generation</td><td>35.51</td><td>33.46</td><td>-</td><td>0.144</td><td>89.5</td></tr></table>
Table 6: Model performance on the explanation generation task.
<table><tr><td>Model</td><td>Accuracy (%)</td></tr><tr><td>GPT2 (Radford et al., 2019)</td><td>69.51</td></tr><tr><td>RoBERTa (Liu et al., 2019)</td><td>70.73</td></tr><tr><td>BART (Lewis et al., 2020)</td><td>71.65</td></tr><tr><td>XLNET (Yang et al., 2019)</td><td>74.58</td></tr><tr><td>BERT (Devlin et al., 2019)</td><td>75.38</td></tr><tr><td>ALBERT (Lan et al., 2019)</td><td>74.60</td></tr><tr><td>Human Performance</td><td>92.00</td></tr></table>
# 5.1 Causal Reasoning
Settings We cast the causal reasoning task as a prediction problem: The input of the model is a candidate causal fact composed of a premise and one of the corresponding candidate hypotheses. The output is a score measuring the reasonableness of the candidate causal fact. We evaluate the causal reasoning ability of several SOTA pretrained language models, including discriminative pretrained language models BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), XLNet (Yang et al., 2019), and ALBERT (Lan et al., 2019); as well as autoregressive generative pretrained language models GPT2 (Radford et al., 2019) and BART (Lewis et al., 2020), which can also be adapted to the predictive causal reasoning task. In this section and the following parts, all experiments are conducted using the base-sized version of the pretrained language models. Additional details about experimental settings are provided in the Appendix.
Results As shown in Table 5, ALBERT achieves an accuracy of $73.86\%$ on the causal reasoning task of e-CARE, whereas it can reach an accuracy of $86.0\%$ on the widely adopted causal reasoning benchmark COPA in our implementation. This is mainly because, on the one hand, previous causal reasoning datasets are too small to evaluate the genuine reasoning ability of a model; on the other hand, previous datasets may provide superficial cues that allow reasoning models to achieve superb performance. In contrast, e-CARE is the largest causal reasoning dataset and provides enough test instances to evaluate the actual ability of a model.
Table 5: Performance of pretrained language models on the test set of the causal reasoning task.
<table><tr><td></td><td>Corr. Coef. with Human Eval.</td><td>P-value</td></tr><tr><td>AVG-BLEU</td><td>0.032</td><td>0.749</td></tr><tr><td>ROUGE-1</td><td>0.021</td><td>0.836</td></tr><tr><td>CEQ</td><td>0.247</td><td>0.013*</td></tr></table>
Table 7: Pearson correlation coefficients between human evaluation and automatic scores. "$\ast$" denotes P-value $< 0.05$.
Moreover, in the annotation process of e-CARE, we introduced an adversarial filtering step to limit the influence of superficial cues on the performance of reasoning models. Hence, we believe that the e-CARE dataset can serve as a new benchmark for effectively evaluating models' causal reasoning ability. We also notice that human beings achieve an accuracy of $92.00\%$ on the e-CARE dataset. The large gap between human performance and the pretrained language models suggests that the causal reasoning questions in our dataset remain challenging and calls for more powerful causal reasoning models.
# 5.2 Explanation Generation
We investigate whether the model can generate correct explanations for given valid causal facts by training a GRU-based Seq2Seq model (Chung et al., 2014), and finetuning a generative pretrained language model GPT2 (Radford et al., 2019) on the e-CARE dataset. Both models take the concatenation of the cause and effect as input. Please refer to the Appendix for more details.
Evaluation Metrics We automatically evaluate the quality of generated explanations using average-BLEU $(n = 4)$ (Papineni et al., 2002), ROUGE-1 (Lin, 2004), Perplexity (Horgan, 1995), together with our proposed CEQ score.
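As one illustration, average-BLEU can be read as the mean of BLEU-1 through BLEU-4; the NLTK-based sketch below follows that reading, which is an assumption about the exact metric configuration rather than a documented choice.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def avg_bleu(reference: str, hypothesis: str, max_n: int = 4) -> float:
    """Mean of cumulative BLEU-1..BLEU-n between one reference and one hypothesis."""
    ref, hyp = reference.split(), hypothesis.split()
    smooth = SmoothingFunction().method1  # avoid zero scores on short texts
    scores = [
        sentence_bleu([ref], hyp, weights=tuple([1.0 / n] * n),
                      smoothing_function=smooth)
        for n in range(1, max_n + 1)
    ]
    return sum(scores) / max_n
```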
Human Evaluation We also assess the quality of model-generated explanations through human evaluation. Specifically, we sampled 200 explanations generated by each method. Three workers were then shown the generated explanations together with the corresponding causal facts, and were asked to label whether each generated explanation can explain the corresponding causal fact.
Quantitative Results As shown in Table 6, $89.5\%$ of human-written explanations are found to be valid, while the generative pretrained language
<table><tr><td>Causal Facts</td><td>(Generated) Explanation</td><td>Human Annotation</td><td>CEQ</td></tr><tr><td>Cause: He was infected with gram-positive bacteria.<br>Effect: The doctor raised the lysozyme in his body.</td><td>Lysozyme is a chemical produced by the bacterium Bacillus thuringiensis to kill bacteria.</td><td>Lysozyme destroys cell wall of bacteria.</td><td>0.193</td></tr><tr><td>Cause: Pneumonia pervaded in this area.<br>Effect: Many people died of Pneumonia.</td><td>Pneumonia is a leading infectious disease caused by a bacterium called Pseudomonas aeruginosa.</td><td>Pneumonia is the most common infectious disease leading to death.</td><td>0.069</td></tr><tr><td>Cause: Tom wanted to prevent cancer.<br>Effect: The doctor told him to eat more foods containing Vitamin C.</td><td>Vitamin C is a naturally occurring amino acid found in many foods.</td><td>Vitamin C slows the growth of small cell lung cancer cells.</td><td>0.012</td></tr></table>
Table 8: Examples of explanations generated by GPT2. We highlighted the factual mistakes within the generated explanations and the totally irrelevant explanation in yellow and pink, respectively.
<table><tr><td>Model</td><td>Accu (%)</td><td>AVG-BLEU</td><td>ROUGE-I</td><td>CEQ</td><td>Human Eval. (%)</td><td>ΔAccu. (%) after Adv. Attack</td></tr><tr><td>GPT2CR</td><td>69.51</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-6.40</td></tr><tr><td>GPT2EG</td><td>-</td><td>32.04</td><td>31.47</td><td>0.035</td><td>20.0</td><td>-</td></tr><tr><td>GPT2CR-EG</td><td>71.06</td><td>34.83</td><td>34.22</td><td>0.042</td><td>26.5</td><td>-5.49</td></tr></table>
Table 9: Model performance on the test set of Joint Causal Reasoning and Explanation Generation task.
model GPT2 only achieves a correctness of $20.0\%$ . The last row of Table 6 reports the score of held-out human-written explanations, which serves as a ceiling for model performance. The significant gap indicates that, although GPT2 can achieve impressive performance on various natural language generation tasks, it still remains especially challenging for GPT2 to deeply understand the causal facts and then generate explanations like human beings. This may be one of the main obstacles hindering the further improvement of present causal reasoning models.
Moreover, we measure the agreement between the automatic scores and the results of human evaluation using the Pearson correlation coefficient. As Table 7 shows, ROUGE-1 and average-BLEU barely correlate with the results of human evaluation. This is because average-BLEU and ROUGE-1 only implicitly evaluate the quality of generated explanations by measuring textual similarity with the gold annotations. In contrast, the CEQ score has a significant positive correlation with the human evaluation results, indicating its effectiveness in evaluating the quality of generated explanations.
Qualitative Analysis In Table 8, we provide examples of explanations generated by GPT2. We observe that GPT2 can generate a reasonable explanation for some causal facts, while the generated explanations may still contain factual mistakes, or be totally irrelevant to the given causal fact (highlighted in yellow and pink, respectively). This indicates that the explanation generation still remains challenging for the GPT2 model.
# 5.3 Joint Causal Reasoning and Explanation Generation
To investigate the role of causal explanations in the causal reasoning process, we trained models to jointly conduct these two tasks.
Settings Since this task requires a model to predict a label and generate an explanation at the same time, we conduct the experiments using the GPT2 model, which can be adapted to perform the predictive causal reasoning task and explanation generation simultaneously. We denote this multi-task finetuned GPT2 model as $\mathrm{GPT2}_{\mathrm{CR-EG}}$. Details for training $\mathrm{GPT2}_{\mathrm{CR-EG}}$ are provided in the Appendix.
To make the performance comparable, when evaluating $\mathrm{GPT2}_{\mathrm{CR-EG}}$ on the explanation generation task, the premise and the correct hypothesis are taken as the input for generating explanations, the same as in the explanation generation experiment.
Results We measure the quality of generated explanations using the same automatic scores and human evaluation settings as in the explanation generation experiment. The performance on causal reasoning is again measured using accuracy. The results are shown in Table 9, where $\mathrm{GPT2}_{\mathrm{CR}}$ denotes the GPT2 model finetuned for the causal reasoning task, and $\mathrm{GPT2}_{\mathrm{EG}}$ refers to the GPT2 model finetuned for the explanation generation task. Compared with $\mathrm{GPT2}_{\mathrm{CR}}$, the improved causal reasoning performance of $\mathrm{GPT2}_{\mathrm{CR-EG}}$ indicates that the additional explanations can be helpful for the causal reasoning task, as they prompt the model to gain a deeper understanding of the causal mechanisms. Interestingly, by comparing $\mathrm{GPT2}_{\mathrm{EG}}$ with $\mathrm{GPT2}_{\mathrm{CR-EG}}$, we find that learning to predict the label can also be helpful for the explanation generation process. This indicates the
synergistic effect of the causal reasoning and the explanation generation on promoting models' understanding of causal mechanism.
# 5.4 Stability Analysis
Previous studies indicate that models may exploit superficial cues within a dataset to predict the label, which leaves them vulnerable to adversarial attacks (Poliak et al., 2018; McCoy et al., 2019). Learning to generate the additional conceptual explanation may promote the understanding of causality and thus increase the stability of the reasoning model. Hence, we conduct a stability analysis to examine the specific effect of the additional explanations.
Following Bekoulis et al. (2018) and Yasunaga et al. (2018), we attack the causal reasoning system by adding a perturbation term to the word embeddings of the inputs. The perturbation term is derived using the gradient-based FGM method (Miyato et al., 2016). Table 9 shows the change of causal reasoning accuracy ($\Delta \mathrm{Accu.}$) brought by the adversarial attack. For example, $\Delta = -6.40$ means a $6.40\%$ decrease of prediction accuracy after the adversarial attack. We find that, compared to the vanilla $\mathrm{GPT2}_{\mathrm{CR}}$ model, the explanation-enhanced model $\mathrm{GPT2}_{\mathrm{CR-EG}}$ demonstrates stronger stability. This suggests that training reasoning models to generate correct explanations of causal facts promotes the understanding of causality and, in turn, increases the stability of model performance.
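The sketch below illustrates an FGM-style perturbation of the word-embedding weights in PyTorch; the epsilon value and the embedding parameter name are assumptions, and the caller is expected to run the adversarial forward pass and then restore the saved weights.

```python
import torch

def fgm_attack(model, loss, emb_name="word_embeddings", epsilon=1.0):
    """Add the gradient-based FGM perturbation r = eps * g / ||g|| to embeddings."""
    loss.backward(retain_graph=True)  # populate gradients w.r.t. the embeddings
    backup = {}
    for name, param in model.named_parameters():
        if emb_name in name and param.grad is not None:
            backup[name] = param.data.clone()      # save weights for restoration
            norm = torch.norm(param.grad)
            if norm != 0:
                param.data.add_(epsilon * param.grad / norm)
    return backup  # caller restores param.data from this after the attacked pass
```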
# 5.5 Enhancing Pretrained Language Model with e-CARE
Causal knowledge is critical for various NLP applications. In this section, we investigate whether the causality knowledge provided by e-CARE can be used as a resource to boost model performance on other causal-related tasks. To this end, we apply transfer learning by first finetuning a BERT model on e-CARE, then adapting the e-CARE-enhanced model (denoted as $\mathrm{BERT_E}$) to a causal extraction task, EventStoryLine 0.9 (Caselli and Vossen, 2017); two causal reasoning tasks, BECauSE 2.0 (Dunietz et al., 2017) and COPA (Roemmele et al., 2011); and a commonsense reasoning dataset, CommonsenseQA (Talmor et al., 2019). On the EventStoryLine 0.9 dataset, we conduct experiments only on the instances involving within-sentence causal relations. The results are shown in Table 10. We observe
<table><tr><td>Dataset</td><td>Metric</td><td>BERT</td><td>BERT_E</td></tr><tr><td>EventStoryLine 0.9</td><td>F1 (%)</td><td>66.5</td><td>68.1</td></tr><tr><td>BECauSE 2.0</td><td>Accu. (%)</td><td>76.8</td><td>81.0</td></tr><tr><td>COPA</td><td>Accu. (%)</td><td>70.4</td><td>75.4</td></tr><tr><td>CommonsenseQA</td><td>Accu. (%)</td><td>52.6</td><td>56.4</td></tr></table>
Table 10: Performance of e-CARE-enhanced BERT.
Figure 2: Conceptual explanations of observed causality can be helpful for understanding the unseen causal facts.
that the additional training process on e-CARE can consistently increase the model performance on all four tasks. This indicates the potential of e-CARE in providing necessary causality information for promoting causal-related tasks in multiple domains.
# 6 Discussion
In this paper, we introduce additional explanation information for the causal reasoning process and propose a corresponding explanation generation task. Previous literature characterized explanation generation as an abductive reasoning process (Hanson, 1958; Peirce, 1974) and highlighted the importance of abductive explanation generation, as it may interact with the causal reasoning process to promote the understanding of causal mechanisms and increase the efficiency and reliability of causal reasoning.
For example, as Figure 2 shows, one may observe that $C_1$: adding rock into hydrochloric acid caused $E_1$: rock dissolved. Through abductive reasoning, one may come up with a conceptual explanation of the observation, namely that acid is corrosive. After that, one can confirm or rectify the explanation through experiments or by resorting to external references. In this way, new ideas about causality can be introduced for understanding the observed causal fact. If the explanation is confirmed, it can be further utilized to support the causal reasoning process by helping to explain and validate other related causal facts,
such as $C_2$ : adding rust into sulphuric acid may lead to $E_2$ : rust dissolved. This analysis highlights the pivotal role of conceptual explanation in learning and inferring causality. In this paper, we introduce the e-CARE dataset to provide causal explanations and support future research towards stronger human-like causal reasoning systems.
# 7 Conclusion
In this paper, we present an explainable CAusal REasoning dataset, e-CARE, which contains over 21K causal questions together with over 13K unique conceptual explanations reflecting a deep understanding of the causal facts; this also makes e-CARE the largest causal reasoning benchmark. Experimental results show that both the causal reasoning task and, especially, the explanation generation task remain challenging for SOTA pretrained language models. Moreover, the additional explanation signal can improve both the prediction accuracy and the stability of models, highlighting the vital importance of conceptual explanations in causal reasoning.
# 8 Acknowledgments
We thank the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of the New Generation Artificial Intelligence of China (2020AAA0106501), and the National Natural Science Foundation of China (62176079, 61976073).
# References
Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018. Adversarial training for multi-context joint entity and relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2830-2836.
Steven Bethard and James H Martin. 2008. Learning semantic links from a corpus of parallel temporal and causal relations. In Proceedings of ACL-08: HLT, Short Papers, pages 177-180.
Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. 2019. Abductive commonsense reasoning. In International Conference on Learning Representations.
Sumithra Bhakthavatsalam, Chloe Anastasiades, and Peter Clark. 2020. Genericskb: A knowledge base of generic statements. arXiv preprint arXiv:2005.00660.
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Natural language inference with natural language explanations. In NeurIPS.
Tommaso Caselli and Piek Vossen. 2017. The Event StoryLine corpus: A new benchmark for causal and temporal relation extraction. In Proceedings of the Events and Stories in the News Workshop, pages 77-86.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and psychological measurement, 20(1):37-46.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C Wallace. 2019. Eraser: A benchmark to evaluate rationalized nlp models. arXiv preprint arXiv:1911.03429.
Quang Do, Yee Seng Chan, and Dan Roth. 2011. Minimally supervised event causality identification. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 294-303.
Jesse Dunietz, Lori Levin, and Jaime G Carbonell. 2017. The because corpus 2.0: Annotating causality and overlapping relations. In Proceedings of the 11th Linguistic Annotation Workshop, pages 95-104.
Christiane Fellbaum. 2010. Wordnet. In Theory and applications of ontology: computer applications, pages 231-243. Springer.
Roxana Girju, Preslav Nakov, Vivi Nastase, Stan Szpakowicz, Peter Turney, and Deniz Yuret. 2007. Semeval-2007 task 04: Classification of semantic relations between nominals. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 13-18.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112.
Braden Hancock, Martin Bringmann, Paroma Varma, Percy Liang, Stephanie Wang, and Christopher Ré. 2018. Training classifiers with natural language explanations. In Proceedings of the conference. Association for Computational Linguistics. Meeting, volume 2018, page 1884. NIH Public Access.
Norwood Russell Hanson. 1958. Patterns of discovery: An inquiry into the conceptual foundations of science, volume 251. CUP Archive.
Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid O Seaghdha, Sebastian Padó, Marco Pennacchiotti, Lorenzo Romano, and Stan Szpakowicz. 2019. Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. arXiv preprint arXiv:1911.10422.
John Horgan. 1995. From complexity to perplexity. Scientific American, 272(6):104-109.
Harsh Jhamtani and Peter Clark. 2020. Learning to explain: Datasets and models for identifying valid reasoning chains in multihop question-answering. arXiv preprint arXiv:2010.03274.
David H Jonassen and Ioan Gelu Ionas. 2008. Designing effective supports for causal reasoning. Educational Technology Research and Development, 56(3):287-308.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880.
Zhongyang Li, Xiao Ding, Ting Liu, J Edward Hu, and Benjamin Van Durme. 2020. Guided generation of cause and effect. IJCAI.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach.
Zhiyi Luo, Yuchen Sha, Kenny Q Zhu, Seung-won Hwang, and Zhongyuan Wang. 2016. Common-sense causal reasoning between short texts. In KR, pages 421-431.
Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448.
Paramita Mirza, Rachele Sprugnoli, Sara Tonelli, and Manuela Speranza. 2014. Annotating causality in the tempeval-3 corpus. In EACL 2014 Workshop on Computational Approaches to Causality in Language (CAtoCL), pages 10-19. Association for Computational Linguistics.
Takeru Miyato, Andrew M Dai, and Ian Goodfellow. 2016. Adversarial training methods for semi-supervised text classification. arXiv preprint arXiv:1605.07725.
Nasrin Mostafazadeh, Alyson Grealish, Nathanael Chambers, James Allen, and Lucy Vanderwende. 2016. Caters: Causal and temporal relation scheme for semantic annotation of event structures. In Proceedings of the Fourth Workshop on Events, pages 51-61.
Qiang Ning, Zhili Feng, Hao Wu, and Dan Roth. 2019. Joint reasoning for temporal and causal relations. arXiv preprint arXiv:1906.04941.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318.
Charles Sanders Peirce. 1974. Collected papers of charles sanders peirce, volume 2. Harvard University Press.
Ethan Perez, Siddharth Karamcheti, Rob Fergus, Jason Weston, Douwe Kiela, and Kyunghyun Cho. 2019. Finding generalizable evidence by learning to convince q&a models. arXiv preprint arXiv:1909.05863.
Adam Poliak, Aparajita Haldar, Rachel Rudinger, J Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018. Collecting diverse natural language inference problems for sentence representation evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 67-81.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932-4942.
Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, pages 90-95.
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Winogrande: An adversarial winograd schema challenge at scale. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8732-8740.
Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019. Atomic: An atlas of machine commonsense for if-then reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3027-3035.
V Sembugamoorthy and B Chandrasekaran. 1986. Functional representation of devices and compilation of diagnostic problem-solving systems. Experience, memory and Reasoning, pages 47-73.
Robert Speer and Catherine Havasi. 2013. Conceptnet 5: A large semantic network for relational knowledge. In The People's Web Meets NLP, pages 161-176. Springer.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. Commonsenseqa: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149-4158.
Michael R Waldmann and York Hagmayer. 2013. Causal reasoning.
Sarah Wiegreffe and Ana Marasovic. 2021. Teach me to explain: A review of datasets for explainable nlp. arXiv preprint arXiv:2102.12060.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.
Michihiro Yasunaga, Jungo Kasai, and Dragomir Radev. 2018. Robust multilingual part-of-speech tagging via adversarial training. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 976-986.
Qinyuan Ye, Xiao Huang, and Xiang Ren. 2020. Teaching machine comprehension with compositional explanations. arXiv preprint arXiv:2005.00806.
# 9 More Discussions about the e-CARE Dataset
# 9.1 The Generality of the Conceptual Explanation
In this paper, we construct the dataset by first obtaining the conceptual explanations and then obtaining the causal questions. This is because we also hope to find conceptual explanations with greater generality, which can explain not just a single causal fact but a set of correlated causal facts. Table 11 demonstrates an example of such a conceptual explanation. The explanation points out the nature of copper, namely that copper is a good thermal conductor, so that holding copper over fire will make one's fingers feel burnt immediately. Additionally, the same explanation can also provide insights about another causal fact that seems entirely different from the case in Table 11 (a): putting copper tubes into a computer can promote heat dispersion. This is because the conceptual explanation points out the nature of copper, which drives a whole set of causal facts into existence.
This example demonstrates the usefulness of conceptual explanations in providing a deep understanding of causality to support causal reasoning. Note, however, that in this paper we do not assume that all the statements we collected can explain multiple causal facts. Instead, we resort to the empirical knowledge of human annotators to find such explanations. Specifically, we distribute statements to several annotators and require each annotator to generate a causal fact that can be explained by the statement. For a given statement, if it is distributed to multiple annotators and more than one annotator can generate a corresponding causal fact, then we regard this statement as a conceptual explanation.
# 9.2 The Exhaustiveness of the Explanations
Another point we wish to elucidate concerns the exhaustiveness of the explanations. In this paper, we only aim at providing plausible explanations that can explain the causal fact; we do not assume the provided explanations to be exhaustive or self-sufficient.
<table><tr><td>(a) Premise: Tom held a copper block by hand and heated it on fire.<br>Ask-for: Effect<br>Hypothesis 1: His fingers felt burnt for a short time. (✓)<br>Hypothesis 2: The copper block kept the same. (×)</td><td>(b) Premise: This computer's heat dispersion performance is bad.<br>Ask-for: Effect<br>Hypothesis 1: Designers add copper tubes into the computer. (✓)<br>Hypothesis 2: Designers put the computer into the ice water. (×)</td></tr><tr><td>Explanation: Copper is a good thermal conductor.</td><td>Explanation: Copper is a good thermal conductor.</td></tr></table>
Table 11: Two instances from the e-CARE dataset.
# 9.3 The Relationship between the Unique Explanations and Causal Questions
Due to practical limits, and to ensure the coverage of the dataset, only a portion of the statements were distributed to multiple annotators, as described in Section 3.1. As a result, each unique explanation corresponds to one or more causal questions, which is why the number of causal questions (21,324) exceeds the number of unique explanations (13,048).
# 10 Data Collection Details
# 10.1 Collection of Explanations
We collect the potential explanations from the commonsense knowledge base GenericsKB (Bhakthavatsalam et al., 2020), which contains naturally occurring generic statements, such as "Trees remove carbon dioxide from the atmosphere", collected from multiple corpora. We first filtered the statements according to their quality score $s$, a human-annotation-based metric provided in GenericsKB that evaluates the correctness of each statement. To ensure the factual correctness of the potential explanations, we only kept the statements whose quality scores are among the highest $1\%$. In addition, we excluded the following kinds of statements: (1) Overly complex statements. Statements containing connectives and statements with more than 20 words are excluded, because we observed that annotators always struggled to understand overly complex statements and to generate plausible causal facts from them. The threshold of 20 words is an empirical setting. (2) Statements describing named entities. (3) Statements describing a hypernymy or hyponymy relationship between the subject and object. For example, the statement "Monkey is a kind of mammal." describes the hypernymy relationship between the subject monkey and the object mammal. This kind of statement does not fall under the three kinds of information that a valid explanation contains, as mentioned in Section 3.1.
After the filtering process, roughly 19K statements remained as potential explanations. Note that we do not assume the statements that survive filtering are necessarily valid potential explanations, and we do not force the annotators to generate corresponding causal fact(s); instead, we leave the judgment to the annotators. If
a statement has already been distributed to three annotators and no annotator can generate a corresponding causal question for this statement, then it is discarded.
# 10.2 Collection of Causal Questions
We guided the annotators using illustrative examples to avoid the following mistakes:
(1) The generated cause and effect cannot be explained by the statement.

- Wrong Case

Explanation: Copper is a good thermal conductor.

Cause: Tom held a copper block and heated it on fire.

Effect: The copper block was oxidized and the surface became dark.

- Correct Case

Explanation: Copper is a good thermal conductor.

Cause: Tom held a copper block by hand and heated it on fire.

Effect: His fingers felt burnt for a short time.
(2) The generated "cause" and "effect" do not form a valid causal relationship.

- Wrong Case

Explanation: Oncologists specialize in the treatment of cancer.

Cause: Jerry suffered from cancer.

Effect: Jerry consulted many artists.

- Correct Case

Explanation: Oncologists specialize in the treatment of cancer.

Cause: Jerry suffered from cancer.

Effect: Jerry consulted many oncologists.
(3) The distractor can also form a causal relationship with the premise.

- Wrong Case

Explanation: Oncologists specialize in the treatment of cancer.

Cause: Jerry suffered from cancer.

Effect: Jerry consulted many oncologists.

Distractor Effect: Jerry consulted many traditional herbalists.
(4) The generated distractor is uninformative.

- Wrong Case

Explanation: Copper is a good thermal conductor.

Cause: Tom held a copper block by hand and heated it on fire.

Effect: His fingers felt burnt for a short time.

Distractor Effect: His fingers did not feel burnt for a short time.
# 11 Adversarial Filtering
During the annotation process, some superficial clues may be introduced into the dataset, allowing the correct and implausible hypotheses to be distinguished merely through these annotation artifacts. To reduce the influence of potential annotation artifacts, we introduce an Adversarial Filtering algorithm (Bhagavatula et al., 2019) to refine our dataset.
Specifically, for an arbitrary causal question $\langle p, a, h^{+}, h^{-}\rangle$, where $p$ is the premise, $a \in \{\text{cause}, \text{effect}\}$ is the ask-for indicator, and $h^{+}$ and $h^{-}$ are the correct and wrong hypotheses, respectively: if $\langle p, h^{+}\rangle$ and $\langle p, h^{-}\rangle$ can easily be distinguished by a predictive model, then we replace $h^{-}$ with another implausible hypothesis $h^{-'}$ sampled from an implausible hypothesis set $\mathcal{H}$, so that $\langle p, h^{-'}\rangle$ is harder to distinguish from $\langle p, h^{+}\rangle$. The implausible hypothesis set $\mathcal{H}$ is the collection of all wrong hypotheses within the dataset.
Algorithm 1 provides a formal description of our adversarial filtering algorithm. Specifically, in each iteration $i$, we randomly split the dataset into a training set $\mathcal{T}_i$ and a validation set $\mathcal{V}_i$. A model $\mathcal{M}_i$ is then trained on $\mathcal{T}_i$ and used to update $\mathcal{V}_i$ to make it more challenging for $\mathcal{M}_i$. To this end, given an instance $\langle p_j, a_j, h_j^+, h_j^-\rangle \in \mathcal{V}_i$, we randomly sample $K$ more implausible hypotheses $h_{j1}^{-}, \dots, h_{jK}^{-}$ from $\mathcal{H}$. Let $\delta_k^{\mathcal{M}_i}$ denote the difference in model evaluation between $\langle p_j, a_j, h_j^+\rangle$ and $\langle p_j, a_j, h_{jk}^-\rangle$, where $\delta_k^{\mathcal{M}_i} < 0$ means that model $\mathcal{M}_i$ favors $h_j^+$ over the implausible hypothesis $h_{jk}^-$. With probability $t_i$, we replace $h_j^-$ with the implausible hypothesis that is hardest to distinguish from $h_j^+$, i.e., $h_j^- = h_{jl}^-$ with $l = \arg\min_k \delta_k^{\mathcal{M}_i}$. In this way, the proportion of easy implausible hypotheses decreases in each iteration, and the adversary model is forced to capture more causality knowledge.
Algorithm 1 Adversarial Filtering

Input: number of iterations $n$, dataset $\mathcal{D}_0$, implausible hypothesis set $\mathcal{H}$, initial and final temperature parameters $t_s$ and $t_e$.

Output: dataset $\mathcal{D}_n$

1: for iteration $i = 1 \rightarrow (n - 1)$ do
2: $t_i = t_e + \frac{t_s - t_e}{1 + e^{0.3(i - 3n/4)}}$
3: Randomly split $\mathcal{D}_{i-1}$ into a training set $\mathcal{T}_i$ and a validation set $\mathcal{V}_i$
4: Train model $\mathcal{M}_i$ on $\mathcal{T}_i$
5: for instance $j \in \mathcal{V}_i$ do
6: for $h_{jk}^- \in \mathcal{H}_j^-$ do
7: Calculate $\delta_k^{\mathcal{M}_i}(\langle p_j, a_j, h_j^+ \rangle, \langle p_j, a_j, h_{jk}^- \rangle)$
8: end for
9: $l = \arg\min_k \delta_k^{\mathcal{M}_i}$
10: Sample $r$ from a uniform distribution $U(0,1)$
11: if $r < t_i$ or $\delta_l^{\mathcal{M}_i} < 0$ then $h_j^- = h_{jl}^-$
12: Add instance $j$ to $\mathcal{S}_i$
13: end for
14: $\mathcal{D}_i = \mathcal{S}_i$
15: end for
We implemented the adversary model using pretrained language model RoBERTa-base (Liu et al., 2019). The AF algorithm is run for 25 iterations and the temperature $t_i$ follows a sigmoid function, parameterized by the iteration number, between $t_s = 1.0$ and $t_e = 0.2$ . For each instance, we sampled $K = 20$ more implausible hypotheses from the implausible hypothesis set $\mathcal{H}$ .
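A schematic Python sketch of this loop is given below; the `train_and_score` callable and the data-handling details are placeholders standing in for the RoBERTa-based adversary described above, so this is an illustration of Algorithm 1, not the authors' implementation.

```python
import math
import random

def temperature(i, n, t_s=1.0, t_e=0.2):
    # Sigmoid temperature schedule between t_s and t_e (line 2 of Algorithm 1).
    return t_e + (t_s - t_e) / (1 + math.exp(0.3 * (i - 3 * n / 4)))

def adversarial_filter(dataset, pool, train_and_score, n=25, K=20):
    for i in range(1, n):
        t_i = temperature(i, n)
        random.shuffle(dataset)
        train, val = dataset[: len(dataset) // 2], dataset[len(dataset) // 2 :]
        score = train_and_score(train)  # returns a score(example, hypothesis) fn
        for ex in val:
            candidates = random.sample(pool, K)
            # delta_k < 0 means the adversary still favors the correct hypothesis.
            deltas = [score(ex, h) - score(ex, ex["h_pos"]) for h in candidates]
            l = min(range(K), key=deltas.__getitem__)
            if random.random() < t_i or deltas[l] < 0:
                ex["h_neg"] = candidates[l]  # swap in a harder distractor
    return dataset
```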
# 12 Details of Experiments
# 12.1 Details of the Causal Reasoning Experiment
Settings In this paper, the causal reasoning task is defined as a multiple-choice problem, which requires the model to choose the more plausible hypothesis from two candidates, so that the premise and hypothesis form a valid causal fact. Therefore, the causal reasoning task can be formalized as a prediction problem: given a candidate causal fact $\langle \text{cause}, \text{effect} \rangle$ composed of the premise event and one of the hypothesis events, the prediction model is required to predict a score
<table><tr><td>Model</td><td>Input Format</td></tr><tr><td>GPT2</td><td><|startoftext|> C [SEP] E <|endoftext|></td></tr><tr><td>RoBERTa</td><td><s> C <s> E <s></td></tr><tr><td>BART</td><td><s> C <s> E <s></td></tr><tr><td>XLNET</td><td><cls> C <sep> E <sep></td></tr><tr><td>BERT</td><td>[CLS] C [SEP] E [SEP]</td></tr><tr><td>ALBERT</td><td>[CLS] C [SEP] E [SEP]</td></tr></table>
Table 12: Input format of models in the causal reasoning task.
measuring the causality of the event pair. Note that the ask-for indicator determines whether the premise or the candidate hypothesis serves as the cause or the effect, respectively.
To this end, we concatenate the premise with each of the candidate hypotheses to form two candidate causal facts. Each candidate causal fact is then fed into the models to obtain a probability measuring its plausibility. To satisfy the input format of the pretrained language models, the input candidate causal fact is preprocessed by adding special tokens. Additionally, we adapt GPT2 and BART to the predictive causal reasoning task by appending an EOS token to the end of the input text and making predictions based on the representation of the EOS token. The specific input format of the models is listed in Table 12, where $C$ and $E$ denote the cause and effect of the candidate causal fact, respectively.
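For illustration, a minimal sketch of this scoring setup with the BERT input format from Table 12 is shown below, assuming the HuggingFace Transformers library; the single-logit scoring head is an assumption, since the exact prediction head is not specified here.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1)  # one plausibility score per pair
model.eval()

def plausibility(cause, effect):
    # Encoded as [CLS] C [SEP] E [SEP], matching the BERT row of Table 12.
    inputs = tokenizer(cause, effect, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).logits.squeeze().item()

premise = "Tom holds a copper block by hand and heats it on fire."
hypotheses = ["His fingers feel burnt immediately.",
              "The copper block keeps the same."]
# Ask-for is "effect", so the premise acts as the cause in both candidates.
prediction = max(hypotheses, key=lambda h: plausibility(premise, h))
```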
Training Details In the causal reasoning task, we optimize all models with a batch size of 64 and a learning rate of 1e-5, and each model is finetuned for 3 epochs.
# 12.2 Details of the Explanation Generation Experiment
Settings In the explanation generation experiment, models are trained to generate an explanation for a given valid causal fact $\langle C,E\rangle$ . Hence, the input of GPT2 is formatted as:
$$
\text{<|startoftext|>}\ C\ \text{[SEP]}\ E\ \text{<|endoftext|>}, \tag{5}
$$
where <|startoftext|> and <|endoftext|> are two special tokens. The input of the GRU-Seq2Seq model is formatted as:
$$
\text{<SOS>}\ C,\, E\ \text{<EOS>}. \tag{6}
$$
Training Details In the explanation generation task, the GPT2 model is trained with a batch size of 32 and a learning rate of 1e-5, and is finetuned for 10 epochs. For the GRU-Seq2Seq model, both the encoder and the decoder contain 2 GRU layers with a dimension of $300 \times 300$.
The word embeddings are initialized using 300-dimensional GloVe vectors. During optimization, the GRU-Seq2Seq model is likewise trained for 10 epochs.
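A small sketch of the two input formats in Eqs. 5 and 6 is given below; how the target explanation is appended during teacher-forced training is left open here, as the text does not spell it out.

```python
def gpt2_input(cause: str, effect: str) -> str:
    # Eq. 5: <|startoftext|> C [SEP] E <|endoftext|>
    return f"<|startoftext|> {cause} [SEP] {effect} <|endoftext|>"

def seq2seq_input(cause: str, effect: str) -> str:
    # Eq. 6: <SOS> C, E <EOS>
    return f"<SOS> {cause}, {effect} <EOS>"
```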
# 12.3 Details of the Joint Causal Reasoning and Explanation Generation Experiment
Settings Given a causal question, we first concatenate the premise with each one of the candidate hypothesis to form two candidate causal facts. Then each of the candidate causal fact is fed into the GPT2 model, to get a distributed representation of the candidate causal fact. Then probability measuring the plausibility of the candidate causal fact is predicted using an MLP based on the distributed representation. After predicting plausibility score of two candidate causal facts, the model is trained to generate an explanation based on only the representation of the candidate causal fact that model thinks is more likely to be valid.
Training Details During training, to balance the generation loss and the prediction loss, we introduce a balancing coefficient $\lambda$. Hence, the loss function is formulated as $L = (1 - \lambda)L_{\text{Prediction}} + \lambda L_{\text{Generation}}$. We empirically set $\lambda = 0.1$. The batch size and learning rate are again set to 32 and 1e-5, respectively. Unlike in the explanation generation experiment, in this joint prediction-and-generation setting the GPT2 model is trained for 5 epochs, as it receives two kinds of supervision signals.
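The weighted objective is straightforward to implement; the following minimal sketch assumes the two component losses are computed elsewhere.

```python
# Minimal sketch of the joint objective L = (1 - λ)·L_Prediction + λ·L_Generation
# with λ = 0.1; both component losses are assumed to be computed elsewhere
# (cross-entropy over the plausibility label and over the explanation tokens).
LAMBDA = 0.1

def joint_loss(prediction_loss, generation_loss):
    return (1.0 - LAMBDA) * prediction_loss + LAMBDA * generation_loss
```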

# 12.4 Details of Transfer Analysis

Settings All four tasks in the transfer analysis can be formalized as a multiple-choice problem. Specifically, the causal event extraction task EventStoryLine requires the model to predict whether two phrase-level events within a sentence can form a causal relationship, while in the two causal reasoning tasks, BECauSE 2.0 (Dunietz et al., 2017) and COPA (Roemmele et al., 2011), models are required to choose the more plausible hypothesis, so that the premise and the hypothesis form a valid causal fact.
<table><tr><td>Dataset</td><td>Input Format</td></tr><tr><td>EventStoryLine</td><td>[CLS] Statement</td></tr><tr><td>BECauSE 2.0</td><td>[CLS] C [SEP] E [SEP]</td></tr><tr><td>COPA</td><td>[CLS] C [SEP] E [SEP]</td></tr><tr><td>CommonsenseQA 2.0</td><td>[CLS] Q [SEP] A [SEP]</td></tr></table>

Table 13: Input format of models in the transfer analysis.

The CommonsenseQA (Talmor et al., 2019) task requires the model to choose the correct answer for a given question. We list the specific input formats for these four tasks in Table 13, where $C$ and $E$ denote the cause and effect, and $Q$ and $A$ denote the question and answer, respectively.
Training Details To equip the model with the causal knowledge within e-CARE, we first train a BERT model for 3 epochs, with a batch size of 32 and a learning rate of 1e-5. In the following fine-tuning stage, on all four datasets, both BERT and the e-CARE-enhanced model $\mathrm{BERT_E}$ are fine-tuned using a grid search over the following hyperparameters (a minimal sketch of this grid search is given after the list):
- batch size: $\{16, 32\}$
- number of epochs: $\{3, 5, 10\}$
- learning rate: $\{1\mathrm{e} - 6,1\mathrm{e} - 5\}$
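A minimal sketch of this grid search, with `finetune` and `evaluate` as hypothetical stand-ins for the actual training and validation routines:

```python
# Minimal sketch of the fine-tuning grid search; `finetune` and `evaluate`
# are hypothetical stand-ins for the actual training/validation routines.
import itertools

def finetune(batch_size, epochs, learning_rate):
    # hypothetical stand-in: fine-tune the model with these hyperparameters
    return (batch_size, epochs, learning_rate)

def evaluate(model):
    # hypothetical stand-in: return a validation score
    return 0.0

grid = {"batch_size": [16, 32], "epochs": [3, 5, 10], "learning_rate": [1e-6, 1e-5]}

best = None
for bs, ep, lr in itertools.product(*grid.values()):
    score = evaluate(finetune(bs, ep, lr))
    if best is None or score > best[0]:
        best = (score, bs, ep, lr)
```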
ecareanewdatasetforexploringexplainablecausalreasoning/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:17141a010562bc2e6eafee9935609c698b34d2d091dd345e87416e53a1e80812
size 441092
ecareanewdatasetforexploringexplainablecausalreasoning/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ddc432c1fc5ab9b607d43b1a038ee728d1483576ff4244f23bacaae7bee6283
size 543301
epicemployingproverbsincontextasabenchmarkforabstractlanguageunderstanding/08052a61-a4c0-407e-9392-c211280a8fe8_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:43f041f968b80c1b2e7dfbd1931f4f65102898c103804a5e9d8ad963d6132fe4
size 118437
epicemployingproverbsincontextasabenchmarkforabstractlanguageunderstanding/08052a61-a4c0-407e-9392-c211280a8fe8_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:87a622dc59b1d7873674c7c5a36007d5f15e9101e44362a0a793f8ffd6bc1a01
size 145608
epicemployingproverbsincontextasabenchmarkforabstractlanguageunderstanding/08052a61-a4c0-407e-9392-c211280a8fe8_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fc7cbf3f55f9be2ab3665170920817f3485f0f2ac106b20d930520a19995e270
size 520307
epicemployingproverbsincontextasabenchmarkforabstractlanguageunderstanding/full.md
ADDED
@@ -0,0 +1,512 @@
# ePiC: Employing Proverbs in Context as a Benchmark for Abstract Language Understanding

Sayan Ghosh Shashank Srivastava

UNC Chapel Hill

{sayghosh,ssrivastava}@cs.unc.edu

# Abstract

While large language models have shown exciting progress on several NLP benchmarks, evaluating their ability for complex analogical reasoning remains under-explored. Here, we introduce a high-quality crowdsourced dataset of narratives for employing proverbs in context as a benchmark for abstract language understanding. The dataset provides fine-grained annotation of aligned spans between proverbs and narratives, and contains minimal lexical overlaps between narratives and proverbs, ensuring that models need to go beyond surface-level reasoning to succeed. We explore three tasks: (1) proverb recommendation and alignment prediction, (2) narrative generation for a given proverb and topic, and (3) identifying narratives with similar motifs. Our experiments show that neural language models struggle on these tasks compared to humans, and these tasks pose multiple learning challenges.

# 1 Introduction

Large language models (LLMs) (Devlin et al., 2019; Liu et al., 2019a; Raffel et al., 2020; Lewis et al., 2020; Reimers and Gurevych, 2019; Sanh et al., 2019; Lan et al., 2020) have led to a paradigm shift in NLP, and have shown exciting progress on benchmarks such as GLUE (Wang et al., 2019b) and SuperGLUE (Wang et al., 2019a). In particular, these include tasks such as reading comprehension, natural language inference, and coreference resolution. Many of these tasks rely on semantic and syntactic reasoning, at which these LLMs have become highly proficient. For example, apart from improving on distributional semantics through contextualized embeddings (Ethayarajh, 2019), recent work has shown evidence that these models implicitly learn emergent concepts such as subject-verb agreement (Jawahar et al., 2019), semantic roles (Tenney et al., 2019) and dependency structures (Hewitt and Manning, 2019).
**PROVERB (P):** Prevention is better than cure

**NARRATIVE (N1):** Instead of working on his English assignments weekly, he put them off until the last week before they were all due. He was able to finish them all over the final few days, but it took a lot of energy drinks and misery.

Keywords (K1): {assignments, working, misery}

**NARRATIVE (N2):** There was once a disease that spread like wildfire. It killed people by the thousands. Doctors said wash your hands regularly and you'll be ok. Eventually an inoculation shot was created, and it worked, except on the people who refused to wash their hands and died waiting for their turn to get a shot.

Keywords (K2): {disease, washing, hands, inoculation}
Figure 1: We introduce ePiC, a crowdsourced dataset of narratives for employing proverbs in context. Our dataset contains narratives (N1 and N2) paired against proverbs (P) along with a fine-grained annotation of aligned spans between the narratives and proverbs. Aligned spans are shown with matching colors and indicate correspondences in roles between proverbs and narratives. We explore three tasks: (1) proverb recommendation and alignment prediction (predict P given N1), (2) narrative generation for a given proverb and topic (generate N1 given P and K1), and (3) identifying narratives with similar motifs (e.g. identify N2 in a set of narratives given N1).
However, humans show an ability for deeper linguistic reasoning. We can identify people's intentions and goals (Douglas and Sutton, 2006), perform relational reasoning (Alexander et al., 2016), and find analogies in situations with little surface overlap (Holyoak, 2013). In particular, making verbal analogies in the form of proverbs is noted as an indicator of literary ability (Penfield and Duru, 1988; Nippold et al., 2001). Proverbs are also repositories of information on culture, societal norms, values, and folk wisdom (Raymond, 1956; White, 1987). In this work, we investigate proverbs in narrative contexts as a testbed for evaluating abstract reasoning and analogical abilities of LLMs.
We introduce ePiC (employing Proverbs in Context), a high-quality crowdsourced dataset of narratives paired with proverbs. The dataset provides fine-grained annotation of aligned spans between proverbs and narratives, and is designed to minimize lexical overlap between narratives and proverbs. Figure 1 shows two examples of narratives for a proverb from our dataset, along with corresponding alignment annotations. We diverge from related extant resources on proverbs (Wang et al., 2020; Tan et al., 2015, 2016) in the quality of narratives, direct supervision, and fine-grained alignment annotations.<sup>1</sup> We explore three tasks: (1) proverb and alignment prediction ($\S$5.1), (2) narrative generation for a given proverb and a set of keywords specifying a topic ($\S$5.2), and (3) discovering narratives with similar motifs ($\S$5.3). By benchmarking several LLMs, we find that existing models struggle with these tasks, suggesting much scope for improvement in abstract reasoning. In particular, humans show much higher performance in many cases.
In §3, we describe the crowdsourced creation of the ePiC dataset. In §4, we analyze lexical overlap, biases, and narrative quality in ePiC. §5 describes the three tasks and details of experimental evaluation of LLMs for each task. We conclude with a discussion, and a statement of ethics and broader impact relevant to our work. Our contributions are:
- We introduce ePiC, a high-quality dataset for employing proverbs in context. It contains multiple narratives for English proverbs and fine-grained annotation of aligned spans between them.
- We design three challenging tasks that require models to go beyond surface-level reasoning and provoke research towards making more socially grounded NLP systems.
- We benchmark the performance of several state-of-the-art large language models in our proposed tasks using our dataset.
Our dataset and code are publicly available at: https://epic-benchmark.github.io

# 2 Related Work

Prior works in figurative language understanding have explored a diverse set of topics, such as simile detection and generation (Niculae and Danescu-Niculescu-Mizil, 2014; Mpouli, 2017; Zeng et al., 2020; Chakrabarty et al., 2020), metaphor detection and generation (Dagan et al., 2005; Gao et al., 2018; Stowe et al., 2019, 2021; Chakrabarty et al., 2021b), pun identification (Poliak et al., 2018; Miller and Turković, 2016), and quote/proverb recommendation (Tan et al., 2015, 2016; Wang et al., 2020). Recent work (Chakrabarty et al., 2021a) has also focused on interpreting idioms and similes in narratives. Liu et al. (2019b) have explored recommending Chinese idioms through context-based recommendation, and Zheng et al. (2019) formulated idiom recommendation as a cloze-style reading comprehension task. Learning to quote has been explored based on fiction (Tan et al., 2015, 2016) and noisy social media conversations from Twitter, Reddit or Weibo (Lee et al., 2016; Wang et al., 2020). In the most related prior work, the authors explore a quote retrieval task borrowing inspiration from context-based recommendation systems (Huang et al., 2012; He et al., 2010). Wang et al. (2020) formulated learning to quote as a generation task by using topic modeling (Miao et al., 2017; Wang et al., 2019c) in a sequence-to-sequence network. While previous work has considered idioms, proverbs and common phrases as quotes, we specifically work with proverbs. Compared to earlier datasets, our dataset is manually created and labeled. Further, ePiC includes fine-grained annotations aligning parts of the proverb to parts of the narrative, which has significant possibilities for model training, evaluation and interpretability.

# 3 Dataset Creation

In this section, we describe the steps involved in creating the dataset in detail.
Proverb collection: We obtained a candidate set of English proverbs by scraping the websites of 'The Phrase Finder'² and WikiQuotes³. Next, this set was manually pruned to remove lexical variations of the same proverb. This manual curation led to a set of 250 proverbs, which we consider in the current version of our dataset.
Narrative collection: In the second step, we use Amazon Mechanical Turk to collect a diverse set of narratives corresponding to each proverb. We collect 10 narratives contributed by distinct turkers for each proverb, leading to a total of 2500 proverb-narrative pairs. We also ensure that no turker contributes a large number of narratives, to alleviate annotator bias (Geva et al., 2019) (where models can overfit to annotator characteristics) while encouraging diversity in writing style and content. The turkers were asked to write short realistic stories, preferably within 100 words. Additionally, to avoid surface-form biases, turkers were encouraged to minimize lexical overlap and to not mention the proverb or parts of it in the narrative. This was done so that doing well on the tasks requires a detailed understanding of the narratives rather than relying on surface-level cues. Turkers were paid 50 cents for each narrative for this task.
Span alignment annotation: Next, we solicit fine-grained annotations between the narratives and the proverb in the form of aligned spans. For this, we present proverb-narrative pairs to turkers, asking them to find contiguous spans in the narrative which align well with contiguous spans in the proverb. Turkers could submit up to 5 pairs of aligned spans per proverb-narrative pair. These aligned spans highlight the grounding of a proverb in the narrative (see Figure 1). These annotations can help verify the reasoning capabilities of various neural models by checking whether they are able to identify these correspondences, and they add interpretability to our tasks. Turkers were paid 25 cents for each proverb-narrative pair annotation for this task.
Statistics: Table 1 shows the statistics of narrative collection for the proverbs. The narrative writing task was perceived as challenging yet interesting by most turkers, due to (a) not having topic outlines for the narrative beforehand and (b) the requirement of low lexical overlap with the proverb. Thus, the narrative writing task had a learning curve, and some of the narratives submitted initially were not included in the dataset.
<table><tr><td># submitted narratives</td><td>2561</td></tr><tr><td># approved narratives</td><td>2500</td></tr><tr><td># workers participated</td><td>166</td></tr><tr><td>Avg. # approved narratives per turker</td><td>15.06</td></tr><tr><td>Max # approved narratives by one turker</td><td>168</td></tr></table>

Table 1: Statistics of AMT task for narrative collection.

# 4 Dataset Analysis

Table 2 shows some statistics of the dataset collected through the process described in §3. In this section, we analyze the characteristics and biases of the ePiC dataset in detail.
<table><tr><td>Vocabulary size</td><td>16170</td></tr><tr><td>Avg. no. of tokens per narrative</td><td>64.27</td></tr><tr><td>Avg. no. of sentences per narrative</td><td>4.26</td></tr><tr><td>Avg. no. of aligned spans</td><td>2.18</td></tr><tr><td>Avg. no. words per proverb span</td><td>2.71</td></tr><tr><td>Avg. no. words per narrative span</td><td>11.57</td></tr><tr><td>No. of unique bigrams</td><td>80978</td></tr><tr><td>No. of unique trigrams</td><td>133772</td></tr></table>

Table 2: Dataset statistics for ePiC.

<table><tr><td>N-GRAM</td><td>JACCARD SIM.</td><td>COMMON N-GRAMS</td></tr><tr><td>unigram</td><td>0.0258 (0.0211)</td><td>1.27 (1.06)</td></tr><tr><td>bigram</td><td>0.0010 (0.0004)</td><td>0.07 (0.03)</td></tr><tr><td>trigram</td><td>0.0003 (0.0000)</td><td>0.02 (0.00)</td></tr></table>

Table 3: Avg. Jaccard similarity and number of common n-grams between proverbs and narratives. Numbers in parenthesis denote the corresponding statistics upon random assignment of proverbs to narratives.

# 4.1 Lexical overlap analysis

Using n-grams: We evaluate the extent of lexical overlap between proverbs and narratives by computing common n-grams between them. Table 3 reports the average Jaccard similarity score between n-gram sets of proverbs and narratives, and the average number of common n-grams. On average, there are 1.27 unigrams common between narratives and proverbs (including stopwords). In comparison, randomly permuting assignments of proverbs for narratives yields an average unigram Jaccard similarity of 0.0211 and 1.06 common unigrams. Thus, the overlap metrics in the dataset are comparable to those between unrelated texts.
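For reference, the n-gram Jaccard statistic above can be computed as in the following minimal sketch (naive whitespace tokenization, for illustration only):

```python
# Minimal sketch of the n-gram Jaccard overlap statistic described above;
# tokenization here is naive whitespace splitting, for illustration only.
def ngrams(text: str, n: int) -> set:
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def jaccard(a: str, b: str, n: int = 1) -> float:
    A, B = ngrams(a, n), ngrams(b, n)
    return len(A & B) / len(A | B) if A | B else 0.0

print(jaccard("prevention is better than cure",
              "he put his assignments off until the last week", n=1))
```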
To evaluate diversity among narratives corresponding to a proverb, we compute average Jaccard similarity between sets of unigrams for the narratives. This score is 0.107, which is comparable to a value of 0.098 for unigram overlap between pairs of narratives from different proverbs. This suggests a high lexical diversity between narratives.
Using distributional embeddings: We explore if we can retrieve the correct proverb corresponding to a narrative only by using similarity in their distributional representations. The similarity between a proverb and a narrative is defined as the cosine similarity between the representation of the proverb and the narrative obtained using word2vec embeddings (Mikolov et al., 2013) or contextual embeddings from LLMs. Details of implementation are provided in Appendix §F.1.
For this retrieval task, we report the accuracy and Mean Reciprocal Rank (MRR) of the correct proverb in Table 4. We note that while all models perform better than random (with Sentence-BERT performing the best), the performance is very low when using out-of-the-box representations. In §5, we explore learning-based methods for the same setup.

<table><tr><td>LLM</td><td>Acc. (%) ↑</td><td>MRR ↑</td></tr><tr><td>Random</td><td>0.40</td><td>0.024</td></tr><tr><td>Word2Vec</td><td>1.52</td><td>0.047</td></tr><tr><td>BERT</td><td>0.36</td><td>0.025</td></tr><tr><td>RoBERTa</td><td>1.64</td><td>0.054</td></tr><tr><td>DistilBERT</td><td>1.92</td><td>0.053</td></tr><tr><td>ALBERT</td><td>0.40</td><td>0.025</td></tr><tr><td>Sentence-BERT</td><td>13.44</td><td>0.217</td></tr><tr><td>GPT-2</td><td>0.92</td><td>0.033</td></tr><tr><td>BART</td><td>1.14</td><td>0.041</td></tr><tr><td>T5</td><td>2.32</td><td>0.065</td></tr></table>

Table 4: Proverb retrieval performance using word2vec and off-the-shelf LLMs ('base' versions).
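For clarity, both retrieval metrics can be computed as in the sketch below (illustrative; `scores[i]` holds the similarity of narrative $i$ to all 250 proverbs, and `gold[i]` is the index of its correct proverb):

```python
# Minimal sketch of the two retrieval metrics: accuracy is the rate at which
# the correct proverb is ranked first; MRR averages the reciprocal rank of
# the correct proverb over all narratives.
import numpy as np

def accuracy_and_mrr(scores: np.ndarray, gold: np.ndarray):
    order = np.argsort(-scores, axis=1)                  # proverbs by score
    ranks = np.argmax(order == gold[:, None], axis=1) + 1
    return float(np.mean(ranks == 1)), float(np.mean(1.0 / ranks))
```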

# 4.2 Data characteristics

Diversity of narrative events: Figure 2 shows the distribution of events in our dataset. Following Mostafazadeh et al. (2016), we find events as the hyponyms of the word 'event' or 'process' using WordNet (Fellbaum, 2010). We see that the top events comprise less than $3\%$ of all events in our dataset, and the long tail of less frequent events shows the diversity of the dataset.

Figure 2: Top-30 'event'/'process' hyponyms in ePiC.
Sentiment analysis: To evaluate the presence of sentiment association bias between proverbs and corresponding narratives (e.g., if negative sentiment proverbs only correspond to negative sentiments in narratives), we perform sentiment analysis of the narratives using VADER (Hutto and Gilbert, 2014). Figure 3 shows the average sentiment scores of the narratives corresponding to a proverb plotted against the sentiment score of the proverb. We find that the narratives are diverse in terms of their sentiment polarities, showing a weak positive correlation (Pearson correlation score 0.35) with the sentiment score of the proverbs. Figure 4 shows the variance in terms of the number of positive and negative sentiment narratives (out of 10) for each proverb, showing a diverse spread of narrative sentiment polarities across proverbs. For additional details, please refer to Appendix §A.

Figure 3: Average VADER sentiment score of narratives corresponding to a proverb against the VADER sentiment score of the proverb. The blue line shows the least-squares fit.

Figure 4: Count of narratives with positive or negative VADER sentiment for each proverb. Proverbs are arranged in increasing order of their own VADER sentiment scores. Neutral sentiment narratives are excluded. For the count of negative sentiment narratives (shown in red), consider the absolute value.

<table><tr><td>CRITERION</td><td>ePiC</td><td>[1]</td><td>[2]</td></tr><tr><td>Relatedness</td><td>3.91</td><td>3.15</td><td>3.92</td></tr><tr><td>Interesting/Creative</td><td>3.57</td><td>3.34</td><td>3.63</td></tr><tr><td>Fluency</td><td>3.98</td><td>3.23</td><td>3.80</td></tr><tr><td>Overall</td><td>3.68</td><td>3.15</td><td>3.66</td></tr></table>

Table 5: Averaged Likert scale ratings for data quality. Overall ratings for ePiC are better than [1] Wang et al. (2020) and [2] Tan et al. (2015).
We perform a few additional analyses on our dataset and find (1) that around $61\%$ of mentions in the narratives were male, (2) a diverse spread of reading complexity values in the narratives, measured using Flesch reading ease<sup>4</sup>, and (3) an absence of any hate speech in the narratives of our dataset. The detailed experiments for these analyses are given in Appendix §A.

# 4.3 Human Evaluation of Dataset Quality

We perform a human evaluation of the narratives in our dataset on various criteria to judge the quality of our dataset. We perform this evaluation using the AMT platform. We randomly sample 250 proverb-narrative pairs and ask the turkers to evaluate the narratives on the following criteria:
- Relatedness: how closely the narrative reflects the meaning of the proverb (1: totally unrelated, 5: perfectly related)
- Interesting/Creative: how much the narrative reads like a short creative or interesting story (1: very uninteresting/boring, 5: very creative/story-like)
- Fluency: grammatical correctness of the narrative (1: poor English with grammatical mistakes, 5: perfect English with no errors in writing)
- Overall rating
All the ratings are done on Likert scales from 1 to 5, where 1 is the lowest value for each criterion and 5 is the highest. Also, the rating value '3' was calibrated to be slightly leaning to the higher end of the scale (instead of neutral) so that the turkers take a clear stand on the polarity of each criterion. Table 5 shows the qualitative evaluation of our dataset. The average overall rating was 3.67 and the average pair-wise inter-annotator agreement for labeling a narrative as overall good vs overall poor (overall score $>= 3$ vs $< 3$ ) is $0.84^{5}$ . We also rate the quality of the aligned spans in our dataset similarly on a scale of 1 to 5. The average rating of the alignment between spans was 3.91 and the average pair-wise inter-annotator agreement for alignment as good vs poor (rating $>= 3$ vs $< 3$ ) is $0.86^{5}$ .
Table 6 highlights the key differences between ePiC and prior work that dealt with related figurative language tasks involving quotes. Notably, ePiC exclusively deals with proverbs, unlike prior work (which includes common phrases and idioms such as "trust your gut"), and also provides granular annotations in the form of annotated spans. Also, ePiC contains narratives crowdsourced by specifically keeping proverbs in focus, rather than obtaining them using heuristic supervision. To quantify dataset quality, we ran a human evaluation similar to ePiC's over (1) 200 randomly drawn samples from the "Reddit" dataset of quotations in context from Wang et al. (2020), and (2) 200 randomly drawn samples from the corpus of Tan et al. (2015). Based on the average Likert scores in Table 5, we find that ePiC is (1) significantly superior (t-test; $p < 0.05$) on all criteria to Wang et al. (2020), and (2) better in overall ratings than Tan et al. (2015).

# 5 Tasks & Evaluation

In this section, we introduce three tasks associated with ePiC and describe their experimental setup and benchmark results: (1) Proverb and Alignment Prediction, (2) Narrative Generation, and (3) Identifying narratives with similar motifs.

# 5.1 Proverb and alignment prediction

# 5.1.1 Task details

In this task, the objective is to predict the correct proverb for a given narrative from the set of 250 proverbs in the dataset. The motivation of this task is to test whether language models can abstract the underlying meaning of the narratives and make an analogy with the correct proverb from a large set of proverbs. In terms of applications, this task is related to proverb recommendation, which can be useful in creative writing assistants. The task is challenging, as there might be multiple proverbs loosely related to the narrative context but not completely consonant with subliminal themes in the narrative. An underlying assumption here is that a narrative would match well with exactly one proverb. We found this reasonable for most examples in the dataset.

# 5.1.2 Experiment Setup and Results

We consider two settings, predicting (1) Seen and (2) Unseen proverbs.
- Seen proverbs: The set of proverbs in the train and test set are the same. We divide narratives corresponding to each proverb into train and test in 6:4 ratio. So, the train and test sets have 1500 and 1000 proverb-narrative pairs respectively.
- Unseen proverbs: Here, we consider 150 proverbs in the train set and the remaining 100 proverbs in the test set (6:4 split on the set of proverbs). The sets of proverbs in the train and test split are disjoint. So, the train and test sets have 1500 and 1000 proverb-narrative pairs respectively (since each proverb has 10 narratives).

<table><tr><td>CHARACTERISTICS</td><td>Tan et al. (2015)</td><td>Lee et al. (2016)</td><td>Wang et al. (2020)</td><td>ePiC</td></tr><tr><td>Domain</td><td>Fiction</td><td>Social Media</td><td>Social Media</td><td>Fiction</td></tr><tr><td>Manual curation of narratives</td><td>X</td><td>X</td><td>X</td><td>✓</td></tr><tr><td>Alignment annotation</td><td>X</td><td>X</td><td>X</td><td>✓</td></tr><tr><td>Focus on proverbs</td><td>X</td><td>-<sup>6</sup></td><td>X</td><td>✓</td></tr></table>

Table 6: Comparing ePiC with prior works on learning to quote based on different characteristics of the data and the collection process. While previous methods collect contexts and labels by mining existing text resources through heuristics (with no manual curation), ePiC contains contexts in the form of narratives authored by crowdworkers explicitly for this task. ePiC further provides fine-grained alignment annotation between narratives and proverbs.
Proverb prediction: Here we focus on only predicting the corresponding proverb for a narrative, without employing the span alignments in training or evaluation. For this, we fine-tune the retrieval models based on the different LLMs previously described in §4 (details of the models are in Appendix §F.2). To evaluate performance, we consider accuracy and Mean Reciprocal Rank as metrics. Table 7 shows the best proverb prediction performance on the test split for 'seen' and 'unseen' proverbs<sup>7</sup>. RoBERTa performs the best in both the 'seen' and 'unseen' settings, and the performance of all models is consistently lower for unseen proverbs (as would be expected, since this setting involves much greater generalization). Further, while the performance of all models is much better than chance, even the highest accuracy is only $28.2\%$.
<table><tr><td>MODEL</td><td>ACC. (%) ↑</td><td>MRR ↑</td></tr><tr><td colspan="3">Seen proverbs</td></tr><tr><td>Random</td><td>0.4</td><td>0.024</td></tr><tr><td>BERT</td><td>22.9</td><td>0.342</td></tr><tr><td>RoBERTa</td><td>28.2</td><td>0.391</td></tr><tr><td>DistilBERT</td><td>18.7</td><td>0.289</td></tr><tr><td>ALBERT</td><td>13.4</td><td>0.221</td></tr><tr><td>Sentence-BERT</td><td>20.6</td><td>0.315</td></tr><tr><td>BART</td><td>15.8</td><td>0.245</td></tr><tr><td>T5</td><td>18.7</td><td>0.292</td></tr><tr><td colspan="3">Unseen proverbs</td></tr><tr><td>Random</td><td>1.0</td><td>0.005</td></tr><tr><td>BERT</td><td>19.2</td><td>0.307</td></tr><tr><td>RoBERTa</td><td>20.3</td><td>0.314</td></tr><tr><td>DistilBERT</td><td>17.4</td><td>0.277</td></tr><tr><td>ALBERT</td><td>1.1</td><td>0.053</td></tr><tr><td>Sentence-BERT</td><td>17.0</td><td>0.278</td></tr><tr><td>BART</td><td>8.5</td><td>0.189</td></tr><tr><td>T5</td><td>13.7</td><td>0.242</td></tr></table>
Table 7: Proverb prediction performance on 'seen' and 'unseen' proverbs (all LLMs are 'base' versions).

Alignment prediction: Here we focus only on predicting an aligned span from the narrative, given the narrative, the proverb, and a span from the proverb as inputs. We fine-tune two large language models (BERT and RoBERTa) for this by adopting a learning framework similar to answer span prediction for SQuAD (Rajpurkar et al., 2016). The language model outputs two probability distributions over the narrative tokens, corresponding to the start and end positions of a span. We iterate over all combinations of the start and end tokens and choose the span with maximum likelihood. For span prediction, we report token-level precision, recall, and F1. Table 8 shows the results of alignment prediction on the 'seen' proverbs using BERT and RoBERTa models. We find that the performance is low for both models, indicating major scope for improvement.

<table><tr><td>MODEL</td><td>SPAN P</td><td>SPAN R</td><td>SPAN F1</td></tr><tr><td>BERT</td><td>0.070</td><td>0.123</td><td>0.089</td></tr><tr><td>RoBERTa</td><td>0.068</td><td>0.143</td><td>0.092</td></tr></table>

Table 8: Alignment prediction performance for seen proverbs using LLMs ('base' versions).
Predicting proverbs and alignment jointly: We formulate this as multi-task learning. We extend the models from the proverb prediction task by adding a component that predicts a span from the narrative, given a span from the proverb and the narrative. The language model is thus shared across the proverb prediction and span prediction tasks. The span prediction branch predicts the start and end positions of the corresponding narrative span. We jointly train the model with multi-task learning of the two tasks, i.e., proverb and alignment prediction, on the 'seen' proverbs data split. We report the accuracy for proverb prediction and precision, recall, and F1 for span prediction. Apart from this joint model, we also consider a pipelined baseline model which first does proverb prediction, followed by span prediction if the correct proverb was predicted.
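The maximum-likelihood span selection described above can be sketched as follows (an illustration, not the authors' code); `p_start` and `p_end` are the two distributions over narrative tokens:

```python
# Minimal sketch of maximum-likelihood span selection: given start/end
# probability distributions over narrative tokens, pick the start <= end
# pair that maximizes p_start[s] * p_end[e].
import numpy as np

def best_span(p_start: np.ndarray, p_end: np.ndarray):
    scores = np.outer(p_start, p_end)   # scores[s, e] = p_start[s] * p_end[e]
    scores = np.triu(scores)            # enforce start <= end
    s, e = np.unravel_index(np.argmax(scores), scores.shape)
    return int(s), int(e)
```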
<table><tr><td>MODEL</td><td>ACC. (%)</td><td>SPAN P</td><td>SPAN R</td><td>SPAN F1</td></tr><tr><td colspan="5">Pipelined</td></tr><tr><td>BERT</td><td>22.9</td><td>0.018</td><td>0.035</td><td>0.024</td></tr><tr><td>RoBERTa</td><td>28.2</td><td>0.019</td><td>0.048</td><td>0.027</td></tr><tr><td colspan="5">Joint training</td></tr><tr><td>BERT</td><td>19.8</td><td>0.015</td><td>0.029</td><td>0.019</td></tr><tr><td>RoBERTa</td><td>26.5</td><td>0.015</td><td>0.030</td><td>0.020</td></tr></table>
Table 9: Joint proverb and alignment prediction performance for seen proverbs using LLMs ('base' versions).

# 5.1.3 Qualitative analysis of proverb prediction models

Figure 5 shows a heatmap illustrating the differences in prediction accuracies of the BERT and RoBERTa models. We see that RoBERTa generally outperforms BERT (in Figure 5, values in the bottom-right triangle are typically greater than those in the top-left). Looking into the narratives for proverbs in the test set with high accuracy $(>= 0.75)$, we think a reason for the high performance could be the presence of certain words/phrases which are synonymous with words/phrases in the proverb (for example, the presence of the word 'group' for the proverb 'birds of a feather flock together'). On the other hand, there are cases where the model is confused by multiple topics being discussed in the narrative, resulting in an incorrect prediction. For example, some narratives in the test set for the proverb 'life's not all beer and skittles' describe earning money the hard way, which confused the RoBERTa model into predicting 'time is money' for such narratives.

Figure 5: Heatmap showing the percentage of proverbs with various fine-tuned BERT and RoBERTa proverb prediction accuracies (for example, more than $15\%$ of the proverbs have a RoBERTa prediction accuracy of $25\%$ and a BERT prediction accuracy of $25\%$).

# 5.1.4 MCQ task for human performance comparison

To formulate a feasible task for humans, we frame proverb prediction as a multiple-choice question (MCQ) task where, for each narrative, 5 proverbs are provided as choices. The set of choices includes the correct proverb and 4 other distractor proverbs, chosen by using the fine-tuned RoBERTa model. Examples of the MCQ task and details of choosing distractors are provided in Appendix §B. Table 10 shows the accuracy of the human evaluation for this MCQ task on a random sample of 100 narratives from the test split of "seen" proverbs, conducted using AMT. Compared to RoBERTa, we find humans are much better at this adversarially created MCQ task. Note that the performance for RoBERTa in Table 10 and Table 7 is different, as Table 10 reports accuracy only on the random sample of narratives chosen for human evaluation. The estimate for human performance is likely an under-estimate, since in many cases human subjects were unfamiliar with the meanings of some of the proverbs provided in the options and, as a result, focused more on surface-level cues (details of this analysis are provided in Appendix §B). The average pair-wise inter-annotator agreement between human subjects for this task was $0.73^{5}$.
This evaluation does not take into account semantic similarity between proverbs (two proverbs might be equally suitable for the same context). To explore this, we analyze the human errors on the MCQ task and find that in only around $11\%$ of the errors, the proverb chosen by humans is semantically similar to the annotated proverb and can also be a suitable answer to the MCQ task. Details about this analysis are given in Appendix §C. Future work can consider handling semantic similarity between proverbs explicitly and devise suitable evaluation metrics.
<table><tr><td>Predictor</td><td>Acc.(%) ↑</td></tr><tr><td>RoBERTA</td><td>23.0</td></tr><tr><td>Human</td><td>78.7</td></tr></table>

Table 10: Proverb prediction accuracy in MCQ setting.

# 5.2 Narrative Generation

# 5.2.1 Task details

One of the important use-cases for NLP models in the creative writing domain is to use these models to generate content. We explore the task of generating narratives corresponding to a proverb and a given topic (specified as a set of keywords). We benchmark the performance of two recently proposed state-of-the-art models in text generation, T5 (Raffel et al., 2020) and BART (Lewis et al., 2020), by fine-tuning them on ePiC.

# 5.2.2 Experiments and Results

We divide our dataset into train and test split under 'seen' and 'unseen' proverbs settings similar to the proverb prediction task. We consider the set of verbs and named-entities as the keywords for a narrative. We train our narrative generation model conditioned on the proverb and the keywords.
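As an illustration of the keyword construction described above, the following minimal sketch (assuming spaCy with its small English model; not the authors' code) extracts verbs and named entities from a narrative:

```python
# Minimal sketch (assuming spaCy's small English model) of extracting verbs
# and named entities as the keyword set for a narrative.
import spacy

nlp = spacy.load("en_core_web_sm")

def keywords(narrative: str) -> set:
    doc = nlp(narrative)
    verbs = {tok.lemma_ for tok in doc if tok.pos_ == "VERB"}
    entities = {ent.text for ent in doc.ents}
    return verbs | entities
```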
Table 11 shows results for the automatic evaluation of the generated narratives, using BLEU (Papineni et al., 2002), ROUGE-L (Lin, 2004), and the recall of the keywords mentioned in the generated narrative as metrics. Examples of generated narratives are given in Appendix §D. We find that BART performs better than T5 on the automatic evaluation metrics. Further, we conduct a human evaluation of the quality of the generated narratives on AMT, using the same criteria (and rating semantics) employed in Section 4.3. Table 12 shows the human evaluation of narratives generated by BART and T5 when tested on 'seen' proverbs. The low BLEU and ROUGE-L scores and the low Likert ratings of the generated narratives indicate much scope for future improvement on this task.

<table><tr><td>MODEL</td><td>BLEU↑</td><td>ROUGE-L↑</td><td>RECALL↑</td></tr><tr><td colspan="4">Seen proverbs</td></tr><tr><td>BART</td><td>4.21</td><td>30.80</td><td>0.90</td></tr><tr><td>T5</td><td>2.25</td><td>27.83</td><td>0.77</td></tr><tr><td colspan="4">Unseen proverbs</td></tr><tr><td>BART</td><td>4.39</td><td>31.36</td><td>0.93</td></tr><tr><td>T5</td><td>2.34</td><td>26.61</td><td>0.75</td></tr></table>

Table 11: Automatic evaluation for narrative generation on 'seen' and 'unseen' proverbs using 'base' versions of LLMs.

<table><tr><td>CRITERION</td><td>BART</td><td>T5</td></tr><tr><td>Relatedness</td><td>2.75</td><td>2.57</td></tr><tr><td>Interesting/Creative</td><td>2.97</td><td>3.07</td></tr><tr><td>Fluency</td><td>2.71</td><td>2.53</td></tr><tr><td>Overall</td><td>2.87</td><td>2.76</td></tr></table>

Table 12: Human evaluation results for narrative generation on 'seen' proverbs.

# 5.3 Identifying narratives with similar motifs

# 5.3.1 Task details

An important aspect of language understanding is the ability to make linguistic (and narrative) analogies, i.e., to identify 'similarity' between narratives (e.g., identifying two narratives that are variations on the 'Cinderella story' theme). Here, we explore the task of identifying narrative analogy by modeling 'similarity' between narratives based on the proverbs illustrated by them. For this task, two narratives are taken to be similar if they are related to the same proverb.

# 5.3.2 Experiments and Results

For this task, we use the train and test split from the 'seen' proverbs setup in the proverb prediction task. The aim is to find similar narratives for each narrative in the test split, amongst all narratives in the test split. So, for each narrative, there are 3 other similar narratives (corresponding to the same proverb) in the test split (containing 1000 narratives).
Modeling similarity between narratives: We use the models learned in the proverb prediction task to obtain a probability distribution over the proverbs for each narrative. To model similarity, we compute the distance between the (vectors representing) two probability distributions using one of the following: (1) cosine distance; (2) Jensen-Shannon divergence; (3) L2 (Euclidean) distance; and (4) L1 (Manhattan) distance. We predict the narrative closest (in terms of the distance metric) to the input narrative as the most similar. Table 13 shows the accuracy of retrieving a similar narrative using different distance metrics and different fine-tuned LLMs. Using cosine distance or Jensen-Shannon divergence on the probability distribution over proverbs predicted by the RoBERTa model performs best on this task. However, the overall performance of all models is still low and could benefit from suitable training methods devised for this task.
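The four distances above can be computed with SciPy as in this minimal sketch (note that SciPy's `jensenshannon` returns the square root of the JS divergence, hence the squaring):

```python
# Minimal sketch of the four distance metrics, computed over the predicted
# probability distributions p and q of two narratives.
import numpy as np
from scipy.spatial.distance import cosine, euclidean, cityblock, jensenshannon

def narrative_distances(p: np.ndarray, q: np.ndarray) -> dict:
    return {
        "cosine": cosine(p, q),
        "jsd": jensenshannon(p, q) ** 2,  # jensenshannon returns sqrt(JSD)
        "l2": euclidean(p, q),
        "l1": cityblock(p, q),
    }
```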
We perform an additional experiment on finding similar narratives without performing proverb prediction as an intermediate step. We use a pretrained Sentence-BERT model to obtain representations of each narrative. For a given input narrative, we calculate the cosine distance between the Sentence-BERT representation of the input narrative and all other narratives in the test set. We predict the narrative having minimum cosine distance to the input narrative as the most similar. Using this approach we find the accuracy of identifying similar narratives as $6.6\%$ , which is lower than most values reported in Table 13. This low value highlights the diversity between narratives and the challenge in finding analogies between narratives.
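A minimal sketch of this Sentence-BERT baseline (assuming the `sentence-transformers` package; the checkpoint name is illustrative, not necessarily the one used here):

```python
# Minimal sketch of the Sentence-BERT baseline: embed all narratives and,
# for an input narrative, return the index of the nearest other narrative
# by cosine similarity (normalized embeddings make dot product = cosine).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative checkpoint

def most_similar(narratives: list, query_idx: int) -> int:
    emb = model.encode(narratives, normalize_embeddings=True)
    sims = emb @ emb[query_idx]
    sims[query_idx] = -np.inf        # exclude the query itself
    return int(np.argmax(sims))
```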
<table><tr><td>MODEL</td><td>Cos</td><td>JSD</td><td>L2</td><td>L1</td></tr><tr><td>BERT</td><td>8.5</td><td>8.0</td><td>7.3</td><td>7.9</td></tr><tr><td>RoBERTa</td><td>13.3</td><td>13.4</td><td>11.2</td><td>11.8</td></tr><tr><td>Distil-BERT</td><td>6.5</td><td>7.2</td><td>5.2</td><td>6.0</td></tr><tr><td>Sentence-BERT</td><td>7.2</td><td>6.1</td><td>7.0</td><td>5.9</td></tr></table>
Table 13: Prediction accuracy $(\%)$ for identifying similar narratives by using different distance metrics and distribution over proverbs from different LLMs ('base' versions).

# 6 Conclusion and Future Work

We introduce ePiC, a high-quality crowdsourced dataset of narratives paired with proverbs, and a suite of challenging tasks associated with this dataset. We show that these provide a challenging testbed for evaluating the abstract reasoning and analogical abilities of LLMs. Future work can explore more sophisticated mechanisms for using the alignment annotations to improve proverb prediction performance and model interpretability. Additionally, researchers can explore conditional narrative generation through more informative prompts than keywords. ePiC can also be extended in the future by incorporating more proverbs and adding more layers of complexity like sarcasm, or by adversarially creating harder narratives. Most of all, the development of similarly challenging resources and tasks can enable the possibility of socially grounded NLP systems.

# Ethics and Broader Impact

In §4, we note that our dataset shows considerable differences in the distribution of gender of entities (61% male vs 39% female), whereas in the real world we expect the ratios to be about equally balanced. Systems that don't account for this bias might end up performing better for narratives with male entities than for those with female entities. However, we note that narratives with male and female entities show no differences in overall length or the average number of mentions of those entities.
The proverbs used in our dataset were collected from free public resources without violating intellectual property rights. We do not collect any personal information from the turkers who participated in our crowdsourced tasks. We release our dataset publicly without mentioning any personal details of turkers available automatically in AMT (such as turker IDs). The turkers were compensated fairly and the payment per task is equivalent to an hourly compensation that is greater than minimum wage (based on the median time taken by turkers).
For all the crowdsourcing tasks in this work, we limited the locale of eligible turkers to the USA, Canada, and the UK. Further, to ensure good-faith turkers, we required that the approval rate of the turkers be above $97\%$ .
Our screening process has selection biases that likely over-sample narrative-writers from demographics that are over-represented on AMT (ethnically white, college-educated, lower-to-medium income, and young) (Hitlin, 2016), and this is likely to have affected the topics and the type of language usage in the collected narratives.
Finally, our investigation here has focused on traditional English proverbs, even though proverbs are universal across human languages and cultures (Penfield and Duru, 1988). This poses a real risk of developing AI models that understand and employ specific types of figurative language better than others. Such systems are likely to be less user-friendly to users who don't belong to specific socio-cultural backgrounds. To mitigate these risks, but also since proverbs are universal repositories of culture-specific knowledge, future work should extend our effort to more equitably represent the variety and diversity of human thought and cultural experiences. Our investigation here, unfortunately, does not adequately do this. As the proverb goes, the road to hell is paved with good intentions.

# References

Patricia A Alexander, Denis Dumas, Emily M Grossnickle, Alexandra List, and Carla M Firetto. 2016. Measuring relational reasoning. The Journal of Experimental Education, 84(1):119-151.
Tuhin Chakrabarty, Yejin Choi, and Vered Shwartz. 2021a. It's not rocket science: Interpreting figurative language in narratives. arXiv preprint arXiv:2109.00087.
Tuhin Chakrabarty, Smaranda Muresan, and Nanyun Peng. 2020. Generating similes effortlessly like a pro: A style transfer approach for simile generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6455-6469, Online. Association for Computational Linguistics.
Tuhin Chakrabarty, Xurui Zhang, Smaranda Muresan, and Nanyun Peng. 2021b. MERMAID: Metaphor generation with symbolism and discriminative decoding. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4250-4261, Online. Association for Computational Linguistics.
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In *Machine Learning Challenges Workshop*, pages 177–190. Springer.
Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the International AAAI Conference on Web and Social Media, volume 11.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Karen M Douglas and Robbie M Sutton. 2006. When what you say about others says something about you: Language abstraction and inferences about descriptors' attitudes and goals. Journal of Experimental Social Psychology, 42(4):500-508.
Kawin Ethayarajh. 2019. How contextual are contextualized word representations? comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65, Hong Kong, China. Association for Computational Linguistics.
Alvan R Feinstein and Domenic V Cicchetti. 1990. High agreement but low kappa: I. the problems of two paradoxes. Journal of clinical epidemiology, 43(6):543-549.
Christiane Fellbaum. 2010. Wordnet. In Theory and applications of ontology: computer applications, pages 231-243. Springer.
Ge Gao, Eunsol Choi, Yejin Choi, and Luke Zettlemoyer. 2018. Neural metaphor detection in context. arXiv preprint arXiv:1808.09653.
Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1161-1166, Hong Kong, China. Association for Computational Linguistics.
Qi He, Jian Pei, Daniel Kifer, Prasenjit Mitra, and Lee Giles. 2010. Context-aware citation recommendation. In Proceedings of the 19th international conference on World wide web, pages 421-430.
John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota. Association for Computational Linguistics.
Paul Hitlin. 2016. 4. turkers in this canvassing: Young, well-educated and frequent users. Pew Research Center, 437.
Keith J Holyoak. 2013. Analogy and relational reasoning. The Oxford Handbook of Thinking and Reasoning, page 234.
Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. spaCy: Industrial-strength natural language processing in Python.
Wenyi Huang, Saurabh Kataria, Cornelia Caragea, Prasenjit Mitra, C Lee Giles, and Lior Rokach. 2012. Recommending citations: translating papers into references. In Proceedings of the 21st ACM international conference on Information and knowledge management, pages 1910-1914.
Clayton Hutto and Eric Gilbert. 2014. Vader: A parsimonious rule-based model for sentiment analysis of social media text. In Proceedings of the International AAAI Conference on Web and Social Media, volume 8.
Ganesh Jawahar, Benoit Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In International Conference on Learning Representations.
Hanbit Lee, Yeonchan Ahn, Haejun Lee, Seungdo Ha, and Sang-goo Lee. 2016. Quote recommendation in dialogue using deep neural network. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, pages 957-960.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019a. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
Yuanchao Liu, Bo Pang, and Bingquan Liu. 2019b. Neural-based Chinese idiom recommendation for enhancing elegance in essay writing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5522-5526, Florence, Italy. Association for Computational Linguistics.
|
| 302 |
+
Ilya Loshchilov and Frank Hutter. 2018. Fixing weight decay regularization in adam.
|
| 303 |
+
Yishu Miao, Edward Grefenstette, and Phil Blunsom. 2017. Discovering discrete latent topics with neural variational inference. In International Conference on Machine Learning, pages 2410-2419. PMLR.
|
| 304 |
+
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
|
| 305 |
+
Tristan Miller and Mladen Turkovic. 2016. Towards the automatic detection and identification of english puns. The European Journal of Humour Research, 4(1):59-75.
|
| 306 |
+
Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human
|
| 307 |
+
|
| 308 |
+
Language Technologies, pages 839-849, San Diego, California. Association for Computational Linguistics.
|
| 309 |
+
Suzanne Mpuli. 2017. Annotating similes in literary texts. In Proceedings of the 13th Joint ISO-ACL Workshop on Interoperable Semantic Annotation (ISA-13).
|
| 310 |
+
Vlad Niculae and Cristian Danescu-Niculescu-Mizil. 2014. Brighter than gold: Figurative language in user generated comparisons. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 2008-2018.
|
| 311 |
+
Marilyn A Nippold, Melissa M Allen, and Dixon I Kirsch. 2001. Proverb comprehension as a function of reading proficiency in preadolescents.
|
| 312 |
+
Travis E Oliphant. 2006. A guide to NumPy, volume 1. Trelgol Publishing USA.
|
| 313 |
+
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318.
|
| 314 |
+
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.
|
| 315 |
+
Joyce Penfield and Mary Duru. 1988. Proverbs: Metaphors that teach. Anthropological quarterly, pages 119-128.
|
| 316 |
+
Adam Poliak, Aparajita Haldar, Rachel Rudinger, J Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018. Collecting diverse natural language inference problems for sentence representation evaluation. arXiv preprint arXiv:1804.08207.
|
| 317 |
+
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
|
| 318 |
+
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
|
| 319 |
+
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of
|
| 320 |
+
|
| 321 |
+
the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
|
| 322 |
+
Joseph Raymond. 1956. Tensions in proverbs: more light on international understanding. Western Folklore, 15(3):153-158.
|
| 323 |
+
Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.
|
| 324 |
+
Victor Sanh, Lysandre Debut, Julien Chaumont, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
|
| 325 |
+
Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. In International Conference on Machine Learning, pages 4596-4604. PMLR.
|
| 326 |
+
Kevin Stowe, Tuhin Chakrabarty, Nanyun Peng, Smaranda Muresan, and Iryna Gurevych. 2021. Metaphor generation with conceptual mappings. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6724-6736, Online. Association for Computational Linguistics.
|
| 327 |
+
Kevin Stowe, Sarah Moeller, Laura Michaelis, and Martha Palmer. 2019. Linguistic analysis improves neural metaphor detection. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 362-371.
|
| 328 |
+
Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2015. Learning to recommend quotes for writing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29.
|
| 329 |
+
Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2016. A neural network approach to quote recommendation in writings. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pages 65-74.
|
| 330 |
+
Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601, Florence, Italy. Association for Computational Linguistics.
|
| 331 |
+
Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stefan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman,
|
| 332 |
+
|
| 333 |
+
Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, CJ Carey, Ilhan Polat, Yu Feng, Eric W. Moore, Jake Vand erPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero, Charles R Harris, Anne M. Archibald, Antonio H. Ribeiro, Fabian Pedregosa, Paul van Mulbregt, and SciPy 1.0 Contributors. 2020. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 17:261-272.
|
| 334 |
+
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019a. Superglue: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
|
| 335 |
+
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.
|
| 336 |
+
Lingzhi Wang, Jing Li, Xingshan Zeng, Haisong Zhang, and Kam-Fai Wong. 2020. Continuity of topic, interaction, and query: Learning to quote in online conversations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6640-6650, Online. Association for Computational Linguistics.
|
| 337 |
+
Yue Wang, Jing Li, Hou Pong Chan, Irwin King, Michael R. Lyu, and Shuming Shi. 2019c. Topi- aware neural keyphrase generation for social media language. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2516-2526, Florence, Italy. Association for Computational Linguistics.
|
| 338 |
+
Geoffrey M White. 1987. Proverbs and cultural models: An american psychology of problem solving.
|
| 339 |
+
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtopicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
|
| 340 |
+
Jiali Zeng, Linfeng Song, Jinsong Su, Jun Xie, Wei Song, and Jiebo Luo. 2020. Neural simile recognition with cyclic multitask learning and local attention. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9515-9522.
|
| 341 |
+
Chujie Zheng, Minlie Huang, and Aixin Sun. 2019. ChID: A large-scale Chinese IDiom dataset for cloze test. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 778-787, Florence, Italy. Association for Computational Linguistics.
|
| 342 |
+
|
| 343 |
+
# Appendix

# A Additional dataset analysis

Additional details on sentiment analysis: An example of a proverb whose narratives were close to the proverb in sentiment score is 'a thing of beauty is a joy forever', while for 'there's no fool like an old fool' the sentiment polarity of the narratives was on average opposite to that of the proverb. We note that there is a small number of proverbs for which all or most narratives lean towards a particular sentiment polarity. Quantitatively, for 23 proverbs, either 9 or all 10 of the narratives have a positive VADER sentiment score. These include: 'Nothing succeeds like success', 'Christmas comes but once a year' and 'Genius is one percent inspiration, ninety-nine percent perspiration'. There are 6 proverbs for which either 9 or all 10 narratives have a negative VADER sentiment score. These include: 'The wages of sin is death', 'Fish always stink from the head down' and 'Don't wash your dirty linen in public'. However, as seen in Figure 4, the vast majority of proverbs in the dataset are represented by narratives with both positive and negative sentiment polarities.
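As a rough illustration, the per-proverb positive/negative breakdown above can be reproduced with VADER; the sketch below assumes the narratives are grouped per proverb in a dict (`narratives_by_proverb` is a placeholder name, not part of the released dataset API):

```python
# Sketch of the per-proverb sentiment breakdown, using VADER.
# `narratives_by_proverb` is an assumed structure: {proverb: [narrative, ...]}.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def sentiment_counts(narratives_by_proverb):
    """For each proverb, count how many of its narratives score positive."""
    counts = {}
    for proverb, narratives in narratives_by_proverb.items():
        # VADER's 'compound' score is in [-1, 1]; > 0 is treated as positive here.
        positives = sum(
            analyzer.polarity_scores(n)["compound"] > 0 for n in narratives
        )
        counts[proverb] = (positives, len(narratives) - positives)
    return counts
```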
Gender distribution of entities: Using an off-the-shelf neural coreference pipeline, we find that $61\%$ of the mentions in the narratives are male, while $39\%$ are female. Around $48\%$ of the narratives have predominantly male mentions, $26\%$ have predominantly female mentions, and the rest have an equal number of male and female mentions. The average length of narratives with predominantly male mentions and of those with predominantly female mentions was comparable (about 65 words).

Language complexity: We use the Flesch reading ease score to measure the language complexity of the narratives in our dataset. The scores range from 112.1 (equivalent to a 3rd grade reading level) to -41.5 (significantly above a college graduate reading level), with an average of 66.5 (equivalent to an 8th/9th grade reading level), showing a considerable spread in the complexity of language in our dataset.
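For reference, the Flesch reading ease score is 206.835 - 1.015 (words/sentence) - 84.6 (syllables/word). A minimal sketch, with a crude vowel-group heuristic for syllables (so scores are approximate, not the exact values reported above):

```python
import re

def flesch_reading_ease(text):
    """Flesch reading ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    The syllable count is a rough vowel-group heuristic, so results are approximate."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)
```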
Hate speech: Using an off-the-shelf hate speech classifier (Davidson et al., 2017), we found no instances of hate or toxic speech in the dataset.

# B Human evaluation on MCQ task

We formulated an MCQ task for proverb prediction to gauge human performance. The MCQ task has 5 options: the correct proverb and 4 distractor proverbs. The distractors were chosen using the RoBERTa model fine-tuned on the proverb prediction task, from a mix of the proverbs with the highest prediction probabilities and the proverbs assigned probabilities most similar to that of the correct answer. We performed this study on the Amazon Mechanical Turk platform. We observed that the task is not simple even for humans, and requires a certain level of proficiency in English or in proverbs specifically. The task is made more challenging by the fact that the distractors were picked as the options the RoBERTa (Liu et al., 2019a) model found most confusing; we find that these wrong choices are confusing for humans too, since they superficially also seem quite related to the narrative, and identifying the correct proverb requires good reasoning skills. Turkers also failed when the options contained multiple proverbs that are close in meaning. For example, when the options contained both 'there's no accounting for tastes' and 'Beauty is in the eye of the beholder', the turkers often chose the former when the annotated proverb was the latter. Table 14 shows examples of narratives along with the proverb choices where turkers failed to identify the correct proverb.
# C Semantically similar proverbs

Our chosen set of 250 proverbs in ePiC includes instances of proverbs that are semantically very similar, or even paraphrases (e.g., 'never judge a book by its cover' and 'appearances can be deceptive'). This can be problematic, since the presence of semantically similar proverbs as different options in the MCQ task (and as different classes in the proverb classification task) can confuse both humans and automated models. To estimate the extent of this phenomenon, we analyze the human errors on the aforementioned MCQ task. Out of 64 errors, we find that in 20 cases the chosen proverb was completely unrelated to the actual answer. In 29 of the remaining 44 cases, the chosen proverb seems related to the narrative at first glance, but is not aligned with it and is thus not the best fit. For the remaining 15 cases (23% of human errors), the chosen proverb would have been equally appropriate for the narrative. Further, in 7 of these 15 cases (11% of human errors), the chosen proverb is also semantically similar to the annotated proverb. Future work can consider handling semantic similarity between proverbs explicitly and devising suitable evaluation metrics.

# Narrative 1:

She had been so happy when he had asked her to marry him but three years on, it seemed that he had so many excuses for not setting a date that she thought that it was never going to happen. Her happiness eventually turned to despair and she considered breaking the engagement.

Choice A: You win some, you lose some.

Choice B: Jam tomorrow and jam yesterday, but never jam today (Correct)

Choice C: Cowards may die many times before their death

Choice D: The course of true love never did run smooth (Marked)

Choice E: Nothing is certain but death and taxes

# Narrative 2:

She didn't want to embarrass her friend when she asked her, "It's beautiful, isn't it?" She looked at her friend's new car and nodded her head in agreement.

It was purple, the worst car colour she had ever seen, but she faked a smile and congratulated her.

Choice A: Imitation is the sincerest form of flattery

Choice B: From the sublime to the ridiculous is only one step

Choice C: There's no accounting for tastes (Marked)

Choice D: Beauty is in the eye of the beholder (Correct)

Choice E: All publicity is good publicity

Table 14: Tricky MCQ questions from the human evaluation of proverb prediction: the samples above show the challenges of the task. For narrative 1, turkers often chose choice D, which superficially seems related but is not correct. For narrative 2, the proverbs in choices C and D are quite close in meaning, resulting in a wrong choice by turkers.
# D Generated Narratives

We show examples of the narratives generated by the BART (Lewis et al., 2020) and T5 (Raffel et al., 2020) models for the narrative generation task in Table 15. We see that even though the models try to mention all the keywords, they are not able to generate a coherent narrative.
# E Evaluation of alignment prediction for jointly trained models

In §5.1, we present models that predict the proverb and the alignment jointly. During evaluation, we first perform proverb prediction, and then perform alignment prediction only if the correct proverb was predicted. If an incorrect proverb is predicted, the span precision, span recall, and span F1 are counted as zero.

If we instead isolate the layers responsible for alignment prediction from the joint model and evaluate them solely on the alignment prediction task, the span F1 scores for BERT (base) and RoBERTa (base) are 0.078 and 0.074, respectively.
# F Training details

In this section we discuss the model parameters, hyper-parameter settings, and the hardware and software specifications used for training.
# F.1 Retrieval models' implementation details

As discussed in §4.1, we formulate a retrieval task to explore whether we can retrieve the correct proverb corresponding to a narrative using only the similarity of their distributional representations. We define the similarity between a proverb and a narrative as the cosine similarity between their embeddings. We use (1) word2vec embeddings (Mikolov et al., 2013) and (2) contextual embeddings from large language models to represent the proverb and narrative; a minimal retrieval sketch is given after the embedding list below.
# Proverb 1:

Absence makes the heart grow fonder

# Gold narrative:

Although they never got along together much, she somehow always looked forward to the holidays each year.

# Keywords:

got, looked, the holidays

# Generated narrative by BART:

Polly got back into the swing of things, she looked forward to the holidays each year.

# Generated narrative by T5:

Although he got through the holidays each year, he looked over everything again to his friends, family and friends.

# Proverb 2:

Those who do not learn from history are doomed to repeat it

# Gold narrative:

Bob cheated on his girlfriend in college. He was heartbroken when she broke up with him but moved on. A few years later he married and cheated on his wife. She left him, just as his girlfriend did in college.

# Keywords:

broke, did, a few years later, cheated, moved, left, married, heartbroken, Bob

# Generated narrative by BART:

Bob cheated on his girlfriend with her best friend. He was heartbroken when she broke up with him and moved out of his apartment. A few years later, he got married to the girl he cheated on with, and left his apartment with her. He never did again.

# Generated narrative by T5:

Bob was heartbroken. a few years later, he broke into town. He moved out in his life and moved out.. a few years later, he married a girl. A few years later, she cheated on him but, unfortunately, she left him because she did not do well.

# Proverb 3:

A trouble shared is a trouble halved

# Gold narrative:

Mike knows that this new problem is going to need some serious help. He tells his friend Josh about the new problem, and Josh agrees to help him solve it.

# Keywords:

help, solve, knows, going, Mike, Josh, need, tells, agrees

# Generated narrative by BART:

Mike knows a thing or two about what is going on in his life that he need help with anything. he tells his friend josh about it and Josh agrees to help solve the problem.

# Generated narrative by T5:

Mike, Josh, knows that he is going to need help to solve the problem. He tells me that he agrees but he will not help me solve the problem.

Table 15: Examples of narratives generated by BART and T5 on 'seen' proverbs.

We obtain the embeddings for a context $c$ (where $c$ can be a proverb or a narrative) as:
- Word2vec: average of the word embeddings for the tokens in $c$.
- BERT (Devlin et al., 2019)/RoBERTa (Liu et al., 2019a): [CLS] token embedding obtained by passing $c$ through BERT/RoBERTa.
- DistilBERT (Sanh et al., 2019)/ALBERT (Lan et al., 2020): [CLS] token embedding obtained by passing $c$ through DistilBERT/ALBERT.
- SentenceBERT (Reimers and Gurevych, 2019): normalized SentenceBERT embeddings obtained by running the 'all-mpnet-base-v2' model on $c$.
- T5 (Raffel et al., 2020)/GPT-2 (Radford et al., 2019) encoder: sum of the embeddings of the tokens in $c$ after passing through the encoder.
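A minimal sketch of this retrieval setup, using the SentenceBERT variant with the 'all-mpnet-base-v2' model named above; the proverb list and narrative are placeholders:

```python
# Rank proverbs for a narrative by cosine similarity of SentenceBERT embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-mpnet-base-v2")

def rank_proverbs(narrative, proverbs):
    """Return (proverb, score) pairs sorted by decreasing cosine similarity."""
    narrative_emb = model.encode(narrative, normalize_embeddings=True)
    proverb_embs = model.encode(proverbs, normalize_embeddings=True)
    scores = util.cos_sim(narrative_emb, proverb_embs)[0]  # shape: (len(proverbs),)
    return sorted(zip(proverbs, scores.tolist()), key=lambda x: -x[1])

# Usage: rank_proverbs("He kept postponing the wedding...", ["a stitch in time saves nine", ...])
```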
# F.2 Proverb prediction models' implementation details

We use the same LLMs (and implementations) as in the retrieval setup discussed in §4.1 and §F.1.
# F.3 Obtaining keywords for narrative generation

We consider the named entities and verbs present in a narrative (extracted using spaCy (Honnibal et al.)) as the keywords for generating that narrative.
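A minimal sketch of this extraction; the choice of the `en_core_web_sm` pipeline is an assumption, since the text does not specify which spaCy model was used:

```python
# Extract keywords (named entities + verbs) from a narrative with spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed pipeline; any English model works

def extract_keywords(narrative):
    doc = nlp(narrative)
    entities = [ent.text for ent in doc.ents]                 # named entities
    verbs = [tok.text for tok in doc if tok.pos_ == "VERB"]   # verbs
    return entities + verbs
```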
# F.4 Model parameters

Our proverb prediction models do not introduce any parameters beyond those already in the large language models. For the joint prediction of proverb and span, we introduce new fully connected layers on top of the language models, adding $0.6\mathrm{M}$ parameters.
# F.5 Hyper-parameter settings

For all transformer-based models we use the HuggingFace library implementation (Wolf et al., 2019); all model-specific hyper-parameters are therefore kept at the library defaults. We initialise the pre-trained models from the publicly available checkpoints (for example, the "bert-base-uncased" checkpoint for BERT (Devlin et al., 2019)). For the proverb prediction models we did not truncate any tokens from the proverb, and capped the narrative sequence at 256 tokens. For the alignment prediction and joint training models, we capped the narrative sequence at 230 tokens. We used the AdamW (Loshchilov and Hutter, 2018) optimizer for all models except T5 (Raffel et al., 2020), which we trained with Adafactor (Shazeer and Stern, 2018). The learning rate was 2e-5. The batch size was 16, except for T5, for which we reduced it to 4. The random seed for all experiments was 42. The proverb prediction models were trained for 25 epochs. The BART narrative generation model was trained for 15 epochs, after which the loss converged; T5 took longer and was trained for 25 epochs.
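A minimal sketch of this training configuration, assuming a HuggingFace sequence classification head for proverb prediction; everything except the hyperparameter values quoted above is illustrative:

```python
import torch
from transformers import AutoModelForSequenceClassification

torch.manual_seed(42)  # random seed used for all experiments

# Illustrative: proverb prediction as classification over the 250 proverbs.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=250
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
# Batch size 16 (4 for T5, trained with Adafactor instead of AdamW); 25 epochs.
```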
# F.6 Software and hardware specifications

All models are implemented in PyTorch $1.4.0^{8}$ (Paszke et al., 2019) and related libraries such as NumPy (Oliphant, 2006) and SciPy (Virtanen et al., 2020). We ran all experiments on a GeForce RTX 2080 GPU with 12 GB of memory, on a system with 256 GB of RAM and 40 CPU cores. The proverb prediction models typically take 2-5 minutes per epoch; the joint proverb and span prediction models take roughly 10 minutes per epoch; and the narrative generation models take about 10 minutes (BART) and 18 minutes (T5) per epoch.
epicemployingproverbsincontextasabenchmarkforabstractlanguageunderstanding/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7c0b6876865b99cc25786aeacae2783660e8611d03e553939e23b0bc7c088550
size 355047
epicemployingproverbsincontextasabenchmarkforabstractlanguageunderstanding/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b3680b973616d8545c5d0f1626e9e5ef4e0eff19563da4373f7f8372d48ae439
size 509347
formerinfinitememorytransformer/62019f22-fdab-4ef6-bef4-1d98fbdc9319_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7e73adfdf726546bf353d208f9dc4063ba559398fb7207c7d9edb8cfd423bee8
size 120674
formerinfinitememorytransformer/62019f22-fdab-4ef6-bef4-1d98fbdc9319_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6e037e889fe96868e56949660b2845ad4211e6a82c4de877f76516719673533b
size 152989
formerinfinitememorytransformer/62019f22-fdab-4ef6-bef4-1d98fbdc9319_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:788d5e46efae597e225f7c64c4ba77c92df8d068d05d843d37026f5bfc4d06f4
size 1872215
formerinfinitememorytransformer/full.md
ADDED
@@ -0,0 +1,610 @@
# $\infty$ -former: Infinite Memory Transformer
Pedro Henrique Martins, Zita Marinho, André F. T. Martins

Instituto de Telecomunicações; DeepMind; Institute of Systems and Robotics, LUMLIS (Lisbon ELLIS Unit), Instituto Superior Técnico; Unbabel. Lisbon, Portugal.

pedrohenriqueamartins@tecnico.ulisboa.pt, zmarinho@google.com, andre.t.martins@tecnico.ulisboa.pt

# Abstract

Transformers are unable to model long-term memories effectively, since the amount of computation they need to perform grows with the context length. While variations of efficient transformers have been proposed, they all have a finite memory capacity and are forced to drop old information. In this paper, we propose the $\infty$-former, which extends the vanilla transformer with an unbounded long-term memory. By making use of a continuous-space attention mechanism to attend over the long-term memory, the $\infty$-former's attention complexity becomes independent of the context length, trading off memory length with precision. In order to control where precision is more important, the $\infty$-former maintains "sticky memories," being able to model arbitrarily long contexts while keeping the computation budget fixed. Experiments on a synthetic sorting task, language modeling, and document grounded dialogue generation demonstrate the $\infty$-former's ability to retain information from long sequences.$^{1}$
# 1 Introduction

When reading or writing a document, it is important to keep in memory the information previously read or written. Humans have a remarkable ability to remember long-term context, keeping in memory the relevant details (Carroll, 2007; Kuhbandner, 2020). Recently, transformer-based language models have achieved impressive results by increasing the context size (Radford et al., 2018, 2019; Dai et al., 2019; Rae et al., 2019; Brown et al., 2020). However, while humans process information sequentially, updating their memories continuously, and recurrent neural networks (RNNs) update a single memory vector over time, transformers do not: they exhaustively query every representation associated with the past events. Thus, the amount of computation they need to perform grows with the length of the context, and, consequently, transformers have computational limitations regarding how much information can fit into memory. For example, a vanilla transformer requires quadratic time to process an input sequence and linear time to attend to the context when generating every new word.
Several variations have been proposed to address this problem (Tay et al., 2020b). Some propose using sparse attention mechanisms, either with data-dependent patterns (Kitaev et al., 2020; Vyas et al., 2020; Tay et al., 2020a; Roy et al., 2021; Wang et al., 2021) or data-independent patterns (Child et al., 2019; Beltagy et al., 2020; Zaheer et al., 2020), reducing the self-attention complexity (Katharopoulos et al., 2020; Choromanski et al., 2021; Peng et al., 2021; Jaegle et al., 2021), and caching past representations in a memory (Dai et al., 2019; Rae et al., 2019). These models are able to reduce the attention complexity and, consequently, to scale up to longer contexts. However, as their complexity still depends on the context length, they cannot deal with unbounded context.

In this paper, we propose the $\infty$-former (infinite former; Fig. 1): a transformer model extended with an unbounded long-term memory (LTM), which allows the model to attend to arbitrarily long contexts. The key to making the LTM unbounded is a continuous-space attention framework (Martins et al., 2020) which trades off the number of information units that fit into memory (basis functions) with the granularity of their representations. In this framework, the input sequence is represented as a continuous signal, expressed as a linear combination of $N$ radial basis functions (RBFs). By doing this, the $\infty$-former's attention complexity is $\mathcal{O}(L^2 + L \times N)$ while the vanilla transformer's is $\mathcal{O}(L \times (L + L_{\mathrm{LTM}}))$, where $L$ and $L_{\mathrm{LTM}}$ correspond to the transformer input size and the long-term memory length, respectively (details in §3.1.1). Thus, this representation comes with two significant advantages: (i) the context can be represented using a number of basis functions $N$ smaller than the number of tokens, reducing the attention computational cost; and (ii) $N$ can be fixed, making it possible to represent unbounded context in memory, as described in §3.2 and Fig. 2, without increasing its attention complexity. The price, of course, is a loss in resolution: using a smaller number of basis functions leads to lower precision when representing the input sequence as a continuous signal, as shown in Fig. 3.
To mitigate the problem of losing resolution, we introduce the concept of "sticky memories" (§3.3), in which we attribute a larger space in the LTM's signal to regions of the memory that are more frequently accessed. This creates a notion of "permanence" in the LTM, allowing the model to better capture long contexts without losing the relevant information, taking inspiration from long-term potentiation and plasticity in the brain (Mills et al., 2014; Bamji, 2005).

To sum up, our contributions are the following:

- We propose the $\infty$-former, in which we extend the transformer model with a continuous long-term memory (§3.1). Since the attention computational complexity is independent of the context length, the $\infty$-former is able to model long contexts.
- We propose a procedure that allows the model to keep unbounded context in memory (§3.2).
- We introduce sticky memories, a procedure that enforces the persistence of important information in the LTM (§3.3).
- We perform empirical comparisons on a synthetic task (§4.1), which considers increasingly long sequences, on language modeling (§4.2), and on document grounded dialogue generation (§4.3). These experiments show the benefits of using an unbounded memory.
# 2 Background
# 2.1 Transformer

A transformer (Vaswani et al., 2017) is composed of several layers, each encompassing a multi-head self-attention layer followed by a feed-forward layer, along with residual connections (He et al., 2016) and layer normalization (Ba et al., 2016).

Let us denote the input sequence as $X = [x_{1},\ldots ,x_{L}]\in \mathbb{R}^{L\times e}$, where $L$ is the input size and $e$ is the embedding size of the attention layer. The queries $Q$, keys $K$, and values $V$ used in the multi-head self-attention computation are obtained by linearly projecting the input, or the output of the previous layer, $X$, for each attention head $h$:
$$
Q_h = X W^{Q_h}, \quad K_h = X W^{K_h}, \quad V_h = X W^{V_h}, \tag{1}
$$

where $W^{Q_h}, W^{K_h}, W^{V_h} \in \mathbb{R}^{e \times d}$ are learnable projection matrices, $d = e / H$, and $H$ is the number of heads. Then, the context representation $Z_h \in \mathbb{R}^{L \times d}$ corresponding to each attention head $h$ is obtained as:

$$
Z_h = \operatorname{softmax}\left(\frac{Q_h K_h^{\top}}{\sqrt{d}}\right) V_h, \tag{2}
$$

where the softmax is performed row-wise. The head context representations are concatenated to obtain the final context representation $Z \in \mathbb{R}^{L \times e}$:

$$
Z = \left[ Z_1, \dots, Z_H \right] W^{R}, \tag{3}
$$

where $W^{R} \in \mathbb{R}^{e \times e}$ is another projection matrix that aggregates all heads' representations.
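As a reference point for the memory variants discussed later, a minimal sketch of Eqs. 1-3 in PyTorch, with the head projections kept as explicit weight tensors for clarity:

```python
import math
import torch

def multi_head_self_attention(X, Wq, Wk, Wv, Wr):
    """X: (L, e); Wq/Wk/Wv: lists of H tensors of shape (e, d); Wr: (e, e)."""
    Z_heads = []
    for h in range(len(Wq)):
        Q, K, V = X @ Wq[h], X @ Wk[h], X @ Wv[h]          # Eq. 1: (L, d) each
        d = Q.shape[-1]
        A = torch.softmax(Q @ K.T / math.sqrt(d), dim=-1)  # Eq. 2, row-wise softmax
        Z_heads.append(A @ V)                              # (L, d)
    return torch.cat(Z_heads, dim=-1) @ Wr                 # Eq. 3: concat and project
```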
# 2.2 Continuous Attention

Continuous attention mechanisms (Martins et al., 2020) have been proposed to handle arbitrary continuous signals, where the attention probability mass function over words is replaced by a probability density over a signal. This allows time intervals or compact segments to be selected.

To perform continuous attention, the first step is to transform the discrete text sequence represented by $X \in \mathbb{R}^{L \times e}$ into a continuous signal. This is done by expressing it as a linear combination of basis functions. To do so, each $x_{i}$, with $i \in \{1, \ldots, L\}$, is first associated with a position in an interval, $t_{i} \in [0, 1]$, e.g., by setting $t_{i} = i / L$. Then, we obtain a continuous-space representation $\bar{X}(t) \in \mathbb{R}^e$, for any $t \in [0, 1]$, as:
$$
\bar{X}(t) = B^{\top} \psi(t), \tag{4}
$$

where $\psi(t)\in \mathbb{R}^N$ is a vector of $N$ RBFs, e.g., $\psi_j(t) = \mathcal{N}(t;\mu_j,\sigma_j)$, with $\mu_{j}\in [0,1]$, and $B\in \mathbb{R}^{N\times e}$ is a coefficient matrix. $B$ is obtained with multivariate ridge regression (Brown et al., 1980) so that $\bar{X}(t_i)\approx x_i$ for each $i\in [L]$, which leads to the closed form (see App. A for details):

$$
B^{\top} = X^{\top} F^{\top} \left(F F^{\top} + \lambda I\right)^{-1} = X^{\top} G, \tag{5}
$$

where $F = [\psi(t_1), \ldots, \psi(t_L)] \in \mathbb{R}^{N \times L}$ packs the basis vectors for the $L$ locations. As $G \in \mathbb{R}^{L \times N}$ depends only on $F$, it can be computed offline.
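A minimal sketch of Eq. 5; the linearly spaced RBF means, the shared width, and the ridge penalty value are assumptions (the paper's hyperparameters are given in its appendix), and $G$ can be precomputed exactly as noted above:

```python
import torch

def rbf_features(t, mu, sigma):
    """Gaussian RBF features psi evaluated at positions t; returns (N, len(t))."""
    return torch.exp(-0.5 * ((t[None, :] - mu[:, None]) / sigma) ** 2)

def fit_coefficients(X, N=512, sigma=0.05, lam=0.5):
    """Fit B so that B^T psi(t_i) approximates x_i (multivariate ridge regression)."""
    L, e = X.shape
    t = torch.arange(1, L + 1, dtype=torch.float) / L         # t_i = i / L
    mu = torch.linspace(0.0, 1.0, N)                          # assumed basis means
    F = rbf_features(t, mu, sigma)                            # (N, L)
    G = F.T @ torch.linalg.inv(F @ F.T + lam * torch.eye(N))  # (L, N), Eq. 5
    B = (X.T @ G).T                                           # B^T = X^T G -> (N, e)
    return B, mu
```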
Having converted the input sequence into a continuous signal $\bar{X}(t)$, the second step is to attend over this signal. Instead of a discrete probability distribution over the input sequence, as in standard attention mechanisms (like Eq. 2), we have a probability density $p$, which can be a Gaussian $\mathcal{N}(t;\mu,\sigma^2)$, where $\mu$ and $\sigma^2$ are computed by a neural component. A unimodal Gaussian distribution encourages each attention head to attend to a single region, as opposed to scattering its attention through many places, and enables tractable computation. Finally, having $p$, we can compute the context vector $c$ as:

$$
c = \mathbb{E}_{p}[\bar{X}(t)]. \tag{6}
$$
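Since both the basis functions and $p$ are Gaussian, the expectation in Eq. 6 has a closed form if the density is taken over the whole real line (an approximation to $[0,1]$): $\int \mathcal{N}(t;\mu,\sigma^2)\,\mathcal{N}(t;\mu_j,\sigma_j^2)\,dt = \mathcal{N}(\mu;\mu_j,\sigma^2+\sigma_j^2)$. A sketch under this assumption:

```python
import math
import torch

def expected_psi(mu, sigma2, mu_basis, sigma2_basis):
    """E_p[psi_j(t)] = N(mu; mu_j, sigma^2 + sigma_j^2), assuming support on all of R."""
    var = sigma2 + sigma2_basis
    return torch.exp(-0.5 * (mu - mu_basis) ** 2 / var) / torch.sqrt(2 * math.pi * var)

def context_vector(B, mu, sigma2, mu_basis, sigma2_basis):
    """c = E_p[X(t)] = B^T E_p[psi(t)]  (Eq. 6)."""
    return B.T @ expected_psi(mu, sigma2, mu_basis, sigma2_basis)
```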
Martins et al. (2020) introduced the continuous attention framework for RNNs. In the following section (§3.1), we explain how it can be used in the transformer's multi-head attention.
# 3 Infinite Memory Transformer
To allow the model to access long-range context, we propose extending the vanilla transformer with a continuous LTM, which stores the input embeddings and hidden states of the previous steps. We also consider the possibility of having two memories: the LTM and a short-term memory (STM), which consists of an extension of the transformer's hidden states and is attended to by the transformer's self-attention, as in the transformer-XL (Dai et al., 2019). A diagram of the model is shown in Fig. 1.

Figure 1: $\infty$-former's attention diagram with a sequence of text, $X_t$, of size $L = 2$ and an STM of size $L_{\mathrm{STM}} = 2$. Circles represent input embeddings or hidden states (depending on the layer) for head $h$ and query $i$. Both the self-attention and the attention over the LTM are performed in parallel for each head and query.

# 3.1 Long-term Memory

For simplicity, let us first assume that the long-term memory contains an explicit input discrete sequence $X$ that consists of the past text sequence's input embeddings or hidden states,$^2$ depending on the layer$^3$ (we will later extend this idea to an unbounded memory in §3.2).

First, we need to transform $X$ into a continuous approximation $\bar{X}(t)$. We compute $\bar{X}(t)$ as:

$$
\bar{X}(t) = B^{\top} \psi(t), \tag{7}
$$

where $\psi(t)\in \mathbb{R}^N$ are basis functions and the coefficients $B\in \mathbb{R}^{N\times e}$ are computed as in Eq. 5, $B^{\top} = X^{\top}G$. Then, we can compute the LTM keys, $K_{h}\in \mathbb{R}^{N\times d}$, and values, $V_{h}\in \mathbb{R}^{N\times d}$, for each attention head $h$, as:
$$
K_h = B W^{K_h}, \quad V_h = B W^{V_h}, \tag{8}
$$

where $W^{K_h}, W^{V_h} \in \mathbb{R}^{e \times d}$ are learnable projection matrices. For each query $q_{h,i}$, with $i \in \{1, \dots, L\}$, we use a parameterized network, which takes the attention scores as input, to compute $\mu_{h,i} \in\, ]0, 1[$ and $\sigma_{h,i}^2 \in \mathbb{R}_{>0}$:

$$
\mu_{h,i} = \operatorname{sigmoid}\left(\operatorname{affine}\left(\frac{K_h q_{h,i}}{\sqrt{d}}\right)\right), \tag{9}
$$

$$
\sigma_{h,i}^{2} = \operatorname{softplus}\left(\operatorname{affine}\left(\frac{K_h q_{h,i}}{\sqrt{d}}\right)\right). \tag{10}
$$

Then, using the continuous softmax transformation (Martins et al., 2020), we obtain the probability density $p_{h,i}$ as $\mathcal{N}(t; \mu_{h,i}, \sigma_{h,i}^2)$.

Finally, with the value function $\bar{V}_h(t)$ given by $\bar{V}_h(t) = V_h^\top \psi(t)$, we compute the head-specific representation vectors as in Eq. 6:

$$
z_{h,i} = \mathbb{E}_{p_{h,i}}[\bar{V}_h(t)] = V_h^{\top}\, \mathbb{E}_{p_{h,i}}[\psi(t)], \tag{11}
$$

which form the rows of the matrix $Z_{\mathrm{LTM},h}\in \mathbb{R}^{L\times d}$. The head representations are concatenated and go through an affine transformation, $Z_{\mathrm{LTM}} = [Z_{\mathrm{LTM},1},\dots,Z_{\mathrm{LTM},H}]W^{O}$.

The long-term representation, $Z_{\mathrm{LTM}}$, is then summed with the transformer context vector, $Z_{\mathrm{T}}$, to obtain the final context representation $Z\in \mathbb{R}^{L\times e}$:

$$
Z = Z_{\mathrm{T}} + Z_{\mathrm{LTM}}, \tag{12}
$$

which will be the input to the feed-forward layer.
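Putting Eqs. 8-11 together for a single head, a minimal sketch; the affine layers mapping the attention scores to $\mu$ and $\sigma^2$ are assumed to be `nn.Linear(N, 1)`-style modules, and the real-line Gaussian expectation from §2.2 is inlined:

```python
import math
import torch

def expected_psi(mu, sigma2, mu_basis, sigma2_basis):
    # closed-form E_p[psi(t)] for Gaussian p and Gaussian RBFs (see §2.2 sketch)
    var = sigma2 + sigma2_basis
    return torch.exp(-0.5 * (mu - mu_basis) ** 2 / var) / torch.sqrt(2 * math.pi * var)

def ltm_head(B, q, Wk, Wv, affine_mu, affine_sigma, mu_basis, sigma2_basis):
    """One LTM attention head: B (N, e), q (d,), Wk/Wv (e, d)."""
    d = q.shape[-1]
    K, V = B @ Wk, B @ Wv                                         # Eq. 8: (N, d) each
    scores = K @ q / math.sqrt(d)                                 # (N,)
    mu = torch.sigmoid(affine_mu(scores))                         # Eq. 9, in ]0, 1[
    sigma2 = torch.nn.functional.softplus(affine_sigma(scores))   # Eq. 10, > 0
    return V.T @ expected_psi(mu, sigma2, mu_basis, sigma2_basis) # Eq. 11: (d,)
```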
Figure 2: Diagram of the unbounded memory update procedure. This is performed in parallel for each embedding dimension, and repeated throughout the input sequence. We propose two alternatives to select the positions used for the function evaluation: linearly spaced or sticky memories.

# 3.1.1 Attention Complexity

As the $\infty$-former makes use of a continuous-space attention framework (Martins et al., 2020) to attend over the LTM signal, its key matrix size $K_{h} \in \mathbb{R}^{N \times d}$ depends only on the number of basis functions $N$, and not on the length of the context being attended to. Thus, the $\infty$-former's attention complexity is also independent of the context's length. It corresponds to $\mathcal{O}(L \times (L + L_{\mathrm{STM}}) + L \times N)$ when also using a short-term memory and $\mathcal{O}(L^2 + L \times N)$ when only using the LTM, both $\ll \mathcal{O}(L \times (L + L_{\mathrm{LTM}}))$, which would be the complexity of a vanilla transformer attending to the same context. For this reason, the $\infty$-former can attend to arbitrarily long contexts without increasing the amount of computation needed.
# 3.2 Unbounded Memory
When the memory is represented as a discrete sequence, extending it means storing the new hidden states in memory, which is not feasible for long contexts in a vanilla transformer due to the high memory requirements. The $\infty$-former, however, can attend to unbounded context without increasing its memory requirements by using continuous attention, as described next and shown in Fig. 2.

To build an unbounded representation, we first sample $M$ locations in $[0,1]$ and evaluate $\bar{X}(t)$ at those locations. These locations can simply be linearly spaced, or sampled according to region importance, as described in §3.3.

Then, we concatenate the corresponding vectors with the new vectors coming from the short-term memory. For that, we first need to contract the function by a factor of $\tau \in\, ]0,1[$ to make room for the new vectors. We do this by defining:

$$
X^{\mathrm{contracted}}(t) = \bar{X}(t / \tau) = B^{\top} \psi(t / \tau). \tag{13}
$$

Then, we can evaluate $\bar{X}(t)$ at the $M$ locations $0 \leq t_1, t_2, \ldots, t_M \leq \tau$ as:

$$
x_m = B^{\top} \psi(t_m / \tau), \quad \text{for } m \in [M], \tag{14}
$$

and define a matrix $X_{\mathrm{past}} = [x_1, x_2, \dots, x_M]^{\top} \in \mathbb{R}^{M\times e}$ with these vectors as rows. After that, we concatenate this matrix with the new vectors $X_{\mathrm{new}}$, obtaining:

$$
X = \left[ X_{\mathrm{past}}^{\top}, X_{\mathrm{new}}^{\top} \right]^{\top} \in \mathbb{R}^{(M + L) \times e}. \tag{15}
$$

Finally, we simply need to perform multivariate ridge regression to compute the new coefficient matrix $B \in \mathbb{R}^{N \times e}$, via $B^{\top} = X^{\top}G$, as in Eq. 5. To do this, we need to associate the vectors in $X_{\mathrm{past}}$ with positions in $[0, \tau]$ and those in $X_{\mathrm{new}}$ with positions in $[\tau, 1]$, so that we obtain the matrix $G \in \mathbb{R}^{(M + L) \times N}$. We take the vectors' positions to be linearly spaced.
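A minimal sketch of this update, reusing Eq. 5-style ridge regression; the contraction factor $\tau$, the number of samples $M$, and the ridge penalty are placeholder values:

```python
import torch

def rbf(t, mu, sigma):
    """Gaussian RBF features psi evaluated at positions t; returns (N, len(t))."""
    return torch.exp(-0.5 * ((t[None, :] - mu[:, None]) / sigma) ** 2)

def update_memory(B, X_new, mu, sigma, tau=0.5, M=512, lam=0.5):
    """Contract the LTM signal, append new vectors, and refit coefficients."""
    N = mu.shape[0]
    # Eq. 14: evaluate the contracted signal at M linearly spaced t_m in [0, tau]
    t_past = torch.linspace(0.0, tau, M)
    X_past = rbf(t_past / tau, mu, sigma).T @ B        # (M, e): x_m = B^T psi(t_m / tau)
    X = torch.cat([X_past, X_new], dim=0)              # Eq. 15: (M + L, e)
    # place past vectors in [0, tau] and new ones in [tau, 1], then refit (Eq. 5)
    t = torch.cat([t_past, torch.linspace(tau, 1.0, X_new.shape[0])])
    F = rbf(t, mu, sigma)                              # (N, M + L)
    G = F.T @ torch.linalg.inv(F @ F.T + lam * torch.eye(N))
    return (X.T @ G).T                                 # new coefficient matrix B
```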
# 3.3 Sticky Memories

When extending the LTM, we evaluate its current signal at $M$ locations in $[0, 1]$, as shown in Eq. 14. These locations can be linearly spaced. However, some regions of the signal can be more relevant than others, and should consequently be given a larger "memory space" in the next step's LTM signal. To take this into account, we propose sampling the $M$ locations according to the signal's relevance at each region (see Fig. 6 in App. B). To do so, we construct a histogram based on the attention given to each interval of the signal in the previous step. For that, we first divide the signal into $D$ linearly spaced bins $\{d_1,\ldots,d_D\}$. Then, we compute the probability given to each bin, $p(d_j)$ for $j\in \{1,\dots,D\}$, as:

$$
p(d_j) \propto \sum_{h=1}^{H} \sum_{i=1}^{L} \int_{d_j} \mathcal{N}\left(t; \mu_{h,i}, \sigma_{h,i}^{2}\right) dt, \tag{16}
$$

where $H$ is the number of attention heads and $L$ is the sequence length. Note that Eq. 16's integral can be evaluated efficiently using the erf function:

$$
\int_{a}^{b} \mathcal{N}(t; \mu, \sigma^{2})\, dt = \frac{1}{2}\left(\operatorname{erf}\left(\frac{b - \mu}{\sigma\sqrt{2}}\right) - \operatorname{erf}\left(\frac{a - \mu}{\sigma\sqrt{2}}\right)\right). \tag{17}
$$

Then, we sample the $M$ locations at which the LTM's signal is evaluated according to $p$. By doing so, we evaluate the LTM's signal at the regions that were considered more relevant by the previous step's attention, and consequently attribute a larger portion of the new LTM signal to the memories stored in those regions.
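A minimal sketch of the sampling step; `mus` and `sigmas` are assumed to collect the Gaussian attention parameters of all heads and queries from the previous step, and the uniform jitter inside each sampled bin is one possible way to turn bin samples into continuous locations:

```python
import math
import torch

def sample_sticky_locations(mus, sigmas, M=512, D=128):
    """Sample M locations in [0, 1] in proportion to attention mass per bin (Eq. 16)."""
    edges = torch.linspace(0.0, 1.0, D + 1)
    def Phi(x):  # standard normal CDF, via erf (Eq. 17 after standardizing)
        return 0.5 * (1 + torch.erf(x / math.sqrt(2.0)))
    # attention mass each Gaussian puts in each bin, summed over heads and queries
    lo = (edges[:-1][None, :] - mus[:, None]) / sigmas[:, None]
    hi = (edges[1:][None, :] - mus[:, None]) / sigmas[:, None]
    p = (Phi(hi) - Phi(lo)).sum(dim=0)
    p = p / p.sum()
    bins = torch.multinomial(p, M, replacement=True)
    return edges[bins] + torch.rand(M) / D  # jitter places each sample inside its bin
```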
# 3.4 Implementation and Learning Details

Discrete sequences can be highly irregular and, consequently, difficult to convert into a continuous signal through regression. Because of this, before applying multivariate ridge regression to convert the discrete sequence $X$ into a continuous signal, we use a simple convolutional layer (with stride 1 and width 3) as a gate, to smooth the sequence:

$$
\tilde{X} = \operatorname{sigmoid}(\operatorname{CNN}(X)) \odot X. \tag{18}
$$

To train the model we use the cross entropy loss. Given a sequence of text $X$ of length $L$ as input, a language model outputs a probability distribution over the next word, $p(x_{t+1} \mid x_t, \ldots, x_{t-L})$. Given a corpus of $T$ tokens, we train the model to minimize its negative log-likelihood:

$$
\mathcal{L}_{\mathrm{NLL}} = -\sum_{t=0}^{T-1} \log p\left(x_{t+1} \mid x_t, \dots, x_{t-L}\right). \tag{19}
$$
Additionally, in order to avoid having uniform distributions over the LTM, we regularize the continuous attention given to the LTM by minimizing the Kullback-Leibler (KL) divergence, $D_{\mathrm{KL}}$, between the attention probability density, $\mathcal{N}(\mu_h,\sigma_h^2)$, and a Gaussian prior, $\mathcal{N}(\mu_0,\sigma_0^2)$. As different heads can attend to different regions, we set $\mu_0 = \mu_h$, regularizing only the attention variance, and get:

$$
\mathcal{L}_{\mathrm{KL}} = \sum_{t=0}^{T-1} \sum_{h=1}^{H} D_{\mathrm{KL}}\left(\mathcal{N}\left(\mu_h, \sigma_h^2\right) \,\big\|\, \mathcal{N}\left(\mu_h, \sigma_0^2\right)\right) \tag{20}
$$

$$
= \sum_{t=0}^{T-1} \sum_{h=1}^{H} \frac{1}{2}\left(\frac{\sigma_h^{2}}{\sigma_0^{2}} - \log\left(\frac{\sigma_h^{2}}{\sigma_0^{2}}\right) - 1\right). \tag{21}
$$

Thus, the final loss that is minimized corresponds to:

$$
\mathcal{L} = \mathcal{L}_{\mathrm{NLL}} + \lambda_{\mathrm{KL}} \mathcal{L}_{\mathrm{KL}}, \tag{22}
$$

where $\lambda_{\mathrm{KL}}$ is a hyperparameter that controls the amount of KL regularization.
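A minimal sketch of Eq. 22, combining the NLL of Eq. 19 with the same-mean Gaussian KL of Eq. 21; the prior variance and $\lambda_{\mathrm{KL}}$ values are placeholders:

```python
import torch
import torch.nn.functional as F

def total_loss(logits, targets, sigma2_heads, sigma2_0=0.05, lambda_kl=1e-4):
    """logits: (T, V); targets: (T,); sigma2_heads: attention variances of all heads."""
    nll = F.cross_entropy(logits, targets)           # Eq. 19 (per-token average)
    ratio = sigma2_heads / sigma2_0
    kl = 0.5 * (ratio - torch.log(ratio) - 1).sum()  # Eq. 21: KL of same-mean Gaussians
    return nll + lambda_kl * kl                      # Eq. 22
```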
# 4 Experiments

To understand whether the $\infty$-former is able to model long contexts, we first performed experiments on a synthetic task, which consists of sorting tokens by their frequencies in a long sequence (§4.1). Then, we performed experiments on language modeling (§4.2) and document grounded dialogue generation (§4.3) by fine-tuning a pre-trained language model.<sup>5</sup>

# 4.1 Sorting

In this task, the input consists of a sequence of tokens sampled according to a token probability distribution (which is not known to the system). The goal is to generate the tokens in decreasing order of their frequencies in the sequence. One example is:

$$
\underbrace{1\; 2\; 1\; 3\; 1\; 0\; 3\; 1\; 3\; 2}_{\text{1 occurs 4 times; 3 occurs 3 times, etc.}} \;\; <\!\mathrm{SEP}\!> \;\; 1\; 3\; 2\; 0
$$

To verify that the long-term memory is being used effectively and that the transformer is not sorting only by modeling the most recent tokens, we design the token probability distribution to change over time: namely, we set it as a mixture of two distributions, $p = \alpha p_0 + (1 - \alpha)p_1$, where the mixture coefficient $\alpha \in [0,1]$ is progressively increased from 0 to 1 as the sequence is generated. The vocabulary has 20 tokens and we experiment with sequences of length 4,000, 8,000, and 16,000.
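A minimal sketch of how such an example can be generated; the Dirichlet draws for $p_0$ and $p_1$ are an assumption, since the paper does not specify the two distributions:

```python
import numpy as np

def make_example(seq_len=4000, vocab=20, seed=0):
    """Generate one sorting example: token sequence plus target order."""
    rng = np.random.default_rng(seed)
    p0 = rng.dirichlet(np.ones(vocab))  # assumed choice of distributions
    p1 = rng.dirichlet(np.ones(vocab))
    tokens = []
    for i in range(seq_len):
        alpha = i / (seq_len - 1)            # alpha increases from 0 to 1
        p = alpha * p0 + (1 - alpha) * p1    # mixture p = alpha*p0 + (1-alpha)*p1
        tokens.append(rng.choice(vocab, p=p))
    counts = np.bincount(tokens, minlength=vocab)
    target = np.argsort(-counts)             # tokens in decreasing frequency
    return tokens, list(target)
```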
Figure 3: Left: Sorting task accuracy for sequences of length 4,000, 8,000, and 16,000. Right: Sorting task accuracy vs. regression mean error, when varying the number of basis functions, for sequences of length 8,000.

Baselines. We consider the transformer-XL<sup>6</sup> (Dai et al., 2019) and the compressive transformer<sup>7</sup> (Rae et al., 2019) as baselines. The transformer-XL consists of a vanilla transformer (Vaswani et al., 2017) extended with a short-term memory composed of the hidden states of the previous steps. The compressive transformer is an extension of the transformer-XL: besides the short-term memory, it has a compressive long-term memory composed of the old vectors of the short-term memory, compressed using a CNN. Both the transformer-XL and the compressive transformer require relative positional encodings. In contrast, there is no need for positional encodings in the memory in our approach, since the memory vectors represent basis coefficients in a predefined continuous space.

For all models we used a transformer with 3 layers and 6 attention heads, input size $L = 1,024$ and memory size 2,048. For the compressive transformer, both memories have size 1,024. For the $\infty$-former, we also consider an STM of size 1,024 and an LTM with $N = 1,024$ basis functions, so that the models have the same computational cost. Further details are described in App. C.1.

Results. As can be seen in the left plot of Fig. 3, the transformer-XL achieves a slightly higher accuracy than the compressive transformer and the $\infty$-former for a short sequence length (4,000). This is because the transformer-XL is able to keep almost the entire sequence in memory. However, its accuracy degrades rapidly when the sequence length is increased. Both the compressive transformer and the $\infty$-former also reach lower accuracies when the sequence length is increased, as expected. However, this decrease is not as significant for the $\infty$-former, which indicates that it is better at modeling long sequences.

Regression error analysis. To better understand the trade-off between the $\infty$-former's memory precision and its computational efficiency, we analyze how its regression error and sorting accuracy vary with the number of basis functions used, on the sorting task with input sequences of length 8,000. As can be seen in the right plot of Fig. 3, the sorting accuracy is negatively correlated with the regression error, which in turn decreases as the number of basis functions grows. It can also be observed that when the number of basis functions is increased substantially, the regression error reaches a plateau and the accuracy starts to drop. We posit that the latter is caused by the model having a harder task in selecting the locations it should attend to. This shows that, as expected, when increasing the $\infty$-former's efficiency or increasing the size of the context being modeled, the memory loses precision.

# 4.2 Language Modeling
To understand if long-term memories can be used to extend a pre-trained language model, we fine-tune GPT-2 small (Radford et al., 2019) on Wikitext-103 (Merit et al., 2017) and a subset of PG-19 (Rae et al., 2019) containing the first 2,000 books $(\approx 200$ million tokens) of the training set. To do so, we extend GPT-2 with a continuous long-term memory $(\infty$ -former) and a compressed memory (compressive transformer) with a positional bias,
|
| 256 |
+
|
| 257 |
+
<table><tr><td></td><td>Wikitext-103</td><td>PG19</td></tr><tr><td>GPT-2</td><td>16.85</td><td>33.44</td></tr><tr><td>Compressive</td><td>16.87</td><td>33.09</td></tr><tr><td>∞-former</td><td>16.64</td><td>32.61</td></tr><tr><td>∞-former (SM)</td><td>16.61</td><td>32.48</td></tr></table>
|
| 258 |
+
|
| 259 |
+
based on Press et al. (2021).<sup>8</sup>
|
| 260 |
+
|
| 261 |
+
For these experiments, we consider transformers with input size $L = 512$ , for the compressive transformer we use a compressed memory of size 512, and for the $\infty$ -former we consider a LTM with $N = 512$ Gaussian RBFs and a memory threshold of 2,048 tokens, having the same computational budget for the two models. Further details and hyperparameters are described in App. C.2.
|
| 262 |
+
|
| 263 |
+
Results. The results reported in Table 1 show that the $\infty$ -former leads to perplexity improvements on both Wikitext-103 and PG19, while the compressive transformer only has a slight improvement on the latter. The improvements obtained by the $\infty$ -former are larger on the PG19 dataset, which can be justified by the nature of the datasets: books have more long range dependencies than Wikipedia articles (Rae et al., 2019).
|
| 264 |
+
|
| 265 |
+
# 4.3 Document Grounded Dialogue
|
| 266 |
+
|
| 267 |
+
In document grounded dialogue generation, besides the dialogue history, models have access to a document concerning the conversation's topic. In the CMU Document Grounded Conversation dataset (CMU-DoG) (Zhou et al., 2018), the dialogues are about movies and a summary of the movie is given as the auxiliary document; the auxiliary document is divided into parts that should be considered for the different utterances of the dialogue. In this paper, to evaluate the usefulness of the long-term memories, we make this task slightly more challenging by only giving the models access to the document before the start of the dialogue.
|
| 268 |
+
|
| 269 |
+
We fine-tune GPT-2 small (Radford et al., 2019) using an approach based on Wolf et al. (2019). To allow the model to keep the whole document on memory, we extend GPT-2 with a continuous LTM ( $\infty$ -former) with $N = 512$ basis functions. As baselines, we use GPT-2, with and without ac
|
| 270 |
+
|
| 271 |
+
Table 1: Perplexity on Wikitext-103 and PG19.
|
| 272 |
+
|
| 273 |
+
<table><tr><td></td><td>PPL</td><td>F1</td><td>Rouge-1</td><td>Rouge-L</td><td>Meteor</td></tr><tr><td>GPT-2 w/o doc</td><td>19.43</td><td>7.82</td><td>12.18</td><td>10.17</td><td>6.10</td></tr><tr><td>GPT-2</td><td>18.53</td><td>8.64</td><td>14.61</td><td>12.03</td><td>7.15</td></tr><tr><td>Compressive</td><td>18.02</td><td>8.78</td><td>14.74</td><td>12.14</td><td>7.29</td></tr><tr><td>∞-former</td><td>18.02</td><td>8.92</td><td>15.28</td><td>12.51</td><td>7.52</td></tr><tr><td>∞-former (SM)</td><td>18.04</td><td>9.01</td><td>15.37</td><td>12.56</td><td>7.55</td></tr></table>
|
| 274 |
+
|
| 275 |
+
Table 2: Results on CMU-DoG dataset.
|
| 276 |
+
|
| 277 |
+
cess (GPT-2 w/o doc) to the auxiliary document, with input size $L = 512$ , and GPT-2 with a compressed memory with attention positional biases (compressive), of size 512. Further details and hyper-parameters are stated in App. C.3.
|
| 278 |
+
|
| 279 |
+
To evaluate the models we use the metrics: perplexity, F1 score, Rouge-1 and Rouge-L (Lin, 2004), and Meteor (Banerjee and Lavie, 2005).
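
As a rough illustration of this evaluation setup, the sketch below scores a single generated utterance with these metrics using the `rouge-score` and `nltk` packages; the token-level F1 shown is the common dialogue unigram-overlap F1, which we assume here since the exact formulation is not spelled out in this section:

```python
from collections import Counter

from nltk.translate.meteor_score import meteor_score  # needs nltk's wordnet data
from rouge_score import rouge_scorer

def unigram_f1(pred: str, ref: str) -> float:
    # Harmonic mean of unigram precision and recall (assumed F1 variant).
    pred_toks, ref_toks = pred.split(), ref.split()
    overlap = sum((Counter(pred_toks) & Counter(ref_toks)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred_toks), overlap / len(ref_toks)
    return 2 * precision * recall / (precision + recall)

pred = "Macaulay Culkin is the main actor and it is a comedy."
ref = "Macaulay Culkin stars in this comedy."

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge = scorer.score(ref, pred)  # dict of Score tuples (precision/recall/fmeasure)
meteor = meteor_score([ref.split()], pred.split())  # nltk expects tokenized input
print(unigram_f1(pred, ref), rouge["rouge1"].fmeasure,
      rouge["rougeL"].fmeasure, meteor)
```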

Results. As shown in Table 2, by keeping the whole auxiliary document in memory, the $\infty$-former and the compressive transformer are able to generate better utterances, according to all metrics. While the compressive transformer and the $\infty$-former achieve essentially the same perplexity on this task, the $\infty$-former achieves consistently better scores on all other metrics. Also, using sticky memories leads to slightly better results on those metrics, which suggests that attributing a larger space in the LTM to the most relevant tokens can be beneficial.

Analysis. In Fig. 4, we show examples of utterances generated by the $\infty$-former along with the excerpts from the LTM that receive higher attention throughout the utterances' generation. In these examples, we can clearly see that these excerpts are highly pertinent to the answers being generated. Also, in Fig. 5, we can see that the phrases which are attributed larger spaces in the LTM, when using sticky memories, are relevant to the conversations.

Cast: Macaulay Culkin as Kevin. Joe Pesci as Harry. Daniel Stern as Marv. John Heard as Peter. Roberts Blossom as Marley.

The film stars Macaulay Culkin as Kevin McCallister, a boy who is mistakenly left behind when his family flies to Paris for their Christmas vacation. Kevin initially relishes being home alone, but soon has to contend with two would-be burglars played by Joe Pesci and Daniel Stern. The film also features Catherine O'Hara and John Heard as Kevin's parents.

Movie Name: Home Alone. Rating: Rotten Tomatoes: $62\%$ and average: 5.5/10, Metacritic Score: 63/100, CinemaScore: A. Year: 1990. The McCallister family is preparing to spend Christmas in Paris, gathering at Peter and Kate's home outside of Chicago on the night before their departure. Peter and Kate's youngest son, eight-year-old Kevin, is being ridiculed by his siblings and cousins. A fight with his older brother, Buzz, results in Kevin getting sent to the third floor of the house for punishment, where he wishes that his

Previous utterance: Or maybe rent, anything is reason to celebrate. I would like to talk about a movie called "Home Alone"

Answer: Macaulay Culkin is the main actor and it is a comedy.

Previous utterance: That sounds like a great movie. Any more details?

Answer: The screenplay came out in 1990 and has been on the air for quite a while.

Figure 4: Examples of answers generated by the $\infty$-former on a dialogue about the movie "Home Alone". The excerpts from the LTM that are most attended to throughout the utterances' generation are highlighted in the corresponding colors.

Toy Story: Tom Hanks as Woody | animated buddy comedy | Toy Story was the first feature length computer animated film | produced by Pixar | toys pretend to be lifeless whenever humans are present | focuses on the relationship between Woody and | old fashioned pull string cowboy doll

La La Land: Ryan Gosling | Emma Stone as Mia | Hollywood | the city of Los Angeles | Metacritic: 93/100 | 2016 | During a gig at a restaurant Sebastian slips into a passionate jazz | despite warning from the owner | Mia overhears the music as she passes by | for his disobedience

Figure 5: Phrases that hold larger spaces of the LTM, when using sticky memories, for two dialogue examples (in App. E).

# 5 Related Work

Continuous attention. Martins et al. (2020) introduced 1D and 2D continuous attention, using Gaussians and truncated parabolas as densities, and applied it to RNN-based document classification, machine translation, and visual question answering. Several other works have also proposed the use of (discretized) Gaussian attention for natural language processing tasks: Guo et al. (2019) proposed adding a Gaussian prior to the self-attention mechanism to bias the model towards nearby words, and applied it to natural language inference; You et al. (2020) proposed the use of hard-coded Gaussian attention as an input-agnostic self-attention layer for machine translation; Dubois et al. (2020) proposed using Gaussian attention as a location attention mechanism to improve model generalization to longer sequences. However, these approaches still consider discrete sequences and compute the attention by evaluating the Gaussian density at the token positions. Farinhas et al. (2021) extended continuous attention to multimodal densities, i.e., mixtures of Gaussians, and applied it to VQA. In this paper, we opt for the simpler case, a unimodal Gaussian, and leave sparse and multimodal continuous attention for future work.
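
To make the contrast with discretized Gaussian attention concrete, the following minimal numerical sketch (an illustration under our assumptions, not the paper's implementation) reads a context vector from a signal parameterized by Gaussian RBFs using a unimodal Gaussian attention density. It relies on the fact that the integral over the real line of a product of two Gaussian densities has a closed form, $\int \mathcal{N}(t;\mu,\sigma^2)\,\mathcal{N}(t;\mu_j,\sigma_j^2)\,dt = \mathcal{N}(\mu;\mu_j,\sigma^2+\sigma_j^2)$; truncation effects of the $[0,1]$ support are ignored here, and all sizes and values are illustrative:

```python
import numpy as np

def gaussian(x, mean, var):
    return np.exp(-((x - mean) ** 2) / (2 * var)) / np.sqrt(2 * np.pi * var)

N, e = 512, 64
mu_basis = np.linspace(0, 1, N)   # RBF centers, linearly spaced in [0, 1]
var_basis = 0.01 ** 2
B = np.random.randn(N, e)         # basis coefficients of the memorized signal

mu, var = 0.3, 0.05 ** 2          # attention density predicted by the model

# weights[j] = integral of N(t; mu, var) * psi_j(t) dt, in closed form
weights = gaussian(mu, mu_basis, var + var_basis)
context = B.T @ weights           # (e,) context vector read from the memory
```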

Efficient transformers. Several methods have been proposed to reduce the transformer's attention complexity and, consequently, model longer contexts. Some do so with sparse attention, either by selecting pre-defined attention patterns (Child et al., 2019; Beltagy et al., 2020; Zaheer et al., 2020) or by learning these patterns from data (Kitaev et al., 2020; Vyas et al., 2020; Tay et al., 2020a; Roy et al., 2021; Wang et al., 2021). Other works reduce the attention complexity directly by applying the (reversed) kernel trick (Katharopoulos et al., 2020; Choromanski et al., 2021; Peng et al., 2021; Jaegle et al., 2021). Closest to our approach are the transformer-XL and compressive transformer models (Dai et al., 2019; Rae et al., 2019), which extend the vanilla transformer with a bounded memory.

Memory-augmented language models. RNNs, LSTMs, and GRUs (Hochreiter and Schmidhuber, 1997; Cho et al., 2014) are able to keep a memory state of the past. However, they require backpropagation through time, which is impractical for long sequences. Because of this, Graves et al. (2014), Weston et al. (2014), Joulin and Mikolov (2015), and Grefenstette et al. (2015) proposed extending RNNs with an external memory, while Chandar et al. (2016) and Rae et al. (2016) proposed efficient procedures to read from and write to these memories, using hierarchies and sparsity. Grave et al. (2016) and Merity et al. (2017) proposed cache-based memories, which store pairs of hidden states and output tokens from previous steps; the distribution over the words in the memory is then combined with the distribution given by the language model. More recently, Khandelwal et al. (2019) and Yogatama et al. (2021) proposed using nearest neighbors to retrieve words from a key-based memory constructed from the training data. Similarly, Fan et al. (2021) proposed retrieving sentences from a memory based on the training data and auxiliary information. Khandelwal et al. (2019) proposed interpolating the retrieved words' probability distribution with the probability over the vocabulary words when generating a new word, while Yogatama et al. (2021) and Fan et al. (2021) proposed combining the information at the architecture level. These models have the disadvantage of needing to perform a retrieval step when generating each token / utterance, which can be computationally expensive. These approaches are orthogonal to the $\infty$-former's LTM, and in future work the two can be combined.
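
The kNN-LM-style interpolation of Khandelwal et al. (2019) amounts to the mixture sketched below; the vocabulary size, mixture weight `lam`, and the retrieved neighbors are illustrative placeholders, and the softmax-over-negative-distances aggregation is one common formulation:

```python
import numpy as np

vocab_size, lam = 8, 0.25

p_lm = np.full(vocab_size, 1.0 / vocab_size)   # base LM distribution (placeholder)
neighbor_words = np.array([3, 3, 5])           # target words of retrieved neighbors
neighbor_dists = np.array([0.1, 0.4, 0.9])     # distances in key space

# Softmax over negative distances, aggregated per target word.
weights = np.exp(-neighbor_dists)
weights /= weights.sum()
p_knn = np.zeros(vocab_size)
np.add.at(p_knn, neighbor_words, weights)

# Interpolated next-word distribution.
p = lam * p_knn + (1 - lam) * p_lm
assert np.isclose(p.sum(), 1.0)
```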

# 6 Conclusions

In this paper, we proposed the $\infty$-former: a transformer extended with an unbounded long-term memory. By using a continuous-space attention framework, its attention complexity is independent of the context's length, which allows the model to attend to arbitrarily long contexts while keeping a fixed computation budget. By updating the memory taking past usage into account, the model keeps "sticky memories", enforcing the persistence of relevant information in memory over time. Experiments on a synthetic sorting task show that the $\infty$-former scales up to long sequences while maintaining high accuracy. Experiments on language modeling and document grounded dialogue generation, by fine-tuning a pre-trained language model, show improvements across several metrics.

# Ethics Statement

Transformer models that attend to long contexts, in order to improve their generation quality, need large amounts of computation and memory to perform self-attention. In this paper, we propose an extension to a transformer model that makes the attention complexity independent of the length of the context being attended to. This can lead to a reduced number of parameters needed to model the same context, which can, consequently, lead to gains in efficiency and reduced energy consumption.

On the other hand, the $\infty$-former, like other transformer language models, can be used in questionable scenarios, such as the generation of fake news (Zellers et al., 2019), defamatory text (Wallace et al., 2019), or other undesired content.

# Acknowledgments

This work was supported by the European Research Council (ERC StG DeepSPIN 758969), by the P2020 project MAIA (contract 045909), by the Fundação para a Ciência e Tecnologia through project PTDC/CCI-INF/4703/2021 (PRELUNA, contract UIDB/50008/2020), by the EU H2020 SELMA project (grant agreement No 957017), and by contract PD/BD/150633/2020 in the scope of the Doctoral Program FCT - PD/00140/2013 NETSyS. We thank Jack Rae, Tom Schaul, the SARDINE team members, and the reviewers for helpful discussion and feedback.

# References

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization.

Shernaz X Bamji. 2005. Cadherins: actin with the cytoskeleton to form synapses. Neuron.

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proc. Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization.

Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer.

Philip J Brown, James V Zidek, et al. 1980. Adaptive multivariate ridge regression. The Annals of Statistics.

Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language Models are Few-Shot Learners. In Proc. NeurIPS.

D.W. Carroll. 2007. Psychology of Language. Cengage Learning.

Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Bengio. 2016. Hierarchical memory networks.

Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers.

Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. In Proc. Workshop on Syntax, Semantics and Structure in Statistical Translation.

Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. 2021. Rethinking attention with performers. In Proc. ICLR (To appear).

Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proc. ACL.

Yann Dubois, Gautier Dagan, Dieuwke Hupkes, and Elia Bruni. 2020. Location Attention for Extrapolation to Longer Sequences. In Proc. ACL.

Angela Fan, Claire Gardent, Chloe Braud, and Antoine Bordes. 2021. Augmenting Transformers with KNN-Based Composite Memory for Dialog. Transactions of the Association for Computational Linguistics.

António Farinhas, André F. T. Martins, and P. Aguiar. 2021. Multimodal Continuous Visual Attention Mechanisms.

Edouard Grave, Armand Joulin, and Nicolas Usunier. 2016. Improving Neural Language Models with a Continuous Cache. In Proc. ICLR.

Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural Turing machines.

Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. 2015. Learning to transduce with unbounded memory. In Proc. NeurIPS.

Maosheng Guo, Yu Zhang, and Ting Liu. 2019. Gaussian Transformer: A Lightweight Approach for Natural Language Inference. In Proc. AAAI.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proc. CVPR.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation.

Andrew Jaegle, Felix Gimeno, Andrew Brock, Andrew Zisserman, Oriol Vinyals, and Joao Carreira. 2021. Perceiver: General Perception with Iterative Attention.

Armand Joulin and Tomas Mikolov. 2015. Inferring algorithmic patterns with stack-augmented recurrent nets. In Proc. NeurIPS.

Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. 2020. Transformers are RNNs: Fast autoregressive transformers with linear attention. In Proc. ICML.

Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2019. Generalization through Memorization: Nearest Neighbor Language Models. In Proc. ICLR.

Diederik P Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In Proc. ICLR.

Nikita Kitaev, Łukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer. In Proc. ICLR.

Christof Kuhbandner. 2020. Long-Lasting Verbatim Memory for the Words of Books After a Single Reading Without Any Learning Intention. Frontiers in Psychology.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Proc. Workshop on Automatic Summarization.

Andre FT Martins, Marcos Treviso, Antonio Farinhas, Vlad Niculae, Mario AT Figueiredo, and Pedro MQ Aguiar. 2020. Sparse and Continuous Attention Mechanisms. In Proc. NeurIPS.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer Sentinel Mixture Models. In Proc. ICLR.

Fergil Mills, Thomas E Bartlett, Lasse Dissing-Olesen, Marta B Wisniewska, Jacek Kuznicki, Brian A Macvicar, Yu Tian Wang, and Shernaz X Bamji. 2014. Cognitive flexibility and long-term depression (LTD) are impaired following $\beta$-catenin stabilization in vivo. Proceedings of the National Academy of Sciences.

Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah Smith, and Lingpeng Kong. 2021. Random Feature Attention. In Proc. ICLR (To appear).

Ofir Press, Noah A Smith, and Mike Lewis. 2021. Train short, test long: Attention with linear biases enables input length extrapolation.

Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.

Jack W Rae, Jonathan J Hunt, Tim Harley, Ivo Danihelka, Andrew Senior, Greg Wayne, Alex Graves, and Timothy P Lillicrap. 2016. Scaling memory-augmented neural networks with sparse reads and writes. In Proc. NeurIPS.

Jack W Rae, Anna Potapenko, Siddhant M Jayakumar, Chloe Hillier, and Timothy P Lillicrap. 2019. Compressive Transformers for Long-Range Sequence Modelling. In Proc. ICLR.

Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. 2021. Efficient content-based sparse attention with routing transformers. Transactions of the Association for Computational Linguistics, 9:53-68.

Yi Tay, Dara Bahri, Liu Yang, Donald Metzler, and Da-Cheng Juan. 2020a. Sparse Sinkhorn attention. In Proc. ICML.

Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. 2020b. Efficient transformers: A survey.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. NeurIPS.

Apoorv Vyas, Angelos Katharopoulos, and François Fleuret. 2020. Fast transformers with clustered attention. In Proc. NeurIPS.

Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal Adversarial Triggers for Attacking and Analyzing NLP. In Proc. EMNLP-IJCNLP.

Shuohang Wang, Luowei Zhou, Zhe Gan, Yen-Chun Chen, Yuwei Fang, Siqi Sun, Yu Cheng, and Jingjing Liu. 2021. Cluster-Former: Clustering-based Sparse Transformer for Question Answering. In Proc. ACL Findings.

Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks.

Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. TransferTransfo: A transfer learning approach for neural network based conversational agents.

Dani Yogatama, Cyprien de Masson d'Autume, and Lingpeng Kong. 2021. Adaptive Semiparametric Language Models. Transactions of the Association for Computational Linguistics, 9:362-373.

Weiqiu You, Simeng Sun, and Mohit Iyyer. 2020. Hard-Coded Gaussian Attention for Neural Machine Translation. In Proc. ACL.

Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big Bird: Transformers for longer sequences.

Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In Proc. NeurIPS.

Kangyan Zhou, Shrimai Prabhumoye, and Alan W Black. 2018. A Dataset for Document Grounded Conversations. In Proc. EMNLP.
# A Multivariate ridge regression

The coefficient matrix $B \in \mathbb{R}^{N \times e}$ is obtained with a multivariate ridge regression criterion, so that $\bar{X}(t_i) \approx x_i$ for each $i \in [L]$, which leads to the closed form:

$$
\begin{aligned}
B^{\top} &= \underset{B^{\top}}{\arg\min}\; \left\| B^{\top} F - X^{\top} \right\|_{\mathcal{F}}^{2} + \lambda \left\| B \right\|_{\mathcal{F}}^{2} \\
&= X^{\top} F^{\top} \left( F F^{\top} + \lambda I \right)^{-1} = X^{\top} G,
\end{aligned} \tag{23}
$$

where $F = [\psi(t_1), \ldots, \psi(t_L)]$ packs the basis vectors for the $L$ locations and $\|\cdot\|_{\mathcal{F}}$ is the Frobenius norm. As $G \in \mathbb{R}^{L \times N}$ depends only on $F$, it can be computed offline.
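
A minimal NumPy sketch of this closed form, with illustrative sizes and an assumed ridge penalty $\lambda$:

```python
import numpy as np

def gaussian_rbf(t, mu, sigma):
    return np.exp(-((t - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

L, N, e = 1024, 512, 64        # positions, basis functions, embedding size
lam = 1e-3                     # ridge penalty lambda (hypothetical value)
t = np.linspace(0, 1, L)       # token positions mapped to [0, 1]
mu = np.linspace(0, 1, N)      # RBF centers, linearly spaced as in App. C
sigma = 0.01

F = gaussian_rbf(t[None, :], mu[:, None], sigma)     # (N, L): F[:, i] = psi(t_i)

# G depends only on F, so it can be precomputed offline.
G = F.T @ np.linalg.inv(F @ F.T + lam * np.eye(N))   # (L, N)

X = np.random.randn(L, e)      # token embeddings to be memorized
B = (X.T @ G).T                # (N, e), since B^T = X^T G
X_rec = F.T @ B                # reconstruction: X̄(t_i) = B^T psi(t_i), as (L, e)
print(np.mean((X_rec - X) ** 2))  # regression error
```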

# B Sticky memories

We present in Fig. 6 a scheme of the sticky memories procedure. First, we sample $M$ locations from the LTM attention histogram of the previous step (Eq. 16). Then, we evaluate the LTM's signal at the sampled locations (Eq. 14). Finally, we treat the sampled vectors, $X_{\mathrm{past}}$, as linearly spaced in $[0,\tau]$. This way, the model is able to attribute larger portions of its memory to the relevant words.
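
A minimal sketch of this resampling step follows; the previous step's attention histogram `attn_hist` (over `D` bins of $[0,\tau]$), the RBF parameters, and the coefficient matrix `B` are placeholders, and the sampling and signal-evaluation steps stand in for Eqs. 16 and 14, which are not reproduced here:

```python
import numpy as np

def gaussian_rbf(t, mu, sigma):
    return np.exp(-((t - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
N, e, M, D, tau, lam = 512, 64, 1024, 150, 0.75, 1e-3
mu, sigma = np.linspace(0, 1, N), 0.01
B = rng.standard_normal((N, e))          # current LTM coefficients
attn_hist = rng.dirichlet(np.ones(D))    # past attention over [0, tau] (placeholder)

# 1) sample M locations according to past attention (cf. Eq. 16)
locs = np.sort(rng.choice(np.linspace(0, tau, D), size=M, p=attn_hist))
# 2) evaluate the LTM signal at the sampled locations (cf. Eq. 14): x̄(t) = B^T psi(t)
X_past = gaussian_rbf(locs[None, :], mu[:, None], sigma).T @ B       # (M, e)
# 3) treat X_past as linearly spaced in [0, tau] and re-fit the coefficients,
#    so heavily attended regions occupy a larger portion of the memory
t_new = np.linspace(0, tau, M)
F_new = gaussian_rbf(t_new[None, :], mu[:, None], sigma)             # (N, M)
G_new = F_new.T @ np.linalg.inv(F_new @ F_new.T + lam * np.eye(N))   # (M, N)
B_new = (X_past.T @ G_new).T                                         # (N, e)
```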

Figure 6: Sticky memories procedure diagram. The dashed vertical lines correspond to the positions of the words in the LTM signal.

# C Experimental details

# C.1 Sorting

For the compressive transformer, we consider compression rates of 2 for sequences of length 4,000, from 2 to 6 for sequences of length 8,000, and from 2 to 12 for sequences of length 16,000. We also experiment with training the compressive transformer with and without the attention reconstruction auxiliary loss. For the $\infty$-former, we consider 1,024 Gaussian RBFs $\mathcal{N}(t; \tilde{\mu}, \tilde{\sigma}^2)$ with $\tilde{\mu}$ linearly spaced in $[0,1]$ and $\tilde{\sigma} \in \{ .01, .05 \}$. We set $\tau = 0.75$, and for the KL regularization we used $\lambda_{\mathrm{KL}} = 1 \times 10^{-5}$ and $\sigma_0 = 0.05$.

For this task, for each sequence length, we created a training set with 8,000 sequences and validation and test sets with 800 sequences each. We trained all models with batches of size 8 for 20 epochs on 1 Nvidia GeForce RTX 2080 Ti or 1 Nvidia GeForce GTX 1080 Ti GPU with $\approx 11$ Gb of memory, using the Adam optimizer (Kingma and Ba, 2015). For the sequences of length 4,000 and 8,000 we used a learning rate of $2.5 \times 10^{-4}$, while for sequences of length 16,000 we used a learning rate of $2 \times 10^{-4}$. The learning rate was decayed to 0 over the course of training with a cosine schedule.
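
A minimal PyTorch sketch of this optimizer setup (the model, loss, and step count are placeholders standing in for the actual training loop):

```python
import torch

model = torch.nn.Linear(16, 16)   # stand-in for the transformer
total_steps = 20 * (8000 // 8)    # 20 epochs, 8,000 sequences, batch size 8

optimizer = torch.optim.Adam(model.parameters(), lr=2.5e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=total_steps, eta_min=0.0  # anneal to 0 by the end of training
)

for step in range(total_steps):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 16)).pow(2).mean()  # dummy loss for illustration
    loss.backward()
    optimizer.step()
    scheduler.step()
```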

# C.2 Pre-trained Language Models

In these experiments, we fine-tune GPT-2 small, which is composed of 12 layers with 12 attention heads, on the English dataset Wikitext-103 and on a subset of the English dataset PG19<sup>9</sup> containing the first 2,000 books. We consider an input size $L = 512$ and a long-term memory with $N = 512$ Gaussian RBFs $\mathcal{N}(t; \tilde{\mu}, \tilde{\sigma}^2)$ with $\tilde{\mu}$ linearly spaced in $[0, 1]$ and $\tilde{\sigma} \in \{ .005, .01 \}$; for the KL regularization we use $\lambda_{\mathrm{KL}} = 1 \times 10^{-6}$ and $\sigma_0 = 0.05$. We set $\tau = 0.5$. For the compressive transformer, we also consider a compressed memory of size 512 with a compression rate of 4, and train the model with the auxiliary reconstruction loss.

We fine-tuned GPT-2 small with a batch size of 1 on 1 Nvidia GeForce RTX 2080 Ti or 1 Nvidia GeForce GTX 1080 Ti GPU with $\approx 11$ Gb of memory, using the Adam optimizer (Kingma and Ba, 2015) for 1 epoch, with a learning rate of $5 \times 10^{-5}$ for the GPT-2 parameters and a learning rate of $2.5 \times 10^{-4}$ for the LTM parameters.
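
The two learning rates can be implemented with Adam parameter groups; a minimal sketch with placeholder modules standing in for the actual components:

```python
import torch

gpt2 = torch.nn.Linear(768, 768)  # stand-in for the pre-trained GPT-2 parameters
ltm = torch.nn.Linear(512, 768)   # stand-in for the newly added LTM parameters

optimizer = torch.optim.Adam([
    {"params": gpt2.parameters(), "lr": 5e-5},    # fine-tuned pre-trained weights
    {"params": ltm.parameters(), "lr": 2.5e-4},   # newly initialized LTM weights
])
```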

# C.3 Document Grounded Generation

In these experiments, we fine-tune GPT-2 small, which is composed of 12 layers with 12 attention heads, on the English CMU Document Grounded Conversations dataset<sup>10</sup> (CMU-DoG). CMU-DoG has 4,112 conversations, with a train/validation/test split in the proportion 0.8/0.05/0.15.

We consider an input size $L = 512$ and a long-term memory with $N = 512$ Gaussian RBFs $\mathcal{N}(t; \tilde{\mu}, \tilde{\sigma}^2)$ with $\tilde{\mu}$ linearly spaced in $[0, 1]$ and $\tilde{\sigma} \in \{ .005, .01 \}$; for the KL regularization we use $\lambda_{\mathrm{KL}} = 1 \times 10^{-6}$ and $\sigma_0 = 0.05$. We set $\tau = 0.5$. For the compressive transformer, we consider a compressed memory of size 512 with a compression rate of 3, and train the model with the auxiliary reconstruction loss. We fine-tuned GPT-2 small with a batch size of 1 on 1 Nvidia GeForce RTX 2080 Ti or 1 Nvidia GeForce GTX 1080 Ti GPU with $\approx 11$ Gb of memory, using the Adam optimizer (Kingma and Ba, 2015) with a linearly decayed learning rate of $5 \times 10^{-5}$, for 5 epochs.

# D Additional experiments

We also perform language modeling experiments on the Wikitext-103 dataset<sup>11</sup> (Merity et al., 2017), which has a training set with 103 million tokens and validation and test sets with 217,646 and 245,569 tokens, respectively. For that, we follow the standard architecture of the transformer-XL (Dai et al., 2019), which consists of a transformer with 16 layers and 10 attention heads. For the transformer-XL, we experiment with a memory of size 150. For the compressive transformer, we consider that both memories have a size of 150 and a compression rate of 4. For the $\infty$-former, we consider a short-term memory of size 150, a continuous long-term memory with 150 Gaussian RBFs, and a memory threshold of 900 tokens.

For this experiment, we use a transformer with 16 layers, 10 heads, embeddings of size 410, and a feed-forward hidden size of 2,100. For the compressive transformer, we follow Rae et al. (2019) and use a compression rate of 4 and the attention reconstruction auxiliary loss. For the $\infty$-former, we consider 150 Gaussian RBFs $\mathcal{N}(t; \tilde{\mu}, \tilde{\sigma}^2)$ with $\tilde{\mu}$ linearly spaced in $[0,1]$ and $\tilde{\sigma} \in \{.01, .05\}$. We set $\tau = 0.5$, and for the KL regularization we used $\lambda_{\mathrm{KL}} = 1 \times 10^{-5}$ and $\sigma_0 = 0.1$.

We trained all models from scratch, with batches of size 40, for 250,000 steps on 1 Nvidia Titan RTX or 1 Nvidia Quadro RTX 6000 GPU with $\approx 24$ Gb of memory, using the Adam optimizer (Kingma and Ba, 2015) with a learning rate of $2.5 \times 10^{-4}$. The learning rate was decayed to 0 over the course of training with a cosine schedule.

<table><tr><td></td><td>STM</td><td>LTM</td><td>Perplexity</td></tr><tr><td>Transformer-XL</td><td>150</td><td>—</td><td>24.52</td></tr><tr><td>Compressive</td><td>150</td><td>150</td><td>24.41</td></tr><tr><td>∞-former</td><td>150</td><td>150</td><td>24.29</td></tr><tr><td>∞-former (Sticky memories)</td><td>150</td><td>150</td><td>24.22</td></tr></table>

Table 3: Perplexity on Wikitext-103.

Results. As can be seen in Table 3, extending the model with a long-term memory leads to a better perplexity, for both the compressive transformer and the $\infty$-former. Moreover, the $\infty$-former slightly outperforms the compressive transformer. We can also see that using sticky memories leads to a somewhat lower perplexity, which shows that they help the model focus on the relevant memories.

Analysis. To better understand whether the $\infty$-former pays more attention to the older memories in the LTM or to the most recent ones, we plot a histogram of the attention given to each region of the long-term memory when predicting the tokens of the validation set. As can be seen in Fig. 7, in the first and middle layers, the $\infty$-former tends to focus more on the older memories, while in the last layer, the attention pattern is more uniform. In Figs. 8 and 9, we present examples of words for which the $\infty$-former has lower perplexity than the transformer-XL, along with the attention given by the $\infty$-former to the last layer's LTM. We can see that the word being predicted is present several times in the long-term memory, and the $\infty$-former gives higher attention to those regions.

To know whether the sticky memories approach attributes a larger space of the LTM's signal to relevant phrases or words, we plot the memory space given to each word$^{12}$ present in the long-term memory of the last layer, when using and when not using sticky memories. We present examples in Figs. 10 and 11, along with the phrases / words which receive the largest spaces when using sticky memories. We can see in these examples that this procedure does in fact attribute large spaces to old memories, creating memories that stick over time. These memories also appear to be relevant, as shown by the words / phrases in the examples.

# E Additional examples

In Fig. 12, we show additional examples of utterances generated by the $\infty$-former, along with the excerpts from the LTM that receive higher attention throughout the utterances' generation.

Additionally, the ground truth conversations concerning the movies "Toy Story" and "La La Land", for which the sticky memories are shown in Fig. 5, are given in Tables 4 and 5, respectively.

Figure 7: Histograms of the attention given to the LTM by the $\infty$-former, for the first (left), middle (center), and last (right) layers. The dashed vertical lines represent the limits of the memory segments ($\tau$) for the various memory updates.

the Pet Shop Boys' synthpop cover of the song (titled "Where the Streets Have No Name (I Can't Take My Eyes off You)"). Bono parodied this by occasionally adopting the deadpan vocal style used in the Pet Shop Boys' cover. Critics welcomed the song in the group's setlist: The Independent said the song "induces instant euphoria, as U2 do what they're best at, slipping into epic rock mode, playing music made for the arena". In two other local newspaper reviews, critics praised the song's inclusion in a sequence of greatest hits.

For the PopMart Tour of 1997-1998, U2 returned to the electronic dance arrangement they occasionally played on the Zoo TV Tour. The set's massive video screen displayed a video that

Hot Press described as an "astonishing 2001-style trip into the heart of a swirling, psychedelic tunnel that sucks the audience in towards a horizontal monolith". Near the end of the song, peace doves were shown on the screen and bright beams of light flanking the set's golden arch were projected upwards. Hot Press said the effect transformed the stadium into a "UFO landing site". Shortly before the third leg of the Elevation Tour, the September 11 attacks occurred in New York City and Washington D.C.. During the band's first show in New York City following the attacks, the band performed "Where the Streets Have No Name", and when the stage lights illuminated the audience, the band saw tears streaming down the faces of many fans. The experience was

one inspiration for the song "City of Blinding Lights". The band paid tribute to the 9/11 victims during their performance of the song at the Super Bowl XXXVI halftime show on 3 February 2002. The performance featured the names of the September 11 victims projected onto a large white banner behind the band. U2's appearance was later ranked number 1 on Sports Illustrated's list of "Top 10 Super Bowl Halftime Shows". For the Vertigo Tour, the group originally considered dropping the song from their setlists, but Mullen and Clayton successfully argued against this. All 131 of the Vertigo Tour concerts featured a performance of the song, which were accompanied by the stage's LED video curtains displaying African flags. On the tour's opening night, this reminded Bono that he had

GT: as the respective audio releases of the latter two concerts, Zoo TV Live and Hasta la Vista Baby! U2

Figure 8: Example of the attention given by the $\infty$-former to the last layer's long-term memory when predicting the ground truth word "U2". The words in the LTM which receive higher attention ($>0.05$) are shaded.

and fixed defences, Australia may be made practically invulnerable". According to Air Force historian Alan Stephens this paper "in effect, defined the anti-lodgment concept which has been a persistent feature of RAAF strategic thinking".

Headlam completed a flying instructors course in July 1936 and joined the staff of No. 1 FTS. He was promoted to flight lieutenant on 1 March 1937. Commencing in July 1938, he was one of six students to take part in the RAAF's first Long Specialist Navigation Course, run by Flight Lieutenants Bill Garing and Alister Murdoch at Point Cook. The course involved several epic training flights that attracted considerable media attention, including a twelve-day, 10,800-kilometre (6,700 mi) round-Australia trip by

three Avro Ansons, one of which was piloted by Headlam, in November. The following month, Headlam led the three Ansons on a six-day journey back and forth over Central Australia. He subsequently passed the navigation course with a special distinction. On 27 January 1939 he was posted to RAAF Station Laverton, Victoria, as a flight commander. He served initially with No. 2 Squadron, before transferring to No. 1 Squadron on 29 August. Both units operated Ansons.

World War II

Following the outbreak of World War II, No. 1 Squadron was engaged in convoy escort and maritime reconnaissance duties off south-eastern Australia. Headlam continued to serve with the squadron as a flight commander until 15 January 1940, when he was assigned to Headquarters Laverton

as the station navigation officer. On 27 March he was posted to the staff of RAAF Headquarters, Melbourne. He was promoted to squadron leader on 1 June 1940. Two weeks later he married Katherine Bridge at St Paul's Anglican Church in Frankston; the couple would have a son and a daughter. Headlam was given command of No. 2 Squadron at Laverton on 15 April 1941, and raised to wing commander on 1 July. Equipped with Lockheed Hudsons, the squadron mainly conducted maritime patrols in southern waters until 5 December, when four of its aircraft were ordered to Darwin, Northern Territory, in response to fears of Japanese aggression in the Pacific. On 7 December, this detachment established itself at Penfui, near Koepang in Dutch Timor, while No. 2 Squadron's eight remaining Hudsons

GT: for the first time on 26 January 1942, and attacked regularly thereafter, damaging some aircraft. The intact Hudsons were withdrawn to Darwin but Headlam

Figure 9: Example of the attention given by the $\infty$-former to the last layer's long-term memory when predicting the ground truth word "Headlam". The words in the long-term memory which receive higher attention (greater than 0.05) are shaded.

Phrases / words: "transport gasoline" | "American Civil Rights" | "along with Michael" | "community center" | "residents began to move" | "Landmarks Commission" | "Meridian Main" | "projects" | "the historic train station" | "Weidmann's Restaurant" | "Arts" | "Meridian Main Street" | "in late 2007" | "effort" | "Alliance serves" | "Plans were underway" | "Building" | "Mayor Cheri" | "the Alliance" | "promote further development" | "assist businesses" | "Street program"

Figure 10: Example of the memory space attributed to each word in the last layer's long-term memory (after 5 updates), without / with the sticky memories procedure, along with the words / phrases which have the largest memory spaces when using sticky memories (top peaks with space $> .005$). Excerpt of the sequence being generated in this example: "Given Meridian's site as a railroad junction, its travelers have attracted the development of many hotels." The dashed vertical lines represent the limits of the memory segments for the various memory updates.

"July 1936" | "Headlam continued to serve" | "27 March" | "in Frankston" | "daughter" | "four of its aircraft" | "in response to fears of Japanese" | "stationed at Darwin" | "attacked the Japanese" | "forced it aground" | "dispersed at Penfui" | "three Japanese floatplanes" | "attacked regularly" | "withdrawn to Darwin" | "his staff remained at Penfui" | "ordered to evacuate" | "assistance from Sparrow Force" | "Four of No. 2 Squadron's Hudsons were destroyed" | "relocated to Daly Waters"

Figure 11: Example of the memory space attributed to each word in the last layer's long-term memory (after 5 updates), without / with the sticky memories procedure, along with the words / phrases which have the largest memory spaces when using sticky memories (top peaks with space $> .005$). Excerpt of the sequence being generated in this example: "Headlam became Officer Commanding North-Western Area in January 1946. Posted to Britain at the end of the year, he attended the Royal Air Force Staff College, Andover, and served with RAAF Overseas Headquarters, London." The dashed vertical lines represent the limits of the memory segments for the various memory updates.

Cast: Jesse Eisenberg as Mark Zuckerberg. Andrew Garfield as Eduardo Saverin. Justin Timberlake as Sean Parker. Armie Hammer as Cameron and Tyler Winklevoss. Max Minghella as Divya Narendra. Critical Response: David Fincher's film has the rare quality of being not only as smart as its brilliant hero, but in the same way. It is cocksure, impatient, cold, exciting and instinctively perceptive. The Social Network is the movie of the year

···

In October 2003, 19-year-old Harvard University student Mark Zuckerberg is dumped by his girlfriend Erica Albright. Returning to his dorm, Zuckerberg writes an insulting entry about Albright on his LiveJournal blog and then creates a campus website called Facemash by hacking into college databases to steal photos of female students, then allowing site visitors to rate their attractiveness. After traffic to the site crashes parts of Harvard's computer network,

···

Previous utterance: So, what movie are we going to chat about today? Right, the one about Zuckerberg?

Answer: Yep, Jesse Eisenberg plays Zuckerberg.

Previous utterance: So, have you seen it?

Answer: Yeah. Its about the founder of Facebook, Mark Zuckerberg who was basically dumped by his girlfriend, Erica, so he created "TheFacebook."

Figure 12: Examples of answers generated by the $\infty$-former on a dialogue about the movie "The Social Network". The excerpts from the LTM that are most attended to throughout their generation are highlighted in the corresponding colors.

Table 4: Ground truth conversation about the movie "Toy Story".

```txt
- Hi
- Yo you really need to watch Toy Story. It has 100% on Rotten Tomatoes!
- Really! 100% that's pretty good What's it about
- It's an animated buddy-comedy where toys come to life
- who stars in it
- The main characters are voiced by Tom Hanks and Tim Allen
- does it have any other critic ratings
- Yep, metacritic gave it 95/100 and Cinemascore gave it an A
- how old is it?
- It's a 1995 film so 23 years
- The old ones are always good:) I heard there were some sad parts in it is that true
- Yeah actually, the movie starts off pretty sad as the toys fear that they might be replaced and that they have to move
- is this a disney or dreamworks movie
- Disney, pixar to be exact
- Why do the toys think they will be replaced :
- they thought so because Andy was having a birthday party and might get new toys
- What part does Tom Hanks play
- Woody, the main character
- How about Tim Allen
- Buzz, the main antagonist at first then he becomes a friend
- What kind of toy is Woody?
- A cowboy doll
- What is Buzz
- A space ranger
- so do all the toys talk
- yep! but andy doesn't know that
- Is andy a little kid or a teen
- He's 6!
- Sounds good. Thanks for the info. Have a great day
```

```txt
- hey
- hey
- i just watched la la land. It is a movie from 2016 starring ryan gosing and emma stone. they are too artists (one actress and one panist) and they fall in love and try to achieve their dreams. its a great movie
- It's a wonderful movie and got a score of 92% on rotten tomatoes
- yes, i think it also won an oscar
- Yes but I thought it was a little dull
- metacritics rating is 93/100 as well its pretty critically acclaimed
- the two leads singing and dancing weren't exceptional
- i suppose it is not for everyone
- It also sags badly in the middle I like how Sebastian slipped into a passionate jazz despite warnings from the owner.
- what do you think of the cover of "i ran so far away?" in the movie, sebastian found the song an insult for a serious musician
- I don't know, it is considered an insult for serious musicians not sure why
- yeah
- The idea of a one woman play was daring
- it was interesting how sebastian joined a jazz fusion band he couldnt find real happiness in any of the bands he was in its hard
- It is considering she didn't know of any of that until she attended one of his concerts
- yeah, that is daring the movie kind of speaks to a lot of people. she accused him of abandoning his dreams but sometimes thats what you have to do.
- Not nice that she leaves because he told her she liked him better when he was unsuccessful The play was a disaster so he didn't miss anything when he missed it.
- yeah, but i dont blame her for dumping him for that
- She should didn't want to support him and she had to move back
- id be pretty upset as well to boulder city nevada
- yes she didn't want to forgive him, I didn't understand that
- well because that was a big deal to her and he missed it
- if she was with him when he was unsuccessful, she could have supported him to follow his dreams or other dreams
- i suppose that is true
- she wasn't successful either
- yeah she wasnt nobody showed up to her play
- so why the big hulabaloo about him
- not sure
- she was selfish I guess He missed her play because he had to go for a photo shoot with the band that he had previously missed
- yeah but he should have kept better track and scheduled it better
- this shows that he was trying to commit some and follow his dreams although not necessarily like them so she would be please if he didn't attend the photo shoot a second time, and came to her show
- i definitely felt bad for both of them though in that scene
- it's more of a do or don't he is still condemned I feel bad for him because he tried he tried to get her back by apologizing as well she didn't want any of it
- yeah because she felt like he didnt care enough because he missed it he's the one that suggested the one woman play
- They could have started all over again just like the beginning
- maybe so
- did she fail because of the one-woman play? she could have tried something else if she felt that
- she wanted to give it a shot
- she did and it failed, he did and it failed they just had to compromise so they could be together again, which was how the happiness was He signed up for the band after hearing her talking to her mom about how he is working on his career I think he did a lot for her
```

Table 5: Ground truth conversation about the movie "La La Land".

formerinfinitememorytransformer/images.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:387e3947d07f41082c9d211974dca8fbc707f193b2327dd8a33c6bd6e312826e
+size 482358

formerinfinitememorytransformer/layout.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:26d35b66c286945d9b07967ca83ddd00c004eedad897bd79818064fe147bee55
+size 734501

latentglatglancingatlatentvariablesforparalleltextgeneration/168a7c40-1c1e-4732-a855-b20fd9de45fa_content_list.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e738d3ce0c93bec1408d4b300527be34edf886b1a0c4aaff90566e918d224ec3
+size 90153

latentglatglancingatlatentvariablesforparalleltextgeneration/168a7c40-1c1e-4732-a855-b20fd9de45fa_model.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:71b6d99e5009975a781ba634e75ab9a5d5750add02b0f10f307a9b7a8d9cd5e6
+size 109231
|