Add Batch aa633563-ad8f-4446-b667-361cbc28c169
This view is limited to 50 files because it contains too many changes. See raw diff.
- averageapproximatesfirstprincipalcomponentanempiricalanalysisonrepresentationsfromneurallanguagemodels/ddafc98b-68f0-4185-b78b-b93551643375_content_list.json +3 -0
- averageapproximatesfirstprincipalcomponentanempiricalanalysisonrepresentationsfromneurallanguagemodels/ddafc98b-68f0-4185-b78b-b93551643375_model.json +3 -0
- averageapproximatesfirstprincipalcomponentanempiricalanalysisonrepresentationsfromneurallanguagemodels/ddafc98b-68f0-4185-b78b-b93551643375_origin.pdf +3 -0
- averageapproximatesfirstprincipalcomponentanempiricalanalysisonrepresentationsfromneurallanguagemodels/full.md +267 -0
- averageapproximatesfirstprincipalcomponentanempiricalanalysisonrepresentationsfromneurallanguagemodels/images.zip +3 -0
- averageapproximatesfirstprincipalcomponentanempiricalanalysisonrepresentationsfromneurallanguagemodels/layout.json +3 -0
- itdoesntlookgoodforadatetransformingcritiquesintopreferencesforconversationalrecommendationsystems/077fe0ce-4c8e-4e53-a592-0bbc670043aa_content_list.json +3 -0
- itdoesntlookgoodforadatetransformingcritiquesintopreferencesforconversationalrecommendationsystems/077fe0ce-4c8e-4e53-a592-0bbc670043aa_model.json +3 -0
- itdoesntlookgoodforadatetransformingcritiquesintopreferencesforconversationalrecommendationsystems/077fe0ce-4c8e-4e53-a592-0bbc670043aa_origin.pdf +3 -0
- itdoesntlookgoodforadatetransformingcritiquesintopreferencesforconversationalrecommendationsystems/full.md +188 -0
- itdoesntlookgoodforadatetransformingcritiquesintopreferencesforconversationalrecommendationsystems/images.zip +3 -0
- itdoesntlookgoodforadatetransformingcritiquesintopreferencesforconversationalrecommendationsystems/layout.json +3 -0
- kfoldenkfoldensembleforoutofdistributiondetection/571c3501-ee7d-4559-b7d2-989a30005529_content_list.json +3 -0
- kfoldenkfoldensembleforoutofdistributiondetection/571c3501-ee7d-4559-b7d2-989a30005529_model.json +3 -0
- kfoldenkfoldensembleforoutofdistributiondetection/571c3501-ee7d-4559-b7d2-989a30005529_origin.pdf +3 -0
- kfoldenkfoldensembleforoutofdistributiondetection/full.md +403 -0
- kfoldenkfoldensembleforoutofdistributiondetection/images.zip +3 -0
- kfoldenkfoldensembleforoutofdistributiondetection/layout.json +3 -0
- mt6multilingualpretrainedtexttotexttransformerwithtranslationpairs/2894af76-5f28-4534-8fef-eab19cff0048_content_list.json +3 -0
- mt6multilingualpretrainedtexttotexttransformerwithtranslationpairs/2894af76-5f28-4534-8fef-eab19cff0048_model.json +3 -0
- mt6multilingualpretrainedtexttotexttransformerwithtranslationpairs/2894af76-5f28-4534-8fef-eab19cff0048_origin.pdf +3 -0
- mt6multilingualpretrainedtexttotexttransformerwithtranslationpairs/full.md +379 -0
- mt6multilingualpretrainedtexttotexttransformerwithtranslationpairs/images.zip +3 -0
- mt6multilingualpretrainedtexttotexttransformerwithtranslationpairs/layout.json +3 -0
- soyouthinkyourefunnyratingthehumourquotientinstandupcomedy/fb3e4cfb-d66e-428f-bf22-aa2c8c4bda4a_content_list.json +3 -0
- soyouthinkyourefunnyratingthehumourquotientinstandupcomedy/fb3e4cfb-d66e-428f-bf22-aa2c8c4bda4a_model.json +3 -0
- soyouthinkyourefunnyratingthehumourquotientinstandupcomedy/fb3e4cfb-d66e-428f-bf22-aa2c8c4bda4a_origin.pdf +3 -0
- soyouthinkyourefunnyratingthehumourquotientinstandupcomedy/full.md +170 -0
- soyouthinkyourefunnyratingthehumourquotientinstandupcomedy/images.zip +3 -0
- soyouthinkyourefunnyratingthehumourquotientinstandupcomedy/layout.json +3 -0
- wasitstatedorwasitclaimedhowlinguisticbiasaffectsgenerativelanguagemodels/64f961ba-93f2-4974-b737-5c0b4ef99d51_content_list.json +3 -0
- wasitstatedorwasitclaimedhowlinguisticbiasaffectsgenerativelanguagemodels/64f961ba-93f2-4974-b737-5c0b4ef99d51_model.json +3 -0
- wasitstatedorwasitclaimedhowlinguisticbiasaffectsgenerativelanguagemodels/64f961ba-93f2-4974-b737-5c0b4ef99d51_origin.pdf +3 -0
- wasitstatedorwasitclaimedhowlinguisticbiasaffectsgenerativelanguagemodels/full.md +368 -0
- wasitstatedorwasitclaimedhowlinguisticbiasaffectsgenerativelanguagemodels/images.zip +3 -0
- wasitstatedorwasitclaimedhowlinguisticbiasaffectsgenerativelanguagemodels/layout.json +3 -0
- wikilysupervisedneuraltranslationtailoredtocrosslingualtasks/7a19827a-97b7-43be-8319-df1d7a9bdf74_content_list.json +3 -0
- wikilysupervisedneuraltranslationtailoredtocrosslingualtasks/7a19827a-97b7-43be-8319-df1d7a9bdf74_model.json +3 -0
- wikilysupervisedneuraltranslationtailoredtocrosslingualtasks/7a19827a-97b7-43be-8319-df1d7a9bdf74_origin.pdf +3 -0
- wikilysupervisedneuraltranslationtailoredtocrosslingualtasks/full.md +441 -0
- wikilysupervisedneuraltranslationtailoredtocrosslingualtasks/images.zip +3 -0
- wikilysupervisedneuraltranslationtailoredtocrosslingualtasks/layout.json +3 -0
- zeroshotdialoguedisentanglementbyselfsupervisedentangledresponseselection/e6a5921f-cf3c-46be-a9f7-737faa10faab_content_list.json +3 -0
- zeroshotdialoguedisentanglementbyselfsupervisedentangledresponseselection/e6a5921f-cf3c-46be-a9f7-737faa10faab_model.json +3 -0
- zeroshotdialoguedisentanglementbyselfsupervisedentangledresponseselection/e6a5921f-cf3c-46be-a9f7-737faa10faab_origin.pdf +3 -0
- zeroshotdialoguedisentanglementbyselfsupervisedentangledresponseselection/full.md +191 -0
- zeroshotdialoguedisentanglementbyselfsupervisedentangledresponseselection/images.zip +3 -0
- zeroshotdialoguedisentanglementbyselfsupervisedentangledresponseselection/layout.json +3 -0
- zeroshotdialoguestatetrackingviacrosstasktransfer/960a6ef3-99a8-43fd-9617-b685940e7e4f_content_list.json +3 -0
- zeroshotdialoguestatetrackingviacrosstasktransfer/960a6ef3-99a8-43fd-9617-b685940e7e4f_model.json +3 -0
averageapproximatesfirstprincipalcomponentanempiricalanalysisonrepresentationsfromneurallanguagemodels/ddafc98b-68f0-4185-b78b-b93551643375_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4c22f98d852c0aa3316ae4ce244849919410b7df87edd78972d7559787e9aa3a
size 58177
averageapproximatesfirstprincipalcomponentanempiricalanalysisonrepresentationsfromneurallanguagemodels/ddafc98b-68f0-4185-b78b-b93551643375_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e759cea35f37e2622051ee8298d0d6355a9aaa14c80185ae6cd8e6020e2b15d2
size 70623
averageapproximatesfirstprincipalcomponentanempiricalanalysisonrepresentationsfromneurallanguagemodels/ddafc98b-68f0-4185-b78b-b93551643375_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6f592241d2c6b49370ce574b7b7dbed10793675a4c82faf28130cd972eefdadb
size 3296799
averageapproximatesfirstprincipalcomponentanempiricalanalysisonrepresentationsfromneurallanguagemodels/full.md
ADDED
@@ -0,0 +1,267 @@
# "Average" Approximates "First Principal Component"? An Empirical Analysis on Representations from Neural Language Models

Zihan Wang$^{1}$, Chengyu Dong$^{1}$, Jingbo Shang$^{*,1,2}$

$^{1}$ Department of Computer Science and Engineering, University of California San Diego, CA, USA

$^{2}$ Halıcıoğlu Data Science Institute, University of California San Diego, CA, USA

{ziw224, cdong, jshang}@ucsd.edu

# Abstract

Contextualized representations based on neural language models have furthered the state of the art in various NLP tasks. Despite their great success, the nature of such representations remains a mystery. In this paper, we present an empirical property of these representations—"average" $\approx$ "first principal component". Specifically, experiments show that the average of these representations shares almost the same direction as the first principal component of the matrix whose columns are these representations. We believe this explains why the average representation is always a simple yet strong baseline. Our further examinations show that this property also holds in more challenging scenarios, for example, when the representations are from a model right after its random initialization. Therefore, we conjecture that this property is intrinsic to the distribution of representations and not necessarily related to the input structure. We realize that these representations empirically follow a normal distribution for each dimension, and by assuming this is true, we demonstrate that the empirical property can in fact be derived mathematically.

# 1 Introduction

A large variety of state-of-the-art methods in NLP tasks nowadays are built upon contextualized representations from pre-trained neural language models, such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), and XLNet (Yang et al., 2019). Despite this great success, we lack an understanding of the nature of such representations. For example, Aharoni and Goldberg (2020) have shown that averaging the BERT representations in a sentence can preserve its domain information. However, to the best of our knowledge, there is no analysis of what leads to the power of averaging representations.


Figure 1: Visualization of our discovered empirical property: "average" $\approx$ "first principal component".

Table 1: Average and minimum absolute cosine similarity of last-layer representations between $\overline{\mathbf{r}}$ and $\mathbf{p}$ over 4,000 tests. As a reference, $\mathbf{r_i}$ drawn from a uniformly random distribution would lead to an Average of .0149.

<table><tr><td rowspan="2">Model</td><td colspan="2">AG's news</td><td colspan="2">KP20k</td><td colspan="2">Dbpedia</td></tr><tr><td>Average</td><td>Min</td><td>Average</td><td>Min</td><td>Average</td><td>Min</td></tr><tr><td>BERT</td><td>.9994</td><td>.9908</td><td>.9995</td><td>.9958</td><td>.9988</td><td>.9845</td></tr><tr><td>RoBERTa</td><td>.9989</td><td>.9984</td><td>.9990</td><td>.9982</td><td>.9987</td><td>.9980</td></tr><tr><td>XLNet</td><td>.9990</td><td>.9874</td><td>.9991</td><td>.9932</td><td>.9994</td><td>.9856</td></tr><tr><td>ELMo</td><td>.9957</td><td>.9681</td><td>.9985</td><td>.9666</td><td>.9949</td><td>.9355</td></tr><tr><td>Word2vec</td><td>.9590</td><td>.8506</td><td>.9647</td><td>.8907</td><td>.9530</td><td>.8474</td></tr><tr><td>Glove</td><td>.9639</td><td>.5014</td><td>.9839</td><td>.6369</td><td>.9697</td><td>.6088</td></tr></table>

In this work, we present an empirical property of these representations: "average" $\approx$ "first principal component". As shown in Figure 1, given a sequence of $L$ tokens, one can construct a $d \times L$ matrix $\mathbf{R}$ using each $d$-dimensional representation $\mathbf{r_i}$ of the $i$-th token as a column. There are two popular ways to project this matrix into a single $d$-dimensional vector: (1) the average and (2) the first principal component. Formally, the average $\overline{\mathbf{r}}$ is a $d$-dimensional vector where $\overline{\mathbf{r}} = \sum_{i=1}^{L}\mathbf{r_i} / L$. The first principal component $\mathbf{p}$ is a $d$-dimensional vector whose direction maximizes the variance of the (mean-shifted) $L$ representations. Then, the property can be written as $|\cos(\overline{\mathbf{r}}, \mathbf{p})| \approx 1$. This absolute value is more than 0.999 in our experiments.
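To make the test concrete, the following minimal numpy sketch (our own illustration, not the authors' released code) computes $\overline{\mathbf{r}}$ and $\mathbf{p}$, building $\mathbf{p}$ through the covariance construction described later in Section 5.3, and returns their absolute cosine similarity:

```python
import numpy as np

def property_cosine(R: np.ndarray) -> float:
    """R: (d, L) matrix with one d-dimensional token representation per column.

    Computes r_bar (the column average) and p (the first principal component,
    built as in Sec. 5.3: C = R^T R, w = top eigenvector, p = R w), and
    returns |cos(r_bar, p)|.
    """
    r_bar = R.mean(axis=1)
    C = R.T @ R                           # (L, L) covariance matrix of Sec. 5.3
    eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
    w = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue
    p = R @ w                             # first principal component
    return abs(r_bar @ p) / (np.linalg.norm(r_bar) * np.linalg.norm(p))
```

A usage example with synthetic per-dimension normal representations appears in the sketch after Table 4 in Section 5.2.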

We examine the generality of this property and find that it also holds in three more scenarios, when every $\mathbf{r_i}$ is drawn from (1) a fixed layer (not necessarily the last layer) of a pre-trained neural language model, (2) a fixed layer of a model right after random initialization, without any training, or (3) random token representations from all sentences encoded by a pre-trained model. Therefore, we conjecture that this property is intrinsic to the representations' distribution, which is related to the neural language model's architecture and parameters, and not necessarily related to the input structure. We realize that the empirical distribution of these representations is similar to a normal distribution on each dimension. Assuming this is true, we show that the property can in fact be derived mathematically.

Table 2: Average and minimum absolute cosine similarity between $\overline{\mathbf{r}}$ and $\mathbf{p}$ from 4,000 tests on AG's news.

<table><tr><td rowspan="2">Model</td><td colspan="2">Same Sentence</td><td colspan="2">Random Sentence</td></tr><tr><td>Average</td><td>Min</td><td>Average</td><td>Min</td></tr><tr><td>BERT</td><td>.9994</td><td>.9908</td><td>.9992</td><td>.9930</td></tr><tr><td>RoBERTa</td><td>.9989</td><td>.9984</td><td>.9989</td><td>.9985</td></tr><tr><td>XLNet</td><td>.9990</td><td>.9874</td><td>.9994</td><td>.9960</td></tr><tr><td>ELMo</td><td>.9957</td><td>.9681</td><td>.9860</td><td>.6836</td></tr><tr><td>Word2vec</td><td>.9590</td><td>.8506</td><td>.9405</td><td>.8497</td></tr><tr><td>Glove</td><td>.9639</td><td>.5014</td><td>.9546</td><td>.3102</td></tr></table>

Our contributions are summarized as follows.

- We discover a common, insightful property of several pre-trained neural language models—"average" $\approx$ "first principal component". To some extent, this explains why the average representation is always a simple yet strong baseline.
- We verify the generality of this property by obtaining representations from a random mixture of layers and sentences, and also by using randomly initialized models instead of pre-trained ones.
- We show that representations from language models empirically follow a per-dimension normal distribution that leads to the property.

Reproducibility. We will release code to reproduce our experiments on GitHub.$^{1}$

# 2 Experimental Settings

Dataset. We randomly sample 4,000 sentences each from three datasets in three different domains: AG's news corpus (Zhang et al., 2015), KP20k Computer Science papers (Meng et al., 2017), and DBpedia (Zhang et al., 2015).

Pre-trained Neural Language Models. We experiment on five well-known language models: (1) BERT (Devlin et al., 2019), (2) RoBERTa (Liu et al., 2019), (3) GPT-2, (4) XLNet (Yang et al., 2019), and (5) ELMo (Peters et al., 2018). For the first four transformer-based models, we use the base (and cased, if available) version from the HuggingFace (Wolf et al., 2019) implementation. For ELMo, we follow the AllenNLP toolbox (Gardner et al., 2018).


(a) Pre-trained


(b) Randomly Initialized

Figure 2: Average cosine similarity for different layers.

Word Embedding Models. We include experiments on the word embeddings Word2vec (Mikolov et al., 2013) and Glove (Pennington et al., 2014) learned on Wikipedia (Fares et al., 2017).

# 3 The Property: "Average" ≈ "First Principal Component"

In most applications, each representation $\mathbf{r_i}$ in $\mathbf{R}$ comes from the tokens within the Same Sentence and the last layer of a pre-trained neural language model. Following this setting, we conduct 4,000 tests each on the three datasets and summarize the results in Table 1. One can easily see that the average and minimum absolute cosine similarities are very close to 1 for all pre-trained neural language models. The word embeddings satisfy the property on average, but not for some outlier sentences$^{2}$. Given that uniformly random representations have near-zero average and minimum absolute cosine similarity values, we conclude that this is a special property of the representations generated by language models. To some extent, it explains the effectiveness of the average last-layer representation based on a language model, which has been widely adopted and observed in the literature.

# 4 Generality Tests of the Property

Different Layers. To evaluate our discovered property's generality, we first investigate whether this property only holds for the last-layer representations. For the four transformer-based language models, there are 13 possible layers (i.e., one after the lookup table and 12 after the encoder/decoder layers) from which to retrieve representations for tokens. Therefore, we test the property based on representations from each layer and plot the average absolute cosine similarities in Figure 2. One can see that the property holds for the last few layers in all four models.

Randomly Initialized Models. We repeat the same test for randomly initialized models, i.e., models not (pre-)trained at all. The results are also in Figure 2. Again, we can see that the property holds for the last few layers in all four models.

Random Sentence. Finally, we explore the case where the representations can even come from different sentences. Specifically, we shuffle all the last-layer token representations of the 4,000 sentences and re-group them into 4,000 random lists of representations, as sketched below. With a high probability, each token representation in a list is generated independently of the other tokens in the same list. We show the results in the Random Sentence section of Table 2. Surprisingly, even with "unrelated" token representations, the property still holds well.
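The shuffling step can be sketched as follows (our illustration; `reps_per_sentence`, a list of per-sentence arrays of last-layer token representations, is a hypothetical input):

```python
import numpy as np

def shuffle_into_random_lists(reps_per_sentence, seed=0):
    """Pool all token representations, shuffle them, and re-group them into
    lists with the same lengths as the original sentences (Sec. 4)."""
    rng = np.random.default_rng(seed)
    lengths = [len(r) for r in reps_per_sentence]
    pool = np.concatenate(reps_per_sentence, axis=0)  # (total_tokens, d)
    rng.shuffle(pool)                                 # shuffle rows in place
    out, start = [], 0
    for n in lengths:
        out.append(pool[start:start + n])             # one random list per sentence
        start += n
    return out
```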

# 5 Analysis

In this section, we attempt to answer what could be the reason that language models show this property. From Section 4, we know that the property also holds for randomly initialized models. Such models know nothing about natural languages. Therefore, it is reasonable to believe that this property is intrinsic to the models and related to the distribution of these representations.

# 5.1 Representation Distribution Analysis: BERT as a Case Study

We show that each dimension of BERT representations likely follows a normal distribution.

From Figure 3, we can see that the quantiles match those of a normal distribution almost perfectly in a Q-Q plot (Wilk and Gnanadesikan, 1968) of the first dimension. We have checked another ten random dimensions, and their quantiles all match well (see Appendix).

We also compare the skewness and kurtosis of a standard normal distribution with those of the empirical distribution of standardized representation values in each dimension. Let $\mathbf{s_j}$ be the vector that contains the values of dimension $j$ across the representations. Specifically, consider the representation matrix $\mathbf{R}'$ for all $D = 224{,}970$ representations over the 4,000 sentences; the rows of $\mathbf{R}'$ correspond to the $\mathbf{s_j}$. The standardized vector $\widetilde{\mathbf{s_j}}$ of $\mathbf{s_j}$ is defined as $\widetilde{s_{ji}} = \frac{s_{ji} - \hat{\mu}_j}{\hat{\sigma}_j}$, where $\hat{\mu}_j = \frac{\sum_{i=1}^D s_{ji}}{D}$ and $\hat{\sigma}_j = \sqrt{\frac{\sum_{i=1}^D (s_{ji} - \hat{\mu}_j)^2}{D}}$. For each dimension $j$, $1 \leq j \leq d$, one can obtain an empirical distribution from $\widetilde{\mathbf{s_j}}$. From Table 3, the third moment matches that of a standard normal distribution well, while the fourth moment is a bit off. Further, we examine the off-diagonal terms of the $d \times d$ covariance matrix of the representations, which have a mean of 0.0101 and a standard deviation of 0.0116. Compared with the mean of 0.1747 of the diagonal terms, this is very small. Therefore, we conjecture that each dimension of BERT's representations can be treated approximately as an independent normal distribution. We note that we do not perform normality tests due to the large dataset size (i.e., over 200,000 representations), since even a minor deviation from the normal distribution can make statistical tests reject the null hypothesis.


Figure 3: Q-Q plot of the 1st dimension of BERT representations against a normal distribution. We sampled $10\%$ of representations to reduce the figure size.

Table 3: $N(0,1)$ vs. the Distribution of Normalized BERT Representations. For empirical values, we show Avg (±Std) over 768 dimensions.

<table><tr><td></td><td>~N(0,1)</td><td>~Distribution(s̃<sub>j</sub>)</td></tr><tr><td>Skewness (E[z<sup>3</sup>])</td><td>0</td><td>0.0062 (±0.5884)</td></tr><tr><td>Kurtosis (E[z<sup>4</sup>])</td><td>3</td><td>3.9629 (±3.3821)</td></tr></table>
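A short sketch of this per-dimension check (our illustration; `R_prime` is a hypothetical array following the paper's orientation, with one row $\mathbf{s_j}$ per dimension):

```python
import numpy as np

def per_dimension_moments(R_prime: np.ndarray):
    """R_prime: (d, D) array whose rows are the vectors s_j of Sec. 5.1.

    Standardizes each row and returns its empirical skewness E[z^3] and
    kurtosis E[z^4], to compare against 0 and 3 for N(0, 1) as in Table 3."""
    mu = R_prime.mean(axis=1, keepdims=True)
    sigma = R_prime.std(axis=1, keepdims=True)
    z = (R_prime - mu) / sigma
    skewness = (z ** 3).mean(axis=1)  # ~0 per dimension if normal
    kurtosis = (z ** 4).mean(axis=1)  # ~3 per dimension if normal
    return skewness, kurtosis
```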

In the rest of this section, we assume representations are sampled from $d$ normal distributions, i.e., each dimension follows a distribution $N(\mu_j, \sigma_j^2)$.

# 5.2 Fitted Distributions Satisfy the Property

We verify the property on generated representations following this distribution. When the parameters $\mu_j, \sigma_j$ are estimated from representations from language models, the property holds (see Appendix). We can also randomly sample the parameters from pre-defined distributions, as shown in Table 4. The results on pre-defined distributions tell us that: (1) the average of all $\mu_j$ should be 0, (2) not all of the $\mu_j$ should be exactly 0, and (3) the variance should not be too large in magnitude compared to the mean.

Table 4: Property testing results for representations following $d$ normal distributions with $\mu_j$ and $\sigma_j$ sampled from certain uniform distributions. 4,000 tests are done.

<table><tr><td>r<sub>ij</sub> ~ N(μ<sub>j</sub>, σ<sub>j</sub><sup>2</sup>)</td><td>Average</td><td>Min</td></tr><tr><td>μ<sub>j</sub> ~ U[-1, 1], σ<sub>j</sub> ~ U[0, 1]</td><td>.9986</td><td>.9939</td></tr><tr><td>μ<sub>j</sub> ~ U[-1, 1], σ<sub>j</sub> ~ U[0, 10]</td><td>.1475</td><td>.0000</td></tr><tr><td>μ<sub>j</sub> ~ U[3, 5], σ<sub>j</sub> ~ U[0, 1]</td><td>.1490</td><td>.1463</td></tr><tr><td>μ<sub>j</sub> = 0, σ<sub>j</sub> ~ U[0, 1]</td><td>.1587</td><td>.0000</td></tr></table>
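As a usage sketch (our illustration; it reuses the hypothetical `property_cosine` helper from the Section 1 sketch, and the choices of $d$, $L$, and the number of tests are ours), the first row of Table 4 can be approximated as:

```python
import numpy as np

rng = np.random.default_rng(0)
d, L, trials = 768, 50, 100                 # our arbitrary choices
cosines = []
for _ in range(trials):
    mu = rng.uniform(-1.0, 1.0, size=d)     # mu_j ~ U[-1, 1]
    sigma = rng.uniform(0.0, 1.0, size=d)   # sigma_j ~ U[0, 1]
    # Each dimension j of every representation is drawn from N(mu_j, sigma_j^2).
    R = mu[:, None] + sigma[:, None] * rng.standard_normal((d, L))
    cosines.append(property_cosine(R))      # helper from the Sec. 1 sketch
print(np.mean(cosines), np.min(cosines))    # both close to 1, as in Table 4
```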

In the following analysis, we additionally restrict all representations to sum to 0 across dimensions, i.e., $\sum_{j=1}^{d} r_{ij} = 0$ for every representation $\mathbf{r_i}$. This is mainly for the simplicity of the covariance matrix computation, as the PCA algorithm will first mean-shift the matrix $\mathbf{R}$.

# 5.3 Covariance Matrix C of Normally Distributed Representations

We define the $L \times L$ covariance matrix $\mathbf{C} = \mathbf{R}^{\mathrm{T}}\mathbf{R}$. Its $L \times 1$ eigenvector $\mathbf{w}$ corresponding to the largest eigenvalue can be used to obtain the first principal component, i.e., $\mathbf{p} = \mathbf{R}\mathbf{w}$.

We show that if the representations follow a per-dimension normal distribution, $\mathbf{C}$ will have a special shape: in expectation, its diagonal entries all share one positive value, and its off-diagonal entries share another. We theoretically derive the mean and standard deviation of the entries based on $\mu_j$ and $\sigma_j$ (derivations are available in the Appendix), empirically estimate their values, and report them in Table 5. It is clear that the standard deviation is smaller than the mean in magnitude, confirming the special shape of $\mathbf{C}$. Also, the theoretical and estimated values mostly match. The only significant difference is the standard deviation of the diagonal entries, which is due to the difference in the fourth-power statistics between the representations and the standard normal distribution, as shown in Table 3.

Table 5: Theoretical and Estimated Mean and Standard Deviation of the Values in the Covariance Matrix C.

<table><tr><td></td><td colspan="2">Theoretical</td><td colspan="2">Estimated</td></tr><tr><td></td><td>Mean</td><td>Std</td><td>Mean</td><td>Std</td></tr><tr><td>diagonal</td><td>0.2857</td><td>0.0350</td><td>0.2857</td><td>0.0710</td></tr><tr><td>off-diagonal</td><td>0.1100</td><td>0.0248</td><td>0.1100</td><td>0.0248</td></tr></table>

# 5.4 This Special $\mathbf{C} \to$ the Property

If the diagonal entries of the covariance matrix $\mathbf{C}$ are $a > 0$ and all off-diagonal entries are $b > 0$, the eigenvector $\mathbf{w}$ corresponding to the largest eigenvalue will be a uniform vector. The Perron-Frobenius theorem (Samelson, 1957) states that the (unique) largest eigenvalue $\lambda$ is bounded:

$$
\min_{i} \sum_{j=1}^{L} C_{ij} \leq \lambda \leq \max_{i} \sum_{j=1}^{L} C_{ij}, \tag{1}
$$

i.e., by the minimum and maximum row-sums of $\mathbf{C}$. Due to its special shape, all row-sums of $\mathbf{C}$ are around $a + b(L-1)$. Therefore, the largest eigenvalue $\lambda_1 \approx a + b(L-1)$. To obtain $\mathbf{w}$, one can solve $\mathbf{C}\mathbf{w} = \lambda_1 \mathbf{w}$. Obviously, $\mathbf{w} = \mathbf{1}$ is a solution, where $\mathbf{1}$ is the vector of 1's of length $L$. As a result, the first principal component $\mathbf{p} = \mathbf{R}\mathbf{w}$ follows the same direction as the average.
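A quick numerical check of this argument (our illustration; the values of $a$ and $b$ are the Table 5 means, and $L$ is an arbitrary choice):

```python
import numpy as np

# Covariance matrix with diagonal a and off-diagonal b, as in Sec. 5.4.
a, b, L = 0.2857, 0.1100, 50            # Table 5 means; L chosen arbitrarily
C = np.full((L, L), b) + (a - b) * np.eye(L)

eigvals, eigvecs = np.linalg.eigh(C)    # eigenvalues in ascending order
print(eigvals[-1], a + b * (L - 1))     # largest eigenvalue equals a + b(L-1)
print(np.ptp(eigvecs[:, -1]))           # ~0: the top eigenvector is uniform
```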

# 6 Related Work

Simple averaging is a widely used, strong baseline for aggregating (contextualized) token representations (Ethayarajh, 2019; Aharoni and Goldberg, 2020; Reimers and Gurevych, 2019; Zhang et al., 2015; Taddy, 2015; Yu et al., 2018). In this paper, we discover an empirical property of these representations ("average" $\approx$ "first principal component"), which can justify its effectiveness.

There are other attempts to analyze properties of language models. Clark et al. (2019) analyze the syntactic information that BERT's attention maps capture. K et al. (2020) probe the causes of the multilinguality of multilingual BERT. Wang and Chen (2020) show that position information is learned differently in different language models. Different from these language-specific properties, we believe our newly discovered property relates more to the internal structure of neural language models.

# 7 Conclusion and Future Work

This paper shows a common, insightful property of representations from neural language models—"average" $\approx$ "first principal component". This property is general and holds in many challenging scenarios. After analyzing BERT representations as a case study, we conjecture that these representations follow a normal distribution for each dimension, and that this distribution leads to our discovered property. We believe that this work can shed light on future directions: (1) identifying the distributions that representations from language models follow, and (2) further implications or properties that these representations have.

# Acknowledgements

We thank the anonymous reviewers and program chairs for their valuable and insightful feedback. The research was sponsored in part by the National Science Foundation Convergence Accelerator under award OIA-2040727 as well as generous gifts from Google, Adobe, and Teradata. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and should not be interpreted as necessarily representing the views, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright annotation hereon.

# References

Roee Aharoni and Yoav Goldberg. 2020. Unsupervised domain clusters in pretrained language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7747-7763. Association for Computational Linguistics.

Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? An analysis of BERT's attention. CoRR, abs/1906.04341.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.

Kawin Ethayarajh. 2019. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 55-65. Association for Computational Linguistics.

Murhaf Fares, Andrey Kutuzov, Stephan Oepen, and Erik Velldal. 2017. Word vectors, reuse, and replicability: Towards a community repository of large-text resources. In Proceedings of the 21st Nordic Conference on Computational Linguistics, NODALIDA 2017, Gothenburg, Sweden, May 22-24, 2017, volume 131 of Linköping Electronic Conference Proceedings, pages 271-276. Linköping University Electronic Press / Association for Computational Linguistics.

Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew E. Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. CoRR, abs/1803.07640.

Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilingual BERT: An empirical study. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.

Rui Meng, Sanqiang Zhao, Shuguang Han, Daqing He, Peter Brusilovsky, and Yu Chi. 2017. Deep keyphrase generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 582-592. Association for Computational Linguistics.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 3111-3119.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar. A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532-1543. ACL.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 2227-2237. Association for Computational Linguistics.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3980-3990. Association for Computational Linguistics.

Hans Samelson. 1957. On the Perron-Frobenius theorem. Michigan Math. J., 4(1):57-59.

Matt Taddy. 2015. Document classification by inversion of distributed language representations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 2: Short Papers, pages 45-49. The Association for Computer Linguistics.

Yu-An Wang and Yun-Nung Chen. 2020. What do position embeddings learn? An empirical study of pre-trained language model positional encoding. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6840-6849. Association for Computational Linguistics.

M. B. Wilk and R. Gnanadesikan. 1968. Probability plotting methods for the analysis of data. Biometrika, 55(1):1-17.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. CoRR, abs/1910.03771.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 5754-5764.

Katherine Yu, Haoran Li, and Barlas Oguz. 2018. Multilingual seq2seq training with similarity loss for cross-lingual document classification. In Proceedings of The Third Workshop on Representation Learning for NLP, Rep4NLP@ACL 2018, Melbourne, Australia, July 20, 2018, pages 175-179. Association for Computational Linguistics.

Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 649-657.

# A Q-Q Plot of Ten Random Dimensions

We randomly sample another 10 dimensions from the 768 dimensions of BERT and plot their quantiles against a normal distribution in Figure 4. All 10 of these dimensions match a normal distribution quite well.

# B Normal Distribution Estimated from Models

In addition to randomly sampled $\mu_j$ and $\sigma_j$, we can also use the empirical mean and standard deviation of (each dimension of) representations from pre-trained language models. Table 6 shows that the property is well satisfied by these representations. This further suggests that representations from these models have properties similar to normal distributions.

Table 6: Representations following $d$ normal distributions with parameters estimated from neural language models.

<table><tr><td>Model</td><td>Average</td><td>Min</td></tr><tr><td>BERT</td><td>.9995</td><td>.9978</td></tr><tr><td>RoBERTa</td><td>.9989</td><td>.9982</td></tr><tr><td>GPT-2</td><td>.9988</td><td>.9982</td></tr><tr><td>XLNet</td><td>.9994</td><td>.9977</td></tr><tr><td>ELMo</td><td>.9987</td><td>.9947</td></tr></table>

# C Diagonal & Off-diagonal Values

Here we show the calculations for the values in the covariance matrix $\mathbf{C}$. Note that

$$
C_{ij} = \frac{1}{d-1} \sum_{k=1}^{d} r_{ki} r_{kj},
$$

so each diagonal entry $C_{ii}$ is a sum of $d$ products of normally distributed random variables with themselves, and all $C_{ii}$ follow the same distribution; each off-diagonal entry $C_{ij}$ is a sum of $d$ products of pairs of normally distributed random variables, and similarly, all off-diagonal entries follow the same distribution. Therefore, in expectation, the covariance matrix has identical diagonal entries and identical off-diagonal entries. Their means and variances can be derived mathematically:

$$
\mathbb{E}[C_{ii}] = \frac{1}{d-1} \sum_{k=1}^{d} (\sigma_k^2 + \mu_k^2)
$$

$$
\operatorname{Var}[C_{ii}] = \frac{1}{(d-1)^2} \left( \sum_{k=1}^{d} 2\sigma_k^4 + 4\mu_k^2 \sigma_k^2 \right)
$$

$$
\mathbb{E}[C_{ij}] = \frac{1}{d-1} \sum_{k=1}^{d} \mu_k^2
$$

$$
\operatorname{Var}[C_{ij}] = \frac{1}{(d-1)^2} \left( \sum_{k=1}^{d} \sigma_k^4 + 2\mu_k^2 \sigma_k^2 \right)
$$

We also outline the steps of the derivation. Following our notation, $r_{ij} \sim N(\mu_j, \sigma_j^2) \Rightarrow r_{ij} = \sigma_j z_{ij} + \mu_j$, where $z_{ij}$ is a standard normal variable, i.e., $z_{ij} \sim N(0,1)$.

$$
\begin{aligned}
\mathbb{E}[C_{ii}] &= \mathbb{E}\left[\frac{1}{d-1} \sum_{k=1}^{d} r_{ki} r_{ki}\right] \\
&= \frac{1}{d-1} \sum_{k=1}^{d} \mathbb{E}\left[(\sigma_k z_{ik} + \mu_k)^2\right] \\
&= \frac{1}{d-1} \sum_{k=1}^{d} (\sigma_k^2 + \mu_k^2)
\end{aligned} \tag{2}
$$

$$
\begin{aligned}
\mathbb{E}[C_{ij}] &= \mathbb{E}\left[\frac{1}{d-1} \sum_{k=1}^{d} r_{ki} r_{kj}\right] \\
&= \frac{1}{d-1} \sum_{k=1}^{d} \mathbb{E}\left[(\sigma_k z_{ik} + \mu_k)(\sigma_k z_{jk} + \mu_k)\right] \\
&= \frac{1}{d-1} \sum_{k=1}^{d} \mu_k^2
\end{aligned} \tag{3}
$$

$$
\begin{aligned}
\operatorname{Var}[C_{ii}] &= \mathbb{E}\left[\left(\frac{1}{d-1} \sum_{k=1}^{d} r_{ki} r_{ki}\right)^2\right] - \mathbb{E}[C_{ii}]^2 \\
&= \frac{1}{(d-1)^2} \, \mathbb{E}\left[\left(\sum_{k=1}^{d} r_{ki}^2\right)^2\right] - \mathbb{E}[C_{ii}]^2
\end{aligned} \tag{4}
$$

$$
\begin{aligned}
\mathbb{E}\left[\left(\sum_{k=1}^{d} r_{ki}^2\right)^2\right] &= \mathbb{E}\left[\left(\sum_{k=1}^{d} (\sigma_k z_{ik} + \mu_k)^2\right)^2\right] \\
&= \sum_{k=1}^{d} \mathbb{E}\left[(\sigma_k^2 z_{ik}^2 + 2\mu_k \sigma_k z_{ik} + \mu_k^2)^2\right] \\
&\quad + \sum_{k_1 \neq k_2} \mathbb{E}\left[(\sigma_{k_1}^2 z_{ik_1}^2 + 2\mu_{k_1}\sigma_{k_1} z_{ik_1} + \mu_{k_1}^2)(\sigma_{k_2}^2 z_{ik_2}^2 + 2\mu_{k_2}\sigma_{k_2} z_{ik_2} + \mu_{k_2}^2)\right] \\
&= \sum_{k=1}^{d} \mathbb{E}\left[(\sigma_k^2 z_{ik}^2 + \mu_k^2)^2 + 4\mu_k^2 \sigma_k^2 z_{ik}^2\right] + \sum_{k_1 \neq k_2} (\sigma_{k_1}^2 + \mu_{k_1}^2)(\sigma_{k_2}^2 + \mu_{k_2}^2) \\
&= \sum_{k=1}^{d} \left(3\sigma_k^4 + \mu_k^4 + 2\sigma_k^2 \mu_k^2 + 4\mu_k^2 \sigma_k^2\right) + \sum_{k_1 \neq k_2} (\sigma_{k_1}^2 + \mu_{k_1}^2)(\sigma_{k_2}^2 + \mu_{k_2}^2) \\
&= \left(\sum_{k=1}^{d} \sigma_k^2 + \mu_k^2\right)^2 + \sum_{k=1}^{d} 2\sigma_k^4 + 4\mu_k^2 \sigma_k^2
\end{aligned} \tag{5}
$$

$$
\begin{aligned}
\operatorname{Var}[C_{ij}] &= \mathbb{E}\left[\left(\frac{1}{d-1} \sum_{k=1}^{d} r_{ki} r_{kj}\right)^2\right] - \mathbb{E}[C_{ij}]^2 \\
&= \frac{1}{(d-1)^2} \, \mathbb{E}\left[\left(\sum_{k=1}^{d} r_{ki} r_{kj}\right)^2\right] - \mathbb{E}[C_{ij}]^2
\end{aligned} \tag{6}
$$

$$
\begin{aligned}
\mathbb{E}\left[\left(\sum_{k=1}^{d} r_{ki} r_{kj}\right)^2\right] &= \mathbb{E}\left[\left(\sum_{k=1}^{d} (\sigma_k z_{ik} + \mu_k)(\sigma_k z_{jk} + \mu_k)\right)^2\right] \\
&= \sum_{k=1}^{d} \mathbb{E}\left[\left(\sigma_k^2 z_{ik} z_{jk} + \mu_k \sigma_k (z_{ik} + z_{jk}) + \mu_k^2\right)^2\right] \\
&\quad + \sum_{k_1 \neq k_2} \mathbb{E}\left[\left(\sigma_{k_1}^2 z_{ik_1} z_{jk_1} + \mu_{k_1}\sigma_{k_1}(z_{ik_1} + z_{jk_1}) + \mu_{k_1}^2\right)\left(\sigma_{k_2}^2 z_{ik_2} z_{jk_2} + \mu_{k_2}\sigma_{k_2}(z_{ik_2} + z_{jk_2}) + \mu_{k_2}^2\right)\right] \\
&= \sum_{k=1}^{d} \mathbb{E}\left[\left(\sigma_k^2 z_{ik} z_{jk} + \mu_k^2\right)^2 + \mu_k^2 \sigma_k^2 (z_{ik} + z_{jk})^2\right] + \sum_{k_1 \neq k_2} \mu_{k_1}^2 \mu_{k_2}^2 \\
&= \sum_{k=1}^{d} \left(\sigma_k^4 + \mu_k^4 + 2\mu_k^2 \sigma_k^2\right) + \sum_{k_1 \neq k_2} \mu_{k_1}^2 \mu_{k_2}^2 \\
&= \left(\sum_{k=1}^{d} \mu_k^2\right)^2 + \sum_{k=1}^{d} \sigma_k^4 + 2\mu_k^2 \sigma_k^2
\end{aligned} \tag{7}
$$


Figure 4: Q-Q plots on ten random dimensions: (a) $d = 29$, (b) $d = 94$, (c) $d = 287$, (d) $d = 342$, (e) $d = 390$, (f) $d = 495$, (g) $d = 507$, (h) $d = 527$, (i) $d = 655$, (j) $d = 670$.
averageapproximatesfirstprincipalcomponentanempiricalanalysisonrepresentationsfromneurallanguagemodels/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:67d79be631da0756d0a6d4f6aa79e4900851b57e3dd8336e2072a399a9457574
size 413100
averageapproximatesfirstprincipalcomponentanempiricalanalysisonrepresentationsfromneurallanguagemodels/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3832d24bf20da37924ed146abff7f4bcd58f0de1a3c0fae71cad210741c34a9b
size 327053
itdoesntlookgoodforadatetransformingcritiquesintopreferencesforconversationalrecommendationsystems/077fe0ce-4c8e-4e53-a592-0bbc670043aa_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:95bec11983a3d09b5f09764c85af571b382f526158de669e0e928423358d07d7
size 44298
itdoesntlookgoodforadatetransformingcritiquesintopreferencesforconversationalrecommendationsystems/077fe0ce-4c8e-4e53-a592-0bbc670043aa_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ffd9da46a6e62952feba6b4bb871c2b9438046d2366f51312928c2978e28a648
size 53366
itdoesntlookgoodforadatetransformingcritiquesintopreferencesforconversationalrecommendationsystems/077fe0ce-4c8e-4e53-a592-0bbc670043aa_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3a6fefbc53742615c8fed8eac523e3420e1915ba6e304f615ce138708f144379
size 345479
itdoesntlookgoodforadatetransformingcritiquesintopreferencesforconversationalrecommendationsystems/full.md
ADDED
@@ -0,0 +1,188 @@
# "It doesn't look good for a date": Transforming Critiques into Preferences for Conversational Recommendation Systems

Victor S. Bursztyn<sup>1</sup>, Jennifer Healey<sup>2</sup>, Nedim Lipka<sup>2</sup>, Eunyee Koh<sup>2</sup>, Doug Downey<sup>1,3</sup>, and Larry Birnbaum<sup>1</sup>

<sup>1</sup> Department of Computer Science, Northwestern University, Evanston, IL, USA

<sup>2</sup> Adobe Research, San Jose, CA, USA

<sup>3</sup> Allen Institute for Artificial Intelligence, Seattle, WA, USA

v-bursztyn@u.northwestern.edu, {jehealey, lipka, eunyee}@adobe.com, {d-downey, l-birnbaum}@northwestern.edu

# Abstract

Conversations aimed at determining good recommendations are iterative in nature. People often express their preferences in terms of a critique of the current recommendation (e.g., "It doesn't look good for a date"), requiring some degree of common sense for a preference to be inferred. In this work, we present a method for transforming a user critique into a positive preference (e.g., "I prefer more romantic") in order to retrieve reviews pertaining to potentially better recommendations (e.g., "Perfect for a romantic dinner"). We leverage a large neural language model (LM) in a few-shot setting to perform critique-to-preference transformation, and we test two methods for retrieving recommendations: one that matches embeddings, and another that fine-tunes an LM for the task. We instantiate this approach in the restaurant domain and evaluate it using a new dataset of restaurant critiques. In an ablation study, we show that utilizing critique-to-preference transformation improves recommendations, and that there are at least three general cases that explain this improved performance.

# 1 Introduction

Conversational recommendation systems (CRSs) are dialog-based systems that aim to refine a set of options over multiple turns of a conversation, envisioning more natural interactions and better user modeling than in non-conversational approaches.

However, the resulting dialogs still do not necessarily reflect how real conversations unfold. Most CRSs fall into two categories: they either frame the problem as a slot-filling task within a predefined feature space, such as Sun and Zhang (2018); Zhang et al. (2018); Budzianowski et al. (2018), which is closer to how people make decisions but not as flexible as real conversations; or they elicit preferences by asking users to rate specific items, such as Christakopoulou et al. (2016), which is independent of a feature space but not as natural to users.

> "What's it about the menu that you didn't like?"
>
> "I need something less expensive and more appropriate for kids."
>
> "I hear you. You prefer a more affordable place for kids."
>
> "Check out M Burger for that. A previous customer wrote: 'This is an affordable, kid-friendly place specialized in burgers and American food.'"

Figure 1: An example of our system transforming a critique into a positive preference and then using a customer testimonial to sell the user on a new option.

When we examine situations involving real human agents (Lyu et al., 2021), decisions typically require multiple rounds of recommendations by the agent and critiques by the user, with the agent continuously improving the recommendations based upon user preferences that can be inferred from such critiques.

These inferences can be compared to the types of common sense inferences that have been studied recently with LMs (Davison et al., 2019; Majumder et al., 2020; Jiang et al., 2021). However, the use of LMs for critique interpretation remains underexplored, despite the important role of critiques in communicating preferences—a very natural real-world task. Working in the restaurant domain, we prompt GPT3 (Brown et al., 2020) to transform a free-form critique (e.g., "It doesn't look good for a date") into a positive preference (e.g., "I prefer more romantic") that better captures the user's needs. Compared with most previous work on common sense inference, which relies on manually-constructed question sets, our task presents an opportunity to study common sense inference within a naturally arising, real-world application.

We test the effect of our novel critique interpretation method on the quality of recommendations using two different methods: one that matches the embedding of an input statement (e.g., "I prefer more romantic") to persuasive arguments found in customer reviews (e.g., "Perfect for a romantic dinner"); and another that fine-tunes BERT (Devlin et al., 2018) to use an input statement to rank a given set of arguments.

Our work differs from previous critiquing-based systems that strongly limit the types of critiques that can be used (Chen and Pu, 2012) and aligns with a recent trend in the CRS literature towards more open-ended interactions (Radlinski et al., 2019; Byrne et al., 2019). To the best of our knowledge, Penha and Hauff (2020) are the closest prior work, investigating whether BERT can be used for recommendations by trying to infer related items and genres. Here, we focus specifically on critique-to-preference inferences, aiming at more natural dialogs and better recommendations.

Our contributions are the following: 1. We propose a critique interpretation method that does not limit the feature space a priori; 2. We demonstrate that transforming critiques into preferences improves recommendations more than twofold when matching embeddings and by 19-59% when fine-tuning an LM to rank recommendations, and we present three possible explanations for this; and 3. We release a new dataset of user critiques in the restaurant domain, contributing a new applied task where common sense has great practical value.

# 2 Methods

In this section, we describe three methods: a critique interpretation method (2.1), an embeddings-based recommender (2.2.1), and an LM-based recommender (2.2.2).

# 2.1 Critique Interpretation

Critique interpretation is the task of transforming a free-form critique into a positive preference. Our critique interpretation method uses GPT3 in a few-shot setting, similarly to Brown et al. (2020), which can be represented in a 3-shot version as follows:

GPT3 Input:

It looks cheap => I prefer a fancier place.

Too expensive => I prefer a more affordable place.

That's so tacky => I prefer a more stylish place.

That's not good for a date => I prefer

GPT3 Output:

a more romantic place.

To prime GPT3 for our task, we include ten examples in its prompt, five related to the food and five to the atmosphere. We then append the critique that we would like to transform, followed by the string "I prefer", which conditions GPT3 to generate a positive preference. In our experiments, positive preferences are sampled using OpenAI's Completion API (the DaVinci model, temperature = 0.7, top-p = 1.0, response length = 20, and no penalties).
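For concreteness, a minimal sketch of this sampling call with the legacy OpenAI Completion API (our illustration under the stated parameters; the 3-shot prompt and the helper function are assumptions, not the authors' released code):

```python
import openai  # legacy completions-style client; assumes openai.api_key is set

FEW_SHOT_PROMPT = (
    "It looks cheap => I prefer a fancier place.\n"
    "Too expensive => I prefer a more affordable place.\n"
    "That's so tacky => I prefer a more stylish place.\n"
)

def critique_to_preference(critique: str) -> str:
    """Append the critique and condition GPT3 to complete 'I prefer ...'."""
    prompt = FEW_SHOT_PROMPT + critique + " => I prefer"
    response = openai.Completion.create(
        engine="davinci",   # the DaVinci model
        prompt=prompt,
        temperature=0.7,
        top_p=1.0,
        max_tokens=20,      # response length = 20; no penalties
    )
    completion = response["choices"][0]["text"].split("\n")[0]
    return "I prefer" + completion

# e.g., critique_to_preference("That's not good for a date")
# -> "I prefer a more romantic place."
```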

Besides not requiring a hand-crafted feature set, this method is also capable of more flexible interpretations of language, such as transforming "How come they only serve that much?"—with no clearly negative words—into "I prefer larger portions."

# 2.2 Content-based Recommendations

# 2.2.1 Recommendation Search

Our embeddings-based recommender, $f_{cos}$, takes a preference statement and searches for persuasive arguments in customer reviews. As seen in Figure 1, we can define a persuasive argument as a review sentence that conveys clearly positive sentiment while being as specific as possible w.r.t. the user's preferences.

To incorporate this definition into $f_{cos}$, we first parse the sentences in customer reviews using spaCy (Honnibal and Montani, 2017) and use EmoNet (Abdul-Mageed and Ungar, 2017) to keep the sentences with at least a minimum amount of "joy" ($\geq 0.7$) as our set of argument candidates $A$.

Then we use the Universal Sentence Encoder (Cer et al., 2018) to calculate the similarity of all these argument candidates w.r.t. a given user preference. We calculate the cosine similarity between their representations in this embedding space, select the argument with maximum alignment, and recommend the associated restaurant:

$$
\mathrm{Sim}(s_1, s_2) = \cos\left(\mathrm{Enc}(s_1), \mathrm{Enc}(s_2)\right)
$$

$$
f_{cos}(\mathit{preference}) = \operatorname*{argmax}_{a \in A} \mathrm{Sim}(a, \mathit{preference})
$$

As with critique interpretation, $f_{cos}$ can take any natural language statement as input to search for potential recommendations. We denote it $f_{cos}^{pref}$ when it uses an inferred positive preference as input ("I prefer more romantic") and $f_{cos}^{crit}$ when it directly uses a critique ("It doesn't look good for a date"). In our first ablation study, we use $f_{cos}^{crit}$ as a baseline to test the efficacy of $f_{cos}^{pref}$ in retrieving better recommendations.
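A minimal sketch of $f_{cos}$ (our illustration; the TF-Hub Universal Sentence Encoder URL is the public module, while `argument_candidates` is a hypothetical stand-in for the filtered review sentences in $A$):

```python
import numpy as np
import tensorflow_hub as hub

# Universal Sentence Encoder (Cer et al., 2018)
encode = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

def f_cos(statement: str, argument_candidates: list[str]) -> tuple[str, int]:
    """Return the review-sentence argument best aligned with the statement,
    plus its index, which can be mapped back to the source restaurant."""
    vectors = np.asarray(encode([statement] + argument_candidates))
    query, args = vectors[0], vectors[1:]
    sims = args @ query / (
        np.linalg.norm(args, axis=1) * np.linalg.norm(query)
    )
    best = int(np.argmax(sims))  # argmax of Sim(a, preference) over a in A
    return argument_candidates[best], best
```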
$^1$ Fully available at https://bit.ly/3fnf8V2
# 2.2.2 Recommendation Ranking
Besides using pretrained embeddings to search for recommendations from customer reviews, we design a more computationally intensive method, $f_{LM}$ , that fine-tunes BERT to rank a set of arguments $A$ considering a given input statement.
We use the currently top-performing open-source solution (Han et al., 2020; Pasumarthi et al., 2019) on the MSMARCO passage ranking leaderboard $^2$ to fine-tune three versions of BERT: $f_{LM}^{pref}$ uses a positive preference as input ("I prefer more romantic"), $f_{LM}^{crit}$ uses a critique ("It doesn't look good for a date"), and $f_{LM}^{concat}$ uses the concatenation of a critique and a preference ("It doesn't look good for a date. I prefer more romantic"). Hypothetically, the more powerful LM method could learn to satisfy the user's preferences without the need for critique interpretation if $f_{LM}^{crit} \approx f_{LM}^{pref} \approx f_{LM}^{concat}$ in performance.
In our experiments, BERT-Base is fine-tuned for 10,000 steps, with learning rate $= 10^{-5}$, maximum sequence length $= 512$, and softmax loss, using an Nvidia Quadro RTX 8000 for 3-6h per run (when ranking 15 and 30 arguments, respectively) and two runs per model (2-fold cross validation).
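For readers who want a feel for the setup, the sketch below fine-tunes a BERT cross-encoder to score (statement, argument) pairs with Hugging Face `transformers`; this is a simplified pointwise stand-in, not the TF-Ranking softmax-loss pipeline used in our experiments.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1)  # single relevance score per pair
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def training_step(statement: str, arguments: list[str], scores: list[float]):
    # Encode (statement, argument) pairs and regress the human relevance scores.
    batch = tokenizer([statement] * len(arguments), arguments,
                      truncation=True, padding=True, max_length=512,
                      return_tensors="pt")
    logits = model(**batch).logits.squeeze(-1)
    loss = torch.nn.functional.mse_loss(logits, torch.tensor(scores))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```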
# 3 Evaluation
We run two ablation studies to evaluate the hypothesis that critique interpretation benefits the overall recommendation approach. First, we analyze our embeddings-based recommender, $f_{cos}$, to check whether $f_{cos}^{pref} > f_{cos}^{crit}$. Second, we analyze our LM fine-tuning-based recommender, $f_{LM}$, to check whether $f_{LM}^{pref} > f_{LM}^{crit}$ or $f_{LM}^{concat} > f_{LM}^{crit}$. Finally, we discuss qualitative differences between the tested arms.
# 3.1 Data
Our methods were instantiated in a system comprising 15 restaurants selected from two of the largest metropolitan areas in the United States, covering a variety of price ranges and cuisines. For each restaurant, up to 100 four- or five-star customer reviews were collected from Google Places. This resulted in a total of 1455 reviews comprising 5744 sentences, 2865 of which pass the threshold for being identified as positive review sentences.
We compiled a set of user critiques from two sources: a set of 46 unique critiques from user studies that were conducted to test an earlier system prototype (Bursztyn et al., 2021), and 294 additional critiques adapted from the Circa dataset (Louis et al., 2020). Circa was designed to study indirect answers to yes-no questions, such as "Are you a big meat eater?" answered with "I prefer leafy greens", from which the critique "I'm not a big meat eater" can be generated. We ended up with a total of 340 individual critiques after examining 1205 similar examples.
We generated a positive preference for each individual critique using our critique interpretation method (2.1), without discarding any critiques. Our method yielded accurate preferences for 298 critiques $(87.6\%)$. For the remaining 42, we found GPT3 mostly undecided and vague (e.g., "Jalapeños are my limit" generates "I prefer food without jalapeños"). In our experiments, for these edge cases, we kept the best of three trials, but we believe that results using just the first generation would have been qualitatively similar.
The 340 critiques were randomly combined into pairs and triples in order to simulate longer conversations, i.e., two- and three-round critiques. We sampled 340 pairs and 340 triples, substituting only the exceptional combinations that contained contradictory statements (e.g., "I'm not a big meat eater" paired with "I'm not in the mood for vegetables"), for a total of 1020 critiques. Compound critiques, as well as their corresponding preferences, were concatenated into single statements.
This curated dataset of 1020 restaurant critiques and inferred preferences is made available to the research community. $^3$
# 3.2 Measurements
To evaluate our embeddings-based method $f_{cos}$, we use critiques as input to $f_{cos}^{crit}$ and their positive preferences as input to $f_{cos}^{pref}$. For each query we retrieve the top 3 arguments, which are labeled as accurate or inaccurate by a human judge (illustrated in Table 1). To measure labeling consistency, a second human annotator redundantly labeled a sample of 100 arguments, resulting in a Cohen's Kappa of 0.71, which indicates strong agreement.
We then measure Precision@1, Precision@2, and Precision@3 in Table 2 for the embeddings-based method with $(f_{cos}^{pref})$ and without critique interpretation $(f_{cos}^{crit})$.
<table><tr><td rowspan="2">Test case</td><td rowspan="2">Positive preference</td><td colspan="3">Without critique interpretation ($f_{cos}^{crit}$)</td><td colspan="3">With critique interpretation ($f_{cos}^{pref}$)</td></tr><tr><td>Rank #1</td><td>Rank #2</td><td>Rank #3</td><td>Rank #1</td><td>Rank #2</td><td>Rank #3</td></tr><tr><td>It looks too casual.</td><td>I prefer a fancier place.</td><td>Very cheesy, very fresh!</td><td>Very kid friendly.</td><td>Awesome ambiance!</td><td>Elegant, upscale and classy place for a special occasion.</td><td>The best restaurant around here.</td><td>Superior restaurant, the only place I will have a dim sum.</td></tr><tr><td>It has a freaking band!</td><td>I prefer a more quiet place.</td><td>It has an awesome atmosphere.</td><td>It has an awesome atmosphere.</td><td>It has a great atmosphere.</td><td>Excellent spot to spend time alone or talk business.</td><td>Good ambiance.</td><td>Great place to be at night.</td></tr><tr><td>I don't really like seafood.</td><td>I prefer beef or chicken.</td><td>Everything delicious with an exception of the shrimps.</td><td>I found that I do not enjoy tuna, but my mom thought it was excellent.</td><td>For dinner, I enjoyed the scallops one night and the sea bass the second.</td><td>I only eat Beef Brisket here because is delicious!</td><td>Chicken flautas are always delish.</td><td>Chicken moist and tender.</td></tr></table>
Table 1: Three test cases with the top 3 arguments from ${f}_{cos}^{crit}$ and ${f}_{cos}^{pref}$ (accurate marked in bold).
<table><tr><td></td><td>Precision@1</td><td>Precision@2</td><td>Precision@3</td></tr><tr><td>$f_{cos}^{crit}$</td><td>0.256</td><td>0.251</td><td>0.250</td></tr><tr><td>$f_{cos}^{pref}$</td><td>0.574</td><td>0.546</td><td>0.525</td></tr></table>
Table 2: Precision@1, 2, and 3 for $f_{cos}^{crit}$ and $f_{cos}^{pref}$ .
<table><tr><td></td><td>model</td><td>nDCG@1</td><td>nDCG@3</td><td>nDCG@5</td><td>nDCG@10</td></tr><tr><td rowspan="3">task1</td><td>$f_{LM}^{crit}$</td><td>0.617</td><td>0.674</td><td>0.723</td><td>0.811</td></tr><tr><td>$f_{LM}^{pref}$</td><td>0.731</td><td>0.753</td><td>0.773</td><td>0.858</td></tr><tr><td>$f_{LM}^{concat}$</td><td>0.726</td><td>0.740</td><td>0.773</td><td>0.844</td></tr><tr><td rowspan="3">task2</td><td>$f_{LM}^{crit}$</td><td>0.676</td><td>0.754</td><td>0.805</td><td>0.865</td></tr><tr><td>$f_{LM}^{pref}$</td><td>0.729</td><td>0.761</td><td>0.774</td><td>0.856</td></tr><tr><td>$f_{LM}^{concat}$</td><td>0.805</td><td>0.772</td><td>0.808</td><td>0.863</td></tr><tr><td rowspan="3">task3</td><td>$f_{LM}^{crit}$</td><td>0.498</td><td>0.537</td><td>0.605</td><td>0.660</td></tr><tr><td>$f_{LM}^{pref}$</td><td>0.790</td><td>0.754</td><td>0.758</td><td>0.791</td></tr><tr><td>$f_{LM}^{concat}$</td><td>0.686</td><td>0.663</td><td>0.685</td><td>0.746</td></tr></table>
Table 3: nDCG@1, 3, 5, and 10 for $f_{LM}$ on each task.
To train and evaluate the BERT-based method $f_{LM}$ , we retrieve the top 15 arguments from $f_{cos}^{crit}$ and the top 15 arguments from $f_{cos}^{pref}$ for 100 queries. Each argument receives a score from 3 (very relevant) to 1 (irrelevant). Again, a second human annotator relabeled 100 arguments for a Cohen's Kappa of 0.73, also indicating strong agreement.
We design three ranking tasks: $task_{1}$ consists of ranking the 15 arguments originally retrieved with $f_{cos}^{crit}$ , hence closer to critiques in the embedding space; $task_{2}$ consists of ranking the 15 arguments originally retrieved with $f_{cos}^{pref}$ , hence closer to preferences; and $task_{3}$ consists of ranking both sets, i.e., 30 arguments. For each task we train $f_{LM}^{pref}$ , $f_{LM}^{crit}$ , and $f_{LM}^{concat}$ . We then measure nDCG@1, nDCG@3, nDCG@5, and nDCG@10 in Table 3 averaged after 2-fold cross validation.
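For reference, a small sketch of how nDCG@k can be computed from the 1-3 relevance labels (a standard formulation; details such as the gain function may differ from the TF-Ranking implementation we rely on):

```python
import numpy as np

def dcg_at_k(relevances: list[float], k: int) -> float:
    rel = np.asarray(relevances)[:k]
    # Graded gains discounted by log2 of the rank position.
    return float(np.sum((2.0 ** rel - 1) / np.log2(np.arange(2, rel.size + 2))))

def ndcg_at_k(relevances: list[float], k: int) -> float:
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Example: a ranked list of human scores (3 = very relevant, 1 = irrelevant)
print(ndcg_at_k([3, 1, 2, 3, 1], k=3))
```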
# 3.3 Results
We found that using the positive preferences yields substantial improvements in information retrieval. For $f_{cos}$, in Table 2, $f_{cos}^{pref}$ increases Precision@1 by $124\%$, Precision@2 by $118\%$, and Precision@3 by $110\%$. This gap is also present, with marginal variations, when separately analyzing single-, two-, and three-round critiques. For $f_{LM}$, in Table 3, $f_{LM}^{pref}$ outperforms $f_{LM}^{crit}$ by $19\%$ on nDCG@1 even at $task_{1}$, where $f_{LM}^{crit}$ could have an edge. This gap persists for $task_{2}$ ($f_{LM}^{concat}$ outperforms by $19\%$), increases for $task_{3}$ ($f_{LM}^{pref}$ outperforms by $59\%$), and tends to narrow towards nDCG@10. Overall, we found strong evidence in support of our hypothesis.
Table 1 shows three examples in which the use of positive preferences was clearly beneficial. These examples represent three critique patterns that cause systematic errors if critique interpretation is turned off: (1) the user implies a preference for a feature using its polar opposite (e.g., "It looks too casual" implying "I prefer a fancier place"); (2) the user draws on common sense to express a preference (e.g., "It has a freaking band!" implying "I prefer a more quiet place"); and (3) the user implies a filter within a set of related features (e.g., "I don't really like seafood" implying a preference for alternatives in the meat category).
Analyzing the results of $f_{cos}^{pref}$ and $f_{cos}^{crit}$ for the 340 single-round critiques, we found 170 cases where $f_{cos}^{pref}$ outperformed $f_{cos}^{crit}$ . Within these, 40 belong to the first pattern (24%), 78 to the second (46%), and 38 to the third (22%). A common trait behind the three patterns is that critiques can be lexically very distinct from their corresponding preference statements, and critique interpretation helps to bridge this gap.
# 4 Conclusion & Future Work
In this paper, we presented an open-ended approach to content-based recommendations for CRS. We developed a novel critique interpretation method that uses GPT3 to infer positive preferences from free-form critiques. We also developed two methods for retrieving recommendations: one that matches embeddings and another that fine-tunes BERT for the task. We ran two ablation studies to test whether transforming critiques into positive preferences would yield better recommendations, confirming that it improves performance across both methods.
Finally, we described three critique patterns that cause systematic errors in recommendation search if critique interpretation is turned off.
For future work, we will strive to use critiques to identify and remove unsuitable restaurants; we speculate that the sparsity of customer reviews generally makes it harder to "rule out" than to "rule in." We will also study other issues such as when to ask clarification questions to resolve ambiguity in the scope of a critique.
# Acknowledgements
We would like to thank the reviewers for their helpful feedback. This work was supported in part by gift funding from Adobe Research and by NSF grant IIS-2006851.
# References
Muhammad Abdul-Mageed and Lyle Ungar. 2017. Emonet: Fine-grained emotion detection with gated recurrent neural networks. In Proceedings of the 55th annual meeting of the association for computational linguistics (volume 1: Long papers), pages 718-728, Vancouver, Canada. Association for Computational Linguistics.
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Inigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. 2018. MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016-5026, Brussels, Belgium. Association for Computational Linguistics.
Victor S. Bursztyn, Jennifer Healey, Eunyee Koh, Nedim Lipka, and Larry Birnbaum. 2021. Developing a conversational recommendation system for navigating limited options. In *Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems*, New York, NY, USA. Association for Computing Machinery.
Bill Byrne, Karthik Krishnamoorthi, Chinnadhurai Sankar, Arvind Neelakantan, Ben Goodrich, Daniel Duckworth, Semih Yavuz, Amit Dubey, Kyu-Young Kim, and Andy Cedilnik. 2019. Taskmaster-1: Toward a realistic and diverse dialog dataset. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4516-4525, Hong Kong, China. Association for Computational Linguistics.
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169-174, Brussels, Belgium. Association for Computational Linguistics.
Li Chen and Pearl Pu. 2012. Critiquing-based recommenders: survey and emerging trends. User Modeling and User-Adapted Interaction, 22(1-2):125-150.
Konstantina Christakopoulou, Filip Radlinski, and Katja Hofmann. 2016. Towards conversational recommender systems. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 815-824, New York, NY, USA. Association for Computing Machinery.
Joe Davison, Joshua Feldman, and Alexander Rush. 2019. Commonsense knowledge mining from pretrained models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1173-1178, Hong Kong, China. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Shuguang Han, Xuanhui Wang, Mike Bendersky, and Marc Najork. 2020. Learning-to-rank with BERT in TF-Ranking. arXiv preprint arXiv:2004.08476.
Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear, 7(1).
Liwei Jiang, Antoine Bosselut, Chandra Bhagavatula, and Yejin Choi. 2021. "I'm not mad": Commonsense implications of negation and contradiction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Online. Association for Computational Linguistics.
Annie Louis, Dan Roth, and Filip Radlinski. 2020. "I'd rather just go to bed": Understanding indirect answers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7411-7425.
Shengnan Lyu, Arpit Rana, Scott Sanner, and Mohamed Reda Bouadjenek. 2021. A workflow analysis of context-driven conversational recommendation. In Proceedings of The Web Conference 2021.
Bodhisattwa Prasad Majumder, Harsh Jhamtani, Taylor Berg-Kirkpatrick, and Julian McAuley. 2020. Like hiking? you probably enjoy nature: Person-grounded dialog with commonsense expansions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9194-9206, Online. Association for Computational Linguistics.
Rama Kumar Pasumarthi, Sebastian Bruch, Xuanhui Wang, Cheng Li, Michael Bendersky, Marc Najork, Jan Pfeifer, Nadav Golbandi, Rohan Anil, and Stephan Wolf. 2019. TF-Ranking: Scalable TensorFlow library for learning-to-rank. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 2970-2978.
Gustavo Penha and Claudia Hauff. 2020. What does BERT know about books, movies and music? Probing BERT for conversational recommendation. In Proceedings of the 14th ACM Conference on Recommender Systems, New York, NY, USA. Association for Computing Machinery.
Filip Radlinski, Krisztian Balog, Bill Byrne, and Karthik Krishnamoorthi. 2019. Coached conversational preference elicitation: A case study in understanding movie preferences.
Yueming Sun and Yi Zhang. 2018. Conversational recommender system. In *The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval*, pages 235-244, New York, NY, USA. Association for Computing Machinery.
Yongfeng Zhang, Xu Chen, Qingyao Ai, Liu Yang, and W Bruce Croft. 2018. Towards conversational search and recommendation: System ask, user respond. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pages 177-186, New York, NY, USA. Association for Computing Machinery.
itdoesntlookgoodforadatetransformingcritiquesintopreferencesforconversationalrecommendationsystems/images.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c557e56cfac3c4d734e60afb967af3619d51d30c0e2ad673c00c94b626d7e6a9
+size 115141
itdoesntlookgoodforadatetransformingcritiquesintopreferencesforconversationalrecommendationsystems/layout.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5cadb0494cd874285fd14d030b945722894d23092d2e609c97c829de1c75e8b6
+size 232309
kfoldenkfoldensembleforoutofdistributiondetection/571c3501-ee7d-4559-b7d2-989a30005529_content_list.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6647d96a6b94c58ec65531f55e281a056419f9126d681f070e192a5777e6b004
+size 110411
kfoldenkfoldensembleforoutofdistributiondetection/571c3501-ee7d-4559-b7d2-989a30005529_model.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4085b6a9e3250b729908cd141f59f39d90e77e3c378a575722d9218a05cb04dd
+size 127948
kfoldenkfoldensembleforoutofdistributiondetection/571c3501-ee7d-4559-b7d2-989a30005529_origin.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e30c34ed32129ef2721c73477465382bad8b56261ddfb2a38941dc6543e69420
+size 359325
kfoldenkfoldensembleforoutofdistributiondetection/full.md ADDED
@@ -0,0 +1,403 @@
# $k$Folden: $k$-Fold Ensemble for Out-Of-Distribution Detection
Xiaoya Li, Jiwei Li, Xiaofei Sun, Chun Fan, Tianwei Zhang, Fei Wu, Yuxian Meng, Jun Zhang

Shannon.AI; National Biomedical Imaging Center, Peking University; Computer Center of Peking University; Peng Cheng Laboratory; Nanyang Technological University; Zhejiang University; Tsinghua University

{xiaoya_li,jiwei_li,xiaofei.sun,yuxian_meng}@shannonai.com

fanchun@pku.edu.cn, tianwei.zhang@ntu.edu.sg

wufei@zju.edu.cn, jun-zhan19@mails.tsinghua.edu.cn
# Abstract
Out-of-Distribution (OOD) detection is an important problem in natural language processing (NLP). In this work, we propose a simple yet effective framework, $k$Folden, which mimics the behaviors of OOD detection during training without the use of any external data. For a task with $k$ training labels, $k$Folden induces $k$ sub-models, each of which is trained on a subset of $k - 1$ categories with the remaining category masked unknown to the sub-model. By exposing an unknown label to each sub-model during training, the model is encouraged to spread probability equally over the seen $k - 1$ labels for the unknown label, enabling this framework to simultaneously resolve in- and out-of-distribution examples in a natural way via OOD simulation. Taking text classification as an archetype, we develop benchmarks for OOD detection using existing text classification datasets. By conducting comprehensive comparisons and analyses on the developed benchmarks, we demonstrate the superiority of $k$Folden over current methods in terms of improving OOD detection performance while maintaining in-domain classification accuracy.
# 1 Introduction
Recent progress in deep neural networks has drastically improved accuracy in numerous NLP tasks (Sun et al., 2019; Raffel et al., 2019; Chai et al., 2020; He et al., 2020), but detecting out-of-distribution (OOD) examples among in-domain (ID) examples is still a challenge for existing state-of-the-art deep NLP models. The ability to identify OOD examples is critical for building reliable and trustworthy NLP systems for, say, text classification (Hendrycks and Gimpel, 2016; Mukherjee and Awadallah, 2020), question answering (Kamath et al., 2020) and neural machine translation (Kumar and Sarawagi, 2019). Existing works studying OOD detection in NLP often rely on external data (Hendrycks et al., 2018) to diversify model predictions and achieve better generality in OOD detection. The reliance on external data not only brings an additional burden for data collection, but also raises the annoying question of which subset of external data to use: there is a massive amount of external data, and different subsets lead to different final results. Therefore, developing OOD detection systems without external data is important for building reliable NLP systems.
In this work, we propose a novel, simple yet effective framework, $k$Folden, short for $k$-Fold ensemble, to address OOD detection for NLP without the use of any external data. We accomplish this goal by simulating the process of detecting OOD examples during training. Concretely, for a standard NLP task with $k$ labels for both training and test, we first obtain $k$ separate sub-models, each of which is trained on a different set of $k - 1$ labels with the remaining one masked unknown to the model. We train each sub-model by jointly optimizing the cross entropy loss for the visible $k - 1$ labels and the KL divergence loss between the predicted distribution and the uniform distribution for the left-one-out label. At test time, we simply average the probability distributions produced by these $k$ sub-models and treat the result as the final probability estimate for a given input. Intuitively, if the input is an ID example, the final probability distribution will lay much of its weight on one of the $k$ seen labels, but if the input is an OOD example, we expect the final probability distribution to get close to the uniform distribution, since each sub-model has learned to even out its probability distribution when encountering unseen labels during training.
This training paradigm does not rely on any external data. By mimicking the behavior of distinguishing unseen labels from seen ones, i.e., simulating the process of OOD detection during training via the KL divergence loss, the framework naturally detects OOD examples and performs considerably better than other widely used, strong OOD detection methods. Moreover, $k$Folden is complementary to existing post-hoc OOD detection methods, and combining both leads to the largest performance boosts.
To facilitate OOD detection research in NLP, we also construct benchmarks on top of four widely used text classification datasets: 20NewsGroups, Reuters, AG News and Yahoo!Answers. The created benchmark consists of 7 datasets with different levels of difficulty, directed to two types of OOD examples: semantic shift and non-semantic shift, which differ in whether a shift is related to the inclusion of new semantic categories. The proposed benchmarks help comprehensively examine OOD detection methods, and we hope they can serve as a convenient and general tool for developing more robust and effective OOD detection models.
To summarize, the contributions of this work are:
- We propose a simple yet effective framework, $k$Folden, which simulates the process of OOD detection during training without using any external data.
- We construct benchmarks for OOD detection in text classification, hoping to facilitate future related research.
- We conduct comprehensive comparisons and analyses between existing methods and the proposed $k$Folden on the benchmark, and we show that $k$Folden achieves performance boosts on OOD detection while maintaining ID classification accuracy.
# 2 Related Work
# Out-Of-Distribution Detection
Detecting OOD examples using deep neural models has gained substantial traction over recent years. Hendrycks and Gimpel (2016) proposed a baseline for misclassified and OOD examples by thresholding candidates based on the predicted softmax class probability. Lee et al. (2018) trained a classifier concurrently with a generator under the GAN framework (Goodfellow et al., 2014): the generator produces examples at the in-domain boundary, and the classifier is forced to give lower confidence when predicting the classes of those examples. Hendrycks et al. (2018) leveraged real datasets instead of generated examples, enabling the classifier to better generalize and detect anomalies. Liang et al. (2017) observed that temperature scaling and small perturbations lead to widened gaps between ID and OOD examples, for which they proposed ODIN, a technique that makes OOD instances distinguishable by pulling apart the softmax scores of ID and OOD examples. Kamath et al. (2020) proposed to leverage the confidence estimate of a QA model to determine whether a question should be answered under domain shift so as to maintain a moderate accuracy. Hendrycks et al. (2019, 2020) showed that pretraining improves model robustness in terms of uncertainty estimation and OOD detection. Measuring model confidence has also exhibited power in detecting OOD examples (Lee et al., 2017a,b; DeVries and Taylor, 2018; Papadopoulos et al., 2021). This work differs from Hendrycks et al. (2020) mainly in that (1) they used a simple MaxProb-based method (Hendrycks and Gimpel, 2016) to estimate uncertainty while we propose a novel framework, $k$Folden, to improve OOD detection; and (2) they focused on comparing different NLP models on OOD generalization and shed light on the importance of pretraining for OOD robustness, whereas we highlight the merits of OOD simulation during training without the use of any external data, and construct a dedicated benchmark for text classification OOD detection.
# Meta Learning in NLP
Meta learning (Thrun and Pratt, 2012; Andrychowicz et al., 2016; Nichol et al., 2018; Finn et al., 2017) tackles the problem of model learning in a domain with scarce data when large quantities of data are accessible in another related domain. Meta learning has been applied to considerable NLP tasks including semantic parsing (Huang et al., 2018; Guo et al., 2019; Sun et al., 2020), dialog generation (Song et al., 2019; Huang et al., 2020), text classification (Wu et al., 2019; Sun et al., 2020; Bansal et al., 2020; Lin et al., 2021) and machine translation (Gu et al., 2018). Our work is distantly related to meta learning in terms of the way we train $k$Folden, by simulating the behavior of predicting the unseen label during training. But we do not intend to achieve strong few-shot learning performance, which is the main goal of meta learning.
# 3 Task Definition
In this paper, we consider the problem of distinguishing between ID and OOD examples. We take text classification for illustration; other tasks can be analogously resolved using the proposed $k$Folden framework. Let $\mathcal{D}^{\mathrm{train}} = \{\pmb{x},\pmb{y}^{\mathrm{train}}\}$ and $\mathcal{D}^{\mathrm{test}} = \{\pmb{x},\pmb{y}^{\mathrm{test}}\}$ denote the two sets respectively used for model training and test, where we assume the label space for training consists of $k$ distinct labels $\mathcal{Y}^{\mathrm{train}} = \{1,\dots ,k\}$ and the set of possible labels at test time is $\mathcal{Y}^{\mathrm{train}}$ plus $t$ additional labels, i.e., $\mathcal{Y}^{\mathrm{test}} = \{1,\dots ,k,k + 1,\dots ,k + t\}$. Assume that a neural network $f$ is trained on $\mathcal{D}^{\mathrm{train}}$ and tested on $\mathcal{D}^{\mathrm{test}}$.
We are interested in two situations when testing $f$ on $\mathcal{D}^{\mathrm{test}}$ : (1) the current input example $x$ has a gold label belonging to $\mathcal{Y}^{\mathrm{train}}$ (i.e., $y^{\mathrm{test}} \in \mathcal{Y}^{\mathrm{train}}$ ), and (2) the input example's gold label does not belong to $\mathcal{Y}^{\mathrm{train}}$ (i.e., $y^{\mathrm{test}} \in \mathcal{Y}^{\mathrm{test}} \backslash \mathcal{Y}^{\mathrm{train}}$ ). For the former, we would like the model to achieve high accuracy because it has been trained on these ID examples; for the latter, we expect the model to figure out the current input is an OOD example. Hence, in this work, we mainly report the results from two aspects: accuracy on ID examples, and performances on OOD examples. The performance for OOD examples is evaluated via several targeted metrics, which will be introduced in experiments.
# 4 Method: $k$-Fold Ensemble
# 4.1 Training $k$ Sub-Models as Simulation for OOD Detection
The core idea behind the proposed $k$Folden framework is to simulate the situation of encountering unseen labels at the training stage without the use of external data. To this end, we propose to train $k$ independent sub-models $\{f_1,\dots ,f_k\}$, each of which is trained on a different subset of $k - 1$ labels with the remaining label masked unknown to the model. Each sub-model is required to attain high accuracy on examples with the seen $k - 1$ labels along with high uncertainty on examples with the masked label, and this is exactly what we would expect for OOD detection: we would like the model to accurately detect OOD examples while not harming performance on ID examples.
More specifically, assume we are training the $i$-th sub-model $f_{i}$ $(1 \leq i \leq k)$, so the visible label set for training $f_{i}$ is $\mathcal{Y}^{\mathrm{train}} \backslash \{i\}$. All training examples in $\mathcal{D}^{\mathrm{train}}$ with label $i$ now become unknown to $f_{i}$. For the visible $k - 1$ labels, $f_{i}$ should still achieve high accuracy as we want; but for the masked label $i$, $f_{i}$ needs to give nondeterministic estimates when the input instance $\pmb{x}$ has the ground-truth label $i$, because label $i$ is masked and not found in the training set. This implies that the model cannot determine which label $\pmb{x}$ belongs to and may treat it as an OOD example. These two considerations can be satisfied by jointly optimizing the following objective:
$$
\mathcal{L} = \mathcal{L}_{\mathrm{CE}} + \gamma \mathcal{L}_{\mathrm{KL}} \tag{1}
$$

where

$$
\mathcal{L}_{\mathrm{CE}} = \sum_{\substack{(\boldsymbol{x}, y^{\mathrm{train}}) \in \mathcal{D}^{\mathrm{train}} \\ y^{\mathrm{train}} \in \mathcal{Y}^{\mathrm{train}} \setminus \{i\}}} \mathrm{CrossEntropy}\left(y^{\mathrm{train}}, f_i(\boldsymbol{x})\right) \tag{2}
$$

$$
\mathcal{L}_{\mathrm{KL}} = \sum_{\substack{(\boldsymbol{x}, y^{\mathrm{train}}) \in \mathcal{D}^{\mathrm{train}} \\ y^{\mathrm{train}} = i}} \mathrm{KL}\left(f_i(\boldsymbol{x}), \boldsymbol{u}\right) \tag{3}
$$
$\gamma$ is a hyper-parameter ranging over $[0,1]$ and tuned on the validation set. In the above equations, $\pmb{u}$ is the uniform distribution. Eq. (2) is a standard cross entropy loss that requires the model to make accurate predictions on the visible labels, while Eq. (3) draws on the KL divergence to encourage the model to produce a probability distribution over the $k - 1$ visible labels that is close to the uniform distribution $\pmb{u}$ for the masked label. By jointly training with both loss functions, $f_{i}$ will be able to detect the OOD label $i$ while preserving performance on the other $k - 1$ labels. We proceed with this process for all $k$ sub-models, each with a different masked label. $f_{i}(\pmb{x})$ takes as input $\pmb{x}$ and outputs a probability distribution of dimensionality $k - 1$. $f_{i}$ can be implemented using any model backbone such as LSTM (Hochreiter and Schmidhuber, 1997), CNN (Kim, 2014), Transformer (Vaswani et al., 2017) and BERT (Devlin et al., 2018).
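A minimal PyTorch sketch of this joint objective for one sub-model $f_i$, assuming 0-indexed labels and a generic encoder with a $(k-1)$-way classification head:

```python
import torch
import torch.nn.functional as F

def kfolden_loss(logits, labels, masked_label: int, gamma: float):
    """Joint loss for sub-model f_i over a batch that is assumed to
    contain both visible-label and masked-label examples.

    logits: (batch, k-1) scores over the k-1 visible labels.
    labels: (batch,) original label ids in {0, ..., k-1}.
    masked_label: the label i left out for this sub-model.
    """
    is_masked = labels == masked_label
    # Map visible label ids into the (k-1)-dim output space
    # (labels above i shift down by one position).
    visible_targets = labels[~is_masked] - (labels[~is_masked] > masked_label).long()

    # Eq. (2): cross entropy on examples with visible labels.
    ce = F.cross_entropy(logits[~is_masked], visible_targets)

    # Eq. (3): KL divergence to the uniform distribution on masked-label examples.
    log_probs = F.log_softmax(logits[is_masked], dim=-1)
    uniform = torch.full_like(log_probs, 1.0 / log_probs.size(-1))
    kl = F.kl_div(log_probs, uniform, reduction="batchmean")

    return ce + gamma * kl  # Eq. (1)
```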
# 4.2 Sub-Model Ensemble
A single sub-model $f_{i}$ will inevitably perform poorly at test time on the ID examples with label $i$. This is because the label $i$ masked during training never has the chance to be predicted by $f_{i}$, so all test examples with label $i$ in $\mathcal{D}^{\mathrm{test}}$ will be assigned low probability, leading to overall reduced accuracy.
To tackle this issue, we adopt the idea of model ensemble: given an input $\pmb{x}$, we first obtain $k$ probability distributions $\{f_1(\pmb{x}),\dots ,f_k(\pmb{x})\}$ respectively produced by the $k$ sub-models. In order to align the label dimensions across sub-models, we pad a zero dimension into each probability distribution at the corresponding masked position. For example, if $k = 4$ and the output from $f_{2}$ is $f_{2}(\pmb{x}) = [f_{2}(\pmb{x})_{1},f_{2}(\pmb{x})_{2},f_{2}(\pmb{x})_{3}]$, then the padded output distribution is $\tilde{f}_2(\pmb{x}) = [f_2(\pmb{x})_1, 0, f_2(\pmb{x})_2, f_2(\pmb{x})_3]$. Next, we average all $k$ padded probability distributions and take the result as the final probability estimate:
$$
\tilde{f}(\boldsymbol{x}) = \frac{1}{k} \sum_{i=1}^{k} \tilde{f}_i(\boldsymbol{x}) \tag{4}
$$
$\tilde{f}(\boldsymbol{x})$ is still a valid probability distribution and naturally remedies the shortcoming of a single sub-model: if $\boldsymbol{x}$ is an ID example, i.e., its ground-truth label $y$ belongs to $\mathcal{Y}^{\mathrm{train}}$ , $\tilde{f}(\boldsymbol{x})$ will put most of the probability mass on one of the $k$ labels; if $\boldsymbol{x}$ is an OOD example, $\tilde{f}(\boldsymbol{x})$ will get close to the uniform distribution because all sub-models comprising $\tilde{f}(\boldsymbol{x})$ will even their probability masses across all the $k$ labels. After training, $\tilde{f}(\boldsymbol{x})$ can be used for ID evaluation and OOD evaluation simultaneously.
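Continuing the sketch above, the padding-and-averaging step could look as follows (under the same 0-indexed label assumption):

```python
import torch

def kfolden_ensemble(sub_model_probs: list[torch.Tensor]) -> torch.Tensor:
    """Average the k sub-model outputs into one k-way distribution.

    sub_model_probs[i]: (batch, k-1) probabilities from f_i, whose
    masked label is i; a zero column is re-inserted at position i.
    """
    padded = []
    for i, probs in enumerate(sub_model_probs):
        zero = probs.new_zeros(probs.size(0), 1)
        # Re-insert the masked label i as a zero-probability column.
        padded.append(torch.cat([probs[:, :i], zero, probs[:, i:]], dim=1))
    return torch.stack(padded).mean(dim=0)  # Eq. (4)
```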
# 5 Benchmark Construction
Out-of-distribution data can be conceptually divided into two categories: non-semantic shift (NSS) and semantic shift (SS) (Hsu et al., 2020). They differ in whether a shift is related to the inclusion of new semantic categories: the training and OOD test examples in an NSS dataset come from different sub-categories of the same broader category. For example, the training and OOD test sets in an NSS dataset are both from the "car" category, but examples in the training set are about "real car", e.g., "that's when they took out the fuel tank and poured it into a jug", and all OOD test examples are about "toy car", e.g., "Raleigh 2-year-old fills up toy car with 'gas' amidst shortage". For SS, the training and OOD test examples come from completely different categories. For example, the training set contains the labels "car" and "bicycle", and the test set has the labels "train" and "plane", which have no intersection with the training labels. In this paper, we construct both SS and NSS text classification benchmarks for OOD detection.
We construct benchmarks on multi-class topic classification datasets, where there is relatively little vocabulary overlap between ID and OOD data. We use data from 20NewsGroups (Joachims, 1996), Reuters-21578 $^2$, AG News (Del Corso et al., 2005) and Yahoo!Answers (Zhang et al., 2015). More details about the original datasets can be found in Appendix A. The statistics of the benchmark are presented in Table 1.
We construct NSS benchmarks as follows:
20Newsgroups-6S This dataset is a modified version of 20Newsgroups. The original 20Newsgroups dataset has 20 newsgroups, and each newsgroup (e.g., "comp.sys.ibm.pc.hardware") has a root subject topic (e.g., "comp"). We divide articles by their root subjects and obtain 6 newsgroups ("comp", "rec", "sci", "religion", "politics" and "misc"). In this way, train and test data share the same root topic labels but have different fine-grained topic labels. The training and ID test data are from 11 sub-classes in 20News, while the OOD test data are from the remaining 9 sub-classes.
AGNews-EXT This dataset is adapted from AG News and additional articles come from the AG Corpus. The original AG News dataset has 4 classes ("World", "Sports", "Business", "Sci/Tech"). The training and ID test data in AGNews-EXT come from the 4 class labels in AG News, and the OOD test data are from the AG Corpus but have the same class labels as in AG News.
Yahoo-AGNews-five This dataset contains a subset of Yahoo!Answers and a subset of AG Corpus. The original Yahoo!Answers dataset has 10 classes, and we use 5 of them ("Health", "Science & Mathematics", "Sports", "Entertainment & Music", "Business & Finance") for the training and ID test data. The OOD test data are selected from the 5 classes ("Health", "Sci/Tech", "Sports", "Entertainment", "Business") in AG Corpus.
We construct SS benchmarks as follows:
<table><tr><td rowspan="2"></td><td colspan="3">Non-Semantic Shift (NSS) Datasets</td><td colspan="4">Semantic Shift (SS) Datasets</td></tr><tr><td>20News-6S</td><td>AG-EXT</td><td>Yahoo-AG-five</td><td>Reuters-mK-nL</td><td>AG-FL</td><td>AG-FM</td><td>Yahoo-FM</td></tr><tr><td>Adapted From</td><td>20News</td><td>AGNews&AGCorpus</td><td>Yahoo&AGCorpus</td><td>Reuters</td><td>AGNews&AGCorpus</td><td>AGNews&AGCorpus</td><td>Yahoo</td></tr><tr><td># Labels in T</td><td>6</td><td>4</td><td>5</td><td>m</td><td>4</td><td>4</td><td>5</td></tr><tr><td># Instances in T</td><td>8,283</td><td>112,400</td><td>675,000</td><td>f(m, train)</td><td>116,000</td><td>116,000</td><td>680,000</td></tr><tr><td># Labels in ID-V</td><td>6</td><td>4</td><td>5</td><td>m</td><td>4</td><td>4</td><td>5</td></tr><tr><td># Instances in ID-V</td><td>1,034</td><td>7,600</td><td>25,000</td><td>f(m, valid)</td><td>4,000</td><td>4,000</td><td>20,000</td></tr><tr><td># Labels in OOD-V</td><td>6</td><td>4</td><td>5</td><td>n</td><td>4</td><td>4</td><td>5</td></tr><tr><td># Instances in OOD-V</td><td>846</td><td>7,600</td><td>25,000</td><td>f(n, valid)</td><td>4,000</td><td>4,000</td><td>20,000</td></tr><tr><td># Labels in ID-T</td><td>6</td><td>4</td><td>5</td><td>m</td><td>4</td><td>4</td><td>5</td></tr><tr><td># Instances in ID-T</td><td>1,034</td><td>7,600</td><td>25,000</td><td>f(m, test)</td><td>4,000</td><td>4,000</td><td>25,000</td></tr><tr><td># Labels in OOD-T</td><td>6</td><td>4</td><td>5</td><td>n</td><td>4</td><td>4</td><td>5</td></tr><tr><td># Instances in OOD-T</td><td>846</td><td>7,600</td><td>25,000</td><td>f(n, test)</td><td>4,000</td><td>4,000</td><td>25,000</td></tr></table>
Table 1: Statistics for the constructed benchmark. "T" stands for "Training Set", "ID-V"/"OOD-V" for the ID and OOD validation sets, and "ID-T"/"OOD-T" for the ID and OOD test sets. The data in each set are evenly distributed over the labels, except for 20News-6S. $f(m, \text{train/valid/test})$ means that the actual number depends on $m$ and on the corresponding train/valid/test split of the original Reuters-ModApte dataset.
Reuters-$m$K-$n$L This dataset is a modified version of Reuters. We first follow previous works (Yang and Liu, 1999; Joachims, 1998) in using the ModApte split<sup>3</sup> to remove documents belonging to multiple classes, and then consider only the 10 classes ("Acquisitions", "Corn", "Crude", "Earn", "Grain", "Interest", "Money-fx", "Ship", "Trade" and "Wheat") with the highest numbers of training examples. The resulting dataset is called Reuters-ModApte. We train the model on a subset of Reuters-ModApte and test on the remaining subset. Specifically, we train with $m$ topics and test the model on the other $n = 10 - m$ topics. In this paper, we use five settings: $(m,n) = (9,1)/(6,4)/(5,5)/(3,7)/(2,8)$.
AGNews-FL The dataset is adapted from AGNews and additional articles come from AG Corpus. In this setting, the training and ID test data are from the 4 classes ("World", "Sports", "Business", "Sci/Tech") in AGNews, and the OOD test data are from another 4 classes ("U.S.", "Europe", "Italia", "Software and Development") in AG Corpus.
AGNews-FM This dataset is adapted from AGNews, and additional articles are taken from the AG Corpus. In this setting, the training and ID test data are from the 4 classes ("World", "Sports", "Business", "Sci/Tech") in AGNews, and the OOD test data are from another 4 classes ("Entertainment", "Health", "Top Stories", "Music Feeds") in the AG Corpus. This dataset is easier than AGNews-FL because the OOD labels are more distinct from the ID labels in terms of label semantics.
Yahoo!Answers-FM This dataset is modified from the Yahoo!Answers dataset. We use five topics ("Health", "Science & Mathematics", "Sports", "Entertainment & Music", "Business & Finance") for the training and ID test data, and the other five unseen topics ("Society & Culture", "Education & Reference", "Computers & Internet", "Family & Relationships", "Politics & Government") for the OOD test data.
# 6 Experiments
# 6.1 Experimental Setups
We use both contextual and non-contextual model skeletons for experiments. We use CNN and BiLSTM as the non-contextual model backbones. We follow the CNN-non-static model (Kim, 2014) for the CNN implementation, and the BiLSTM model has a single layer. Both CNN and BiLSTM use 300-dimensional word vectors pretrained on Wikipedia 2014 using GloVe (Pennington et al., 2014). The average of the hidden states of all words is used as the feature for classification. We trained the non-contextual models with a batch size of 32 and an initial learning rate of 0.001 using Adam (Kingma and Ba, 2014). For contextual models, we use the officially pretrained BERT-uncased-base (Devlin et al., 2018) and RoBERTa-uncased-base (Liu et al., 2019) for comparison. We use AdamW to optimize all contextual models, with 0.01 weight decay and 1000 warmup steps. The learning rate is chosen from $\{1e{-}5, 2e{-}5, 3e{-}5\}$ and the batch size from $\{16, 24, 32\}$ for all experiments. We use dropout 0.2 for the BERT and RoBERTa experiments.
<table><tr><td rowspan="2">Model</td><td>ID Metrics</td><td colspan="3">OOD Metrics</td></tr><tr><td>ACC↑</td><td>AUROC↑</td><td>AUPR↑</td><td>TNR@95TPR↑</td></tr><tr><td colspan="5">20Newsgroups-6S</td></tr><tr><td colspan="5">Vanilla</td></tr><tr><td>CNN-init emb</td><td>77.76</td><td>50.22</td><td>58.91</td><td>29.27</td></tr><tr><td>BiLSTM-init emb</td><td>78.01</td><td>50.00</td><td>59.93</td><td>29.53</td></tr><tr><td>BERT</td><td>82.15</td><td>54.76</td><td>62.61</td><td>50.89</td></tr><tr><td>RoBERTa</td><td>83.40</td><td>57.41</td><td>66.79</td><td>59.15</td></tr><tr><td>RoBERTa+Mahalanobis</td><td>83.40</td><td>58.22</td><td>68.63</td><td>61.98</td></tr><tr><td>RoBERTa-Dropout</td><td>85.06</td><td>57.72</td><td>67.30</td><td>60.56</td></tr><tr><td>RoBERTa-Dropout+Mahalanobis</td><td>85.06</td><td>58.29</td><td>68.99</td><td>62.40</td></tr><tr><td>RoBERTa(6)</td><td>84.37</td><td>58.00</td><td>67.94</td><td>60.81</td></tr><tr><td>RoBERTa(6)+Mahalanobis</td><td>84.69</td><td>58.42</td><td>69.05</td><td>62.07</td></tr><tr><td colspan="5">kFolden</td></tr><tr><td>CNN-init emb</td><td>78.29</td><td>50.33</td><td>62.10</td><td>34.57</td></tr><tr><td>BiLSTM-init emb</td><td>78.30</td><td>50.48</td><td>60.86</td><td>34.94</td></tr><tr><td>BERT</td><td>84.12</td><td>56.77</td><td>64.85</td><td>53.46</td></tr><tr><td>RoBERTa</td><td>85.75</td><td>58.35</td><td>67.54</td><td>60.45</td></tr><tr><td>RoBERTa+Scaling</td><td>85.75</td><td>59.83</td><td>68.88</td><td>62.17</td></tr><tr><td>RoBERTa+Mahalanobis</td><td>85.75</td><td>60.04</td><td>69.91</td><td>63.44</td></tr><tr><td colspan="5">AGNews-EXT</td></tr><tr><td colspan="5">Vanilla</td></tr><tr><td>CNN-init emb</td><td>86.13</td><td>48.29</td><td>61.54</td><td>35.62</td></tr><tr><td>BiLSTM-init emb</td><td>87.38</td><td>48.56</td><td>62.15</td><td>35.88</td></tr><tr><td>BERT</td><td>92.24</td><td>51.35</td><td>63.68</td><td>49.63</td></tr><tr><td>RoBERTa</td><td>94.54</td><td>52.75</td><td>64.01</td><td>51.45</td></tr><tr><td>RoBERTa+Mahalanobis</td><td>94.54</td><td>55.37</td><td>65.94</td><td>54.60</td></tr><tr><td>RoBERTa-Dropout</td><td>95.13</td><td>52.74</td><td>64.32</td><td>52.47</td></tr><tr><td>RoBERTa-Dropout+Mahalanobis</td><td>95.13</td><td>55.67</td><td>66.32</td><td>55.10</td></tr><tr><td>RoBERTa(4)</td><td>95.22</td><td>53.91</td><td>65.68</td><td>53.08</td></tr><tr><td>RoBERTa(4)+Mahalanobis</td><td>95.22</td><td>55.74</td><td>66.58</td><td>55.21</td></tr><tr><td colspan="5">kFolden</td></tr><tr><td>CNN-init emb</td><td>88.30</td><td>49.31</td><td>62.18</td><td>37.20</td></tr><tr><td>BiLSTM-init emb</td><td>88.92</td><td>49.45</td><td>63.08</td><td>37.54</td></tr><tr><td>BERT</td><td>93.43</td><td>51.25</td><td>64.19</td><td>53.16</td></tr><tr><td>RoBERTa</td><td>95.62</td><td>53.87</td><td>65.76</td><td>54.98</td></tr><tr><td>RoBERTa+Scaling</td><td>95.62</td><td>55.19</td><td>66.28</td><td>55.09</td></tr><tr><td>RoBERTa+Mahalanobis</td><td>95.62</td><td>56.07</td><td>67.81</td><td>55.48</td></tr><tr><td colspan="5">Yahoo-AGNews-five</td></tr><tr><td colspan="5">Vanilla</td></tr><tr><td>CNN-init emb</td><td>77.45</td><td>79.26</td><td>58.50</td><td>43.94</td></tr><tr><td>BiLSTM-init emb</td><td>77.68</td><td>79.98</td><td>58.76</td><td>44.07</td></tr><tr><td>BERT</td><td>81.93</td><td>82.35</td><td>62.17</td><td>50.82</td></tr><tr><td>RoBERTa</td><td>82.54</td><td>84.98</td><td>63.46</td><td>50.94</td></tr><tr><td>RoBERTa+Mahalanobis</td><td>82.54</td><td>85.88</td><td>63.92</td><td>51.96</td></tr><tr><td>RoBERTa-Dropout</td><td>84.04</td><td>84.29</td><td>63.36</td><td>51.03</td></tr><tr><td>RoBERTa-Dropout+Mahalanobis</td><td>84.04</td><td>85.95</td><td>64.17</td><td>51.39</td></tr><tr><td>RoBERTa(5)</td><td>84.10</td><td>85.01</td><td>64.14</td><td>51.22</td></tr><tr><td>RoBERTa(5)+Mahalanobis</td><td>84.10</td><td>86.23</td><td>64.37</td><td>53.11</td></tr><tr><td colspan="5">kFolden</td></tr><tr><td>CNN-init emb</td><td>79.23</td><td>81.12</td><td>61.09</td><td>45.82</td></tr><tr><td>BiLSTM-init emb</td><td>78.04</td><td>82.33</td><td>62.88</td><td>45.90</td></tr><tr><td>BERT</td><td>83.23</td><td>84.09</td><td>63.11</td><td>52.95</td></tr><tr><td>RoBERTa</td><td>84.45</td><td>85.61</td><td>64.15</td><td>52.22</td></tr><tr><td>RoBERTa+Scaling</td><td>84.45</td><td>86.69</td><td>64.87</td><td>54.39</td></tr><tr><td>RoBERTa+Mahalanobis</td><td>84.45</td><td>86.92</td><td>64.92</td><td>56.24</td></tr></table>

Table 2: Results on Non-Semantic Shift (NSS) datasets. The number in brackets $(k)$ denotes averaging $k$ model predictions, where $k$ equals the number of labels in the training dataset.
# 6.2 Baselines
We choose the following OOD detection methods for comparison:
MSP: The Maximum Softmax Probability method proposed by Hendrycks and Gimpel (2016). It uses the maximum probability in the final probability distribution over labels as the prediction score. If the maximum probability is below some specified threshold $\varphi \in [0,1]$, the example is classified as OOD. We tune the threshold on the dev set. This is the default setting for all model backbones.
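A sketch of MSP scoring (a direct reading of Hendrycks and Gimpel (2016); the threshold $\varphi$ is tuned on the dev set as described above):

```python
import torch
import torch.nn.functional as F

def msp_is_ood(logits: torch.Tensor, threshold: float) -> torch.Tensor:
    """Flag examples whose maximum softmax probability falls below threshold."""
    max_prob = F.softmax(logits, dim=-1).max(dim=-1).values
    return max_prob < threshold  # True -> predicted OOD
```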
<table><tr><td rowspan="2">Model</td><td>ID Metrics</td><td colspan="3">OOD Metrics</td></tr><tr><td>ACC↑</td><td>AUROC↑</td><td>AUPR↑</td><td>TNR@95TPR↑</td></tr><tr><td colspan="5">Reuters-7K-3L</td></tr><tr><td colspan="5">Vanilla</td></tr><tr><td>CNN-init emb</td><td>62.04</td><td>64.76</td><td>53.49</td><td>49.23</td></tr><tr><td>BiLSTM-init emb</td><td>60.89</td><td>66.41</td><td>55.55</td><td>48.58</td></tr><tr><td>BERT</td><td>63.25</td><td>66.83</td><td>60.28</td><td>50.66</td></tr><tr><td>RoBERTa</td><td>65.88</td><td>67.37</td><td>63.30</td><td>51.95</td></tr><tr><td>RoBERTa+Mahalanobis</td><td>65.88</td><td>68.34</td><td>64.33</td><td>52.73</td></tr><tr><td>RoBERTa-Dropout</td><td>66.16</td><td>69.04</td><td>64.18</td><td>52.09</td></tr><tr><td>RoBERTa-Dropout+Mahalanobis</td><td>66.16</td><td>69.86</td><td>64.25</td><td>52.90</td></tr><tr><td>RoBERTa(7)</td><td>66.31</td><td>69.31</td><td>64.57</td><td>52.81</td></tr><tr><td>RoBERTa(7)+Mahalanobis</td><td>66.31</td><td>69.89</td><td>64.82</td><td>53.46</td></tr><tr><td colspan="5">kFolden</td></tr><tr><td>CNN-init emb</td><td>62.94</td><td>65.08</td><td>54.28</td><td>50.27</td></tr><tr><td>BiLSTM-init emb</td><td>61.05</td><td>67.81</td><td>56.98</td><td>49.96</td></tr><tr><td>BERT</td><td>65.45</td><td>68.14</td><td>61.11</td><td>51.79</td></tr><tr><td>RoBERTa</td><td>66.72</td><td>69.70</td><td>64.74</td><td>53.62</td></tr><tr><td>RoBERTa+Scaling</td><td>66.72</td><td>70.03</td><td>65.39</td><td>53.98</td></tr><tr><td>RoBERTa+Mahalanobis</td><td>66.72</td><td>70.52</td><td>65.81</td><td>54.91</td></tr><tr><td colspan="5">AGNews-FL</td></tr><tr><td colspan="5">Vanilla</td></tr><tr><td>CNN-init emb</td><td>80.55</td><td>62.94</td><td>52.70</td><td>30.54</td></tr><tr><td>BiLSTM-init emb</td><td>81.36</td><td>63.71</td><td>54.77</td><td>31.90</td></tr><tr><td>BERT</td><td>85.58</td><td>64.55</td><td>54.49</td><td>42.84</td></tr><tr><td>RoBERTa</td><td>87.19</td><td>65.52</td><td>55.48</td><td>45.89</td></tr><tr><td>RoBERTa+Mahalanobis</td><td>87.19</td><td>66.20</td><td>56.45</td><td>46.95</td></tr><tr><td>RoBERTa-Dropout</td><td>87.27</td><td>65.61</td><td>56.38</td><td>46.06</td></tr><tr><td>RoBERTa-Dropout+Mahalanobis</td><td>87.27</td><td>66.53</td><td>57.11</td><td>46.89</td></tr><tr><td>RoBERTa(4)</td><td>87.55</td><td>65.81</td><td>56.89</td><td>46.19</td></tr><tr><td>RoBERTa(4)+Mahalanobis</td><td>87.55</td><td>66.48</td><td>57.49</td><td>46.92</td></tr><tr><td colspan="5">kFolden</td></tr><tr><td>CNN-init emb</td><td>82.21</td><td>63.45</td><td>53.98</td><td>34.71</td></tr><tr><td>BiLSTM-init emb</td><td>84.33</td><td>64.44</td><td>55.01</td><td>35.68</td></tr><tr><td>BERT</td><td>87.20</td><td>65.19</td><td>55.39</td><td>45.39</td></tr><tr><td>RoBERTa</td><td>88.03</td><td>66.29</td><td>57.39</td><td>46.27</td></tr><tr><td>RoBERTa+Scaling</td><td>88.03</td><td>66.84</td><td>58.07</td><td>46.75</td></tr><tr><td>RoBERTa+Mahalanobis</td><td>88.03</td><td>66.89</td><td>58.26</td><td>47.16</td></tr><tr><td colspan="5">AGNews-FM</td></tr><tr><td colspan="5">Vanilla</td></tr><tr><td>CNN-init emb</td><td>79.81</td><td>79.63</td><td>53.50</td><td>54.72</td></tr><tr><td>BiLSTM-init emb</td><td>82.51</td><td>79.46</td><td>52.86</td><td>55.33</td></tr><tr><td>BERT</td><td>83.40</td><td>80.63</td><td>56.79</td><td>59.84</td></tr><tr><td>RoBERTa</td><td>85.62</td><td>82.53</td><td>58.84</td><td>60.36</td></tr><tr><td>RoBERTa+Mahalanobis</td><td>85.62</td><td>83.04</td><td>59.96</td><td>62.26</td></tr><tr><td>RoBERTa-Dropout</td><td>87.59</td><td>82.64</td><td>59.76</td><td>60.86</td></tr><tr><td>RoBERTa-Dropout+Mahalanobis</td><td>87.59</td><td>83.14</td><td>59.23</td><td>61.88</td></tr><tr><td>RoBERTa(4)</td><td>88.16</td><td>82.85</td><td>60.44</td><td>61.95</td></tr><tr><td>RoBERTa(4)+Mahalanobis</td><td>88.16</td><td>83.27</td><td>60.82</td><td>62.34</td></tr><tr><td colspan="5">kFolden</td></tr><tr><td>CNN-init emb</td><td>80.77</td><td>79.83</td><td>55.63</td><td>55.69</td></tr><tr><td>BiLSTM-init emb</td><td>83.43</td><td>80.23</td><td>57.40</td><td>55.57</td></tr><tr><td>BERT</td><td>84.55</td><td>81.35</td><td>58.19</td><td>62.89</td></tr><tr><td>RoBERTa</td><td>88.92</td><td>83.61</td><td>60.88</td><td>63.42</td></tr><tr><td>RoBERTa+Scaling</td><td>88.92</td><td>84.04</td><td>61.27</td><td>63.73</td></tr><tr><td>RoBERTa+Mahalanobis</td><td>88.92</td><td>84.31</td><td>61.48</td><td>64.29</td></tr><tr><td colspan="5">Yahoo!Answers-FM</td></tr><tr><td colspan="5">Vanilla</td></tr><tr><td>CNN-init emb</td><td>89.44</td><td>80.36</td><td>69.49</td><td>55.01</td></tr><tr><td>BiLSTM-init emb</td><td>90.57</td><td>79.42</td><td>68.43</td><td>55.49</td></tr><tr><td>BERT</td><td>93.25</td><td>82.71</td><td>74.55</td><td>57.82</td></tr><tr><td>RoBERTa</td><td>94.73</td><td>83.81</td><td>76.47</td><td>58.62</td></tr><tr><td>RoBERTa+Mahalanobis</td><td>94.73</td><td>84.51</td><td>77.38</td><td>59.86</td></tr><tr><td>RoBERTa-Dropout</td><td>95.13</td><td>84.46</td><td>77.09</td><td>59.05</td></tr><tr><td>RoBERTa-Dropout+Mahalanobis</td><td>95.13</td><td>84.90</td><td>77.50</td><td>59.99</td></tr><tr><td>RoBERTa(5)</td><td>95.16</td><td>84.78</td><td>77.42</td><td>59.18</td></tr><tr><td>RoBERTa(5)+Mahalanobis</td><td>95.16</td><td>85.06</td><td>77.92</td><td>60.28</td></tr><tr><td colspan="5">kFolden</td></tr><tr><td>CNN-init emb</td><td>90.38</td><td>81.92</td><td>70.82</td><td>57.49</td></tr><tr><td>BiLSTM-init emb</td><td>91.42</td><td>82.84</td><td>72.81</td><td>58.06</td></tr><tr><td>BERT</td><td>94.74</td><td>84.15</td><td>76.92</td><td>58.34</td></tr><tr><td>RoBERTa</td><td>95.56</td><td>85.50</td><td>78.52</td><td>59.10</td></tr><tr><td>RoBERTa+Scaling</td><td>95.56</td><td>85.66</td><td>78.82</td><td>59.95</td></tr><tr><td>RoBERTa+Mahalanobis</td><td>95.56</td><td>85.83</td><td>78.88</td><td>61.70</td></tr></table>

Table 3: Results on Semantic Shift (SS) datasets. The number in brackets $(k)$ denotes averaging $k$ model predictions, where $k$ equals the number of labels in the training dataset.

Table 3: Results on the Semantic Shift (SS) datasets. The number in brackets $(k)$ denotes averaging the predictions of $k$ models, where $k$ equals the number of labels in the training dataset.

Scaling: The temperature scaling method (Guo et al., 2017) leverages a temperature $T > 0$ to sharpen or flatten the probability distribution, and then treats the maximum probability as the final score. The temperature $T$ is chosen from $\{1, 10, 100, 1000, 5000\}$ and is selected on the OOD validation set.
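As an illustration, the scaled confidence score can be computed as below (a minimal NumPy sketch; the function name and array layout are our assumptions, not the paper's code):

```python
import numpy as np

def scaling_score(logits: np.ndarray, T: float = 1000.0) -> np.ndarray:
    """Temperature-scaled maximum softmax probability (Guo et al., 2017).

    logits: [num_examples, num_classes] pre-softmax model outputs.
    Returns one confidence score per example; examples scoring below a
    threshold tuned on the validation set are flagged as OOD.
    """
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)      # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)
```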
<table><tr><td>Hyperparameter</td><td>Values to select</td></tr><tr><td>batch size</td><td>{16, 24, 32, 48}</td></tr><tr><td>dropout</td><td>{0.1, 0.2, 0.3}</td></tr><tr><td>weight decay</td><td>{0, 0.01}</td></tr><tr><td>max epochs</td><td>{3, 5, 8}</td></tr><tr><td>warmup ratio</td><td>{0, 0.05, 0.1}</td></tr><tr><td>learning rate</td><td>{1e-5, 2e-5, 3e-5}</td></tr><tr><td>learning rate decay</td><td>linear</td></tr><tr><td>gradient clip</td><td>1.0</td></tr><tr><td>MSP threshold φ</td><td>{0, 0.001, 0.01, 0.05, 0.1, 0.2}</td></tr><tr><td>Scaling temperature T</td><td>{1, 10, 100, 1000, 5000}</td></tr><tr><td>Scaling threshold φ</td><td>{0, 0.0005, 0.001, 0.0015, 0.002, 0.005, 0.01, 0.05, 0.1, 0.2}</td></tr><tr><td>number of passes in Dropout</td><td>{5, 10, 15, 20, 30}</td></tr></table>
Table 4: The range of hyperparameter values.
Mahalanobis: Lee et al. (2018) defined the confidence score using the Mahalanobis distance of a test example $\pmb{x}$ with respect to the closest class-conditional distribution, which can be expressed as $\mathrm{score}(\pmb{x}) = \min_c (\psi(\pmb{x}) - \pmb{\mu}_c)^{\top} \pmb{\Sigma}^{-1} (\psi(\pmb{x}) - \pmb{\mu}_c)$, where $\psi(\pmb{x})$ is the vector representation of the input $\pmb{x}$, $\pmb{\mu}_c = \frac{1}{N_c}\sum_{\pmb{x}\in\mathcal{D}^c}\psi(\pmb{x})$ is the centroid of class $c$ in the validation set $\mathcal{D}^{\mathrm{valid}}$, and $\pmb{\Sigma} = \frac{1}{N}\sum_{c}\sum_{\pmb{x}\in\mathcal{D}^{c}}(\psi(\pmb{x}) - \pmb{\mu}_{c})(\psi(\pmb{x}) - \pmb{\mu}_{c})^{\top}$ is the covariance matrix. $N_c$ is the number of instances belonging to class $c$ in $\mathcal{D}^{\mathrm{valid}}$.
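The score can be computed from validation-set statistics as in the following sketch (NumPy; the names are ours, and a single shared covariance matrix is assumed, matching the formula above):

```python
import numpy as np

def mahalanobis_scores(feats_valid, labels_valid, feats_test):
    """Mahalanobis confidence score (Lee et al., 2018); a minimal sketch.

    feats_valid: [N, d] representations psi(x) of validation examples.
    labels_valid: [N] integer class labels.
    feats_test: [M, d] representations of test examples.
    Returns the distance to the closest class centroid for each test
    example (a larger distance suggests the example is OOD).
    """
    classes = np.unique(labels_valid)
    mus = np.stack([feats_valid[labels_valid == c].mean(axis=0) for c in classes])
    # shared covariance estimated from class-centered validation features
    centered = feats_valid - mus[np.searchsorted(classes, labels_valid)]
    sigma_inv = np.linalg.pinv(centered.T @ centered / len(feats_valid))
    diffs = feats_test[:, None, :] - mus[None, :, :]            # [M, C, d]
    dists = np.einsum("mcd,de,mce->mc", diffs, sigma_inv, diffs)
    return dists.min(axis=1)
```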
Dropout: Gal and Ghahramani (2016) cast dropout training as Bayesian inference for neural networks and obtained multiple predictions by running the model several times on a fixed input with dropout enabled. These predictions are then averaged to give the final probability distribution. Note that this method can be combined with the three approaches above.
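A minimal PyTorch sketch of this procedure (the function name is ours; note that `model.train()` also affects other stochastic layers such as batch normalization):

```python
import torch

@torch.no_grad()
def mc_dropout_probs(model, inputs, n_passes: int = 10):
    """Monte Carlo dropout (Gal and Ghahramani, 2016); a minimal sketch.

    Keeps dropout active at inference by switching the model to train
    mode, runs `n_passes` stochastic forward passes, and averages the
    softmax outputs. Assumes `model(inputs)` returns logits.
    """
    model.train()  # enables dropout at inference time
    probs = [torch.softmax(model(inputs), dim=-1) for _ in range(n_passes)]
    model.eval()
    return torch.stack(probs).mean(dim=0)
```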
More details on hyperparameter selection are presented in Table 4. Since the proposed strategy uses an ensemble of $k$ models, we also implement an ensemble of $k$ vanilla models for comparison.
# 6.3 Metrics
We use accuracy (ACC) to evaluate model performance on the in-distribution test set, and follow previous work (Hendrycks and Gimpel, 2016; Hsu et al., 2020; Lee et al., 2018) in employing three metrics for the OOD detection task: AUROC, $\mathrm{AUPR}_{out}$, and TNR@95TPR.
AUROC: The AUROC is short for area under the receiver operating characteristic curve. The ROC curve plots the true positive rate against the false positive rate $= \mathrm{FP} / (\mathrm{FP} + \mathrm{TN})$ as a threshold is varied. The score is threshold-independent and can be interpreted as the probability that a positive example receives a greater detector score than a negative example (Fawcett, 2006). A random classifier has an AUROC of $50\%$; a higher AUROC indicates better OOD detection performance.
$\mathbf{AUPR}_{out}$: The AUPR is short for area under the precision-recall curve. The precision-recall curve plots precision $= \mathrm{TP} / (\mathrm{TP} + \mathrm{FP})$ against recall $= \mathrm{TP} / (\mathrm{TP} + \mathrm{FN})$ as a threshold is varied. $\mathrm{AUPR}_{out}$ treats out-of-distribution data as the positive class. Compared to AUROC, it is more suitable for highly imbalanced data.
TNR@95TPR: The TNR@95TPR is short for true negative rate (TNR) at $95\%$ true positive rate (TPR). It measures the true negative rate $(\mathrm{TNR} = \mathrm{TN} / (\mathrm{FP} + \mathrm{TN}))$ when the true positive rate $(\mathrm{TPR} = \mathrm{TP} / (\mathrm{TP} + \mathrm{FN}))$ is $95\%$, where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively. It can be interpreted as the probability that an out-of-distribution example is correctly rejected when $95\%$ of in-distribution examples are correctly accepted.
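The three metrics can be computed with scikit-learn as in the following sketch (a minimal illustration; the helper name and score convention are our assumptions):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, roc_curve

def ood_metrics(scores_id, scores_ood):
    """AUROC, AUPR_out, and TNR@95TPR; a minimal sketch.

    Assumes a larger score means the detector is more confident the
    example is in-distribution.
    """
    labels_ood_pos = np.concatenate([np.zeros(len(scores_id)), np.ones(len(scores_ood))])
    scores = np.concatenate([scores_id, scores_ood])
    # AUROC / AUPR_out: OOD is the positive class, so negate the score
    auroc = roc_auc_score(labels_ood_pos, -scores)
    aupr_out = average_precision_score(labels_ood_pos, -scores)
    # TNR@95TPR: ID is the positive class; find TNR where TPR first reaches 95%
    fpr, tpr, _ = roc_curve(1 - labels_ood_pos, scores)
    tnr_at_95tpr = 1.0 - fpr[np.searchsorted(tpr, 0.95)]
    return auroc, aupr_out, tnr_at_95tpr
```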
# 6.4 Results
Experimental results for the non-semantic shift and semantic shift benchmarks are shown in Table 2 and Table 3, respectively. The first observation is that contextual models (BERT and RoBERTa) achieve significantly better performance on both in-distribution and out-of-distribution datasets than non-contextual models (e.g., CNN, LSTM). The second observation is that existing methods, including Scaling, Mahalanobis, and Dropout, can improve ID and OOD performance. The proposed $k$Folden framework introduces a performance boost over the ensemble of its corresponding vanilla model (e.g., CNN, LSTM, BERT, and RoBERTa) in both ID and OOD evaluations. Additionally, we find that $k$Folden is a flexible and general framework: it can be combined with existing OOD detection methods such as Mahalanobis, scaling, and dropout, and introduces additional performance boosts in OOD detection.
It is interesting that the improvements on SS datasets are greater than those on NSS datasets when augmenting with the $k$Folden framework. Compared to NSS tasks, SS poses more variability in data distributions and requires better generalization from ID to OOD samples. $k$Folden serves this purpose well since it effectively simulates OOD detection during training, naturally addressing ID classification and OOD detection at the same time. This training paradigm yields better results for $k$Folden on SS data.
# 6.5 The Ratio of Unseen Labels
In this subsection, we explore the effect of different ratios of unseen categories. We use RoBERTa as the model backbone and conduct experiments on the Reuters-$m$K-$n$L datasets, including 9K-1L, 6K-4L, 5K-5L, 3K-7L, and 2K-8L. We use AUPR and the error rate as evaluation metrics. The error rate is the proportion of OOD examples that are incorrectly classified as an in-distribution label, i.e., whose maximum class probability is above the threshold tuned on the validation set. Experimental results are shown in Table 5. The overall trend is that the error rate increases as more unseen text categories are added to the out-of-distribution test set. Regarding specific models, we find that $k$Folden always outperforms Dropout, and the combination of $k$Folden and Mahalanobis leads to the best performance. We speculate that this is because, unlike Dropout, which relies on masking patterns within the neural network, the $k$Folden framework operates directly at the output, i.e., the training-objective level, using the training data. This gives the model a direct learning signal for distinguishing OOD examples.
# 7 Conclusion
In this paper, we propose $k$Folden, a simple yet effective framework for OOD detection. It works by mimicking the behavior of detecting out-of-distribution examples during training, without the use of any external data. We also develop a benchmark on top of existing widely used datasets for OOD detection in text classification.
<table><tr><td>Model</td><td>AUPR↑</td><td>Error Rate↓</td></tr><tr><td colspan="3">Reuters-9K-1L</td></tr><tr><td>RoBERTa</td><td>79.77</td><td>36.61</td></tr><tr><td>RoBERTa+Dropout</td><td>80.07</td><td>32.74</td></tr><tr><td>kFolden RoBERTa</td><td>81.53</td><td>30.63</td></tr><tr><td>kFolden RoBERTa+mahal</td><td>81.68</td><td>29.75</td></tr><tr><td colspan="3">Reuters-6K-4L</td></tr><tr><td>RoBERTa</td><td>78.52</td><td>36.26</td></tr><tr><td>RoBERTa+Dropout</td><td>79.73</td><td>36.13</td></tr><tr><td>kFolden RoBERTa</td><td>80.83</td><td>35.76</td></tr><tr><td>kFolden RoBERTa+mahal</td><td>82.74</td><td>35.49</td></tr><tr><td colspan="3">Reuters-5K-5L</td></tr><tr><td>RoBERTa</td><td>89.56</td><td>42.83</td></tr><tr><td>RoBERTa+Dropout</td><td>90.25</td><td>41.36</td></tr><tr><td>kFolden RoBERTa</td><td>91.76</td><td>40.99</td></tr><tr><td>kFolden RoBERTa+mahal</td><td>92.08</td><td>40.76</td></tr><tr><td colspan="3">Reuters-3K-7L</td></tr><tr><td>RoBERTa</td><td>95.64</td><td>46.27</td></tr><tr><td>RoBERTa+Dropout</td><td>96.14</td><td>45.89</td></tr><tr><td>kFolden RoBERTa</td><td>96.75</td><td>44.82</td></tr><tr><td>kFolden RoBERTa+mahal</td><td>96.83</td><td>43.69</td></tr><tr><td colspan="3">Reuters-2K-8L</td></tr><tr><td>RoBERTa</td><td>97.35</td><td>58.14</td></tr><tr><td>RoBERTa+Dropout</td><td>97.56</td><td>57.62</td></tr><tr><td>kFolden RoBERTa</td><td>97.83</td><td>56.80</td></tr><tr><td>kFolden RoBERTa+mahal</td><td>97.91</td><td>56.06</td></tr></table>
Table 5: Results on the Reuters-$m$K-$n$L OOD test sets. The Reuters dataset contains 10 label categories; $m$ denotes the number of labels in the ID training set and $n$ the number of categories in the OOD test set.
This benchmark contains both semantic shift and non-semantic shift data, enabling a comprehensive examination of the abilities of OOD detection methods. Through experiments and analyses, we show that the proposed $k$Folden framework outperforms strong OOD detection baselines on the constructed benchmark, and that combining $k$Folden with other post-hoc methods yields the largest performance gains. We hope the proposed method and the created benchmark can facilitate further research in related areas.
# Acknowledgement
This work was supported by the Key-Area Research and Development Program of Guangdong Province (No. 2019B121204008). We thank the High-Performance Computing Platform at Peking University and the PCNL Cloud Brain for providing platforms for data analysis and model training.
# References
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando De Freitas. 2016. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474.

Trapit Bansal, Rishikesh Jha, Tsendsuren Munkhdalai, and Andrew McCallum. 2020. Self-supervised meta-learning for few-shot natural language classification tasks. arXiv preprint arXiv:2009.08445.

Duo Chai, Wei Wu, Qinghong Han, Fei Wu, and Jiwei Li. 2020. Description based text classification with reinforcement learning. In International Conference on Machine Learning, pages 1371-1382. PMLR.

Gianna M Del Corso, Antonio Gulli, and Francesco Romani. 2005. Ranking a stream of news. In Proceedings of the 14th International Conference on World Wide Web, pages 97-106. ACM.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Terrance DeVries and Graham W Taylor. 2018. Learning confidence for out-of-distribution detection in neural networks. arXiv preprint arXiv:1802.04865.

Tom Fawcett. 2006. An introduction to ROC analysis. Pattern Recognition Letters, 27(8):861-874.

Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pages 1126-1135. PMLR.

Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pages 1050-1059. PMLR.

Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial networks. arXiv preprint arXiv:1406.2661.
Jiatao Gu, Yong Wang, Yun Chen, Kyunghyun Cho, and Victor OK Li. 2018. Meta-learning for low-resource neural machine translation. arXiv preprint arXiv:1808.08437.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017. On calibration of modern neural networks. In International Conference on Machine Learning, pages 1321-1330. PMLR.

Daya Guo, Duyu Tang, Nan Duan, Ming Zhou, and Jian Yin. 2019. Coupling retrieval and meta-learning for context-dependent semantic parsing. arXiv preprint arXiv:1906.07108.

Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. DeBERTa: Decoding-enhanced BERT with disentangled attention. arXiv preprint arXiv:2006.03654.

Dan Hendrycks and Kevin Gimpel. 2016. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136.

Dan Hendrycks, Kimin Lee, and Mantas Mazeika. 2019. Using pre-training can improve model robustness and uncertainty. In International Conference on Machine Learning, pages 2712-2721. PMLR.

Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020. Pretrained transformers improve out-of-distribution robustness. arXiv preprint arXiv:2004.06100.

Dan Hendrycks, Mantas Mazeika, and Thomas Dietterich. 2018. Deep anomaly detection with outlier exposure. arXiv preprint arXiv:1812.04606.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.

Yen-Chang Hsu, Yilin Shen, Hongxia Jin, and Zsolt Kira. 2020. Generalized ODIN: Detecting out-of-distribution image without learning from out-of-distribution data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10951-10960.
Po-Sen Huang, Chenglong Wang, Rishabh Singh, Wen-tau Yih, and Xiaodong He. 2018. Natural language to structured query generation via meta-learning. arXiv preprint arXiv:1803.02400.

Yi Huang, Junlan Feng, Min Hu, Xiaoting Wu, Xiaoyu Du, and Shuo Ma. 2020. Meta-reinforced multi-domain state generator for dialogue systems. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7109-7118.

Thorsten Joachims. 1996. A probabilistic analysis of the Rocchio algorithm with TFIDF for text categorization. Technical report, Carnegie Mellon University, Pittsburgh, PA, Department of Computer Science.

Thorsten Joachims. 1998. Text categorization with support vector machines: Learning with many relevant features. In European Conference on Machine Learning, pages 137-142. Springer.

Amita Kamath, Robin Jia, and Percy Liang. 2020. Selective question answering under domain shift. arXiv preprint arXiv:2006.09462.

Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Linguistics.

Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Aviral Kumar and Sunita Sarawagi. 2019. Calibration of encoder decoder models for neural machine translation. arXiv preprint arXiv:1903.00802.

Kimin Lee, Changho Hwang, KyoungSoo Park, and Jinwoo Shin. 2017a. Confident multiple choice learning. In International Conference on Machine Learning, pages 2014-2023. PMLR.
Kimin Lee, Honglak Lee, Kibok Lee, and Jinwoo Shin. 2017b. Training confidence-calibrated classifiers for detecting out-of-distribution samples. arXiv preprint arXiv:1711.09325.

Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. 2018. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. arXiv preprint arXiv:1807.03888.

Shiyu Liang, Yixuan Li, and Rayadurgam Srikant. 2017. Enhancing the reliability of out-of-distribution image detection in neural networks. arXiv preprint arXiv:1706.02690.

Yuxiao Lin, Yuxian Meng, Xiaofei Sun, Qinghong Han, Kun Kuang, Jiwei Li, and Fei Wu. 2021. BertGCN: Transductive text classification by combining GCN and BERT. arXiv preprint arXiv:2105.05727.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Subhabrata Mukherjee and Ahmed Hassan Awadallah. 2020. Uncertainty-aware self-training for text classification with few labels. arXiv preprint arXiv:2006.15315.

Alex Nichol, Joshua Achiam, and John Schulman. 2018. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999.

Aristotelis-Angelos Papadopoulos, Mohammad Reza Rajati, Nazim Shaikh, and Jiamian Wang. 2021. Outlier exposure with confidence control for out-of-distribution detection. Neurocomputing, 441:138-150.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. ACL.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.
Yiping Song, Zequn Liu, Wei Bi, Rui Yan, and Ming Zhang. 2019. Learning to customize model structures for few-shot dialogue generation tasks. arXiv preprint arXiv:1910.14326.

Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune BERT for text classification? In China National Conference on Chinese Computational Linguistics, pages 194-206. Springer.

Yibo Sun, Duyu Tang, Nan Duan, Yeyun Gong, Xiaocheng Feng, Bing Qin, and Daxin Jiang. 2020. Neural semantic parsing in low-resource settings with back-translation and meta-learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8960-8967.

Sebastian Thrun and Lorien Pratt. 2012. Learning to learn. Springer Science & Business Media.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.

Jiawei Wu, Wenhan Xiong, and William Yang Wang. 2019. Learning to learn and predict: A meta-learning approach for multi-label classification. arXiv preprint arXiv:1909.04176.

Yiming Yang and Xin Liu. 1999. A re-examination of text categorization methods. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 42-49.

Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. arXiv preprint arXiv:1509.01626.
# A Dataset Details
# A.1 Original Datasets
In this paper, we use data from 20Newsgroups (Joachims, 1996), Reuters-21578<sup>5</sup>, AG News (Del Corso et al., 2005), and Yahoo!Answers (Zhang et al., 2015) to construct our evaluation benchmark. Details of these four datasets are presented below:
- 20Newsgroups: 20Newsgroups is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups. Each newsgroup corresponds to a different topic. Some of the newsgroups are very closely related to each other (e.g., "comp.sys.ibm.pc.hardware" and "comp.sys.mac.hardware"), while others are highly unrelated (e.g., "misc.forsale" and "soc.religion.christian").
- AG News<sup>7</sup>: AG News is a subdataset of AG's corpus of news articles constructed by assembling titles and description fields of articles from the four largest classes ("World", "Sports", "Business", "Sci/Tech") of AG Corpus. AG News contains 30,000 training and 1,900 test samples per class.
- Yahoo!Answers<sup>8</sup>: Yahoo!Answers was constructed by Zhang et al. (2015) and is composed of the 10 largest main categories of the Yahoo!Answers Comprehensive Questions and Answers version 1.0 dataset. Each class contains 140,000 training samples and 5,000 testing samples. Labels in the dataset include "Society & Culture", "Science & Mathematics", "Health", "Education & Reference", "Computers & Internet", "Sports", "Business & Finance", "Entertainment & Music", "Family & Relationships", and "Politics & Government".
- Reuters-21578<sup>9</sup>: Reuters-21578 is a collection of 10,788 documents from the Reuters financial newswire service, partitioned into a training set with 7,769 documents and a test set with 3,019 documents. The distribution of categories in the Reuters-21578 corpus is highly skewed, with $36.7\%$ of the documents in the most common category, and only $0.0185\%$ (2 documents) in each of the five least common categories. There are 90 categories in the corpus. Each document belongs to one or more categories. The average number of categories per document is 1.235, and the average number of documents per category is about 148, or $1.37\%$ of the corpus.
# A.2 Benchmark Construction
We construct our NSS benchmarks as follows:
20Newsgroups-6S This dataset is a modified version of 20Newsgroups. The original 20Newsgroups dataset has 20 newsgroups, and each newsgroup (e.g., "comp.sys.ibm.pc.hardware") has a root subject topic (e.g., "comp").
<table><tr><td>Label</td><td>Train&ID-X</td><td>OOD-X</td></tr><tr><td>comp</td><td>comp.graphics comp.sys.ibm.pc.hardware comp.os.ms-windows.misc</td><td>comp.sys.mac.hardware comp.windows.x</td></tr><tr><td>rec</td><td>rec.autos rec.motorcycles</td><td>rec.sport.baseball rec.sport.hockey</td></tr><tr><td>sci</td><td>sci.crypt sci.electronics</td><td>sci.med sci.space</td></tr><tr><td>religion</td><td>talk.religion.misc</td><td>alt.atheism soc.religion.christian</td></tr><tr><td>politics</td><td>talk.politics.guns talk.politics.misc</td><td>talk.politics.mideast</td></tr><tr><td>misc</td><td>misc.forsale</td><td></td></tr></table>
Table 6: Merging labels from 20News for 20News-6S.

We divide articles by their root subject and obtain 6 newsgroups ("comp", "rec", "sci", "religion", "politics", and "misc"). For example, the original label "comp.sys.ibm.pc.hardware" becomes "comp". Hence, train and test data share the same labels but may come from different data distributions. Data in the five sets do not overlap. We show the classes used for each of the following sets in Table 6.
- TrainingSet We use 8,283 articles from the trainset in 20Newsgroups belonging to 11 subclasses. Each class contains 753 articles.
- ID-ValidSet We use 1,034 articles from the trainset in 20Newsgroups belonging to 11 subclasses. Each class contains 94 articles.
- ID-TestSet We use 1,034 articles from the testset in 20Newsgroups belonging to 11 subclasses. Each class contains 94 articles.
- OOD-ValidSet We use 846 articles from the trainset in 20Newsgroups belonging to the other 9 sub-classes in 20Newsgroups. Each class contains 94 articles.
- OOD-TestSet We use 846 articles from the testset in 20Newsgroups belonging to the other 9 sub-classes in 20Newsgroups. Each class contains 94 articles.
AGNews-EXT This dataset contains data from AG-News and additional articles from AG Corpus. In this setting, the training and ID data are from the same 4 labels ("World", "Sports", "Business", "Sci/Tech"). OOD data are from the same 4 labels but use articles in AG Corpus instead of AG-News. Data in the five sets do not overlap.
- TrainingSet We use 112,400 articles from the trainset in AG-News with 4 classes. Each class contains 28,100 articles.
- ID-ValidSet We use 7,600 articles from the trainset in AG-News with the same 4 classes as TrainingSet. Each class has 1,900 articles.
- ID-TestSet We use 7,600 articles from the testset in AG-News with the same 4 classes as TrainingSet. Each class has 1,900 articles.
- OOD-ValidSet We assemble titles and description fields of articles in AG Corpus from the same 4 classes as TrainingSet. Each class has 1,900 articles.
- OOD-TestSet We assemble titles and description fields of articles in AG Corpus from the same 4 classes as TrainingSet. Each class has 1,900 articles.
Yahoo-AGNews-five This dataset contains a subset of Yahoo!Answers and a subset of AG Corpus. The original Yahoo!Answers dataset has 10 classes, and we use 5 of them ("Health", "Science & Mathematics", "Sports", "Entertainment & Music", "Business & Finance") for the training and ID data. The OOD data are from the 5 classes ("Health", "Sci/Tech", "Sports", "Entertainment", "Business") in AG Corpus. Data in the five sets do not overlap.
- TrainingSet We use 675,000 articles from the trainset in Yahoo!Answers with 5 classes. Each class contains 135,000 articles.
- ID-ValidSet We use 25,000 articles from the trainset in Yahoo!Answers with the same 5 classes as TrainingSet. Each class contains 5,000 articles.
- ID-TestSet We use 25,000 articles from the testset in Yahoo!Answers with the same 5 classes as TrainingSet. Each class contains 5,000 articles.
- OOD-ValidSet We assemble titles and description fields of articles in AG Corpus from the same 5 classes as TrainingSet. Each class contains 5,000 articles.
- OOD-TestSet We assemble titles and description fields of articles in AG Corpus from the same 5 classes as TrainingSet. Each class contains 5,000 articles.
We construct SS benchmarks as follows:
Reuters-$m$K-$n$L This dataset is a modified version of Reuters. We first follow previous works (Yang and Liu, 1999; Joachims, 1998) in using the ModApte split<sup>10</sup> to remove documents belonging to multiple classes, and then consider only the 10 classes ("Acquisitions", "Corn", "Crude", "Earn", "Grain", "Interest", "Money-fx", "Ship", "Trade", and "Wheat") with the highest numbers of training examples. The resulting dataset is called Reuters-ModApte. We train the model on one subset of Reuters-ModApte and test on the remaining subset. Specifically, we train with articles from $m$ topics and test the model on articles from the other $n$ ($n = 10 - m$) topics. In this paper, we use five settings: Reuters-9K-1L, Reuters-6K-4L, Reuters-5K-5L, Reuters-3K-7L, and Reuters-2K-8L. This task is difficult because the resulting datasets are highly unbalanced. All documents in the train, valid, and test sets come from Reuters-21578, and data in the five sets do not overlap. Data statistics can be found in Table 7.
- TrainingSet We choose articles in the trainset of Reuters-ModApte belonging to $m$ topics.
- ID-ValidSet We choose articles in the valid set of Reuters-ModApte belonging to $m$ topics.
- ID-TestSet We choose articles in the test set of Reuters-ModApte belonging to $m$ topics.
- OOD-ValidSet We choose articles in the valid set of Reuters-ModApte belonging to $n$ topics.
- OOD-TestSet We choose articles in the test set of Reuters-ModApte belonging to $n$ topics.
AGNews-FL The dataset is composed of data from AGNews and additional articles from the AG Corpus. In this setting, the training and ID data are from the 4 classes ("World", "Sports", "Business", "Sci/Tech") in AGNews, and the OOD data are from another 4 classes ("U.S.", "Europe", "Italia", "Software and Development") in AG Corpus. It is noteworthy that these two sets of labels are similar in semantics, e.g., "U.S." to "World", "Europe" to "Sports" and "Software and Development" to "Sci/Tech". This makes the task more challenging than AGNews-FM, which will be introduced below. Data in the five sets do not overlap.
- TrainingSet We use 116,000 articles from the trainset in AG-News belonging to 4 classes. Each class contains 29,000 articles.
- ID-ValidSet We use 4,000 articles from the trainset in AG-News. Each class has 1,000 articles.
- ID-TestSet We use 4,000 articles from the testset in AG-News. Each class has 1,000 articles.
<table><tr><td>Train</td><td>Acq</td><td>Corn</td><td>Crude</td><td>Earn</td><td>Grain</td><td>Interest</td><td>Money-fx</td><td>Ship</td><td>Trade</td><td>Wheat</td></tr><tr><td>Reuters</td><td>1615</td><td>175</td><td>383</td><td>2817</td><td>422</td><td>343</td><td>518</td><td>187</td><td>356</td><td>206</td></tr><tr><td>Reuters-9K-1L</td><td>1615</td><td>175</td><td>383</td><td>2817</td><td>422</td><td>343</td><td>518</td><td>N/A</td><td>356</td><td>206</td></tr><tr><td>Reuters-6K-4L</td><td>1615</td><td>175</td><td>N/A</td><td>2817</td><td>422</td><td>343</td><td>518</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>Reuters-5K-5L</td><td>1615</td><td>N/A</td><td>N/A</td><td>2817</td><td>422</td><td>343</td><td>518</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>Reuters-3K-7L</td><td>1615</td><td>N/A</td><td>N/A</td><td>2817</td><td>N/A</td><td>N/A</td><td>518</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>Reuters-2K-8L</td><td>1615</td><td>N/A</td><td>N/A</td><td>2817</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>ID Test</td><td>Acq</td><td>Corn</td><td>Crude</td><td>Earn</td><td>Grain</td><td>Interest</td><td>Money-fx</td><td>Ship</td><td>Trade</td><td>Wheat</td></tr><tr><td>Reuters</td><td>719</td><td>56</td><td>189</td><td>1087</td><td>149</td><td>131</td><td>179</td><td>89</td><td>117</td><td>71</td></tr><tr><td>Reuters-9K-1L</td><td>719</td><td>56</td><td>189</td><td>1087</td><td>149</td><td>131</td><td>179</td><td>N/A</td><td>117</td><td>71</td></tr><tr><td>Reuters-6K-4L</td><td>719</td><td>56</td><td>N/A</td><td>1087</td><td>149</td><td>131</td><td>179</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>Reuters-5K-5L</td><td>719</td><td>N/A</td><td>N/A</td><td>1087</td><td>149</td><td>131</td><td>179</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>Reuters-3K-7L</td><td>719</td><td>N/A</td><td>N/A</td><td>1087</td><td>N/A</td><td>N/A</td><td>179</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>Reuters-2K-8L</td><td>719</td><td>N/A</td><td>N/A</td><td>1087</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>OOD Test</td><td>Acq</td><td>Corn</td><td>Crude</td><td>Earn</td><td>Grain</td><td>Interest</td><td>Money-fx</td><td>Ship</td><td>Trade</td><td>Wheat</td></tr><tr><td>Reuters</td><td>719</td><td>56</td><td>189</td><td>1087</td><td>149</td><td>131</td><td>179</td><td>89</td><td>117</td><td>71</td></tr><tr><td>Reuters-9K-1L</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>89</td><td>N/A</td><td>N/A</td></tr><tr><td>Reuters-6K-4L</td><td>N/A</td><td>N/A</td><td>189</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td><td>89</td><td>117</td><td>71</td></tr><tr><td>Reuters-5K-5L</td><td>N/A</td><td>56</td><td>189</td><td>N/A</td><td>149</td><td>131</td><td>179</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>Reuters-3K-7L</td><td>N/A</td><td>56</td><td>189</td><td>N/A</td><td>149</td><td>131</td><td>N/A</td><td>89</td><td>117</td><td>71</td></tr><tr><td>Reuters-2K-8L</td><td>N/A</td><td>56</td><td>189</td><td>N/A</td><td>149</td><td>131</td><td>179</td><td>89</td><td>117</td><td>71</td></tr></table>
Table 7: Data statistics for the Reuters-$m$K-$n$L datasets.
- OOD-ValidSet We assemble titles and description fields of articles in AG Corpus from another 4 classes different from AG-News. There are 4,000 articles in total, with 1,000 per class.
- OOD-TestSet We assemble titles and description fields of articles in AG Corpus from another 4 classes different from AG-News. There are 4,000 articles in total, with 1,000 per class.
AGNews-FM The dataset is composed of data from AGNews and additional articles from the AG Corpus. In this setting, the training and ID data are from the 4 classes ("World", "Sports", "Business", "Sci/Tech") in AGNews, and the OOD data are from another 4 classes ("Entertainment", "Health", "Top Stories", "Music Feeds") in AG Corpus. This dataset is easier than AGNews-FL because the OOD labels are more distinct from the ID labels regarding the label semantics. Data in the five sets do not overlap.
- TrainingSet We use 116,000 articles from the trainset in AG-News belonging to 4 classes. Each class contains 29,000 articles.
- ID-ValidSet We use 4,000 articles from the trainset in AG-News. Each class has 1,000 articles.
- ID-TestSet We use 4,000 articles from the testset in AG-News. Each class has 1,000 articles.
- OOD-ValidSet We assemble titles and description fields of articles in AG Corpus from another 4 classes different from AG-News. There are 4,000 articles in total, with 1,000 per class.
- OOD-TestSet We assemble titles and description fields of articles in AG Corpus from another 4 classes different from AG-News. There are 4,000 articles in total, with 1,000 per class.
Yahoo!Answers-FM This dataset is modified from the Yahoo!Answers dataset. We use five topic articles ("Health", "Science & Mathematics", "Sports", "Entertainment & Music", "Business & Finance") for the training and ID data and use the other five unseen topics ("Society & Culture", "Education & Reference", "Computers & Internet", "Family & Relationships", "Politics & Government") for the OOD data. Data in the five sets do not overlap.
- TrainingSet We use 680,000 examples belonging to five categories in Yahoo!Answers, with 136,000 samples per class.
- ID-ValidSet We use 20,000 examples belonging to five categories in Yahoo!Answers, with 4,000 samples per class.
- ID-TestSet We use 25,000 examples belonging to five categories in Yahoo!Answers, with 5,000 samples per class.
- OOD-ValidSet The data are from another five categories in Yahoo!Answers. The OOD-ValidSet contains 20,000 articles with 4,000 per class.
- OOD-TestSet The data are from another five categories in Yahoo!Answers. The OOD-TestSet contains 25,000 articles with 5,000 per class.
kfoldenkfoldensembleforoutofdistributiondetection/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c594c94efce254882af6f56336dcf98b13218c88fb6ee73dc346e87807bc568a
size 718614
kfoldenkfoldensembleforoutofdistributiondetection/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5e7b6de01a3e4ce9515b8cc4ce83843b2088cd84fe84a097046f597a57af2555
size 523065
mt6multilingualpretrainedtexttotexttransformerwithtranslationpairs/2894af76-5f28-4534-8fef-eab19cff0048_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cdb2fec55ae21d12b373340566dd825dfd6be9d2a19a2eb328c7f3fbd1e79ac0
size 95886
mt6multilingualpretrainedtexttotexttransformerwithtranslationpairs/2894af76-5f28-4534-8fef-eab19cff0048_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8d50d9afe553b98a7f02ac4a13a7ccbc1286c281d06e3416418f81167d03a513
size 114494
mt6multilingualpretrainedtexttotexttransformerwithtranslationpairs/2894af76-5f28-4534-8fef-eab19cff0048_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1bae79fb3a53063de1e85913bfe61a4e55edbd4f7bf60216e9186bc1ad972972
size 449159
mt6multilingualpretrainedtexttotexttransformerwithtranslationpairs/full.md
ADDED
@@ -0,0 +1,379 @@
# mT6: Multilingual Pretrained Text-to-Text Transformer with Translation Pairs
Zewen Chi$^{\dagger\ddagger}$, Li Dong$^{\ddagger}$, Shuming Ma$^{\ddagger}$, Shaohan Huang$^{\ddagger}$
Xian-Ling Mao$^{\dagger}$, Heyan Huang$^{\dagger}$, Furu Wei$^{\ddagger}$
$\dagger$ Beijing Institute of Technology
$\ddagger$ Microsoft Research
{czw,maoxl,hhy63}@bit.edu.cn
{lidong1,shumma,shaohanh,fuwei}@microsoft.com
# Abstract
Multilingual T5 (MT5; Xue et al. 2020) pretrains a sequence-to-sequence model on massive monolingual texts, which has shown promising results on many cross-lingual tasks. In this paper, we improve multilingual text-to-text transfer Transformer with translation pairs (MT6). Specifically, we explore three cross-lingual text-to-text pre-training tasks, namely, machine translation, translation pair span corruption, and translation span corruption. In addition, we propose a partially non-autoregressive objective for text-to-text pretraining. We evaluate the methods on eight multilingual benchmark datasets, including sentence classification, named entity recognition, question answering, and abstractive summarization. Experimental results show that the proposed MT6 improves cross-lingual transferability over MT5.
# 1 Introduction
Multilingual pretrained language models, such as mBERT (Devlin et al., 2019), have attracted increasing attention. They not only improve the performance on downstream multilingual NLP tasks (Conneau and Lample, 2019; Conneau et al., 2020; Liu et al., 2020; Chi et al., 2021c), but also show an impressive cross-lingual transferability (Wu and Dredze, 2019; K et al., 2020; Hu et al., 2020b; Chi et al., 2021a).
Multilingual pretrained models are typically trained on multilingual unlabeled text with unsupervised language modeling tasks, e.g., masked language modeling (Devlin et al., 2019), causal language modeling (Conneau and Lample, 2019), and span corruption (Raffel et al., 2020). These unsupervised tasks are built upon large-scale monolingual texts. In addition, several studies propose cross-lingual tasks that utilize translation data from multilingual parallel corpora, such as translation language modeling (Conneau and Lample, 2019), cross-lingual contrast (Chi et al., 2021a), and bidirectional word alignment (Hu et al., 2020a). Thanks to the translation data, the pretrained models produce better-aligned cross-lingual representations and obtain better cross-lingual transferability.
Recently, the multilingual text-to-text transfer Transformer (MT5; Xue et al. 2020) achieves state-of-the-art performance on several cross-lingual understanding benchmarks. MT5 inherits the benefits of T5 (Raffel et al., 2020) that treats every text processing problem as a text-to-text problem, i.e., the problem of generating some target text conditioned on the input text. Despite the effectiveness of MT5, how to improve MT5 with translation data is still an open problem.
In this paper, we present MT6, which stands for improving the multilingual text-to-text transfer Transformer with translation data. MT6 differs from MT5 in terms of both pre-training tasks and the training objective. We present three cross-lingual tasks for text-to-text Transformer pre-training, i.e., machine translation, translation pair span corruption, and translation span corruption. In the translation span corruption task, the model is trained to predict text spans based on an input translation pair. The cross-lingual tasks encourage the model to align representations of different languages. We also propose a new objective for text-to-text pre-training, called partially non-autoregressive (PNAT) decoding. The PNAT objective divides the target sequence into several groups and constrains the predictions to be conditioned only on the source tokens and the target tokens from the same group.
We conduct experiments on both multilingual understanding and generation tasks. Our MT6 model yields substantially better performance than MT5 on eight benchmarks. We also provide an empirical comparison of the cross-lingual pre-training tasks, where we evaluate several variants of MT6 under the same pre-training and fine-tuning procedure.
Moreover, our analysis indicates that the representations produced by MT6 are more cross-lingual transferable and better-aligned than MT5.
The contributions are summarized as follows:
- We introduce three cross-lingual tasks for text-to-text Transformer pre-training, which improve MT5 with translation data.
- We propose a partially non-autoregressive objective that pretrains the decoder to use more information from the source sequence.
- We provide extensive evaluation results of various pre-training tasks and training objectives.
# 2 Background on T5 and MT5
Multilingual text-to-text transfer Transformer (MT5; Xue et al. 2020) is the multilingual variant of T5 (Raffel et al., 2020) pretrained on the mC4 (Xue et al., 2020) dataset, which consists of natural text in 101 languages drawn from the public Commoncrawl web scrape.
The backbone architecture of MT5 is the simple encoder-decoder Transformer (Vaswani et al., 2017), which is trained in a unified text-to-text manner. Specifically, text-based NLP problems are formulated as text-to-text transfer, i.e., the model is trained to predict the target text conditioned on the input source text. For example, in text classification, the model predicts the label text rather than a class index. This feature enables MT5 to be fine-tuned with the same training objective for every task. Formally, let $x$ and $y$ denote the input sequence and the output sequence; the loss function of training the $x \rightarrow y$ transfer is
$$
\mathcal{L}(x \rightarrow y) = - \sum_{i=1}^{|y|} \log p\left(y_i \mid x, y_{<i}\right), \tag{1}
$$
where $y_{<i} = y_1, \dots, y_{i-1}$. With the unified text-to-text formulation, a pre-training task can be designed by constructing the input and output text sequences. Specifically, MT5 employs the span corruption task as the pre-training task, which is an unsupervised masked language modeling task. As shown in Figure 1, we provide an example of constructing the input and output sequences for span corruption. Given a natural sentence $s$, it first randomly selects several spans of $s$ to be masked. Then, the input sequence is constructed by replacing the selected spans with unique mask tokens.
Figure 1: Example of the span corruption task (Raffel et al., 2020) used in T5 and MT5.
The output sequence is the concatenation of the original tokens of the masked spans, each of which starts with a unique mask token to indicate the span to be decoded. We denote these two operations as $g_i$ and $g_o$, standing for converting the original sentence $s$ into the input or the output format of span corruption. Thus, the loss function of the span corruption task can be written as
$$
\mathcal{L}_{\mathrm{SC}}(s) = \mathcal{L}\left(g_i(s) \rightarrow g_o(s)\right). \tag{2}
$$
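To make the construction of $g_i(s)$ and $g_o(s)$ concrete, here is a minimal Python sketch (the function name, sentinel format, and fixed spans are our assumptions; the real pre-training sampler selects the spans randomly):

```python
def span_corruption(tokens, spans):
    """Construct the (g_i(s), g_o(s)) pair for span corruption; a minimal
    sketch where `spans` are given inclusive (start, end) token indices.
    """
    inp, out = [], []
    prev = 0
    for k, (start, end) in enumerate(spans):
        mask = f"[M{k + 1}]"
        inp.extend(tokens[prev:start])   # keep the unmasked tokens
        inp.append(mask)                 # replace the span with a sentinel
        out.append(mask)                 # sentinel marks the span in the target
        out.extend(tokens[start : end + 1])
        prev = end + 1
    inp.extend(tokens[prev:])
    return " ".join(inp), " ".join(out)

# span_corruption("Thank you for inviting me last week".split(), [(3, 3), (5, 6)])
# -> ("Thank you for [M1] me [M2]", "[M1] inviting [M2] last week")
```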
# 3 Methods
In this section, we first present three text-to-text pre-training tasks for improving MT5 with translation data. Then, we introduce the partially non-autoregressive decoding objective, and provide the detailed fine-tuning procedures for the classification, question answering, and named entity recognition tasks.
# 3.1 Cross-lingual Pre-training Tasks with Translation Pairs
As shown in Figure 2, we illustrate an overview of our cross-lingual text-to-text pre-training tasks. Given the same translation pair, the three tasks construct different input and output sequences.
# 3.1.1 Machine Translation
Machine translation (MT) is a typical text-to-text task with the goal of translating a sentence from the source language into a target language. It is a natural design to use MT as a text-to-text pre-training task for sequence-to-sequence learning (Chi et al., 2020). Let $e$ and $f$ denote a sentence and its corresponding translation. We directly use $e$ and $f$ as the input and output sequences, respectively. The loss function of MT is
$$
\mathcal{L}_{\mathrm{MT}}(e, f) = \mathcal{L}(e \rightarrow f). \tag{3}
$$
Figure 2: Overview of three cross-lingual text-to-text pre-training tasks. For each task, we provide an example of the input and target text. The words marked with “×” are randomly replaced with unique mask tokens like $[\mathbf{M}_1]$ . Notice that in the translation span corruption task, we mask tokens only in one language.
# 3.1.2 Translation Pair Span Corruption
Inspired by the translation masked language modeling (Conneau and Lample, 2019) task, we propose the translation pair span corruption (TPSC) task that aims to predict the masked spans from a translation pair instead of a monolingual sentence. Let $e$ and $f$ denote a sentence and its corresponding translation. We concatenate $e$ and $f$ as a single sentence, and perform the span corruption on the concatenated sentence. Formally, we construct the input and output sequences by $g_i([e; f])$ and $g_o([e; f])$ , where $[e; f]$ stands for the concatenation of $e$ and $f$ . With the resulting input and output sequences, the loss function of TPSC can be written as
$$
\mathcal{L}_{\mathrm{TPSC}}(e, f) = \mathcal{L}\left(g_i([e; f]) \rightarrow g_o([e; f])\right). \tag{4}
$$
# 3.1.3 Translation Span Corruption
A potential issue of translation pair span corruption is that the spans in the target sequence can be organized in unnatural word order. As shown in Figure 2, the output sequence of TPSC is organized as “$[M_1]$ for your $[M_2]$ last week $[M_3]$ invitation $[M_4]$”. Notice that the French word “invitation” appears after the English word “week”, which could harm the language modeling of the decoder. This motivates us to propose the translation span corruption (TSC) task, where we only mask and predict the spans in one language. Given a translation pair $(e,f)$, we randomly select $e$ or $f$ to perform span corruption on. Without loss of generality, we consider $e$ as the sentence for span corruption. Then, the input and output sequences are constructed as $[g_i(e); f]$ and $g_o(e)$, respectively. With the resulting input and output sequences, the loss function of TSC can be written as
$$
\mathcal{L}_{\mathrm{TSC}}(e, f) = \mathcal{L}\left([g_i(e); f] \rightarrow g_o(e)\right). \tag{5}
$$
# 3.2 Pre-training Objective: Partially Non-autoregressive Decoding
Recall that the predictions in MT5 are conditioned on both the source tokens and the target tokens to the left. When predicting tokens closer to the end, the model can use more information from the target sequence, resulting in insufficient training of the encoder.
To encourage the model to utilize more information from the encoding side while preserving the ability of autoregressive decoding, we propose a new training objective for text-to-text training, called partially non-autoregressive decoding (PNAT). In Figure 3, we provide an example for PNAT. Specifically, given a target sequence containing several spans, we divide the target sequence into groups, and train the model to decode each group separately. With the PNAT objective, a prediction is only conditioned on the source tokens and the target tokens from the same group. Consider the target sequence consisting of $m$ spans. We divide the spans into $n_g$ groups, each of which contains $m / n_g$ consecutive spans. For the $j$ -th group, we denote $l_j$ and $r_j$ as the start position and the end position, respectively. The PNAT objective is defined as
$$
\mathcal{L}^{\mathrm{PNAT}}(x \rightarrow y) = - \sum_{j=1}^{n_g} \sum_{i=l_j}^{r_j} \log p\left(y_i \mid x, y_{l_j} \dots y_{i-1}\right).
$$
The text-to-text loss $\mathcal{L}(x\to y)$ is a special case of $\mathcal{L}^{\mathrm{PNAT}}(x\to y)$ with $n_g = 1$.
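A minimal sketch of the PNAT loss computation (PyTorch; the helper name and inputs are our assumptions, and the group-restricted attention mask is assumed to be enforced by the decoder upstream):

```python
import torch
import torch.nn.functional as F

def pnat_loss(logits: torch.Tensor, targets: torch.Tensor, group_bounds):
    """Partially non-autoregressive (PNAT) loss for one target sequence.

    logits: [T, V] decoder outputs, produced with an attention mask that
        lets position i attend only to the source tokens and to the
        target tokens of its own group.
    targets: [T] gold token ids.
    group_bounds: list of (l_j, r_j) inclusive index pairs, one per group;
        a single group (n_g = 1) recovers the standard text-to-text loss.
    """
    loss = logits.new_zeros(())
    for l, r in group_bounds:
        loss = loss + F.cross_entropy(
            logits[l : r + 1], targets[l : r + 1], reduction="sum"
        )
    return loss
```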
The MT6 model is jointly pretrained on both monolingual and parallel corpora, where we use the span corruption and one of the three cross-lingual text-to-text tasks. For both tasks, we use the partially non-autoregressive decoding as the training objective where we divide the target sequence into
Figure 3: Partially non-autoregressive objective.
$n_g$ groups. The overall pre-training objective is to minimize
$$
\mathcal{L}_{\mathrm{MT6}} = \mathcal{L}_{\mathrm{SC}}^{\mathrm{PNAT}}(s) + \mathcal{L}_{\mathrm{X}}^{\mathrm{PNAT}}(e, f), \tag{6}
$$

$$
\mathrm{X} \in \{\mathrm{MT}, \mathrm{TPSC}, \mathrm{TSC}\},
$$
where $\mathcal{L}_{\mathrm{X}}^{\mathrm{PNAT}}$ stands for one of the loss functions of machine translation (MT; Section 3.1.1), translation pair span corruption (TPSC; Section 3.1.2), and translation span corruption (TSC; Section 3.1.3), with PNAT as the training objective.
# 3.3 Cross-lingual Fine-tuning
We fine-tune all parameters of the MT6 model with Equation (1) regardless of the end task. Unlike language generation tasks, language understanding tasks need to be pre-processed into the text-to-text format. We describe how to convert the following three types of language understanding task into the text-to-text format, i.e., how to construct the input and output sequences from the original examples.
Classification The goal of the text classification task is to predict the label of a given text. Following T5 (Raffel et al., 2020), we directly use the label text as the output text sequence. We provide an example for the MNLI natural language inference task (Williams et al., 2018). Given an input sentence pair of "You have access to the facts." and "The facts are accessible to you.", the goal is to classify the input into the relationships of "entailment", "contradiction", or "neutral". The input and target sequences are constructed as
Input: $\langle bos\rangle$ You have access to the facts. $\langle eos\rangle$ The facts are accessible to you. $\langle eos\rangle$
Output: $\langle bos\rangle$ entailment $\langle eos\rangle$
Since multi-task fine-tuning is not the focus of this work, we do not prepend a task prefix to the input text. We also adopt a constrained decoding process, where the decoded text is constrained to be one of the labels.
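One way to realize this constrained decoding, sketched here under the assumption of a Hugging Face-style sequence-to-sequence interface (the helper name is ours), is to score each label's token sequence and return the best-scoring label:

```python
import torch

@torch.no_grad()
def constrained_label_decode(model, input_ids, label_token_ids):
    """Constrained decoding for classification; a minimal sketch.

    input_ids: [1, L] encoded input sequence.
    label_token_ids: {label: 1-D tensor of target token ids}.
    Instead of free generation, score each candidate label's token
    sequence with the seq2seq model and return the highest-scoring label.
    """
    best_label, best_score = None, float("-inf")
    for label, target_ids in label_token_ids.items():
        logits = model(input_ids=input_ids, labels=target_ids.unsqueeze(0)).logits
        logp = torch.log_softmax(logits[0], dim=-1)
        score = logp.gather(1, target_ids.view(-1, 1)).sum().item()
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```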
Question Answering For the extractive question answering (QA) task, we concatenate the passage and the question as the input, and directly use the answer text as the target instead of predicting the answer span positions. We provide an example of converting a QA training example into the text-to-text format.
Input: $\langle bos\rangle$ It has offices in Seoul, South Korea. $\langle eos\rangle$ Where is the office in South Korea? $\langle eos\rangle$
|
| 142 |
+
|
| 143 |
+
Output: $\langle bos\rangle$ Seoul $\langle eos\rangle$
|
| 144 |
+
|
| 145 |
+
We use constrained decoding for the QA tasks, where the tokens appearing in the input passage form the decoding vocabulary.
|
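One straightforward way to realize this constraint is to mask the decoder's output distribution at each step; the following is a hedged sketch, not the authors' code.

```python
import torch

def restrict_to_passage(logits, passage_token_ids, special_ids):
    """Mask one decoding step's logits so that only tokens from the
    input passage (plus special tokens such as <eos>) can be generated.

    logits:            tensor of shape (vocab_size,)
    passage_token_ids: vocabulary ids of tokens in the input passage
    special_ids:       ids that remain allowed, e.g. <eos>
    """
    allowed = torch.zeros_like(logits, dtype=torch.bool)
    allowed[list(set(passage_token_ids) | set(special_ids))] = True
    return logits.masked_fill(~allowed, float("-inf"))
```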
| 146 |
+
|
| 147 |
+
Named Entity Recognition In named entity recognition (NER), we do not directly use the original tag sequence as the output. We find that the model tends to repeatedly decode the "$O$" tag if it directly learns to decode tag sequences. Instead, we construct the target text by concatenating the entity spans, each of which starts with the entity tag and ends with the entity tokens. We show an example of converting a NER training example into the text-to-text format.
|
| 148 |
+
|
| 149 |
+
Input: $\langle bos\rangle$ Italy recalled Marcello Cuttitta. $\langle eos\rangle$
|
| 150 |
+
|
| 151 |
+
Output: $\langle bos\rangle \langle loc\rangle$ Italy $\langle sep\rangle \langle per\rangle$ Marcello Cuttitta $\langle sep\rangle \langle eos\rangle$
|
| 152 |
+
|
| 153 |
+
$\langle loc\rangle$ and $\langle per\rangle$ are entity tags denoting location and person. The $\langle sep\rangle$ tag marks the end of an entity span. We use the following constrained decoding rules: (1) after a $\langle bos\rangle$ or $\langle sep\rangle$ token, the model should decode an entity tag or the end-of-sentence tag ($\langle eos\rangle$); (2) in all other positions, the model should decode tokens from the input sentence or the $\langle sep\rangle$ token.
|
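For illustration, a BIO-tagged example could be linearized into this target format as follows; the helper below is a sketch assuming standard BIO tags, not the authors' implementation.

```python
def build_ner_target(tokens, bio_tags):
    """Linearize entity spans as '<tag> tokens <sep>' instead of a raw
    per-token tag sequence, following the format described above."""
    pieces, span, tag = [], [], None

    def flush():
        if span:
            pieces.append(f"<{tag.lower()}> " + " ".join(span) + " <sep>")
            span.clear()

    for token, bio in zip(tokens, bio_tags):
        if bio.startswith("B-"):
            flush()
            tag = bio[2:]
            span.append(token)
        elif bio.startswith("I-") and span:
            span.append(token)
        else:  # an "O" tag closes any open entity span
            flush()
    flush()
    return "<bos> " + " ".join(pieces) + " <eos>"

# build_ner_target(["Italy", "recalled", "Marcello", "Cuttitta", "."],
#                  ["B-LOC", "O", "B-PER", "I-PER", "O"])
# -> "<bos> <loc> Italy <sep> <per> Marcello Cuttitta <sep> <eos>"
```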
| 154 |
+
|
| 155 |
+
# 4 Experiments
|
| 156 |
+
|
| 157 |
+
# 4.1 Setup
|
| 158 |
+
|
| 159 |
+
Data Following previous work on cross-lingual pre-training (Conneau et al., 2020; Chi et al., 2021a), we use the natural sentences from CCNet (Wenzek et al., 2019) in 94 languages for the monolingual text-to-text tasks. For the cross-lingual text-to-text tasks, we use parallel corpora of 14 English-centric language pairs, collected from MultiUN (Ziemski et al., 2016), IIT Bombay (Kunchukuttan et al., 2018), OPUS (Tiedemann, 2012), and WikiMatrix (Schwenk et al., 2019). Details of the pre-training data are described in the Appendix.
|
| 160 |
+
|
| 161 |
+
|
| 162 |
+
|
| 163 |
+
Training Details In the experiments, we consider the small-size Transformer model (Xue et al., 2020), with $d_{\mathrm{model}} = 512$, $d_{\mathrm{ff}} = 1{,}024$, 6 attention heads, and 8 layers for both the encoder and the decoder<sup>1</sup>. We use the vocabulary provided by XLM-R (Conneau et al., 2020), and extend it with 100 unique mask tokens for the span corruption tasks. We pretrain MT6 for 0.5M steps with batches of 256 length-512 input sequences. The model is optimized with the Adam optimizer (Kingma and Ba, 2015) and a linear learning rate schedule. Pre-training takes about 2.5 days on an Nvidia DGX-2 Station. Details of the pre-training hyperparameters are described in the Appendix.
|
| 164 |
+
|
| 165 |
+
# 4.2 Results
|
| 166 |
+
|
| 167 |
+
# 4.2.1 XTREME Cross-lingual Understanding
|
| 168 |
+
|
| 169 |
+
To validate the performance of MT6, we evaluate the pretrained models on XTREME (Hu et al., 2020b), a widely used benchmark for cross-lingual understanding. Following MT5 (Xue et al., 2020), we consider six downstream tasks included in XTREME: named entity recognition (NER) on the WikiAnn (Pan et al., 2017; Rahimi et al., 2019) dataset in 40 languages, question answering (QA) on MLQA (Lewis et al., 2020b), XQuAD (Artetxe et al., 2020), and TyDiQA-GoldP (Clark et al., 2020), cross-lingual natural language inference on XNLI (Conneau et al., 2018), and cross-lingual paraphrase identification on PAWS-X (Yang et al., 2019). The models are evaluated under the cross-lingual transfer setting (Conneau et al., 2020; Hu et al., 2020b), where models are fine-tuned only on English training data but evaluated on all target languages. Moreover, for each pretrained model, a single fine-tuned model is used for all languages, rather than selecting fine-tuned models separately per language. Details of the fine-tuning hyperparameters are described in the Appendix.
|
| 170 |
+
|
| 171 |
+
Table 1 presents the evaluation results of the pretrained models on the XTREME benchmark. We observe that MT6 achieves the best performance on XTREME, improving the average score from 45.0 to 50.4 as we go from MT5 to MT6. It is worth mentioning that pre-training the model only with the machine translation task performs even worse than MT5. Several target languages in TyDiQA and WikiAnn are not covered by our parallel corpora; however, the NMT pretrained model still shows poor results on the other four tasks, where all target languages are covered by the training data. Detailed results can be found in the Appendix.
|
| 172 |
+
|
| 173 |
+
|
| 174 |
+
|
| 175 |
+
# 4.2.2 Comparison of Pre-training Tasks
|
| 176 |
+
|
| 177 |
+
To provide a clear comparison among the pretraining tasks, we implement the text-to-text pretraining methods presented in Section 3, and pretrain variants of MT6 with the same training data and resources for fair comparisons.
|
| 178 |
+
|
| 179 |
+
Table 1 compares the evaluation results of the models pretrained with seven different combinations of span corruption (SC), machine translation (MT), translation pair span corruption (TPSC), translation span corruption (TSC), and partially non-autoregressive decoding (PNAT). It can be observed that jointly training SC+TSC with PNAT achieves the best overall performance on the XTREME benchmark, with substantial gains over the models trained on monolingual data only. The same trend can be observed for the other models pretrained on both monolingual data and parallel data. This demonstrates that introducing translation data to text-to-text pre-training can improve the performance on the end tasks of cross-lingual understanding. Moreover, PNAT provides consistent gains over SC and SC+TSC, showing that PNAT is effective on both monolingual and cross-lingual tasks. Surprisingly, SC+PNAT obtains comparable results to SC+MT without any parallel data. Comparing TSC with MT and TPSC, we observe that SC+TSC brings noticeable improvements on question answering tasks. Although SC+MT shows competitive results on XNLI, the results on the other tasks are relatively low, indicating that simply jointly training SC with MT is not the most effective way to pretrain MT6.
|
| 180 |
+
|
| 181 |
+
# 4.3 Abstractive Summarization
|
| 182 |
+
|
| 183 |
+
Multilingual Summarization In addition to language understanding tasks, we also evaluate our MT6 model on the abstractive summarization task. Abstractive summarization aims to generate a summary of the input document while preserving its original meaning. We use the Gigaword dataset provided by Chi et al. (2020). The dataset is constructed by extracting the first sentences and headlines as the input documents and summaries, respectively.
|
| 184 |
+
|
| 185 |
+
<table><tr><td rowspan="2">Model</td><td colspan="5">Configuration</td><td rowspan="2">Structured (F1)
|
| 186 |
+
WikiAnn</td><td colspan="3">Question Answering (F1)</td><td colspan="2">Classification (Acc.)</td></tr><tr><td>SC</td><td>PNAT</td><td>MT</td><td>TPSC</td><td>TSC</td><td>XQuAD</td><td>MLQA</td><td>TyDiQA</td><td>XNLI</td><td>PAWS-X</td></tr><tr><td>NMT</td><td>X</td><td>X</td><td>✓</td><td>X</td><td>X</td><td>27.3</td><td>12.5</td><td>14.9</td><td>16.8</td><td>64.8</td><td>55.0</td></tr><tr><td>MT5</td><td>✓</td><td>X</td><td>X</td><td>X</td><td>X</td><td>43.1</td><td>42.1</td><td>37.6</td><td>30.7</td><td>57.2</td><td>78.0</td></tr><tr><td>MT6 (ours)</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>✓</td><td>44.7</td><td>50.4</td><td>44.1</td><td>36.0</td><td>64.7</td><td>82.2</td></tr><tr><td rowspan="4">Ablations</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>X</td><td>43.7</td><td>45.1</td><td>38.5</td><td>32.3</td><td>57.9</td><td>77.5</td></tr><tr><td>✓</td><td>X</td><td>✓</td><td>X</td><td>X</td><td>43.9</td><td>38.5</td><td>33.3</td><td>29.4</td><td>65.9</td><td>79.3</td></tr><tr><td>✓</td><td>X</td><td>X</td><td>✓</td><td>X</td><td>42.3</td><td>46.2</td><td>40.8</td><td>35.3</td><td>64.0</td><td>78.9</td></tr><tr><td>✓</td><td>X</td><td>X</td><td>X</td><td>✓</td><td>43.8</td><td>47.6</td><td>40.5</td><td>36.7</td><td>65.4</td><td>80.3</td></tr><tr><td colspan="12">Pre-training with larger batch size and more training steps</td></tr><tr><td colspan="6">MT5 (Xue et al., 2020)</td><td>50.5</td><td>58.1</td><td>54.6</td><td>35.2</td><td>67.5</td><td>82.4</td></tr></table>
|
| 187 |
+
|
| 188 |
+
Table 1: Evaluation results on XTREME under the cross-lingual transfer setting, where models are fine-tuned only on the English training data but evaluated on all target languages. We pretrain models with different combinations of span corruption (SC), machine translation (MT), translation pair span corruption (TPSC), translation span corruption (TSC), and partially non-autoregressive decoding (PNAT). All results are averaged over five runs.
|
| 189 |
+
|
| 190 |
+
<table><tr><td rowspan="2">Model</td><td rowspan="2">#Param</td><td colspan="3">en</td><td colspan="3">fr</td><td colspan="3">zh</td></tr><tr><td>RG-1</td><td>RG-2</td><td>RG-L</td><td>RG-1</td><td>RG-2</td><td>RG-L</td><td>RG-1</td><td>RG-2</td><td>RG-L</td></tr><tr><td colspan="11">Larger model size</td></tr><tr><td>XLM (Chi et al., 2020)</td><td>800M</td><td>48.15</td><td>26.35</td><td>45.04</td><td>56.27</td><td>39.20</td><td>52.84</td><td>55.30</td><td>42.57</td><td>52.95</td></tr><tr><td>XNLG (Chi et al., 2020)</td><td>800M</td><td>48.76</td><td>26.82</td><td>45.57</td><td>57.84</td><td>40.81</td><td>54.24</td><td>57.65</td><td>44.93</td><td>54.95</td></tr><tr><td colspan="11">Our re-implementation (Fine-tuning with full training data)</td></tr><tr><td>MT5 (reimpl)</td><td>300M</td><td>46.58</td><td>24.45</td><td>43.32</td><td>54.12</td><td>36.78</td><td>50.61</td><td>57.30</td><td>44.08</td><td>54.65</td></tr><tr><td>MT6</td><td>300M</td><td>46.82</td><td>24.65</td><td>43.50</td><td>54.82</td><td>37.61</td><td>51.30</td><td>57.38</td><td>44.20</td><td>54.66</td></tr><tr><td colspan="11">Our re-implementation (Fine-tuning with 1K training data)</td></tr><tr><td>MT5</td><td>300M</td><td>28.00</td><td>10.89</td><td>26.13</td><td>32.56</td><td>17.25</td><td>29.75</td><td>44.16</td><td>31.20</td><td>41.86</td></tr><tr><td>MT6</td><td>300M</td><td>28.80</td><td>11.44</td><td>26.45</td><td>35.07</td><td>18.70</td><td>31.39</td><td>46.48</td><td>33.17</td><td>44.02</td></tr></table>
|
| 191 |
+
|
| 192 |
+
The dataset consists of examples in English, French, and Chinese. For each language, it contains 500K, 5K, and 5K examples for training, validation, and testing, respectively. We fine-tune the models for 20 epochs with a batch size of 32 and a learning rate of 0.00001. During decoding, we use greedy decoding for all evaluated models.
|
| 193 |
+
|
| 194 |
+
As shown in Table 2, we report the ROUGE (Lin, 2004) scores of the models on Gigaword multilingual abstractive summarization. We observe that MT6 consistently outperforms MT5 on all three target languages. Compared with the XLM (Conneau and Lample, 2019) and XNLG (Chi et al., 2020) models with 800M parameters, our MT6 model achieves similar performance with only 300M parameters. Moreover, under the 1K-example setting with less training data, MT6 shows larger improvements over MT5.
|
| 195 |
+
|
| 196 |
+
Cross-Lingual Summarization The cross-lingual summarization task aims to generate summaries in a language different from the input documents. We use the Wikilingua (Ladhak et al., 2020) dataset, which contains passage-summary pairs in four language pairs.
|
| 197 |
+
|
| 198 |
+
Table 2: Evaluation results on Gigaword multilingual abstractive summarization. RG is short for ROUGE. Results of XLM and XNLG are taken from (Chi et al., 2020). Results of MT5 and MT6 are averaged over three runs.
|
| 199 |
+
|
| 200 |
+
<table><tr><td>Model</td><td>es-en</td><td>ru-en</td><td>vi-en</td><td>tr-en</td></tr><tr><td>MT5</td><td>11.36</td><td>8.77</td><td>8.98</td><td>10.57</td></tr><tr><td>MT6</td><td>11.83</td><td>9.49</td><td>9.52</td><td>10.80</td></tr></table>
|
| 201 |
+
|
| 202 |
+
Table 3: ROUGE-2 scores on Wikilingua cross-lingual summarization. Results are averaged over three runs.
|
| 203 |
+
|
| 204 |
+
<table><tr><td>Model</td><td>XQuAD</td><td>MLQA</td><td>TyDiQA</td><td>XNLI</td><td>PAWS-X</td></tr><tr><td>MT5</td><td>30.4</td><td>27.5</td><td>27.5</td><td>19.5</td><td>16.0</td></tr><tr><td>MT6</td><td>28.6</td><td>27.2</td><td>25.9</td><td>14.6</td><td>13.2</td></tr></table>
|
| 205 |
+
|
| 206 |
+
Table 4: The cross-lingual transfer gap scores on the XTREME tasks. A lower transfer gap score indicates better cross-lingual transferability. We use the EM scores to compute the gap scores for the QA tasks.
|
| 207 |
+
|
| 208 |
+
We fine-tune the models for 100K steps with a batch size of 32 and a learning rate of 0.0001, and use greedy decoding for all evaluated models. The evaluation results are shown in Table 3, where MT6 outperforms MT5 on the test sets of all four language pairs.
|
| 209 |
+
|
| 210 |
+

|
| 211 |
+
Figure 4: Evaluation results of different layers on Tatoeba cross-lingual sentence retrieval. We illustrate the average accuracy@1 scores on the Tatoeba test sets of the 14 language pairs covered by the parallel data.
|
| 212 |
+
|
| 213 |
+
# 4.4 Cross-lingual Transfer Gap
|
| 214 |
+
|
| 215 |
+
To explore whether our MT6 model achieves better cross-lingual transferability, we compare the cross-lingual transfer gap scores of MT6 and MT5. The cross-lingual transfer gap (Hu et al., 2020b) is defined as the difference between the performance on the English test set and the average performance on the non-English test sets. The transfer gap indicates how much of the end-task knowledge is preserved when transferring from English to the other target languages; empirically, a lower transfer gap score indicates better cross-lingual transferability. Following Hu et al. (2020b), we compute the transfer gap scores over the sentence classification and question answering tasks. As shown in Table 4, MT6 consistently reduces the transfer gap across all five tasks, demonstrating that our model is more effective for cross-lingual transfer than MT5.
|
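The metric itself is simple to compute; the sketch below reproduces MT6's PAWS-X gap in Table 4 from the per-language accuracies in Table 16.

```python
def transfer_gap(scores, langs):
    """English score minus the average over non-English languages
    (Hu et al., 2020b); lower means better cross-lingual transfer."""
    non_en = [scores[l] for l in langs if l != "en"]
    return scores["en"] - sum(non_en) / len(non_en)

pawsx_mt6 = {"en": 93.5, "fr": 87.0, "de": 85.4, "es": 87.3,
             "ja": 72.4, "ko": 70.1, "zh": 79.8}  # accuracies from Table 16
print(round(transfer_gap(pawsx_mt6, pawsx_mt6), 1))  # -> 13.2, as in Table 4
```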
| 216 |
+
|
| 217 |
+
# 4.5 Cross-lingual Representations
|
| 218 |
+
|
| 219 |
+
We analyze the cross-lingual representations produced by our MT6 model. Following Chi et al. (2021a), we evaluate the representations on the Tatoeba (Artetxe and Schwenk, 2019) cross-lingual sentence retrieval task. The test sets consist of 14 English-centric language pairs covered by the parallel data in our experiments. Figure 4 illustrates the average accuracy@1 scores of cross-lingual sentence retrieval. The scores are averaged over the 14 language pairs and both directions, xx $\rightarrow$ en and en $\rightarrow$ xx. From the figure, we observe that MT5 shows a parabolic trend across different layers, which also appears in other cross-lingual encoder models (Jalili Sabet et al., 2020; Chi et al., 2021a). In contrast, we obtain better performance as we use higher layers of our MT6 model.
|
| 220 |
+
|
| 221 |
+
<table><tr><td>Model</td><td>en-de</td><td>en-fr</td><td>en-ro</td><td>Avg</td></tr><tr><td>MT5</td><td>35.84</td><td>19.05</td><td>45.24</td><td>33.38</td></tr><tr><td>MT6</td><td>23.69</td><td>12.11</td><td>42.56</td><td>26.12</td></tr></table>
|
| 222 |
+
|
| 223 |
+
Table 5: Evaluation results on word alignment. We report the alignment error rate scores (lower is better). We use the hidden vectors from the last encoder layer, and apply the SimAlign (Jalili Sabet et al., 2020) tool to obtain the resulting word alignments.
|
| 224 |
+
|
| 225 |
+
<table><tr><td>Noise Density</td><td>NER</td><td>QA</td><td>Classification</td><td>Avg</td></tr><tr><td>15%</td><td>41.7</td><td>33.5</td><td>71.9</td><td>47.4</td></tr><tr><td>30%</td><td>41.3</td><td>35.9</td><td>72.2</td><td>48.9</td></tr><tr><td>50%</td><td>43.8</td><td>35.5</td><td>72.9</td><td>49.4</td></tr><tr><td>100% (MT)</td><td>43.9</td><td>29.1</td><td>72.6</td><td>46.1</td></tr></table>
|
| 226 |
+
|
| 227 |
+
Table 6: Effects of noise density. We report the average results for each task type and the average over all six tasks on the XTREME benchmark. We vary the noise density of the translation span corruption task from $15\%$ to $100\%$. All results are averaged over five runs.
|
| 228 |
+
|
| 229 |
+
At layer-8, our MT6 model achieves an average accuracy@1 of 43.2, outperforming the MT5 model by 35.6 points, which means our MT6 model produces better-aligned text representations. We believe the better-aligned representations potentially improve the cross-lingual transferability. Furthermore, the results also indicate that our pre-training objective is more effective than MT5's for training the encoder.
|
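For reference, accuracy@1 in this retrieval setting is typically computed by nearest-neighbour search over sentence embeddings; below is a minimal sketch assuming paired rows, i.e. row i of the target matrix is the translation of row i of the source.

```python
import torch

def retrieval_accuracy_at_1(src_emb, tgt_emb):
    """accuracy@1 for cross-lingual retrieval: a source sentence is a hit
    if its nearest target embedding (by cosine similarity) is its
    translation, assumed to sit at the same row index."""
    src = torch.nn.functional.normalize(src_emb, dim=-1)
    tgt = torch.nn.functional.normalize(tgt_emb, dim=-1)
    nearest = (src @ tgt.T).argmax(dim=-1)
    return (nearest == torch.arange(len(src))).float().mean().item()
```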
| 230 |
+
|
| 231 |
+
# 4.6 Word Alignment
|
| 232 |
+
|
| 233 |
+
In addition to cross-lingual sentence retrieval, which evaluates sentence-level representations, we also explore whether the representations produced by MT6 are better-aligned at the token level. Thus, we compare MT6 with MT5 on the word alignment task, where the goal is to find corresponding word pairs in a translation pair. We use the hidden vectors from the last encoder layer, and apply the SimAlign (Jalili Sabet et al., 2020) tool to obtain the resulting word alignments. Table 5 shows the alignment error rate (AER) scores on the test sets provided by Jalili Sabet et al. (2020). On all three language pairs, MT6 achieves lower AER scores than MT5, indicating that the cross-lingual representations produced by MT6 are also better-aligned at the token level.
|
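For reference, AER follows the standard definition over sure (S) and possible (P) gold links:

```python
def alignment_error_rate(predicted, sure, possible):
    """AER = 1 - (|A ∩ S| + |A ∩ P|) / (|A| + |S|), lower is better.
    predicted/sure/possible are sets of (src_idx, tgt_idx) links, S ⊆ P."""
    a, s, p = set(predicted), set(sure), set(possible)
    return 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))
```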
| 234 |
+
|
| 235 |
+
# 4.7 Effects of Noise Density
|
| 236 |
+
|
| 237 |
+
In the translation span corruption (TSC) task, the input parallel sentences provide redundant information in two languages, which differs from the standard monolingual span corruption task. We therefore explore the effects of noise density by varying it in the translation span corruption task while keeping the other hyperparameters fixed. To reduce the computational load, we do not apply partially non-autoregressive decoding, i.e., we pretrain the models with the original text-to-text objective. We pretrain MT6 models with noise densities of 0.15, 0.3, 0.5, and 1.0, meaning that $15\%$, $30\%$, $50\%$, or all of the source or target tokens are replaced with mask tokens. Notice that setting the noise density to 1.0 is identical to machine translation, where the decoder is required to decode the whole target sentence.
|
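A minimal sketch of how a configurable noise density could be applied, assuming sentinel-style mask tokens as in span corruption; the paper's exact span sampling procedure may differ.

```python
import random

def corrupt_with_density(tokens, density):
    """Replace roughly `density` of the tokens with sentinel mask tokens,
    folding runs of adjacent masked positions into one sentinel span."""
    n_mask = round(len(tokens) * density)
    masked = set(random.sample(range(len(tokens)), n_mask))
    out, sentinel = [], 0
    for i, tok in enumerate(tokens):
        if i in masked:
            # start a new sentinel only if the previous token was kept
            if not out or not out[-1].startswith("<mask_"):
                out.append(f"<mask_{sentinel}>")
                sentinel += 1
        else:
            out.append(tok)
    return out

# density = 1.0 masks every token, so the decoder must produce the whole
# sentence: on the target side, this is equivalent to machine translation.
```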
| 238 |
+
|
| 239 |
+
In Table 6, we report the average scores on the XTREME benchmark. From the results, we observe that MT6 achieves the best results with a noise density of 0.5, rather than an even higher density such as 1.0. The results indicate that the TSC task prefers a relatively high noise density, so that the model can learn to use more cross-lingual information. This finding differs from that reported by T5 (Raffel et al., 2020), where the span corruption task works better with a noise density of 0.15 under the monolingual setting.
|
| 240 |
+
|
| 241 |
+
# 5 Related Work
|
| 242 |
+
|
| 243 |
+
Cross-lingual LM Pre-training Cross-lingual language models are typically built with the Transformer (Vaswani et al., 2017) architecture, and pretrained with various pre-training tasks on large-scale text data. Multilingual BERT (mBERT; Devlin et al. 2019) and XLM-R (Conneau et al., 2020) are pretrained with masked language modeling (MLM; Devlin et al. 2019) on large-scale unlabeled text in about 100 languages. MASS (Song et al., 2019) and mBART (Liu et al., 2020) are pretrained in an auto-encoding manner, which provides improvements on neural machine translation tasks. MT5 (Xue et al., 2020) is pretrained with the span corruption task under the text-to-text formulation (Raffel et al., 2020). Cross-lingual pretrained models also benefit from translation data. XLM (Conneau and Lample, 2019) jointly learns MLM and the translation language modeling (TLM) task. Unicoder (Huang et al., 2019) presents three cross-lingual tasks to
|
| 244 |
+
|
| 245 |
+
learn mappings among languages. ALM (Yang et al., 2020) converts translation pairs into code-switched sequences as training examples. Word-aligned BERT models (Cao et al., 2020; Zhao et al., 2020) improve cross-lingual representations by fine-tuning mBERT with the objective of minimizing the distance between aligned tokens. AMBER (Hu et al., 2020a) proposes to maximize the agreement between the forward and backward attention matrices of the input translation pair. InfoXLM (Chi et al., 2021a) proposes a cross-lingual contrastive learning task that maximizes the InfoNCE (Oord et al., 2018) lower bound of the mutual information between the input translation pair. XLM-Align (Chi et al., 2021b) leverages token-level alignments implied in translation pairs to improve cross-lingual transfer. XNLG (Chi et al., 2020) introduces cross-lingual transfer for NLG tasks, and achieves zero-shot cross-lingual transfer for question generation and abstractive summarization. VECO (Luo et al., 2020) pretrains a variable encoder-decoder cross-lingual model that learns unified language representations for both NLU and NLG. ERNIE-M (Ouyang et al., 2020) utilizes a back-translation masked language modeling task that generates pseudo parallel sentence pairs for learning TLM.
|
| 246 |
+
|
| 247 |
+
Encoder-Decoder Pre-training Raffel et al. (2020) use span corruption to pretrain a text-to-text Transformer, where both language understanding and generation tasks are formulated as sequence-to-sequence fine-tuning. Song et al. (2019) propose masked sequence-to-sequence pre-training, where the model predicts a randomly masked span. BART (Lewis et al., 2020a) designs various denoising autoencoding tasks to recover the whole original sentence. PEGASUS (Zhang et al., 2020) introduces the gap sentence generation task for abstractive summarization pre-training. Chi et al. (2020) use both denoising autoencoding and machine translation for cross-lingual language generation. Another strand of research follows unified language model pre-training (Dong et al., 2019; Bao et al., 2020; Luo et al., 2020), where the encoder and the decoder share parameters. Ma et al. (2020, 2021) reuse a pretrained multilingual encoder for sequence-to-sequence pre-training.
|
| 248 |
+
|
| 249 |
+
# 6 Conclusion
|
| 250 |
+
|
| 251 |
+
In this paper, we propose MT6, which improves the multilingual text-to-text transfer Transformer with translation data. We introduce three text-to-text pre-training tasks built on parallel corpora, as well as a training objective for improving text-to-text pre-training. Furthermore, we present a comprehensive comparison of the text-to-text tasks, and show that our MT6 model outperforms MT5 on both cross-lingual understanding and generation benchmarks. For future work, we would like to pretrain MT6 models at a larger scale, and explore more applications, such as machine translation.
|
| 252 |
+
|
| 253 |
+
|
| 254 |
+
|
| 255 |
+
# Acknowledgements
|
| 256 |
+
|
| 257 |
+
We would like to acknowledge Bo Zheng for the helpful discussions. The work is supported by National Key R&D Plan (No. 2018YFB1005100), National Natural Science Foundation of China (No. 61751201, 61602197, 61772076, and 61732005), Natural Science Fund of Beijing (No. Z181100008918002), and the funds of Beijing Advanced Innovation Center for Language Resources (No. TYZ19005). Heyan Huang is the corresponding author.
|
| 258 |
+
|
| 259 |
+
# References
|
| 260 |
+
|
| 261 |
+
Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623-4637, Online. Association for Computational Linguistics.
|
| 262 |
+
Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597-610.
|
| 263 |
+
Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Songhao Piao, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2020. UniLMv2: Pseudo-masked language models for unified language model pre-training. arXiv preprint arXiv:2002.12804.
|
| 264 |
+
Steven Cao, Nikita Kitaev, and Dan Klein. 2020. Multilingual alignment of contextual word representations. In International Conference on Learning Representations.
|
| 265 |
+
Zewen Chi, Li Dong, Furu Wei, Wenhui Wang, XianLing Mao, and Heyan Huang. 2020. Cross-lingual natural language generation via pre-training. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7570-7577. AAAI Press.
|
| 266 |
+
Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. 2021a. InfoXLM: An information-theoretic framework for cross-lingual language model pre-training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3576-3588, Online. Association for Computational Linguistics.
|
| 267 |
+
|
| 268 |
+
|
| 269 |
+
Zewen Chi, Li Dong, Bo Zheng, Shaohan Huang, XianLing Mao, Heyan Huang, and Furu Wei. 2021b. Improving pretrained cross-lingual language models via self-labeled word alignment. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3418-3430, Online. Association for Computational Linguistics.
|
| 270 |
+
Zewen Chi, Shaohan Huang, Li Dong, Shuming Ma, Saksham Singhal, Payal Bajaj, Xia Song, and Furu Wei. 2021c. XLM-E: Cross-lingual language model pre-training via ELECTRA. ArXiv, abs/2106.16138.
|
| 271 |
+
Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics, 8:454-470.
|
| 272 |
+
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
|
| 273 |
+
Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In Advances in Neural Information Processing Systems, pages 7057-7067. Curran Associates, Inc.
|
| 274 |
+
Alexis Conneau, Rudy Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.
|
| 275 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 276 |
+
|
| 277 |
+
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems, pages 13063-13075. Curran Associates, Inc.
|
| 278 |
+
Junjie Hu, Melvin Johnson, Orhan Firat, Aditya Siddhant, and Graham Neubig. 2020a. Explicit alignment objectives for multilingual bidirectional encoders. arXiv preprint arXiv:2010.07972.
|
| 279 |
+
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020b. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. arXiv preprint arXiv:2003.11080.
|
| 280 |
+
Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, and Ming Zhou. 2019. Unicoder: A universal language encoder by pretraining with multiple cross-lingual tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 2485–2494, Hong Kong, China. Association for Computational Linguistics.
|
| 281 |
+
Masoud Jalili Sabet, Philipp Dufter, François Yvon, and Hinrich Schütze. 2020. SimAlign: High quality word alignments without parallel training data using static and contextualized embeddings. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1627-1643, Online. Association for Computational Linguistics.
|
| 282 |
+
Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilingual BERT: An empirical study. In International Conference on Learning Representations.
|
| 283 |
+
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, San Diego, CA.
|
| 284 |
+
Anoop Kunchukuttan, Pratik Mehta, and Pushpak Bhattacharyya. 2018. The IIT Bombay English-Hindi parallel corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, Miyazaki, Japan. European Language Resources Association.
|
| 285 |
+
Faisal Ladhak, Esin Durmus, Claire Cardie, and Kathleen McKeown. 2020. WikiLingua: A new benchmark dataset for cross-lingual abstractive summarization. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4034-4048, Online. Association for Computational Linguistics.
|
| 286 |
+
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
|
| 287 |
+
|
| 288 |
+
|
| 289 |
+
Patrick Lewis, Barlas Oguz, Rudy Rinott, Sebastian Riedel, and Holger Schwenk. 2020b. MLQA: Evaluating cross-lingual extractive question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7315-7330, Online. Association for Computational Linguistics.
|
| 290 |
+
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74-81, Barcelona, Spain.
|
| 291 |
+
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. arXiv preprint arXiv:2001.08210.
|
| 292 |
+
Fuli Luo, Wei Wang, Jiahao Liu, Yijia Liu, Bin Bi, Songfang Huang, Fei Huang, and Luo Si. 2020. Veco: Variable encoder-decoder pre-training for cross-lingual understanding and generation. arXiv preprint arXiv:2010.16046.
|
| 293 |
+
Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, Alexandre Muzio, Saksham Singhal, Hany Hassan Awadalla, Xia Song, and Furu Wei. 2021. DeltaLM: Encoder-decoder pre-training for language generation and translation by augmenting pretrained multilingual encoders. ArXiv, abs/2106.13736.
|
| 294 |
+
Shuming Ma, Jian Yang, H. Huang, Zewen Chi, Li Dong, Dongdong Zhang, Hany Hassan Awadalla, Alexandre Muzio, Akiko Eriguchi, Saksham Singhal, Xia Song, Arul Menezes, and Furu Wei. 2020. XLM-T: Scaling up multilingual machine translation with pretrained cross-lingual transformer encoders. ArXiv, abs/2012.15547.
|
| 295 |
+
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
|
| 296 |
+
Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2020. ERNIE-M: Enhanced multilingual representation by aligning cross-lingual semantics with monolingual corpora. arXiv preprint arXiv:2012.15674.
|
| 297 |
+
Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Crosslingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946-1958, Vancouver, Canada. Association for Computational Linguistics.
|
| 298 |
+
|
| 299 |
+
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
|
| 300 |
+
Afshin Rahimi, Yuan Li, and Trevor Cohn. 2019. Massively multilingual transfer for NER. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 151-164, Florence, Italy. Association for Computational Linguistics.
|
| 301 |
+
Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzmán. 2019. WikiMatrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia. arXiv preprint arXiv:1907.05791.
|
| 302 |
+
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2019. MASS: Masked sequence to sequence pre-training for language generation. arXiv preprint arXiv:1905.02450.
|
| 303 |
+
Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, pages 2214-2218, Istanbul, Turkey. European Language Resources Association.
|
| 304 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008. Curran Associates, Inc.
|
| 305 |
+
Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzman, Armand Joulin, and Edouard Grave. 2019. CCNet: Extracting high quality monolingual datasets from web crawl data. arXiv preprint arXiv:1911.00359.
|
| 306 |
+
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.
|
| 307 |
+
Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 833-844, Hong Kong, China. Association for Computational Linguistics.
|
| 308 |
+
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mT5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934.
|
| 309 |
+
Jian Yang, Shuming Ma, Dongdong Zhang, Shuangzhi Wu, Zhoujun Li, and Ming Zhou. 2020. Alternating language modeling for cross-lingual pre-training. In Thirty-Fourth AAAI Conference on Artificial Intelligence.
|
| 310 |
+
|
| 311 |
+
|
| 312 |
+
Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A cross-lingual adversarial dataset for paraphrase identification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3687-3692, Hong Kong, China. Association for Computational Linguistics.
|
| 313 |
+
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 11328-11339. PMLR.
|
| 314 |
+
Wei Zhao, Steffen Eger, Johannes Bjerva, and Isabelle Augenstein. 2020. Inducing language-agnostic multilingual representations. arXiv preprint arXiv:2008.09112.
|
| 315 |
+
Michal Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The united nations parallel corpus v1.0. In LREC, pages 3530-3534.
|
| 316 |
+
|
| 317 |
+
# A Pre-Training Data
|
| 318 |
+
|
| 319 |
+
We reconstruct CCNet<sup>2</sup> and follow Conneau et al. (2020) to reproduce the CC-100 corpus as our monolingual data. The resulting corpus contains 94 languages. We report the language codes and data sizes in Table 7 for the monolingual corpus and in Table 8 for the parallel corpus. We apply the multilingual sampling strategy (Conneau and Lample, 2019) with $\alpha = 0.7$ to both monolingual and parallel data.
|
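For reference, the exponentiated sampling distribution of Conneau and Lample (2019) can be computed as below, where `sizes` holds the per-language data sizes (e.g., in GB, as in Table 7).

```python
def sampling_probs(sizes, alpha=0.7):
    """Multilingual sampling (Conneau and Lample, 2019): languages are
    sampled with p_i proportional to q_i**alpha, where q_i is language i's
    empirical share of the data; alpha < 1 up-weights low-resource languages."""
    total = sum(sizes.values())
    weights = {lang: (n / total) ** alpha for lang, n in sizes.items()}
    norm = sum(weights.values())
    return {lang: w / norm for lang, w in weights.items()}

# e.g. sampling_probs({"en": 731.6, "sw": 0.3}) gives "sw" a far larger
# sampling share than its raw fraction of the corpus.
```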
| 320 |
+
|
| 321 |
+
# B Hyperparameters for Pre-Training
|
| 322 |
+
|
| 323 |
+
As shown in Table 9, we present the hyperparameters for pre-training MT6. We extend the XLM-R (Conneau et al., 2020) vocabulary with 100 additional unique mask tokens to build the vocabulary of MT6 and of our MT5 re-implementation.
|
| 324 |
+
|
| 325 |
+
# C Hyperparameters for Fine-Tuning
|
| 326 |
+
|
| 327 |
+
In Table 10, we present the hyperparameters for fine-tuning MT6 on the end tasks.
|
| 328 |
+
|
| 329 |
+
<table><tr><td>Code</td><td>Size (GB)</td><td>Code</td><td>Size (GB)</td><td>Code</td><td>Size (GB)</td></tr><tr><td>af</td><td>0.2</td><td>hr</td><td>1.4</td><td>pa</td><td>0.8</td></tr><tr><td>am</td><td>0.4</td><td>hu</td><td>9.5</td><td>pl</td><td>28.6</td></tr><tr><td>ar</td><td>16.1</td><td>hy</td><td>0.7</td><td>ps</td><td>0.4</td></tr><tr><td>as</td><td>0.1</td><td>id</td><td>17.2</td><td>pt</td><td>39.4</td></tr><tr><td>az</td><td>0.8</td><td>is</td><td>0.5</td><td>ro</td><td>11.0</td></tr><tr><td>ba</td><td>0.2</td><td>it</td><td>47.2</td><td>ru</td><td>253.3</td></tr><tr><td>be</td><td>0.5</td><td>ja</td><td>86.8</td><td>sa</td><td>0.2</td></tr><tr><td>bg</td><td>7.0</td><td>ka</td><td>1.0</td><td>sd</td><td>0.2</td></tr><tr><td>bn</td><td>5.5</td><td>kk</td><td>0.6</td><td>si</td><td>1.3</td></tr><tr><td>ca</td><td>3.0</td><td>km</td><td>0.2</td><td>sk</td><td>13.6</td></tr><tr><td>ckb</td><td>0.6</td><td>kn</td><td>0.3</td><td>sl</td><td>6.2</td></tr><tr><td>cs</td><td>14.9</td><td>ko</td><td>40.0</td><td>sq</td><td>3.0</td></tr><tr><td>cy</td><td>0.4</td><td>ky</td><td>0.5</td><td>sr</td><td>7.2</td></tr><tr><td>da</td><td>6.9</td><td>la</td><td>0.3</td><td>sv</td><td>60.4</td></tr><tr><td>de</td><td>99.0</td><td>lo</td><td>0.2</td><td>sw</td><td>0.3</td></tr><tr><td>el</td><td>13.1</td><td>lt</td><td>2.3</td><td>ta</td><td>7.9</td></tr><tr><td>en</td><td>731.6</td><td>lv</td><td>1.3</td><td>te</td><td>2.3</td></tr><tr><td>eo</td><td>0.5</td><td>mk</td><td>0.6</td><td>tg</td><td>0.7</td></tr><tr><td>es</td><td>85.6</td><td>ml</td><td>1.3</td><td>th</td><td>33.0</td></tr><tr><td>et</td><td>1.4</td><td>mn</td><td>0.4</td><td>tl</td><td>1.2</td></tr><tr><td>eu</td><td>1.0</td><td>mr</td><td>0.5</td><td>tr</td><td>56.4</td></tr><tr><td>fa</td><td>19.0</td><td>ms</td><td>0.7</td><td>tt</td><td>0.6</td></tr><tr><td>fi</td><td>5.9</td><td>mt</td><td>0.2</td><td>ug</td><td>0.2</td></tr><tr><td>fr</td><td>89.9</td><td>my</td><td>0.4</td><td>uk</td><td>13.4</td></tr><tr><td>ga</td><td>0.2</td><td>ne</td><td>0.6</td><td>ur</td><td>3.0</td></tr><tr><td>gl</td><td>1.5</td><td>nl</td><td>25.9</td><td>uz</td><td>0.1</td></tr><tr><td>gu</td><td>0.3</td><td>nn</td><td>0.4</td><td>vi</td><td>74.5</td></tr><tr><td>he</td><td>4.4</td><td>no</td><td>5.5</td><td>yi</td><td>0.3</td></tr><tr><td>hi</td><td>5.0</td><td>or</td><td>0.3</td><td>zh</td><td>96.8</td></tr></table>
|
| 330 |
+
|
| 331 |
+
Table 7: Statistics of CCNet used for pre-training.
|
| 332 |
+
|
| 333 |
+
<table><tr><td>ISO Code</td><td>Size (GB)</td><td>ISO Code</td><td>Size (GB)</td></tr><tr><td>en-ar</td><td>5.88</td><td>en-ru</td><td>7.72</td></tr><tr><td>en-bg</td><td>0.49</td><td>en-sw</td><td>0.06</td></tr><tr><td>en-de</td><td>4.21</td><td>en-th</td><td>0.47</td></tr><tr><td>en-el</td><td>2.28</td><td>en-tr</td><td>0.34</td></tr><tr><td>en-es</td><td>7.09</td><td>en-ur</td><td>0.39</td></tr><tr><td>en-fr</td><td>7.63</td><td>en-vi</td><td>0.86</td></tr><tr><td>en-hi</td><td>0.62</td><td>en-zh</td><td>4.02</td></tr></table>

Table 8: Parallel data used for pre-training.
|
| 334 |
+
|
| 335 |
+
# D Results on XTREME Cross-Lingual Understanding
|
| 336 |
+
|
| 337 |
+
We present the detailed results of MT6 and our re-implemented MT5 models on XTREME in Tables 11-16.
|
| 338 |
+
|
| 339 |
+
# E Results on Wikilingua Cross-Lingual Summarization
|
| 340 |
+
|
| 341 |
+
Table 17 presents the detailed results of MT6 and our re-implemented MT5 on Wikilingua cross-lingual summarization.
|
| 342 |
+
|
| 343 |
+
|
| 344 |
+
|
| 345 |
+
<table><tr><td>Hyperparameters</td><td>Value</td></tr><tr><td>Layers</td><td>8</td></tr><tr><td>Hidden size</td><td>512</td></tr><tr><td>FFN inner hidden size</td><td>1,024</td></tr><tr><td>Attention heads</td><td>6</td></tr><tr><td>Training steps</td><td>500K</td></tr><tr><td>Batch size</td><td>256</td></tr><tr><td>Input length</td><td>512</td></tr><tr><td>Adam ε</td><td>1e-6</td></tr><tr><td>Adam β</td><td>(0.9, 0.9999)</td></tr><tr><td>Learning rate</td><td>1e-4</td></tr><tr><td>Learning rate schedule</td><td>Linear</td></tr><tr><td>Warmup steps</td><td>10,000</td></tr><tr><td>Gradient clipping</td><td>1.0</td></tr><tr><td>Noise density</td><td>0.5</td></tr><tr><td>PNAT group number</td><td>3</td></tr></table>
|
| 346 |
+
|
| 347 |
+
Table 9: Hyperparameters used for pre-training MT6.
|
| 348 |
+
|
| 349 |
+
<table><tr><td>Hyperparameters</td><td>WikiAnn</td><td>XQuAD</td><td>MLQA</td><td>TyDiQA</td><td>XNLI</td><td>PAWS-X</td><td>Gigaword</td><td>Wikilingua</td></tr><tr><td>Batch size</td><td>32</td><td>32</td><td>32</td><td>32</td><td>32</td><td>32</td><td>32</td><td>32</td></tr><tr><td>Learning rate</td><td>7e-5</td><td>3e-5</td><td>3e-5</td><td>5e-5</td><td>2e-5</td><td>3e-5</td><td>1e-5</td><td>1e-4</td></tr><tr><td>LR schedule</td><td>Linear</td><td>Linear</td><td>Linear</td><td>Linear</td><td>Linear</td><td>Linear</td><td>Linear</td><td>Linear</td></tr><tr><td>Warmup</td><td>10%</td><td>10%</td><td>10%</td><td>10%</td><td>10%</td><td>10%</td><td>10K steps</td><td>2.5K steps</td></tr><tr><td>Epochs/Steps</td><td>5 epochs</td><td>3 epochs</td><td>3 epochs</td><td>40 epochs</td><td>10 epochs</td><td>10 epochs</td><td>20 epochs</td><td>100K steps</td></tr></table>
|
| 350 |
+
|
| 351 |
+
Table 10: Hyperparameters used for fine-tuning MT6 on the end tasks.
|
| 352 |
+
|
| 353 |
+
<table><tr><td>Model</td><td>ar</td><td>he</td><td>vi</td><td>id</td><td>jv</td><td>ms</td><td>tl</td><td>eu</td><td>ml</td><td>ta</td><td>te</td><td>af</td><td>nl</td><td>en</td><td>de</td><td>el</td><td>bn</td><td>hi</td><td>mr</td><td>ur</td><td></td></tr><tr><td>MT5</td><td>26.5</td><td>24.0</td><td>60.7</td><td>43.5</td><td>43.7</td><td>49.2</td><td>65.2</td><td>52.4</td><td>13.1</td><td>26.4</td><td>20.2</td><td>58.2</td><td>69.4</td><td>77.5</td><td>63.6</td><td>51.7</td><td>28.3</td><td>37.9</td><td>27.2</td><td>19.6</td><td></td></tr><tr><td>MT6</td><td>39.6</td><td>22.2</td><td>63.8</td><td>43.7</td><td>40.4</td><td>54.7</td><td>62.9</td><td>42.9</td><td>14.2</td><td>26.4</td><td>15.7</td><td>58.9</td><td>66.0</td><td>78.5</td><td>67.1</td><td>59.6</td><td>39.2</td><td>47.5</td><td>31.8</td><td>25.5</td><td></td></tr><tr><td>Model</td><td>fa</td><td>fr</td><td>it</td><td>pt</td><td>es</td><td>bg</td><td>ru</td><td>ja</td><td>ka</td><td>ko</td><td>th</td><td>sw</td><td>yo</td><td>my</td><td>zh</td><td>kk</td><td>tr</td><td>et</td><td>fi</td><td>hu</td><td>Avg</td></tr><tr><td>MT5</td><td>15.5</td><td>69.8</td><td>69.1</td><td>67.7</td><td>57.6</td><td>61.1</td><td>49.5</td><td>24.1</td><td>26.2</td><td>23.8</td><td>3.0</td><td>54.2</td><td>56.3</td><td>2.8</td><td>29.0</td><td>23.4</td><td>52.8</td><td>57.0</td><td>62.6</td><td>60.9</td><td>43.1</td></tr><tr><td>MT6</td><td>21.7</td><td>70.7</td><td>65.9</td><td>67.8</td><td>64.9</td><td>65.8</td><td>51.6</td><td>23.4</td><td>25.3</td><td>21.9</td><td>4.9</td><td>65.2</td><td>53.6</td><td>8.5</td><td>26.3</td><td>28.6</td><td>55.9</td><td>49.3</td><td>58.2</td><td>57.1</td><td>44.7</td></tr></table>
|
| 354 |
+
|
| 355 |
+
Table 11: Results on WikiAnn named entity recognition.
|
| 356 |
+
|
| 357 |
+
<table><tr><td>Model</td><td>en</td><td>es</td><td>de</td><td>el</td><td>ru</td><td>tr</td><td>ar</td><td>vi</td><td>th</td><td>zh</td><td>hi</td><td>Avg</td></tr><tr><td>MT5</td><td>68.6 / 56.7</td><td>50.2 / 35.6</td><td>47.2 / 34.1</td><td>30.3 / 18.5</td><td>41.4 / 28.5</td><td>35.9 / 21.9</td><td>25.1 / 14.7</td><td>48.6 / 31.6</td><td>31.7 / 24.6</td><td>54.7 / 34.9</td><td>29.7 / 18.6</td><td>42.1 / 29.1</td></tr><tr><td>MT6</td><td>74.2 / 62.4</td><td>57.8 / 43.1</td><td>53.1 / 38.7</td><td>41.6 / 28.2</td><td>51.1 / 35.6</td><td>39.2 / 26.0</td><td>40.4 / 25.2</td><td>53.6 / 35.2</td><td>41.9 / 33.9</td><td>61.7 / 45.8</td><td>39.8 / 26.0</td><td>50.4 / 36.4</td></tr></table>
|
| 358 |
+
|
| 359 |
+
Table 12: Results on XQuAD question answering.
|
| 360 |
+
|
| 361 |
+
<table><tr><td>Model</td><td>en</td><td>es</td><td>de</td><td>ar</td><td>hi</td><td>vi</td><td>zh</td><td>Avg</td></tr><tr><td>MT5</td><td>61.2 / 47.8</td><td>41.7 / 27.1</td><td>37.8 / 25.4</td><td>21.1 / 10.8</td><td>22.6 / 13.7</td><td>40.5 / 24.2</td><td>38.4 / 20.6</td><td>37.6 / 24.2</td></tr><tr><td>MT6</td><td>65.5 / 52.7</td><td>47.8 / 32.0</td><td>43.2 / 29.8</td><td>32.4 / 18.7</td><td>31.8 / 20.2</td><td>45.0 / 28.3</td><td>42.4 / 23.6</td><td>44.1 / 29.3</td></tr></table>
|
| 362 |
+
|
| 363 |
+
Table 13: Results on MLQA question answering.
|
| 364 |
+
|
| 365 |
+
<table><tr><td>Model</td><td>en</td><td>ar</td><td>bn</td><td>fi</td><td>id</td><td>ko</td><td>ru</td><td>sw</td><td>te</td><td>Avg</td></tr><tr><td>MT5</td><td>55.4 / 44.7</td><td>35.3 / 18.3</td><td>18.4 / 9.2</td><td>33.3 / 22.2</td><td>37.3 / 24.8</td><td>22.6 / 16.9</td><td>37.3 / 27.7</td><td>25.5 / 13.6</td><td>11.2 / 4.5</td><td>30.7 / 20.2</td></tr><tr><td>MT6</td><td>58.1 / 48.0</td><td>40.8 / 23.6</td><td>24.1 / 14.2</td><td>39.7 / 27.3</td><td>39.9 / 26.1</td><td>26.9 / 18.4</td><td>41.9 / 31.4</td><td>35.9 / 24.5</td><td>16.3 / 10.9</td><td>36.0 / 24.9</td></tr></table>
|
| 366 |
+
|
| 367 |
+
Table 14: Results on TyDiQA question answering.
|
| 368 |
+
|
| 369 |
+
<table><tr><td>Model</td><td>en</td><td>fr</td><td>es</td><td>de</td><td>el</td><td>bg</td><td>ru</td><td>tr</td><td>ar</td><td>vi</td><td>th</td><td>zh</td><td>hi</td><td>sw</td><td>ur</td><td>Avg</td></tr><tr><td>MT5</td><td>75.4</td><td>62.0</td><td>62.1</td><td>58.9</td><td>58.9</td><td>57.7</td><td>59.0</td><td>55.7</td><td>52.7</td><td>58.4</td><td>55.0</td><td>55.2</td><td>53.6</td><td>42.4</td><td>50.7</td><td>57.2</td></tr><tr><td>MT6</td><td>78.4</td><td>70.6</td><td>69.8</td><td>64.8</td><td>65.7</td><td>66.6</td><td>65.8</td><td>61.6</td><td>63.3</td><td>66.6</td><td>63.1</td><td>66.2</td><td>60.3</td><td>51.5</td><td>56.9</td><td>64.7</td></tr></table>
|
| 370 |
+
|
| 371 |
+
Table 15: Results on XNLI natural language inference.
|
| 372 |
+
|
| 373 |
+
<table><tr><td>Model</td><td>en</td><td>fr</td><td>de</td><td>es</td><td>ja</td><td>ko</td><td>zh</td><td>Avg</td></tr><tr><td>MT5</td><td>91.6</td><td>81.2</td><td>79.9</td><td>80.7</td><td>70.7</td><td>68.2</td><td>73.5</td><td>78.0</td></tr><tr><td>MT6</td><td>93.5</td><td>87.0</td><td>85.4</td><td>87.3</td><td>72.4</td><td>70.1</td><td>79.8</td><td>82.2</td></tr></table>
|
| 374 |
+
|
| 375 |
+
Table 16: Results on PAWS-X cross-lingual paraphrase adversaries.
|
| 376 |
+
|
| 377 |
+
<table><tr><td rowspan="2">Model</td><td colspan="3">es-en</td><td colspan="3">ru-en</td><td colspan="3">vi-en</td><td colspan="3">tr-en</td></tr><tr><td>RG-1</td><td>RG-2</td><td>RG-L</td><td>RG-1</td><td>RG-2</td><td>RG-L</td><td>RG-1</td><td>RG-2</td><td>RG-L</td><td>RG-1</td><td>RG-2</td><td>RG-L</td></tr><tr><td>MT5</td><td>33.12</td><td>11.36</td><td>27.32</td><td>29.14</td><td>8.77</td><td>23.29</td><td>28.96</td><td>8.98</td><td>22.77</td><td>29.31</td><td>10.57</td><td>23.44</td></tr><tr><td>MT6</td><td>33.79</td><td>11.83</td><td>27.90</td><td>30.40</td><td>9.49</td><td>24.32</td><td>29.96</td><td>9.52</td><td>23.72</td><td>29.55</td><td>10.80</td><td>23.82</td></tr></table>
|
| 378 |
+
|
| 379 |
+
Table 17: Evaluation results on Wikilingua cross-lingual abstractive summarization. RG is short for ROUGE. Results of MT5 and MT6 are averaged over three runs.
|
mt6multilingualpretrainedtexttotexttransformerwithtranslationpairs/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:1159728ff7cd39b1ab87bc62511b8d86743543897345d40f8bb4be60e7251a36
|
| 3 |
+
size 684587
|
mt6multilingualpretrainedtexttotexttransformerwithtranslationpairs/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:21fb60173a1c61c856993445bc4c8531ecc1b13ac5814a6e901e1643f2d537f9
|
| 3 |
+
size 419122
|
soyouthinkyourefunnyratingthehumourquotientinstandupcomedy/fb3e4cfb-d66e-428f-bf22-aa2c8c4bda4a_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:56041937af3676ea8439f7ca804a962c208015819800c985b1fe3c5e5f4ce020
|
| 3 |
+
size 44205
|
soyouthinkyourefunnyratingthehumourquotientinstandupcomedy/fb3e4cfb-d66e-428f-bf22-aa2c8c4bda4a_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:6780a1e47a4ddfb0a1bf9283aa1dc034218d0d87d0d1e9fc4e52900940efbc9d
|
| 3 |
+
size 55102
|
soyouthinkyourefunnyratingthehumourquotientinstandupcomedy/fb3e4cfb-d66e-428f-bf22-aa2c8c4bda4a_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c502378098f396c12428ad94fac342322864ef17d0accc739d5d77789d70b835
|
| 3 |
+
size 253863
|
soyouthinkyourefunnyratingthehumourquotientinstandupcomedy/full.md
ADDED
|
@@ -0,0 +1,170 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# "So You Think You're Funny?": Rating the Humour Quotient in Standup Comedy
|
| 2 |
+
|
| 3 |
+
Anirudh Mittal†, Pranav Jeevan‡, Prerak Gandhi‡, Diptesh Kanojia‡, Pushpak Bhattacharyya*
|
| 4 |
+
|
| 5 |
+
$\dagger ,\diamond ,\clubsuit ,\star$ Indian Institute of Technology Bombay, Mumbai
|
| 6 |
+
|
| 7 |
+
†Centre for Translation Studies, University of Surrey, United Kingdom
|
| 8 |
+
|
| 9 |
+
†,★{anirudhmittal, prerakgandhi, pb}@cse.iitb.ac.in
|
| 10 |
+
|
| 11 |
+
$^{\text{a}}$ pranavjp@ee.iitb.ac.in
|
| 12 |
+
|
| 13 |
+
d.kanojia@surrey.ac.uk
|
| 14 |
+
|
| 15 |
+
# Abstract
|
| 16 |
+
|
| 17 |
+
Computational Humour (CH) has attracted the interest of the Natural Language Processing and Computational Linguistics communities. Creating datasets for automatic measurement of humour quotient is difficult due to multiple possible interpretations of the content. In this work, we create a multi-modal humour-annotated dataset ($\sim 40$ hours) using stand-up comedy clips. We devise a novel scoring mechanism to annotate the training data with a humour quotient score using the audience's laughter. The normalized duration of laughter in each clip (laughter duration divided by clip duration) is used to compute this humour quotient score on a five-point scale (0-4). This scoring method is validated by comparison with manually annotated scores, obtaining a quadratic weighted kappa of 0.6. We use this dataset to train a model that provides a "funniness" score, on a five-point scale, given the audio and its corresponding text. We compare various neural language models on the task of humour rating and achieve a score of 0.813 in terms of Quadratic Weighted Kappa (QWK). Our "Open Mic" dataset is released for further research along with the code.
|
| 18 |
+
|
| 19 |
+
# 1 Introduction
|
| 20 |
+
|
| 21 |
+
Humour is one of the most important lubricants of communication between people. Humour is subjective and, at times, also requires cultural knowledge, as humour often depends on stereotypes within a culture or a country. At times, even cultural appropriation is used to convey humour, which can be offensive to minority cultures<sup>1</sup> (Rosenthal et al., 2015; Kuipers, 2017). The factors listed above, along with the underlying subjectivity of humour, render the task of rating humour difficult for machines (Meaney, 2020). The task of humour classification suffers from this subjectivity and from the lack of datasets that rate the "funniness" of content.
|
| 22 |
+
|
| 23 |
+
In this paper, we propose rating humour on a scale of zero to four. We create the first multimodal dataset$^2$ using standup comedy clips and compute the humour quotient of each clip using the audience's laughter. The validity of our scoring criterion is verified by measuring the overall agreement between human annotations and the automated scores. We use audio- and text-based signals from this multi-modal data to generate 'humour ratings'. Since humour annotation is subjective, even data annotated by humans might not provide an objective measure; we reduce this subjectivity by taking laughter feedback from a larger audience. To the best of our knowledge, no previous work has proposed an automatically humour-rated multimodal dataset and used it to build ML models that automatically predict the humour score.
|
| 24 |
+
|
| 25 |
+
Standup comedy is an art form where the delivery of humour has a much larger context, and there are multiple jokes and multiple related punchlines in the same story. The resulting laughter from the audience depends on various factors, including the understanding of the context, delivery, and tonality of the comic. Standup comedy seems to be an ideal choice for a humour rating dataset as it inherently contains some feedback in terms of the audience laughter. We believe a smaller context window restricts computational models, but we know this is not the case for the human audience. Hence, our approach utilises live audience laughter as a measure to rate the humour quotient in the data created. We also believe that such an approach can generate insights into what aspects of stories and their delivery make them funny.
|
| 26 |
+
|
| 27 |
+
Our humour rating model is partly inspired by the character "TARS" from the movie "Interstellar", which generates funny responses based on an adjustable humour setting (Nolan, 2014). An essential step in developing such a machine that can adjust its "funniness" is to create a model that can recognize and rate the "funniness" of a joke.
|
| 28 |
+
|
| 29 |
+
With this work, we aim to release a dataset that can help researchers shed light on the humour quotient of a particular text. The key contributions of this paper are: (a) the creation and public release of an automatically rated multi-modal dataset based on English standup comedy clips, and (b) a manual evaluation of this dataset along with a humour-rating quotient defined on a Likert scale (Likert, 1932).
# 2 Related Work
Most of the previous work on computational humour has focused on the detection of humour. Smaller joke formats, like one-liners, which have just a single line of context, have been used (Hetzron, 1991). Language models like BERT have been used for generating sentence embeddings, which have been shown to outperform other architectures in humour detection on short texts (Annamoradnejad, 2020). Since humour depends on how the speaker's voice changes, audio and language features have both been used as inputs to machine learning models for humour detection. Bertero and Fung (2016) use audio and language features to detect humour in dialogues from the sitcom The Big Bang Theory. Park et al. (2018) passed audio and language features from a conversation dataset into an RNN to create a chatbot that can detect and respond to humour. Hasan et al. (2019) built a multi-modal dataset that uses text, audio, and video inputs for humour detection. There are existing datasets that rate the humour in tweets and Reddit posts with the help of human annotators (Miller et al., 2020; Castro et al., 2018; Weller and Seppi, 2019). Creating human-annotated datasets is costly in terms of both time and money, which has been one of the noted obstacles to creating humour datasets. Yang et al. (2019a,b) used time-aligned user comments to generate automated humour labels for multimodal humour identification tasks and found good agreement with manually annotated data. However, none of the previously existing datasets are created from standup comedy clips.
We present the first multi-modal dataset that uses a non-binary rating system. We use standup comedy clips which makes our dataset scalable and diverse. The dataset is novel in terms of the use of long contextual jokes ( $\sim$ 2 mins) and audience laughter which helps annotate the funniness in each clip in an automated manner.
# 3 Dataset Acquisition and Pre-processing
In this section, we describe the creation of our multi-modal dataset and the manual evaluation performed with the help of human annotators.
We gather 36 English-language standup comedy shows from 32 comedians available on the web, where each original clip is $\sim 1$ hour long. We manually segment them into 927 clips of $\sim 2$ minutes each. The standups are chosen based on the clarity of the audience's feedback laughter. We choose comics from diverse genders, nationalities, and cultures to ensure representation and reduce bias. While segmenting, we discard clips in which the laughter results from interaction with the audience or from personal jokes. We also create text files with the transcript of each audio clip from multiple online sources (Tra, 2020). We collect "unfunny" samples by gathering TED-talk audio clips with a speech-delivery mode similar to standup comedy; we likewise segment these into 128 $\sim 2$ minute audio clips and create text files of their transcripts<sup>3</sup>.
Clips are manually trimmed from the complete audio such that the entire context of the joke is available within the clip, resulting in the overall set of $\sim 2$ minute clips described above. In total, we acquire 1,055 $\sim 2$ minute audio clips (927 standup and 128 TED; cf. Table 1) and corresponding transcript text files in our dataset. The train-test split is chosen to be 70-30.
# 3.1 Laughter Detection
To find the humour quotient rating of each clip, we use the audience's laughter as feedback, as discussed above. We measure the intensity and the recorded time intervals of audience laughter in the clip using the laughter-detection library of Gillick and Wlodarczak (2019), which we modify to output the sum of the durations of all laughs in the clip. Based on hyperparameter tuning, we set the threshold parameters: the minimum probability for laughter detection is set to 0.7, and the minimum laughter duration is set to 0.1 seconds. This allows us to compute the humour quotient from the total duration of the audience laughter in the clip.
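As a rough illustration of this computation, the sketch below applies the two thresholds and normalises the summed laughter duration by the clip length. The detector interface (a list of `(start, end, probability)` tuples) is an assumption made for illustration; it is not the actual API of the Gillick and Wlodarczak (2019) library.

```python
MIN_PROBABILITY = 0.7   # minimum detection probability (from hyperparameter tuning)
MIN_DURATION = 0.1      # minimum laughter duration in seconds

def humour_quotient(laughs, clip_duration_sec):
    """Sum the durations of detected laughs and normalise by clip length."""
    total = 0.0
    for start, end, prob in laughs:
        duration = end - start
        if prob >= MIN_PROBABILITY and duration >= MIN_DURATION:
            total += duration
    return total / clip_duration_sec

# Example: three candidate laughs in a 120-second clip; only the first
# passes both thresholds (the second is too short, the third too uncertain).
laughs = [(10.2, 13.0, 0.92), (45.5, 45.56, 0.80), (70.0, 74.5, 0.65)]
print(humour_quotient(laughs, 120.0))  # 2.8 / 120 ~ 0.023
```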
# 3.2 Scoring Humour Quotient
The sum of the durations of all laugh intervals is detected in each clip. Since longer clips tend to contain more jokes, and hence receive a higher score, we eliminate this bias by dividing the sum by the duration of the clip. We use a Likert scale to account for the subjectivity of human opinion on each clip. The mean $\mu$ and standard deviation $\sigma$ of all the scores are calculated, and a rule for assigning a five-point rating (0-4) to each clip is devised as shown in Table 1 (Column 3). The number of samples for each class in our rating system is also shown in Table 1.

<table><tr><td>Rating</td><td># Clips</td><td>Scoring Criteria</td></tr><tr><td>4</td><td>233</td><td>score > μ + 0.75σ</td></tr><tr><td>3</td><td>185</td><td>μ + 0.75σ ≥ score > μ</td></tr><tr><td>2</td><td>256</td><td>μ ≥ score > μ - 0.75σ</td></tr><tr><td>1</td><td>253</td><td>μ - 0.75σ ≥ score > 0</td></tr><tr><td>0</td><td>128</td><td>score = 0</td></tr></table>

Table 1: Number of clips and the scoring criteria for assigning a humour rating to each clip, based on the mean $(\mu)$ and standard deviation $(\sigma)$ of the scores.
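A minimal sketch of the Table 1 rule, assuming the per-clip humour quotients have already been computed as above. Whether $\mu$ and $\sigma$ are computed with or without the zero-score TED clips is not specified in the text; here they are computed over all scores.

```python
import numpy as np

def assign_ratings(scores):
    """Map normalised laughter scores to the 0-4 scale of Table 1."""
    scores = np.asarray(scores, dtype=float)
    mu, sigma = scores.mean(), scores.std()
    ratings = []
    for s in scores:
        if s == 0:
            ratings.append(0)              # no laughter at all (e.g., TED clips)
        elif s > mu + 0.75 * sigma:
            ratings.append(4)              # score > mu + 0.75*sigma
        elif s > mu:
            ratings.append(3)              # mu + 0.75*sigma >= score > mu
        elif s > mu - 0.75 * sigma:
            ratings.append(2)              # mu >= score > mu - 0.75*sigma
        else:
            ratings.append(1)              # mu - 0.75*sigma >= score > 0
    return ratings
```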
# 3.3 Human Annotation
Three human annotators (2 males, 1 female) between the ages of 21-33 are assigned to rate the humour quotient in our dataset. The annotators are instructed to rate each clip based solely on the audience laughter feedback rather than their perception of the humour quotient of the clip. This allows the annotators to be unbiased towards a particular comedian or humour genre. The annotations were performed in a closed-room environment, without any external noise.
# 4 Experiment Setup and Methodology
In this section, we describe the features used for the humour rating prediction task along with the additional pre-processing in detail.
# 4.1 Network Architecture
The text embeddings and audio features are given as input to separate Bi-LSTM layers, each followed by its own dense layer (Graves et al., 2005), as shown in Figure 1. The outputs of these two pathways are then concatenated and fed to a classifier that outputs a one-hot encoding of the 5-point rating.
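A sketch of this two-pathway design in PyTorch is shown below. The hidden and dense sizes, and the use of the final Bi-LSTM hidden states as summary vectors, are illustrative assumptions; the paper fixes only the overall topology (two Bi-LSTM pathways with dense layers, concatenated into a 5-way classifier).

```python
import torch
import torch.nn as nn

class HumourRater(nn.Module):
    """Two Bi-LSTM pathways (text, audio) -> dense -> concat -> 5-way logits.

    Dimensions are illustrative: 768 for BERT-style text embeddings and 33
    for the concatenated audio features per frame.
    """

    def __init__(self, text_dim=768, audio_dim=33, hidden=256, dense=128):
        super().__init__()
        self.text_lstm = nn.LSTM(text_dim, hidden, batch_first=True,
                                 bidirectional=True)
        self.audio_lstm = nn.LSTM(audio_dim, hidden, batch_first=True,
                                  bidirectional=True)
        self.text_dense = nn.Linear(2 * hidden, dense)
        self.audio_dense = nn.Linear(2 * hidden, dense)
        self.classifier = nn.Linear(2 * dense, 5)  # ratings 0-4

    def forward(self, text_seq, audio_seq):
        # Use the final hidden state of each Bi-LSTM as a summary vector.
        _, (h_t, _) = self.text_lstm(text_seq)
        _, (h_a, _) = self.audio_lstm(audio_seq)
        text_vec = torch.cat([h_t[0], h_t[1]], dim=-1)   # forward + backward
        audio_vec = torch.cat([h_a[0], h_a[1]], dim=-1)
        fused = torch.cat([torch.relu(self.text_dense(text_vec)),
                           torch.relu(self.audio_dense(audio_vec))], dim=-1)
        return self.classifier(fused)                    # logits over 5 classes
```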
Figure 1: Neural Network Architecture

# 4.2 Muting Laughter

Before extracting audio features, we remove the audience laughter and isolate the speaker's voice in each clip. Retaining the audience laughter might enable a neural network to exploit it and predict a score without using information from the text and the other audio features. We envision creating a system that can predict the funniness of any clip; such clips will not have laughter as an indicator, so we train and test our model on the muted clips. Note that laughter is extracted separately to generate the funniness score (Section 3.1). We use Green (2018) to mute audience laughter in the audio segments, resulting in clips that are then used for extracting audio features.
# 4.3 Audio Features
Audio features such as MFCCs, RMS energy, and a spectrogram are extracted from the laughter-muted clips (McFee et al., 2020). These three feature tensors are concatenated to create a single feature vector of dimension 33 for each time sample. The maximum sequence length for the audio embeddings is set to 8000; shorter clips are zero-padded for uniformity. These features convey information about the volume, intonation, and emotion of the speaker, which are important for humour.
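The sketch below shows one plausible realisation of this pipeline with librosa. The exact split of the 33 dimensions (here 20 MFCCs + 1 RMS value + 12 mel bands) is an assumption; the paper states only the total dimensionality.

```python
import numpy as np
import librosa

MAX_FRAMES = 8000  # maximum sequence length used for the audio embeddings

def audio_features(path):
    """Extract a (MAX_FRAMES, 33) feature matrix from a laughter-muted clip."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)           # (20, T)
    rms = librosa.feature.rms(y=y)                               # (1, T)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=12)  # (12, T)
    feats = np.concatenate([mfcc, rms, mel], axis=0).T           # (T, 33)
    # Zero-pad shorter clips (or truncate longer ones) to a fixed length.
    out = np.zeros((MAX_FRAMES, feats.shape[1]), dtype=np.float32)
    frames = min(MAX_FRAMES, feats.shape[0])
    out[:frames] = feats[:frames]
    return out
```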
# 4.4 Textual Features
Additionally, we use the textual features extracted from various language models to ensure that the context of each joke is retained. We use BERT-derived models to generate contextual embeddings for each clip, which ensure attention over the entire text of the clip (Wolf et al., 2020). BERT-derived models can process sequences of token length 512; thus, we employ them for the entire transcript of each $\sim$ 2 minute clip. We sum the output of the final 4 layers from these models to obtain a clip embedding (Alammar, 2018).
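A sketch of this clip-embedding step with the HuggingFace transformers library (Wolf et al., 2020): the last four hidden layers are summed per token, yielding a sequence that can be fed to the text Bi-LSTM. Whether the summed layers are further pooled is a design detail not specified here.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased",
                                  output_hidden_states=True)

def clip_embedding(transcript):
    """Sum the last four hidden layers of BERT to embed one ~2 minute clip."""
    inputs = tokenizer(transcript, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states: tuple of (embedding layer + 12 layers), each (1, T, 768).
    last_four = torch.stack(outputs.hidden_states[-4:])  # (4, 1, T, 768)
    return last_four.sum(dim=0).squeeze(0)               # (T, 768) sequence
```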
As baseline textual features, we use GloVe embeddings (Pennington et al., 2014). We further experiment with BERT<sub>base</sub>, BERT<sub>large</sub>, XLM, DistilBERT, RoBERTa<sub>base</sub> and RoBERTa<sub>large</sub> to generate text embeddings (Devlin et al., 2018; Lample and Conneau, 2019; Sanh et al., 2019; Liu et al., 2019).
# 4.5 Methodology
The audio features and textual features are fed as input to the network to obtain an output rating on the scale of 0-4. To evaluate our approach for scoring each clip, we compute Cohen's kappa with quadratic weights, i.e., the Quadratic Weighted Kappa (QWK) score (Cohen, 1968), between our scoring mechanism (Table 1) and the model output. We use QWK because, unlike accuracy and F-score, it accounts for the possibility that the system assigns a particular label to a clip by chance, and its quadratic weights penalize larger mismatches more heavily than linear or unweighted kappa. Additionally, we validate the scores provided by our scoring mechanism by computing QWK against the human annotation.
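QWK between any two raters on the 0-4 scale can be computed directly with scikit-learn, for example:

```python
from sklearn.metrics import cohen_kappa_score

# Quadratic Weighted Kappa between two raters (e.g., our automatic scores
# and a model's predictions), both on the 0-4 scale; values are illustrative.
auto_scores = [4, 3, 0, 2, 1, 4, 2]
model_preds = [4, 2, 0, 2, 1, 3, 2]
qwk = cohen_kappa_score(auto_scores, model_preds, weights="quadratic")
print(round(qwk, 3))
```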
# 5 Results

<table><tr><td colspan="2">Pairwise Agreement</td></tr><tr><td>Annotators A and B</td><td>0.643</td></tr><tr><td>Annotators B and C</td><td>0.926</td></tr><tr><td>Annotators C and A</td><td>0.611</td></tr><tr><td>Average pairwise Cohen's Kappa</td><td>0.634</td></tr><tr><td>Fleiss' Kappa</td><td>0.632</td></tr><tr><td>Krippendorff's alpha</td><td>0.632</td></tr></table>

Table 2: Inter-Annotator Agreement (Fleiss' Kappa and Krippendorff's alpha) values along with pairwise agreement among the annotators.

In Table 2, we show the Krippendorff's alpha, Fleiss' Kappa, and pairwise agreement between human annotators (Krippendorff, 2004; Cohen, 1960; Fleiss et al., 1971). The inter-annotator agreement between any two annotators is above 0.60, which signifies "substantial" agreement (Fleiss et al., 2003). We evaluate our scoring mechanism by comparing it with the data manually annotated by the human annotators, as shown in Table 3. An average QWK of 0.595 is observed, indicating substantial agreement between the two (Vanbelle, 2014).

<table><tr><td>Annotators</td><td>QWK</td></tr><tr><td>Human A</td><td>0.659</td></tr><tr><td>Human B</td><td>0.562</td></tr><tr><td>Human C</td><td>0.563</td></tr><tr><td>Average</td><td>0.595</td></tr><tr><td>Textual Features</td><td>QWK</td></tr><tr><td>GloVe</td><td>0.691</td></tr><tr><td>BERT<sub>base</sub></td><td>0.722</td></tr><tr><td>BERT<sub>large</sub></td><td>0.796</td></tr><tr><td>DistilBERT</td><td>0.721</td></tr><tr><td>RoBERTa<sub>base</sub></td><td>0.775</td></tr><tr><td>RoBERTa<sub>large</sub></td><td>0.813</td></tr><tr><td>XLM</td><td>0.714</td></tr></table>

Table 3: (a) Quadratic Weighted Kappa (QWK) scores between the scores provided by human annotators and our scoring mechanism; (b) QWK scores between the various language models (combined with our neural network) and our scoring mechanism.

Table 3 also shows the QWK between the neural network outputs and our scoring mechanism; the network outputs show substantial agreement with our scores. Even the GloVe-based model performs reasonably well when matched against our scoring mechanism. Embeddings created from BERT-derived language models show considerable improvement over the baseline performance. RoBERTa<sub>large</sub> outperforms all the other language models, improving on the baseline GloVe score by 12 percentage points. Since RoBERTa is pre-trained on datasets that contain text in a story-like format similar to standup comedy transcripts (Liu et al., 2019), RoBERTa<sub>large</sub> performs better than all the other textual features. Analysing the confusion matrices of these models shows that RoBERTa<sub>large</sub> and BERT<sub>large</sub> can distinguish different levels of humorousness quite well; they show the highest accuracy in identifying the non-funny clips. DistilBERT does not perform as well as BERT<sub>large</sub>: humour requires higher-quality text embeddings to capture the full context, which DistilBERT cannot provide due to its smaller number of parameters.
Larger models with embedding dimensions of 1024 (BERT<sub>large</sub>, RoBERTa<sub>large</sub>) and 2048 (XLM) performed better than the smaller models. A larger neural network needs a dataset of significant size to train on, which also suggests that our dataset is reasonably sized. For BERT<sub>base</sub>, when we increased the Bi-LSTM hidden units in the initial layer from 256 to 512, we saw a slight improvement in the Quadratic Weighted Kappa value, which indicates that larger embeddings need a bigger neural network to classify accurately.
We further probe our best-performing model with an ablation test and observe that audio-based features (0.66 QWK) outperform text-based features (0.48 QWK). This contradicts the findings of Hasan et al. (2019), perhaps because humour in standup often depends on the tonality and well-enunciated punchlines of the comic.
# 6 Discussion
Analysis of the predicted ratings shows that our model can identify non-funny clips and the funniest clips with very high accuracy. When the model errs in assigning ratings to the intermediate funny clips, the assigned ratings are not off by more than one rating point; e.g., a clip rated 3 is assigned a rating of 2 or 4. This error should not be considered a failure of our model, since assigning a precise funniness rating to intermediate funny clips is hard even for humans, and it is reasonable to expect our model to commit errors similar to a human's. In the individual confusion matrices obtained for both feature sets, we observe the most incorrect predictions between classes 2/3 and 3/4. We correlate these results with the human annotation and observe that human annotators also differ mostly on these two classes. All our annotators observed that clips with a moderate amount of laughter could be rated either as 2 or as 3, since such clips are difficult to assign to a particular class.
Additionally, we observe that there are only 16/1055 cases in which none of the ratings of the three human annotators match each other. In only one of these (ratings 4, 1, 2) do the ratings differ by more than 2; in the other 15, the difference between the highest and lowest human rating is at most 2 (e.g., 4-2-3). There are around 400 cases in which exactly two annotators agree, and in the remaining $\sim 600$ cases all three annotators fully agree in their ratings for each clip shown to them.
As we evaluate the clips misclassified by our model, we observe that: 1) sarcastic and ironic statements generate human laughter, but our model does not detect it; 2) morbid jokes, often categorised as "dark humour", are consistently assigned lower scores even though they generate a lot of human laughter; and 3) subtle comparisons, for example likening internet usage to smoking to imply that both are harmful to health, are classified as "mildly funny" (1 or 2) by our model.
We further evaluate clips where the human annotation scores differ by $>2$. Despite detailed guidelines requiring our annotators to focus only on the audience laughter, they may still have attended to the content, and we believe this subjectivity led them to misclassify a few clips. We trace the reasons to: 1) country-specific references, leading to less comprehension by the annotator; 2) insensitivity towards the feelings of women; or 3) bias against a country or race, which again diminishes the absorption of the joke. This observation validates our initial discussion on the subjectivity of humorous content, along with the observation that Annotator A (female) consistently scored such clips lower than Annotators B and C (males). However, we also observed that the audience laughter in such clips is more consistent with the scores provided by Annotators B and C.
# 7 Conclusion and Future Work
We propose a novel scoring mechanism to show that humour rating can be automated using audience laughter, and that it concurs well with the humour perception of humans. We create a multi-modal (audio & text) dataset for the task of humour rating. With the help of three human annotators, we manually evaluate our scoring mechanism and show substantial agreement in terms of QWK. Our evaluation shows that our scoring mechanism can be emulated with the help of pre-existing language models and traditional audio features: in our neural network-based experiments, the outputs obtained using various language models like RoBERTa agree with our scoring mechanism. Despite the inherent subjectivity of humour and its differing perception among humans, we propose a method to rate humour and release this dataset under the CC-BY-SA-NC 4.0 license for further research.
In the future, we would like to evaluate this scoring mechanism with the help of more human annotators. We aim to extend the dataset with the help of more standup comedy clips. Further experiments can be conducted to compare the contribution of audio, video and text features with a more detailed analysis. We would also like to perform experiments by including more audio features like Line Spectral Frequencies, Zero-Crossing rate, and Delta Coefficients. With the release of this dataset, we hope that research in computational humour can be taken further.
# References
2020. Stand-Up Comedy Transcripts.

Jay Alammar. 2018. The Illustrated BERT, ELMo, and co. (How NLP Cracked Transfer Learning).

Issa Annamoradnejad. 2020. ColBERT: Using BERT Sentence Embedding for Humor Detection.

Dario Bertero and Pascale Fung. 2016. Deep Learning of Audio and Language Features for Humor Prediction. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 496-501, Portorož, Slovenia. European Language Resources Association (ELRA).

Santiago Castro, Luis Chiruzzo, Aiala Rosá, Diego Garat, and Guillermo Moncecchi. 2018. A Crowd-Annotated Spanish Corpus for Humor Analysis. In Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media, pages 7-11, Melbourne, Australia. Association for Computational Linguistics.

Jacob Cohen. 1960. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1):37-46.

Jacob Cohen. 1968. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4):213-220.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.

Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378-382.

Joseph L. Fleiss, Bruce Levin, and Myunghee Cho Paik. 2003. Statistical Methods for Rates and Proportions, third edition. John Wiley & Sons.

Jon Gillick and Marcin Wlodarczak. 2019. laughter-detection.

Alex Graves, Santiago Fernández, and Jürgen Schmidhuber. 2005. Bidirectional LSTM networks for improved phoneme classification and recognition. In Proceedings of the International Conference on Artificial Neural Networks (ICANN), pages 799-804.

Jeff Green. 2018. Sitcom laughtrack mute tool.

Md Kamrul Hasan, Wasifur Rahman, AmirAli Bagher Zadeh, Jianyuan Zhong, Md Iftekhar Tanveer, Louis-Philippe Morency, and Mohammed (Ehsan) Hoque. 2019. UR-FUNNY: A multimodal language dataset for understanding humor. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2046-2056, Hong Kong, China. Association for Computational Linguistics.

R. Hetzron. 1991. On the structure of punchlines. Humor: International Journal of Humor Research, 4:61-108.

Klaus Krippendorff. 2004. Content Analysis: An Introduction to Its Methodology, second edition. Sage Publications.

Giselinde Kuipers. 2017. Humour styles and class cultures: Highbrow humour and lowbrow humour in the Netherlands. In The Anatomy of Laughter. Routledge.

Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. Advances in Neural Information Processing Systems (NeurIPS).

Rensis Likert. 1932. A technique for the measurement of attitudes. Archives of Psychology.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. CoRR, abs/1907.11692.

Brian McFee, Vincent Lostanlen, Alexandros Metsai, Matt McVicar, Stefan Balke, Carl Thomé, Colin Raffel, Frank Zalkow, Ayoub Malek, Dana, Kyungyun Lee, Oriol Nieto, Jack Mason, Dan Ellis, Eric Battenberg, Scott Seyfarth, Ryuichi Yamamoto, Keunwoo Choi, viktorandreevichmorozov, Josh Moore, Rachel Bittner, Shunsuke Hidaka, Ziyao Wei, nullmightybofo, Dario Hereñu, Fabian Robert Stöter, Pius Friesch, Adam Weiss, Matt Vollrath, and Taewoon Kim. 2020. librosa/librosa: 0.8.0.

J. A. Meaney. 2020. Crossing the Line: Where do Demographic Variables Fit into Humor Detection? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 176-181, Online. Association for Computational Linguistics.

Tristan Miller, Erik-Lan Do Dinh, Edwin Simpson, and Iryna Gurevych. 2020. OFAI-UKP at HAHA@IberLEF2019: Predicting the Humorousness of Tweets Using Gaussian Process Preference Learning.

Christopher Nolan. 2014. Interstellar.

Kate M. Park, Annie Hu, and Natalie Muenster. 2018. Laughbot: Detecting Humor in Spoken Language with Language and Audio Cues.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.

A. Rosenthal, David Bindman, and A.W.B. Randolph. 2015. No laughing matter: Visual humor in ideas of race, nationality, and ethnicity.

Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. ArXiv, abs/1910.01108.

Sophie Vanbelle. 2014. A New Interpretation of the Weighted Kappa Coefficients. Psychometrika.

Orion Weller and Kevin Seppi. 2019. Humor Detection: A Transformer Gets the Last Laugh.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Zixiaofan Yang, L. Ai, and Julia Hirschberg. 2019a. Multimodal Indicators of Humor in Videos. In 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), pages 538-543.

Zixiaofan Yang, Bingyan Hu, and Julia Hirschberg. 2019b. Predicting Humor by Learning from Time-Aligned Comments. In Proc. Interspeech 2019, pages 496-500.
soyouthinkyourefunnyratingthehumourquotientinstandupcomedy/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a4decc7702b7187ebdf695bf498580726f80e62bd86b2b6bd3507d3df04f1c7d
size 101974
soyouthinkyourefunnyratingthehumourquotientinstandupcomedy/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e227a884a84a2cf5647eaf5b882b7797c61337b416b72934d2cb17a09159f3dd
size 190440
wasitstatedorwasitclaimedhowlinguisticbiasaffectsgenerativelanguagemodels/64f961ba-93f2-4974-b737-5c0b4ef99d51_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b2d32e9e8bd9c81c523da30adf7885c79d87c59955923babcd723c5a4e1fdfdb
size 97453
wasitstatedorwasitclaimedhowlinguisticbiasaffectsgenerativelanguagemodels/64f961ba-93f2-4974-b737-5c0b4ef99d51_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:939a78b669d60a607e2228718c321f3d57b7d683ae25fa48dc8e73d692e190e9
size 114023
wasitstatedorwasitclaimedhowlinguisticbiasaffectsgenerativelanguagemodels/64f961ba-93f2-4974-b737-5c0b4ef99d51_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a2692540c83e3ddcb10a431c32e3b63ed9f35b9e21fe4f1ef3984c654693e43d
size 2675642
wasitstatedorwasitclaimedhowlinguisticbiasaffectsgenerativelanguagemodels/full.md
ADDED
@@ -0,0 +1,368 @@
# Was it “said” or was it “claimed”? How linguistic bias affects generative language models
Roma Patel
Brown University
romapatel@brown.edu

Ellie Pavlick
Brown University
ellie_pavlick@brown.edu

# Abstract
People use language in subtle and nuanced ways to convey their beliefs. For instance, saying claimed instead of said casts doubt on the truthfulness of the underlying proposition, thus representing the author's opinion on the matter. Several works have identified classes of words that induce such framing effects. In this paper, we test whether generative language models are sensitive to these linguistic cues. In particular, we test whether prompts that contain linguistic markers of author bias (e.g., hedges, implicatives, subjective intensifiers, assertives) influence the distribution of the generated text. Although these framing effects are subtle and stylistic, we find qualitative and quantitative evidence that they lead to measurable style and topic differences in the generated text, leading to language that is more polarised (both positively and negatively) and, anecdotally, appears more skewed towards controversial entities and events.
# 1 Introduction
With subtle changes in word choice, a writer can influence a reader's perspective on a matter in many ways (Thomas et al., 2006; Recasens et al., 2013). For example, Table 1 shows how the verbs claimed and said, although reasonable paraphrases for one another in the given sentence, have very different implications. Saying claimed casts doubt on the certainty of the underlying proposition and might implicitly bias a reader's interpretation of the sentence. That is, such linguistic cues (e.g., hedges, implicatives, intensifiers) can induce subtle biases through implied sentiment and presupposed facts about the entities and events with which they interact (Rashkin et al., 2015). When models of language are trained on large web corpora that consist of text written by many people, distributional patterns might lead the lexical representations of these seemingly innocuous words to encode broader information about the opinions, preferences, and topics with which they co-occur. Although studies have shown that humans recognise these framing effects in written text (Recasens et al., 2013; Pavalanathan et al., 2018), it remains to be seen whether language models trained on large corpora respond to, or even recognise, such linguistic cues.

<table><tr><td>Bias Prompt (Assertive)</td><td>Neutral Prompt</td></tr><tr><td>In a speech on June 9, 2005, Bush claimed that the “Patriot Act” had been used to bring charges against more than 400 suspects, more than half of whom had been convicted. William Graff, a former Texas primary voter who was also shot on his go-go days, was shot and killed at one point in the fight between Bush and the two terrorists, which Bush called executive order had taken “adrenaline.”</td><td>In a speech on June 9, 2005, Bush said that the “Patriot Act” had been used to bring charges against more than 400 suspects, more than half of whom had been convicted. “This agreement done are out of a domestic legal order,” Bush said in referring to the presidential Domestic Violence policy and the president’s new domestic violence policy; Roe v. Wade. “The president is calling on everyone..</td></tr></table>

Table 1: Generations from a language model (GPT-2) when prompted with a linguistically biased sentence (left) and one edited to be neutral (right). Prompts are in gray, while model generations are in black.
In this work, we investigate the extent to which generative language models following the GPT-2 (124M-1.5B parameters) (Radford et al., 2019) and GPT-3 (175B parameters) (Brown et al., 2020) architecture respond to such framing effects. We compare the generations that models produce when given linguistically-biased prompts to those produced when given minimally-different neutral prompts. We measure the distributional changes in the two sets of generations, and analyse the frequency of words from specific style lexicons, such as hedges, assertives, and subjective terms. We also investigate differences in the civility of the text generated from the two sets of prompts, as measured by the PERSPECTIVE $\mathrm{API}^1$ , a tool used to detect rude or hateful speech. To understand topical differences, we compare the frequency of references made by models to specific entities and events. Overall, we find that linguistically-biased prompts lead to generations with increased use of linguistically biased words (e.g., hedges, implicatives) and heightened sentiment and polarity. Anecdotally, we see that the named entities and events referred to are also more polarised. Interestingly, we see no significant trends with model size, but observe that even the smallest model we test (124M parameters) is sufficiently capable of differentiating the subtly biased vs. the neutral prompts.
# 2 Setup

# 2.1 Biased vs. Neutral Prompts
As a source of prompts for the model, we use sentences from the "neutral point of view" (henceforth, NPOV) corpus from Recasens et al. (2013). This corpus was created from Wikipedia edits specifically aimed at removing opinion bias and subjective language, and consists of minimally-paired sentences $\langle s_b, s_n \rangle$ . The first sentence $(s_b)$ in each pair is a linguistically biased sentence, i.e., one that was deemed by Wikipedia editors to be in violation of Wikipedia's NPOV policy. The second sentence $(s_n)$ is an edited version of the original, which communicates the same key information but does so with a more neutral tone. For example, the gray text in Table 1 illustrates one such pair, and Table 2 shows example sentences that fall into different linguistic bias categories. Edits range from one to five words, and may include insertions, deletions, or substitutions. For our analysis, we discard sentence pairs in which the edits only added a hyperlink, symbols or URLs, or were spelling-error edits (character-based Levenshtein distance $< 4$ ), leaving us with a total of 11, 735 sentence pairs.
# 2.2 Bias-Word Lexicons
Prior work has studied how syntactic and lexical semantic cues induce biases via presuppositions and other framing effects (Hooper, 1975; Hyland, 2018; Karttunen, 1971; Greene and Resnik, 2009). Recasens et al. (2013) categorise these into two broad classes, namely epistemological bias and framing bias. The former occurs when certain words (often via presupposition) focus on the believability of a proposition, thus casting negative implications. The latter occurs when common subjective terms denote a person's point of view (e.g., pro-life vs. anti-abortion). In our analyses, we use lexicons covering several categories of such linguistic cues, summarized below.
1. Assertives (Hooper, 1975) (words like says, allege, verify and claim) are verbs that take a complement clause, where the degree of certainty depends on the verb. For example, the assertive says is more neutral than argues, since the latter implies that a case must be made, thus casting doubt on the certainty of the proposition. We use the lexicon compiled by Hooper (1975), which contains 67 assertive verbs occurring in 1731 of the total prompts.
2. Implicatives (Karttunen, 1971) are verbs that imply either the truth or the untruth of their complement, based on the polarity of the main predicate. Example words are avoid, hesitate, refrain, attempt. For instance, both coerced into accepting and accepted entail that an accepting event occurred, but the former implies that it was done unwillingly. We use the lexicon from Karttunen (1971) containing 31 implicatives that occur in 935 prompts.
3. Hedges are words that reduce one's commitment to the truth of a proposition (Hyland, 2018). For example, words like apparently, possibly, maybe and claims are used to avoid bold predictions and statements, since they impart uncertainty onto a clause. The lexicon of hedges from Hyland (2018) contains 98 hedge words that occur in 4028 prompts.
4. Report Verbs are verbs used to indicate that discourse is being quoted or paraphrased (Recasens et al., 2013) from a source other than the author. Example report verbs are dismissed, praised, claimed and disputed, which are all references to discourse-related events. We use the lexicon from Recasens et al. (2013) containing 180 report verbs that occur in 3404 prompts.
5. Factives (Hooper, 1975) are verbs that presuppose the truth of their complement clause, often representing a person's stand or experimental result. These include words like reveal, realise, regret or point out. E.g., the phrase revealed that he was lying takes for granted that it is true that he was lying. We use the lexicon from Hooper (1975) that contains 98 words occurring in 4028 prompts.
6. Polar Words are words that elicit strong emotions (Wiebe et al., 2004) thus denoting either a positive or negative sentiment. For example, saying joyful, super, achieve or weak, foolish, hectic have strongly positive and negative connotations respectively. We use the lexicon of positive and negative words from Liu et al. (2005) containing 2006 and 4783 words respectively. These occur in 6187 and 7300 of the total prompts.
7. Subjective Words are those that add strong subjective force to the meaning of a phrase (Riloff and Wiebe, 2003), denoting speculations, sentiments and beliefs, rather than something that could be directly observed or verified by others. These can be categorised into words that are strongly subjective (e.g., celebrate, dishonor) or weakly subjective (e.g., widely, innocently), denoting their reliability as subjectivity markers. The lexicon of strong subjectives contains 5569 words, that occur in 5603 prompts, while the weak subjectives lexicon contains 2653 words that occur in 7520 prompts.
# 2.3 Probing Language Model Generations
We focus on five autoregressive language models of varying size, all Transformer-based (Vaswani et al., 2017) and following the GPT model architecture (Radford et al., 2019). We analyze four GPT-2 models (124M, 355M, 774M, and 1.5B parameters; §3) as well as the GPT-3 model $^{2}$ (175B parameters; §5). The GPT-2 models are pre-trained on the OPENAI-WT dataset, composed of 40GB of English web text available on the internet.
We prompt the language models with each sentence from a pair (the original sentence $s_b$ containing linguistic bias, and the edited sentence $s_n$ with the bias removed) to obtain two sets of generations from the language model, a set $B$ that resulted from biased prompts and a set $N$ that resulted from minimally-differing neutral prompts. Note that we often abuse terminology slightly and use the phrase "biased generations" to refer to $B$ (even though the generations may or may not themselves be biased), and analogously use "neutral generations" to refer to $N$ . We generate up to 300 tokens per prompt and, to improve the robustness of our analyses, generate 3 samples for every prompt. We use a temperature of 1 during generation, and sample from the softmax probabilities produced at each time step using nucleus sampling (Holtzman et al., 2019) with $p = 0.85$ .
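With the HuggingFace transformers library, this generation setup might look as follows (shown for the 124M GPT-2 checkpoint; the larger models are analogous):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # 124M parameters

def generate_continuations(prompt):
    """Three nucleus-sampled continuations (p=0.85, temperature=1)."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model.generate(input_ids,
                             do_sample=True,
                             top_p=0.85,
                             temperature=1.0,
                             max_new_tokens=300,
                             num_return_sequences=3,
                             pad_token_id=tokenizer.eos_token_id)
    # Strip the prompt tokens, keeping only the generated continuation.
    return [tokenizer.decode(seq[input_ids.shape[1]:],
                             skip_special_tokens=True) for seq in out]
```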
# 3 Experiments and Results

# 3.1 Distributional Differences in Generations
First, we must verify that, when present in prompts, the linguistic cues described above lead to measurable differences in the type of language generated by the model. We use perplexity to quantify whether there are differences in the overall distribution of language generated from each of the two sets of prompts. To do this, we pool together all the generations from a model for each set of prompts and model the two distributions with two different language models. More concretely, for each set of generated texts (e.g., considering all generations from biased prompts as corpus $B$ ), we train a simple transformer language model $M_B$ from scratch on $B$ . Once the perplexity is sufficiently low (as tested on a held-out test set), we obtain a perplexity score for the opposite corpus (e.g., the text generated from all neutral prompts, $N$ ) from the trained model. If the perplexity of $M_B$ on corpus $N$ (denoted $M_B(N)$ ) is significantly higher than the perplexity of $M_B$ on corpus $B$ (denoted $M_B(B)$ ), and likewise $M_N(B)$ is significantly higher than $M_N(N)$ , we can say that the two sets of text are distributionally far apart. However, if the perplexities are roughly equal (or even lower), the second corpus is distributionally similar to the first, hinting at no topical or stylistic changes.
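The cross-perplexity check can be sketched as below, assuming $M_B$ and $M_N$ have already been trained from scratch (training details are not given here) and expose a GPT-style causal LM interface:

```python
import math
import torch

def corpus_perplexity(model, tokenizer, texts, max_len=512):
    """Perplexity of a text corpus under a trained causal LM.

    E.g., compare corpus_perplexity(M_B, tok, N) with
    corpus_perplexity(M_B, tok, B): a large gap suggests the two sets of
    generations are distributionally far apart.
    """
    model.eval()
    total_nll, total_tokens = 0.0, 0
    for text in texts:
        ids = tokenizer(text, return_tensors="pt",
                        truncation=True, max_length=max_len).input_ids
        if ids.shape[1] < 2:
            continue  # need at least one next-token prediction
        with torch.no_grad():
            # For causal LMs, passing labels=ids returns the mean
            # next-token negative log-likelihood.
            loss = model(ids, labels=ids).loss
        n = ids.shape[1] - 1
        total_nll += loss.item() * n
        total_tokens += n
    return math.exp(total_nll / total_tokens)
```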
<table><tr><td rowspan="2">Assertives</td><td>Israel argues that the regime of restrictions is necessary to protect Israelis living in Israel proper and the Israeli settlements. The regime of violence is necessary to maintain security for Israel," they said in their..</td></tr><tr><td>Israel says that the regime of restrictions is necessary to protect Israelis living in Israel proper and the Israeli settlements. It says Israel should also keep all the settlements under its control. Some Israeli sources say..</td></tr><tr><td rowspan="2">Implicatives</td><td>Sami Amin Al-Arian (computer engineer who was coerced into accepting a plea bargain which resulted in his conviction with conspiracy to help Palestinian Islamic Jihad. Al-Arian was sentenced to 7 years, whilst Nasser Al-Khan cites a letter written by a former accomplice and Nazi member of a terrorist cell. Al-Khan..</td></tr><tr><td>Sami Amin Al-Arian (computer engineer who accepted a plea bargain which resulted in his conviction with conspiracy to help Palestinian Islamic Jihad. He is now sentenced to two years imprisonment in civilian Kurdistan.) The Peshmerga are quick to denounce the Canadian and US efforts but their Islamists have a..</td></tr><tr><td rowspan="2">Hedges</td><td>The new leaked information indicates China claims that they only suffered 6,954 lost. They are all from a "battleship"—a Chinese military base that was used for training exercises to combat insurgencies in Central..</td></tr><tr><td>The new leaked information indicates that China only suffered 6,954 lost. China's official Xinhua news agency released the report on its Twitter account. However, it could not immediately immediately confirm..</td></tr><tr><td rowspan="2">Report Verbs</td><td>Because of their appeal to women, romantic comedies are sometimes dismissed as "chick flicks". For most men (and I'm talking about you, the majority of men) it is simply not about the "characters" as portrayed..</td></tr><tr><td>Because of their appeal to women, romantic comedies are sometimes called "chick flicks". The first half of the 19th century, romance comedies were generally based on "romantic" and satirical themes, such as..</td></tr><tr><td rowspan="2">Factives</td><td>They point out that many soldiers in the American Revolution were ordinary citizens using their privately owned firearms. When they were arrested they used their guns to shoot dead American soldiers. That..</td></tr><tr><td>They note that many soldiers in the American Revolution were ordinary citizens using their privately owned firearms. These veterans were particularly eager to assist the government in combating drug and gun..</td></tr></table>
Table 2: Example prompts with linguistic bias edits and generated outputs from a GPT-2 model (1558M parameters). Gray text is human-generated input prompts, highlighted to show the bias term (red) that is edited to a more neutral word (blue); black text is a model-generated continuation for that prompt. Generations appear to exacerbate the framing of the prompt.

We perform this for all the model sizes we analyse. Table 3 shows the perplexity differences across models and generations, and we indeed see an increase in perplexity when testing models on the corpus on which they were not trained.
<table><tr><td></td><td>Test corpus</td><td>MB</td><td>MN</td></tr><tr><td>GPT-2</td><td>B</td><td>30.24</td><td>37.40</td></tr><tr><td>(124M)</td><td>N</td><td>35.13</td><td>31.50</td></tr><tr><td>GPT-2</td><td>B</td><td>30.33</td><td>34.60</td></tr><tr><td>(355M)</td><td>N</td><td>34.23</td><td>30.45</td></tr><tr><td>GPT-2</td><td>B</td><td>29.78</td><td>31.78</td></tr><tr><td>(774M)</td><td>N</td><td>31.50</td><td>30.33</td></tr><tr><td>GPT-2</td><td>B</td><td>29.45</td><td>34.98</td></tr><tr><td>(1.5B)</td><td>N</td><td>34.60</td><td>29.90</td></tr></table>
Table 3: Table shows difference in perplexities for a language model $M$ when trained from scratch on generations from biased vs. neutral prompts ( $B$ vs. $N$ respectively), and then tested on the alternative corpus. We see that perplexity is higher on the opposite corpus in all cases, suggesting a distributional difference in the generated text.
# 3.2 Frequency of Linguistic Bias Cues in Generations
To assess whether the linguistic bias words are repeatedly used by models, we compute the frequency with which words from the linguistic bias lexicons (described in Section 2.2) appear in the models' generated texts. For each generation, we compute the "lexicon coverage", i.e., the percentage of words in the generation that fall into a given lexicon. For each lexicon, we do this first for the linguistic bias generations and then for the neutral generations, and assess the difference in coverage across all models.
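A minimal sketch of this coverage computation; whitespace tokenisation and lower-casing are simplifying assumptions:

```python
def lexicon_coverage(generation, lexicon):
    """Percentage of tokens in one generation that appear in a lexicon."""
    tokens = generation.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for tok in tokens if tok in lexicon)
    return 100.0 * hits / len(tokens)

hedges = {"apparently", "possibly", "maybe", "claims"}  # excerpt of the lexicon
print(lexicon_coverage("He claims the report is possibly wrong", hedges))  # ~28.6
```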
Figure 1 shows the lexicon coverage of generations from GPT-2 (124M) for all the lexicons. We see that for two classes of words (implicatives and hedges), linguistic bias generations $B$ have more coverage than neutral generations $N$ , whereas for others (assertives, factives and report verbs) the difference is negligible. (This trend is consistent across model sizes; see Appendix C.2.)

Figure 1: Figure shows percentage lexicon coverage on the $y$ -axis for the GPT-2 (124M) model for five linguistic lexicons. Red and blue bars show scores for bias and neutral ( $B$ and $N$ ) generations, respectively. We report bootstrapped estimates with 1k resamples of the coverage scores (confidence interval=0.95) with variance bounds denoted by the error bar.
# 3.3 Polarity and Subjectivity of Generations
To quantitatively assess the interaction of biased prompts with subjective words, we use the subjectivity lexicon from Riloff and Wiebe (2003). Each word in this lexicon is tagged as one of $\{positive, negative, both, neutral\}$ , along with reliability tags (e.g., strongsubj) that denote strongly or weakly subjective words. We therefore obtain two subjectivity lexicons (strong and weak) that allow us to assess the subjectivity and polarity of the language being generated. Comparing the average coverage of biased generations $B$ to that of neutral generations $N$ , we find that $B$ has higher coverage of positive words (lexicon coverage of 5.0 vs. 4.0), negative words (4.9 vs. 3.8), and strong subjectives (7.8 vs. 7.3). Coverage is roughly equal for weak subjectives (11.1 vs. 11.0). We report bootstrapped estimates for 1000 samples with replacement (confidence interval=0.95) in Appendix C.2.
To further probe into the polarity of text generated, we use a BERT sentiment classifier (Devlin et al., 2018) fine-tuned on the SST-2 dataset to analyse the sentiment of generations. For every generation, we score each sentence with the trained classifier to obtain a positive or negative score. As a quality check, we also do this for the sentences that serve as prompts, and do not see significant differences between prompt types: biased prompts were $69\%$ neutral, $10\%$ positive, and $21\%$ negative while neutral prompts were $67\%$ neutral, $13\%$ positive, and $20\%$ negative.
On generations, however, we do see notable differences. Figure 2 shows the number of generations from each model that were classified as neutral, positive or negative by the classifier. We see that, compared to neutral generations $N$ , the biased generations $B$ have both more positive sentences and more negative sentences. Table 4 shows examples of generated sentences that received positive, negative, and neutral scores from the classifier.

Figure 2: Figure shows percentage of sentences scored negative (by a fine-tuned BERT model) for bias and neutral generations, denoted by red and blue columns respectively. We see that both negative and positive sentiments are higher for biased generations.
Generated Sentence from GPT-2 (124M)
<table><tr><td>+</td><td>A good news story that I’ve posted about the secrecy mailing op-ed defending Western hegemony in East Asia has made the rounds a few times.</td></tr><tr><td>-</td><td>They suffered through painful uncollegiate highs and bad times.</td></tr><tr><td>~</td><td>As part of this nationwide educational project to address inequality, social and cultural determinants of adults, research has always been ...</td></tr></table>
Table 4: Table shows example sentences generated by the GPT-2 (124M) model that were scored positive (+), negative (-) and neutral $(\sim)$ by the classifier.
# 3.4 Controversial and Sensitive Topics
To measure the extent to which generated texts tend towards potentially sensitive topics, we use the PERSPECTIVE API to score generations. This tool is trained to detect toxic language and hate speech, but has known limitations that lead it to flag language as "toxic" based on topic rather than tone, e.g., falsely flagging inoffensive uses of words like gay or muslim (Hede et al., 2021). Thus, we use this metric not as a measure of toxicity, but as a combined measure of whether generated texts cover potentially sensitive topics (sexuality, religion) and whether they contain words that could be considered rude or uncivil (e.g., stupid).
Note that the toxicity of the prompts themselves is fairly low overall: the average scores for neutral and biased prompts are 0.11 and 0.12, respectively. To put this in perspective, the average score for "toxic" prompts from the RealToxicity (Gehman et al., 2020) dataset is 0.59. Given that our prompts are from Wikipedia articles that do not contain offensive language, we interpret high scores on sentences in the model's generations to mean that the model has trended unnecessarily toward topics that are often correlated with toxic language.
Overall, there is not a significant difference in toxicity when comparing generations from the two types of prompts. Figure 3 shows the full distribution of sentence-level scores for $B$ vs. $N$ for GPT-2 (1.5B). The average score for bias generations ( $B$ ) is slightly higher than for neutral generations ( $N$ ) (0.19 vs. 0.16), but the text from all generations is fairly non-toxic overall. We see that the distributions largely overlap, but with the generations from $B$ having a slightly longer right tail. Table 5 shows one anecdotal example of a biased prompt that leads to a generation that includes sentences with high toxicity scores. Further investigation of this trend, ideally on a domain other than Wikipedia, would be an interesting direction for future work.

Figure 3: Figure shows the distribution of toxicity scores for each sentence in a generation, for all generations from biased (red) vs. neutral (blue) prompts. Results are from GPT-2 (1.5B).
# 3.5 Topic Differences of Generated Texts
The NPOV pairs used to prompt models differ from each other by fewer than 5 words, since the edits aimed only to alter the specific words that could implicitly bias the meaning of the sentence. The two sentences are therefore topically identical, with only subtle changes in semantic meaning between the two. Thus, we should not expect any systematic differences in generations from the two sets of prompts. We perform several exploratory analyses to assess this. These analyses are only qualitative, and are intended to provide avenues for future work to investigate.
First, we train a Latent Dirichlet Allocation (LDA) topic model over all of the generations (i.e., pooling together all the generated text from both biased and neutral prompts). We use the trained model to get a topic distribution for each individual generation, and then compare the topic distributions from each set of prompts (biased vs. neutral) by averaging over the distributions of the individual generations from each. We run LDA four times, once for each of four topic sizes (5, 10, 15, 20), and pick the model with the most coherent topic clusters, which we find to be topic size 10. However, when comparing how the LDA model classifies bias vs. neutral generations, we see that the differences are not significant. Therefore, although the words used within each generation might differ, this result suggests that the high-level topic of the two sets of generations remains the same and does not drift from the prompt. We report the topic clusters and classifications in Appendix C.1.
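This procedure might be sketched with gensim as below; using c_v coherence to pick the "most coherent" model is our assumption, since the text does not name a coherence measure:

```python
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

def fit_lda(docs, num_topics):
    """docs: list of token lists pooled from bias and neutral generations."""
    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]
    lda = LdaModel(corpus=corpus, id2word=dictionary,
                   num_topics=num_topics, random_state=0)
    coherence = CoherenceModel(model=lda, texts=docs,
                               dictionary=dictionary,
                               coherence="c_v").get_coherence()
    return lda, coherence

# Fit one model per candidate topic size and keep the most coherent one.
# best_lda, _ = max((fit_lda(docs, k) for k in (5, 10, 15, 20)),
#                   key=lambda pair: pair[1])
```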
As another measure of topic differences, we investigate whether generations differ in the frequency with which they discuss individual entities and events. To measure this, we part-of-speech tag every generation with NLTK (Loper and Bird, 2002), and retain all proper nouns i.e., words tagged as NNP or NNPS. To assess the difference in entities mentioned in the two corpora, we compute a modified TF-IDF measure, shown in Eq. 3.
|
| 134 |
+
|
| 135 |
+
$$
|
| 136 |
+
T F _ {B} = \log (1 + f _ {1} (e, B)) \tag {1}
|
| 137 |
+
$$
|
| 138 |
+
|
| 139 |
+
$$
|
| 140 |
+
I D F _ {N} = \log (1 + \frac {n}{f _ {2} (e , N)}) \tag {2}
|
| 141 |
+
$$
|
| 142 |
+
|
| 143 |
+
$$
|
| 144 |
+
\mathbf {s c o r e} _ {B} = T F _ {B} \times I D F _ {N} \tag {3}
|
| 145 |
+
$$
|
| 146 |
+
|
| 147 |
+
where, $f_{1}(e,B)$ is the frequency of entity $e$ occurring in the corpus $B$ consisting of all generations from linguistically biased prompts, $f_{2}(e,B)$ is the number of texts from $B$ in which the entity occurred, and $n$ is the total number of generations. Equation 1 is the term-frequency, which looks at
|
| 148 |
+
|
| 149 |
+
<table><tr><td>Tox.</td><td>Generation from biased prompt</td><td>Tox.</td><td>Generation from neutral prompt</td></tr><tr><td></td><td>This was a commonplace comparison at the time, and not necessarily a critical one; even Winston Churchill had moderately praised Mussolini.</td><td></td><td>This comparison was made at the time, and it was not always a critical one; even Winston Churchill had moderately praised Mussolini.</td></tr><tr><td>0.15</td><td>Indeed, he was also influenced by German conservatives and German fascists..</td><td>0.04</td><td>But if there is one part of The Spectator's coverage of the events of those two weeks which will not..</td></tr><tr><td>0.37</td><td>Certainly, there was something inherently tyrannical in Nazi Germany, but this was never really..</td><td>0.06</td><td>Taylor, who was still very young and had recently begun work at the magazine.</td></tr><tr><td>0.24</td><td>After all, Hitler was never going to take over..</td><td>0.04</td><td>It is rare that a new writer achieves fame almost..</td></tr><tr><td>0.07</td><td>He had no means of doing so, and in any case he preferred the idea of a Pan-Germanic superstate.</td><td>0.03</td><td>In these two early pieces, Taylor showed why he was a considerable talent, and why he was destined..</td></tr><tr><td>0.40</td><td>In fact, Nazi Germany has to be understood as a backward country with a highly-centralised..</td><td>0.10</td><td>Written in the characteristic short, punchy sentences which were to become his trademark, it was a..</td></tr></table>

This score is computed analogously for the bias generations $(\mathbf{score}_B)$ and for the neutral generations $(\mathbf{score}_N)$, and we rank the entities for each from highest to lowest. The score (for each corpus, e.g., $B$) favours entities that occur frequently in that corpus while not appearing often across the generations of the other corpus (i.e., $N$). The score ranges from 0 to the log of the frequency of the most frequent entity in each corpus. For stability, when computing $TF$ and $IDF$, we only consider an entity to have occurred in a generation if it occurred in at least 2 out of our 3 generations (from 3 random seeds) for a given prompt.

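As a sketch, the scoring in Eqs. 1-3 can be implemented directly. This assumes `bias_entities` and `neutral_entities` are hypothetical lists with one list of extracted NNP/NNPS tokens per generation; the 2-out-of-3-seeds stability filter described above is omitted for brevity, and the +1 in the denominator is our own smoothing to avoid division by zero for entities that never appear in the other corpus.

```python
import math
from collections import Counter

def entity_scores(own, other):
    """Rank entities of `own` by TF(own) * IDF(other), per Eqs. 1-3."""
    n = len(own) + len(other)                                 # total generations
    f1 = Counter(e for ents in own for e in ents)             # f1(e, own)
    f2 = Counter(e for ents in other for e in set(ents))      # f2(e, other)
    scores = {e: math.log(1 + freq) *                         # Eq. 1
                 math.log(1 + n / (f2.get(e, 0) + 1))         # Eq. 2 (smoothed)
              for e, freq in f1.items()}                      # Eq. 3
    return sorted(scores.items(), key=lambda kv: -kv[1])

top_bias = entity_scores(bias_entities, neutral_entities)[:8]
top_neutral = entity_scores(neutral_entities, bias_entities)[:8]
```
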
Table 6 shows the highest-scoring entities for bias vs. neutral generations. We see differences in the entities mentioned in each set of generations, e.g., Trump and Israel occur more in the bias generations, while TM (a medical technique prevalent in scientific journals), U.S., and Duke occur more in the neutral generations.

# 4 Discussion
Through our experiments, we see that language models indeed respond differently when given texts that show markers of opinion bias, manifesting in both topical and stylistic differences in the language generated. This finding has both positive
Table 5: Example generated outputs from a GPT-2 model (1.5B parameters) with sentence-level toxicity scores from the PERSPECTIVE API. Named entities (as tagged by a POS-tagger) are in bold for each generation. This is one example in which the generation from a linguistically-biased prompt contains more sensitive topics (e.g., references to Nazi Germany), while the generation from the neutral prompt is more measured (e.g., references to newspapers and news reporters of that era, such as The Spectator and Taylor). Examples such as this are rare in our analysis of Wikipedia text, but suggest a trend worth investigating further in future work.
<table><tr><td>Model</td><td>Bias generations</td><td>Neutral generations</td></tr><tr><td>124M</td><td>Israel (24.1), Gaza (22.15), Muslim (21.5), Christ (21.13)</td><td>Korea (24.36), Russia (22.33), North (21.81), US (21.71)</td></tr><tr><td>355M</td><td>Israel (22.5), Jews (21.02), Serbia (20.93), Trump (20.9)</td><td>Padres (22.13), National (20.88), Junior (20.69), TM (20.48)</td></tr><tr><td>774M</td><td>Mwa (30.79), Trump (21.45), Rabbi (19.94), God (19.55)</td><td>Duke (21.63), Scot (20.51), Obama (19.74), Yoga (19.45)</td></tr><tr><td>1.5B</td><td>Trump (18.6), Kosovo (18.4), Pakistan (17.8), Muslim (17.82)</td><td>Buckley (21.04), TM (20.53), Lott (19.23), Ireland (18.99)</td></tr></table>

Table 6: Table shows the top-scoring entities for bias versus neutral generations for all four models.

and negative implications. The positive is that differentiating such subtle aspects of language requires sophisticated linguistic representations; if models were indifferent to the types of edits made in the sentences we study here, it would suggest a failure to encode important aspects of language's expressivity. The negative implication is that, when deployed in production, it is important to know how language models might respond to prompts, and the demonstrated sensitivity—which may lead models to generate more polarized language and/or trend toward potentially sensitive topics—can be risky in user-facing applications.

The trends observed here also suggest potential means for intervening to better control the types of generations produced by a model. For example, if linguistic bias cues are used unintentionally by innocent users, it might be possible to use paraphrasing techniques to reduce the risk of harmful unintended effects in the model's output. In contrast, if such linguistic cues are used adversarially, e.g., with the goal of priming the model to produce misleading or opinionated text, models that recognise this implicit bias (Recasens et al., 2013) could be used to detect and deflect such behavior.

The effect of model size We perform all analyses for GPT-2 models ranging from 124M to 1.5B parameters$^{5}$. Overall, we do not see significant correlations between the size of a model and its response to framing effects. Importantly, the observed behaviors arise even in the smallest model (124 million parameters), suggesting that it does not require a particularly powerful model to encode associations between these linguistic cues and the larger topical and discourse contexts within which they tend to occur.

# 5 Investigating Larger Language Models: A Case Study on GPT-3
Post-acceptance, we were given access to GPT-3 (Brown et al., 2020), a language model that is similar in construction to the GPT-2 models but roughly two orders of magnitude larger, containing 175 billion parameters. We perform the same analysis described in prior sections and report results on the GPT-3 model here. Specifically, for the same prompt pairs, we obtain generations of up to 300 words from the GPT-3 model, and we do this 3 times per prompt for robustness. Overall, the conclusions do not differ from those drawn using the smaller GPT-2 models.

Distributional differences in text We train two different language models, $M_B$ and $M_N$, on generations stemming from the biased vs. neutral prompts ($B$ and $N$ respectively), as described in Section 3.1. On evaluation, we see that $M_B$ tested on a held-out corpus of $B$ generations has a perplexity of 29.01, whereas tested on a corpus of $N$ generations it has a perplexity of 33.90. Similarly, $M_N$ tested on $N$ generations has a perplexity of 30.30, and tested on $B$ generations a perplexity of 35.10. Thus, as before, the generations do seem to differ distributionally, since a language model trained on one set of generations has a higher perplexity when tested on the other.

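The cross-perplexity test itself is straightforward to reproduce. The sketch below assumes a GPT-2-style causal LM already fine-tuned on one set of generations (the checkpoint path is a placeholder) and a list of held-out strings from the other set.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("path/to/finetuned_M_B")  # placeholder
model.eval()

def corpus_perplexity(model, texts):
    """Token-weighted perplexity of `model` over a list of raw strings."""
    total_nll, total_tokens = 0.0, 0
    with torch.no_grad():
        for text in texts:
            ids = tokenizer(text, return_tensors="pt",
                            truncation=True, max_length=1024).input_ids
            # With labels=input_ids, HF returns the mean next-token NLL.
            loss = model(ids, labels=ids).loss
            total_nll += loss.item() * ids.size(1)
            total_tokens += ids.size(1)
    return math.exp(total_nll / total_tokens)

print(corpus_perplexity(model, heldout_neutral_generations))
```
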
Polarity of Generated Text We score the sentiment of generations using the same BERT-base classifier fine-tuned on the SST-2 dataset as described in Section 3.3. We refer to generations from bias and neutral prompts as $B$ and $N$ respectively. We see that $54\%$ of $N$ generations were scored as neutral by the classifier vs. only $31\%$ of $B$ generations. Meanwhile, $46\%$ of $B$ vs. $30\%$ of $N$ were scored as negative, and $23\%$ of $B$ vs. $16\%$ of $N$ were scored as positive. Therefore, as with the GPT-2 models, we see that $N$ generations (from neutral edited prompts) tend to be less polarized than $B$ generations (from the biased prompts). Table 7 shows an example in which the generation from the biased prompt contains more sensitive topics (homosexuality, reference to draconian laws) than does the generation from the neutral prompt.

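A sketch of the polarity scoring, assuming any BERT-base checkpoint fine-tuned on SST-2 (the model path is a placeholder). SST-2 is a binary task, so treating low-confidence predictions as "neutral" is an assumption we make here for the three-way breakdown.

```python
from collections import Counter
from transformers import pipeline

# Label strings (e.g. "POSITIVE"/"NEGATIVE") depend on the chosen checkpoint.
classifier = pipeline("text-classification", model="path/to/bert-sst2")

def polarity_counts(generations, neutral_threshold=0.75):
    counts = Counter()
    for out in classifier(generations, truncation=True):
        if out["score"] < neutral_threshold:
            counts["neutral"] += 1            # low confidence -> neutral
        else:
            counts[out["label"].lower()] += 1
    return counts

print(polarity_counts(bias_gens))
print(polarity_counts(neutral_gens))
```
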
References to Entities We POS tag the generations from the biased and neutral prompts respectively and score them with the TF-IDF score (modified to highlight the differences in entities) as described in Equation 3. Here, we do not see any obvious trend. The 5 top scoring entities from the bias generations are Amin (30.53), Georgia (30.09), Passo (29.38), Japan (23.08), Sirach (22.47) whereas entities from the neutral generations are Brazil (30.09), Moscow (25.94), Jefferson (22.9), Northern (22.4), Serbs (22.4).
# 6 Related Work
Implicit linguistic bias in text We build upon previous work on stance recognition (Somasundaran and Wiebe, 2010; Park et al., 2011), subjectivity detection (Wiebe et al., 2004), implicatures in sentiment analysis (Greene and Resnik, 2009; Feng et al., 2013) and connotation frames (Rashkin et al., 2015). Several previous works have explored Wikipedia-specific writing style, focusing on

<table><tr><td colspan="2"><b>Generations from Biased Prompt (GPT-3)</b></td></tr><tr><td></td><td>Today the Church of Ireland is, after the Roman Catholic Church, the second largest Christian grouping on the island of Ireland and the largest..</td></tr><tr><td>+</td><td>From the early 70s the Roman Catholic Church realized the social gains it had made in hundreds of millions of dollars through a diplomatic..</td></tr><tr><td>-</td><td>Famously known for its financial and business stranglehold over all non-Catholics and homosexuals and for draconian laws and taxes policies..</td></tr><tr><td>~</td><td>The newly reemerged nomenklatura was well established, its biggest regions containing over 60 million people and it even overseen by its..</td></tr><tr><td colspan="2"><b>Generations from Neutral Prompt (GPT-3)</b></td></tr><tr><td></td><td>Today the Church of Ireland is, after the Roman Catholic Church, the second largest denomination on the island of Ireland and the largest..</td></tr><tr><td>+</td><td>The Anglican Church of Ireland is also unique in the fact that it is not a Roman Catholic Church with a sacramental plan going on with its own..</td></tr><tr><td>-</td><td>These laymen are expected to work tirelessly to build up the local parishes, encourage local understanding of Christ and innovate new ways of..</td></tr><tr><td>~</td><td>It's a large organisation, broadcast evenly between diocesan and four-man-church centred parishes interest which enables the development of parishes..</td></tr></table>

Table 7: Table shows example sentences generated by the GPT-3 model that were scored positive (+), negative (-) and neutral $(\sim)$ by the classifier.
communicative quality (Lipka and Stein, 2010) and biased content (Al Khatib et al., 2012). We build on a large literature on subjectivity that links bias to lexical and grammatical cues, e.g., work identifying common linguistic classes that bias-inducing words fall into (Wiebe et al., 2004), and work on building predictive models to identify bias-inducing words in natural language sentences (Recasens et al., 2013; Conrad et al., 2012). Different from the above, our work probes generative language models for these effects.

Societal biases in language models Several recent works have looked at bias in language models and the societal effects it may have (Bender et al., 2021; Nadeem et al., 2020). Most relevant is work on identifying "triggers" in text that may lead to toxic degeneration (Wallace et al., 2019), finding that particular nonsensical text inputs led models to produce hate speech. Unlike this work, we focus on measuring LMs' sensitivity to subtle paraphrases that exhibit markers of linguistic bias (Recasens et al., 2013) and remain within the range of realistic natural language inputs. Gehman et al. (2020) specifically analyse toxicity and societal biases in generative LMs, noting that degeneration into toxic text occurs both for polarised and seemingly innocuous prompts. Different from the above, we investigate a more general form of bias: the framing effects of linguistic classes of words that reflect a subtler form of bias, but may nevertheless induce societal biases in generated text.

# 7 Conclusion
We investigate the extent to which framing effects influence the generations of pretrained language models. Our findings show that models are susceptible to certain types of framing effects, often diverging into more polarised points of view when prompted with them. We analyse the semantic attributes, distribution of words, and topical nature of text generated from minimal-edit pairs exhibiting these types of linguistic bias. We show that cues of opinion bias can yield measurable differences in the style and content of generated text.

# Acknowledgements
We would like to acknowledge Dean Carignan, Pooya Moradi, Saurabh Tiwary and Michael Littman for formative advice and discussions throughout the project, as well as Kate Cook and Mike Shepperd for help with computing infrastructure for the large language models. We also thank Roy Zimmerman, Eric Horvitz, Ali Alvi and many others at Microsoft Research for feedback given at many stages of the project, and the anonymous reviewers and area chairs, whose feedback, questions and suggested changes helped make the paper clearer. This work was supported by the Microsoft Turing Academic Program, by NSF under contract number IIS-1956221, and by the IARPA BETTER program.

# References
Al Khatib, K., Schütze, H., and Kantner, C. (2012). Automatic detection of point of view differences in Wikipedia. In Proceedings of COLING 2012, pages 33-50.

Bender, E. M., Gebru, T., McMillan-Major, A., and Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 610-623.

Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165.

Conrad, A., Wiebe, J., and Hwa, R. (2012). Recognizing arguing subjectivity and argument tags. In Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics, pages 80-88.

Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Feng, S., Kang, J. S., Kuznetsova, P., and Choi, Y. (2013). Connotation lexicon: A dash of sentiment beneath the surface meaning. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1774-1784.

Gehman, S., Gururangan, S., Sap, M., Choi, Y., and Smith, N. A. (2020). RealToxicityPrompts: Evaluating neural toxic degeneration in language models. arXiv preprint arXiv:2009.11462.

Greene, S. and Resnik, P. (2009). More than words: Syntactic packaging and implicit sentiment. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 503-511.

Hede, A., Agarwal, O., Lu, L., Mutz, D. C., and Nenkova, A. (2021). From toxicity in online comments to incivility in American news: Proceed with caution. arXiv preprint arXiv:2102.03671.

Holtzman, A., Buys, J., Du, L., Forbes, M., and Choi, Y. (2019). The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751.

Hooper, J. B. (1975). On assertive predicates. In Syntax and Semantics, Volume 4, pages 91-124. Brill.

Hyland, K. (2018). Metadiscourse: Exploring Interaction in Writing. Bloomsbury Publishing.

Karttunen, L. (1971). Implicative verbs. Language, pages 340-358.

Lipka, N. and Stein, B. (2010). Identifying featured articles in Wikipedia: Writing style matters. In Proceedings of the 19th International Conference on World Wide Web, pages 1147-1148.

Liu, B., Hu, M., and Cheng, J. (2005). Opinion Observer: Analyzing and comparing opinions on the web. In Proceedings of the 14th International Conference on World Wide Web, pages 342-351.

Loper, E. and Bird, S. (2002). NLTK: The Natural Language Toolkit. arXiv preprint cs/0205028.

Nadeem, M., Bethke, A., and Reddy, S. (2020). StereoSet: Measuring stereotypical bias in pretrained language models. arXiv preprint arXiv:2004.09456.

Park, S., Lee, K.-S., and Song, J. (2011). Contrasting opposing views of news articles on contentious issues. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 340-349.

Pavalanathan, U., Han, X., and Eisenstein, J. (2018). Mind your POV: Convergence of articles and editors towards Wikipedia's neutrality norm. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW):1-23.

Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.

Rashkin, H., Singh, S., and Choi, Y. (2015). Connotation frames: A data-driven investigation. arXiv preprint arXiv:1506.02739.

Recasens, M., Danescu-Niculescu-Mizil, C., and Jurafsky, D. (2013). Linguistic models for analyzing and detecting biased language. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1650-1659.

Riloff, E. and Wiebe, J. (2003). Learning extraction patterns for subjective expressions. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 105-112.

Sennrich, R., Haddow, B., and Birch, A. (2015). Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.

Somasundaran, S. and Wiebe, J. (2010). Recognizing stances in ideological on-line debates. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 116-124.

Thomas, M., Pang, B., and Lee, L. (2006). Get out the vote: Determining support or opposition from congressional floor-debate transcripts. arXiv preprint cs/0607062.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. (2017). Attention is all you need. arXiv preprint arXiv:1706.03762.

Wallace, E., Feng, S., Kandpal, N., Gardner, M., and Singh, S. (2019). Universal adversarial triggers for attacking and analyzing NLP. arXiv preprint arXiv:1908.07125.

Wiebe, J., Wilson, T., Bruce, R., Bell, M., and Martin, M. (2004). Learning subjective language. Computational Linguistics, 30(3):277-308.

# Overview of Appendix
We provide, as supplementary material, additional information about the dataset and models used, as well as additional results across all models.
# A Modeling Details
We use four GPT-2 models (Radford et al., 2019) from the Hugging Face Transformers library. Each is a pretrained autoregressive transformer model trained on the WebText corpus, which contains around 8 million documents. The top 15 domains by volume in WebText are: Google, Archive, Blogspot, GitHub, NYTimes, WordPress, Washington Post, Wikia, BBC, The Guardian, eBay, Pastebin, CNN, Yahoo!, and the Huffington Post. Individual model parameter and layer counts are shown in Table 8.

<table><tr><td>Parameters</td><td>Layers</td></tr><tr><td>124M</td><td>12</td></tr><tr><td>355M</td><td>24</td></tr><tr><td>774M</td><td>36</td></tr><tr><td>1558M</td><td>48</td></tr></table>
The pretrained models use byte-pair encoding (BPE) tokens (Sennrich et al., 2015) to represent frequent symbol sequences in the text, and this tokenisation is applied to every new input prompt before generating text from the model. We report the generation hyperparameters used with the pretrained models in Table 9.

Table 8: Table shows model architecture details for the four GPT-2 models we use.
<table><tr><td>Hyperparameter</td><td>Selection</td></tr><tr><td>number of samples</td><td>3</td></tr><tr><td>nucleus sampling p</td><td>0.85</td></tr><tr><td>temperature</td><td>1</td></tr><tr><td>max length</td><td>300</td></tr></table>

Table 9: Table shows the generation hyperparameters we use for all four GPT-2 models.

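With the Table 9 settings, sampling can be reproduced roughly as follows; note that Hugging Face's `max_length` counts BPE tokens rather than words, so this is an approximation of the 300-word budget.

```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # 124M; swap in larger sizes

def sample_generations(prompt, n_samples=3):
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model.generate(ids,
                         do_sample=True,          # sample rather than greedy
                         top_p=0.85,              # nucleus sampling p
                         temperature=1.0,
                         max_length=300,
                         num_return_sequences=n_samples,
                         pad_token_id=tokenizer.eos_token_id)
    return [tokenizer.decode(o, skip_special_tokens=True) for o in out]
```
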
# B Data
We use the NPOV corpus of Wikipedia edits from Recasens et al. (2013) to prompt language models. For the lexicon coverage metrics, we use the lexicons of linguistically biased words compiled in that paper. Table 10 shows the size and occurrence (the number of prompts that contain a word from that lexicon) of each lexicon, as well as four example words for each.

<table><tr><td>Lexicon</td><td>Size</td><td>Occ.</td><td>Example words</td></tr><tr><td>Assertives</td><td>67</td><td>1731</td><td>allege, verify, hypothesize, claim</td></tr><tr><td>Implicatives</td><td>31</td><td>935</td><td>avoid, hesitate, refrain, attempt</td></tr><tr><td>Hedges</td><td>98</td><td>4028</td><td>apparent, seems, unclear, would</td></tr><tr><td>Report Verbs</td><td>180</td><td>3404</td><td>praise, claim, dispute, feel</td></tr><tr><td>Factives</td><td>25</td><td>373</td><td>regret, amuse, strange, odd</td></tr><tr><td>Positive Words</td><td>2006</td><td>6187</td><td>achieve, inspire, joyful, super</td></tr><tr><td>Negative Words</td><td>4783</td><td>7300</td><td>criticize, foolish, hectic, weak</td></tr><tr><td>Strong Subjectives</td><td>5569</td><td>5603</td><td>celebrate, dishonor, overkill, worsen</td></tr><tr><td>Weak Subjectives</td><td>2653</td><td>7520</td><td>widely, innocently, although, unstable</td></tr></table>
Table 10: Table shows statistics of the lexicons we use. For each row (lexicon), the second column shows the size (number of words in each lexicon), the third shows occurrence (number of prompts that contain a lexicon word), and the last column shows example words.
# C Additional Experimental Results
# C.1 Topic Model Analysis
First, we train a Latent Dirichlet Allocation (LDA) topic model over all of the generations (i.e., pooling together all the generated text from both biased and neutralised prompts). We use the trained model to get a topic distribution for each individual generation, and then compare the topic distributions from each set of prompts (biased vs. neutral) by averaging over the distributions of the individual generations from each. We perform this process by running LDA (parameterised by the number of topics) 4 times, once for each of 4 topic sizes (5, 10, 15, 20).

Table 11 shows how the generations were classified by the 10-topic LDA model (full distributions shown in Table 11), i.e., for each topic, whether significantly more bias or neutral generations were classified as falling into that topic. We see several differences in the classification of generations into topics. Topics about police, arabic and british, irgun (1 and 5 respectively) contain more linguistic-bias generations, whereas topics about american, group, church, school and university, news (4, 7, and 10 respectively) contain more generations from neutral prompts, as characterised by the words in each generation. For the remaining topics, about team, pakistan, tm, meditation, health, laws and election, committee (2, 3, 6, and 9 respectively), we see no significant trends in the difference in classifications of biased and neutral generations. We therefore see that the two sets of generations are fairly topically similar, and the minimal edits do not lead them to stray from their topic to a great degree.

<table><tr><td>Most-weighted words</td><td>Classification</td></tr><tr><td>1: police name best live arabic information although mr children</td><td>b > n, P(t|b) = .61</td></tr><tr><td>2: also new will team use pakistan now make law right</td><td>b ~ n, P(t|b) = .50</td></tr><tr><td>3: tm national number jewish history division meditation released without</td><td>b ~ n, P(t|b) = .49</td></tr><tr><td>4: people many american since group movement well even way press</td><td>b < n, P(t|b) = .63</td></tr><tr><td>5: two time years british irgun three sex season high</td><td>b > n, P(t|b) = .67</td></tr><tr><td>6: health album don al young claimed services effect include laws</td><td>b ~ n, P(t|b) = .48</td></tr><tr><td>7: one first world church red game league school work</td><td>b < n, P(t|b) = .60</td></tr><tr><td>8: said government state united country war including president political states</td><td>b ~ n, P(t|b) = .50</td></tr><tr><td>9: election committee russia federal role study possible sarkozy receive consider</td><td>b ~ n, P(t|b) = .49</td></tr><tr><td>10: may used however university series maharishi news based life organization</td><td>b < n, P(t|b) = .60</td></tr></table>

Table 11: Table shows the most-weighted words from an LDA topic model for each topic (row). The right-most column compares the classifications of generations, i.e., when a larger number of bias $(b)$ generations than neutral $(n)$ generations are classified into a topic, we say $b > n$.

# C.2 Percentage Lexicon Coverage
Figures 4 to 6 show lexicon coverage scores for the models and lexicons we use. We see that the linguistic bias generations have higher coverage than the neutral generations for all model sizes, although the differences are very small.


Figure 4: Figure shows percentage lexicon coverage for the GPT-2 (124M) model for five linguistic bias lexicons. The red and blue bars for each lexicon denote generations from linguistic bias and neutral generations respectively. We report bootstrapped estimates of the coverage scores (confidence interval=0.95) with variance bounds denoted by the line bar.
# C.3 Controversial and Sensitive Generations
We use the PERSPECTIVE API to score each sentence generated from a model with a toxicity score between 0 and 1.

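For reference, a minimal sketch of one such request against the public PERSPECTIVE API REST endpoint; the request and response shapes follow the public documentation, and `API_KEY` is a placeholder.

```python
import requests

API_KEY = "..."  # placeholder; requires Perspective API access
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(sentence):
    body = {"comment": {"text": sentence},
            "requestedAttributes": {"TOXICITY": {}}}
    resp = requests.post(URL, json=body).json()
    # Summary score is a probability-like value in [0, 1].
    return resp["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```
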
# D Generation Samples
Table 12 shows 3 example generations for the same prompt from one model, allowing us to assess the difference. Note that for all quantitative analyses, we average metrics over the 3 generations for each prompt.
Table 13 shows 4 example generations for the same prompt from the four different models of increasing size, allowing us to assess the differences. When evaluating topical differences for bias and neutral model outputs, we pool together all model generations for the two categories; however, for coverage and perplexity metrics, we compute results over each model's generations individually.


Figure 5: Figure shows the percentage lexicon coverage across all four models for the five linguistic bias lexicons.

Figure 6: Figure shows the percentage lexicon coverage across all four models for the four sentiment and polarity lexicons.

Figure 7: Figure shows the distribution of toxicity scores (for each sentence in a generation) for all generations from biased (red) vs. neutral (blue) prompts.

The direction and degree of media bias in various countries is sometimes disputed, although its causes are both practical and theoretical.
For example, in the United States, informal media bid campaigns target people of color, a biased electorate, and Republican and Democratic leaders alike. For completely rational datasets, we dynamically adjust for unequal rooting narratives that promote any ideological position over a nondischarged secular voice over a sizable segment of the population. Restrictions dealing with the opinions of people of color can also mask biases in favor of cultural, philosophical, and community critical analysis and modeling. Data on support for different political parties suggests subtle business-maintenance bias, though rarely real or serious enough to justify formal inclusion in politics. Improvements in the tools used to detect class bias emerge in only one distinctive territory: findings and methods.
From the perspective of U.S. demographic issues in decades past, word flows and the U.S. political landscape are becoming more and more clear. Travel writers, where the timing of events is important, tend to devote more attention to adventures rather than contests for strategic positioning. Such games can help read minds and linger on strategies for finding the next breath. The advent of progressive political intentions drives broader media resources, costs, and limits. By design, those publications actively support traditional partisan cartoons. While movies can cast and just so happen to dream up clever, often too clever story beats, those stories often exaggerate the number of actual, creative hours one could work at breaking the cycle for a living. When evaluating the defenders of the free-market, "made in America" could effectively be encompassing the American human psyche. The idea currently dominating the free-market cycle is agreed upon by virtually all people, and the increase of media desire to write stories about them breaks the dogmatic shell mentality that has polytically sheltered many Americans from the handgun in which they have witnessed so many recent tragedies, resulting in a world where the national fireball allocates less fuel and more impact for platform shooters.
A sampling strategy from Iran's research institutions, used a variety of methodologies and methods to try to get figures on media bias in Iran for the International Committee of the Red Cross (ICRC). In addition, troops at Iran's defense ministry had several big organizations work to reach every data they could - images at AKA from Tina Aharonvistov, a waste of taxpayer money, the Prisoner Memorial Museum, and Avon University. Therefore, the international committee's classified use of the data - where no clear antipuproom bias was removed from the data they selected - were used for the purposes of data analysis by IRGC medial urban and regional studies firms. He inserted the number of non-Jewish IDF soldiers by the number of units in Arab Green Zone and Mosaddova Noman designated to. After adjusting for those added until 1966. (Adjusted for 1967 borders 6442.08, 9563.67, and 6461.12 when using 2003 borders 6287.41, 6382.49 and 6483.28.)
Table 12: Three example generated outputs from a GPT-2 model (1558M parameters) for a given sentence prompt. The maximum generation length is 300, but models may emit an EOS token at any point to stop generating early.


Figure 8: Figure shows the distribution of lengths (number of words) of generations over all prompts for the four GPT-2 models (124M, 355M, 774M and 1558M).

This study allegedly contributes to the understanding of cultures, but, reciprocally, to the understanding of mathematics.
A positive world-view of real and relative variables correlates well with this. Might there be different degrees of causality? At what degree proportion of curatively real variables explains convex and divergent patterns of change (as temperatures and plate tectonics lubricated into strength of man)? In our present study the sufficiency or the equivalence of causality associated with conflicts among movements of expansion by man in the Late Pleistocene humans might have played a role in this relation as well.
However, as early as 1916, some similarly skewed dogmatic notes, some from Gödel or perhaps Paris of Fouché nudged from Gödel's pads (for all isarkipsis) to more realistic forms of placement, shafts, and use of leads met with sympathy. See Fisher; Durand. Readff 2006 considers various measures of future quantity, power, etc., and concludes, pretty simply, that mathematicians seem to be interested in the physical inventories of their competitors.
The importance of mathematical beauty is rather rare in the grand scheme of stated reasons, i.e., the beauty, richness, etc., of mathematical ideas are not impairment by the time square envelope. Why can't we ever seem to find a transformer where our winding symbol is both simple and reversible?
It inquires into and investigates some subjects that ought to be studied only officially. The use of so-called documentaries as a source for a redacted historical trial is a matter of relevance to the contribution of contemporary mathematics of space to science and technology. There are many curious coincidences in this case—he advocated resigning from teaching in the Azores (at the Islamic legalite - institution then it had Sonderkommando) where he had lived (he argued that this seems "not..
Table 13: Four example generated outputs from the four different GPT-2 models (124M, 355M, 774M and 1558M parameters, one per column) for the given sentence prompt.

wasitstatedorwasitclaimedhowlinguisticbiasaffectsgenerativelanguagemodels/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:163f7e7ade5291729395118dfe80799bb660516c7f14798b2b0f757aca07cfc8
size 980872

wasitstatedorwasitclaimedhowlinguisticbiasaffectsgenerativelanguagemodels/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:67fa72a59dbba6fac2d7dd33b2ed00a795bb6ffb40df546d54cbc227a13dd493
size 445023

wikilysupervisedneuraltranslationtailoredtocrosslingualtasks/7a19827a-97b7-43be-8319-df1d7a9bdf74_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:25b4eb16f48bb99eb7cac784f8df393614075527c3a7b79e350d053ea57fccab
size 115377

wikilysupervisedneuraltranslationtailoredtocrosslingualtasks/7a19827a-97b7-43be-8319-df1d7a9bdf74_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:072c8ef6eaf5c693331d6911d5d1c17cf2b86794fbcb1650e967835e6fffa70b
size 145772

wikilysupervisedneuraltranslationtailoredtocrosslingualtasks/7a19827a-97b7-43be-8319-df1d7a9bdf74_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2b6cbb80ff18a190c7097ccbf579bcd8144eee79d80e45165c44f6aa80be6d2e
size 1896620

wikilysupervisedneuraltranslationtailoredtocrosslingualtasks/full.md
ADDED
@@ -0,0 +1,441 @@

# "Wikily" Supervised Neural Translation Tailored to Cross-Lingual Tasks

Mohammad Sadegh Rasooli $^{1*}$ Chris Callison-Burch $^{2}$ Derry Tanti Wijaya $^{3}$
$^{1}$ Microsoft

$^{2}$ Department of Computer and Information Science, University of Pennsylvania
$^{3}$ Department of Computer Science, Boston University
mrasooli@microsoft.com, ccb@seas.upenn.edu, wijaya@bu.edu
# Abstract
We present a simple but effective approach for leveraging Wikipedia for neural machine translation, as well as for the cross-lingual tasks of image captioning and dependency parsing, without using any direct supervision from external parallel data or from supervised models in the target language. We show that the first sentences and titles of linked Wikipedia pages, as well as cross-lingual image captions, are strong signals for seed parallel data from which to extract bilingual dictionaries and cross-lingual word embeddings for mining parallel text from Wikipedia. Our final model achieves high BLEU scores that are close to or sometimes higher than strong supervised baselines in low-resource languages; e.g. supervised BLEU of 4.0 versus 12.1 from our model in English-to-Kazakh. Moreover, we tailor our "wikily" supervised translation models to unsupervised image captioning and cross-lingual dependency parser transfer. In image captioning, we train a multitasking machine translation and image captioning pipeline for Arabic and English in which the Arabic training data is a translated version of the English captioning data, produced by our wikily supervised translation models. Our captioning results on Arabic are slightly better than those of its supervised counterpart. In dependency parsing, we translate a large amount of monolingual text and use it as artificial training data in an annotation projection framework. We show that our model outperforms recent work on cross-lingual transfer of dependency parsers.

# 1 Introduction
Developing machine translation models without using bilingual parallel text is an intriguing research problem with real applications: obtaining a large volume of parallel text for many languages is hard if not impossible. Moreover, translation models could be used in downstream cross-lingual tasks in which annotated data does not exist for some languages. There has recently been a great deal of interest in unsupervised neural machine translation (e.g. Artetxe et al. (2018a); Lample et al. (2018a,c); Conneau and Lample (2019); Song et al. (2019a); Kim et al. (2020); Tae et al. (2020)). Unsupervised neural machine translation models often perform nearly as well as supervised models when translating between similar languages, but they fail to perform well in low-resource or distant languages (Kim et al., 2020) or on out-of-domain monolingual data (Marchisio et al., 2020). In practice, the highest need for unsupervised models is to expand beyond high-resource, similar European language pairs.

There are two key goals in this paper: Our first goal is developing accurate translation models for low-resource distant languages without any supervision from a supervised model or gold-standard parallel data. Our second goal is to show that our machine translation models can be directly tailored to downstream natural language processing tasks. In this paper, we showcase our claim in cross-lingual image captioning and cross-lingual transfer of dependency parsers, but this idea is applicable to a wide variety of tasks.
We present a fast and accurate approach for learning translation models using Wikipedia. Unlike unsupervised machine translation, which relies solely on raw monolingual data, we believe that we should not neglect the incidental supervision available in online resources such as Wikipedia. Wikipedia contains articles in nearly 300 languages, and more languages might be added in the future, including indigenous languages and dialects from different regions of the world. Different from similar recent work (Schwenk et al., 2019a), we do not rely on any supervision from supervised translation models. Instead, we leverage the fact that the first sentences of many linked Wikipedia pages are rough translations of each other.


(Figure 1 image: linked English and Arabic Wikipedia pages for "Coronavirus disease 2019", sharing an image captioned "Demonstration of a nasopharyngeal swab for COVID-19 testing" in both languages.)

Figure 1: A pair of linked Wikipedia documents in Arabic and English, along with the same image and its two captions.

Furthermore, many captions of the same images are similar sentences, sometimes translations. Figure 1 shows a real example of a pair of linked Wikipedia pages in Arabic and English in which the titles, first sentences, and image captions are rough translations of each other. Our method learns a seed bilingual dictionary from a small collection of first-sentence pairs, titles, and captions, and then learns cross-lingual word embeddings, which we use to extract parallel sentences from Wikipedia. Our experiments show that our approach improves over strong unsupervised translation models for low-resource languages: we improve the BLEU score of English $\rightarrow$ Gujarati from 0.6 to 15.2, and English $\rightarrow$ Kazakh from 0.8 to 12.1.

In the realm of downstream tasks, we show that we can easily use our translation models to generate high-quality translations of the MS-COCO (Chen et al., 2015) and Flickr (Hodosh et al., 2013) datasets, and train a cross-lingual image captioning model in a multi-task pipeline paired with machine translation, in which the model is initialized with the parameters of our translation model. Our results on Arabic captioning show a BLEU score of 5.72, slightly better than a supervised captioning model with a BLEU score of 5.22. As another task, in dependency parsing, we first translate a large amount of monolingual data using our translation models and then apply transfer using the annotation projection method (Yarowsky et al., 2001; Hwa et al., 2005). Our results show that our approach performs similarly to using gold-standard parallel text in high-resource scenarios, and significantly better in low-resource languages.

A summary of our contributions is as follows: 1) We propose a simple, fast and effective approach to using Wikipedia monolingual data for machine translation without any explicit supervision. Our mining algorithm easily scales to large comparable data using limited computational resources. We achieve very high BLEU scores for distant languages, especially those on which current unsupervised methods perform very poorly. 2) We propose novel methods for leveraging our translation models in image captioning. We show how a combination of translating the caption training data and multi-task learning with English captioning as well as translation improves performance. Our results on Arabic are slightly superior to those of a supervised captioning model trained on gold-standard datasets. 3) We propose a novel modification to the annotation projection method that leverages our translation models. Our dependency parsing results are better than previous work in most cases, and similar to using gold-standard parallel datasets.

Our translation and captioning code and models are publicly available online<sup>1</sup>.
# 2 Background
Supervised neural machine translation Supervised machine translation uses a parallel text $\mathcal{P} = \{(s_i,t_i)\}_{i = 1}^n$ in which each sentence $s_i\in l_1$ is a translation of $t_i\in l_2$ . Neural machine translation uses sequence-to-sequence models with attention (Cho et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017) for which the likelihood of training data is maximized by maximizing the log-likelihood of predicting each target word given its previous predicted words and source sequence:
$$
\mathcal{L}(\mathcal{P}) = \sum_{i=1}^{n} \sum_{j=1}^{|t_{i}|} \log p(t_{i,j} \mid t_{i,k<j}, s_{i}; \theta)
$$

where $\theta$ is a collection of parameters to be learned.
Unsupervised neural machine translation Unsupervised neural machine translation does not have access to any parallel data. Instead, it tailors monolingual datasets $\mathcal{M}_{l_1}$ and $\mathcal{M}_{l_2}$ for learning multilingual language models. These language models usually mask parts of every input sentence and try to uncover the masked words (Devlin et al., 2019). The monolingual language models are used along with iterative back-translation (Hoang et al., 2018) to learn unsupervised translation. An input sentence $s$ is translated to $t'$ using the current model $\theta$; the model then treats $(t', s)$ as a gold-standard translation and uses the same training objective as supervised translation.

Dependency parsing Dependency parsing algorithms capture the best scoring dependency trees for sentences among an exponential number of possible dependency trees. A valid dependency tree for a sentence $s = s_1, \ldots, s_n$ assigns heads $h_i$ for each word $s_i$ where $1 \leq i \leq n$ , $0 \leq h_i \leq n$ and $h_i \neq i$ . The zeroth word represents a dummy root token as an indicator for the root of the sentence. For more details about efficient parsing algorithms, we encourage the reader to see Kübler et al. (2009).
Annotation projection Annotation projection is an effective method for transferring supervised annotations from a rich-resource language to a low-resource language through translated text (Yarowsky et al., 2001). Given parallel data $\mathcal{P} = \{(s_i,t_i)\}_{i = 1}^n$ and supervised source annotations for source sentences $s_i$, we transfer those annotations through word translation links $0\leq a_{i}^{(j)}\leq |t_{i}|$ for $1\le j\le |s_i|$, where $a_{i}^{(j)} = 0$ indicates a null alignment. The alignment links are learned in an unsupervised fashion using unsupervised word alignment algorithms (Och and Ney, 2003a). In dependency parsing, if $h_i = j$, $a^{(j)} = k$, and $a^{(i)} = m$, we project the dependency $k\rightarrow m$ (i.e. $h_m = k$) to the target side. Previous work (Rasooli and Collins, 2017, 2019) has shown that annotation projection only works when a large amount of translation data exists. In the absence of parallel data, we create artificial parallel data using our translation models. Figure 2 shows an example of annotation projection using translated text.

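The projection rule above reduces to a few lines. This sketch assumes `src_heads[i-1]` holds the head of source word $i$ (0 for the root) and `align` maps a source index to its intersected target index (absent if unaligned); mapping the root to the root is our own assumption.

```python
def project_heads(src_heads, align, tgt_len):
    """Project source dependencies k -> m onto the target via alignments."""
    tgt_heads = [None] * (tgt_len + 1)          # index 0 is the dummy root
    for i, j in enumerate(src_heads, start=1):  # source arc j -> i (h_i = j)
        k = 0 if j == 0 else align.get(j)       # a(j) = k; root maps to root
        m = align.get(i)                        # a(i) = m
        if k is not None and m is not None and k != m:
            tgt_heads[m] = k                    # h_m = k on the target side
    return tgt_heads
```
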
# 3 Learning Translation from Wikipedia
The key component of our approach is leveraging multilingual cues from Wikipedia pages linked across languages. Wikipedia is a rich comparable corpus in which many pages describe the same entities in different languages. In most cases, the first sentence defines or introduces the entity covered by that page (e.g. Figure 1). As a result, many first-sentence pairs in linked Wikipedia documents are rough translations of each other. Moreover, captions of images in different languages are usually similar, though not necessarily direct translations of each other. We leverage this information to extract many parallel sentences from Wikipedia without using any external supervision. In this section, we describe our algorithm, summarized in Figure 3.

# 3.1 Data Definitions
For languages $e$ and $f$ in which $e$ is English and $f$ is a low-resource target language of interest, there are Wikipedia documents $w_{e} = \{w_{1}^{(e)}\ldots w_{n}^{(e)}\}$ and $w_{f} = \{w_{1}^{(f)}\ldots w_{m}^{(f)}\}$. We refer to $w_{(i,j)}^{(l)}$ as the $j$th sentence in the $i$th document for language $l$. A subset of these documents are aligned (using Wikipedia language links). Thus we have an aligned set of document pairs from which we can easily extract many sentence pairs that are potentially translations of each other. A smaller subset $\mathcal{F}$ is the set of first sentences in Wikipedia $(w_{(i,1)}^{(e)},w_{(i',1)}^{(f)})$ in which documents $i$ and $i'$ are linked and their first sentence lengths are in a similar range. In addition to text content, Wikipedia has a large set of images. Each image comes along with one or more captions, sometimes in different languages. A small subset of these images have captions both in English and the target language. We refer to this set as $\mathcal{C}$. We use the set of all caption pairs $(\mathcal{C})$, title pairs $(\mathcal{T})$, and first sentences $(\mathcal{F})$ as the seed parallel data: $\mathcal{S} = \mathcal{F} \cup \mathcal{C} \cup \mathcal{T}$.

# 3.2 Bilingual Dictionary Extraction and Cross-Lingual Word Embeddings
Given the seed parallel data $\mathcal{S}$, we run unsupervised word alignment (Dyer et al., 2013) in both the English-to-target and target-to-English directions. We use the intersected alignments to extract highly confident word-to-word connections, and pick the most frequently aligned word for each English word as its translation. This set serves as our bilingual dictionary $\mathcal{D}$.

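As a sketch of this step (not the released implementation), assume `ef_links` and `fe_links` hold, per seed sentence pair, the sets of (English index, target index) links produced by an unsupervised aligner run in each direction:

```python
from collections import Counter, defaultdict

def extract_dictionary(seed_pairs, ef_links, fe_links):
    """Build D from intersected alignments over the seed parallel data."""
    counts = defaultdict(Counter)
    for (e_sent, f_sent), ef, fe in zip(seed_pairs, ef_links, fe_links):
        for i, j in ef & fe:                  # keep only intersected links
            counts[e_sent[i]][f_sent[j]] += 1
    # For each English word, keep its most frequently aligned translation.
    return {e: c.most_common(1)[0][0] for e, c in counts.items()}
```
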
Given two monolingual trained word embeddings $v_{e} \in \mathbb{R}^{N_{e} \times d}$ and $v_{f} \in \mathbb{R}^{N_{f} \times d}$, and the extracted bilingual dictionary $\mathcal{D}$, we use the method of Faruqui and Dyer (2014) to project these two embedding spaces into a shared cross-lingual space. This method uses the bilingual dictionary along with canonical correlation analysis (CCA) to learn two projection matrices that map each embedding vector to a shared space $v_{e}^{\prime} \in \mathbb{R}^{N_{e} \times d^{\prime}}$ and $v_{f}^{\prime} \in \mathbb{R}^{N_{f} \times d^{\prime}}$, where $d^{\prime} \leq d$.

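A rough sketch of this projection step, using scikit-learn's CCA as a stand-in for the Faruqui and Dyer (2014) procedure; the word-to-row maps `vocab_e`/`vocab_f` and the target dimensionality `d_prime` are assumptions here.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_project(v_e, v_f, dictionary, vocab_e, vocab_f, d_prime=100):
    """Learn CCA on dictionary pairs, then project both full vocabularies."""
    pairs = [(vocab_e[e], vocab_f[f]) for e, f in dictionary.items()
             if e in vocab_e and f in vocab_f]
    X = np.stack([v_e[i] for i, _ in pairs])   # English dictionary vectors
    Y = np.stack([v_f[j] for _, j in pairs])   # target dictionary vectors
    cca = CCA(n_components=d_prime).fit(X, Y)
    return cca.transform(v_e, v_f)             # (v_e', v_f') in shared space
```
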

Figure 2: An example of annotation projection for which the source on top is a translation of the Romanian target via our wikily translation model. The supervised source tree is projected using intersected word alignments.
Figure 3: A brief depiction of the training pipeline.
**Definitions:** 1) $e$ is English, $f$ is the foreign language, and $g$ is a language similar to $f$; 2) $\mathrm{learn\_dict}(P)$ extracts a bilingual dictionary from parallel data $P$; 3) $\mathrm{t}(x|m)$ translates input $x$ given model $m$; 4) $\mathrm{pretrain}(x)$ pretrains on monolingual data $x$ using MASS (Song et al., 2019a); 5) $\mathrm{train}(P|m)$ trains on parallel data $P$ initialized by model $m$; 6) $\mathrm{bt\_train}(x_1, x_2|m)$ trains iterative back-translation on monolingual data $x_1 \in e$ and $x_2 \in f$ initialized by model $m$.

**Inputs:** 1) Wikipedia documents $w^{(e)}$, $w^{(f)}$, and $w^{(g)}$; 2) monolingual word embedding vectors $v_{e}$ and $v_{f}$; 3) the set of linked pages from Wikipedia COMP, their aligned titles $\mathcal{T}$, and their first sentence pairs $\mathcal{F}$; 4) the set of paired image captions $\mathcal{C}$; and 5) gold-standard parallel data $\mathcal{P}^{(e,g)}$.

**Algorithm:**

$\triangleright$ Learn bilingual dictionary and embeddings

- $\mathcal{S} = \mathcal{F} \cup \mathcal{C} \cup \mathcal{T}$
- $\mathcal{D}^{(f,e)} = \mathrm{learn\_dict}(\mathcal{S})$
- $\mathcal{D}^{(g,e)} = \mathrm{learn\_dict}(\mathcal{P}^{(e,g)})$ $\quad\triangleright$ Related language
- Learn $v_{e} \rightarrow v_{e}^{\prime}$ and $v_{f} \rightarrow v_{f}^{\prime}$ using $\mathcal{D}^{(f,e)} \cup \mathcal{D}^{(g,e)}$

$\triangleright$ Mine parallel data

- Extract comparable sentences $\mathcal{Z}$ from COMP
- Extract $\mathcal{P}^{(f,e)}$ from $\mathcal{Z}$
- $\overline{\mathcal{P}}^{(f,e)} = \mathcal{P}^{(f,e)} \cup \mathcal{T}$

$\triangleright$ Train MT with pretraining and back-translation

- $\theta_0 = \mathrm{pretrain}(w^{(e)} \cup w^{(f)} \cup w^{(g)})$ $\quad\triangleright$ MASS training
- $\theta_{\rightleftarrows} = \mathrm{train}(\mathcal{P}^{(f,e)} \cup \mathcal{P}^{(g,e)} \mid \theta_0)$ $\quad\triangleright$ NMT training
- $\mathcal{P}^{(e\rightarrow f)} = (\mathrm{t}(w^{(f)} \mid \theta_{\rightleftarrows}), w^{(f)})$
- $\mathcal{P}^{(f\rightarrow e)} = (\mathrm{t}(w^{(e)} \mid \theta_{\rightleftarrows}), w^{(e)})$
- $\mathcal{P}^{\prime(f,e)} = \mathcal{P}^{(e\rightarrow f)} \cup \mathcal{P}^{(f\rightarrow e)} \cup \mathcal{P}^{(f,e)}$
- $\theta_{\rightleftarrows}^{\prime} = \mathrm{train}(\mathcal{P}^{\prime(f,e)} \mid \theta_0)$
- $\theta_{\rightleftarrows}^{*} = \mathrm{bt\_train}(w^{(e)}, w^{(f)} \mid \theta_{\rightleftarrows}^{\prime})$

**Output:** $\theta_{\rightleftarrows}^{*}$

# 3.3 Mining Parallel Sentences
We use the cross-lingual embedding vectors $v_{e}^{\prime} \in \mathbb{R}^{N_{e} \times d^{\prime}}$ and $v_{f}^{\prime} \in \mathbb{R}^{N_{f} \times d^{\prime}}$ to calculate the cosine similarity between pairs of words. Moreover, we use the extracted bilingual dictionary to boost the accuracy of the scoring function. For a pair of sentences $(s, t)$ where $s = s_1 \ldots s_n$ and $t = t_1 \ldots t_m$, after filtering sentence pairs with mismatched numerical values (e.g. sentences containing 2019 in the source and 1987 in the target), we use a modified version of cosine similarity between words:

$$
\mathrm{sim}(s_{i}, t_{j}) = \begin{cases} 1.0, & \text{if } (s_{i}, t_{j}) \in \mathcal{D} \\ \cos(s_{i}, t_{j}), & \text{otherwise} \end{cases}
$$

Using the above definition of word similarity, we use the average-maximum similarity between pairs of sentences.
$$
\mathrm{score}(s, t) = \frac{\sum_{i=1}^{n} \max_{j=1}^{m} \mathrm{sim}(s_{i}, t_{j})}{n}
$$

From a pool of candidates, we pick those pairs that have the highest score in both directions.
# 3.4 Leveraging Similar Languages
In many low-resource scenarios, the number of paired documents is very small, leading to few, and often noisy, extracted parallel sentences. To alleviate this problem to some extent, we assume access to another language $g$ that has a large lexical overlap with the target language $f$ (such as $g =$ Russian and $f =$ Kazakh). We assume that parallel data exists between language $g$ and English, and we use it both as auxiliary parallel data in training and for extracting extra lexical entries for the bilingual dictionaries: as shown in Figure 3, we supplement the bilingual dictionary extracted from the seed parallel data with the bilingual dictionary extracted from the related-language parallel data.
# 3.5 Translation Model
We use a standard sequence-to-sequence Transformer-based translation model (Vaswani et al., 2017) with a six-layer BERT-based (Devlin et al., 2019) encoder-decoder architecture from HuggingFace (Wolf et al., 2019) and Pytorch (Paszke et al., 2019), with a shared SentencePiece (Kudo and Richardson, 2018) vocabulary. All input and output token embeddings are summed with a language-ID embedding, and the first token of every input and output sentence is the language ID. Our training pipeline shares the encoder and decoder across languages, except that we use a separate output layer for each language in order to prevent input copying (Artetxe et al., 2018b; Sen et al., 2019). We pretrain the model on the Wikipedia datasets of the three languages $g$, $f$, and $e$ using the MASS model (Song et al., 2019a), which masks a contiguous span of input tokens and recovers that span in the output sequence.
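As a rough sketch of the MASS objective (our own illustration, assuming the 50% span ratio of Song et al. (2019a); this is not the authors' code), each pretraining example masks one contiguous span that the decoder must regenerate:

```python
import random

def mass_example(tokens, mask_token="<mask>", ratio=0.5):
    """Build one MASS pretraining example: the encoder sees the sentence
    with a contiguous span replaced by mask tokens; the decoder target is
    the masked span itself."""
    if not tokens:
        return tokens, []
    span_len = max(1, int(len(tokens) * ratio))
    start = random.randint(0, len(tokens) - span_len)
    encoder_input = (tokens[:start]
                     + [mask_token] * span_len
                     + tokens[start + span_len:])
    decoder_target = tokens[start:start + span_len]
    return encoder_input, decoder_target
```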
To facilitate multi-task learning with image captioning, our model has an image encoder that is used for image captioning (more details in §4.1); in other words, the decoder is shared between the translation and captioning tasks. We use the pretrained ResNet-152 model (He et al., 2016) from Pytorch to encode every input image. We extract the final layer as a $7 \times 7$ grid vector $(g \in \mathbb{R}^{7 \times 7 \times d_g})$, project it to a new space with a linear transformation $(g' \in \mathbb{R}^{49 \times d_t})$, and then add location embeddings $(l \in \mathbb{R}^{49 \times d_t})$ by entry-wise addition. Afterwards, we treat the 49 vectors as encoded text representations, as if a sentence with 49 words had occurred. This is similar to, but not exactly the same as, the Virtex model (Desai and Johnson, 2021).
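A minimal Pytorch sketch of this image front-end follows; the hidden size and the `weights` argument are our assumptions, since the paper only specifies ResNet-152, the $7 \times 7$ grid, the linear projection, and the location embeddings:

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ImageAsTokens(nn.Module):
    """Encode an image as 49 pseudo-token states for the shared decoder."""
    def __init__(self, d_text=512):
        super().__init__()
        resnet = models.resnet152(weights="IMAGENET1K_V1")
        # Drop the average pool and classifier: output is (B, 2048, 7, 7).
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        self.proj = nn.Linear(2048, d_text)   # g -> g'
        self.loc = nn.Embedding(49, d_text)   # location embeddings l

    def forward(self, images):                # images: (B, 3, 224, 224)
        g = self.backbone(images)             # (B, 2048, 7, 7)
        g = g.flatten(2).transpose(1, 2)      # (B, 49, 2048)
        g = self.proj(g)                      # (B, 49, d_text)
        pos = torch.arange(49, device=g.device)
        return g + self.loc(pos)              # entry-wise addition
```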
# 3.6 Back-Translation: One-shot and Iterative
Finally, we use the back-translation technique to improve the quality of our models. Back-translation translates a large amount of monolingual text to and from the target language; the translated texts then serve as noisy input text, with the original monolingual data as the silver-standard translations. Previous work (Sennrich et al., 2016b; Edunov et al., 2018) has shown that back-translation is a very simple but effective technique for improving the quality of translation models. Henceforth, we refer to this method as one-shot back-translation. Another approach is iterative back-translation (Hoang et al., 2018), the most popular approach in unsupervised translation (Artetxe et al., 2018b; Conneau and Lample, 2019; Song et al., 2019a). The main difference from one-shot back-translation is that the model works online, updating its parameters on every batch.

We empirically find one-shot back-translation faster to train, but with much less potential to reach a high translation accuracy. A simple and effective way to obtain a model that is both reliable and accurate is to first initialize a model with one-shot back-translation and then apply iterative back-translation; a model initialized from a more accurate starting point reaches a higher final accuracy.
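In code, one-shot back-translation amounts to a single offline pass over the monolingual data (a sketch with an assumed `translate` wrapper around the trained model):

```python
def back_translate(monolingual_sentences, translate):
    """Pair each real monolingual sentence with its one-shot machine
    translation; the synthetic side is used as the (noisy) source and the
    real side as the silver-standard target."""
    return [(translate(s), s) for s in monolingual_sentences]
```

Iterative back-translation instead regenerates such pairs batch by batch with the model currently being trained.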
# 4 Cross-Lingual Tasks
In this section, we describe our approaches for tailoring our translation models to cross-lingual tasks. Henceforth, we assume that the training of our translation models is finished, and that we have access to trained translation models for the cross-lingual tasks.
# 4.1 Cross-Lingual Image Captioning
Given gold-standard image captioning training data $\mathcal{I} = \{(I_i, c_i)\}_{i=1}^n$, where $I_i$ is the image as pixel values and $c_i = c_i^{(1)}, \ldots, c_i^{(k_i)}$ is the textual description with $k_i$ words, our goal is to learn a captioning model that is able to describe new (unseen) images. As described in §3.5, we use a transformer decoder from our translation model and a ResNet image encoder (He et al., 2016) for our image captioning pipeline. Unfortunately, annotated image captioning datasets do not exist in many languages. Having our translation model parameters $\theta_{\rightleftarrows}^*$, we can use their translation functionality to translate each caption $c_i$ to $c_i' = \text{translate}(c_i \mid \theta_{\rightleftarrows}^*)$. Afterwards, we have a translated annotated dataset $\mathcal{I}' = \{(I_i, c_i')\}_{i=1}^n$ in which the textual descriptions are not gold-standard but translations of the English captions. Figure 4 shows a real example from MS-Coco (Chen et al., 2015) in which the Arabic translations are provided by our translation model. Furthermore, to augment our learning capability, we initialize our decoder with the decoder parameters of $\theta_{\rightleftarrows}^*$, and we also continue training on both English captioning and translation.
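Constructing $\mathcal{I}'$ is then a single pass of the translation model over the captions; a minimal sketch (the `translate` wrapper is an assumed interface, not the paper's code):

```python
def build_translated_captions(captioning_data, translate):
    """Turn gold English captioning data I into translate-train data I':
    images are kept as-is, captions are machine-translated."""
    return [(image, translate(caption)) for image, caption in captioning_data]
```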
# 4.2 Cross-Lingual Dependency Parsing
Assuming that we have a large body of monolingual text, we translate it to create artificial parallel data, and we run unsupervised word alignment on the artificial parallel text. Following previous work (Rasooli and Collins, 2015; Ma and Xia, 2014), we run Giza++ (Och and Ney, 2003b) alignments in both the source-to-target and target-to-source directions, and extract the intersected alignments to keep high-precision one-to-one alignments. We run a supervised dependency parser for English, our rich-resource language, and then project the dependencies to the target-language sentences via the word-alignment links. Inspired by previous work (Rasooli and Collins, 2015), to remove noisy projections we keep only those sentences in which at least $50\%$ of the words, or 5 consecutive words, on the target side have projected dependencies, as sketched in the code after this paragraph.

Figure 4: An image from MS-Coco (Chen et al., 2015) with gold-standard English captions, and Arabic translations from our wikily translation model. The English captions are: "This is an open box containing four cucumbers." / "An open food container box with four unknown food items." / "A small box filled with four green vegetables." / "An opened box of four chocolate bananas." / "An open box contains an unknown, purple object."
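The density filter can be sketched as follows (our own illustration of the stated rule; by assumption, `projected_heads[i]` is `None` for words without a projected dependency):

```python
def keep_projected_sentence(projected_heads, min_ratio=0.5, min_run=5):
    """Keep a target sentence if at least 50% of its words received a
    projected head, or if 5 consecutive words all did."""
    covered = [head is not None for head in projected_heads]
    if covered and sum(covered) / len(covered) >= min_ratio:
        return True
    run = 0
    for c in covered:
        run = run + 1 if c else 0
        if run >= min_run:
            return True
    return False
```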
# 5 Experiments
In this section, we provide details about our experimental settings and results for translation, captioning, and dependency parsing. More details about our settings, as well as a thorough analysis of our results, are given in the supplementary material.
# 5.1 Datasets and Settings
Languages We focus on four language pairs: Arabic-English, Gujarati-English, Kazakh-English, and Romanian-English. We choose these pairs to provide evidence that our model works for distant languages and morphologically rich languages as well as for similar languages. As related languages, we use Persian for Arabic (written with very similar scripts and sharing many words), Hindi for Gujarati (closely related languages), Russian for Kazakh (written with the same script), and Italian for Romanian (both Romance languages).

Monolingual and Translation Datasets We use a shared SentencePiece vocabulary (Kudo and Richardson, 2018) of size 60K. Table 1 shows the sizes of the Wikipedia data for the different languages. For evaluation, we use the Arabic-English UN data (Ziemski et al., 2016), WMT 2019 data (Barrault et al., 2019) for Gujarati-English and Kazakh-English, and WMT 2016 shared task data (Bojar et al., 2016) for Romanian-English. Following previous work (Sennrich et al., 2016a), diacritics are removed from the Romanian data. For more details about the other datasets and their sizes, we refer the reader to the supplementary material.

<table><tr><td>Direction</td><td>ar←en</td><td>gu←en</td><td>kk←en</td><td>ro←en</td></tr><tr><td>Foreign docs</td><td>1.0m</td><td>28k</td><td>230k</td><td>400k</td></tr><tr><td>Paired docs</td><td>745k</td><td>7.3k</td><td>80k</td><td>270k</td></tr><tr><td>First sents.</td><td>205k</td><td>3.2k</td><td>52k</td><td>78k</td></tr><tr><td>Captions</td><td>92k</td><td>2.2k</td><td>1.9k</td><td>35k</td></tr><tr><td>Comparable pairs</td><td>0.1b</td><td>14m</td><td>32m</td><td>64m</td></tr><tr><td>Mined sents.</td><td>1.7m</td><td>49k</td><td>183k</td><td>675k</td></tr><tr><td>BT</td><td>2.1m</td><td>1.5m</td><td>2.2m</td><td>2.1m</td></tr><tr><td>Iterative BT</td><td>4.0m</td><td>3.8m</td><td>4.0m</td><td>6.1m</td></tr></table>

Table 1: Data sizes for different pairs. For each dataset, we use a sample of English sentences of similar size.

Pretraining We pretrain four models on 3-tuples of languages on a single NVIDIA GeForce RTX 2080 Ti with 11GB of memory. We create batches of 4K words, run pretraining for two million iterations in which we alternate between language batches, and accumulate gradients for 8 steps. We use the apex library<sup>3</sup> for FP-16 tensors. This whole process takes four weeks on a single GPU. We use the Adam optimizer (Kingma and Ba, 2015) with an inverse-square-root schedule, a learning rate of $10^{-4}$, 4000 warm-up steps, and a dropout probability of 0.1.

Translation Training Table 1 shows the sizes of the different types of datasets in our experiments. We pick comparable candidates from sentence pairs whose lengths are between half and twice the length of each other. As we see, the final size of the mined datasets heavily depends on the number of paired English-target-language Wikipedia documents. We train our translation models initialized by the pretrained models. More details about our hyper-parameters are in the supplementary material. All of our evaluations are conducted using SacreBLEU (Post, 2018), except for en$\leftrightarrow$ro, where we use the BLEU score (Papineni et al., 2002) from the Moses decoder scripts (Koehn et al., 2007) for the sake of comparison to previous work.

Image Captioning We use the Flickr (Hodosh et al., 2013) and MS-Coco (Chen et al., 2015) datasets for English<sup>4</sup>, and the gold-standard Arabic Flickr dataset (ElJundi et al., 2020) for evaluation. The Arabic test set has 1000 images with 3 captions per image. We translate all the English training captions to Arabic to obtain translated caption data; the final training data contains 620K captions for about 125K unique images. Throughout the experiments, we use the pretrained ResNet-152 model (He et al., 2016) from Pytorch (Paszke et al., 2019) and fine-tune it during our training pipeline. Each training batch contains 20 images. We accumulate gradients for 16 steps and use a dropout of 0.1 on the projected image output representations. Other training parameters are the same as in our translation training. To keep our pipeline fully unsupervised, we use translated development sets to pick the best checkpoint during training.

Dependency Parsing We use the Universal Dependencies v2.7 collection (Zeman et al., 2020) for Arabic, Kazakh, and Romanian. We use the pretrained supervised Stanza models (Qi et al., 2020) to obtain supervised parse trees for Arabic and Romanian, and the pretrained UDPipe model (Straka et al., 2016) for Kazakh. We translate about 2 million sentences from each language to English, and also 2 million English sentences to Arabic. We apply a simple modification to Stanza to facilitate training on partially projected trees: dependency and label assignments are masked for words with missing dependencies (see the sketch after this paragraph). All of our training on projected dependencies is run blindly for 100K training steps with the default parameters of Stanza (Qi et al., 2020). As gold-standard parallel data, we use our supervised translation training data for Romanian-English and Kazakh-English, and a sample of 2 million sentences from the UN Arabic-English data, whose full size would significantly slow down word alignment. For the Kazakh wikily projections, due to low supervised POS accuracy, we use projected POS tags for projected words and supervised tags for unprojected words; using projected tags increases performance by two percent.
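The masking modification for partially projected trees can be written as a masked loss (our illustration; the tensor shapes are assumptions, not Stanza internals):

```python
import torch

def masked_parsing_loss(arc_scores, head_targets, projected_mask):
    """Cross-entropy over head assignments in which words without a
    projected dependency contribute nothing to the loss.
    arc_scores: (n_words, n_candidates); head_targets: (n_words,) valid
    indices (arbitrary where masked); projected_mask: (n_words,) bool."""
    losses = torch.nn.functional.cross_entropy(
        arc_scores, head_targets, reduction="none")
    losses = losses * projected_mask.float()
    return losses.sum() / projected_mask.float().sum().clamp(min=1.0)
```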
# 5.2 Translation Results
Table 2 shows the results of different settings in addition to baseline and state-of-the-art results. Arabic is a clear exception that needs more rounds of training: we train our Arabic model once again on the mined data by initializing it with our back-translation model.<sup>5</sup> We have not seen further improvement from back-translation. To have a fair comparison, we list the best supervised models for all language pairs (to the best of our knowledge). In low-resource settings, we outperform strong supervised models that are boosted by back-translation. In the high-resource setting, our Arabic models achieve very high performance, but given that the Arabic parallel data has 18M sentences, reaching that level of accuracy is quite impossible.

Figure 5 shows a randomly chosen example from the Gujarati-English development data. As depicted, the model after back-translation captures roughly the core meaning of the sentence, with some divergence from exactly matching the reference. The final iterative back-translation output comes close to a correct translation. The word "creative" also appears in the Google Translate output, a model that is most likely trained on much larger parallel data than what is currently publicly available. In general, unsupervised translation performs very poorly compared to our approach in all directions.
# 5.3 Captioning Results
Table 4 shows the final results on the Arabic test set using SacreBLEU (Post, 2018). First, we note that, similar to ElJundi et al. (2020), we see BLEU scores on a lower scale due to the morphological richness of Arabic. We see that if we initialize our model with the translation model and multi-task it with translation as well as English captioning, we achieve much higher performance. It is interesting that translating the English output on the test data to Arabic achieves a much lower result; this is a strong indicator of the strength of our approach. We also see that supervised translation fails to perform well, which might be due to the UN translation training dataset coming from a different domain than the caption dataset. Furthermore, our model outperforms Google Translate, a strong machine translation system, and in fact the one used as seed data for manual revision in the Arabic dataset. Finally, it is interesting that our model outperforms supervised captioning. Multi-tasking makes translation performance slightly worse.
<table><tr><td></td><td>Model</td><td>ar→en</td><td>en→ar</td><td>gu→en</td><td>en→gu</td><td>kk→en</td><td>en→kk</td><td>ro→en</td><td>en→ro</td></tr><tr><td rowspan="3">UNMT</td><td>Conneau and Lample (2019)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>31.8</td><td>33.3</td></tr><tr><td>Song et al. (2019a) (MASS; 8 GPUs)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>33.1</td><td>35.2</td></tr><tr><td>Best published results</td><td>11.0*</td><td>9.4*</td><td>0.6<sup>1</sup></td><td>0.6<sup>1</sup></td><td>2.0<sup>1</sup></td><td>0.8<sup>1</sup></td><td>37.6<sup>4</sup></td><td>36.3<sup>2</sup></td></tr><tr><td rowspan="6">Wikily UnMT</td><td>First sentences + captions + titles</td><td>6.1</td><td>3.1</td><td>0.7</td><td>1.1</td><td>2.3</td><td>1.0</td><td>2.0</td><td>1.9</td></tr><tr><td>Mined Corpora</td><td>23.1</td><td>19.7</td><td>4.2</td><td>4.9</td><td>2.8</td><td>1.6</td><td>22.1</td><td>21.6</td></tr><tr><td>+ Related Language</td><td>-</td><td>-</td><td>9.1</td><td>7.8</td><td>7.3</td><td>2.3</td><td>23.2</td><td>21.5</td></tr><tr><td>+ One-shot back-translation (bt-beam=4)</td><td>23.0</td><td>18.8</td><td>13.8</td><td>13.9</td><td>7.0</td><td>12.1</td><td>25.2</td><td>28.1</td></tr><tr><td>+ Iterative back-translation (bt-beam=1)</td><td>24.4</td><td>18.9</td><td>13.3</td><td>15.2</td><td>9.0</td><td>10.8</td><td>32.5</td><td>33.0</td></tr><tr><td>+ Retrain on mined data</td><td>30.6</td><td>23.4</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan="2">(Semi-)Supervised</td><td>48.9*</td><td>40.6*</td><td>14.2<sup>1</sup></td><td>4.0<sup>1</sup></td><td>12.5<sup>1</sup></td><td>3.1<sup>1</sup></td><td>39.9<sup>3</sup></td><td>38.5<sup>3</sup></td></tr></table>

Table 2: BLEU scores for different models. Reference results are from *: our implementation; <sup>1</sup>: Kim et al. (2020); <sup>2</sup>: Li et al. (2020); <sup>3</sup>: Liu et al. (2020) (supervised); <sup>4</sup>: Tran et al. (2020) (unsupervised with mined parallel data).

<table><tr><td rowspan="2" colspan="2">Method</td><td rowspan="2">Version</td><td rowspan="2">Token and POS</td><td colspan="3">Arabic</td><td colspan="3">Kazakh</td><td colspan="3">Romanian</td></tr><tr><td>UAS</td><td>LAS</td><td>BLEX</td><td>UAS</td><td>LAS</td><td>BLEX</td><td>UAS</td><td>LAS</td><td>BLEX</td></tr><tr><td rowspan="3">Previous</td><td>Rasooli and Collins (2019)</td><td>2.0</td><td>gold/supervised</td><td>61.2</td><td>48.8</td><td>-</td><td>-</td><td>-</td><td>-</td><td>76.3</td><td>64.3</td><td>-</td></tr><tr><td>Ahmad et al. (2019)</td><td>2.2</td><td>gold</td><td>38.1</td><td>28.0</td><td>-</td><td>-</td><td>-</td><td>-</td><td>65.1</td><td>54.1</td><td>-</td></tr><tr><td>Kurniawan et al. (2021)</td><td>2.2</td><td>gold</td><td>48.3</td><td>29.9</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td rowspan="4">Projection</td><td rowspan="2">Wikily translation</td><td rowspan="5">2.7</td><td>gold</td><td>62.5</td><td>50.7</td><td>46.3</td><td>46.8</td><td>28.5</td><td>25.0</td><td>74.1</td><td>57.7</td><td>52.6</td></tr><tr><td>supervised</td><td>60.2</td><td>48.7</td><td>42.1</td><td>46.2</td><td>27.8</td><td>14.1</td><td>73.6</td><td>57.4</td><td>50.9</td></tr><tr><td rowspan="2">Gold-standard Parallel data</td><td>gold</td><td>61.5</td><td>47.3</td><td>42.4</td><td>22.2</td><td>9.3</td><td>7.9</td><td>75.9</td><td>62.4</td><td>57.3</td></tr><tr><td>supervised</td><td>59.1</td><td>45.3</td><td>38.5</td><td>21.8</td><td>9.2</td><td>3.8</td><td>75.6</td><td>62.0</td><td>55.6</td></tr><tr><td colspan="2">Supervised</td><td>supervised</td><td>84.2</td><td>79.8</td><td>72.7</td><td>48.0</td><td>29.8</td><td>13.7</td><td>90.8</td><td>86.0</td><td>80.0</td></tr></table>

Table 3: Dependency parsing results on the Universal Dependencies dataset (Zeman et al., 2020). Previous work has used different sub-versions of the Universal Dependencies data, so slight differences are expected.

Figure 5: An example of a Gujarati sentence and its outputs from different models (unsupervised; first sentences + captions + titles; mined corpora; + related language; + one-shot back-translation; + iterative back-translation), as well as Google Translate. Google Translate: "That means we have to be more creative than before." Reference: "That means we have to be more constructive than before."

Figure 6: An example of different outputs in our captioning experiments, both for English and Arabic, as well as Arabic translations of English outputs, on the Arabic Flickr dataset (ElJundi et al., 2020). English gold captions: "A child on a red slide." / "A little boy sits on a slide on the playground." / "A little boy slides down a bright red corkscrew slide." / "A little boy slides down a red slide." / "a young boy wearing a blue outfit sliding down a red slide." English supervised output: "A boy is sitting on a red slide." The figure also shows Arabic outputs for En→Ar supervised, unsupervised, and Google translations of the English output, for supervised MT, for unsupervised (mt + ar + en) and unsupervised (mt + ar), and the Arabic gold caption.
Figure 6 shows a randomly picked example with different model outputs. We see that the two outputs from our approach with multi-tasking are roughly the same, but one of them has more syntactic-order overlap with the reference; both orders are correct in Arabic, a free-word-order language. One output word means "orange," which is close to the reference word meaning "red." Another means "slide," which is correct, although the reference renders this word differently. In general, we observe that although the BLEU scores for Arabic look superficially low, this is mostly due to Arabic's lexical diversity, free word order, and morphological complexity.

<table><tr><td rowspan="2"></td><td rowspan="2">Supervision</td><td rowspan="2">Pretrained</td><td colspan="2">Multi-task</td><td colspan="2">BLEU</td></tr><tr><td>EN</td><td>MT</td><td>@1</td><td>@4</td></tr><tr><td rowspan="6">Translate train data</td><td>wikily</td><td>X</td><td>X</td><td>X</td><td>33.1</td><td>4.57</td></tr><tr><td>wikily</td><td>✓</td><td>X</td><td>X</td><td>32.9</td><td>5.28</td></tr><tr><td>wikily</td><td>✓</td><td>✓</td><td>X</td><td>32.8</td><td>4.37</td></tr><tr><td>wikily</td><td>✓</td><td>X</td><td>✓</td><td>33.3</td><td>5.72</td></tr><tr><td>wikily</td><td>✓</td><td>✓</td><td>✓</td><td>36.8</td><td>5.60</td></tr><tr><td>supervised</td><td>✓</td><td>X</td><td>X</td><td>17.7</td><td>1.26</td></tr><tr><td rowspan="4">Translate test</td><td></td><td colspan="3">English test performance→</td><td>68.7</td><td>20.42</td></tr><tr><td>wikily</td><td>✓</td><td>X</td><td>X</td><td>30.6</td><td>4.20</td></tr><tr><td>supervised</td><td>✓</td><td>X</td><td>X</td><td>15.8</td><td>0.92</td></tr><tr><td>Google</td><td>✓</td><td>X</td><td>X</td><td>31.8</td><td>5.56</td></tr><tr><td></td><td>Gold</td><td>✓</td><td>X</td><td>X</td><td>33.7</td><td>3.76</td></tr><tr><td></td><td></td><td>✓</td><td>✓</td><td>X</td><td>37.9</td><td>5.22</td></tr></table>

Table 4: Image captioning results evaluated on the Arabic Flickr dataset (ElJundi et al., 2020) using SacreBLEU (Post, 2018). "Pretrained" indicates initializing our captioning model with our translation parameters.
# 5.4 Dependency Parsing Results
Table 3 shows the results of the dependency parsing experiments. Our model performs very well on Romanian, with a UAS of 74, which is much higher than that of Ahmad et al. (2019) and slightly lower than that of Rasooli and Collins (2019), which uses a combination of multi-source annotation projection and direct model transfer. On Arabic, we outperform all previous work and even perform better than using gold-standard parallel data. One clear highlight is our result on Kazakh: as mentioned before, projecting the part-of-speech tags gains us roughly 2 percent absolute, and our final results on Kazakh are significantly higher than those obtained with gold-standard parallel text (7K sentences).
# 6 Related Work
Kim et al. (2020) have shown that unsupervised translation models often fail to provide good translation systems for distant languages. Our work addresses this problem by leveraging the Wikipedia data. Pivot languages have been used in previous work (Al-Shedivat and Parikh, 2019), as have related languages (Zoph et al., 2016; Nguyen and Chiang, 2017). Our work explores only the simple idea of adding one similar language pair; adding more language pairs and using ideas from recent work would most likely improve performance further.
Wikipedia is an interesting dataset for solving NLP problems, including machine translation (Li et al., 2012; Patry and Langlais, 2011; Lin et al., 2011; Tufiş et al., 2013; Barrón-Cedeño et al., 2015; Wijaya et al., 2017; Ruiter et al., 2019; Srinivasan et al., 2021). The WikiMatrix data (Schwenk et al., 2019a) is the effort most similar to ours in its use of Wikipedia, but it relies on supervised translation models. Bitext mining has a longer history of research (Resnik, 1998; Resnik and Smith, 2003), in which most effort has gone into using a seed supervised translation model (Guo et al., 2018; Schwenk et al., 2019b; Artetxe and Schwenk, 2019; Schwenk et al., 2019a; Jones and Wijaya, 2021). Recently, a number of papers have focused on unsupervised extraction of parallel data (Ruiter et al., 2019; Hangya and Fraser, 2019; Keung et al., 2020; Tran et al., 2020; Kuwanto et al., 2021). Ruiter et al. (2019) focus on using vector similarity of sentences to extract parallel text from Wikipedia; their work does not leverage structural signals from Wikipedia.

Cross-lingual and unsupervised image captioning has been studied in previous work (Gu et al., 2018; Feng et al., 2019; Song et al., 2019b; Gu et al., 2019; Gao et al., 2020; Burns et al., 2020). Unlike previous work, we do not have a supervised translation model. Cross-lingual transfer of dependency parsers has a long history; we encourage the reader to consult a recent survey on this topic (Das and Sarkar, 2020). Our work does not use gold-standard parallel data, or even supervised translation models, to apply annotation projection.
# 7 Conclusion
We have described a fast and effective algorithm for learning translation systems from Wikipedia. We show that by choosing wisely what to use as seed data, we can obtain good enough seed parallel data to mine more parallel text from Wikipedia. We have also shown that our translation models can be used in downstream cross-lingual natural language processing tasks. In the future, we plan to extend our approach beyond Wikipedia to other comparable datasets such as the BBC World Service. A clear extension of this work is to try our approach on other cross-lingual tasks. Moreover, since many captions of the same images in Wikipedia are similar sentences and sometimes translations, multimodal machine translation (Specia et al., 2016; Caglayan et al., 2019; Hewitt et al., 2018; Yao and Wan, 2020) based on this data, as well as analyses of the data itself, such as whether more similar languages share more similar captions (Khani et al., 2021), are other interesting avenues.
# Acknowledgments
We would like to thank reviewers and the editor for their useful comments. We also would like to thank Alireza Zareian, Daniel (Joongwon) Kim, Qing Sun, and Afra Feyza Akyurek for their help and useful comments throughout this project. This work is supported in part by the DARPA HR001118S0044 (the LwLL program), and the Department of the Air Force FA8750-19-2-3334 (Semi-supervised Learning of Multimodal Representations). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA, the Air Force, and the U.S. Government.
# References
Wasi Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, and Nanyun Peng. 2019. On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2440-2452, Minneapolis, Minnesota. Association for Computational Linguistics.
Maruan Al-Shedivat and Ankur Parikh. 2019. Consistency by agreement in zero-shot neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1184-1197, Minneapolis, Minnesota. Association for Computational Linguistics.
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632-3642, Brussels, Belgium. Association for Computational Linguistics.
Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018b. Unsupervised neural machine translation. In International Conference on Learning Representations.
Mikel Artetxe and Holger Schwenk. 2019. Margin-based parallel corpus mining with multilingual sentence embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3197-3203, Florence, Italy. Association for Computational Linguistics.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.
Loic Barrault, Ondrej Bojar, Marta R. Costa-jussa, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Muller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61, Florence, Italy. Association for Computational Linguistics.
Alberto Barrón-Cedeno, Cristina España-Bonet, Josu Boldoba, and Lluis Márquez. 2015. A factory of comparable corpora from Wikipedia. In Proceedings of the Eighth Workshop on Building and Using Comparable Corpora, pages 3-13, Beijing, China. Association for Computational Linguistics.
Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurélie Néveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 131-198, Berlin, Germany. Association for Computational Linguistics.
Ondrej Bojar, Vojtech Diatka, Pavel Rychly, Pavel Stranák, Vít Suchomel, Ales Tamchyna, and Daniel Zeman. 2014. HindEnCorp - Hindi-English and Hindi-only corpus for machine translation. In LREC, pages 3550-3555.
Andrea Burns, Donghyun Kim, Derry Wijaya, Kate Saenko, and Bryan A Plummer. 2020. Learning to scale multilingual representations for vision-language tasks. In European Conference on Computer Vision, pages 197-213. Springer.
Ozan Caglayan, Pranava Madhyastha, Lucia Specia, and Loic Barrault. 2019. Probing the need for visual context in multimodal machine translation. arXiv preprint arXiv:1903.08678.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar. Association for Computational Linguistics.
Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In Advances in Neural Information Processing Systems 32, pages 7059-7069. Curran Associates, Inc.
Ayan Das and Sudeshna Sarkar. 2020. A survey of the model transfer approaches to cross-lingual dependency parsing. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 19(5):1-60.
Karan Desai and Justin Johnson. 2021. VirTex: Learning Visual Representations from Textual Annotations. In CVPR.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648, Atlanta, Georgia. Association for Computational Linguistics.
Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489–500, Brussels, Belgium. Association for Computational Linguistics.
Obeida ElJundi, Mohamad Dhaybi, Kotaiba Mokadam, Hazem Hajj, and Daniel Asmar. 2020. Resources and end-to-end neural network models for Arabic image captioning. In Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP, pages 233-241. INSTICC, SciTePress.
Miquel Esplà, Mikel Forcada, Gema Ramírez-Sánchez, and Hieu Hoang. 2019. ParaCrawl: Web-scale parallel corpora for the languages of the EU. In Proceedings of Machine Translation Summit XVII Volume 2: Translator, Project and User Tracks, pages 118-119, Dublin, Ireland. European Association for Machine Translation.
Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 462-471, Gothenburg, Sweden. Association for Computational Linguistics.
Yang Feng, Lin Ma, Wei Liu, and Jiebo Luo. 2019. Unsupervised image captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4125-4134.
Jiahui Gao, Yi Zhou, Philip LH Yu, and Jiuxiang Gu. 2020. Unsupervised cross-lingual image captioning. arXiv preprint arXiv:2010.01288.
Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Jiuxiang Gu, Shafiq Joty, Jianfei Cai, and Gang Wang. 2018. Unpaired image captioning by language pivoting. In Proceedings of the European Conference on Computer Vision (ECCV), pages 503-519.
Jiuxiang Gu, Shafiq Joty, Jianfei Cai, Handong Zhao, Xu Yang, and Gang Wang. 2019. Unpaired image captioning via scene graph alignments. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10323-10332.
Mandy Guo, Qinlan Shen, Yinfei Yang, Heming Ge, Daniel Cer, Gustavo Hernandez Abrego, Keith Stevens, Noah Constant, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Effective parallel corpus mining using bilingual sentence embeddings. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 165-176, Brussels, Belgium. Association for Computational Linguistics.
Viktor Hangya and Alexander Fraser. 2019. Unsupervised parallel sentence extraction with parallel segment detection helps machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1224-1234, Florence, Italy. Association for Computational Linguistics.
K. He, X. Zhang, S. Ren, and J. Sun. 2016. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778.
John Hewitt, Daphne Ippolito, Brendan Callahan, Reno Kriz, Derry Tanti Wijaya, and Chris Callison-Burch. 2018. Learning translations via images with a massively multilingual image dataset. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2566-2576.
Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative backtranslation for neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 18-24, Melbourne, Australia. Association for Computational Linguistics.
Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Artificial Intelligence Research, 47:853-899.
Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural language engineering, 11(03):311-325.
Alex Jones and Derry Tanti Wijaya. 2021. Majority voting with bidirectional pre-translation for bitext retrieval.
Omid Kashefi. 2018. Mizan: a large Persian-English parallel corpus. arXiv preprint arXiv:1801.02107.
Phillip Keung, Julian Salazar, Yichao Lu, and Noah A Smith. 2020. Unsupervised bitext mining and translation via self-trained contextual embeddings. arXiv preprint arXiv:2010.07761.
Nikzad Khani, Isidora Tourni, Mohammad Sadegh Rasooli, Chris Callison-Burch, and Derry Tanti Wijaya. 2021. Cultural and geographical influences on image translatability of words across languages. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 198-209.
Yunsu Kim, Miguel Graça, and Hermann Ney. 2020. When and why is unsupervised neural machine translation useless? In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pages 35-44, Lisboa, Portugal. European Association for Machine Translation.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit, volume 5, pages 79-86. CiteSeer.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions, pages 177-180. Association for Computational Linguistics.
Sandra Kübler, Ryan McDonald, and Joakim Nivre. 2009. Dependency parsing. Synthesis lectures on human language technologies, 1(1):1-127.
Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.
Anoop Kunchukuttan, Pratik Mehta, and Pushpak Bhattacharyya. 2018. The IIT Bombay English-Hindi parallel corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Kemal Kurniawan, Lea Frermann, Philip Schulz, and Trevor Cohn. 2021. PPT: Parsimonious parser transfer for unsupervised cross-lingual adaptation. arXiv preprint arXiv:2101.11216.
Garry Kuwanto, Afra Feyza Akyurek, Isidora Chara Tourni, Siyang Li, and Derry Wijaya. 2021. Low-resource machine translation for low-resource languages: Leveraging comparable data, codeswitching and compute resources.
Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Representations.
Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018b. Word translation without parallel data. In International Conference on Learning Representations.
Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018c. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039-5049, Brussels, Belgium. Association for Computational Linguistics.
Shen Li, Joao V Graça, and Ben Taskar. 2012. Wiki-ly supervised part-of-speech tagging. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1389-1398. Association for Computational Linguistics.
Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, and Hai Zhao. 2020. Data-dependent gaussian prior objective for language generation. In International Conference on Learning Representations.
Wen-Pin Lin, Matthew Snover, and Heng Ji. 2011. Unsupervised language-independent name translation mining from wikipedia infoboxes. In Proceedings of the First workshop on Unsupervised Learning in NLP, pages 43-52.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. arXiv preprint arXiv:2001.08210.
Xuezhe Ma and Fei Xia. 2014. Unsupervised dependency parsing with transferring distribution via parallel guidance and entropy regularization. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1337-1348, Baltimore, Maryland. Association for Computational Linguistics.
Kelly Marchisio, Kevin Duh, and Philipp Koehn. 2020. When does unsupervised machine translation work? In Proceedings of the Fifth Conference on Machine Translation, pages 571-583, Online. Association for Computational Linguistics.
Toan Q. Nguyen and David Chiang. 2017. Transfer learning across low-resource, related languages for neural machine translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 296-301, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Franz Josef Och and Hermann Ney. 2003a. A systematic comparison of various statistical alignment models. Computational linguistics, 29(1):19-51.
Franz Josef Och and Hermann Ney. 2003b. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in neural information processing systems, pages 8026-8037.
Alexandre Patry and Philippe Langlais. 2011. Identifying parallel documents from a large bilingual collection of texts: Application to parallel article extraction in Wikipedia. In Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web, pages 87-95, Portland, Oregon. Association for Computational Linguistics.
Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.
Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 101-108, Online. Association for Computational Linguistics.
Mohammad Sadegh Rasooli and Michael Collins. 2015. Density-driven cross-lingual transfer of dependency parsers. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 328-338, Lisbon, Portugal. Association for Computational Linguistics.
Mohammad Sadegh Rasooli and Michael Collins. 2017. Cross-lingual syntactic transfer with limited resources. Transactions of the Association for Computational Linguistics, 5:279-293.
Mohammad Sadegh Rasooli and Michael Collins. 2019. Low-resource syntactic transfer with unsupervised source reordering. In Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3845-3856, Minneapolis, Minnesota. Association for Computational Linguistics.
Philip Resnik. 1998. Parallel strands: A preliminary investigation into mining the web for bilingual text. In Conference of the Association for Machine Translation in the Americas, pages 72-82. Springer.
Philip Resnik and Noah A Smith. 2003. The web as a parallel corpus. Computational Linguistics, 29(3):349-380.
Dana Ruiter, Cristina Espana-Bonet, and Josef van Genabith. 2019. Self-supervised neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1828-1834, Florence, Italy. Association for Computational Linguistics.
Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzmán. 2019a. WikiMatrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia. arXiv preprint arXiv:1907.05791.
Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, and Armand Joulin. 2019b. CCMatrix: Mining billions of high-quality parallel sentences on the web. arXiv preprint arXiv:1911.04944.
Sukanta Sen, Kamal Kumar Gupta, Asif Ekbal, and Pushpak Bhattacharyya. 2019. Multilingual unsupervised NMT using shared encoder and language-specific decoders. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3083-3089, Florence, Italy. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Edinburgh neural machine translation systems for WMT 16. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 371-376, Berlin, Germany. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics.
Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of ACL.
Amanpreet Singh, Vedanuj Goswami, and Devi Parikh. 2020. Are we pretraining it right? digging deeper into visio-linguistic pretraining. arXiv preprint arXiv:2004.08744.
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019a. MASS: Masked sequence to sequence pre-training for language generation. arXiv preprint arXiv:1905.02450.
Yuqing Song, Shizhe Chen, Yida Zhao, and Qin Jin. 2019b. Unpaired cross-lingual image caption generation with self-supervised rewards. In Proceedings of the 27th ACM International Conference on Multimedia, pages 784-792.
Lucia Specia, Stella Frank, Khalil Sima'An, and Desmond Elliott. 2016. A shared task on multimodal machine translation and crosslingual image description. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 543-553.
Krishna Srinivasan, Karthik Raman, Jiecao Chen, Michael Bendersky, and Marc Najork. 2021. Wit: Wikipedia-based image text dataset for multimodal multilingual machine learning. arXiv preprint arXiv:2103.01913.
Milan Straka, Jan Hajic, and Jana Straková. 2016. Udpipe: trainable pipeline for processing conll-u files performing tokenization, morphological analysis, pos tagging and parsing. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 4290-4297.
Yunwon Tae, Cheonbok Park, Taehee Kim, Soyoung Yang, Mohammad Azam Khan, Eunjeong Park, Tao Qin, and Jaegul Choo. 2020. Meta-learning for low-resource unsupervised neural machine translation. arXiv preprint arXiv:2010.09046.
Chau Tran, Yuqing Tang, Xian Li, and Jiatao Gu. 2020. Cross-lingual retrieval for iterative self-supervised training. Advances in Neural Information Processing Systems, 33.
Dan Tufis, Radu Ion, Stefan Daniel Dumitrescu, and Dan Stefanescu. 2013. Wikipedia as an SMT training corpus. In Proceedings of the International Conference Recent Advances in Natural Language Processing RANLP 2013, pages 702-709.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.
Derry Tanti Wijaya, Brendan Callahan, John Hewitt, Jie Gao, Xiao Ling, Marianna Apidianaki, and Chris Callison-Burch. 2017. Learning translations via matrix completion. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1452-1463.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
Shaowei Yao and Xiaojun Wan. 2020. Multimodal transformer for multimodal machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4346-4350.
David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the First International Conference on Human Language Technology Research, HLT '01, pages 1-8, Stroudsburg, PA, USA. Association for Computational Linguistics.
Daniel Zeman, Joakim Nivre, et al. 2020. Universal dependencies 2.7. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (UFAL), Faculty of Mathematics and Physics, Charles University.
Michal Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The United Nations parallel corpus v1.0. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3530-3534.
Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568-1575, Austin, Texas. Association for Computational Linguistics.
# A Cross-Lingual Embedding
We use the off-the-shelf 300-dimensional FastText embeddings (Grave et al., 2018) as monolingual embedding vectors. We run FastAlign (Dyer et al., 2013) on the seed parallel text in both the source-to-target and target-to-source directions, intersect the two alignments, and extract the most frequent aligned target word for every source word as its dictionary entry. We use the cross-lingual CCA tool (Faruqui and Dyer, 2014) to extract 150-dimensional vectors; this tool runs on a single CPU within a few hours.
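A small sketch of the dictionary-extraction step just described (the input format is an assumption: sentence pairs with their intersected alignment links):

```python
from collections import Counter, defaultdict

def extract_dictionary(aligned_corpus):
    """For every source word, keep the most frequently aligned target word
    as its dictionary entry. `aligned_corpus` yields tuples of
    (src_tokens, tgt_tokens, links) with links as (src_idx, tgt_idx) pairs."""
    counts = defaultdict(Counter)
    for src_tokens, tgt_tokens, links in aligned_corpus:
        for i, j in links:
            counts[src_tokens[i]][tgt_tokens[j]] += 1
    return {src: tgt_counts.most_common(1)[0][0]
            for src, tgt_counts in counts.items()}
```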
# B Monolingual and Translation Datasets
We use an off-the-shelf Indic-transliteration library to convert the Hindi documents from Devanagari into the Gujarati script, which mainly amounts to removing the graphical bars from the Hindi letters; this makes the Hindi documents look like Gujarati and increases the chance of capturing more words in common. We boost the Romanian, Gujarati, and Kazakh monolingual data with the news-text datasets from WMT. For parallel data in similar languages, we use the Mizan parallel data for Persian (Kashefi, 2018) with one million sentences, the IITB data (Kunchukuttan et al., 2018) and HindEnCorp 0.5 (Bojar et al., 2014) for Hindi with a total of 367K sentences, ParaCrawl for Russian (Esplà et al., 2019) with 12M sentences, and Europarl for Italian (Koehn, 2005) with 2M sentences. We use the Arabic-English UN data (Ziemski et al., 2016), WMT 2019 data (Barrault et al., 2019) for Gujarati-English and Kazakh-English, and WMT 2016 shared task data (Bojar et al., 2016) for Romanian-English. Following previous work (Sennrich et al., 2016a), diacritics are removed from the Romanian data.
# C Translation Training Parameters
|
| 411 |
+
|
| 412 |
+
We pick comparable candidates for sentence pairs whose lengths are within a range of half to twice of each other. As we see, the final size of mined datasets heavily depends on the number of paired English-target language Wikipedia documents. We train our translation models initialized by pretrained models. Each batch has roughly 4K tokens. Except for Arabic, for which the size of mined data significantly outnumbers the size of Persian-English parallel data, we use the related language data before using iterative backtranslation in which we only use the source and target monolingual datasets. We use similar learning hyper-parameters to pretraining except for iterative back-translation in which we accumulate gradients for 100 steps, and use a dropout probability of 0.2 and 10000 warmup steps since we find smaller dropout and warmup make the model diverge. Our one-shot back-translation experiments use a beam size of 4, but we use a beam size of one for iterative
|
| 413 |
+
|
| 414 |
+

|
| 415 |
+
Figure 7: Results using our mined data versus WikiMatrix (Schwenk et al., 2019a) and gold-standard data.
|
| 416 |
+
|
| 417 |
+

|
| 418 |
+
Figure 8: Results using mined data (no back-translation) with and without pretraining.
|
| 419 |
+
|
| 420 |
+
back-translation, since we have not seen significant gains from beam-based iterative back-translation except in purely unsupervised settings. All of our translations are performed with a beam size of 4, max_len_a = 1.3, and max_len_b = 5. We alternate between the supervised parallel data of a similar language paired with English and the mined data.
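The comparable-candidate length filter described at the start of this appendix amounts to a simple ratio test; a sketch follows (measuring length in whitespace tokens is our assumption, and the example pairs are made up):

```python
def is_comparable(src_len, tgt_len):
    """Keep pairs whose lengths are within half to twice of each other."""
    return src_len > 0 and tgt_len > 0 and 0.5 * src_len <= tgt_len <= 2.0 * src_len

pairs = [("a short source sentence", "una frase corta"),
         ("one", "a much much much longer target sentence")]
kept = [(s, t) for s, t in pairs
        if is_comparable(len(s.split()), len(t.split()))]
print(kept)  # only the first pair survives the filter
```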
|
| 421 |
+
|
| 422 |
+
We train the translation models for roughly 400K batches, except for Gujarati, which has less mined data and for which we train for 200K iterations. We observed a quick divergence in Kazakh iterative back-translation, so we stopped it early after one epoch over all the monolingual data. Most likely, the mined Kazakh-English data has lower quality (see the supplementary material for more details), which leads to very noisy back-translation outputs. All of our evaluations are conducted using SacreBLEU (Post, 2018), except for en $\leftrightarrow$ ro, for which we use the BLEU score (Papineni et al., 2002) from the Moses decoder scripts (Koehn et al., 2007) for the sake of comparison to previous work.
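The SacreBLEU evaluation can be reproduced through its Python API; a sketch (the hypothesis and reference strings are placeholders):

```python
import sacrebleu  # pip install sacrebleu

hypotheses = ["the cat sat on the mat"]
references = [["the cat sat on the mat"]]  # one inner list per reference stream
print(sacrebleu.corpus_bleu(hypotheses, references).score)
```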
|
| 423 |
+
|
| 424 |
+
# D Quality of Mined Data
|
| 425 |
+
|
| 426 |
+
The quality of parallel data matters a great deal for reaching high accuracy. For example, we manually observe that the quality of the mined data is very good for all languages except Kazakh. Our hypothesis is that the Kazakh Wikipedia data is less aligned with the English content. We compare our mined data to the supervised mined data of WikiMatrix
|
| 427 |
+
|
| 428 |
+

|
| 429 |
+
Figure 9: Our best results versus the supervised model of Tran et al. (2020).
|
| 430 |
+
|
| 431 |
+
(Schwenk et al., 2019a), as well as to gold-standard data. Figure 7 shows the difference between the three datasets for three language pairs (WikiMatrix does not contain Gujarati). As we see, our data yields BLEU scores close to those of WikiMatrix for all languages, and in the case of Kazakh, the model trained on our data outperforms the one trained on WikiMatrix. In other words, when the comparable data is very noisy, as is the case for Kazakh-English, our model even outperforms a contextualized supervised model. It is also interesting to see that our model outperforms the supervised model for Kazakh, which has only $7.7\mathrm{K}$ gold-standard training sentences. These are all strong pieces of evidence for the strength of our approach in truly low-resource settings.
|
| 432 |
+
|
| 433 |
+
# E Pretraining Matters
|
| 434 |
+
|
| 435 |
+
It is a truth universally acknowledged, that a single model in possession of small training data and high learning capacity, must be in want of a pretrained model. To test this, we run our translation experiments with and without pretraining. In this case, all models with the same training data and parameters are equal, but some models are more equal. Figure 8 shows the results on the mined data. Clearly, there is a significant gain from using pretrained models. For Gujarati, the lowest-resource language in our experiments, the gap is more notable: from a BLEU score of 2.9 to 9.0. If we had access to a cluster of high-memory GPUs, we could potentially obtain even higher results across all of our experiments. Therefore, we believe that part of the blame for our English-Romanian results lies with pretraining. As we see in Figure 7, our supervised results without back-translation are also low for English-Romanian.
|
| 436 |
+
|
| 437 |
+
# F Comparing to CRISS
|
| 438 |
+
|
| 439 |
+
The recent work of Tran et al. (2020) shows impressive gains by using high-quality pretrained models and iterative parallel data mining from comparable
|
| 440 |
+
|
| 441 |
+
data larger than that of Wikipedia. Their pretrained model is trained on 256 Nvidia V100 GPUs for approximately 2.5 weeks (Liu et al., 2020). Figure 9 shows that, all these facts considered, our model still outperforms their supervised model in English-to-Kazakh by a large margin (4.3 vs. 10.8) and gets close to their performance in the other directions. We should emphasize that Tran et al. (2020) explore much larger comparable data than ours. One clear extension of our work is mining parallel data from other available comparable datasets. Due to limited computational resources, we skip this step, but we do believe that our current unsupervised models can help extract even more high-quality parallel data from comparable datasets, which might lead to further gains for low-resource languages.
|
wikilysupervisedneuraltranslationtailoredtocrosslingualtasks/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:691679eec31cd4a18f91af3db5bb25c11764e4398f3f1728ec07faeee6c939ae
|
| 3 |
+
size 514601
|
wikilysupervisedneuraltranslationtailoredtocrosslingualtasks/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d0e6eaad5d72e5b4b6d4f300331676f0fdc21a4ba90fe1da63580434c0877c0f
|
| 3 |
+
size 564720
|
zeroshotdialoguedisentanglementbyselfsupervisedentangledresponseselection/e6a5921f-cf3c-46be-a9f7-737faa10faab_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:cb9d28d3928f9c2abde8001e477ef53b594ce2a991001b888cc104d451568a93
|
| 3 |
+
size 41499
|
zeroshotdialoguedisentanglementbyselfsupervisedentangledresponseselection/e6a5921f-cf3c-46be-a9f7-737faa10faab_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:763ad27b8bb94ccc1e8c53bf5f84229f64b1a0d1beb59b2c6319a551124e8b4a
|
| 3 |
+
size 51976
|
zeroshotdialoguedisentanglementbyselfsupervisedentangledresponseselection/e6a5921f-cf3c-46be-a9f7-737faa10faab_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:62155ac8c1d1910e95c7b5a2147bb246d78afffcc6c44e3fc99d1011056683a7
|
| 3 |
+
size 378554
|
zeroshotdialoguedisentanglementbyselfsupervisedentangledresponseselection/full.md
ADDED
|
@@ -0,0 +1,191 @@
|
| 1 |
+
# Zero-Shot Dialogue Disentanglement by Self-Supervised Entangled Response Selection
|
| 2 |
+
|
| 3 |
+
Ta-Chung Chi
|
| 4 |
+
|
| 5 |
+
Language Technologies Institute
|
| 6 |
+
|
| 7 |
+
Carnegie Mellon University
|
| 8 |
+
|
| 9 |
+
tachungc@andrew.cmu.edu
|
| 10 |
+
|
| 11 |
+
Alexander I. Rudnicky
|
| 12 |
+
|
| 13 |
+
Language Technologies Institute
|
| 14 |
+
|
| 15 |
+
Carnegie Mellon University
|
| 16 |
+
|
| 17 |
+
air@cs.cmu.edu
|
| 18 |
+
|
| 19 |
+
# Abstract
|
| 20 |
+
|
| 21 |
+
Dialogue disentanglement aims to group utterances in a long, multi-participant dialogue into threads. This is useful for discourse analysis and downstream applications such as dialogue response selection, where it can be the first step in constructing a clean context/response set. Unfortunately, labeling all reply-to links takes quadratic effort w.r.t. the number of utterances: an annotator must check all preceding utterances to identify the one to which the current utterance is a reply. In this paper, we are the first to propose a zero-shot dialogue disentanglement solution. First, we train a model on an unannotated, multi-participant response selection dataset harvested from the web; we then apply the trained model to perform zero-shot dialogue disentanglement. Without any labeled data, our model can achieve a cluster F1 score of 25. We also fine-tune the model using various amounts of labeled data. Experiments show that with only $10\%$ of the data, we achieve nearly the same performance as using the full dataset<sup>1</sup>.
|
| 22 |
+
|
| 23 |
+
# 1 Introduction
|
| 24 |
+
|
| 25 |
+
Multi-participant chat platforms such as Messenger and WhatsApp are common on the Internet. While they make it easy to communicate with others, messages often flood into a single channel, producing an entangled chat history that is poorly organized and difficult to structure. In contrast, Slack provides a thread-opening feature that allows users to manually organize their discussions. It would be ideal if we could design an algorithm to automatically organize an entangled conversation into its constituent threads. This is referred to as the task of dialogue disentanglement (Shen et al., 2006; Elsner and Charniak, 2008; Wang and Oard, 2009; Elsner and Charniak, 2011; Jiang et al., 2018; Kummerfeld et al., 2018; Zhu et al., 2020; Li et al., 2020; Yu and Joty, 2020).
|
| 26 |
+
|
| 27 |
+

|
| 28 |
+
Figure 1: This is the high-level flow of our proposed approach.
|
| 29 |
+
|
| 30 |
+
Training data for the dialogue disentanglement task is difficult to acquire due to the need for manual annotation. Typically, the data is annotated in the reply-to link format, i.e. every utterance is linked to one preceding utterance. The effort is quadratic w.r.t. the length of the dialogue, partly explaining why only one large-scale human-annotated dataset exists (Kummerfeld et al., 2018), which was constructed based on the Ubuntu IRC forum. To circumvent the need for expensive labeled data, we aim to first train a self-supervised model and then use it to perform zero-shot dialogue disentanglement. In other words, our goal is to find a task that can learn implicit reply-to links without labeled data.
|
| 31 |
+
|
| 32 |
+
Entangled response selection (Gunasekara et al., 2020) is the task that we will focus on. It is similar to the traditional response selection task, whose goal is to pick the correct next response among candidates, with the difference that its dialogue context consists of multiple topics and participants, leading to a much longer context (avg. 55 utterances). We hypothesize that:
|
| 33 |
+
|
| 34 |
+
A well-performing model of entangled response selection requires recovery of reply-to links to preceding dialogue.
|
| 35 |
+
|
| 36 |
+
This is the only way that a model can pick the correct next response given an entangled context. Two challenges are ahead of us:
|
| 37 |
+
|
| 38 |
+
- Choosing a design for such a model: previous work relies on heuristics to filter out utterances and condense the context, but a model should not rely on such heuristics. See §2.3 and §2.5.
|
| 39 |
+
- Even though we can train a well-performing model, how should we reveal the links learned implicitly? See §3.4.
|
| 40 |
+
|
| 41 |
+
Finally, we want to highlight the high practical value of our proposed method. Consider that we have access to a large and unlabeled corpus of chat (e.g. WhatsApp/Messenger) history. The only cost should be training the proposed entangled response selection model with attention supervision using unlabeled data. The trained model is immediately ready for dialogue disentanglement. In summary, the contributions of this work are:
|
| 42 |
+
|
| 43 |
+
- Show that complex pruning strategies are not necessary for entangled response selection.
|
| 44 |
+
- With the proposed objective, the model trained on entangled response selection can perform zero-shot dialogue disentanglement.
|
| 45 |
+
- By tuning with $10\%$ of the labeled data, our model achieves comparable performance to that trained using the full dataset.
|
| 46 |
+
|
| 47 |
+
# 2 Entangled Response Selection
|
| 48 |
+
|
| 49 |
+
# 2.1 Task Description
|
| 50 |
+
|
| 51 |
+
The dataset we use is DSTC8 subtask-2 (Gunasekara et al., 2020), which was constructed by crawling the Ubuntu IRC forum. Concretely, given an entangled dialogue context, the model is expected to pick the next response among 100 candidates. The average context length is 55 and the number of speakers is 20 with multiple (possibly relevant) topics discussed concurrently. The context is too long to be encoded by transformer-based models (Devlin et al., 2018; Liu et al., 2019). Despite the existence of models capable of handling long context (Yang et al., 2019; Zaheer et al., 2020; Beltagy et al., 2020), it is difficult to reveal the implicitly learned reply-to links as done in §3.4.
|
| 52 |
+
|
| 53 |
+
# 2.2 Related Work
|
| 54 |
+
|
| 55 |
+
To the best of our knowledge, previous work adopts complex heuristics to prune utterances from the long context (Wu et al., 2020; Wang et al., 2020; Gu et al., 2020; Bertero et al., 2020). For example, keeping the utterances whose speaker is the same as
|
| 56 |
+
|
| 57 |
+
<table><tr><td>Model</td><td>R@1</td><td>R@5</td><td>R@10</td><td>MRR</td></tr><tr><td>Concatenate</td><td>51.6</td><td>72.3</td><td>80.1</td><td>61.2</td></tr><tr><td>+ aug</td><td>64.3</td><td>82.9</td><td>88.4</td><td>72.8</td></tr><tr><td>Hierarchical</td><td>50.0</td><td>72.1</td><td>81.4</td><td>60.3</td></tr><tr><td>+ aug</td><td>65.7</td><td>84.8</td><td>91.8</td><td>74.3</td></tr></table>
|
| 58 |
+
|
| 59 |
+
Table 1: Test set performance of the entangled response selection task. Concatenate is the model with complex pruning heuristics described in Wu et al. (2020).
|
| 60 |
+
|
| 61 |
+
or referred to by the candidates. This is problematic for two reasons. 1) The retained context is still noisy, as multiple speakers are present in the candidates. 2) We might accidentally prune relevant utterances simply because they do not share the same speakers. A better solution is to let the model decide which utterances should be retained.
|
| 62 |
+
|
| 63 |
+
# 2.3 Model (Solid Arrows in Figure 2)
|
| 64 |
+
|
| 65 |
+
We use a hierarchical encoder as shown in the middle part of Figure 2. Suppose the input context is $\{U_i\}_{i=1}^n$ and the next response candidate set is $\{C_k\}_{k=1}^m$ . For every candidate utterance $C_k$ , we concatenate it with all $U_i$ s. For example, we form $n$ pairs for $k = 1$ , $(U_i + C_1)_{i=1}^n$ . Then we use BERT as the encoder ( $\varphi$ ) to encode pairs and get the last layer embedding of the [CLS] token as $\mathbf{V}_i$ :
|
| 66 |
+
|
| 67 |
+
$$
|
| 68 |
+
\mathbf{V}_i = \varphi\left(U_i + C_1\right), \quad \forall i \in 1 \dots n \tag{1}
|
| 69 |
+
$$
|
| 70 |
+
|
| 71 |
+
$$
|
| 72 |
+
\mathbf{V}_{n+1} = \varphi\left(C_k + C_k\right) \tag{2}
|
| 73 |
+
$$
|
| 74 |
+
|
| 75 |
+
While $(C_k + C_k)$ is not necessary for response selection, it is useful later for predicting the self-link, which marks the first utterance of a thread. We will see its role in §3.4. We then use the output embeddings of a one-layer transformer $(\psi)$ with 8 heads to encode contextualized representations:
|
| 76 |
+
|
| 77 |
+
$$
|
| 78 |
+
\left\{\mathbf{V}_i^{\prime}\right\}_{i=1}^{n+1} = \psi\left(\left\{\mathbf{V}_i\right\}_{i=1}^{n+1}\right) \tag{3}
|
| 79 |
+
$$
|
| 80 |
+
|
| 81 |
+
To determine relative importance, we use an attention module $(\mathcal{A})$ to calculate attention scores:
|
| 82 |
+
|
| 83 |
+
$$
|
| 84 |
+
v_i = \operatorname{MLP}\left(\mathbf{V}_i^{\prime}\right), \quad \forall i \in 1 \dots n+1 \tag{4}
|
| 85 |
+
$$
|
| 86 |
+
|
| 87 |
+
$$
|
| 88 |
+
\left\{\alpha_i\right\}_{i=1}^{n+1} = \operatorname{softmax}\left(\left\{v_i\right\}_{i=1}^{n+1}\right) \tag{5}
|
| 89 |
+
$$
|
| 90 |
+
|
| 91 |
+
The final predicted score is:
|
| 92 |
+
|
| 93 |
+
$$
|
| 94 |
+
s = \operatorname{MLP}\left(\sum_{i=1}^{n+1} \alpha_i \mathbf{V}_i^{\prime}\right) \tag{6}
|
| 95 |
+
$$
|
| 96 |
+
|
| 97 |
+
Note that $s$ should be 1 for $C_1$ (the correct next response) and 0 otherwise (row 1 of the multi-task loss table in Figure 2). This can be optimized with the binary cross-entropy loss.
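The following PyTorch sketch condenses Equations 1-6 into one module. It is our paraphrase of the architecture under stated assumptions (a bert-base checkpoint, single-linear stand-ins for the MLPs, and one candidate's $n+1$ pairs processed as a single batch), not the authors' released code:

```python
import torch
import torch.nn as nn
from transformers import BertModel

class HierarchicalSelector(nn.Module):
    def __init__(self):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")   # phi
        layer = nn.TransformerEncoderLayer(d_model=768, nhead=8)
        self.context = nn.TransformerEncoder(layer, num_layers=1)    # psi
        self.attn_mlp = nn.Linear(768, 1)   # A, Eq. 4 (single-linear stand-in)
        self.score_mlp = nn.Linear(768, 1)  # scoring head, Eq. 6

    def forward(self, pair_ids, pair_mask):
        # Row i encodes (U_i + C_k); the last row encodes (C_k + C_k).
        V = self.bert(pair_ids, attention_mask=pair_mask).last_hidden_state[:, 0]
        Vp = self.context(V.unsqueeze(1)).squeeze(1)                  # Eq. 3
        alpha = torch.softmax(self.attn_mlp(Vp).squeeze(-1), dim=0)   # Eqs. 4-5
        s = torch.sigmoid(self.score_mlp((alpha.unsqueeze(-1) * Vp).sum(0)))  # Eq. 6
        return s, alpha
```

The returned `s` is trained with binary cross-entropy against 1 for the correct candidate and 0 otherwise, while `alpha` is the distribution reused in §3 for disentanglement.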
|
| 98 |
+
|
| 99 |
+

|
| 100 |
+
Figure 2: Solid arrows: Given an entangled context $U_{1,2,3}$ and $C_{k=1}$ as the correct next response ( $C_{k=2}$ is a negative sample), each pair of the concatenated inputs is encoded separately by $\varphi$ (BERT) to get $\mathbf{V}_{\mathrm{i}}$ . A context-aware model $\psi$ (transformer) is applied over $\mathbf{V}_{\mathrm{i}}$ s to generate contextualized $\mathbf{V}_{\mathrm{i}}'$ . An attention module $\mathcal{A}$ is used to calculate the attention scores $\alpha_{i}$ and weighted sum $s$ . Model is optimized according to the target values for $s$ and $\alpha_{4}$ in the multi-task loss table. Dashed arrows: Given another entangled context, we know that the current utterance $C_{k=1}$ is replying to $U_{2}$ by taking the arg max of attention scores $\alpha_{i}$ in a zero-shot manner.
|
| 101 |
+
|
| 102 |
+
# 2.4 Results
|
| 103 |
+
|
| 104 |
+
We show the results in Table 1. The performance of our approach is comparable to previous work. Note that our model does not use any heuristics to prune utterances; instead, the attention scores $\alpha_{i}$ are decided entirely by the model. We also run an experiment using augmented data following Wu et al. (2020), constructed by excerpting partial context from the original context<sup>2</sup>. Finally, we want to highlight the importance of the attention module $\mathcal{A}$: performance drops by 10 points if it is removed.
|
| 105 |
+
|
| 106 |
+
# 2.5 Attention Analysis
|
| 107 |
+
|
| 108 |
+
The empirical success of the hierarchical encoder has an important implication: it is able to link the candidate with one or more relevant utterances in the context. This can be seen in the attention distribution $\alpha_{i}$. Intuitively, if $C_k$ is the correct next response (i.e. $k = 1$), the attention distribution should be sharp, indicating an implicit reply-to link to one of the previous utterances. In contrast, if $C_k$ is incorrect (i.e. $k \neq 1$), our model is less likely to find an implicit link, and the attention distribution should be flat. Entropy is a good tool
|
| 109 |
+
|
| 110 |
+
to quantify sharpness. Numerically, the entropy is 1.4 (sharp) when $C_k$ is correct and 2.1 (flat) for incorrect ones, validating our suppositions.
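The sharpness check is plain Shannon entropy over the attention weights; a quick illustrative sketch (the example distributions below are made up):

```python
import torch

def attention_entropy(alpha, eps=1e-12):
    """Shannon entropy (in nats) of an attention distribution."""
    return -(alpha * (alpha + eps).log()).sum().item()

sharp = torch.tensor([0.85, 0.05, 0.05, 0.05])  # implicit reply-to link found
flat = torch.tensor([0.25, 0.25, 0.25, 0.25])   # no clear link
print(attention_entropy(sharp) < attention_entropy(flat))  # True
```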
|
| 111 |
+
|
| 112 |
+
Is it possible to reveal these implicit links? The solution is inspired by the labeled data of dialogue disentanglement as elaborated in §3.4.
|
| 113 |
+
|
| 114 |
+
# 3 Zero-Shot Dialogue Disentanglement
|
| 115 |
+
|
| 116 |
+
# 3.1 Task Description
|
| 117 |
+
|
| 118 |
+
The dataset used is DSTC8 subtask-4 (Kummerfeld et al., 2018) $^3$ . We want to find the parent utterance in an entangled context to which the current utterance is replying, and repeat this process for every utterance. After all the links are predicted, we run a connected component algorithm over them, where each connected component is one thread.
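The link-then-cluster step can be made concrete with a few lines of union-find; this is a generic sketch of the procedure described above, with illustrative utterance ids:

```python
def disentangle(num_utterances, links):
    """Group utterances into threads given (child, parent) reply-to links.

    A self-link (i, i) leaves utterance i as the start of its own thread;
    every other link merges two connected components.
    """
    parent = list(range(num_utterances))

    def find(x):  # find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for child, reply_to in links:
        parent[find(child)] = find(reply_to)

    threads = {}
    for u in range(num_utterances):
        threads.setdefault(find(u), []).append(u)
    return list(threads.values())

# Utterance 2 replies to 0, 3 to 1, 4 to 2 -> threads [0, 2, 4] and [1, 3].
print(disentangle(5, [(2, 0), (3, 1), (4, 2)]))
```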
|
| 119 |
+
|
| 120 |
+
# 3.2 Related Work
|
| 121 |
+
|
| 122 |
+
All previous work (Shen et al., 2006; Elsner and Charniak, 2008; Wang and Oard, 2009; Elsner and Charniak, 2011; Jiang et al., 2018; Kummerfeld et al., 2018; Zhu et al., 2020; Li et al., 2020; Yu and Joty, 2020) treats the task as a sequence of multiple-choice problems, each consisting of a sliding window of $n$ utterances. The task is to link the last
|
| 123 |
+
|
| 124 |
+
<table><tr><td rowspan="2">Setting</td><td rowspan="2">w</td><td rowspan="2">data%</td><td colspan="5">CLUSTER</td><td colspan="3">LINK</td></tr><tr><td>VI</td><td>ARI</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>1) zero shot</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>0.00</td><td>0.0</td><td>62.9</td><td>14.7</td><td>2.9</td><td>0.3</td><td>0.5</td><td>41.2</td><td>39.7</td><td>40.5</td></tr><tr><td></td><td>0.25</td><td>0.0</td><td>84.4</td><td>50.1</td><td>25.9</td><td>24.8</td><td>25.3</td><td>43.7</td><td>41.4</td><td>42.2</td></tr><tr><td></td><td>0.50</td><td>0.0</td><td>84.6</td><td>51.5</td><td>24.6</td><td>23.8</td><td>24.2</td><td>41.5</td><td>40.0</td><td>40.8</td></tr><tr><td></td><td>0.75</td><td>0.0</td><td>84.6</td><td>49.2</td><td>23.3</td><td>23.1</td><td>23.2</td><td>41.8</td><td>40.3</td><td>41.1</td></tr><tr><td></td><td>1.00</td><td>0.0</td><td>84.3</td><td>47.5</td><td>22.9</td><td>23.0</td><td>23.0</td><td>41.6</td><td>40.1</td><td>40.9</td></tr><tr><td>2) few shot</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>finetune</td><td>0.25</td><td>1</td><td>89.7</td><td>60.2</td><td>26.1</td><td>33.9</td><td>29.5</td><td>65.0</td><td>62.7</td><td>63.8</td></tr><tr><td>scratch</td><td>-</td><td>-</td><td>88.7</td><td>58.7</td><td>22.6</td><td>28.6</td><td>25.2</td><td>63.7</td><td>61.4</td><td>62.6</td></tr><tr><td>finetune</td><td>0.25</td><td>10</td><td>90.6</td><td>59.8</td><td>32.4</td><td>38.4</td><td>35.1</td><td>70.5</td><td>68.0</td><td>69.3</td></tr><tr><td>scratch</td><td>-</td><td>-</td><td>90.4</td><td>61.0</td><td>32.4</td><td>36.5</td><td>34.3</td><td>70.4</td><td>67.9</td><td>69.1</td></tr><tr><td>finetune</td><td>0.25</td><td>100</td><td>91.1</td><td>62.7</td><td>35.3</td><td>42.0</td><td>38.3</td><td>74.2</td><td>71.6</td><td>72.9</td></tr><tr><td>scratch</td><td>-</td><td>-</td><td>91.2</td><td>62.1</td><td>35.6</td><td>40.3</td><td>37.8</td><td>74.0</td><td>71.3</td><td>72.6</td></tr></table>
|
| 125 |
+
|
| 126 |
+
Table 2: $w = 0$ indicates pure entangled response selection training. In the few-shot section, scratch denotes the disentanglement model without prior self-supervised training on entangled response selection. The evaluation metrics and labeled data used for fine-tuning are from Kummerfeld et al. (2018). Results are the average of three runs.
|
| 127 |
+
|
| 128 |
+
utterance to one of the preceding $n - 1$ utterances. This model is usually trained in supervised mode using the labeled reply-to links. Our model also follows the same formulation.
|
| 129 |
+
|
| 130 |
+
# 3.3 Model (Dashed Arrows in Figure 2)
|
| 131 |
+
|
| 132 |
+
We use the hierarchical model trained in §2.3, without the final MLP layer used for scoring. In addition, we now have only one candidate: the last utterance of the dialogue, which we denote $C_{k=1}$ for consistency. We only need to compute $i' = \arg \max_i \alpha_i$, which indicates that $C_{k=1}$ is replying to utterance $U_{i'}$ in the context.
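In code, this zero-shot prediction step is just an argmax over the attention weights returned by the model sketched in §2.3 (the weights below are illustrative):

```python
import torch

# alpha: attention weights over U_1..U_n plus the self-link slot at index n.
alpha = torch.tensor([0.05, 0.70, 0.10, 0.15])
parent = int(torch.argmax(alpha))
if parent == len(alpha) - 1:
    print("utterance starts a new thread (self-link)")
else:
    print(f"utterance replies to U_{parent + 1}")
```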
|
| 133 |
+
|
| 134 |
+
# 3.4 Proposed Attention Supervision
|
| 135 |
+
|
| 136 |
+
We note that the labeled reply-to links act as supervision to the attention $\alpha_{i}$ : they indicate which $\alpha_{i}$ should be 1. We call this extrinsic supervision. Recall the implicit attention analysis in §2.5, from which we exploit two kinds of intrinsic supervision:
|
| 137 |
+
|
| 138 |
+
- If $C_k$ is the correct next response, then $\alpha_{n+1} = 0$ because $C_k$ should be linking to one previous utterance, not itself.
|
| 139 |
+
- If $C_k$ is incorrect, then it should point to itself, acting like the start utterance of a new thread. Hence, $\alpha_{n+1} = 1$ .
|
| 140 |
+
|
| 141 |
+
We train this intrinsic attention using MSE (row 2 of the multi-task loss table in Figure 2) along with the original response selection loss, using a weight $w$ for the linear combination $L = (1 - w) \cdot L_{\mathrm{res}} + w \cdot L_{\mathrm{attn}}$. Note that we do not use any labeled
|
| 142 |
+
|
| 143 |
+
disentanglement data in the training process.
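Put together, the objective is a two-term weighted loss. A sketch, assuming the model of §2.3 returns the score `s` and attention weights `alpha` (variable names are ours):

```python
import torch
import torch.nn.functional as F

def multitask_loss(s, alpha, is_correct, w=0.25):
    """(1 - w) * response-selection BCE + w * MSE attention supervision.

    alpha[-1] is the self-link slot: pushed to 0 when the candidate is the
    correct next response (it must link back into the context), and to 1
    when it is incorrect (it would start a new thread).
    """
    target_s = torch.tensor(1.0 if is_correct else 0.0)
    target_self = torch.tensor(0.0 if is_correct else 1.0)
    l_res = F.binary_cross_entropy(s.squeeze(), target_s)
    l_attn = F.mse_loss(alpha[-1], target_self)
    return (1 - w) * l_res + w * l_attn
```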
|
| 144 |
+
|
| 145 |
+

|
| 146 |
+
Figure 3: Different amounts of labeled data for finetuning. The model with self-supervised response selection training outperforms the one trained from scratch.
|
| 147 |
+
|
| 148 |
+
# 3.5 Results
|
| 149 |
+
|
| 150 |
+
We present the results in Table 2. In the first section, we focus on zero-shot performance, varying $w$ to observe its effect. As we can see, $w = 0.25$ gives close-to-best performance in terms of cluster and link scores; therefore, we use it for the few-shot fine-tuning setting, under which our proposed method outperforms baselines trained from scratch by a large margin. We pick the best checkpoint based on validation set performance and evaluate it on the test set. This procedure is repeated three times with different random seeds to get the averaged performance reported in Table 2. With $10\%$ of the data, we can achieve $92\%$ of the performance
|
| 151 |
+
|
| 152 |
+
of the model trained using the full data. The performance gap becomes smaller as more data is used, as illustrated in Figure 3.
|
| 153 |
+
|
| 154 |
+
# 3.6 Real-World Application
|
| 155 |
+
|
| 156 |
+
Our method only requires one additional MLP layer attached to the architecture of Li et al. (2020) to train on the entangled response selection task, hence it is trivial to swap the trained model into a production environment. Suppose a dialogue disentanglement system (Li et al., 2020) is already up and running:
|
| 157 |
+
|
| 158 |
+
1. Train a BERT model on the entangled response selection task (§2.1) with attention supervision loss (§3.4). This is also the multi-task loss depicted in Figure 2.
|
| 159 |
+
2. Copy the weight of the pretrained model into the existing architecture (Li et al., 2020).
|
| 160 |
+
3. Perform zero-shot dialogue disentanglement (zero-shot section of Table 2) right away, or finetune the model further when more labeled data becomes available (few-shot section of Table 2).
|
| 161 |
+
|
| 162 |
+
This strategy will be useful especially when we want to bootstrap a system with limited and expensive labeled data.
|
| 163 |
+
|
| 164 |
+
# 4 Conclusion
|
| 165 |
+
|
| 166 |
+
In this paper, we first demonstrate that entangled response selection does not require complex heuristics for context pruning. This implies that the model may have learned implicit reply-to links useful for dialogue disentanglement. By introducing intrinsic attention supervision to shape the attention distribution, our proposed method can perform zero-shot dialogue disentanglement. Finally, with only $10\%$ of the data for tuning, our model achieves $92\%$ of the performance of the model trained on the full labeled data. Our method is the first attempt at zero-shot dialogue disentanglement, and it can be of high practical value for real-world applications.
|
| 167 |
+
|
| 168 |
+
# References
|
| 169 |
+
|
| 170 |
+
Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.
|
| 171 |
+
Dario Bertero, Takeshi Homma, Kenichi Yokote, Makoto Iwayama, and Kenji Nagamatsu. 2020.
|
| 172 |
+
|
| 173 |
+
Model ensembling of esim and bert for dialogue response selection. DSTC8.
|
| 174 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
|
| 175 |
+
Micha Elsner and Eugene Charniak. 2008. You talking to me? a corpus and algorithm for conversation disentanglement. In Proceedings of ACL-08: HLT, pages 834-842.
|
| 176 |
+
Micha Elsner and Eugene Charniak. 2011. Disentangling chat with local coherence models. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1179-1189.
|
| 177 |
+
Jia-Chen Gu, Tianda Li, Quan Liu, Xiaodan Zhu, Zhen-Hua Ling, and Yu-Ping Ruan. 2020. Pre-trained and attention-based neural networks for building noetic task-oriented dialogue systems. arXiv preprint arXiv:2004.01940.
|
| 178 |
+
Chulaka Gunasekara, Jonathan K. Kummerfeld, Luis Lastras, and Walter S. Lasecki. 2020. Noesis ii: Predicting responses, identifying success, and managing complexity in task-oriented dialogue. In 8th Edition of the Dialog System Technology Challenges at AAAI 2019.
|
| 179 |
+
Jyun-Yu Jiang, Francine Chen, Yan-Ying Chen, and Wei Wang. 2018. Learning to disentangle interleaved conversational threads with a siamese hierarchical network and similarity ranking. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1812-1822.
|
| 180 |
+
Jonathan K Kummerfeld, Sai R Gouravajhala, Joseph Peper, Vignesh Athreya, Chulaka Gunasekara, Jatin Ganhotra, Siva Sankalp Patel, Lazaros Polymenakos, and Walter S Lasecki. 2018. A large-scale corpus for conversation disentanglement. arXiv preprint arXiv:1810.11118.
|
| 181 |
+
Tianda Li, Jia-Chen Gu, Xiaodan Zhu, Quan Liu, Zhen-Hua Ling, Zhiming Su, and Si Wei. 2020. Dialbert: A hierarchical pre-trained model for conversation disentanglement. arXiv preprint arXiv:2004.03760.
|
| 182 |
+
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
|
| 183 |
+
Dou Shen, Qiang Yang, Jian-Tao Sun, and Zheng Chen. 2006. Thread detection in dynamic text message streams. In Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 35-42.
|
| 184 |
+
|
| 185 |
+
Lidan Wang and Douglas W Oard. 2009. Context-based message expansion for disentanglement of interleaved text conversations. In Proceedings of human language technologies: The 2009 annual conference of the North American chapter of the association for computational linguistics, pages 200-208. Citeseer.
|
| 186 |
+
Weishi Wang, Shafiq Joty, and Steven CH Hoi. 2020. Response selection for multi-party conversations with dynamic topic tracking. arXiv preprint arXiv:2010.07785.
|
| 187 |
+
Shuangzhi Wu, Yufan Jiang, Xu Wang, Wei Mia, Zhenyu Zhao, Jun Xie, and Mu Li. 2020. Enhancing response selection with advanced context modeling and post-training. DSTC8.
|
| 188 |
+
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.
|
| 189 |
+
Tao Yu and Shafiq Joty. 2020. Online conversation disentanglement with pointer networks. arXiv preprint arXiv:2010.11080.
|
| 190 |
+
Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big bird: Transformers for longer sequences. arXiv preprint arXiv:2007.14062.
|
| 191 |
+
Henghui Zhu, Feng Nan, Zhiguo Wang, Ramesh Nallapati, and Bing Xiang. 2020. Who did they respond to? conversation structure modeling using masked hierarchical transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9741-9748.
|
zeroshotdialoguedisentanglementbyselfsupervisedentangledresponseselection/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:07164b448ec9192e1c5227f02c7887f708e99299b3791b1ac3297c10a9abb944
|
| 3 |
+
size 218593
|
zeroshotdialoguedisentanglementbyselfsupervisedentangledresponseselection/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a0049b9af4f33e8e9f58ef051c05b6a22ca69f9460c13c28168b999294c6ebe9
|
| 3 |
+
size 235388
|
zeroshotdialoguestatetrackingviacrosstasktransfer/960a6ef3-99a8-43fd-9617-b685940e7e4f_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b70ed065fa312b004850f2cc509d2620c526ff39c39d4ffc993c9d3cfec508e2
|
| 3 |
+
size 72783
|
zeroshotdialoguestatetrackingviacrosstasktransfer/960a6ef3-99a8-43fd-9617-b685940e7e4f_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:21999c7d08f9ee1a0ee3f4921abffedf775a176136e05456b32a80849cc27f20
|
| 3 |
+
size 90495
|