SlowGuess committed on
Commit 591686b · verified · 1 Parent(s): 8f2eecc

Add Batch 5e41e3b7-285e-4bae-8385-6e9a1e773279

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. abenchmarkanddatasetforpostocrtextcorrectioninsanskrit/6ace9380-d009-4d26-a28f-c5ace922e1a0_content_list.json +3 -0
  2. abenchmarkanddatasetforpostocrtextcorrectioninsanskrit/6ace9380-d009-4d26-a28f-c5ace922e1a0_model.json +3 -0
  3. abenchmarkanddatasetforpostocrtextcorrectioninsanskrit/6ace9380-d009-4d26-a28f-c5ace922e1a0_origin.pdf +3 -0
  4. abenchmarkanddatasetforpostocrtextcorrectioninsanskrit/full.md +161 -0
  5. abenchmarkanddatasetforpostocrtextcorrectioninsanskrit/images.zip +3 -0
  6. abenchmarkanddatasetforpostocrtextcorrectioninsanskrit/layout.json +3 -0
  7. acceleratinglearnedsparseindexesviatermimpactdecomposition/46d9e91e-f335-4078-b0da-6f0f03b5aeea_content_list.json +3 -0
  8. acceleratinglearnedsparseindexesviatermimpactdecomposition/46d9e91e-f335-4078-b0da-6f0f03b5aeea_model.json +3 -0
  9. acceleratinglearnedsparseindexesviatermimpactdecomposition/46d9e91e-f335-4078-b0da-6f0f03b5aeea_origin.pdf +3 -0
  10. acceleratinglearnedsparseindexesviatermimpactdecomposition/full.md +361 -0
  11. acceleratinglearnedsparseindexesviatermimpactdecomposition/images.zip +3 -0
  12. acceleratinglearnedsparseindexesviatermimpactdecomposition/layout.json +3 -0
  13. acceptabilityjudgementsviaexaminingthetopologyofattentionmaps/691b92a0-1ad0-4a13-a1ea-1bf2414d7261_content_list.json +3 -0
  14. acceptabilityjudgementsviaexaminingthetopologyofattentionmaps/691b92a0-1ad0-4a13-a1ea-1bf2414d7261_model.json +3 -0
  15. acceptabilityjudgementsviaexaminingthetopologyofattentionmaps/691b92a0-1ad0-4a13-a1ea-1bf2414d7261_origin.pdf +3 -0
  16. acceptabilityjudgementsviaexaminingthetopologyofattentionmaps/full.md +656 -0
  17. acceptabilityjudgementsviaexaminingthetopologyofattentionmaps/images.zip +3 -0
  18. acceptabilityjudgementsviaexaminingthetopologyofattentionmaps/layout.json +3 -0
  19. acriticalreflectionandforwardperspectiveonempathyandnaturallanguageprocessing/5c6de628-b2da-41a3-a620-40a9fd0d8d7a_content_list.json +3 -0
  20. acriticalreflectionandforwardperspectiveonempathyandnaturallanguageprocessing/5c6de628-b2da-41a3-a620-40a9fd0d8d7a_model.json +3 -0
  21. acriticalreflectionandforwardperspectiveonempathyandnaturallanguageprocessing/5c6de628-b2da-41a3-a620-40a9fd0d8d7a_origin.pdf +3 -0
  22. acriticalreflectionandforwardperspectiveonempathyandnaturallanguageprocessing/full.md +429 -0
  23. acriticalreflectionandforwardperspectiveonempathyandnaturallanguageprocessing/images.zip +3 -0
  24. acriticalreflectionandforwardperspectiveonempathyandnaturallanguageprocessing/layout.json +3 -0
  25. activelearningforabstractivetextsummarization/fde33ad9-4bb7-4be9-8522-7653659dd8a5_content_list.json +3 -0
  26. activelearningforabstractivetextsummarization/fde33ad9-4bb7-4be9-8522-7653659dd8a5_model.json +3 -0
  27. activelearningforabstractivetextsummarization/fde33ad9-4bb7-4be9-8522-7653659dd8a5_origin.pdf +3 -0
  28. activelearningforabstractivetextsummarization/full.md +631 -0
  29. activelearningforabstractivetextsummarization/images.zip +3 -0
  30. activelearningforabstractivetextsummarization/layout.json +3 -0
  31. adapromptadaptivemodeltrainingforpromptbasednlp/739ece06-1173-464f-9b64-c71543c35cb4_content_list.json +3 -0
  32. adapromptadaptivemodeltrainingforpromptbasednlp/739ece06-1173-464f-9b64-c71543c35cb4_model.json +3 -0
  33. adapromptadaptivemodeltrainingforpromptbasednlp/739ece06-1173-464f-9b64-c71543c35cb4_origin.pdf +3 -0
  34. adapromptadaptivemodeltrainingforpromptbasednlp/full.md +337 -0
  35. adapromptadaptivemodeltrainingforpromptbasednlp/images.zip +3 -0
  36. adapromptadaptivemodeltrainingforpromptbasednlp/layout.json +3 -0
  37. adaptersforenhancedmodelingofmultilingualknowledgeandtext/7b86874c-c8d8-473f-89fd-815cbc935167_content_list.json +3 -0
  38. adaptersforenhancedmodelingofmultilingualknowledgeandtext/7b86874c-c8d8-473f-89fd-815cbc935167_model.json +3 -0
  39. adaptersforenhancedmodelingofmultilingualknowledgeandtext/7b86874c-c8d8-473f-89fd-815cbc935167_origin.pdf +3 -0
  40. adaptersforenhancedmodelingofmultilingualknowledgeandtext/full.md +377 -0
  41. adaptersforenhancedmodelingofmultilingualknowledgeandtext/images.zip +3 -0
  42. adaptersforenhancedmodelingofmultilingualknowledgeandtext/layout.json +3 -0
  43. adaptingmultilingualmodelsforcodemixedtranslation/34deb24e-6aa2-4be4-be3e-63c105f3ceda_content_list.json +3 -0
  44. adaptingmultilingualmodelsforcodemixedtranslation/34deb24e-6aa2-4be4-be3e-63c105f3ceda_model.json +3 -0
  45. adaptingmultilingualmodelsforcodemixedtranslation/34deb24e-6aa2-4be4-be3e-63c105f3ceda_origin.pdf +3 -0
  46. adaptingmultilingualmodelsforcodemixedtranslation/full.md +194 -0
  47. adaptingmultilingualmodelsforcodemixedtranslation/images.zip +3 -0
  48. adaptingmultilingualmodelsforcodemixedtranslation/layout.json +3 -0
  49. adaptivegraphconvolutionalnetworkforknowledgegraphentityalignment/ebf05760-0f16-42b6-b028-14c066a2ffb3_content_list.json +3 -0
  50. adaptivegraphconvolutionalnetworkforknowledgegraphentityalignment/ebf05760-0f16-42b6-b028-14c066a2ffb3_model.json +3 -0
abenchmarkanddatasetforpostocrtextcorrectioninsanskrit/6ace9380-d009-4d26-a28f-c5ace922e1a0_content_list.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:398e1d4fcc6a5b0719e442772c59fc9c43bc46215f857e49e3dc4172fa4a7732
3
+ size 46924
abenchmarkanddatasetforpostocrtextcorrectioninsanskrit/6ace9380-d009-4d26-a28f-c5ace922e1a0_model.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b15ce2663d8e706f2b799883a57d0be94b275d30091fc646fe97b0c0e2c6fc79
3
+ size 54870
abenchmarkanddatasetforpostocrtextcorrectioninsanskrit/6ace9380-d009-4d26-a28f-c5ace922e1a0_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4fbdf981e23da2e0d8e26580d6161e71d522a01c0f5ed8ef1cdb53f967f3913a
3
+ size 929749
abenchmarkanddatasetforpostocrtextcorrectioninsanskrit/full.md ADDED
@@ -0,0 +1,161 @@
1
+ # A Benchmark and Dataset for Post-OCR text correction in Sanskrit
2
+
3
+ Ayush Maheshwari<sup>1</sup>, Nikhil Singh*, Amrith Krishna<sup>2</sup> and Ganesh Ramakrishnan<sup>1</sup>
4
+
5
+ {ayusham, ganesh}@cse.iitb.ac.in, {nikhil3198, krishnamrith12}@gmail.com
6
+ <sup>1</sup>Indian Institute of Technology Bombay, <sup>2</sup>Uniphore
7
+
8
+ # Abstract
9
+
10
+ Sanskrit is a classical language with about 30 million extant manuscripts fit for digitisation, available in written, printed or scanned-image forms. However, it is still considered to be a low-resource language when it comes to available digital resources. In this work, we release a post-OCR text correction dataset containing around 218,000 sentences, with 1.5 million words, from 30 different books. Texts in Sanskrit are known to be diverse in terms of their linguistic and stylistic usage since Sanskrit was the 'lingua franca' for discourse in the Indian subcontinent for about 3 millennia. Keeping this in mind, we release a multi-domain dataset, from areas as diverse as astronomy, medicine and mathematics, with some of them up to 18 centuries old. Further, we release multiple strong baselines as benchmarks for the task, based on pre-trained Seq2Seq language models. We find that our best-performing model, consisting of byte-level tokenization in conjunction with phonetic encoding (ByT5+SLP1), yields a $23\%$ relative improvement over the OCR output in terms of word and character error rates. Moreover, we perform extensive experiments in evaluating these models on their performance and analyse common causes of mispredictions both at the graphemic and lexical levels. Our code and dataset are publicly available at https://github.com/ayushbits/pe-ocr-sanskrit.
11
+
12
+ # 1 Introduction
13
+
14
+ Post-OCR text correction is a crucial post-processing step employed for correcting errors from the predictions of Optical Character Recognition (OCR) systems (Rijhwani et al., 2021a). A post-OCR corrector leverages the distributional information encoded in language models that aims to not only handle the systemic errors introduced
15
+
16
+ ![](images/77f16c7901715a6afbd8e71d5c2c718e6e4819ac483d8901350ffd27845020ea.jpg)
17
+ Figure 1: Image samples from different pages of our dataset.
18
+
19
+ by the OCR engine but also to predict meaningful and fluent sequences based on the context (Saluja et al., 2019). For a language like Sanskrit, the sources of OCR errors are diverse, owing to the availability of printed historical documents that vary vastly on a number of factors such as scan quality, book layout, typefaces, the orthographic similarity of letters in the alphabet, etc. (see Figure 1). Moreover, processing texts in Sanskrit is often challenging as the language is morphologically rich, lexically productive, follows relatively free-word order and is a low-resource language with limited available machine-readable corpora (Krishna et al., 2021).
20
+
21
+ In this work, we release a large Sanskrit post-correction dataset of more than 218,000 manually verified sentences, consisting of 1.5 million words. Our dataset consists of sentences from 30 books from domains as diverse as philosophy, literature, astronomy, medicine, mathematics, etc. Figure 1 shows a sample of the scanned images from these books, from which we obtained our dataset. Further, the sample clearly demonstrates the diversity in some of the aforementioned factors affecting the quality of the OCR predictions.
22
+
23
+ Historically, depending on the region or the time period in which it was used, several writing systems and scripts were employed for writing Sanskrit. However, the advent of the printing press largely standardized the use of the 'Devanāgari' script as the default writing system for Sanskrit.
24
+
25
+ We additionally release a set of strong seq2seq baselines to benchmark the task, including a CopyNet-based LSTM model (Gu et al., 2016) and four pretrained seq2seq systems (LMs). We find that all the pretrained-LM based baselines improve over the predictions from the original OCR. The best model, which invokes byte-level tokenization, viz., ByT5 (Xue et al., 2022), in conjunction with phonetic encoding (SLP1), among these benchmarks (all described in § 3) reports character and word error rates of $2.98\%$ and $23.19\%$ respectively, as against $3.89\%$ and $30.23\%$ from the original OCR. This is primarily due to the ability of byte-level tokenizers to model arbitrary text in a setting where word frequencies are low and out-of-vocabulary rates are high (§2). Moreover, this goes well with the fact that the writings in Sanskrit follow a phonemic orthography, i.e. phonemes have a direct one-to-one correspondence with the orthographic symbols. We identify that most errors arising from the original OCR are from mispredictions in word boundary detection, diacritics and orthographically similar characters. Further, as the performance of the post-OCR text correction system is highly dependent on the predictions of the OCR, we also release the test dataset used for testing our current OCR. The test dataset consists of 500 images and their corresponding text, which can be used to benchmark an OCR, prior to using its predictions for post-OCR text correction.
26
+
27
+ # 2 Dataset
28
+
29
+ Sanskrit used to be the 'lingua franca' for scholarly discourse in the Indian subcontinent for about three millennia and the classical language is still in sustenance in the region. It is estimated that as many as 30 million extant documents, more than that in Greek and Latin combined, are fit for digitisation in Sanskrit (Goyal et al., 2012; Adiga et al., 2021). The current corpus is released as part of our attempt at large scale digitisation of old manuscripts in Sanskrit. Our corpus contains about 30 books, a subset of 103 books in our digitisation pipeline. These books were originally
30
+
31
+ <table><tr><td>Devanāgari</td><td>क्</td><td>क</td><td>का</td><td>कि</td><td>की</td><td>कृ</td><td>क्क</td><td>अ</td></tr><tr><td>Romanized</td><td>k</td><td>ka</td><td>kā</td><td>ki</td><td>kī</td><td>kṛ</td><td>kka</td><td>a</td></tr></table>
32
+
33
+ Figure 2: Devanāgari and Romanised representation of 'k' followed by different vowels. Conjunct consonant ('kka') may also have separate symbols in Devanāgari.
34
+
35
+ <table><tr><td>Split</td><td># sentences</td><td># words</td></tr><tr><td>Train</td><td>208,173</td><td>1,444,913</td></tr><tr><td>Validation</td><td>5000</td><td>34,762</td></tr><tr><td>Test</td><td>5000</td><td>34,705</td></tr></table>
36
+
37
+ Table 1: Number of words and sentences in the dataset split.
38
+
39
+ published at least a century ago, and are manually verified to have no copyright issues. We consider printed versions of these books, most of them reprinted in the first half of the twentieth century. While these books are widely accessible to the public via libraries and academic institutions, we had to manually scan several of them as part of the digitisation process. These books vary widely in their vocabulary and stylistic usage owing to the differences in the domain and the original time period of publication, where the latter can be as old as the fifth century AD.
40
+
41
+ We release a multi-domain dataset from 30 different books and have 218,000 manually verified sentences in it. The share of each book in the corpus amounts to $3.33\%$ on average with a variance of 4.09, in terms of the number of pages. The corpus consists of more than 1.5 million tokens, with an average frequency of 2.59, and has a vocabulary of 581,445 unique words. Further, $88\%$ of the words have a frequency of one and more than $96\%$ appear fewer than 5 times. Such a frequency distribution of tokens in Sanskrit corpora is common, given the morphological richness and lexical productivity (due to compounding) in Sanskrit (Krishna et al., 2017; Hellwig, 2010-2016). Further, the average word length is 10.4 characters. In Table 1, we present the counts of sentences and tokens in our train-validation-test split. Of the 20,738 words in the test data vocabulary, $54.53\%$ are out of vocabulary. Earlier approaches to post-OCR text correction have employed lexicon-driven approaches for several languages, though such approaches without a wide-coverage lexicon might be challenging for a language like Sanskrit (Bassil and Alwani, 2012; Carlson and Fette, 2007).
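The frequency statistics above can be reproduced from the released splits with a short script; the following is a minimal sketch that assumes whitespace-tokenised sentences (the dataset's own tokenisation may differ).

```python
from collections import Counter

def corpus_stats(train_sentences, test_sentences):
    """Vocabulary size, singleton/rare shares, and test-set OOV rate,
    computed over whitespace-tokenised sentences (illustrative only)."""
    counts = Counter(tok for sent in train_sentences for tok in sent.split())
    vocab_size = len(counts)

    singletons = sum(1 for c in counts.values() if c == 1)
    rare = sum(1 for c in counts.values() if c < 5)

    test_vocab = {tok for sent in test_sentences for tok in sent.split()}
    oov_rate = len(test_vocab - counts.keys()) / len(test_vocab)

    return {
        "vocab_size": vocab_size,
        "avg_frequency": sum(counts.values()) / vocab_size,
        "singleton_share": singletons / vocab_size,   # reported as 88%
        "rare_share": rare / vocab_size,              # words seen fewer than 5 times
        "test_oov_rate": oov_rate,                    # reported as 54.53%
    }
```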
42
+
43
+ Prior work in post-OCR text correction in Sanskrit (Krishna et al., 2018) focused on texts written in IAST or Romanized Sanskrit (Monier-Williams et al., 1899), while the current dataset is focused on Devanāgari. All the books we consider here use Devanāgari script as the writing system. While Devanāgari has been in existence since the fourth century CE (State, 1896), it has become the primary standard for writing Sanskrit, and several other languages, with the advent of the printing press in India. The script consists of 47 primary characters and is a left-to-right abugida, where contiguous consonant-vowel sequences are treated as unitary units. As shown in Figure 2, the vowels following the consonants ('k' in the figure) are treated as secondary units. These units are expressed as 'mātras' in the writing system, which essentially are diacritic markers. These markers may appear before, after, above or below an orthographic consonant symbol as shown in Figure 2. The same vowel, say 'a' in the romanised script, is written as a different character when used independently as a primary unit. Similarly, conjunct consonants, like 'kka' in the figure, would also result in a different orthographic unit, leading to an increase in possible output units for the original OCR. Moreover, these orthographically similar units can also be confusing to an OCR system.
44
+
45
+ # 2.1 OCR Editing Process
46
+
47
+ The current work is part of an OCR project that aims to digitize hundreds of Sanskrit books present in scanned image format<sup>1</sup>. The dataset is an outcome of a publicly funded project, primarily carried out by researchers at IIT Bombay. The project currently has 103 books in its pipeline. Our dataset consists of books primarily from philosophy, literature, mathematics, medicine and astronomy. The list of books is provided in Table 4 in the appendix.
48
+
49
+ To aid the process of correction of OCR output, we developed an open-source post-OCR editing tool (Maheshwari et al., 2022) that reduces the cognitive and editing load of the users and increases the speed of text correction. The in-house developed tool is used for OCR correction, verification and proofreading. Currently, 14 experts contribute to various stages of the digitisation process. These experts are either linguists,
50
+
51
+ trained specifically in Sanskrit linguistics, or computational linguists, and seven of them are working full time for the project. Each page in the book passes through a three step process, and a separate expert oversees each step. The 3 steps are: 1) Manual correction/post-editing of OCR prediction by looking at the original scanned image, 2) Verification of the corrected text performed in the previous step, and 3) Proofreading of the text to check for obvious errors. Verification is primarily aimed at maintaining fidelity of the corrected text to the scanned lines and proofreading is aimed at ensuring linguistic and semantic correctness of the text.
52
+
53
+ # 3 System Descriptions
54
+
55
+ OCR post-correction is a text correction task which can be formalised as a monotone seq2seq model (Schnober et al., 2016). We use an encoder-decoder framework that takes predictions from an OCR as its input. While we use multiple pretrained seq2seq models as our baselines, none of these has Sanskrit as one of their languages. However, Devanāgari script is employed in other languages, such as Hindi, which are present in these models. Secondly, unicode encoding of Devanāgari often poses several challenges owing to the variable byte length employed per character for encoding. Hence, we losslessly transliterate the text into SLP1, an ASCII-based case-sensitive transliteration scheme in our experiments.
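As an illustration, the Devanāgari-to-SLP1 round trip can be handled by an off-the-shelf transliterator; the sketch below assumes the third-party indic_transliteration package (any lossless scheme converter would do, and the helper names are ours).

```python
# Sketch only: assumes the third-party `indic_transliteration` package.
from indic_transliteration import sanscript
from indic_transliteration.sanscript import transliterate

def to_slp1(text: str) -> str:
    """Losslessly map Devanāgari text to the ASCII-based SLP1 scheme."""
    return transliterate(text, sanscript.DEVANAGARI, sanscript.SLP1)

def to_devanagari(text: str) -> str:
    """Invert the mapping so model outputs can be rendered back in Devanāgari."""
    return transliterate(text, sanscript.SLP1, sanscript.DEVANAGARI)

# Because the mapping is lossless, to_devanagari(to_slp1(s)) == s for well-formed input.
```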
56
+
57
+ Baseline OCR Model : Our baseline OCR model is an OCR engine that uses the Tesseract OCR (Smith, 2007). The model is fine-tuned upon 20,000 synthetically-created images with the Sanskrit language flag. We release our OCR test set along with the post-OCR correction dataset.
58
+
59
+ CopyNet (Gu et al., 2016): uses a copying mechanism in an LSTM-based seq2seq framework to leverage the (partial) overlap between input and output strings. The model consists of 3 LSTM modules stacked on top of each other for both the encoder and decoder. Following Krishna et al. (2018), we use BPE for learning the vocabulary, which has shown the ability to handle corpora with 'rare words' (Sennrich et al., 2016).
60
+
61
+ mBART (Liu et al., 2020) is a multilingual variant of the BART, both of which are seq2seq models. It has an autoregressive decoder and a BERT-based encoder. We used mBART-50 (Tang et al., 2020), specifically its HuggingFace implementation
62
+
63
+ (large), in our experiments, which has been trained on a large monolingual corpus of 50 languages. Here, we use text in its original form as well as in the transliterated SLP1 form.
64
+
65
+ mT5 (Xue et al., 2021) is a multilingual variant of T5 (Raffel et al., 2020), trained on 107 languages. T5 is a seq2seq text generation model, pretrained on a mixture of supervised and unsupervised tasks using a span-corruption objective. In our experiments, we employ the mT5-base model from HuggingFace along with BPE tokenization and use both Devanāgari- and SLP1-encoded text.
66
+
67
+ ByT5 (Xue et al., 2022): Given that we have a corpus consisting mostly of 'rare words' (cf. Section 2), any unseen Sanskrit text will suffer from out-of-vocabulary words. A natural solution is to tokenize words at the character level, where each character is represented by its UTF-8 bytes. ByT5 is a variant of mT5, except that the model operates over a fixed vocabulary of 256 byte values. In our experiments, we use the ByT5-small model from HuggingFace with its byte tokenizer and SLP1-encoded text.
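To make the motivation concrete, the sketch below (standard library only) shows why SLP1 shortens ByT5 inputs: every Devanāgari code point costs three UTF-8 bytes, whereas SLP1 uses one ASCII byte per symbol.

```python
def byt5_sequence_length(text: str) -> int:
    # ByT5 tokenises raw UTF-8 bytes, so the input length is essentially
    # the byte count (special tokens ignored here).
    return len(text.encode("utf-8"))

devanagari = "रामः"   # 4 Devanāgari code points
slp1 = "rAmaH"        # the same word in SLP1

print(byt5_sequence_length(devanagari))  # 12 -- each code point is 3 UTF-8 bytes
print(byt5_sequence_length(slp1))        # 5  -- one byte per ASCII character
```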
68
+
69
+ IndicBART (Dabre et al., 2021): a multilingual BART-based model trained on 11 Indic languages in the Devanāgari script. The model is roughly half the size of the mT5 model.
70
+
71
+ # 4 Experiments and Results
72
+
73
+ In Table 2, we present the macro-averaged Word Error Rate (WER) and Character Error Rate (CER) for each of our baseline systems. The predictions directly from OCR report a CER and WER of $3.89\%$ and $30.23\%$ respectively. In our experiments, CopyNet's predictions worsen as per both our metrics, resulting in a CER and WER of $13.25\%$ and $50.38\%$ respectively. However, all of the pre-trained language model configurations employed for post-OCR correction improve over the original OCR predictions. In general, we find that using the Devanāgari script instead of SLP1 to encode text in Sanskrit results in improved performance for the task. However, with ByT5, we find that our model produces truncated outputs, mostly because its byte-level vocabulary inflates the sequence length. The output from ByT5-Dev has a CER of $6.17\%$ and a WER of $27.72\%$, higher than that of ByT5-SLP1, when a sequence length of 1024 was used. Even though the CER is further reduced to 4.59 (from 6.17) for ByT5-Dev, when its maximum sequence length is reconfigured to 2048, it still does
74
+
75
+ <table><tr><td>Encoding</td><td>Model</td><td>CER</td><td>WER</td></tr><tr><td>Dev</td><td>OCR</td><td>3.89</td><td>30.23</td></tr><tr><td>Dev</td><td>mBART</td><td>3.50 (+10)</td><td>26.11 (+13.7)</td></tr><tr><td>SLP1</td><td>mBART</td><td>3.71 (+4.5)</td><td>26.60 (+12)</td></tr><tr><td>Dev</td><td>IndicBART</td><td>3.55 (+8.7)</td><td>25.73 (+14.9)</td></tr><tr><td>Dev</td><td>CopyNet</td><td>13.25 (-240)</td><td>50.38 (-66)</td></tr><tr><td>SLP1</td><td>mT5</td><td>3.53 (+9.2)</td><td>26.47 (+12.5)</td></tr><tr><td>Dev</td><td>mT5</td><td>3.34 (+14.1)</td><td>25.57 (+15.4)</td></tr><tr><td>SLP1</td><td>ByT5</td><td>2.98 (+23.4)</td><td>23.19 (+23.3)</td></tr></table>
76
+
77
+ Table 2: CER and WER (lower is better) on the post-OCR correction task for different encoding schemes and models. Numbers in brackets correspond to the percentage improvement over the OCR output (top row). All methods are evaluated on Devanāgari text, and all models except ByT5 use a BPE tokenizer. Dev refers to Devanāgari.
78
+
79
+ not outperform ByT5-SLP1 (see Table 3). The use of SLP1 encoding converts Sanskrit text to ASCII sequences, thereby reducing the overall input sequence length. The ByT5 configuration with SLP1-encoded text currently yields the best outcome in our experiments. We discuss the impact of different sequence lengths and the memory overheads of the Dev and SLP1 variants of ByT5 in Appendix A.
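For reference, here is a minimal sketch of how macro-averaged CER and WER can be computed from (reference, hypothesis) sentence pairs using plain Levenshtein distance; the exact normalisation behind the reported numbers may differ.

```python
def levenshtein(ref, hyp):
    """Edit distance between two sequences (lists of words or characters)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def error_rate(refs, hyps, unit):
    """Macro-averaged error rate; pass unit=str.split for WER, unit=list for CER."""
    rates = [levenshtein(unit(r), unit(h)) / max(len(unit(r)), 1)
             for r, h in zip(refs, hyps)]
    return 100.0 * sum(rates) / len(rates)

# wer = error_rate(references, predictions, str.split)
# cer = error_rate(references, predictions, list)
```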
80
+
81
+ We observe three primary sources of errors from the original OCR predictions, namely, word and sentence boundary identification owing to missing or extraneous prediction of space and sentence boundary markers, mispredictions due to mātra or diacritics, and mispredictions arising out of orthographically similar characters. All of these cumulatively contribute to $61.76\%$ of the character-level errors. Boundary detection at the word level and at the sentence level, identified by a space marker or by sentence-terminating punctuation, contributes to $26.96\%$ of the OCR errors. $89.3\%$ of the boundary detection errors arise out of identifying word boundaries. Similarly, errors due to incorrect or missing mātras (diacritics) contribute to $22.41\%$ of all the errors in OCR. In Sanskrit, these mātras are generally secondary vowels following a consonant, a phenomenon common in abugida writing systems ( $\S 2$ ). Mispredictions specifically due to orthographically similar characters contribute to $12.39\%$ of the total errors. With ByT5, the best-performing model we report, we find error reductions of $44.45\%$, $37.74\%$ and $14.16\%$ in boundary identification, mātra prediction
82
+
83
+ and orthographically similar character predictions, respectively.
84
+
85
+ As a further analysis, we collect the most frequent 300 tokens in the corpus, with at least three letters for a word, and find a total of 2875 occurrences in the ground truth corpus. Among these frequent tokens, OCR predicts each of them correctly at least once. Further, to identify possible similar tokens predicted instead of the correct token, we use the Ratcliff pattern recognition algorithm (Black, 2004), with a matching ratio of 0.6. Here, we find that $8.27\%$ of the token occurrences among the most frequent tokens do not have a corresponding prediction that satisfies the matching criteria. With ByT5, this number is reduced by $2.09\%$ . Moreover, in both OCR predictions and ByT5 based predictions, we mostly find unique patterns in mispredictions for each token and are not able to find any consistent or systemic patterns for each token.
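Python's difflib provides a Ratcliff/Obershelp-style similarity, so the matching criterion above can be sketched as follows (the 0.6 threshold is the one quoted in the text; the helper name is ours).

```python
from difflib import SequenceMatcher

def best_match(token, predicted_tokens, threshold=0.6):
    """Return the predicted token most similar to `token`, or None when no
    prediction reaches the Ratcliff/Obershelp-style matching ratio."""
    scored = ((SequenceMatcher(None, token, p).ratio(), p) for p in predicted_tokens)
    ratio, match = max(scored, default=(0.0, None))
    return match if ratio >= threshold else None
```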
86
+
87
+ # 4.1 Experiments on out of domain test dataset
88
+
89
+ Similar to prior works (Rijhwani et al., 2021b; Krishna et al., 2017), we ensure that there is no sentence-level (sequence) overlap between the train, test and validation splits. Though there is an overlap in terms of the books, we ensure that none of the test-data sentences are seen during training. To test the generalizability of our models, we use out-of-domain test data that comes from a completely new book, Brihat-samhita, not included in any of the train-test-validation splits. We release this out-of-domain test dataset as well; it comes from a text that is not part of any of the 30 texts included in our dataset. Figure 3 shows the performance of the OCR, ByT5 and mT5 systems on this dataset. Here, ByT5 is shown to significantly reduce the CER and WER relative to the OCR outputs.
90
+
91
+ # 5 Conclusion
92
+
93
+ We release a dataset consisting of 218,000 sentences from 30 books for Sanskrit post-OCR text correction. We also release a set of strong baselines as a benchmark, which currently shows consistent and significant improvements over the OCR predictions, both on the in-domain and the out-of-domain test data. All our baselines, in spite of not seeing Sanskrit during pretraining, have been shown to generalise well for the task. While
94
+
95
+ ![](images/533901bbd2d939d74e1b47a93f2c63bc403f46b22b91be58263a52e3a2f8769a.jpg)
96
+ Figure 3: Comparison of CER with different word lengths on an out-of-corpus test set of 500 sentences.
97
+
98
+ using Devanāgari Unicode encoding has been shown to perform better than SLP1 for multiple baselines, the SLP1-based encoding with ByT5 gives the best performance overall.
99
+
100
+ # 6 Limitations
101
+
102
+ A major limitation with the current baselines is the mispredictions happening at the word level. Here, of the words mispredicted by ByT5-SLP1, our best-performing model, $71.17\%$ are not even valid words in Sanskrit. None of our pretrained models is currently lexically or morphologically aware, resulting in the formation of invalid words in the language. Moreover, owing to the low-resource nature of the language, none of the pretrained language models we employed used Sanskrit for their pretraining. An immediate challenge with the outputs of our post-OCR text correction would be the use of these predictions for downstream tasks, which are heavily reliant on rule-based morphological analysis of these words (Krishna et al., 2021). We plan to incorporate morphologically aware self-training approaches and dynamic markup decoding (De Cao et al., 2021) which can incorporate various valid inflected forms of a stem in a trie to handle such scenarios.
103
+
104
+ # 7 Acknowledgements
105
+
106
+ We thank anonymous reviewers for providing constructive feedback. Ayush Maheshwari is supported by a Fellowship from Ekal Foundation (www.ekal.org). Ganesh Ramakrishnan is grateful to NLTM OCR Bhashini project as well as the IIT Bombay Institute Chair Professorship for their support and sponsorship.
107
+
108
+ # References
109
+
110
+ Devaraja Adiga, Rishabh Kumar, Amrith Krishna, Preethi Jyothi, Ganesh Ramakrishnan, and Pawan Goyal. 2021. Automatic speech recognition in sanskrit: A new speech corpus and modelling insights. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 5039-5050.
111
+ Youssef Bassil and Mohammad Alwani. 2012. Ocr context-sensitive error correction based on google web 1t 5-gram data set. arXiv preprint arXiv:1204.0188.
112
+ Paul E Black. 2004. Ratcliff/Obershelp pattern recognition. Dictionary of Algorithms and Data Structures, 17.
113
+ Andrew Carlson and Ian Fette. 2007. Memory-based context-sensitive spelling correction at web scale. In Sixth International Conference on Machine Learning and Applications (ICMLA 2007), pages 166-171. IEEE.
114
+ Raj Dabre, Himani Shrotriya, Anoop Kunchukuttan, Ratish Puduppully, Mitesh M Khapra, and Pratyush Kumar. 2021. IndicBART: A pre-trained model for natural language generation of Indic languages. arXiv preprint arXiv:2109.02903.
115
+ Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021. Autoregressive entity retrieval. In International Conference on Learning Representations.
116
+ Pawan Goyal, Gerard Huet, Amba Kulkarni, Peter Scharf, and Ralph Bunker. 2012. A distributed platform for Sanskrit processing. In Proceedings of COLING 2012, pages 1011-1028, Mumbai, India. The COLING 2012 Organizing Committee.
117
+ Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631-1640.
118
+ Oliver Hellwig. 2010-2016. DCS - The Digital Corpus of Sanskrit. Berlin.
119
+ Amrith Krishna, Bodhisattwa P Majumder, Rajesh Bhat, and Pawan Goyal. 2018. Upcycle your OCR: Reusing OCRs for post-OCR text correction in romanised Sanskrit. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 345-355.
120
+ Amrith Krishna, Bishal Santra, Ashim Gupta, Pavankumar Satuluri, and Pawan Goyal. 2021. A graph-based framework for structured prediction tasks in Sanskrit. Computational Linguistics, 46(4):785-845.
121
+
122
+ Amrith Krishna, Pavan Kumar Satuluri, and Pawan Goyal. 2017. A dataset for Sanskrit word segmentation. In Proceedings of the Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, pages 105-114, Vancouver, Canada. Association for Computational Linguistics.
123
+ Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742.
124
+ Ayush Maheshwari, Ajay Ravindran, Venkatapathy Subramanian, Akshay Jalan, and Ganesh Ramakrishnan. 2022. Udaan-machine learning based post-editing tool for document translation. arXiv preprint arXiv:2203.01644.
125
+ Monier Monier-Williams, Ernst Leumann, and Carl Cappeller. 1899. A sanskrit-english dictionary: etymologically and philologically arranged with special reference to cognate indo-european languages.
126
+ Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1-67.
127
+ Shruti Rijhwani, Daisy Rosenblum, Antonios Anastasopoulos, and Graham Neubig. 2021a. Lexically aware semi-supervised learning for OCR post-correction. Transactions of the Association for Computational Linguistics, 9:1285-1302.
128
+ Shruti Rijhwani, Daisy Rosenblum, Antonios Anastasopoulos, and Graham Neubig. 2021b. Lexically aware semi-supervised learning for OCR post-correction. Transactions of the Association for Computational Linguistics, 9:1285-1302.
129
+ Rohit Saluja, Ayush Maheshwari, Ganesh Ramakrishnan, Parag Chaudhuri, and Mark Carman. 2019. OCR on-the-go: Robust end-to-end systems for reading license plates & street signs. In 2019 International Conference on Document Analysis and Recognition (ICDAR), pages 154-159. IEEE.
130
+ Carsten Schnober, Steffen Eger, Erik-Lan Do Dinh, and Iryna Gurevych. 2016. Still not there? comparing traditional sequence-to-sequence models to encoder-decoder neural networks on monotone string translation tasks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1703-1714, Osaka, Japan. The COLING 2016 Organizing Committee.
131
+ Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational
132
+
133
+ Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
134
+
135
+ Ray Smith. 2007. An overview of the Tesseract OCR engine. In Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), volume 2, pages 629-633. IEEE.
136
+
137
+ Bombay (India: State). 1896. Gazetteer of the Bombay Presidency. v. 1, pt. 1. Printed at the Government Central Press.
138
+
139
+ Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning. arXiv preprint arXiv:2008.00401.
140
+
141
+ Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. 2022. ByT5: Towards a token-free future with pre-trained byte-to-byte models. Transactions of the Association for Computational Linguistics, 10:291-306.
142
+
143
+ Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In NAACL-HLT.
144
+
145
+ <table><tr><td>Max token</td><td colspan="2">2048</td><td colspan="2">1024</td></tr><tr><td>Model</td><td>CER</td><td>WER</td><td>CER</td><td>WER</td></tr><tr><td>OCR</td><td>4.02</td><td>30.7</td><td>4.67</td><td>30.02</td></tr><tr><td>ByT5-Dev</td><td>4.59</td><td>26.6</td><td>13.12</td><td>38.6</td></tr><tr><td>ByT5-SLP1</td><td>3.05</td><td>23.5</td><td>3.49</td><td>25.3</td></tr></table>
146
+
147
+ Table 3: ByT5-Dev and ByT5-SLP1 models trained with different maximum token lengths. A max token size of 2048 is equivalent to 135 characters, while a max token size of 1024 is equivalent to 56 characters. We truncated the maximum character length in the test set corresponding to each experiment.
148
+
149
+ # Appendix
150
+
151
+ # A Impact of different sequence lengths
152
+
153
+ ByT5 splits each character into bytes. Since the Unicode encoding of Devanāgari characters typically has a higher byte length, a fixed-length ByT5 input covers fewer characters, reducing the contextual information available for longer sentences. We present the corresponding results in Table 3.
154
+
155
+ # B List of Books
156
+
157
+ We present the list of books used in our dataset in Table 4.
158
+
159
+ <table><tr><td>Book Name</td><td>Genre</td></tr><tr><td>Uttararamacharita by Bhavabhuti - commentary by Veeraraghava</td><td>Arts</td></tr><tr><td>Grahalaghava of Ganesh Daivajna</td><td>Astronomy</td></tr><tr><td>Suryasiddhanta of Ranganatha</td><td>Astronomy</td></tr><tr><td>Mahabhaskariyam</td><td>Astronomy</td></tr><tr><td>Aryabhatiya Bhashya of Gargyakerala Nilakantha Samabasiva Sastri K. Vol 1</td><td>Astronomy</td></tr><tr><td>Aryabhatiya Bhashya of Gargyakerala Nilakantha Samabasiva Sastri K. Vol 2</td><td>Astronomy</td></tr><tr><td>Aryabhatiya Bhashya of Gargyakerala Nilakantha Samabasiva Sastri K. Vol 3</td><td>Astronomy</td></tr><tr><td>Karana-Kutuhalam</td><td>Astronomy</td></tr><tr><td>LaghuManasa</td><td>Astronomy</td></tr><tr><td>Aryabhatiya commentary by Suryadeva Yajvan</td><td>Astronomy</td></tr><tr><td>Khandakhadyaka</td><td>Astronomy</td></tr><tr><td>Ganak Tarangini</td><td>Astronomy</td></tr><tr><td>Bijaganita with Navankjura-Apte</td><td>Mathematics</td></tr><tr><td>Bijaganitavatamksa of Narayan Shukla</td><td>Mathematics</td></tr><tr><td>Bijaganita with Bijankura</td><td>Mathematics</td></tr><tr><td>Ganitakaumudi of Narayanapandita (vol. 1)</td><td>Mathematics</td></tr><tr><td>Rekhaganita of Jagannatha Vol. 2</td><td>Mathematics</td></tr><tr><td>Bijaganita by Tr Abhyankar</td><td>Mathematics</td></tr><tr><td>Laghubhaskariyam Part 2</td><td>Mathematics</td></tr><tr><td>Rekhaganita of Jagannatha Vol. 1</td><td>Mathematics</td></tr><tr><td>Lilavati with kriyakramakri</td><td>Mathematics</td></tr><tr><td>Patiganita of Sridhara</td><td>Mathematics</td></tr><tr><td>Brahmasphutasiddhanta of Brahmagupta</td><td>Mathematics</td></tr><tr><td>Laghubhaskariyam</td><td>Mathematics</td></tr><tr><td>Hathayogapradipika by Svatmarama</td><td>Medicine</td></tr><tr><td>Mimamsanyayaprakashia by Aapdeva</td><td>Philosophy</td></tr><tr><td>Shastradipika by Parthasarthi</td><td>Philosophy</td></tr><tr><td>Shabdashaktiprakashika by Jagdishtarkalankara</td><td>Philosophy</td></tr><tr><td>Prakaranapanchika-Shalikanatha</td><td>Philosophy</td></tr></table>
160
+
161
+ Table 4: List of books used in the experiments.
abenchmarkanddatasetforpostocrtextcorrectioninsanskrit/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:027fb12260b4973ac449095e279be2a5a933c6aa4c46a189bc0c154de5071fe2
3
+ size 365586
abenchmarkanddatasetforpostocrtextcorrectioninsanskrit/layout.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:637c65ac8f0285cd51c9cd3cd027842cf260f721eac90dcb8c7013cdbb7844fb
3
+ size 183672
acceleratinglearnedsparseindexesviatermimpactdecomposition/46d9e91e-f335-4078-b0da-6f0f03b5aeea_content_list.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7a51fef2cdaf2cb826ea763ff02c5e0486fc97edc87349c2b48c9c72155cc675
3
+ size 88988
acceleratinglearnedsparseindexesviatermimpactdecomposition/46d9e91e-f335-4078-b0da-6f0f03b5aeea_model.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8469c24f476f94cfb4115953d55a472018ba69db2daaceab816a1e7b0c70af6f
3
+ size 107620
acceleratinglearnedsparseindexesviatermimpactdecomposition/46d9e91e-f335-4078-b0da-6f0f03b5aeea_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:77df39efd51c467d488afa0a78f82a41167662e674d033b713c3cceb0d80f48f
3
+ size 814135
acceleratinglearnedsparseindexesviatermimpactdecomposition/full.md ADDED
@@ -0,0 +1,361 @@
1
+ # Accelerating Learned Sparse Indexes Via Term Impact Decomposition
2
+
3
+ # Joel Mackenzie
4
+
5
+ University of Queensland, Australia
6
+
7
+ joel.mackenzie@uq.edu.au
8
+
9
+ # Alistair Moffat
10
+
11
+ University of Melbourne, Australia
12
+
13
+ ammoffat@unimelb.edu.au
14
+
15
+ # Antonio Mallia
+
+ Amazon Alexa, Italy
+
+ malliaam@amazon.com
+
+ # Matthias Petri
+
+ Amazon Alexa, USA
+
+ mkp@amazon.com
+
+ # Abstract
16
+
17
+ Novel inverted index-based learned sparse ranking models provide more effective, but less efficient, retrieval performance compared to traditional ranking models like BM25. In this paper, we introduce a technique we call postings clipping to improve the query efficiency of learned representations. Our technique amplifies the benefit of dynamic pruning query processing techniques by accounting for changes in term importance distributions of learned ranking models. The new clipping mechanism accelerates top- $k$ retrieval by up to $9.6 \times$ without any loss in effectiveness.
18
+
19
+ # 1 Introduction
20
+
21
+ Sparse term importance representations such as DeepImpact (Mallia et al., 2021) and uniCOIL (Gao et al., 2021; Lin and Ma, 2021) have enabled the use of effective transformer-based text representations that can match the effectiveness of recent dense text representations (Karpukhin et al., 2020; Qu et al., 2021) while still being supported by inverted indexes and their query operations. This is of importance as inverted indexes have been optimized to provide search functionality in distributed settings at web-scale through 40 years of research, providing a variety of time, space and retrieval quality tradeoffs; while also supporting efficient updates, advanced querying modes such as phrase matching or filtering, and good scalability, all of which are crucial in real-world settings (Risvik et al., 2013; Tonellotto et al., 2018).
22
+
23
+ One of the key techniques that enables efficient top- $k$ query processing in inverted indexes is storing additional metadata about index term importance scores (also referred to as impacts), seeking to facilitate the bypassing of the majority of the matching documents, and thus allowing faster retrieval than would be possible via exhaustive disjunctive processing. For example, dynamic pruning algorithms such as MaxScore (Turtle and
24
+
25
36
+
37
+ ![](images/5dd29ff0f78cf3c68647f0d2c0d991cdc22606214991cde5ce13de84c4c8ced0.jpg)
38
+ Figure 1: Normalized maximum list impact distribution stratified by list-length buckets, where bucket $b$ covers list lengths in $[2^b, 2^{b+1})$. Classical schemes such as BM25 and DocT5Query exhibit small maximum impacts for long lists, whereas DeepImpact and uniCOIL assign high importance to terms regardless of their document frequency. Note the irregular scale on the horizontal axis.
39
+
40
+ Flood, 1995) and WAND (Broder et al., 2003) store the index-wide maximum impact of each term; at query-time, these impacts can be used to rapidly estimate document scores, allowing documents that have no prospect of entering the current min-heap of $k$ results to be bypassed.
41
+
42
+ Traditional similarity models such as BM25 guarantee that frequent terms have low importance scores, a symbiotic relationship that allows fast query processing. On the other hand, recent transformer-based learned term importance techniques such as DeepImpact are not constrained by term occurrence frequency when assigning importance scores to terms in documents. For example, consider the term "does" in an English corpus. Traditional models such as BM25 would assign a low impact for this term in all documents, since it occurs so frequently; a direct effect of inverse document frequency (IDF). On the other hand, learned
43
+
44
+ <table><tr><td>Impact</td><td>MSMARCO-v1 Passage</td></tr><tr><td>0.91</td><td>Does are the females in the deer family of mammals, individually called a doe (pronounced doe as in toe). The plural of doe can also be doe. Does is also the word meaning the present tense of the verb do. Pronounced duz as opposed to the pronunciation for the female deer.</td></tr><tr><td>0.71</td><td>You spelled it right in the question: does. The word does (performs action) is a verb, and the plural of the noun doe (female deer). Many people mix up two similar words: dose and does. Dose is a noun and is the amount of medication prescribed. Does is a verb, a form of to do.</td></tr><tr><td>0.59</td><td>Job Seekers The District of Columbia Department of Employment Services (DOES) was created to develop Jobs for People and People for Jobs. DOES provides job seekers with a number of employment opportunities through its American Job Centers.</td></tr><tr><td>0.50</td><td>Take your medication exactly as prescribed. Taking higher does of Benzedrine may cause a change in a person's sex drive, allergic reactions, chills, depression, irritability or mood swings, and problems with the digestive system.</td></tr><tr><td>0.02</td><td>when my dog passes wind its the worst smell ever. does anyone know how to stop it smelling so bad. Add your answer.</td></tr></table>
45
+
46
+ Table 1: Sample MSMARCO-v1 passages and the normalized impact assigned by DeepImpact to the word "does", which occurs in $61\%$ of all passages. High impact scores can be assigned to common terms legitimately (based on the context of the term), or may be caused by misspellings, acronyms, or homonyms.
47
+
48
+ models are trained to exploit the contextual information of a passage to assign term impacts, resulting in high importance scores for even the most common terms. Table 1 demonstrates this behavior for DeepImpact, where passages extracted from MSMARCO-v1 contain normalized impact scores of widely varying magnitudes. While DeepImpact assigns impacts from 0.91 (very important) all the way down to 0.02 (not important) for the term "does", the equivalent maximum impact observed over our two BM25-based indexes (see Section 4) are 0.21 (BM25) and 0.004 (DocT5Query).
49
+
50
+ Figure 1 further highlights the pervasive nature of this issue. Learned representations such as DeepImpact or uniCOIL assign high term importance to even very frequent terms (such as "does"), whereas BM25 always assigns low importance to such terms. This divergent behavior substantially reduces the ability of MaxScore and WAND to bypass low-impact documents during querying, with both techniques relying on maximum list-wise impact scores to prune the search space.
51
+
52
+ Contribution We adapt the MaxScore and WAND dynamic pruning mechanisms to enable efficient query processing for learned term importance schemes such as DeepImpact via a simple technique we call term impact decomposition. We describe partitioning schemes that separate the postings for each term into two groups - those that are high-impact, and are likely to result in documents being scored; and those that are low-impact, and more likely to be associated with bypassed documents; and present a new form of impact decomposition
53
+
54
+ that we call postings clipping. When integrated into the retrieval engine, impact decomposition allows almost $10 \times$ faster top- $k$ term-based querying, with negligible increases to index storage costs, and no effect on result quality.
55
+
56
+ # 2 Background
57
+
58
+ Term-Based Similarity Many retrieval similarity formulations can be expressed as a sum over per-document query-term impacts, computed for document $d$ and query $Q$ as
59
+
60
+ $$
61
+ S(Q, d) = C(d) + \sum_{t \in Q} w_{t, d} \tag{1}
62
+ $$
63
+
64
+ where $C(d)$ is a static score component associated with document $d$ ; and $w_{t,d}$ is the importance, or impact, of term $t$ in document $d$ (see, for example, Zobel and Moffat, 2006). The values of $w_{t,d}$ might be pre-computed at indexing time and stored in the inverted index in quantized form; or might be computed via a function $F()$ from raw index statistics such as term frequency and document length.
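A minimal sketch of Eqn. 1 over a pre-computed impact index; the dictionary-of-dictionaries layout is illustrative only, standing in for the quantized postings described above.

```python
def score(query, doc, impacts, static_scores=None):
    """Eqn. 1: S(Q, d) = C(d) + sum over query terms of w_{t,d}.

    `impacts[t][d]` holds the (possibly quantized) impact of term t in document d,
    and `static_scores[d]` supplies C(d); both structures are illustrative.
    """
    c_d = static_scores.get(doc, 0.0) if static_scores else 0.0
    return c_d + sum(impacts.get(t, {}).get(doc, 0.0) for t in query)
```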
65
+
66
+ Learned Sparse Models The recent development of pre-trained contextualized language models (LMs) has resulted in impressive benefits in search effectiveness, albeit with higher retrieval cost than traditional lexical models (MacAvaney et al., 2019; Pradeep et al., 2021; Khattab and Zaharia, 2020). This has motivated recent work on making transformer-based ranking more efficient (Cohen et al., 2022; Karpukhin et al., 2020; Zhan et al., 2021).
67
+
68
+ Different solutions have been proposed to address this performance bottleneck, including the
69
+
70
+ application of approximate nearest neighbor search on dense representations (see, for example, Izacard et al., 2020; Zhan et al., 2022; Yamada et al., 2021). Another approach is to apply LMs to improve the effectiveness of inverted index-based "sparse" representations.
71
+
72
+ Document expansion is one such innovation. It uses LMs to predict expansion terms to add to each document in an effort to address the vocabulary mismatch problem (Zhao, 2012), while still applying traditional scoring regimes like BM25. Currently, DocT5Query (Nogueira and Lin, 2019) and TILDE (Zhuang and Zuccon, 2021b,a) are the most effective expansion methods.
73
+
74
+ LMs can also be used to learn term importance directly. Early approaches such as DeepCT (Dai and Callan, 2019) learned an updated term frequency value which could be plugged into the existing ranking model. Other more effective approaches such as DeepImpact (Mallia et al., 2021), uniCOIL (Lin and Ma, 2021; Ma et al., 2022), TILDE (Zhuang and Zuccon, 2021a),<sup>1</sup> and SPLADE (Formal et al., 2021b,a) have been devised which predict the impact of each term within each document (that is, they learn the value of $w_{t,d}$ in Eqn. 1). These models are all tuned to optimize the downstream retrieval task, but differ in their vocabulary structures, document expansion techniques, and query expansion strategies. For example, DeepImpact first expands the documents in the collection via DocT5Query, and then directly estimates a single impact for each token in each document. The model is trained by directly optimizing the sum of the query term impacts to maximize the score difference between relevant and non-relevant documents for a query. Similar in spirit, uniCOIL also performs weighting on the query terms, such that document ranking becomes a weighted sum over term impacts.
75
+
76
+ We focus on DeepImpact and uniCOIL as effective learned representations, but our methods are also applicable to other learned sparse techniques.
77
+
78
+ Indexing An inverted index stores one postings list $\mathcal{I}_t$ for each distinct term $t$ in the given text collection, with each postings list containing a sequence of postings of the form $\mathcal{I}_{t,i} = \langle d_{t,i},w_{t,i}\rangle$ , where $d_{t,i}$ is the document number of the $i$ th document containing $t$ , and $w_{t,i}$ can be taken to be the corresponding impact score (see Eqn. 1). These lists are normally stored in increasing document
79
+
80
+ ![](images/41219b89d7cba4aef275266e155d8eed6b6e4b1c7dc9cd846e1d5af6e68f9537.jpg)
81
+
82
+ ![](images/9c589d64a082b3f5f4c18298707defd22ef445cd1fad605a7479540b8c4de70d.jpg)
83
+
84
+ Figure 2: Dynamic pruning on a two-term query (term $A$ , top left, and term $B$ , top right) for top $k = 2$ retrieval. At the start of processing, the heap threshold $\theta$ is $-\infty$ . After processing the three documents shown in green (bottom) we have $\theta > U_A$ , and documents that contain only term $A$ can be bypassed from now on.
85
+ ![](images/3cd48e06564f84d0744274493cb23294037191ad9f0fdeef875275f2713e665c.jpg)
86
87
+
88
+ order, and compressed using integer compression techniques, see Zobel and Moffat (2006) and Pibiri and Venturini (2021) for examples and further explanation.
89
+
90
+ Querying To retrieve the top- $k$ highest scoring documents for a bag-of-terms query $Q$ consisting of $q = |Q|$ query terms, a document-at-a-time processing regime is often used (Tonellotto et al., 2018). All $q$ postings lists are open concurrently, each with a local cursor to step through the postings. Each document $d$ that is encountered is fully scored at that time, and a min-heap maintained of the $k$ highest-scoring documents encountered so far. Once all $q$ postings lists are exhausted, the $k$ documents in the heap can either be directly presented to the user or passed to another processing phase for a more sophisticated similarity computation that re-ranks that initial answer set.
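The exhaustive document-at-a-time loop just described can be sketched as below (postings are docid-sorted (docid, impact) lists; this is illustrative Python, not the engine used in any reported experiments).

```python
import heapq

def daat_topk(postings_lists, k):
    """Exhaustive document-at-a-time top-k over docid-sorted (docid, impact) lists."""
    cursors = [0] * len(postings_lists)
    heap = []  # min-heap of (score, docid): the best k documents seen so far
    while True:
        heads = [pl[c][0] for pl, c in zip(postings_lists, cursors) if c < len(pl)]
        if not heads:
            break
        d = min(heads)  # next document to score
        score = 0.0
        for i, pl in enumerate(postings_lists):
            c = cursors[i]
            if c < len(pl) and pl[c][0] == d:
                score += pl[c][1]   # accumulate this term's impact
                cursors[i] += 1     # advance the term's cursor past d
        if len(heap) < k:
            heapq.heappush(heap, (score, d))
        elif score > heap[0][0]:
            heapq.heapreplace(heap, (score, d))
    return sorted(heap, reverse=True)  # (score, docid) pairs, best first
```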
91
+
92
+ MaxScore Algorithm 1 provides details of document-at-a-time query processing, and also introduces the MaxScore dynamic pruning mechanism of Turtle and Flood (1995), structured in a manner that allows the development we propose in Section 3. In this description, the $U_{t}$ per-term upper bounds are a static attribute of the collection, established at indexing time, and $\mathcal{I}_{t,c[t]}$ is the "current" posting for term $t$, indicated by the cursor $c[t]$, with each index list $\mathcal{I}_t$ ordered by increasing document number.
93
+
94
+ Figure 2 helps explain the pseudo-code. In the diagram, a two-term query is being processed,
95
+
96
+ Algorithm 1 Standard MaxScore. Input is a set of $q$ postings lists $\mathcal{I}_t$ , with $\mathcal{I}_{t,i} = \langle d, w \rangle$ the docnum and impact score of the $i$ th posting for the $t$ th term; and a vector $U_t = \max_i \{\mathcal{I}_{t,i}.w\}$ , the maximum impact for the $t$ th term.
97
+
98
+ 1: active $\leftarrow \{0\dots q - 1\}$ // active terms
99
+ 2: passive $\leftarrow \{\}$ // passive terms
100
+ 3: sum_pass $\leftarrow 0$ // sum of passive $U_{t}$ 's
101
+ 4: heap $\leftarrow \{\}$ // heap of "best so far"
102
+ 5: $c[t] \gets 0$ for $0 \leq t < q$ // cursors
103
+ 6: $\theta \gets -\infty$ // heap threshold
104
+ 7: while active postings remain do
105
+ 8: // select next document, match all cursors
106
+ 9: $d \gets \min \{\mathcal{I}_{t,c[t]}.d \mid t \in active\}$
107
+ 10: for $t \in passive$ do
108
+ 11: $c[t] \gets \text{SeekGEQ}(\mathcal{I}_t, d)$
109
+ 12: // score document
110
+ 13: score_d $\leftarrow \sum \{\mathcal{I}_{t,c[t]}.w \mid \mathcal{I}_{t,c[t]}.d = d\}$
111
+ 14: // advance cursors
112
+ 15: for $t \in active$ do
113
+ 16: if $\mathcal{I}_{t,c[t]}.d = d$ then
114
+ 17: $c[t] \gets c[t] + 1$
115
+ 18: // check against heap, update if needed
116
+ 19: if score_d > $\theta$ then
117
+ 20: heap $\leftarrow$ heap $\cup \{\langle d, score_d\rangle\}$
118
+ 21: if |heap| > k then
119
+ 22: eject the least weight $\langle d, score_d\rangle$
120
+ 23: heap item and update $\theta$
121
+ 24: // try to expand passive set
122
+ 25: $y \gets \operatorname{argmax}_t \{|I_t| \mid t \in active\}$
123
+ 26: if sum_pass + $U_y < \theta$ then
124
+ 27: // toggle term $y$ from active to passive
125
+ 28: active $\leftarrow$ active - $\{y\}$
126
+ 29: passive $\leftarrow$ passive $\cup \{y\}$
127
+ 30: sum_pass $\leftarrow$ sum_pass + $U_y$
128
+
129
+ consisting of postings for term $A$ (top left) and term $B$ (top right), and seeking the highest-scoring $k = 2$ documents. The index also records $U_A$ and $U_B$ , the maximum impact contributions of $A$ and $B$ across the collection. Once the first three documents in the union set of $A$ and $B$ have been scored, the $k$ th largest-known document score – denoted by $\theta$ – is greater than $U_A$ . After that point no further documents that contain term $A$ alone need be considered; all candidates for scoring must contain $B$ . In terms of Algorithm 1, term $A$ is thus permanently moved from the active set to the passive set (steps 25–30) to record this change of status.
130
+
131
+ Algorithm 1 includes a number of subtleties.
132
+
133
+ The ordering assumed at step 25 is constant, and computed once upon query commencement, rather than at each loop iteration. As well, steps 25 to 30, shown as executing after every document has been scored, can be carried out infrequently without affecting the correctness of the top- $k$ result set. For example, they might trigger only every 100 or 1000 iterations of the main while loop at step 7.
134
+
135
+ The key invariant in Algorithm 1 is that contributions from passive terms alone cannot yield a document score large enough to make it into the current top $k$ answer set. That means that postings that appear only in passive postings lists can be bypassed, achieved at step 11 by function SeekGEQ $(\mathcal{I}_t,d)$ which advances the cursor $c[t]$ until a document number $\geq d$ is found in $\mathcal{I}_t$ . Processing terminates when all of the postings associated with the active terms have been consumed. At that time, the required top- $k$ documents are all in the heap.
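One possible realisation of SeekGEQ is a binary search over the docid-sorted list, sketched below; practical systems typically use skip pointers or galloping search over compressed blocks instead.

```python
from bisect import bisect_left

def seek_geq(docids, cursor, d):
    """Advance `cursor` to the first position in the sorted `docids` list whose
    document number is >= d; returns len(docids) when the list is exhausted."""
    return bisect_left(docids, d, lo=cursor)

# Example: seek_geq([2, 5, 9, 14], cursor=1, d=8) == 2 (the posting for docid 9).
```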
136
+
137
+ WAND The WAND dynamic pruning mechanism (Broder et al., 2003) makes use of similar logic. But instead of labeling entire terms as being active or passive, it constantly rearranges the list cursors according to their next documents, in effect treating individual postings as being passive or active. That means that it can be more flexible in determining which postings combinations might yield scores greater than $\theta$ , and hence is more discerning in terms of which documents need scoring. Those gains must be offset against the additional cost of maintaining the list cursors in sorted order. Petri et al. (2013) give pseudo-code for WAND pruning.
138
+
139
+ Block-Max WAND Even more precise control over which documents need to be fully scored is achieved if localized upper bounds (denoted $U_{t,b}$) are used as well as whole-of-list $U_{t}$ values. In the Block-Max WAND (BMW) and Variable Block-Max WAND (VBMW) approaches there are multiple $U_{t,b}$ values stored for each postings list, each of which provides a localized maximum impact bound for a block of contiguous postings (Ding and Suel, 2011; Mallia et al., 2017; Mallia and Porciani, 2019). During querying, global $U_{t}$ values are used to select a candidate document, and the $U_{t,b}$ values are then used to refine the score estimate before checking whether the document should be scored or bypassed. Thus, storing these additional bounds allows more documents to be bypassed, albeit with increased processing required to handle the complex decision logic that arises, and the additional space costs required to store the localized bounds.
142
+
143
+ High-Impact List Segments and Priming Several authors have proposed explicitly or implicitly splitting postings lists into two (or more) parts, a high-impact segment $\mathcal{H}(t)$ and a low-impact segment $\mathcal{L}(t)$ to facilitate efficient processing; see, for example, Strohman and Croft (2007), Ding and Suel (2011), Daoud et al. (2016), Daoud et al. (2017), Kane and Tompa (2018) and Mackenzie et al. (2022a).
144
+
145
+ Another technique known as priming (Kane and Tompa, 2018; Petri et al., 2019) improves query performance by estimating lower bounds on the final heap threshold $\theta$ : if the value of the $k$ th highest impact (or a value for the $k' > k$ th highest impact) for any of the $q$ query terms is known, then the heap threshold $\theta$ can be initialized to the largest of those (up to) $q$ values – it is certain that there will be $k$ or more documents in the collection that score more highly than that value, even in the absence of any term overlaps. Moreover, if those $k'$ high impact postings are maintained as a separate postings list, then the $q$ high-impact list segments can be resolved against each other before any low-impact postings are considered, and might further lift the value of $\theta$ used when the $q$ low-impact postings lists are employed to finalize the query.
146
+
147
+ # 3 Impact Decomposition
148
+
149
+ This section introduces the notion of postings list splitting, and shows how it can be combined with both MaxScore and WAND variants. We then introduce a new technique, postings clipping, which replicates the high-impact postings rather than separating them from the low-impact postings. It has the benefit of allowing more precise score estimation, and hence faster pruned querying.
150
+
151
+ List Splitting Ding and Suel (2011), and later Daoud et al. (2016) and Kane and Tompa (2018), note that each postings list $\mathcal{I}_t$ can be split into two parts, denoted here as $\mathcal{H}(t)$ and $\mathcal{L}(t)$ , with $\mathcal{H}(t)$ containing the postings with the highest impacts for $t$ , and $\mathcal{L}(t)$ containing all the remaining ones. Since $\mathcal{H}(t)$ and $\mathcal{L}(t)$ are disjoint, query processing algorithms can treat them as independent terms.
152
+
153
+ The top part of Figure 3 illustrates list splitting. The complete set of postings for some term $t$ (left) is reduced from (in the example) 21 postings to 17 postings to form $\mathcal{L}(t)$ , with the other 4 postings assigned to $\mathcal{H}(t)$ . Each of $\mathcal{L}(t)$ and $\mathcal{H}(t)$ then
154
+
155
+ ![](images/cfe8086172ffefc1404c9e996c1a68626bdd6073613d26e55c7880c900d1de0b.jpg)
156
+
157
+ ![](images/de7df29c03f4ae884f2fd537b556d69ef5ad4ad159dc22879933265c7e9a3702.jpg)
158
+ Figure 3: Two types of impact decomposition. List splitting involves moving high-impact postings into a separate postings list, $\mathcal{H}(t)$ ; whereas postings clipping involves trimming the impact scores in the low-impact list, and creating new postings in a separate list $\mathcal{H}(t)$ to account for the remainder.
159
+
160
+ receives its own upper bound (middle and right, $U_{\mathcal{L}(t)}$ and $U_{\mathcal{H}(t)}$ respectively), with $U_{\mathcal{H}(t)} > U_{\mathcal{L}(t)}$ .
161
+
162
+ A number of splitting rules can be considered. For example, a fixed fraction of the original list might be taken; or the split could be based on local or global threshold scores. In this work, we take a fixed fraction, set at $1/64$ (based on preliminary experimentation) and respecting quantized impact levels, so that $|\mathcal{H}(t)|$ is maximized subject to $|\mathcal{H}(t)| \leq |\mathcal{I}_t|/64$ , and also subject to the smallest impact in $\mathcal{H}(t)$ being greater than $U_{\mathcal{L}(t)}$ . We also only apply splitting to lists with more than 256 postings, as short lists are always handled quickly.
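+
+ As an illustration of this rule, the following sketch (our own, with illustrative names and a simple tie-handling convention) computes a split of one postings list at indexing time:
+
+ ```python
+ def split_list(postings, fraction=64, min_len=256):
+     """postings: (docnum, impact) pairs for one term, sorted by docnum."""
+     if len(postings) <= min_len:
+         return postings, []                 # short lists are never split
+     budget = len(postings) // fraction      # |H(t)| <= |I_t| / 64
+     impacts = sorted((w for _, w in postings), reverse=True)
+     cut = budget
+     # Respect quantized impact levels: shrink the cut until the smallest
+     # impact admitted to H(t) strictly exceeds the largest one left in L(t).
+     while cut > 0 and impacts[cut - 1] == impacts[cut]:
+         cut -= 1
+     if cut == 0:
+         return postings, []                 # no clean split is possible
+     threshold = impacts[cut - 1]            # the minimum impact in H(t)
+     H = [(d, w) for d, w in postings if w >= threshold]
+     L = [(d, w) for d, w in postings if w < threshold]
+     return L, H
+ ```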
163
+
164
+ Where the impact score distribution of the postings is skewed and has a long tail, splitting results in reduced variance inside each part. The maximum term importance $U_{t}$ stored for any list $\mathcal{I}_t$ is intended to approximate the distribution of the impacts of the postings in that list; and hence storing two upper bounds, one for $\mathcal{H}(t)$ and one for $\mathcal{L}(t)$ , allows a better approximation to the underlying distribution. Note also that list splitting is performed at indexing time and results in only a modest increase in index size. At query time, each term is mapped to (one or) two postings lists, with at most twice as many cursors to maintain, but the same total number of postings to be processed.
165
+
166
+ MaxScore, WAND, and BMW Our first observation is simply that MaxScore should be implemented so that the static ordering over terms assumed at step 25 of Algorithm 1 is by decreasing list length, rather than by the more usual increasing $U_{t}$, respecting the separation of these concepts that was noted above (that is, IDF is not obeyed by learned sparse models). The MaxScore pseudo-code presented earlier already shows this adaptation.
169
+
170
+ There are then a number of ways of proceeding when list splitting is considered. The simplest option is to ignore any knowledge of the list pairings, and allow a $q$-term query to be processed in the standard document-at-a-time manner over as many as $2q$ postings lists (Kane and Tompa, 2018). In terms of MaxScore, any combination of low- and high-impact lists might be in passive, with the remainder in active. However, the use of the $U_{t}$ limits to decide whether a document in an active list should be scored remains valid: no document that might generate a similarity score greater than $\theta$, and thus should be scored, will be bypassed. On the other hand, when the $\mathcal{L}(t)$ list for one of the terms is in passive (and because it is longer, it will enter earlier), only the postings in $\mathcal{H}(t)$ can now cause a document to be scored on account of term $t$, and hence there is a very real capacity for additional documents to be bypassed. Similar considerations arise with WAND and BMW: in all three processing modes the mere act of splitting the lists introduces the possibility of accelerated query processing, without risking any loss in terms of answer set correctness.
171
+
172
+ As an orthogonal enhancement, priming can be applied whenever any high-impact list contains $k$ or more postings, $|\mathcal{H}(t)| \geq k$ . If that holds, then
173
+
174
+ $$
+ \theta_0 = \max \left\{ U_{\mathcal{L}(t)} \mid t \in Q \wedge |\mathcal{H}(t)| \geq k \right\} \tag{2}
+ $$
177
+
178
+ can be used as a priming value for the heap bound, without risking the integrity of the top- $k$ answers.
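+
+ A minimal sketch of this priming rule, assuming per-term dictionaries holding the high-impact lists and the $U_{\mathcal{L}(t)}$ bounds are available at query time (the names are illustrative):
+
+ ```python
+ def primed_threshold(query_terms, H, U_L, k):
+     """Eqn. 2: the largest U_L(t) over query terms t with |H(t)| >= k."""
+     candidates = [U_L[t] for t in query_terms if len(H[t]) >= k]
+     return max(candidates) if candidates else float("-inf")
+ ```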
179
+
180
+ Next, if additional bookkeeping operations can be tolerated, it is also possible to compute what we denote as smart bounds. When the low-impact list for some term $t$ first joins passive, the variable sum_pass is correctly increased by $U_{\mathcal{L}(t)}$ . But if and when the partner term $\mathcal{H}(t)$ also joins passive, increasing sum_pass by $U_{\mathcal{H}(t)}$ is needlessly pessimistic, since no document can appear in both $\mathcal{L}(t)$ and $\mathcal{H}(t)$ . Hence, the correct second increment associated with term $t$ is by $U_{\mathcal{H}(t)} - U_{\mathcal{L}(t)}$ . In the case of MaxScore, the corresponding smart bounds are easily computed, and are required only occasionally – when a postings list is moved from active to passive. However, for WAND and BMW the estimations must be modified much more frequently, and while smart bounds can certainly be computed, their benefit is less clear. One key part of the experimentation in Section 4 is to quantify the relationship between document scoring and bounds manipulation. Ding and Suel (2011) and Kane and Tompa (2018) also noted the idea of smart bounds estimation in their descriptions of list splitting, but they did not consider MaxScore-based processing.
183
+
184
+ Postings Clipping Our additional proposal – denoted postings clipping – is illustrated in the bottom half of Figure 3. Rather than partitioning the set of postings in $\mathcal{I}_t$ across $\mathcal{L}(t)$ and $\mathcal{H}(t)$ , every posting remains in $\mathcal{L}(t)$ , and we “clip” the high-impact postings by slicing them into two parts, and forming a posting pair. The base part remains in $\mathcal{L}(t)$ as a posting with an impact equal to $U_{\mathcal{L}(t)}$ , the maximum score contribution permitted in $\mathcal{L}(t)$ ; and the second component of the pair becomes a new posting in $\mathcal{H}(t)$ , to account for the “trimmed” part of the original impact value, and retain the same total.
185
+
186
+ This arrangement has the singular advantage of no longer requiring any smart bounds management, or equivalent run-time manipulation of score estimates. Smart bounds are needed in the list splitting approach of Kane and Tompa (2018) to adjust for the constraint that no document can appear in both $\mathcal{L}(t)$ and $\mathcal{H}(t)$ , and hence that $U_{\mathcal{L}(t)} + U_{\mathcal{H}(t)}$ is an over-estimate (by an addend of $U_{\mathcal{L}(t)}$ ) of $t$ 's true upper bound $U_t$ . But with postings clipping, $U_{\mathcal{H}(t)}$ is instead set to the maximum residual amount across all of $t$ 's postings, and hence we have $U_t = U_{\mathcal{L}(t)} + U_{\mathcal{H}(t)}$ . In turn, that means that when queries are being processed the lists $\mathcal{L}(t)$ and $\mathcal{H}(t)$ can be treated as if they were derived from completely independent terms, with all interactions between them handled by the underlying processing logic, be that MaxScore, WAND, or BMW.
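+
+ The following sketch shows one way a list might be clipped at indexing time; the clipping level (passed in as a cap, playing the role of $U_{\mathcal{L}(t)}$) and the data layout are assumptions of this illustration:
+
+ ```python
+ def clip_list(postings, cap):
+     """postings: (docnum, impact) pairs; cap: the chosen U_L(t) level."""
+     low, high = [], []
+     for d, w in postings:
+         if w > cap:
+             low.append((d, cap))       # base part, impact clipped to U_L(t)
+             high.append((d, w - cap))  # residual part becomes a posting in H(t)
+         else:
+             low.append((d, w))
+     U_L = cap
+     U_H = max((w for _, w in high), default=0)
+     # Whenever at least one posting exceeds the cap, U_L + U_H equals the
+     # original whole-of-list bound U_t, as described above.
+     return low, high, U_L, U_H
+ ```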
187
+
188
+ That is, while there are more total postings to be stored and processed, the change from list splitting with smart bounds to postings clipping substantially simplifies the query-time processing logic. Indeed, with the exception of priming – which can still be applied on the basis that is noted in Eqn. 2 – a MaxScore-based postings clipping implementation remains exactly as is shown by the logic provided in Algorithm 1. The result is that – as we demonstrate in Section 4 – quite dramatic reductions in query processing times for the learned sparse retrieval models can be achieved.
189
+
190
+ Figure 4 crystallizes the difference between list splitting and postings clipping. In the left pane (list splitting) the $U_{\mathcal{H}(t)}$ values rise as $U_{\mathcal{L}(t)}$ increases,
191
+
192
+ ![](images/5d4c84eae60dcae35b8b6f87f27acf755ba09288b4c0d0430914d15cfc5000d9.jpg)
193
+ Figure 4: Bounding scores for list splitting (left) and postings clipping (right) using DeepImpact, with $U_{\mathcal{H}(t)}$ plotted as a function of $U_{\mathcal{L}(t)}$ for the unique terms occurring in the MSMARCO-v1 queries.
194
+
195
+ plotted over the set of MSMARCO-v1 query terms; whereas in the right pane (postings clipping) $U_{\mathcal{H}(t)}$ becomes increasingly constrained as $U_{\mathcal{L}(t)}$ grows. The difference affects the pruning bounds estimation, and while it can be partially ameliorated by smart bounds adjustments, the postings clipping mechanism is more precise.
196
+
197
+ # 4 Experiments
198
+
199
+ We now describe experiments that quantify the benefits arising from the postings clipping approach. Our experiments make use of both the MSMARCO-v1 (8.8 million passages) and MSMARCO-v2 (138.4 million passages) collections, four representative ranking algorithms, and the PISA query processing system, which was recently shown to outperform the commonly used Anserini system for document-at-a-time retrieval over learned sparse indexes (Mackenzie et al., 2021). Full details of the experimental setup are provided in Appendix A.
200
+
201
+ Index Size Table 2 reports the space consumption of each index/model combination for both the default index and the index with postings clipping. Since clipping is applied only to postings lists with more than 256 postings, and even then adds at most $1/64$ as many new postings, the space overhead compared to the default index is negligible. For instance, the largest overhead of $600\mathrm{MiB}$ added to the $\approx 33\mathrm{GiB}$ index for the uniCOIL model on MSMARCO-v2 represents an increase of only $1.8\%$ .
202
+
203
+ Query Speed Table 3 presents query processing times recorded for the MSMARCO-v1 collection and DeepImpact retrieval, with response latency measured as average milliseconds per query, and with
204
+
205
+ <table><tr><td>Collection</td><td>Model</td><td>Default</td><td>Clipping</td></tr><tr><td rowspan="4">MSMARCO-v1</td><td>BM25</td><td>0.8</td><td>0.8</td></tr><tr><td>DocT5Query</td><td>1.2</td><td>1.2</td></tr><tr><td>DeepImpact</td><td>1.6</td><td>1.6</td></tr><tr><td>uniCOIL</td><td>2.1</td><td>2.2</td></tr><tr><td rowspan="4">MSMARCO-v2</td><td>BM25</td><td>20.3</td><td>20.6</td></tr><tr><td>DocT5Query</td><td>27.7</td><td>27.9</td></tr><tr><td>DeepImpact</td><td>24.7</td><td>25.0</td></tr><tr><td>uniCOIL</td><td>32.7</td><td>33.3</td></tr></table>
206
+
207
+ Table 2: Index space requirement, in GiB, for default inverted indexes, and those with postings clipping. Results are shown for both collections and all four ranking models.
208
+
209
+ <table><tr><td>Method</td><td>k=10</td><td>k=1000</td></tr><tr><td>MaxScore baseline</td><td>8.1</td><td>18.8</td></tr><tr><td>+ length-based ordering</td><td>6.3</td><td>18.0</td></tr><tr><td>+ 1/64 list splitting</td><td>2.0</td><td>7.9</td></tr><tr><td>+ 1/64 priming</td><td>1.9</td><td>6.3</td></tr><tr><td>+ smart bounds</td><td>2.1</td><td>7.0</td></tr><tr><td>or postings clipping</td><td>1.6</td><td>5.9</td></tr><tr><td>WAND baseline</td><td>14.9</td><td>34.0</td></tr><tr><td>+ 1/64 list splitting</td><td>3.5</td><td>13.8</td></tr><tr><td>+ 1/64 priming</td><td>3.2</td><td>11.3</td></tr><tr><td>+ smart bounds</td><td>3.0</td><td>11.1</td></tr><tr><td>or postings clipping</td><td>2.7</td><td>10.8</td></tr><tr><td>VBMW baseline</td><td>4.2</td><td>12.2</td></tr><tr><td>+ 1/64 list splitting</td><td>3.0</td><td>11.7</td></tr><tr><td>+ 1/64 priming</td><td>2.9</td><td>9.8</td></tr><tr><td>+ smart bounds</td><td>2.8</td><td>10.0</td></tr><tr><td>or postings clipping</td><td>3.3</td><td>9.7</td></tr></table>
210
+
211
+ Table 3: Query processing times, all in average milliseconds per query, for the MSMARCO-v1 collection and DeepImpact retrieval model. Algorithmic enhancements are cumulative, stepping down each of the three blocks in the table, except for postings clipping, which is an independent enhancement relative to smart bounds. Similar relative behavior was also observed for median query times, and for the $90\%$ and $99\%$ tail latency query times.
212
+
213
+ the three blocks of values corresponding to three dynamic query pruning approaches. Within each block, we systematically add heuristics. First to be added in the MaxScore block is static term ordering based on length rather than on maximum
214
+
215
+ <table><tr><td rowspan="2">Method</td><td colspan="2">BM25</td><td colspan="2">DocT5Query</td><td colspan="2">DeepImpact</td><td colspan="2">uniCOIL</td></tr><tr><td>k = 10</td><td>1000</td><td>k = 10</td><td>1000</td><td>k = 10</td><td>1000</td><td>k = 10</td><td>1000</td></tr><tr><td>MaxScore baseline</td><td>11.0</td><td>38.7</td><td>8.8</td><td>28.2</td><td>828.0</td><td>1170.4</td><td>164.9</td><td>267.9</td></tr><tr><td>+ postings clipping</td><td>10.5</td><td>30.8</td><td>8.7</td><td>26.2</td><td>50.6</td><td>108.2</td><td>46.5</td><td>114.6</td></tr><tr><td>WAND baseline</td><td>15.8</td><td>61.3</td><td>17.4</td><td>60.3</td><td>1972.9</td><td>2592.7</td><td>213.4</td><td>510.2</td></tr><tr><td>+ postings clipping</td><td>10.4</td><td>36.1</td><td>11.5</td><td>41.0</td><td>166.2</td><td>449.0</td><td>54.4</td><td>169.6</td></tr><tr><td>VBMW baseline</td><td>11.3</td><td>37.5</td><td>12.0</td><td>44.7</td><td>488.2</td><td>719.2</td><td>128.2</td><td>219.6</td></tr><tr><td>+ postings clipping</td><td>13.4</td><td>39.4</td><td>13.7</td><td>45.4</td><td>167.8</td><td>293.3</td><td>164.7</td><td>235.4</td></tr><tr><td>× Speedup on best bl.</td><td>1.06</td><td>1.22</td><td>1.01</td><td>1.08</td><td>9.65</td><td>6.65</td><td>2.76</td><td>1.92</td></tr></table>
216
+
217
+ Table 4: Query processing times, all in average milliseconds per query, for the MSMARCO-v2 collection, four retrieval models, and three dynamic pruning approaches. The fastest time in each column is highlighted in blue, and the best of the three baseline approaches in each column is shown in black. The speedups in the last row are the ratio between the black and blue values in that column.
218
+
219
+ impact score; then the list splitting mechanism is added, with 1/64 of the postings in each list longer than 256 extracted and placed in the high-impact list $\mathcal{H}(t)$ ; then the application (where possible) of Eqn. 2 to set an initial heap threshold; then the further addition of smart bounds. Finally, the last row in each block shows the combination of postings clipping, again with 1/64 of the postings taken into $\mathcal{H}(t)$ , in conjunction with priming (and length-based ordering for MaxScore). Both WAND and BMW apply the smart bounds adjustments during the pivoting step, and have no equivalent to the MaxScore static sorting step. The fastest query time in each of the six sections is highlighted in blue.
220
+
221
+ As can be seen, for DeepImpact retrieval, the fastest approach in five of the six table sections is achieved by MaxScore pruning with postings clipping. That combination takes less than half the time of standard MaxScore processing. The gains from postings clipping are smaller for WAND and VBMW, in part because both algorithms exhibit greater sensitivity to the doubling of the number of query terms.
222
+
223
+ Table 4 then applies all four retrieval models to the large MSMARCO-v2 collection. The six rows correspond to the first and last rows in each block in Table 3, with the first row in each pair showing "standard" retrieval, applying an inverted index and a dynamic pruning method; the second row then compares that baseline against what can be achieved by postings clipping, priming using the same $U_{\mathcal{L}(t)}$ information that arises from the clipping, and (in the case of MaxScore) the matched static sorting. The best baseline in each column is shown in black, and the best overall time in each column in blue. Compared to a standard computation, postings clipping creates speedups of between two and nearly ten in connection with the two most expensive models, with MaxScore plus postings clipping being the best overall method for both $k = 10$ shallow retrieval and $k = 1000$ deep retrieval.
226
+
227
+ Retrieval Quality All of the enhancements investigated above result in rank-safe effectiveness. That is, the changes to the indexing structures and query processing regimes shown in Tables 3 and 4 do not degrade the quality of results compared to the unmodified algorithms, making the speedups even more attractive to search practitioners. Detailed effectiveness results for the four retrieval models are presented in Appendix B.
228
+
229
+ # 5 Conclusion and Future Work
230
+
231
+ To keep up with increasingly large volumes of data, search practitioners require sophisticated structures and processing algorithms, so that response times can remain acceptable. In this paper, we have demonstrated the speed benefits that arise through the use of a new technique we call postings clipping. We have established new benchmarks for querying speed, with minimal cost overheads, for both shallow $k = 10$ and deep $k = 1000$ retrieval. Our techniques can also be embedded as part of a multi-phase processing stack, and are applicable both to normal term-based search and to retrieval via enhanced learned sparse approaches.
232
+
233
+ # Limitations
234
+
235
+ This paper modifies existing inverted index-based storage and query processing schemes to handle the different impact distributions produced by learned index representations. We have not explored how adjusting the training objective of models such as DeepImpact could produce better impact distributions directly targeting efficient query processing algorithms that exploit list upper bounds. Such approaches, if they were fruitful, would potentially mitigate the need for the techniques proposed in this paper.
236
+
237
+ Table 1 indicates that some of the latency problems arise from learned representations distinguishing between different semantic meanings of words, correctly assigning high importance to terms based on context. We have not explored incorporating these pre-index construction insights into the proposed splitting and subsequent query processing schemes, and instead have relied solely on numeric impact values. It is possible that making splitting decisions in conjunction with the learning process might lead to even better outcomes.
238
+
239
+ Resource constraints have meant that we have restricted our investigation to the DeepImpact- and uniCOIL-based learned sparse representations. While we believe our techniques will provide similar benefits to other learned sparse retrieval techniques such as TILDE (Zhuang and Zuccon, 2021a) and SPLADE (Formal et al., 2021b,a), we have not explored those approaches as part of this work.
240
+
241
+ Finally, our investigation explored how split lists can be used to prime the initial heap threshold $\theta$ . Recent work has shown that more accurate predictions can further accelerate querying on traditional ranking models (Petri et al., 2019; Mallia et al., 2020). To determine whether these approaches translate to learned sparse models, we applied idealized "oracle" thresholds to our experimental framework (see Appendix B for details). While the results are promising (up to a $2.1\times$ speedup over the best results in Table 4), it remains unclear if existing threshold estimators can be applied to learned sparse models, or if more sophisticated estimators are necessary.
242
+
243
+ # Ethics Statement
244
+
245
+ The authors have no external conflicts of interest to declare, and have not been required to seek any ethics clearances in order to undertake this work.
246
+
247
+ If widely adopted, the techniques we propose will lead to fewer computational resources being required for querying tasks carried out via learned sparse models, and hence to reduced electricity consumption and greenhouse gas emissions.
250
+
251
+ # Acknowledgements
252
+
253
+ This work was supported by the Australian Research Council (project DP200103136). We thank the referees for their helpful suggestions.
254
+
255
+ # References
256
+
257
+ V. N. Anh, O. de Kretser, and A. Moffat. 2001. Vector-space ranking with effective early termination. In Proc. SIGIR, pages 35-42.
258
+ N. Arabzadeh, A. Vtyurina, X. Yan, and C. L. A. Clarke. 2021. Shallow pooling for sparse labels. arXiv:2109.00062v2.
259
+ P. Bajaj, D. Campos, N. Craswell, L. Deng, J. Gao, X. Liu, R. Majumder, A. McNamara, B. Mitra, T. Nguyen, M. Rosenberg, X. Song, A. Stoica, S. Tiwary, and T. Wang. 2018. MS MARCO: A human generated MAchine Reading COmprehension dataset. arXiv:1611.09268v3.
260
+ A. Z. Broder, D. Carmel, M. Herscovici, A. Soffer, and J. Zien. 2003. Efficient query evaluation using a two-level retrieval process. In Proc. CIKM, pages 426-434.
261
+ N. Cohen, A. Portnoy, B. Fetahu, and A. Ingber. 2022. SDR: efficient neural re-ranking using succinct document representation. In Proc. ACL, pages 6624-6637.
262
+ Z. Dai and J. Callan. 2019. Context-aware sentence/passage term importance estimation for first stage retrieval. arXiv:1910.10687.
263
+ C. M. Daoud, E. S. de Moura, A. L. Carvalho, A. S. da Silva, D. Fernandes, and C. Rossi. 2016. Fast top-k preserving query processing using two-tier indexes. Inf. Proc. & Man., 52(5):855-872.
264
+ C. M. Daoud, E. S. de Moura, D. Fernandes, A. S. da Silva, C. Rossi, and A. Carvalho. 2017. Waves: A fast multi-tier top- $k$ query processing algorithm. Inf. Retr., 20(3):292-316.
265
+ L. Dhulipala, I. Kabiljo, B. Karrer, G. Ottaviano, S. Pupyrev, and A. Shalita. 2016. Compressing graphs and indexes with recursive graph bisection. In Proc. KDD, pages 1535-1544.
266
+ S. Ding and T. Suel. 2011. Faster top- $k$ document retrieval using block-max indexes. In Proc. SIGIR, pages 993-1002.
267
+ T. Formal, C. Lassance, B. Piwowarski, and S. Clinchant. 2021a. SPLADE v2: Sparse lexical and expansion model for information retrieval. arXiv:2109.10086.
268
+
269
+ T. Formal, B. Piwowarski, and S. Clinchant. 2021b. SPLADE: Sparse lexical and expansion model for first stage ranking. In Proc. SIGIR, pages 2288-2292.
270
+ L. Gao, Z. Dai, and J. Callan. 2021. COIL: Revisit exact lexical match in information retrieval with contextualized inverted list. In Proc. NAACL, pages 3030-3042.
271
+ G. Izacard, F. Petroni, L. Hosseini, N. De Cao, S. Riedel, and E. Grave. 2020. A memory efficient baseline for open domain question answering. arXiv:2012.15156.
272
+ A. Kane and F. W. Tompa. 2018. Split-lists and initial thresholds for WAND-based search. In Proc. SIGIR, pages 877-880.
273
+ V. Karpukhin, B. Oguz, S. Min, P. Lewis, L. Wu, S. Edunov, D. Chen, and W. Yih. 2020. Dense passage retrieval for open-domain question answering. In Proc. EMNLP, pages 6769-6781.
274
+ O. Khattab and M. Zaharia. 2020. ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In Proc. SIGIR, pages 39-48.
275
+ D. Lemire and L. Boytsov. 2015. Decoding billions of integers per second through vectorization. Soft. Prac. & Exp., 45(1):1-29.
276
+ J. Lin and X. Ma. 2021. A few brief notes on DeepImpact, COIL, and a conceptual framework for information retrieval techniques. arXiv:2106.14807.
277
+ J. Lin, X. Ma, S.-C. Lin, J.-H. Yang, R. Pradeep, and R. Nogueira. 2021. Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations. In Proc. SIGIR, pages 2356-2362.
278
+ J. Lin, J. Mackenzie, C. Kamphuis, C. Macdonald, A. Mallia, M. Siedlaczek, A. Trotman, and A. de Vries. 2020. Supporting interoperability between open-source search engines with the common index file format. In Proc. SIGIR, pages 2149-2152.
279
+ X. Ma, R. Pradeep, R. Nogueira, and J. Lin. 2022. Document expansions and learned sparse lexical representations for MSMARCO V1 and V2. In Proc. SIGIR.
280
+ S. MacAvaney, A. Yates, A. Cohan, and N. Goharian. 2019. CEDR: Contextualized embeddings for document ranking. In Proc. SIGIR, pages 1101-1104.
281
+ J. Mackenzie, M. Petri, and A. Moffat. 2022a. Anytime ranking on document-ordered indexes. ACM Trans. Inf. Sys., 40(1):13:1-13:32.
282
+ J. Mackenzie, M. Petri, and A. Moffat. 2022b. Tradeoff options for bipartite graph partitioning. IEEE Trans. Know. & Data Eng. To appear.
283
+ J. Mackenzie, A. Trotman, and J. Lin. 2021. Wacky weights in learned sparse representations and the revenge of score-at-a-time query evaluation. arXiv:2110.11540.
284
+
285
+ A. Mallia, O. Khattab, N. Tonellotto, and T. Suel. 2021. Learning passage impacts for inverted indexes. In Proc. SIGIR, pages 1723-1727.
286
+ A. Mallia, G. Ottaviano, E. Porciani, N. Tonellotto, and R. Venturini. 2017. Faster BlockMax WAND with variable-sized blocks. In Proc. SIGIR, pages 625-634.
287
+ A. Mallia and E. Porciani. 2019. Faster BlockMax WAND with longer skipping. In Proc. ECIR, pages 771-778.
288
+ A. Mallia, M. Siedlaczek, J. Mackenzie, and T. Suel. 2019. PISA: Performant indexes and search for academia. In Proc. OSIRRC at SIGIR 2019, pages 50-56.
289
+ A. Mallia, M. Siedlaczek, M. Sun, and T. Suel. 2020. A comparison of top- $k$ threshold estimation techniques for disjunctive query processing. In Proc. CIKM, pages 2141-2144.
290
+ R. Nogueira and J. Lin. 2019. From doc2query to docTTTTTquery. Unpublished report, David R. Cheriton School of Computer Science, University of Waterloo, Canada.
291
+ M. Petri, J. S. Culpepper, and A. Moffat. 2013. Exploring the magic of WAND. In Proc. Aust. Doc. Comp. Symp., pages 58-65.
292
+ M. Petri, A. Moffat, J. Mackenzie, J. S. Culpepper, and D. Beck. 2019. Accelerated query processing via similarity score prediction. In Proc. SIGIR, pages 485-494.
293
+ G. E. Pibiri and R. Venturini. 2021. Techniques for inverted index compression. ACM Comp. Surv., 53(6):125.1-125.36.
294
+ R. Pradeep, R. Nogueira, and J. Lin. 2021. The expando-mono-duo design pattern for text ranking with pretrained sequence-to-sequence models. arXiv:2101.05667.
295
+ Y. Qu, Y. Ding, J. Liu, K. Liu, R. Ren, W. X. Zhao, D. Dong, H. Wu, and H. Wang. 2021. RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering. In Proc. NAACL, pages 5835-5847.
296
+ C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1-67.
297
+ K. M. Risvik, T. Chilimbi, H. Tan, K. Kalyanaraman, and C. Anderson. 2013. Maguro, a system for indexing and searching over very large text collections. In Proc. WSDM, pages 727-736.
298
+ S. E. Robertson and H. Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Found. Trnd. Inf. Retr., 3:333-389.
299
+
300
+ T. Strohman and W. B. Croft. 2007. Efficient document retrieval in main memory. In Proc. SIGIR, pages 175-182.
301
+ N. Tonellotto, C. Macdonald, and I. Ounis. 2018. Efficient query processing for scalable web search. Found. Trnd. Inf. Retr., 12(4-5):319-500.
302
+ H. R. Turtle and J. Flood. 1995. Query evaluation: Strategies and optimizations. Inf. Proc. & Man., 31(6):831-850.
303
+ I. Yamada, A. Asai, and H. Hajishirzi. 2021. Efficient passage retrieval with hashing for open-domain question answering. In Proc. ACL, pages 979-986.
304
+ P. Yang, H. Fang, and J. Lin. 2018. Anserini: Reproducible ranking baselines using Lucene. J. Data Inf. Qual., 10(4):1-20.
305
+ J. Zhan, J. Mao, Y. Liu, J. Guo, M. Zhang, and S. Ma. 2021. Jointly optimizing query encoder and product quantization to improve retrieval performance. In Proc. CIKM, pages 2487-2496.
306
+ J. Zhan, J. Mao, Y. Liu, J. Guo, M. Zhang, and S. Ma. 2022. Learning discrete representations via constrained clustering for effective and efficient dense retrieval. In Proc. WSDM, pages 1328-1336.
307
+ L. Zhao. 2012. Modeling and solving term mismatch for full-text retrieval. SIGIR Forum, 46(2):117-118.
308
+ S. Zhuang and G. Zuccon. 2021a. Fast passage reranking with contextualized exact term matching and efficient passage expansion. arXiv:2108.08513.
309
+ S. Zhuang and G. Zuccon. 2021b. TILDE: Term independent likelihood moDEl for passage re-ranking. In Proc. SIGIR, pages 1483-1492.
310
+ J. Zobel and A. Moffat. 2006. Inverted files for text search engines. ACM Comp. Surv., 38(2):6:1-6:56.
311
+
312
+ # A Experimental Setup
313
+
314
+ Hardware and Latency Measurement All of our experiments are performed entirely in memory on a Linux server with two Intel Xeon Gold 6144 CPUs (3.5GHz) and 512 GiB of memory. Latency measurements are taken as the average of three independent runs, where each run utilizes 16 processing cores to process the query stream in parallel in a one-thread-per-query manner.
315
+
316
+ Collections and Metrics The MSMARCO-v1 passage collection contains around 8.8 million passages and a total of 6,980 dev queries (Bajaj et al., 2018). We measured effectiveness using the official RR@10 metric (see Arabzadeh et al. (2021) for additional discussion of this). The much larger MSMARCO-v2 collection contains around 138.4 million passages and 8,184 queries (after combining both the dev and dev2 query sets), with effectiveness measured using the official RR@100 metric. In this second collection, the short text passages are augmented with ancillary fields, specifically URLs, titles, and headings, distributed as additional resources (Ma et al., 2022). We also measured effectiveness using the deeper Recall@1000 metric, to validate the quality of our generated runs.
319
+
320
+ Indexing and Query Processing Indexes were constructed using Anserini (Yang et al., 2018) and converted into PISA indexes (Mallia et al., 2019) via the common index file format (Lin et al., 2020), allowing pre-built indexes to be used for improved reproducibility (Ma et al., 2022). Before time or space efficiency was measured, the indexes were also reordered via the recursive graph bisection algorithm to reduce their space consumption and improve locality of access (Dhulipala et al., 2016; Mackenzie et al., 2022b). All remaining experimentation was conducted with the PISA engine. Indexes were compressed with the SIMD-BP128 bitpacking codec (Lemire and Boytsov, 2015). The VBMW algorithm used an average block size of $40 \pm 0.5$ following the results of Mallia et al. (2017).
321
+
322
+ Ranking Models Our experiments made use of two traditional ranking models:
323
+
324
+ - BM25 (Robertson and Zaragoza, 2009) is the traditional BM25 lexical ranking model. The exact formulation of the BM25 version we employed is outlined in the PISA overview (Mallia et al., 2019). We applied BM25 to MSMARCO-v1 with $k_{1} = 0.82$ and $b = 0.68$ (Lin et al., 2021), and used the default parameters $k_{1} = 0.9$ and $b = 0.4$ for MSMARCO-v2.
325
+ - DocT5Query (Nogueira and Lin, 2019) applies the BM25 ranking model over an expanded version of the document corpus using the T5 sequence-to-sequence model (Raffel et al., 2020). The same BM25 formulation and parameters are used as above; DocT5Query can be thought of as a "neurally augmented" corpus with a traditional ranking model.
326
+
327
+ To those we added two representative learned sparse retrieval models:
328
+
329
+ - DeepImpact (Mallia et al., 2021) employs the DocT5Query model to expand the corpus, and then learns a per-term impact score for each passage.
330
+
331
+ <table><tr><td rowspan="2">Method</td><td colspan="2">BM25</td><td colspan="2">DocT5Query</td><td colspan="2">DeepImpact</td><td colspan="2">uniCOIL</td></tr><tr><td>k = 10</td><td>1000</td><td>k = 10</td><td>1000</td><td>k = 10</td><td>1000</td><td>k = 10</td><td>1000</td></tr><tr><td>MaxScore baseline</td><td>1.7</td><td>5.5</td><td>0.8</td><td>3.7</td><td>8.1</td><td>18.8</td><td>13.8</td><td>27.5</td></tr><tr><td>+ postings clipping</td><td>1.5</td><td>5.0</td><td>0.8</td><td>3.4</td><td>1.6</td><td>5.9</td><td>5.8</td><td>15.7</td></tr><tr><td>WAND baseline</td><td>2.3</td><td>7.4</td><td>1.3</td><td>7.0</td><td>14.9</td><td>34.0</td><td>23.3</td><td>56.5</td></tr><tr><td>+ postings clipping</td><td>1.7</td><td>5.6</td><td>1.0</td><td>5.5</td><td>2.7</td><td>10.8</td><td>8.1</td><td>27.5</td></tr><tr><td>VBMW baseline</td><td>2.0</td><td>7.3</td><td>1.3</td><td>7.0</td><td>4.2</td><td>12.2</td><td>11.5</td><td>28.5</td></tr><tr><td>+ postings clipping</td><td>2.4</td><td>7.4</td><td>1.3</td><td>6.7</td><td>3.3</td><td>9.7</td><td>13.6</td><td>27.2</td></tr><tr><td>× Speedup on best bl.</td><td>1.13</td><td>1.10</td><td>1.00</td><td>1.09</td><td>2.63</td><td>2.10</td><td>1.98</td><td>1.75</td></tr></table>
332
+
333
+ Table 5: Query processing times, all in average milliseconds per query, for the MSMARCO-v1 collection, four retrieval models, and three dynamic pruning approaches, with the same structure and interpretation as Table 4. The fastest time in each column is highlighted in blue, and the best of the three baseline approaches in each column is shown in black. The speedups in the last row are the ratio between the black and blue values in that column.
334
+
335
+ - uniCOIL (Lin and Ma, 2021) employs TILDE (Zhuang and Zuccon, 2021b,a) document expansion, and learns per-term weights according to a simplified (1-dimensional) COIL model (Gao et al., 2021). Unlike DeepImpact, uniCOIL also applies term weighting at query-time, transforming bag-of-words queries into weighted queries (and resulting in weighted bag-of-words ranking; see Section 2).
336
+
337
+ The learned sparse models work with pre-quantized scores, and so we also pre-computed and quantized the BM25 and DocT5Query indexes into integer impact scores in the range [0, 255] using uniform quantization (Anh et al., 2001). All experimentation then involved computing document scores as (weighted) sums of impacts. Note also that DocT5Query, DeepImpact, and uniCOIL were all fine-tuned on the MSMARCO-v1 training data; those same models are then applied in a zero-shot manner to MSMARCO-v2.
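+
+ For illustration, a uniform quantizer of the kind referenced above (mapping non-negative scores into the integer range [0, 255]) might look as follows; the rounding convention and parameter names are assumptions of this sketch rather than the exact procedure used to build our indexes:
+
+ ```python
+ def quantize(score, max_score, levels=256):
+     """Uniformly map a score in [0, max_score] onto the integers 0..levels-1."""
+     q = int(round(score / max_score * (levels - 1)))
+     return min(max(q, 0), levels - 1)
+ ```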
338
+
339
+ Setting Hyperparameters In order to decide the split points for use in our experimentation, we ran a preliminary experiment where we tried splits of $1 / p$ for $p \in \{8, 16, 32, 64, 128, 256\}$ using the DeepImpact ranker and the MSMARCO-v1 dev queries. While all split values resulted in large efficiency improvements, $p = 64$ was the best choice. We then fixed $p = 64$ for all remaining collections and experiments, and did not further tune this value.
340
+
341
+ Reproducibility Our list splitting, clipping, and priming contributions were all implemented inside the C++ PISA framework; this modified version of PISA is available at https://github.com/jmmackenzie/postings-clipping. Scripts for downloading and pre-processing data, computing split points, building indexes, and running the experiments are also available in that repository to facilitate reproducibility. Our experimentation is all based on widely-available datasets (Bajaj et al., 2018; Ma et al., 2022).
342
+
343
+ # B Additional Measurements and Results
344
+
345
+ Query Speed Table 5 provides a set of timings for the MSMARCO-v1 collection, in the same format as was employed in Table 4. A similar pattern of behavior arises, demonstrating that our findings apply to both the smaller MSMARCO-v1 and the large MSMARCO-v2 collections. Unsurprisingly, the observed speedup ratios for MSMARCO-v1 are typically less than those measured for MSMARCO-v2.
346
+
347
+ Effectiveness Table 6 presents effectiveness scores of the four retrieval models, as measured within our experimental framework. While the emphasis in this paper is on efficiency rather than effectiveness, it is interesting to note the strong improvements that the neurally augmented DocT5Query method and the two learned sparse methods obtain relative to the standard BM25 approach. Those substantial gains arise from a combination of document-level term expansion and the non-linear context-based relationships that are uncovered between term frequency and term impact.
348
+
349
+ <table><tr><td>Model</td><td>RR@d</td><td>Rec.@1000</td></tr><tr><td colspan="3">MSMARCO-v1 (d=10)</td></tr><tr><td>BM25</td><td>0.187</td><td>0.858</td></tr><tr><td>DocT5Query</td><td>0.267</td><td>0.945</td></tr><tr><td>DeepImpact</td><td>0.327</td><td>0.948</td></tr><tr><td>uniCOIL</td><td>0.350</td><td>0.965</td></tr><tr><td colspan="3">MSMARCO-v2 (d=100)</td></tr><tr><td>BM25</td><td>0.086</td><td>0.696</td></tr><tr><td>DocT5Query</td><td>0.110</td><td>0.762</td></tr><tr><td>DeepImpact</td><td>0.132</td><td>0.736</td></tr><tr><td>uniCOIL</td><td>0.153</td><td>0.772</td></tr></table>
350
+
351
+ Table 6: Effectiveness of the different models on both collections, using the official metrics associated with each, and runs of length $k = 1000$ .
352
+
353
+ Our implementations achieve similar effectiveness scores to those previously reported for these three recent techniques - see, for example, Mackenzie et al. (2021) and Ma et al. (2022).
354
+
355
+ Idealized Initial Thresholds Threshold estimation is a technique that improves the efficiency of query processing (Mallia et al., 2020; Petri et al., 2019). Like priming, it enables better skipping over unimportant documents during index traversal by providing an initial minimum threshold score a document needs to obtain to be considered during ranking; unlike priming, however, various unsafe alternatives can be used for predicting initial thresholds. Table 7 demonstrates the potential speedup if the initial heap threshold for each query could (in an omniscient manner) be set at exactly the final score of the $k$ th most similar document; that is, if the priming process could be clairvoyant. The substantial difference in execution times achievable, up to $2.1 \times$ relative to the clipping runs shown in Table 4, indicates that more accurate initial threshold prediction mechanisms are a promising direction for further accelerating learned sparse retrieval mechanisms.
356
+
357
358
+
359
+ <table><tr><td>Method</td><td>k=10</td><td>1000</td></tr><tr><td>MaxScore baseline</td><td>828.0</td><td>1170.4</td></tr><tr><td>+ postings clipping</td><td>50.6</td><td>108.2</td></tr><tr><td>+ oracle thresholds</td><td>35.4</td><td>66.8</td></tr><tr><td>WAND baseline</td><td>1972.9</td><td>2592.7</td></tr><tr><td>+ postings clipping</td><td>166.2</td><td>449.0</td></tr><tr><td>+ oracle thresholds</td><td>77.7</td><td>244.4</td></tr><tr><td>VBMW baseline</td><td>488.2</td><td>719.2</td></tr><tr><td>+ postings clipping</td><td>167.8</td><td>293.3</td></tr><tr><td>+ oracle thresholds</td><td>149.5</td><td>188.5</td></tr></table>
360
+
361
+ Table 7: Demonstrating the potential of accurate threshold estimation on the MSMARCO-v2 collection and the DeepImpact model, assuming clairvoyant pre-knowledge for each query. If the final heap threshold could be predicted accurately, further speedups are possible.
acceleratinglearnedsparseindexesviatermimpactdecomposition/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1796b60bb76e28c8610e8d9070159653fbcd1b370ecc97fc23ca2fe9d30dc239
3
+ size 522652
acceleratinglearnedsparseindexesviatermimpactdecomposition/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:822c7e53b7ac32f27b90a4421325dcdbe7d378df8cca50bde8b2f5cbabb701d1
3
+ size 529186
acceptabilityjudgementsviaexaminingthetopologyofattentionmaps/691b92a0-1ad0-4a13-a1ea-1bf2414d7261_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:af9ee0feb32952f22276a2436745898255488592840e4eb507b94109f413db0c
3
+ size 147052
acceptabilityjudgementsviaexaminingthetopologyofattentionmaps/691b92a0-1ad0-4a13-a1ea-1bf2414d7261_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d4c8eca3bac7b3d758fd66ddcf3510918f0f5d5a2ea465aaded3d7cfa2047699
3
+ size 183570
acceptabilityjudgementsviaexaminingthetopologyofattentionmaps/691b92a0-1ad0-4a13-a1ea-1bf2414d7261_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:99d43dbd2efc6de34fd2a1bfeee87e33f0bf62d8df36f4fb4e3c3cb5a7795bc1
3
+ size 1027740
acceptabilityjudgementsviaexaminingthetopologyofattentionmaps/full.md ADDED
@@ -0,0 +1,656 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Acceptability Judgements via Examining the Topology of Attention Maps
2
+
3
+ Daniil Cherniavskii $^{1,2*}$ , Eduard Tulchinskii $^{1*}$ , Vladislav Mikhailov $^{3*}$ ,
4
+
5
+ Irina Proskurina\*, Laida Kushnareva, Ekaterina Artemova$^{5,7}$
6
+
7
+ Serguei Barannikov<sup>1,6</sup>, Irina Piontkovskaya<sup>5</sup>, Dmitri Piontkovski<sup>4</sup>, Evgeny Burnaev<sup>1,2</sup>
8
+
9
+ $^{1}$ Skolkovo Institute of Science and Technology, $^{2}$ AIRI, $^{3}$ SberDevices,
10
+
11
+ $^{4}$ HSE University, $^{5}$ Huawei Noah's Ark lab, $^{6}$ CNRS, IMJ
12
+
13
+ <sup>7</sup>Center for Information and Language Processing (CIS), LMU Munich, Germany
14
+
15
+ Correspondence: Eduard.Tulchinskiy@skoltech.ru
16
+
17
+ # Abstract
18
+
19
+ The role of the attention mechanism in encoding linguistic knowledge has received special interest in NLP. However, the attention heads' ability to judge the grammatical acceptability of a sentence has been underexplored. This paper approaches the paradigm of acceptability judgments with topological data analysis (TDA), showing that the topological properties of the attention graph can be efficiently exploited for two standard practices in linguistics: binary judgments and linguistic minimal pairs. Topological features enhance the BERT-based acceptability classifier scores by up to 0.24 Matthews correlation coefficient score on CoLA in three languages (English, Italian, and Swedish). By revealing the topological discrepancy between attention graphs of minimal pairs, we achieve human-level performance on the BLiMP benchmark, outperforming nine statistical and Transformer LM baselines. At the same time, TDA provides the foundation for analyzing the linguistic functions of attention heads and interpreting the correspondence between the graph features and grammatical phenomena. We publicly release the code and other materials used in the experiments<sup>1</sup>.
20
+
21
+ # 1 Introduction
22
+
23
+ Linguistic competence of neural language models (LMs) has emerged as one of the core sub-fields in NLP. The research paradigms explore whether Transformer LMs (Vaswani et al., 2017) induce linguistic generalizations from raw pre-training corpora (Warstadt et al., 2020b; Zhang et al., 2021), what properties are learned during task-specific fine-tuning (Miaschi et al., 2020; Merchant et al., 2020), and how the experimental results are connected to grammar and language acquisition theories (Pater, 2019; Manning et al., 2020).
24
+
25
+ One of these paradigms is centered around acceptability judgments, which have formed an empirical foundation in generative linguistics over the last six decades (Chomsky, 1965; Schütze, 1996; Scholz et al., 2021). Acceptability of linguistic stimuli is traditionally investigated in the form of a forced choice between binary categories or minimal pairs (Sprouse, 2018), which are widely adopted for acceptability classification (Linzen et al., 2016; Warstadt et al., 2019) and probabilistic LM scoring (Lau et al., 2017).
26
+
27
+ A scope of approaches has been proposed to interpret the roles of hundreds of attention heads in encoding linguistic properties (Htut et al., 2019; Wu et al., 2020) and identify how the most influential ones benefit the downstream performance (Voita et al., 2019; Jo and Myaeng, 2020). Prior work has demonstrated that heads induce grammar formalisms and structural knowledge (Zhou and Zhao, 2019; Lin et al., 2019; Luo, 2021), and linguistic features motivate attention patterns (Kovaleva et al., 2019; Clark et al., 2019). Recent studies also show that certain heads can have multiple functional roles (Pande et al., 2021) and even perform syntactic functions for typologically distant languages (Ravishankar et al., 2021).
28
+
29
+ Our paper presents one of the first attempts to analyze attention heads in the context of linguistic acceptability (LA) using topological data analysis $(\mathrm{TDA}^2$ ; Chazal and Michel, 2017). TDA allows for exploiting complex structures underlying textual data and investigating graph representations of Transformer's attention maps. We show that topological features are sensitive to well-established LA contrasts, and the grammatical phenomena can be encoded with the topological properties of the attention map.
30
+
31
+ The main contributions are the following: (i) We adapt TDA methods to two standard approaches to LA judgments: acceptability classification and scoring minimal pairs ( $\S 3$ ). (ii) We conduct acceptability classification experiments in three Indo-European languages (English, Italian, and Swedish) and outperform the established baselines ( $\S 4$ ). (iii) We introduce two scoring functions, which reach human-level performance in discriminating between minimal pairs in English and surpass nine statistical and Transformer LM baselines ( $\S 5$ ). (iv) The linguistic analysis of the feature space proves that TDA can serve as a complementary approach to interpreting the attention mechanism and identifying heads with linguistic functions ( $\S 4.3, \S 5.3, \S 6$ ).
32
+
33
+ # 2 Related Work
34
+
35
+ # 2.1 Linguistic Acceptability
36
+
37
+ Acceptability Classification. Early works approach acceptability classification with classic ML methods, hand-crafted feature templates, and probabilistic syntax parsers (Cherry and Quirk, 2008; Wagner et al., 2009; Post, 2011). Another line employs statistical LMs (Heilman et al., 2014), including threshold-based classification with LM scoring functions (Clark et al., 2013). The ability of RNN-based models (Elman, 1990; Hochreiter and Schmidhuber, 1997) to capture long-distance regularities has stimulated investigation of their grammatical sensitivity (Linzen et al., 2016). With the release of the Corpus of Linguistic Acceptability (CoLA; Warstadt et al., 2019) and advances in language modeling, the focus has shifted towards Transformer LMs (Yin et al., 2020), establishing LA as a proxy for natural language understanding (NLU) abilities (Wang et al., 2018) and linguistic competence of LMs (Warstadt and Bowman, 2019).
38
+
39
+ Linguistic Minimal Pairs. A forced choice between minimal pairs is a complementary approach to LA, which evaluates preferences between pairs of sentences that contrast an isolated grammatical phenomenon (Schütze, 1996). The idea of discriminating between minimal contrastive pairs has been widely applied to scoring generated hypotheses in downstream tasks (Pauls and Klein, 2012; Salazar et al., 2020), measuring social biases (Nangia et al., 2020), analyzing machine translation models (Burlot and Yvon, 2017; Sennrich, 2017), and linguistic profiling of LMs in multiple languages (Marvin and Linzen, 2018; Mueller et al., 2020).
42
+
43
+ # 2.2 Topological Data Analysis in NLP
44
+
45
+ TDA has found several applications in NLP. One of them is word sense induction by clustering word graphs and detecting their connected components. The graphs can be built from word dictionaries (Levary et al., 2012), association networks (Dubuisson et al., 2013), and word vector representations (Jakubowski et al., 2020). Another direction involves building classifiers upon geometric structural properties for movie genre detection (Doshi and Zadrozny, 2018), textual entailment (Savle et al., 2019), and document classification (Das et al., 2021; Werenski et al., 2022). Recent works have mainly focused on the topology of LMs' internal representations. Kushnareva et al. (2021) represent attention maps with TDA features to approach artificial text detection. Colombo et al. (2021) introduce BARYSCORE, an automatic evaluation metric for text generation that relies on Wasserstein distance and barycenters. To the best of our knowledge, TDA methods have not yet been applied to LA.
46
+
47
+ # 3 Methodology
48
+
49
+ # 3.1 Attention Graph
50
+
51
+ We treat Transformer's attention matrix $A^{attn}$ as a weighted graph $G$ , where the vertices represent tokens, and the edges connect pairs of tokens with mutual attention weights. This representation can be used to build a family of attention graphs called a filtration, i.e., an ordered set of graphs $G^{\tau_i}$ filtered by increasing attention weight thresholds $\tau_i$ . Filtering edges lower than the given threshold affects the graph structure and its core features, e.g., the number of edges, connected components, or cycles. TDA techniques allow tracking these changes, identifying the moments when the features appear (i.e., their "birth") or disappear (i.e., their "death"), and associating a lifetime to them. The latter is encoded as a set of intervals called a "barcode", where each interval ("bar") lasts from the feature's "birth" to its "death". The barcode characterizes the persistent features of attention graphs and describes their stability.
52
+
53
+ Example. Let us illustrate the process of computing the attention graph filtration and barcodes given an Example (1).
54
+
55
+ ![](images/7d4e45d54922d6bbe051551aef86af489bee10a56d86fca3bf7c59c594199951.jpg)
56
+ (a) Attention map (left); Barcode (right); [L: 1; H: 11].
57
+
58
+ ![](images/26a6aba36770b9038951411dbafd6dda15aa105d87e1899325fd13b68d1fa6b4.jpg)
59
+
60
+ ![](images/c133d39a3ad7233a11142834ed2bf26315dda1c29a9539d4dc5a15b1fce5f53f.jpg)
61
+ (c) Attention graph filtration: [L: 1; H: 11].
62
+
63
+ ![](images/efeb1b684fbf35ca211d241a287b89bd1454fbde8a2f91b54899182a1084d19d.jpg)
64
+ (b) Attention map (left); Barcode (right); [L: 9; H: 9].
65
+
66
+ ![](images/ccf960a1997f7816ed92da8afa742b98a8914fd3295b22c50b964174281c638c.jpg)
67
+
68
+ ![](images/c5a5d85e30344c9bf20362a2ffe8ac892bbffda8543d155cd07d8641425e0bf1.jpg)
69
+
70
+ ![](images/85d2e3dd73542fa7f0d04c19433bec7cf4cf66a1064ad6f05a85aeaa48dc192c.jpg)
71
+ (d) Attention graph filtration: [L: 9; H: 9].
72
+
73
+ ![](images/9fd21f54e802caa7c0668fa1873ffa451b8ab60b311903e0404764ce188518ec.jpg)
74
+
75
+ ![](images/18ae712df214d5a9000793222e7bae9ff38b1f8eb6ac7cd60939c940ec4afd36.jpg)
76
+
77
+ ![](images/85e569500ad9b73650b6a15d44308778673350824255626ed4d5c587018652d8.jpg)
78
+
79
+ ![](images/88f2f8ef9b2ef8c4f86c9bfc644eae6d1c0d5bd09c755c775f43fbc7ad911655.jpg)
80
+ Figure 1: An example of attention maps, barcodes, and filtration procedure for the sentence "There is snowing today". Model=En-BERT-base (Devlin et al., 2019). Heads=[Layer 1; Head 11] and [Layer 9; Head 9].
81
+
82
+ ![](images/6ff514bf4c11096883f53b3be25efbbf3f762c633e99c73825710a37906a9a5b.jpg)
83
+
84
+ ![](images/d499f4f7aed02d58c60ff3c54c516757927da8f02274179ba2f4d265fa5694a2.jpg)
85
+
86
+ ![](images/3ae2286abcc7f8b1e5f459bda40ff3369f03698d9d1e11b178d66779db833c42.jpg)
87
+
88
+ ![](images/6553fa476409237d1d29f800506460e8ae0e20b1018332670a3aa9d3ba2a9450.jpg)
89
+
90
+ ![](images/b2535e469994c637d6bac9168e88da5cbcb49a0f566cf5bf5f6e229f34323dc1.jpg)
91
+
92
+ # (1) There is snowing today.
93
+
94
+ First, we compute attention maps for each Transformer head as shown in Figure 1a-1b (left). These two heads follow different attention patterns (Clark et al., 2019): attention to the next token (Figure 1a) and to the [SEP] token (Figure 1b). Next, we represent the map as a weighted graph, and conduct the filtration procedure for a fixed set of attention weight thresholds. The edges lower than each given threshold are discarded, which results in a set of six attention graphs with their maximum spanning trees (MSTs) becoming a chain (Figure 1c; $\tau = 0.1$ ), and a star (Figure 1d; $\tau = 0.5$ ). The families of attention graphs are used to compute persistent features ( $\S 3.2$ ).
95
+
96
+ Figure 1a-1b (right) depict barcodes for each family of graphs. The bars are sorted by length. The number of bars equals $|T| - 1$, where $|T|$ is the number of tokens in the input sentence. The bars in yellow correspond to the 0-dimensional features acquired from the edges of the MST. The bars in blue refer to 1-dimensional features, which stand for non-trivial simple cycles. Such a cycle appears in the first family (Figure 1c; $\tau = 0.05$), which is shown as a blue bar in Figure 1a. By contrast, there are no cycles in the second family (Figure 1d) and hence no such bars on the corresponding barcode.
97
+
98
+ # 3.2 Persistent Features of Attention Graphs
99
+
100
+ We follow Kushnareva et al. (2021) to design three groups of persistent features of the attention graph: (i) topological features, (ii) features derived from barcodes, and (iii) features based on distance to attention patterns. The features are computed on attention maps produced by a Transformer LM.
101
+
102
+ Topological Features. Topological features include the first two Betti numbers, $\beta_0$ and $\beta_{1}$, of the undirected graph, and standard properties of the directed graph, such as the number of strongly connected components, edges, and cycles. The features are calculated at pre-defined thresholds over the undirected and directed attention graphs from each head separately and are further concatenated.
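+
+ A minimal sketch of these per-threshold features, assuming a NumPy attention matrix `attn` for one head (the threshold and the exact feature set are illustrative):
+
```python
import numpy as np
import networkx as nx

def topological_features(attn: np.ndarray, tau: float) -> dict:
    sym = np.maximum(attn, attn.T)
    np.fill_diagonal(sym, 0.0)                                   # drop self-loops
    und = nx.from_numpy_array((sym >= tau).astype(int))          # undirected thresholded graph
    n, e = und.number_of_nodes(), und.number_of_edges()
    beta0 = nx.number_connected_components(und)                  # 0th Betti number
    beta1 = e - n + beta0                                        # 1st Betti number of a graph

    directed = attn.copy()
    np.fill_diagonal(directed, 0.0)
    dir_g = nx.from_numpy_array((directed >= tau).astype(int), create_using=nx.DiGraph)
    return {
        "beta0": beta0,
        "beta1": beta1,
        "edges": e,
        "strongly_connected": nx.number_strongly_connected_components(dir_g),
    }
```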
103
+
104
+ Features Derived from Barcodes. A barcode is a representation of the graph's persistent homology (Barannikov, 2021). We use the Ripser++ toolkit (Zhang et al., 2020) to compute $0/1$-dimensional barcodes for $A^{attn}$. Since Ripser++ operates on distance matrices, we transform $A^{attn}$ into $A' = 1 - \max(A^{attn}, (A^{attn})^{T})$.
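+
+ A sketch of this step, using the ripser.py package as a CPU stand-in for Ripser++ (the call signature below is ripser.py's, not Ripser++'s):
+
```python
import numpy as np
from ripser import ripser

def attention_barcodes(attn: np.ndarray):
    """Return the 0- and 1-dimensional barcodes of A' = 1 - max(A, A^T)."""
    dist = 1.0 - np.maximum(attn, attn.T)     # turn attention weights into distances
    np.fill_diagonal(dist, 0.0)
    dgms = ripser(dist, maxdim=1, distance_matrix=True)["dgms"]
    return dgms[0], dgms[1]                   # arrays of (birth, death) pairs
```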
105
+
106
+ Next, we compute descriptive characteristics of each barcode, such as the sum/average/variance of lengths of bars, the number of bars with the time of birth/death greater/lower than a threshold, and the
107
+
108
+ ![](images/d5ffc8a4414338df1ff39a6df39d2e8a923e940c1b7f2ac87de37cf6c15791b7.jpg)
109
+
110
+ ![](images/54513c8f9ab9916a0a84dd873b75fd671549e7702ae28a1fa908749caaba9fb3.jpg)
111
+
112
+ ![](images/b6a00cacc550447598a2039ae6d208fbf83c27c122328fefa1272be7ed36f2a1.jpg)
113
+ Figure 2: A graphical representation of RTD-barcodes. The top row shows the $A'$ matrices derived from the attention maps for the acceptable and unacceptable sentences. Edges present in both graphs $G_{a}^{\alpha_{i}}$ and $G_{b}^{\alpha_{i}}$ at a given threshold $\alpha_{i}$ are colored in grey. Edges present only in the graph $G_{b}^{\alpha_{i}}$ are colored in green.
114
+
115
+ ![](images/3e8d5bafc40aaa1642b333c72c17d612b883c060aae0fb1bfc57da7a66b5bcf5.jpg)
116
+
117
+ ![](images/bde4354dcd643d7f619b8586b28c2e6ebae0347e5c9507e73613e975f77226c7.jpg)
118
+
119
+ entropy of the barcodes. The sum of lengths of bars $(H_0S)$ corresponds to the 0-dimensional barcode and represents the sum of edge weights in the minimum spanning tree of $A'$. The average length of bars $(H_0M)$ corresponds to the mean edge weight in this tree, i.e., $H_0M = 1 -$ (the mean edge weight of the maximum spanning tree in $A^{attn}$).
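+
+ A sketch of these descriptive barcode features; `dgm` is a (birth, death) array as returned by the earlier sketch, and the threshold and entropy variant below are illustrative assumptions.
+
```python
import numpy as np

def barcode_features(dgm: np.ndarray, death_threshold: float = 0.5) -> dict:
    """Summary statistics of one barcode; assumes at least one finite bar."""
    finite = dgm[np.isfinite(dgm[:, 1])]
    lengths = finite[:, 1] - finite[:, 0]
    probs = lengths / lengths.sum()
    return {
        "sum_of_lengths": float(lengths.sum()),           # H0S for the 0-dimensional barcode
        "mean_length": float(lengths.mean()),             # H0M for the 0-dimensional barcode
        "var_length": float(lengths.var()),
        "n_long_lived": int((finite[:, 1] > death_threshold).sum()),
        "entropy": float(-(probs * np.log(probs + 1e-12)).sum()),
    }
```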
120
+
121
+ Features Based on Distance to Patterns. Attention graphs can be grouped into several typical patterns: attention to the previous/current/next token, attention to the [SEP]/[CLS] token, and attention to punctuation marks (Clark et al., 2019). We formalize the attention patterns as binary matrices and calculate distances to them as follows: we take the Frobenius norm of the difference between the matrices, normalized by the sum of their norms. The distances to the patterns are used as a feature vector.
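+
+ A sketch of the pattern-distance features; the helper below builds one illustrative binary pattern (attention to the next token), and the remaining patterns are assumed to be constructed analogously for the given sentence.
+
```python
import numpy as np

def next_token_pattern(n_tokens: int) -> np.ndarray:
    """Binary matrix for the 'attention to the next token' pattern."""
    p = np.zeros((n_tokens, n_tokens))
    p[np.arange(n_tokens - 1), np.arange(1, n_tokens)] = 1.0
    return p

def pattern_distances(attn: np.ndarray, patterns: dict) -> list:
    """Normalized Frobenius distance from the attention map to each binary pattern."""
    feats = []
    for name, p in patterns.items():
        num = np.linalg.norm(attn - p)                      # Frobenius norm of the difference
        den = np.linalg.norm(attn) + np.linalg.norm(p)      # sum of the two norms
        feats.append(num / den if den > 0 else 0.0)
    return feats
```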
122
+
123
+ Notations. We summarize the notations used throughout the paper:
124
+
125
+ - $H_{i}$ : $i$ -th Homology Group
126
+ - $\beta_{i}$ : Betti number, dimension of $H_{i}$
127
+ - $H_0 S$ : The sum of lengths of bars
128
+ - ${H}_{0}M$ : The average of lengths of bars
129
+ - PCA: Principal Component Analysis
130
+ - $PC^{\{i\}}$ : Subset $\{i\}$ of principal components
131
+ - MST: Maximum Spanning Tree
132
+ - RTD: Representation Topology Divergence
133
+
134
+ # 3.3 Representation Topology Divergence
135
+
136
+ Representation Topology Divergence (RTD; Barannikov et al., 2022) measures the topological dissimilarity between a pair of weighted graphs with one-to-one vertex correspondence. Figure 2 outlines the computation of RTD for the sentence pair in Example (2).
137
+
138
+ (2) a. Cheryl had trained Tara.
139
+ b. *Cheryl had murmured Tara.
140
+
141
+ First, we compute attention maps for the input sentences $S_{a}$ and $S_{b}$ with a Transformer LM and represent them as the weighted graphs $G_{a}$ and $G_{b}$. Next, we establish a one-to-one match between the vertices and build the filtrations $G_{a}^{\alpha_{i}}$ and $G_{b}^{\alpha_{i}}$ over thresholds $\alpha = 1 - \tau$ taken in ascending order. We then track the hierarchical formation of connected components in the graph $G_{a}^{\alpha_{i}} \cap G_{b}^{\alpha_{i}}$ while increasing $\alpha_{i}$. An $\mathrm{RTD}(G_{a}, G_{b})$ feature appears at a threshold $\alpha_{i}$ if an edge with the weight $\alpha_{i}$ in the graph $G_{b}$ joins two different connected components of the graph $G_{a}^{\alpha_{i}} \cap G_{b}^{\alpha_{i}}$. This feature disappears at the threshold $\alpha_{j}$ at which these two connected components become joined in the graph $G_{a}^{\alpha_{j}}$.
142
+
143
+ Example. We can identify the "birth" of the RTD feature at $\alpha = 0.47$ , when an edge appears in $G_{b}^{\alpha = 0.47}$ between the connected component "trained/murmured" and the connected component with four vertices, namely "[SEP]", "[CLS]", ".", and "Tara" (Figure 2; the appearing edge is colored in green). We observe its "death", when the edge becomes present in both attention graphs at $\alpha = 0.65$ (the corresponding edge changes its color to grey in the graph $G_{a}^{\alpha = 0.65} \cap G_{b}^{\alpha = 0.65}$ ). When comparing the graphs in this manner, we can associate a lifetime to the feature by computing the difference between the moments of its "death" (e.g., $\alpha_{j} = 0.65$ ) and "birth" (e.g., $\alpha_{i} = 0.47$ ). The lifetimes are illustrated as the orange bars $[\alpha_{i}, \alpha_{j}]$ in Figure 2. The resulting value of $\mathrm{RTD}(G_{a}, G_{b})$ is the sum of lifetimes $\alpha_{j} - \alpha_{i}$ over all such features. A formal description of RTD is provided in Appendix A.
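+
+ The birth/death tracking above can be sketched as follows for a single token pair. This is a didactic, brute-force illustration of the connected-component description (the full RTD is defined via $H_1$ barcodes in Appendix A); the token indices `i, j` and the threshold sweep are assumptions of the sketch.
+
```python
import numpy as np

def feature_lifetime(attn_a: np.ndarray, attn_b: np.ndarray, i: int, j: int):
    """Birth and death of the RTD-style feature created by the edge (i, j) of G_b, if any."""
    da = 1.0 - np.maximum(attn_a, attn_a.T)       # alpha = 1 - tau for G_a
    db = 1.0 - np.maximum(attn_b, attn_b.T)       # alpha = 1 - tau for G_b
    inter = np.maximum(da, db)                    # an edge joins the intersection graph at max(alpha_a, alpha_b)
    n = da.shape[0]

    def connected(alpha):                         # are i and j connected in the intersection graph at alpha?
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for a in range(n):
            for b in range(a + 1, n):
                if inter[a, b] <= alpha:
                    parent[find(a)] = find(b)
        return find(i) == find(j)

    birth = float(db[i, j])
    if connected(birth):                          # the edge does not join two different components: no feature
        return None
    death = min(alpha for alpha in np.unique(inter) if alpha >= birth and connected(alpha))
    return birth, float(death)                    # the bar [birth, death]
```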
144
+
145
+ # 4 Acceptability Classification
146
+
147
+ # 4.1 Data
148
+
149
+ We use three LA classification benchmarks in English (CoLA; Warstadt et al., 2019), Italian (ItaCoLA; Trotta et al., 2021), and Swedish (DaLAJ; Volodina et al., 2021). CoLA and ItaCoLA contain sentences from linguistic textbooks and cover morphological, syntactic, and semantic phenomena. The target labels are the original authors' acceptability judgments. DaLAJ includes L2-written sentences with morphological violations or incorrect word choices. The benchmark statistics are described in Table 1 (see Appendix C). We provide examples of acceptable and unacceptable sentences in English (3), Italian (4), and Swedish (5) from the original papers.
150
+
151
+ (3) a. What did Betsy paint a picture of?
152
+ b. *Maryann should leaving.
153
+ (4) a. Ho voglia di salutare Maria.
154
+
155
+ "I want to greet Maria."
156
+
157
+ b. *Questa donna mi hanno colpito.
158
+
159
+ "This woman have impressed me."
160
+
161
+ (5) a. Jag kände mig jättekonstig. "I felt very strange."
162
+ b. *Alla blir busiga med sociala medier.
163
+
164
+ "Everyone is busy with social media."
165
+
166
+ # 4.2 Models
167
+
168
+ We run the experiments on the following Transformer LMs: En-BERT-base (Devlin et al., 2019), It-BERT-base (Schweter, 2020), Sw-BERT-base (Malmsten et al., 2020), and XLM-R-base (Conneau et al., 2020). Each LM has two instances: frozen (a pre-trained model with frozen weights), and fine-tuned (a model fine-tuned for LA classification in the corresponding language).
169
+
170
+ Baselines. As baselines, we use the fine-tuned LMs and a linear layer trained over the pooler output of the frozen LMs.
171
+
172
+ Our Models. We train Logistic Regression classifiers over the persistent features computed with each model instance: (i) the average length of bars $(H_0M)$; (ii) the concatenation of all topological features, referred to as TDA (§3.2). Following Warstadt et al. (2019), we evaluate the performance with the accuracy score (Acc.) and the Matthews Correlation Coefficient (MCC; Matthews, 1975). The fine-tuning details are provided in Appendix B.
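+
+ A minimal sketch of this setup with scikit-learn; the random arrays below are placeholders for the per-sentence TDA feature matrices and binary acceptability labels.
+
```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, matthews_corrcoef
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(800, 64)), rng.integers(0, 2, 800)   # placeholder TDA features / labels
X_dev, y_dev = rng.normal(size=(200, 64)), rng.integers(0, 2, 200)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
pred = clf.predict(X_dev)
print(f"Acc. {accuracy_score(y_dev, pred):.3f}  MCC {matthews_corrcoef(y_dev, pred):.3f}")
```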
173
+
174
+ <table><tr><td rowspan="2">Model</td><td colspan="4">Frozen LMs</td><td colspan="4">Fine-tuned LMs</td></tr><tr><td>IDD / Dev Acc.</td><td>MCC</td><td>OODD / Test Acc.</td><td>MCC</td><td>IDD / Dev Acc.</td><td>MCC</td><td>OODD / Test Acc.</td><td>MCC</td></tr><tr><td colspan="9">CoLA</td></tr><tr><td>En-BERT</td><td>69.6</td><td>0.037</td><td>69.0</td><td>0.082</td><td>83.1</td><td>0.580</td><td>81.0</td><td>0.536</td></tr><tr><td>En-BERT + H0M</td><td>75.0</td><td>0.338</td><td>75.2</td><td>0.372</td><td>85.2</td><td>0.635</td><td>81.2</td><td>0.542</td></tr><tr><td>En-BERT + TDA</td><td>77.2</td><td>0.420</td><td>76.7</td><td>0.420</td><td>88.6</td><td>0.725</td><td>82.1</td><td>0.565</td></tr><tr><td>XLM-R</td><td>68.9</td><td>0.041</td><td>68.6</td><td>0.072</td><td>80.8</td><td>0.517</td><td>79.3</td><td>0.489</td></tr><tr><td>XLM-R + H0M</td><td>71.3</td><td>0.209</td><td>69.8</td><td>0.187</td><td>81.2</td><td>0.532</td><td>77.7</td><td>0.445</td></tr><tr><td>XLM-R + TDA</td><td>73.0</td><td>0.336</td><td>70.3</td><td>0.297</td><td>86.9</td><td>0.683</td><td>80.4</td><td>0.522</td></tr><tr><td colspan="9">ItaCoLA</td></tr><tr><td>It-BERT</td><td>81.1</td><td>0.032</td><td>82.1</td><td>0.140</td><td>87.4</td><td>0.351</td><td>86.8</td><td>0.382</td></tr><tr><td>It-BERT + H0M</td><td>85.0</td><td>0.124</td><td>83.6</td><td>0.055</td><td>87.0</td><td>0.361</td><td>85.7</td><td>0.370</td></tr><tr><td>It-BERT + TDA</td><td>89.2</td><td>0.478</td><td>85.8</td><td>0.352</td><td>91.1</td><td>0.597</td><td>86.4</td><td>0.424</td></tr><tr><td>XLM-R</td><td>85.4</td><td>0.000</td><td>84.2</td><td>0.000</td><td>85.7</td><td>0.397</td><td>85.6</td><td>0.434</td></tr><tr><td>XLM-R + H0M</td><td>84.7</td><td>0.095</td><td>83.3</td><td>0.072</td><td>86.9</td><td>0.370</td><td>86.6</td><td>0.397</td></tr><tr><td>XLM-R + TDA</td><td>88.3</td><td>0.411</td><td>84.4</td><td>0.208</td><td>92.8</td><td>0.683</td><td>86.1</td><td>0.398</td></tr><tr><td colspan="9">DaLAJ</td></tr><tr><td>Sw-BERT</td><td>58.3</td><td>0.183</td><td>59.0</td><td>0.188</td><td>71.9</td><td>0.462</td><td>74.2</td><td>0.500</td></tr><tr><td>Sw-BERT + H0M</td><td>69.3</td><td>0.387</td><td>58.4</td><td>0.169</td><td>76.9</td><td>0.542</td><td>68.7</td><td>0.375</td></tr><tr><td>Sw-BERT + TDA</td><td>62.1</td><td>0.243</td><td>64.4</td><td>0.289</td><td>71.8</td><td>0.442</td><td>73.4</td><td>0.478</td></tr><tr><td>XLM-R</td><td>52.2</td><td>0.069</td><td>51.5</td><td>0.038</td><td>60.6</td><td>0.243</td><td>62.8</td><td>0.297</td></tr><tr><td>XLM-R + H0M</td><td>61.1</td><td>0.224</td><td>61.8</td><td>0.237</td><td>62.5</td><td>0.256</td><td>64.5</td><td>0.295</td></tr><tr><td>XLM-R + TDA</td><td>51.1</td><td>0.227</td><td>54.1</td><td>0.218</td><td>62.5</td><td>0.255</td><td>65.5</td><td>0.322</td></tr></table>
175
+
176
+ Table 1: Acceptability classification results by benchmark. IDD = "in-domain dev" set (CoLA); OODD = "out-of-domain dev" set (CoLA); Dev/Test = dev and test sets in ItaCoLA and DaLAJ. The best score is in bold; the second best is underlined.
177
+
178
+ # 4.3 Results
179
+
180
+ Table 1 outlines the LA classification results. Our TDA classifiers generally outperform the baselines by up to 0.14 MCC for English, 0.24 MCC for Italian, and 0.08 MCC for Swedish. The $H_0M$ feature alone can enhance the performance for English and Italian by up to 0.1 MCC, and the concatenation of all features achieves the best scores. Comparing the results under the frozen and fine-tuned settings, we draw the following conclusions. The TDA features significantly improve the frozen baseline performance but require the LM to be fine-tuned to maximize the performance. However, the TDA/$H_0M$ classifiers perform on par with the fine-tuned baselines for Swedish. The results suggest that our features may fail to capture the lexical and word-derivation violations peculiar to the DaLAJ benchmark.
181
+
182
+ Effect of Freezing Layers. Another finding is that freezing the Transformer layers significantly affects acceptability classification. Most of the frozen baselines score less than 0.1 MCC across all languages. The results align with Lee et al. (2019), who discuss the performance degradation of BERT-based models depending upon the number of frozen layers. With all layers frozen, the model performance can fall to zero.
183
+
184
+ ![](images/f92b348f197eac788f9292214064f884ef143c851731812efcf1fa690946c957.jpg)
185
+ Figure 3: Performance (MCC) of the fine-tuned EnBERT and XLM-R by major linguistic feature. Average MCC scores are represented with dashed lines. The number of sentences including the feature is placed in square brackets.
186
+
187
+ Results by Linguistic Features. We run a diagnostic evaluation of the fine-tuned models using a grammatically annotated version of the CoLA development set (Warstadt and Bowman, 2019). Figure 3 (En-BERT and XLM-R; see also Figure 1 in Appendix C.1) presents the results of measuring MCC on the sentences that include the major features.
188
+
189
+ The overall pattern is that the TDA classifiers may outperform the fine-tuned baselines, while the $H_0M$ ones perform on par with the latter. The performance is high on sentences with default syntax (Simple) and marked argument structure, including prepositional phrase arguments (Arg. Type) and verb phrases with unusual structures (Arg. Altern). The TDA features capture surface properties, such as the presence of auxiliary or modal verbs (Auxiliary), and structural ones, e.g., embedded complement clauses (Comp Clause) and infinitive constructions (to-VP). The models receive moderate MCC scores on sentences with question-like properties (Question), adjuncts performing semantic functions (Adjunct), negative polarity items, and comparative constructions (Determiner).
190
+
191
+ Analysis of the Feature Space. The LA classification experiments are conducted in a sparse feature space, where the TDA features can strongly correlate with one another and their individual contribution is unclear. We run a complementary experiment to better understand how linguistic features are modeled with topology. We investigate the feature space with dimensionality reduction (principal component analysis, PCA; Pearson, 1901) by interpreting the components' structure, and we identify the importance of features to the classifier's predictions using
192
+
193
+ Shapley values (Shapley, 1953), a game-theoretic approach to the attribution problem (Sundararajan and Najmi, 2020). Appendix C.2 describes the experiment on the fine-tuned En-BERT + TDA model using the grammatically annotated CoLA development set.
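+
+ A sketch of this analysis, assuming the `shap` package for Shapley-value attribution; random placeholders stand in for the En-BERT + TDA feature matrix and labels.
+
```python
import numpy as np
import shap
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 64)), rng.integers(0, 2, 500)    # placeholder TDA features / labels

pca = PCA(n_components=10).fit(X)                             # structure of the feature space
print("explained variance:", pca.explained_variance_ratio_.round(3))

clf = LogisticRegression(max_iter=1000).fit(X, y)
explainer = shap.LinearExplainer(clf, X)                      # Shapley values for the linear classifier
importance = np.abs(explainer.shap_values(X)).mean(axis=0)    # mean |SHAP| per feature
print("most important features:", np.argsort(importance)[::-1][:5])
```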
194
+
195
+ The results show that $(i)$ features of the higher-layer heads, such as the average vertex degree, the number of connected components, edges, and cycles, and attention to the current token, contribute most to the major linguistic features. (ii) Attention to the [CLS]/next token is important to the Determiner, Arg. Type, Comp Clause, and to-VP properties, while attention to the first token and punctuation marks has the least effect in general. (iii) The number of nodes influences the classifier behavior, which is in line with Warstadt and Bowman, who discuss the effect of the sentence length on the performance.
196
+
197
+ # 5 Linguistic Minimal Pairs
198
+
199
+ # 5.1 Data
200
+
201
+ BLiMP (The Benchmark of Linguistic Minimal Pairs; Warstadt et al., 2020a) evaluates the sensitivity of LMs to acceptability contrasts in terms of a forced choice between minimal pairs, as in Example (6). The benchmark consists of 67 pair types, each including 1,000 pairs, covering 12 linguistic phenomena in morphology, syntax, and semantics.
202
+
203
+ (6) a. Whose hat should Tonya wear?
204
+ b. *Whose should Tonya wear hat?
205
+
206
+ # 5.2 Models
207
+
208
+ We conduct the experiments using two Transformer LMs for English: BERT-base and RoBERTa-base (Liu et al., 2019).
209
+
210
+ Baselines. We compare our methods with the results on BLIMP for human annotators and nine LMs (Warstadt et al., 2020a; Salazar et al., 2020). The baselines range from statistical N-gram LMs to Transformer LMs.
211
+
212
+ Our Models. Given a minimal pair as in Example (2), we build attention graphs $G_{a}$ and $G_{b}$ from each attention head of a frozen Transformer LM. We use the $H_{0}M$ feature (§3.2) and RTD (§3.3) as scoring functions to distinguish between the sentences $S_{a}$ and $S_{b}$. The scoring is based on empirically defined decision rules modeled after the forced-choice task (see the sketch after this list):
213
+
214
+ - $H_{0}M$ scoring: $S_{a}$ is acceptable if and only if $H_{0}M(G_{a}) < H_{0}M(G_{b})$ ; otherwise $S_{b}$ is acceptable.
215
+ - RTD scoring: $S_{a}$ is acceptable if and only if $\mathrm{RTD}(G_{a}, G_{b}) < \mathrm{RTD}(G_{b}, G_{a})$ ; otherwise $S_{b}$ is acceptable.
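+
+ A sketch of these decision rules, assuming the per-head scores (e.g., $H_0M(G_a)$ vs. $H_0M(G_b)$, or $\mathrm{RTD}(G_a, G_b)$ vs. $\mathrm{RTD}(G_b, G_a)$) have already been computed; the majority vote anticipates the head ensembles introduced below.
+
```python
from collections import Counter

def head_vote(score_a: float, score_b: float) -> str:
    """A single head prefers the sentence with the lower topological score."""
    return "a" if score_a < score_b else "b"

def ensemble_choice(scores_a, scores_b) -> str:
    """Majority vote over an (odd-sized) ensemble of attention heads."""
    votes = Counter(head_vote(sa, sb) for sa, sb in zip(scores_a, scores_b))
    return votes.most_common(1)[0][0]

# Example: two of three heads prefer sentence (a).
print(ensemble_choice([0.31, 0.55, 0.42], [0.40, 0.51, 0.47]))   # -> a
```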
216
+
217
+ We evaluate the scoring performance of each attention head, head ensembles, and all heads w.r.t. each and all linguistic phenomena in BLIMP. The following head configurations are used for each Transformer LM and scoring function:
218
+
219
+ - Phenomenon Head and Top Head are the best-performing attention heads for each phenomenon and for all phenomena, respectively. The heads are selected with a brute-force search and operate as independent scorers.
220
+ - Head Ensemble is a group of the best-performing attention heads selected with beam search. The size of the group is always odd. We collect majority vote scores from attention heads in the group.
221
+ - All Heads involves majority vote scoring with all 144 heads. We use random guessing in case of equality of votes. This setup serves as a proxy for the efficiency of the head selection.
222
+
223
+ Notes on Head Selection. Recall that the head selection procedure imposes the following limitation: auxiliary labeled minimal pairs are required to find the best-performing Phenomenon Heads, Top Heads, and Head Ensembles. However, this procedure is more efficient and beneficial than All Heads, since it maximizes the performance while utilizing only one or 9-to-59 heads. We also analyze the effect of the amount of auxiliary data used for the head selection on the scoring performance (§5.3). Appendix D.1 presents a more detailed description of the head selection procedure.
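+
+ A simplified greedy stand-in for this selection procedure (the paper uses brute-force and beam search); `head_scores[h][k]` is assumed to hold the (score_a, score_b) pair of head `h` on the k-th auxiliary minimal pair, and `labels[k]` in {"a", "b"} marks its acceptable sentence.
+
```python
def pair_accuracy(heads, head_scores, labels) -> float:
    """Fraction of auxiliary pairs on which the majority vote of `heads` is correct."""
    correct = 0
    for k, gold in enumerate(labels):
        votes = ["a" if head_scores[h][k][0] < head_scores[h][k][1] else "b" for h in heads]
        correct += (max(set(votes), key=votes.count) == gold)   # ties resolved arbitrarily here
    return correct / len(labels)

def greedy_ensemble(head_scores, labels, max_size=9) -> list:
    """Grow an ensemble one head at a time, keeping the addition that helps most."""
    ensemble = []
    while len(ensemble) < max_size:
        candidates = [h for h in head_scores if h not in ensemble]
        best = max(candidates, key=lambda h: pair_accuracy(ensemble + [h], head_scores, labels))
        ensemble.append(best)
    return ensemble
```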
224
+
225
+ # 5.3 Results
226
+
227
+ We provide the results of scoring BLiMP pairs in Table 2. The accuracy is the proportion of the minimal pairs in which the method prefers an acceptable sentence to an unacceptable one. We report the maximum accuracy scores for our methods across five experiment restarts. The general trends
228
+
229
+ are that the best head configuration performs on par with the human baseline and achieves the highest overall performance (RoBERTa-base + RTD; Head Ensemble). RoBERTa predominantly surpasses BERT and other baselines, and topological scoring may improve on scores from both BERT and RoBERTa for particular phenomena.
230
+
231
+ Top Head Results. We find that $H_0M / \mathrm{RTD}$ scoring with only one Top Head overall outperforms majority voting of 144 heads (All Heads) by up to $10.6\%$ and multiple baselines by up to $20.7\%$ (5-gram, LSTM, Transformer-XL, GPT2-large). However, this head configuration performs worse than masked LM scoring (Salazar et al., 2020) for BERT-base (by $8.8\%$ ; Top Head=[8; 0]) and RoBERTa-base (by $4.6\%$ ; Top Head=[11; 10]).
232
+
233
+ Phenomenon Head Results. We observe that the $H_0M$/RTD scoring performance of Phenomenon Heads differs insignificantly for the same model. Phenomenon Heads generally receive higher scores than the corresponding Top Heads for BERT/RoBERTa (e.g., Binding: +17.1/+5.8%; Quantifiers: +17.7/+5.0%; Det.-Noun agr: +13.0/+1.7%), and perform best or second best on Binding and Ellipsis. Their overall performance further improves by up to 6.4/6.0% and is comparable with that of Salazar et al. (2020). The results indicate that the heads encoding the considered phenomena are distributed at the same or nearby layers, namely [3; 6-9; 11] (BERT) and [2-3; 8-11] (RoBERTa).
234
+
235
+ Head Ensemble Results. Table 3 describes the best-performing Head Ensembles by Transformer LM. The heads selected under the $H_0M$ and RTD scoring functions largely coincide for the same LM. While the selected BERT heads are distributed across all layers, the RoBERTa ones tend to be localized at the middle-to-higher layers. Although RoBERTa utilizes smaller ensembles when delivering the best overall score, some heads contribute in both LMs, most notably at the higher layers.
236
+
237
+ Overall, the RoBERTa $H_0M$ /RTD ensembles get the best results on Filler gap, Quantifiers, Island effects, NPI, and S-V agr as shown in Table 2, matching the human level and surpassing four larger LMs on all phenomena by up to $7.4\%$ (GPT2-medium and GPT2/BERT/RoBERTa-large).
238
+
239
+ Effect of Auxiliary Data. Note that the head selection can be sensitive to the number of additional examples. The analysis of this effect is presented in
240
+
241
+ <table><tr><td>Model</td><td>Overall</td><td>Ana. agr</td><td>Arg. str</td><td>Binding</td><td>Ctrl./rais.</td><td>D-N agr</td><td>Ellipsis</td><td>Filler</td><td>Irreg.</td><td>Island</td><td>NPI</td><td>Quant.</td><td>S-V agr</td></tr><tr><td colspan="14">Warstadt et al. (2020a)</td></tr><tr><td>5-gram</td><td>61.2</td><td>47.9</td><td>71.9</td><td>64.4</td><td>68.5</td><td>70.0</td><td>36.9</td><td>60.2</td><td>79.5</td><td>57.2</td><td>45.5</td><td>53.5</td><td>60.3</td></tr><tr><td>LSTM</td><td>69.8</td><td>91.7</td><td>73.2</td><td>73.5</td><td>67.0</td><td>85.4</td><td>67.6</td><td>73.9</td><td>89.1</td><td>46.6</td><td>51.7</td><td>64.5</td><td>80.1</td></tr><tr><td>Transformer-XL</td><td>69.6</td><td>94.1</td><td>72.2</td><td>74.7</td><td>71.5</td><td>83.0</td><td>77.2</td><td>66.6</td><td>78.2</td><td>48.4</td><td>55.2</td><td>69.3</td><td>76.0</td></tr><tr><td>GPT2-large</td><td>81.5</td><td>99.6</td><td>78.3</td><td>80.1</td><td>80.5</td><td>93.3</td><td>86.6</td><td>81.3</td><td>84.1</td><td>70.6</td><td>78.9</td><td>71.3</td><td>89.0</td></tr><tr><td>Human Baseline</td><td>88.6</td><td>97.5</td><td>90.0</td><td>87.3</td><td>83.9</td><td>92.2</td><td>85.0</td><td>86.9</td><td>97.0</td><td>84.9</td><td>88.1</td><td>86.6</td><td>90.9</td></tr><tr><td colspan="14">Salazar et al. (2020)</td></tr><tr><td>GPT2-medium</td><td>82.6</td><td>99.4</td><td>83.4</td><td>77.8</td><td>83.0</td><td>96.3</td><td>86.3</td><td>81.3</td><td>94.9</td><td>71.7</td><td>74.7</td><td>74.1</td><td>88.3</td></tr><tr><td>BERT-base</td><td>84.2</td><td>97.0</td><td>80.0</td><td>82.3</td><td>79.6</td><td>97.6</td><td>89.4</td><td>83.1</td><td>96.5</td><td>73.6</td><td>84.7</td><td>71.2</td><td>92.4</td></tr><tr><td>BERT-large</td><td>84.8</td><td>97.2</td><td>80.7</td><td>82.0</td><td>82.7</td><td>97.6</td><td>86.4</td><td>84.3</td><td>92.8</td><td>77.0</td><td>83.4</td><td>72.8</td><td>91.9</td></tr><tr><td>RoBERTa-base</td><td>85.4</td><td>97.3</td><td>83.5</td><td>77.8</td><td>81.9</td><td>97.0</td><td>91.4</td><td>90.1</td><td>96.2</td><td>80.7</td><td>81.0</td><td>69.8</td><td>91.9</td></tr><tr><td>RoBERTa-large</td><td>86.5</td><td>97.8</td><td>84.6</td><td>79.1</td><td>84.1</td><td>96.8</td><td>90.8</td><td>88.9</td><td>96.8</td><td>83.4</td><td>85.5</td><td>70.2</td><td>91.4</td></tr><tr><td colspan="14">BERT-base + H0M</td></tr><tr><td>[Layer, Head]</td><td>x</td><td>[8; 8]</td><td>[8; 0]</td><td>[7; 0]</td><td>[8; 0]</td><td>[7; 0]</td><td>[7; 11]</td><td>[9; 7]</td><td>[6; 1]</td><td>[11; 7]</td><td>[8; 9]</td><td>[3; 7]</td><td>[8; 0]</td></tr><tr><td>Phenomenon Head</td><td>81.7</td><td>94.9</td><td>75.9</td><td>80.4</td><td>79.2</td><td>96.7</td><td>89.1</td><td>75.9</td><td>93.0</td><td>70.5</td><td>84.6</td><td>81.2</td><td>82.1</td></tr><tr><td>Top Head [8; 0]</td><td>75.4</td><td>86.8</td><td>75.9</td><td>63.4</td><td>79.2</td><td>83.7</td><td>72.2</td><td>67.3</td><td>90.3</td><td>70.0</td><td>83.1</td><td>63.5</td><td>82.1</td></tr><tr><td>Head Ensemble</td><td>84.3</td><td>93.3</td><td>79.9</td><td>83.5</td><td>78.6</td><td>96.4</td><td>78.4</td><td>79.5</td><td>93.8</td><td>74.4</td><td>92.5</td><td>81.7</td><td>86.8</td></tr><tr><td>All Heads</td><td>64.8</td><td>79.6</td><td>69.1</td><td>63.9</td><td>62.6</td><td>86.2</td><td>70.7</td><td>47.3</td><td>90.7</td><td>49.5</td><td>61.1</td><td>50.0</td><td>72.0</td></tr><tr><td colspan="14">BERT-base + RTD</td></tr><tr><td>[Layer, Head]</td><td>x</td><td>[8; 3]</td><td>[8; 0]</td><td>[7; 0]</td><td>[8; 0]</td><td>[7; 0]</td><td>[7; 
11]</td><td>[9; 7]</td><td>[6; 1]</td><td>[9; 7]</td><td>[8; 9]</td><td>[3; 7]</td><td>[8; 0]</td></tr><tr><td>Phenomenon Head</td><td>81.8</td><td>94.5</td><td>75.8</td><td>80.4</td><td>79.2</td><td>96.7</td><td>89.1</td><td>75.0</td><td>93.0</td><td>72.2</td><td>84.4</td><td>81.2</td><td>82.0</td></tr><tr><td>Top Head [8; 0]</td><td>75.4</td><td>86.8</td><td>75.8</td><td>63.3</td><td>79.2</td><td>83.6</td><td>72.1</td><td>67.3</td><td>90.2</td><td>70.2</td><td>83.1</td><td>63.6</td><td>82.0</td></tr><tr><td>Head Ensemble</td><td>85.8</td><td>93.9</td><td>82.5</td><td>85.6</td><td>77.0</td><td>96.3</td><td>88.1</td><td>80.7</td><td>95.7</td><td>77.0</td><td>92.5</td><td>83.8</td><td>88.9</td></tr><tr><td>All Heads</td><td>65.3</td><td>77.8</td><td>68.5</td><td>63.2</td><td>63.6</td><td>86.0</td><td>73.4</td><td>48.4</td><td>91.0</td><td>51.3</td><td>62.0</td><td>51.2</td><td>72.6</td></tr><tr><td colspan="14">RoBERTa-base + H0M</td></tr><tr><td>[Layer, Head]</td><td>x</td><td>[9; 5]</td><td>[9; 8]</td><td>[9; 5]</td><td>[9; 6]</td><td>[9; 0]</td><td>[8; 4]</td><td>[11; 10]</td><td>[9; 6]</td><td>[3; 5]</td><td>[11; 5]</td><td>[11; 3]</td><td>[8; 9]</td></tr><tr><td>Phenomenon Head</td><td>86.5</td><td>97.6</td><td>79.9</td><td>90.5</td><td>80.7</td><td>91.6</td><td>89.9</td><td>87.1</td><td>95.9</td><td>78.9</td><td>91.1</td><td>83.4</td><td>90.2</td></tr><tr><td>Top Head [11; 10]</td><td>81.9</td><td>90.1</td><td>66.0</td><td>84.7</td><td>71.7</td><td>91.0</td><td>86.7</td><td>87.1</td><td>89.5</td><td>76.8</td><td>90.7</td><td>78.4</td><td>85.0</td></tr><tr><td>Head Ensemble</td><td>87.8</td><td>96.3</td><td>79.6</td><td>87.6</td><td>82.6</td><td>93.6</td><td>84.9</td><td>90.4</td><td>94.3</td><td>83.0</td><td>94.6</td><td>80.6</td><td>92.8</td></tr><tr><td>All Heads</td><td>74.3</td><td>80.6</td><td>71.2</td><td>78.6</td><td>67.8</td><td>90.6</td><td>89.9</td><td>75.5</td><td>65.1</td><td>73.4</td><td>65.1</td><td>57.0</td><td>75.6</td></tr><tr><td colspan="14">RoBERTa-base + RTD</td></tr><tr><td>[Layer, Head]</td><td>x</td><td>[9; 5]</td><td>[9; 8]</td><td>[9; 5]</td><td>[9; 6]</td><td>[9; 0]</td><td>[8; 4]</td><td>[11; 10]</td><td>[2; 9]</td><td>[3; 5]</td><td>[10; 2]</td><td>[11; 3]</td><td>[9; 5]</td></tr><tr><td>Phenomenon Head</td><td>86.8</td><td>97.6</td><td>80.7</td><td>90.3</td><td>80.7</td><td>91.9</td><td>90.4</td><td>86.8</td><td>95.9</td><td>78.9</td><td>91.3</td><td>82.9</td><td>90.4</td></tr><tr><td>Top Head [11; 10]</td><td>80.8</td><td>88.8</td><td>63.3</td><td>83.7</td><td>69.8</td><td>90.2</td><td>88.8</td><td>86.8</td><td>89.1</td><td>76.5</td><td>89.0</td><td>78.4</td><td>83.1</td></tr><tr><td>Head Ensemble</td><td>88.9</td><td>97.0</td><td>83.3</td><td>86.9</td><td>81.6</td><td>93.7</td><td>88.1</td><td>88.8</td><td>95.5</td><td>83.0</td><td>94.2</td><td>84.2</td><td>92.7</td></tr><tr><td>All Heads</td><td>74.3</td><td>81.1</td><td>66.6</td><td>76.8</td><td>65.2</td><td>89.9</td><td>91.1</td><td>74.6</td><td>63.3</td><td>74.1</td><td>63.2</td><td>56.9</td><td>74.9</td></tr></table>
242
+
243
+ Table 2: Percentage accuracy of the baseline models, the human baseline, and our methods on BLiMP. Overall is the average across all phenomena. The best score is in bold; the second best is underlined.
244
+
245
+ ![](images/3aad58d55607cdf5ccf596f14b4ae52b4a30b33d296bc84c1e1f53914d9ee93c.jpg)
246
+ Table 3: Results of selecting the best-performing Head Ensembles with $H_{0}M$ /RTD-based scoring. $H_{0}M$ heads are colored in green; RTD heads are colored in yellow.
247
+
248
+ Appendix D.2. The results show that head ensembles, their size, and their average performance tend to be more stable when using sufficient examples (the more, the better); however, using only one extra example can already yield performance above $80\%$.
249
+
250
+ # 6 Discussion
251
+
252
+ Topology and Acceptability. The topological properties of the attention graph represent interpretable and versatile features for judging sentence
253
+
254
+ acceptability and identifying acceptability contrasts in minimal pairs. Among these properties, the sum of lengths of bars $(H_0S)$ and its normalized version $(H_0M)$ have proved to be efficient for both LA approaches. This simple feature can serve as an informative input for LA classifiers and as a scoring function for discriminating between minimal pairs. Figure 4 shows an example of the $H_0S$ sensitivity to CoLA's question-like properties, such as wh-movement out of syntactic islands and matrix and embedded questions. We provide more examples in Appendix E, which demonstrate the distribution shifts between the acceptable and unacceptable sentences.
255
+
256
+ Acceptability Phenomena. The underlying structure of the attention graph encodes various well-established grammatical concepts. We observe that the persistent graph features capture surface properties, morphological agreement, structural relationships, and simple/complex syntactic phenomena well. However, lexical items, optional syntactic elements, and abstract semantic factors may be difficult to infer with topology. Attention
257
+
258
+ ![](images/2670491df283568ff3f5576e26eeaea18909cb8cc58a1b8ac910e93a4552d2cb.jpg)
259
+ Figure 4: The distribution shift of the $H_0S$ feature between the acceptable and unacceptable sentences (Question); [L: 10; H: 0].
260
+
261
+ to the first token and punctuation marks contribute least to LA classification, while the other attention pattern features capture various phenomena.
262
+
263
+ Linguistic Roles of Heads. Topological tools help gain empirical evidence about the linguistic roles of heads from another perspective. Our findings on the heads' roles align with several related studies. The results on the CoLA-style and BLiMP benchmarks indicate that (i) a single head can perform multiple linguistic functions (Pande et al., 2021), (ii) some linguistic phenomena, e.g., phrasal movement and island effects, are better captured by head ensembles rather than one head (Htut et al., 2019), and (iii) heads within the same or nearby layers extract similar grammatical phenomena (Bian et al., 2021).
264
+
265
+ # 7 Conclusion and Future Work
266
+
267
+ Our paper studies the ability of attention heads to judge grammatical acceptability, demonstrating the profitable application of TDA tools to two LA paradigms. Topological features can boost LA classification performance in three typologically close languages. The $H_0M/RTD$ scoring matches or outperforms larger Transformer LMs and reaches human-level performance on BLiMP, while utilizing 9-to-59 attention heads. We also interpret the correspondence between the persistent features of the attention graph and grammatical concepts, revealing that the former efficiently infer morphological, structural, and syntactic phenomena but may lack lexical and semantic information.
268
+
269
+ In our future work we hope to assess the linguistic competence of Transformer LMs on related resources for typologically diverse languages and analyze which language-specific phenomena are and
270
+
271
+ are not captured by the topological features. We are also planning to examine novel features, e.g., the number of vertex covers, the graph clique-width, and the features of path homology (Grigor'yan et al., 2020). Another direction is to evaluate the benefits and limitations of the $H_0M$ /RTD features as scoring functions in downstream applications.
272
+
273
+ We also plan to introduce support for new deep learning frameworks such as MindSpore (Tong et al., 2021) to bring TDA-based experimentation to the wider industrial community.
274
+
275
+ # 8 Limitations
276
+
277
+ # 8.1 Computational complexity
278
+
279
+ Acceptability classification. Calculation of any topological feature relies on the Transformer's attention matrices. Hence, the computational complexity of our features is not lower than producing an attention matrix with one head, which is asymptotically $O(n^{2}d + nd^{2})$ given that $n$ is the maximum number of tokens, and $d$ is the token embedding dimension (Vaswani et al., 2017).
280
+
281
+ The pattern-based and threshold-based features are computed in linear time $O(e + n)$, where $e$ is the number of edges in the attention graph. In turn, the number of edges is not higher than $\frac{n(n - 1)}{2} \sim n^2$. The computation of the $0^{th}$ Betti number $\beta_0$ takes linear time $O(e + n)$, as $\beta_0$ is equal to the number of connected components in an undirected graph. The computation of the $1^{st}$ Betti number $\beta_1$ takes constant time, since $\beta_1 = e - n + \beta_0$. The computational complexity of the number of simple cycles and of the 1-dimensional barcode features is exponential in the worst case. To reduce the computational burden, we stop searching for simple cycles after a pre-defined number of them is found.
282
+
283
+ Note that the computational costs could be reduced, e.g., by identifying the features that contribute most or the best-performing heads. Consider the example in Figure 5, which illustrates how the CoLA performance changes depending on the number of En-BERT heads. Here, the head selection is based on a simple procedure. First, we score the attention heads by calculating the maximum correlation between each head's features and the vector of target classes on the train set. Second, we train a linear classifier over the TDA features produced by the $N$ attention heads ranked highest by the correlation values, as specified in §4.2. Satisfactory
284
+
285
+ ![](images/62ea434ebb69be0ff8914f0ba38133e1fef6651b81fd441464083e50fa5f22aa.jpg)
286
+ Figure 5: Performance on the CoLA development set depending on the number of heads for En-BERT + TDA.
287
+
288
+ MCC scores can be achieved when utilizing fewer than 40 heads, with a significant speed-up at the inference stage.
289
+
290
+ Linguistic Minimal Pairs. Computation of the $H_0M$ and RTD features is run via the Ripser++ GPU library. Under this library, the minimum spanning tree is found with Kruskal's algorithm, giving the computational complexity of $H_0M$ as $O(n^2\log n)$. The complexity can be reduced using other algorithms, e.g., Prim's algorithm, which takes $O(n^2)$. The RTD's computational complexity is more difficult to estimate: RTD is computed via persistence barcodes of dimension 1 for a specific graph with $2n$ vertices. Many optimization techniques and heuristics implemented in the Ripser++ library significantly reduce the RTD's complexity.
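+
+ For reference, the $H_0$ features can also be obtained directly from a minimum spanning tree without a full persistence computation; a sketch with SciPy follows (exact zero distances are clipped because SciPy's graph routines treat zeros as missing edges).
+
```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def h0_features(attn: np.ndarray) -> dict:
    dist = 1.0 - np.maximum(attn, attn.T)      # A' = 1 - max(A, A^T)
    dist = np.clip(dist, 1e-9, None)           # avoid exact zeros off the diagonal
    np.fill_diagonal(dist, 0.0)
    mst = minimum_spanning_tree(dist)          # sparse matrix holding the n - 1 MST edge weights
    lengths = mst.data                         # bar lengths of the 0-dimensional barcode
    return {"H0S": float(lengths.sum()), "H0M": float(lengths.mean())}
```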
291
+
292
+ Empirical estimate. Computing the $H_0M/\mathrm{RTD}$ features with 144 BERT heads in the worst case of a 512-token text takes 2.41 and 94.5 sec, respectively (NVIDIA Tesla K80, 12GB RAM). However, the actual computation time on the considered tasks is considerably lower in practice. We provide empirical estimates on the entire BLiMP and LA datasets: 2.4/15.7 hours on BLiMP ($H_0M$/RTD) and up to 2 hours on CoLA/ItaCoLA/DaLAJ (estimates by feature group: topological features = 24%, features derived from barcodes = 70%, and features based on distance to patterns = 6% of the total time).
293
+
294
+ # 8.2 Application Limitations
295
+
296
+ We also outline several application limitations of our approach. (i) The LA classifiers require preliminary fine-tuning of Transformer LMs to extract
297
+
298
+ more representative attention graph features and, therefore, achieve better performance. (ii) RTD operates upon a one-to-one vertex correspondence, which may be hindered by tokens segmented into an unequal number of sub-tokens. As a result, identifying the topological discrepancy between pairs of attention graphs can be restricted in practice, where the graphs have an arbitrary number of nodes. Regardless of the potential information loss due to sentence truncation in such cases, the RTD heads still receive the best overall score on BLiMP. (iii) The head selection procedure relies on auxiliary data to identify the best-performing head configurations. Annotating the auxiliary data may require additional resources and expertise for practical purposes. However, the procedure maximizes the performance and reduces the computational costs by utilizing fewer attention heads.
299
+
300
+ # 8.3 Linguistic Acceptability
301
+
302
+ Acceptability judgments have been broadly used to investigate whether LMs learn grammatical concepts central to human linguistic competence. However, this approach has several methodological limitations. (i) The judgments may display low reproducibility in multiple languages (Linzen and Oseki, 2018), and (ii) be influenced by an individual's exposure to ungrammatical language use (Dabrowska, 2010). (iii) Distribution shifts between LMs' pretraining corpora and LA datasets may introduce bias in the evaluation since LMs tend to assign higher probabilities to frequent patterns and treat them as acceptable in contrast to rare ones (Marvin and Linzen, 2018; Linzen and Baroni, 2021).
303
+
304
+ # 9 Ethical Statement
305
+
306
+ Advancing acceptability evaluation methods can improve the quality of natural language generation (Batra et al., 2021). We recognize that this, in turn, can increase the misuse potential of such models, e.g., generating fake product reviews, social media posts, and other targeted manipulation (Jawahar et al., 2020; Weidinger et al., 2021). However, the acceptability classifiers and scoring functions laid out in this paper are developed for research purposes only. Recall that the topological tools can be employed to develop adversarial defense and artificial text detection models for mitigating the risks (Kushnareva et al., 2021).
307
+
308
+ # Acknowledgements
309
+
310
+ This work was supported by the Ministry of Science and Higher Education grant No. 075-10-2021-068 and by the MindSpore community. Irina Proskurina was supported within the framework of the HSE University Basic Research Program.
311
+
312
+ # References
313
+
314
+ Serguei Barannikov. 1994. Framed Morse complexes and its invariants. Adv. Soviet Math., 21:93-115.
315
+ Serguei Barannikov. 2021. Canonical Forms = Persistence Diagrams. Tutorial. In European Workshop on Computational Geometry (EuroCG 2021).
316
+ Serguei Barannikov, Ilya Trofimov, Nikita Balabin, and Evgeny Burnaev. 2022. Representation Topology Divergence: A Method for Comparing Neural Network Representations. In Proceedings of the 39th International Conference on Machine Learning, volume 162, pages 1607-1626. PMLR.
317
+ Soumya Batra, Shashank Jain, Peyman Heidari, Ankit Arun, Catharine Youngs, Xintong Li, Pinar Donmez, Shawn Mei, Shiunzu Kuo, Vikas Bhardwaj, Anuj Kumar, and Michael White. 2021. Building adaptive acceptability classifiers for neural NLG. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 682-697, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
318
+ Yuchen Bian, Jiaji Huang, Xingyu Cai, Jiahong Yuan, and Kenneth Church. 2021. On attention redundancy: A comprehensive study. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 930-945, Online. Association for Computational Linguistics.
319
+ Franck Burlot and François Yvon. 2017. Evaluating the morphological competence of machine translation systems. In Proceedings of the Second Conference on Machine Translation, pages 43-55, Copenhagen, Denmark. Association for Computational Linguistics.
320
+ Gunnar Carlsson and Mikael Vejdemo-Johansson. 2021. Topological Data Analysis with Applications. Cambridge University Press.
321
+ Frédéric Chazal and Bertrand Michel. 2017. An Introduction to Topological Data Analysis: Fundamental and Practical Aspects for Data Scientists. arXiv preprint arXiv:1710.04019.
322
+ Colin Cherry and Chris Quirk. 2008. Discriminative, syntactic language modeling through latent SVMs. In Proceedings of the 8th Conference of the Association for Machine Translation in the Americas: Research Papers, pages 65-74, Waikiki, USA. Association for Machine Translation in the Americas.
323
+
324
+ Noam Chomsky. 1965. Aspects of the Theory of Syntax. MIT Press.
325
+ Alexander Clark, Gianluca Giorgolo, and Shalom Lappin. 2013. Statistical representation of grammaticality judgements: the limits of n-gram models. In Proceedings of the Fourth Annual Workshop on Cognitive Modeling and Computational Linguistics (CMCL), pages 28-36, Sofia, Bulgaria. Association for Computational Linguistics.
326
+ Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286, Florence, Italy. Association for Computational Linguistics.
327
+ Pierre Colombo, Guillaume Staerman, Chloe Clavel, and Pablo Piantanida. 2021. Automatic text evaluation through the lens of Wasserstein barycenters. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10450-10466, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
328
+ Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
329
+ Ewa Dabrowska. 2010. Naive v. expert intuitions: An empirical study of acceptability judgments. The Linguistic Review.
330
+ Shouman Das, Syed A Haque, Md Tanveer, et al. 2021. Persistence Homology of TEDtalk: Do Sentence Embeddings Have a Topological Shape? arXiv preprint arXiv:2103.14131.
331
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
332
+ Pratik Doshi and Wlodek Zadrozny. 2018. Movie Genre Detection Using Topological Data Analysis. In International Conference on Statistical Language and Speech Processing. Springer, Cham.
333
+ Jimmy Dubuisson, Jean-Pierre Eckmann, Christian Scheible, and Hinrich Schütze. 2013. The topology of semantic knowledge. In Proceedings of the 2013 Conference on Empirical Methods in Natural
334
+
335
+ Language Processing, pages 669-680, Seattle, Washington, USA. Association for Computational Linguistics.
336
+ Herbert Edelsbrunner and John Harer. 2010. Computational Topology - an Introduction. American Mathematical Society.
337
+ Jeffrey L Elman. 1990. Finding Structure in Time. Cognitive science, 14(2):179-211.
338
+ AA Grigor'yan, Yong Lin, Yu V Muranov, and Shing-Tung Yau. 2020. Path Complexes and Their Homologies. Journal of Mathematical Sciences, 248(5):564-599.
339
+ Michael Heilman, Aoife Cahill, Nitin Madnani, Melissa Lopez, Matthew Mulholland, and Joel Tetreault. 2014. Predicting grammaticality on an ordinal scale. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 174-180, Baltimore, Maryland. Association for Computational Linguistics.
340
+ Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural computation, 9(8):1735-1780.
341
+ Phu Mon Htut, Jason Phang, Shikha Bordia, and Samuel R Bowman. 2019. Do attention heads in BERT track syntactic dependencies? arXiv preprint arXiv:1911.12246.
342
+ Alexander Jakubowski, Milica Gasic, and Marcus Zibrowius. 2020. Topology of word embeddings: Singularities reflect polysemy. In Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics, pages 103-113, Barcelona, Spain (Online). Association for Computational Linguistics.
343
+ Ganesh Jawahar, Muhammad Abdul-Mageed, and Laks Lakshmanan, V.S. 2020. Automatic detection of machine generated text: A critical survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2296-2309, Barcelona, Spain (Online). International Committee on Computational Linguistics.
344
+ Jae-young Jo and Sung-Hyon Myaeng. 2020. Roles and utilization of attention heads in transformer-based neural language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3404-3417, Online. Association for Computational Linguistics.
345
+ Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4365-4374, Hong Kong, China. Association for Computational Linguistics.
346
+
347
+ Laida Kushnareva, Daniil Cherniavskii, Vladislav Mikhailov, Ekaterina Artemova, Serguei Barannikov, Alexander Bernstein, Irina Piontkovskaya, Dmitri Piontkovski, and Evgeny Burnaev. 2021. Artificial text detection via examining the topology of attention maps. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 635-649, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
348
+ Jey Han Lau, Alexander Clark, and Shalom Lappin. 2017. Grammaticality, Acceptability, and Probability: a Probabilistic View of Linguistic Knowledge. Cognitive science, 41(5):1202-1241.
349
+ Jaejun Lee, Raphael Tang, and Jimmy Lin. 2019. What Would Elsa Do? Freezing Layers During Transformer Fine-tuning. arXiv preprint arXiv:1911.03090.
350
+ David Levary, Jean-Pierre Eckmann, Elisha Moses, and Tsvi Tlusty. 2012. Loops and Self-reference in the Construction of Dictionaries. Physical Review X, 2(3):031018.
351
+ Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241-253, Florence, Italy. Association for Computational Linguistics.
352
+ Tal Linzen and Marco Baroni. 2021. Syntactic Structure from Deep Learning. Annual Review of Linguistics, 7:195-212.
353
+ Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the Ability of LSTMs to Learn Syntax-sensitive Dependencies. Transactions of the Association for Computational Linguistics, 4:521-535.
354
+ Tal Linzen and Yohei Oseki. 2018. The reliability of acceptability judgments across languages. Glossa: a journal of general linguistics, 3(1).
355
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pre-training Approach. arXiv preprint arXiv:1907.11692.
356
+ Ziyang Luo. 2021. Have attention heads in BERT learned constituency grammar? In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 8-15, Online. Association for Computational Linguistics.
357
+ Martin Malmsten, Love Börjeson, and Chris Haffenden. 2020. Playing with Words at the National Library of Sweden – Making a Swedish BERT.
358
+ Christopher D Manning, Kevin Clark, John Hewitt, Urvashi Khandelwal, and Omer Levy. 2020. Emergent Linguistic Structure in Artificial Neural Networks
359
+
360
+ Trained by Self-supervision. Proceedings of the National Academy of Sciences, 117(48):30046-30054.
361
+ Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Brussels, Belgium. Association for Computational Linguistics.
362
+ Brian W. Matthews. 1975. Comparison of the Predicted and Observed Secondary Structure of T4 Phage Lysozyme. Biochimica et biophysica acta, 405 2:442-51.
363
+ Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, and Ian Tenney. 2020. What happens to BERT embeddings during fine-tuning? In Proceedings of the Third Blackbox NLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 33-44, Online. Association for Computational Linguistics.
364
+ Alessio Miaschi, Dominique Brunato, Felice Dell'Orletta, and Giulia Venturi. 2020. Linguistic profiling of a neural language model. In Proceedings of the 28th International Conference on Computational Linguistics, pages 745-756, Barcelona, Spain (Online). International Committee on Computational Linguistics.
365
+ Aaron Mueller, Garrett Nicolai, Panayiotia Petrou-Zeniou, Natalia Talmina, and Tal Linzen. 2020. Cross-linguistic syntactic evaluation of word prediction models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5523-5539, Online. Association for Computational Linguistics.
366
+ Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-pairs: A challenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1953-1967, Online. Association for Computational Linguistics.
367
+ Madhura Pande, Aakriti Budhraja, Preksha Nema, Pratyush Kumar, and Mitesh M Khapra. 2021. The Heads Hypothesis: a Unifying Statistical Approach Towards Understanding Multi-Headed Attention in BERT. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13613-13621.
368
+ Joe Pater. 2019. Generative Linguistics and Neural Networks at 60: Foundation, Friction, and Fusion. Language, 95(1):e41-e74.
369
+ Adam Pauls and Dan Klein. 2012. Large-scale syntactic language modeling with treelets. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 959-968, Jeju Island, Korea. Association for Computational Linguistics.
370
+
371
+ Karl Pearson. 1901. LIII. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin philosophical magazine and journal of science, 2(11):559-572.
372
+ Matt Post. 2011. Judging grammaticality with tree substitution grammar derivations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 217-222, Portland, Oregon, USA. Association for Computational Linguistics.
373
+ Vinit Ravishankar, Artur Kulmizev, Mostafa Abdou, Anders Søgaard, and Joakim Nivre. 2021. Attention can reflect syntactic structure (if you let it). In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3031-3045, Online. Association for Computational Linguistics.
374
+ Julian Salazar, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. 2020. Masked language model scoring. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2699-2712, Online. Association for Computational Linguistics.
375
+ Ketki Savle, Wlodek Zadrozny, and Minwoo Lee. 2019. Topological data analysis for discourse semantics? In Proceedings of the 13th International Conference on Computational Semantics - Student Papers, pages 34-43, Gothenburg, Sweden. Association for Computational Linguistics.
376
+ Barbara C. Scholz, Francis Jeffry Pelletier, and Geoffrey K. Pullum. 2021. Philosophy of Linguistics. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy, Fall 2021 edition. Metaphysics Research Lab, Stanford University.
377
+ Carson T. Schütze. 1996. The Empirical Base of Linguistics: Grammaticality Judgments and Linguistic Methodology. University of Chicago Press.
378
+ Stefan Schweter. 2020. Italian BERT and ELECTRA models.
379
+ Rico Sennrich. 2017. How grammatical is character-level neural machine translation? assessing MT quality with contrastive translation pairs. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 376-382, Valencia, Spain. Association for Computational Linguistics.
380
+ Lloyd S Shapley. 1953. A value for $n$ -person games.
381
+ Jon Sprouse. 2018. Acceptability Judgments and Grammaticality, Prospects and Challenges. Syntactic Structures after, 60:195-224.
382
+ Mukund Sundararajan and Amir Najmi. 2020. The Many Shapley Values for Model Explanation. In International Conference on Machine Learning, pages 9269-9278. PMLR.
383
+
384
+ Zhihao Tong, Ning Du, Xiaobo Song, and Xiaoli Wang. 2021. Study on mindspore deep learning framework. In 2021 17th International Conference on Computational Intelligence and Security (CIS), pages 183-186.
385
+ Daniela Trotta, Raffaele Guaracci, Elisa Leonardelli, and Sara Tonelli. 2021. Monolingual and Cross-Lingual Acceptability Judgments with the Italian CoLA corpus. CoRR, abs/2109.12053.
386
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you Need. Advances in Neural Information Processing Systems, 30.
387
+ Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5797-5808, Florence, Italy. Association for Computational Linguistics.
388
+ Elena Volodina, Yousuf Ali Mohammed, and Julia Klezl. 2021. DaLAJ - a dataset for linguistic acceptability judgments for Swedish. In Proceedings of the 10th Workshop on NLP for Computer Assisted Language Learning, pages 28-37, Online. LiU Electronic Press.
389
+ Joachim Wagner, Jennifer Foster, and Josef van Genabith. 2009. Judging Grammaticality: Experiments in Sentence Classification. Calico Journal, 26(3):474-490.
390
+ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.
391
+ Alex Warstadt and Samuel R Bowman. 2019. Linguistic Analysis of Pre-trained Sentence Encoders with Acceptability Judgments. arXiv preprint arXiv:1901.03438.
392
+ Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2020a. BLiMP: The benchmark of linguistic minimal pairs for English. Transactions of the Association for Computational Linguistics, 8:377-392.
393
+ Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.
394
+ Alex Warstadt, Yian Zhang, Xiaocheng Li, Haokun Liu, and Samuel R. Bowman. 2020b. Learning which features matter: RoBERTa acquires a preference for
395
+
396
+ linguistic generalizations (eventually). In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 217-235, Online. Association for Computational Linguistics.
397
+ Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, et al. 2021. Ethical and Social Risks of Harm from Language Models. arXiv preprint arXiv:2112.04359.
398
+ Matthew E Werenski, Ruijie Jiang, Abiy Tasissa, Shuchin Aeron, and James M Murphy. 2022. Measure estimation in the barycentric coding model. In International Conference on Machine Learning, pages 23781-23803. PMLR.
399
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
400
+ Zhiyong Wu, Yun Chen, Ben Kao, and Qun Liu. 2020. Perturbed masking: Parameter-free probing for analyzing and interpreting BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4166-4176, Online. Association for Computational Linguistics.
401
+ Fan Yin, Quanyu Long, Tao Meng, and Kai-Wei Chang. 2020. On the robustness of language encoders against grammatical errors. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3386-3403, Online. Association for Computational Linguistics.
402
+ Simon Zhang, Mengbai Xiao, and Hao Wang. 2020. GPU-accelerated computation of Vietoris-Rips persistence barcodes. In 36th International Symposium on Computational Geometry (SoCG 2020). Schloss Dagstuhl-Leibniz-Zentrum für Informatik.
403
+ Yian Zhang, Alex Warstadt, Xiaocheng Li, and Samuel R. Bowman. 2021. When do you need billions of words of pretraining data? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1112-1125, Online. Association for Computational Linguistics.
404
+ Junru Zhou and Hai Zhao. 2019. Head-Driven Phrase Structure Grammar parsing on Penn Treebank. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2396–2408, Florence, Italy. Association for Computational Linguistics.
405
+
406
+ Afra J Zomorodian. 2001. Computing and comprehending topology: Persistence and hierarchical Morse complexes (Ph.D. Thesis). University of Illinois at Urbana-Champaign.
407
+
408
+ # A Representation Topology Divergence
409
+
410
+ Suppose we have two weighted full graphs $G_{a}, G_{b}$ with one-to-one vertex correspondence. Define their vertices as $\{a_{1}, a_{2}, \ldots, a_{n}\}$ and $\{b_{1}, b_{2}, \ldots, b_{n}\}$ respectively so that $a_{i}$ corresponds to $b_{i}$ for each $i$ . RTD $(G_{a}, G_{b})$ is calculated as follows:
411
+
412
+ 1. Build a full weighted graph $G_{ab}$ with the vertex set $V = \{v_{1}, v_{2}, \ldots, v_{n}, u_{1}, u_{2}, \ldots, u_{n}\}$ and the edge weights computed as
413
+
414
+ $$
+ \begin{cases} w(v_i, v_j) = 0 \\ w(v_i, u_i) = 0 \\ w(u_i, u_j) = w_b(b_i, b_j) \\ w(v_i, u_j) = \max\left(w_a(a_i, a_j), w_b(b_i, b_j)\right) \end{cases}
+ $$
417
+
418
+ where $w_{a}$ and $w_{b}$ are the edge weights in the corresponding graphs.
419
+
420
+ 2. Compute the barcode (Barannikov, 2021) of the $H_{1}$ homology group of the flag complex of $G_{ab}$ . Note that the $H_{0}$ barcode of this graph is empty, since the minimum spanning tree of $G_{ab}$ has a total weight of 0. Higher-order homology groups (e.g., $H_{2}, H_{3}$ ) could be used instead of $H_{1}$ , but our preliminary experiments showed that they are less helpful for LA tasks.
421
+ 3. $\mathrm{RTD}(G_a, G_b)$ is calculated as the sum of bar lengths in the barcode from the previous step.
422
+
423
+ It should be noted that this procedure is not symmetric in $G_{a}$ and $G_{b}$ : for non-equal graphs, $\mathrm{RTD}(G_{a}, G_{b}) \neq \mathrm{RTD}(G_{b}, G_{a})$ in general. To compute barcodes, we use the Ripser++ toolkit, which cannot handle asymmetric graphs. Hence, we represent the asymmetric attention maps as distance matrices to obtain the symmetric graphs $G_{a}$ and $G_{b}$ , as described in §3.2. We consider only the forward-looking part of attention, i.e., how each token affects the rest of the sentence.
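For illustration, the following is a minimal sketch of steps 1-3 above, using the `ripser.py` package in place of Ripser++. The matrices `W_a` and `W_b` stand for the symmetric edge-weight matrices of $G_a$ and $G_b$ ; the function name and the example usage are assumptions made for this sketch, not the released implementation.

```python
import numpy as np
from ripser import ripser


def rtd(W_a: np.ndarray, W_b: np.ndarray) -> float:
    """RTD(G_a, G_b) for two symmetric weight matrices with aligned vertices,
    following steps 1-3: build the auxiliary graph G_ab, take its H1 barcode,
    and sum the bar lengths."""
    n = W_a.shape[0]
    D = np.zeros((2 * n, 2 * n))
    # (u_i, u_j) block: edge weights of G_b
    D[n:, n:] = W_b
    # (v_i, u_j) block: element-wise max of the two graphs, 0 on the diagonal
    cross = np.maximum(W_a, W_b)
    np.fill_diagonal(cross, 0.0)
    D[:n, n:] = cross
    D[n:, :n] = cross.T
    # (v_i, v_j) block stays 0, so the H0 barcode of G_ab is empty
    dgm_h1 = ripser(D, distance_matrix=True, maxdim=1)["dgms"][1]
    finite = dgm_h1[np.isfinite(dgm_h1).all(axis=1)]
    return float((finite[:, 1] - finite[:, 0]).sum())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random((8, 8)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
    B = rng.random((8, 8)); B = (B + B.T) / 2; np.fill_diagonal(B, 0)
    print(rtd(A, B), rtd(B, A))  # not equal in general, as noted above
```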
424
+
425
+ The majority of the BLiMP minimal pairs are of equal length in BERT/RoBERTa tokens. Otherwise, we truncate the longer sentence to achieve an equal length, since the one-to-one correspondence between tokens is crucial for RTD. We assume that the truncation may remove tokens that help to discriminate between the acceptable and unacceptable sentences. We leave improvement of the pre-processing stage for future work.
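A minimal sketch of this alignment step is given below, assuming a BERT tokenizer from the `transformers` library; the checkpoint name and the helper function are illustrative, not the exact pre-processing code.

```python
from transformers import AutoTokenizer

# assumed checkpoint; the paper uses BERT/RoBERTa-style tokenizers
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")


def align_pair(acceptable: str, unacceptable: str):
    """Tokenize a minimal pair and truncate the longer token sequence so that
    both sides have equal length (one-to-one token correspondence for RTD)."""
    tok_a = tokenizer.tokenize(acceptable)
    tok_b = tokenizer.tokenize(unacceptable)
    length = min(len(tok_a), len(tok_b))
    return tok_a[:length], tok_b[:length]
```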
426
+
427
+ # B Fine-tuning Details
428
+
429
+ Fine-tuning and evaluation of the BERT-based/XLM-R acceptability classifiers follow the standard procedure of the HuggingFace library (Wolf et al., 2020). Each model is fine-tuned for 4 epochs with a learning rate of $1e^{-2} / 1e^{-3}$ , a batch size of 32, and the default values for the other hyperparameters.
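As a reference point, the following is a minimal sketch of this recipe with the HuggingFace `Trainer` on CoLA; the checkpoint name, the dataset loading via `datasets`, and the specific learning-rate value are assumptions for illustration rather than the exact training script.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "bert-base-cased"  # assumed; any BERT-based/XLM-R model applies
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

cola = load_dataset("glue", "cola")
cola = cola.map(lambda ex: tokenizer(ex["sentence"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="acceptability-clf",
    num_train_epochs=4,              # as stated above
    per_device_train_batch_size=32,  # batch size of 32
    learning_rate=1e-3,              # the text reports 1e-2 / 1e-3
)
Trainer(model=model, args=args,
        train_dataset=cola["train"], eval_dataset=cola["validation"],
        tokenizer=tokenizer).train()
```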
430
+
431
+ # C Acceptability Classification
432
+
433
+ <table><tr><td></td><td>CoLA</td><td>ItaCoLA</td><td>DaLAJ</td></tr><tr><td>Language</td><td>English</td><td>Italian</td><td>Swedish</td></tr><tr><td># Train sent.</td><td>8,551</td><td>7,801</td><td>6,870</td></tr><tr><td># Dev sent.</td><td>1,043</td><td>946</td><td>892</td></tr><tr><td># Test sent.</td><td>1,063</td><td>975</td><td>952</td></tr><tr><td>Type</td><td>Expert</td><td>Expert</td><td>L2</td></tr><tr><td># Sources</td><td>23</td><td>12</td><td>SweLL</td></tr><tr><td>Phenomena</td><td>Morph, Syntax, Semantics</td><td>Syntax</td><td>Lexis, Morph</td></tr><tr><td>%</td><td>70.5</td><td>84.5</td><td>50.0</td></tr></table>
434
+
435
+ Table 1: Statistics of acceptability classification benchmarks. Type=Type of data source. %=Percentage of acceptable sentences. Morph=Morphology.
436
+
437
+ # C.1 Results by Linguistic Features
438
+
439
+ ![](images/93ebe54564ea5e3129d45cce0c96e10bf1155a5c7b2a9744440171a9a9cbc078.jpg)
440
+ Figure 1: Performance (MCC) of the fine-tuned XLM-R by major linguistic feature. Average MCC scores are represented with dashed lines. The number of sentences including the feature is placed in square brackets.
441
+
442
+ # C.2 Analysis of the Feature Space
443
+
444
+ We analyze the contribution of the topological features to acceptability classification in the context of linguistic phenomena. We interpret the principal components computed on the fine-tuned En-BERT + TDA features and identify their importance w.r.t. each head/layer with Shapley values.
445
+
446
+ <table><tr><td rowspan="2">Model</td><td colspan="2">IDD</td><td colspan="2">OODD</td></tr><tr><td>Acc.</td><td>MCC</td><td>Acc.</td><td>MCC</td></tr><tr><td>En-BERT + TDA</td><td>88.6</td><td>0.725</td><td>82.1</td><td>0.565</td></tr><tr><td>En-BERT + TDA + PCA</td><td>84.8</td><td>0.632</td><td>81.8</td><td>0.558</td></tr><tr><td>En-BERT + TDA + PC1</td><td>84.1</td><td>0.609</td><td>79.1</td><td>0.482</td></tr><tr><td>En-BERT + TDA + PC2</td><td>84.3</td><td>0.615</td><td>81.2</td><td>0.541</td></tr></table>
447
+
448
+ Table 2: Acceptability classification results with PCA on CoLA. $\mathbf{IDD} =$ "in domain dev" set. $\mathbf{OODD} =$ "out of domain dev" set. Components Set $^1$ : {1}, Components Set $^2$ : {1, 7, 9, 24, 12, 0}.
449
+
450
+ Method. The pipeline combines feature standardization, PCA, and a logistic regression classifier. We conduct a grid search over two pipeline parameters: (i) the number of components $N_{comp} \in [10,20,\dots,100]$ (optimum found: $N_{comp} = 100$ ) and (ii) the regularization parameter of the logistic regression $L_{1} \in [0.01,0.02,\dots,0.1]$ (optimum found: $L_{1} = 0.1$ ). The parameter search is run over 3 stratified folds, in which the CoLA train set is randomly split into train/development sets. The classifier performance is evaluated on the grammatically annotated CoLA development set. We also explore masking principal components, i.e., training the classifier using only the most important components while zeroing the weights of the others.
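A minimal sketch of this pipeline and grid search with scikit-learn is shown below. The feature arrays are random placeholders for the TDA features, and mapping the $L_1$ grid onto the `C` parameter of `LogisticRegression` is an assumption made for the sketch.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = rng.standard_normal((200, 300))  # placeholder for the TDA features
y_train = rng.integers(0, 2, 200)          # placeholder acceptability labels

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA()),
    ("clf", LogisticRegression(penalty="l1", solver="liblinear", max_iter=1000)),
])
param_grid = {
    "pca__n_components": list(range(10, 101, 10)),  # 10, 20, ..., 100
    "clf__C": list(np.arange(0.01, 0.11, 0.01)),    # assumed mapping of the L1 grid
}
search = GridSearchCV(
    pipeline, param_grid,
    cv=StratifiedKFold(n_splits=3, shuffle=True, random_state=0),
    scoring="matthews_corrcoef",
)
search.fit(X_train, y_train)
print(search.best_params_)
```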
451
+
452
+ Table 2 shows results for the full pipeline (En-BERT + TDA + PCA) and masked pipelines (En-BERT + TDA + PC $^1$ /PC $^2$ ). Since the performance is comparable with the En-BERT + TDA classifier in §4, we rely on the PCA decomposition for the feature analysis and interpretation.
453
+
454
+ Results. The following six principal components (PCs) contribute most to acceptability classification according to the mean absolute Shapley values $\phi$ (see Figure 2). Figure 3 shows the Shapley values for these PCs by the major linguistic feature.
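Continuing the sketch above, the per-component importance can be estimated as the mean absolute Shapley value of each principal component, e.g. with the `shap` package. Here `search` refers to the fitted pipeline from the previous snippet and `X_dev` is a random placeholder for the annotated dev-set features; both are assumptions for illustration.

```python
import numpy as np
import shap

rng = np.random.default_rng(1)
X_dev = rng.standard_normal((150, 300))  # placeholder for annotated dev features

pipe = search.best_estimator_            # fitted pipeline from the previous sketch
X_pc = pipe[:-1].transform(X_dev)        # standardize + project onto the PCs
explainer = shap.LinearExplainer(pipe[-1], X_pc)
phi = explainer.shap_values(X_pc)        # shape: (n_samples, n_components)
importance = np.abs(phi).mean(axis=0)    # mean |phi| per principal component
top_pcs = np.argsort(importance)[::-1][:6]
print(top_pcs, importance[top_pcs])
```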
455
+
456
+ PC1 ( $\phi = 3.179$ ) has the most impact on the classifiers' performance. PC1 primarily contains simple topological features (the average vertex degree, the number of edges, and the number of connected components) from the heads at the last layer, which is affected most by the fine-tuning.
457
+
458
+ PC7 $(\phi = 0.601)$ includes the same heads as PC1, but its features utilize the number of cycles in the attention graph.
459
+
460
+ PC9 ( $\phi = 0.442$ ) groups all attention patterns except for attention to commas for heads at the lower and middle layers. The component contributes to all phenomena.
461
+
462
+ ![](images/59ad09d3b834843e1d557fdc1a9bfe2f3f546090b9578825babf9f198c6aad31.jpg)
463
+ Figure 2: Importance of the PCs for judging sentence acceptability. Shapley values $\phi$ reflect the PCs' impact on the classifier output.
464
+
465
+ PC24 $(\phi = 0.296)$ is responsible for topological features at the first and last layers.
466
+
467
+ PC12 ( $\phi = 0.278$ ) contains the attention to the [CLS] token/next token patterns. The PC is important for classifying sentences including the following features: negative polarity and free choice items (Determiner), obliques, expletives, prepositional phrases and arguments (Argument Type), complement clauses without complementizers (Complement Clause).
468
+
469
+ PC0 ( $\phi = 0.274$ ) represents features equal to the number of nodes in the graph, that is, the sentence length in tokens. The PC influences the classifier prediction w.r.t. most of the phenomena, except for Determiner, Complement Clause, and Argument Types.
470
+
471
+ The following four PCs are less important for acceptability classification $(\phi_j < 0.25)$ in general but may contribute to some linguistic phenomena.
472
+
473
+ PC16 ( $\phi = 0.243$ ) comprises topological and distance-to-pattern features of different heads at the middle layers. The PC contributes to negative polarity and free choice items, non-finite complementizer phrases, and comparative constructions.
474
+
475
+ PC20 ( $\phi = 0.216$ ) reflects attention-to-comma for various heads at the lower layers. However, this feature helps to classify sentences that fall under the S-Syntax and Question categories.
476
+
477
+ ![](images/3ba085959953f643d0e68a195e41c20dd09915bec2687033f1310b2f949d8873.jpg)
478
+ Figure 3: Concatenated mean absolute Shapley values for the important PCs by major linguistic feature.
479
+
480
+ PC15 ( $\phi = 0.216$ ) includes the attention to the first token pattern for the middle-to-higher layer heads (generally 4-to-10). It works for Passive and By-Phrases.
481
+
482
+ PC2 $(\phi = 0.203)$ reflects the number of graph edges for heads at the first layer, which captures strong pair-wise information about tokens. This head is important for sentences with default syntax (Simple).
483
+
484
+ PC3 and PC6 ( $\phi < 0.05$ ) represent attention to the dot pattern. The PCs are not important for any of the linguistic phenomena, despite having large eigenvalues.
485
+
486
+ # D Attention Head Selection
487
+
488
+ # D.1 Head Selection Procedure
489
+
490
+ We use publicly available scripts to generate up to 100 minimal pairs for each of the 67 types, ensuring no overlap with the BLiMP pairs. We select the best-performing individual heads and head ensembles by estimating their scoring performance on the generated data and further evaluate them on BLiMP.
491
+
492
+ Algorithms 1-2 describe the Top Head and Phenomenon Head selection procedures using a brute force search, while Algorithm 3 presents the process of selecting the Head Ensembles via beam search.
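To make the search concrete, below is a minimal Python sketch of the brute-force Top Head selection (Algorithm 1). The names `heads`, `rules`, `pairs`, and `score_pair` are placeholders for the candidate attention heads, the two scoring rules, the generated auxiliary minimal pairs, and the RTD-based pair scoring; they are assumptions of this sketch rather than the released code.

```python
from itertools import product


def accuracy(head, rule, pairs, score_pair):
    """Fraction of minimal pairs on which (head, rule) prefers the acceptable
    sentence; score_pair returns True when the acceptable sentence wins."""
    return sum(score_pair(head, rule, good, bad) for good, bad in pairs) / len(pairs)


def select_top_head(heads, rules, pairs, score_pair):
    """Brute-force search over Q1 = all (head, rule) pairs, as in Algorithm 1."""
    best_acc, best = 0.0, (None, None)
    for head, rule in product(heads, rules):
        acc = accuracy(head, rule, pairs, score_pair)
        if acc > best_acc:
            best_acc, best = acc, (head, rule)
    return best, best_acc
```

Restricting `pairs` to a single linguistic category yields the Phenomenon Head selection of Algorithm 2; the beam search of Algorithm 3 grows ensembles of such pairs instead of single heads.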
493
+
494
+ # D.2 Effect of Auxiliary Data
495
+
496
+ We analyze the effect of the amount of auxiliary generated data on the RTD scoring performance. We explore $N \in [1, 5, 10, \dots, 100]$ sentence pairs
497
+
498
+ # Algorithm 1 Top Head Selection
+
+ Input: set $Q_{1}$ of all possible pairs $(h, r)$ , where $h$ is an attention head and $r \in \{1, 2\}$ is a scoring rule
+ Require: $acc(\cdot)$ : accuracy of a head with the selected rule on the pairs for all phenomena
+ Output: pair $(H_B, R_B)$
+ 1: procedure SELECTING TOP HEAD( $Q_1$ )
+ 2: BestAcc $\leftarrow 0$
+ 3: $(H_B, R_B) \leftarrow (-1, -1)$
+ 4: for $(h, r) \in Q_1$ do
+ 5: if $acc((h, r)) >$ BestAcc then
+ 6: BestAcc $\leftarrow acc((h, r))$
+ 7: $(H_B, R_B) \leftarrow (h, r)$
+ 8: end if
+ 9: end for
+ 10: return $(H_B, R_B)$
+ 11: end procedure
525
+
526
+ # Algorithm 2 Phenomenon Head Selection
+
+ Input: set $Q_{1}$ of all possible pairs $(h, r)$ , where $h$ is an attention head and $r \in \{1, 2\}$ is a scoring rule
+ Require: $C$ : linguistic category
+ Require: $acc_C(\cdot)$ : accuracy of a head with the selected scoring rule on the $C$ pairs
+ Output: pair $(H_C, R_C)$
+ 1: procedure SELECTING PHENOMENON HEAD( $Q_1$ )
+ 2: BestAcc $\leftarrow 0$
+ 3: $(H_C, R_C) \leftarrow (-1, -1)$
+ 4: for $(h, r) \in Q_1$ do
+ 5: if $acc_C((h, r)) >$ BestAcc then
+ 6: BestAcc $\leftarrow acc_C((h, r))$
+ 7: $(H_C, R_C) \leftarrow (h, r)$
+ 8: end if
+ 9: end for
+ 10: return $(H_C, R_C)$
+ 11: end procedure
557
+
558
+ # Algorithm 3 Head Ensemble Selection
+
+ Input: set $Q_{1}$ of all possible pairs $(h, r)$ , where $h$ is an attention head and $r \in \{1, 2\}$ is a scoring rule
+ Require: $acc(\cdot)$ : accuracy of an ensemble of (head, rule) pairs with selected scoring rules using majority voting
+ Output: ensemble $B$ : set of pairs $(H, R)$
+ 1: procedure SELECTING HEAD ENSEMBLE( $Q_1$ )
+ 2: $Q \leftarrow \{ \{(h, r)\} \mid (h, r) \in Q_1 \}$
+ 3: do
+ 4: $Q' \leftarrow \emptyset$
+ 5: for $q \in Q$ do
+ 6: for $(h_1, r_1) \neq (h_2, r_2) \in Q_1 \setminus q$ do
+ 7: $q' \leftarrow q \cup \{(h_1, r_1), (h_2, r_2)\}$
+ 8: if $acc(q') > acc(q)$ then
+ 9: $Q' \leftarrow Q' \cup \{q'\}$
+ 10: end if
+ 11: end for
+ 12: end for
+ 13: if $|Q'| \geq 40$ then
+ 14: $Q \leftarrow$ top-40 candidates $q \in Q'$ , scored by $acc(q)$
+ 15: else
+ 16: if $|Q'| > 0$ then
+ 17: $Q \leftarrow Q'$
+ 18: end if
+ 19: end if
+ 20: while $Q' \neq \emptyset$
+ 21: return: Ensemble from $Q$
+ 22: end procedure
609
+
610
+ per language phenomenon used for selecting Head Ensembles as described in Appendix D.1. The experiments are run ten times; each run includes the generation of auxiliary minimal pairs, the corresponding head selection procedure, and evaluation on BLiMP for each phenomenon and for all phenomena combined. Accuracy is averaged over all runs.
611
+
612
+ ![](images/f7b4819fcfceb662c0fb5aac05a645ac6d18b8f1d6d439c8319625f61951a766.jpg)
613
+ Figure 4: The effect of the amount of auxiliary examples on the BLiMP performance of the selected Head Ensembles by major category. Method=RTD scoring. N=number of extra examples per phenomenon.
614
+
615
+ Legend for Figure 4 (one line per phenomenon): Anaphor agr., Arg. structure, Binding, Control / raising, Determiner-Noun agr., Ellipsis, Filler gap, Irregular forms, Island effects, NPI licensing, Quantifiers, Subject-verb agr.
616
+
617
+ ![](images/3cfd2ab4bc3b9eb8df0aca844e0b30f6780df106d44653fee1f82b8f769b9e68.jpg)
618
+
619
+ Results. Figure 4 presents the results for the BERT-base and RoBERTa-base models. Some phenomena reach strong performance with only one auxiliary example (e.g., BERT-base: Irregular forms: $85\%$ ; Determiner-Noun agr.: $84\%$ ; Anaphor agr.: $80\%$ ; Subject-verb agr.: $80\%$ ; RoBERTa-base: Anaphor agr.: $90\%$ ; Determiner-Noun agr.: $87\%$ ; Subject-verb agr.: $83\%$ ). Both models reach similar maximum scores on some phenomena, albeit with different amounts of auxiliary examples (Control / raising: $79\%/80\%$ , $\mathbf{N} = 30/60$ ; Ellipsis: $88\%/87\%$ , $\mathbf{N} = 80/90$ ; Filler gap: $80\%/88\%$ , $\mathbf{N} = 60/30$ ). By contrast, the gap between the maximum and minimum scores is substantial for certain phenomena, e.g., Quantifiers $(10\%/9\%)$ , NPI licensing $(8\%/6\%)$ , and Ellipsis $(8\%/6\%)$ .
620
+
621
+ # E The $H_0S$ Feature Distributions
622
+
623
+ Figure 5 and Figure 6 illustrate examples of the $H_0S$ feature distribution shifts between the acceptable and unacceptable sentences from the entire CoLA development set.
624
+
625
+ ![](images/130b16db645ca7b0bddb6115b49a5eed17273d24b432b73b750be3ce61b433f0.jpg)
626
+ Figure 5: The distribution shift of the $H_0S$ feature between the acceptable and unacceptable sentences (Simple); [L: 10; H: 3].
627
+
628
+ ![](images/ba646b18019763032b04b5072319a65d06db5d7b07a97515c40e16cf7c30f7fb.jpg)
629
+ Figure 6: The distribution shift of the $H_0S$ feature between the acceptable and unacceptable sentences (Binding); [L: 11; H: 4].
630
+
631
+ # F Toy Examples of Calculating Features
632
+
633
+ Let us demonstrate the calculation of the essential barcode features on a toy graph $G_{\text{toy}}$ (Figure 7a). First, we calculate the $H_0$ -barcode of this graph. To do so, we build the graph $G_{\text{toy}}'$ by replacing each edge weight $w$ with $1 - w$ , as in Figure 7b. Next, we calculate the minimum spanning tree of this new graph (Figure 7c). We end up with the $H_0$ -barcode whose bar lengths are equal to the edge weights of the minimum spanning tree (Figure 7d). From this barcode diagram we can derive $H_0 S(G_{\text{toy}}) = 0.3 + 0.4 + 0.5 = 1.2$ and $H_0 M(G_{\text{toy}}) = H_0 S(G_{\text{toy}}) / 3 = 0.4$ .
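A minimal sketch of this computation with SciPy is shown below. The example weight matrix is illustrative, chosen so that the output matches the $H_0S = 1.2$ and $H_0M = 0.4$ values above, and is not necessarily the exact toy graph of Figure 7.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree


def h0_features(W: np.ndarray):
    """Return (H0S, H0M) for a symmetric weight matrix W with zero diagonal:
    invert weights (w -> 1 - w), take the MST, and sum / average its edges."""
    inverted = 1.0 - W
    np.fill_diagonal(inverted, 0.0)
    mst = minimum_spanning_tree(csr_matrix(inverted))
    bars = mst.data  # bar lengths equal the MST edge weights
    return float(bars.sum()), float(bars.mean())


# illustrative 4-vertex graph; yields approximately (1.2, 0.4)
W = np.array([[0.0, 0.5, 0.6, 0.7],
              [0.5, 0.0, 0.2, 0.3],
              [0.6, 0.2, 0.0, 0.1],
              [0.7, 0.3, 0.1, 0.0]])
print(h0_features(W))
```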
634
+
635
+ Note that the directions of the bars (Figure 1) are reversed compared to the actual Ripser++ output (Figure 7d). The reversed representation is more intuitive: edges with lower weights are filtered out earlier than edges with higher weights.
636
+
637
+ Next we compute the Betti numbers for the same graph $G_{\text{toy}}$ given three thresholds: $\tau_1 = 0$ , $\tau_2 = 0.4$ and $\tau_3 = 1$ .
638
+
639
+ ![](images/45868c4177c0eafe7ce8cd05ed88a43cd3d64448e7766e34016d08baae200e72.jpg)
640
+ (a) $G_{toy}$
641
+
642
+ ![](images/1dad32ee0f71ad7beffad08bf2edda111267f68ed11ca9f5a03915f042c399d6.jpg)
643
+ (b) $G_{toy}^{\prime}$
644
+
645
+ ![](images/6defab0ba6b707e2fa7c32e53b23e11f166952032945a115accd9c6297559355.jpg)
646
+ (c) minimum spanning tree of $G_{toy}^{\prime}$
647
+
648
+ ![](images/72fd9a2229ed922067bc7856ccb8f61eabf1fdbed51a93b51044ee678f9808c2.jpg)
649
+ (d) $H_0$ -barcode of $G_{toy}^{\prime}$
650
+ Figure 7: An example of a weighted graph and corresponding $H_0$ -barcode, calculated with the Ripser++ library.
651
+
652
+ At $\tau = 0$ we do not drop any edges and keep the full graph with one connected component (Figure 7a). $\beta_0$ is defined as the number of connected components, hence $\beta_0$ equals 1. Next we calculate $\beta_{1}$ using the shortcut formula for graphs: $\beta_{1} = |E| + |C| - |V|$ , where $|E| = 6$ is the number of edges, $|C| = 1$ is the number of connected components, and $|V| = 4$ is the number of vertices. We get $\beta_{1} = 3$ , which corresponds to the three simple undirected loops in the graph. There is also an alternative way to represent the graph and to calculate the first Betti number, which does not account for the "trivial" loops bounded by triangles; it was used in the example above (see Figure 1).
653
+
654
+ At $\tau = 0.4$ , we drop all edges with weights lower than 0.4. We get the same structure as the minimum spanning tree of the graph $G_{toy}^{\prime}$ (Figure 7c), but without the weight inversion. For this graph, $\beta_0 = 1$ (a single connected component) and $\beta_{1} = 3 + 1 - 4 = 0$ , which corresponds to the number of simple loops, i.e., zero.
655
+
656
+ At $\tau = 1$ , we drop all edges, as all edge weights are below 1. The resulting graph consists only of vertices without edges. In this case, we have four connected components, so $\beta_0 = 4$ and $\beta_{1} = 0 + 4 - 4 = 0$ .
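The same worked example can be reproduced programmatically; below is a small sketch with `networkx`, where the edge weights are illustrative values consistent with the numbers above rather than the exact weights of $G_{\text{toy}}$ .

```python
import networkx as nx


def betti_numbers(weighted_edges, num_vertices, tau):
    """beta_0 = number of connected components; beta_1 = |E| + |C| - |V| after
    dropping all edges with weight < tau (counting every simple loop, i.e. the
    variant that keeps the 'trivial' triangle loops)."""
    g = nx.Graph()
    g.add_nodes_from(range(num_vertices))
    g.add_weighted_edges_from((u, v, w) for u, v, w in weighted_edges if w >= tau)
    c = nx.number_connected_components(g)
    return c, g.number_of_edges() + c - num_vertices


# illustrative weights on a 4-vertex full graph
edges = [(0, 1, 0.5), (0, 2, 0.6), (0, 3, 0.7),
         (1, 2, 0.2), (1, 3, 0.3), (2, 3, 0.1)]
for tau in (0.0, 0.4, 1.0):
    print(tau, betti_numbers(edges, 4, tau))  # (1, 3), (1, 0), (4, 0)
```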
acceptabilityjudgementsviaexaminingthetopologyofattentionmaps/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2ac4994e6f0daecf98a2f656a6577413136f6addacd2003d7467be075ff4081c
3
+ size 832427
acceptabilityjudgementsviaexaminingthetopologyofattentionmaps/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7c987fa3ab40d725a9d3d47cf30629c0e35d5fb8f9f47dfbbc6a6c4e14c35043
3
+ size 868560
acriticalreflectionandforwardperspectiveonempathyandnaturallanguageprocessing/5c6de628-b2da-41a3-a620-40a9fd0d8d7a_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:01b872eb4438277e651b26c803ac26925a756c65b7726da88a4530accef8a25f
3
+ size 130843
acriticalreflectionandforwardperspectiveonempathyandnaturallanguageprocessing/5c6de628-b2da-41a3-a620-40a9fd0d8d7a_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e025f9e8069d068c6c2b7b107646697c427958d1d92f92932d715621ef0864d2
3
+ size 172501
acriticalreflectionandforwardperspectiveonempathyandnaturallanguageprocessing/5c6de628-b2da-41a3-a620-40a9fd0d8d7a_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a2c0d8f97dad68ab0ec4762f9063db71641d9ae1b1ea1a669c68c6c11d9bd50a
3
+ size 329398
acriticalreflectionandforwardperspectiveonempathyandnaturallanguageprocessing/full.md ADDED
@@ -0,0 +1,429 @@
1
+ # A Critical Reflection and Forward Perspective on Empathy and Natural Language Processing
2
+
3
+ Allison Lahnala<sup>1</sup>, Charles Welch<sup>1,3</sup>, David Jurgens<sup>2</sup>, Lucie Flek<sup>1,3</sup>
4
+
5
+ $^{1}$ Conversational AI and Social Analytics (CAISA) Lab
6
+
7
+ Department of Mathematics and Computer Science, University of Marburg
8
+
9
+ $^{2}$ School of Information, University of Michigan
10
+
11
+ <sup>3</sup> The Hessian Center for Artificial Intelligence (Hessian.AI)
12
+
13
+ {allison.lahnala,welchc,lucie.flek}@uni-marburg.de,jurgens@umich.edu
14
+
15
+ # Abstract
16
+
17
+ We review the state of research on empathy in natural language processing and identify the following issues: (1) empathy definitions are absent or abstract, which (2) leads to low construct validity and reproducibility. Moreover, (3) emotional empathy is overemphasized, skewing our focus to a narrow subset of simplified tasks. We believe these issues hinder research progress and argue that current directions will benefit from a clear conceptualization that includes operationalizing cognitive empathy components. Our main objectives are to provide insight and guidance on empathy conceptualization for NLP research objectives and to encourage researchers to pursue the overlooked opportunities in this area, highly relevant, e.g., for clinical and educational sectors.
18
+
19
+ # 1 Introduction
20
+
21
+ Interest in empathetic language continues to grow in natural language processing research. Empathy recognition and empathetic response generation tasks have become well-established research directions, especially since the introduction of now fairly mainstream benchmark datasets for each task, EMPATHIC REACTIONS (Buechel et al., 2018) and EMPATHETIC DIALOGUES (Rashkin et al., 2019), and an empathy detection shared task established on the former (Tafreshi et al., 2021; Barriere et al., 2022).
22
+
23
+ These empathy-focused tasks are highly motivated by myriad benefits, including (i) improved experiences with conversational agents (Shuster et al., 2020; Roller et al., 2021) and satisfaction with customer care dialogue agents (Firdaus et al., 2020; Sanguinetti et al., 2020), (ii) computational social science analyses, e.g., supportive interactions in online forums (Khanpour et al., 2017; Zhou et al., 2020; Sharma et al., 2020), and (iii) tools to assist in training and evaluating health practitioners (Demasi et al., 2019; Pérez-Rosas et al., 2017). With
24
+
25
+ this increased interest in empathy-focused NLP, however, has come little clarity on what empathy is and how it is being operationalized. Papers vary substantially in how they define, annotate, and evaluate empathy, leading to a fractured landscape that hinders progress.
26
+
27
+ Most NLP work has loosely defined empathy as the ability to understand another person's feelings and respond appropriately. As a result, these works have focused primarily on detecting sentiment and emotions in text as proxies for understanding feelings and consider empathetic responses to be those that demonstrate success in emotion recognition and express so with a tone consistent with the valence of the target's sentiment. Under this view, systems that primarily recognize or perform emotionally coherent interactions supposedly fulfill the goal of having an empathetic system. However, established findings and theories about human empathy show that this objective is short-sighted—or even misdirected—and misses a critical system known as cognitive empathy.
28
+
29
+ We argue that NLP research has omitted key aspects of empathy through its narrow focus on emotion and, as a result, led us to neglect the cognitive components. Our paper offers three key contributions. First, to show this gap in research on empathy, we provide a theoretical grounding for empathy from psychology (§2). We then survey empathy literature in NLP focusing on ACL* venues (§3) and highlight three central problems: (1) computational work has overlooked much of empathy through vague definitions and a focus on emotion (§4), (2) our current underspecification of empathy leads to issues in construct and data validity (§5), and (3) the narrow empathy tasks we choose as a community limit our progress (§6). Finally, we propose a way forward (§7) through a clear conceptualization of empathy that operationally considers processes of cognitive empathy and highlight overlooked research opportunities (§8).
30
+
31
+ # 2 What is Empathy? A Theoretical Guide
32
+
33
+ Empathy is a multi-dimensional construct that contains both emotional and cognitive aspects which relate to how an observer reacts to a target (Davis, 1980, 1983). In practice, empathy is diversely defined in terms of these social, emotional, cognitive, and even neurological dimensions (Cuff et al., 2016). Indeed, while folk conceptions of empathy refer to a single construct, multiple studies have pointed to distinct neurological systems for emotional and cognitive empathy (Decety and Jackson, 2006; Shamay-Tsoory, 2011).
34
+
35
+ In describing empathy in communication, we adopt the standard terminology of (i) a target as someone experiencing an emotion or situation and (ii) an observer as another person at the disposition for an empathic experience through perceiving the target's emotions and situation.
36
+
37
+ In psychology, the most discussed aspects of empathy are its affective and cognitive components (Cuff et al., 2016). The affective components, referred to as emotional empathy, relate to the observer's emotional reaction to the target (Shamay-Tsoory, 2011). The cognitive components, referred to as cognitive empathy, relate to active processes used by the observer to infer the mental state of the target (Blair, 2005). Emotional empathy represents automatic (bottom-up) processes whereas cognitive empathy represents controlled (top-down) processes (Lamm et al., 2007), and they interact with each other.
38
+
39
+ # 2.1 Emotional Empathy
40
+
41
+ Emotional empathy, or the observer's capacity to experience affective reactions upon perceiving the target, can involve processes such as emotion recognition, contagion, and pain sharing (Shamay-Tsoory, 2011). When these affective processes caused by perceiving the target's emotional state interact with certain contextual factors, the observer's experience is a form of emotional empathy. Such forms include the concepts of sympathy, compassion, and tenderness. In NLP, we often use these terms to define empathy without any distinction between them. Accordingly, our review finds we do not investigate them as specific empathy-related phenomena or characterizations of empathetic responses. As we describe in this section, however, the degree to which specific contextual factors interact with the internal affective processes renders
42
+
43
+ these concepts distinct. We propose differentiable characteristics of sympathy, compassion, and tenderness based on psychology literature. Furthermore, these characteristics are operationalizable for more precise NLP research on emotional empathy.
44
+
45
+ Distinguishing sympathy, compassion, and tenderness. While each of these concepts relates to having feelings for the target, each one is distinct regarding the perceived immediacy of the target's needs, vulnerability, and a desire to help. Sympathy relates to the present situation of the target and involves sorrow and concern for the target arising from perceived suffering, whereas tenderness relates to their long-term needs, and involves warm and fuzzy feelings from perceiving the target as vulnerable, delicate, or defenseless. Compassion is a higher construct that involves feelings based on the target's perceived needs and motivated desires to protect the vulnerable and provide care to those who suffer (Goetz et al., 2010).
46
+
47
+ Feeling for the target means emotions can be different between the target and observer. Emotional congruence is used to describe when an emotion is felt by both target and observer, though some define congruence as a response to a situation that is similar but may result in a different emotion. In this case, the emotion is congruent if it is appropriate for the situation. This idea plays a significant role in how the NLP community views empathetic responses, though we consider it unnecessary for empathy in general.
48
+
49
+ We can perceive another person as vulnerable or suffering and experience tenderness, sympathy, and compassion for them without thinking hard about it. In other words, cognitive processes are not strongly required for these experiences, distinguishing them as emotional empathy. However, empathic cognitive processes can help render these experiences by elevating awareness of those contextual factors (i.e., realizing the target's needs and vulnerability) through active deliberation of the target's situation. In the next section, we describe such processes of cognitive empathy.
50
+
51
+ # 2.2 Cognitive Empathy
52
+
53
+ Cognitive empathy centers, in part, on perspectivetaking, the process of the observer conceptualizing the target's point of view. This active process is the primary way of achieving cognitive empathy, though other scenarios such as imagined memories or fictional scenarios may also be processes that
54
+
55
+ help to achieve it (Eisenberg, 2014; Stinson and Ickes, 1992).
56
+
57
+ Psychologists have proposed a variety of frameworks for what actions or processes constitute perspective-taking. For example, in the appraisal theory of empathy (Wondra and Ellsworth, 2015), empathy is considered with respect to different aspects or dimensions of perspective that an observer might have for the target's situation. Six different types of appraisals are proposed: pleasantness, anticipated effort to deal with the situation, anticipated control of the situation, self-other agency for responsibility of the situation, attentional activity (degree of surprise), and certainty of the situation or outcome. With this view, how "empathetic" the observer is depends on the number of appraisals and the degree to which their responses or actions mirror the true feelings of the target.
58
+
59
+ To be able to empathize successfully, the observer needs to be able to accurately infer the content of the target's thoughts and feelings (Ickes, 2011). The degree of accuracy in the observer's inferences is known as empathic accuracy. High empathic accuracy is particularly essential in clinical domains, such as Motivational Interviewing (Miller and Rollnick, 2012), where the therapist (observer) formulates reflections based on interpreting what the patient (target) says (see examples in Table 1).
60
+
61
+ When considering the factors that affect empathic accuracy, the observer is not the only possible source of error. The target must also express and convey the situation accurately. As a result, it is unlikely that both the observer and target will ever perceive situations exactly the same (Stotland et al., 1978). This limitation has implications for empathy annotations, with the inter-annotator agreement being subject to the empathic accuracy and subjective interpretations of the annotators. The unstructured dyadic interaction and standard stimulus paradigms are two study approaches developed in interpersonal perception research for measuring empathic accuracy that involve comparing observer inferences with the target self-reports of their actual thoughts and feelings (Ickes, 2001); such approaches to study empathic accuracy have (to our knowledge) never been explored in NLP research.
62
+
63
+ # 3 Literature Review: Empathy in NLP
64
+
65
+ Empathy research in NLP primarily focuses on empathy recognition and empathetic response generation.
66
+
67
+ <table><tr><td>Client</td><td>Well, I know I need to stay active. “Use it or lose it,” they say. I want to get my strength back, and they say regular exercise is good for your brain, too.</td></tr><tr><td>Interviewer</td><td>So that’s a puzzle for you – how to be active enough to get your strength back and be healthy, but not so much that would put you in danger of another heart attack.</td></tr><tr><td>Client</td><td>I think I’m probably being too careful. My last test results were good. It just scares me when I feel pain like that.</td></tr><tr><td>Interviewer</td><td>It reminds you of your heart attack.</td></tr><tr><td>Client</td><td>That doesn’t make much sense, does it– staying away from activity so I won’t have another one?</td></tr><tr><td>Interviewer</td><td>Like staying away from people so you won’t be lonely.</td></tr><tr><td>Client</td><td>Right. I guess I just need to do it, figure out how to gradually do more so I can stick around for a while.</td></tr></table>
68
+
69
+ Table 1: A motivational interviewing interaction demonstrating complex reflections. Example from (Miller and Rollnick, 2012).
70
+
71
+ There is also work that analyzes empathy-related behaviors, such as supportive intents and counselor strategies. Across this literature, we have found highly varied and often underspecified usage of the term empathy.
72
+
73
+ We summarize our findings from reviewing a collection of computational linguistics and natural language processing papers. Papers were identified by searching for the term "empath" in the ACL anthology. Then we narrowed the resulting sample from 90 to 69 papers whose investigation involves empathy, as opposed to mentioning "empathy", "empathetic", or "empathic" rhetorically. We also included a selection of relevant works outside the ACL anthology from venues such as AAAI.
74
+
75
+ We focused on papers presenting systems for empathy prediction or empathetic response generation, evaluations of empathy dialogue models, empathy annotation schemes and datasets, and corpus analyses of empathetic language. We labeled a total of 48 papers (excluding 14 WASSA $^2$ shared task papers) based on their descriptions of empathy and empathy evaluation approaches.
76
+
77
+ # 3.1 Findings
78
+
79
+ We identified six predominant themes of empathy descriptions (D0-5), and seven empathy evaluation approaches (E0-6). Tables 2 and 3 show the categories we defined for grouping the description
80
+
81
+ <table><tr><td></td><td>Definition Themes</td><td>Count</td></tr><tr><td>D0</td><td>Studies do not define or describe empathy.</td><td>8</td></tr><tr><td>D1</td><td>Studies do not describe empathy but an abstract conceptualization could be inferred from the task or system description.</td><td>10</td></tr><tr><td>D2</td><td>Studies provide empathy descriptions that are vague or ambiguous with other concepts.</td><td>10</td></tr><tr><td>D3</td><td>Studies describe relationships between certain behaviors and empathy.</td><td>3</td></tr><tr><td>D4</td><td>Studies provide succinct yet explicitly theory-grounded empathy descriptions and adhere to the theoretical foundation.</td><td>9</td></tr><tr><td>D5</td><td>Studies provide thorough, theory-based descriptions of empathy.</td><td>8</td></tr></table>
82
+
83
+ Table 2: The number of papers identified for each Definition Theme in our review.
84
+
85
+ and evaluation themes, together with the number of papers we identified characterized by these themes.
86
+
87
+ The definition themes are as follows (details and examples are provided in Appendix A):
88
+
89
+ D0: Studies do not define or describe empathy. Despite the significance of empathy in such studies, a conceptualization is not given nor can it be reasonably inferred. In some cases, empathy is associated with politeness and courtesy.
90
+
91
+ D1: Studies do not describe empathy but an abstract conceptualization could be inferred from the task or system description. This is especially the case among empathetic response generation research. In task descriptions, the tendency is to describe empathetic response generation as the task of understanding the user's emotions and responding appropriately. System descriptions can be informative about the grounding concept when the design reflects specific dimensions of empathy. For instance, an empathetic response generation system that mainly relies on an emotion recognition module for the purpose of response conditioning suggests the significance of emotion understanding in the work's conceptualization.
92
+
93
+ D2: Studies provide empathy descriptions that are vague or ambiguous with other concepts. Some studies use sympathy and empathy interchangeably, and others nearly exchange the term empathy with emotion recognition by strongly associating them without providing disambiguation. These often regard an appropriate response as one that mimics or mirrors the target's response. This characterization usually accompanies a system that conditions a response on emotions or sentiments predicted by a dedicated module. In other cases, papers reference the conceptualization linked to the dataset they used, as is frequent among WASSA shared task papers (listed in Appendix §B) using
94
+
95
+ the EMPATHIC REACTIONS dataset (Buechel et al., 2018).
96
+
97
+ D3: Studies describe relationships between certain behaviors and empathy. These works, mainly concerned with narrative and conversation analyses, provide thorough descriptions of other concepts and their relations to empathy that are consistent with multiple aspects of empathy described in §2.
98
+ D4: Studies provide succinct yet explicitly theory-grounded empathy descriptions and adhere to the theoretical foundation. This research often involves a counseling/therapy conversations dataset with labels from a scheme developed by experts in those domains.
99
+ D5: Studies provide thorough, theory-based descriptions of empathy. This is often the case in works that develop a novel multi-dimensional empathy framework for analysis and annotations. As in D4, they also tend to involve experts in behavioral, social, and health domains.
100
+
101
+ # 4 The Definition Problem
102
+
103
+ The lack of delineation between empathy and related concepts in psychology ultimately leaves us wondering what we are studying. When empathy is not explicitly conceptualized (D0-1), the training data is often implicitly focused on components of emotional empathy and responses based on emotion-matching strategies. However, since the data properties are often under-specified in the NLP papers as well, we only have indirect proxies to infer such emotion-centric conceptualizations. In an ideal case, the training data description would reveal more about the relevant features of emotion matching/mirroring, e.g., separating contextual factors that would define a particular emotional empathy behavior such as sympathy, compassion, or tenderness.
104
+
105
+ <table><tr><td></td><td>Evaluation Themes</td><td>Count</td></tr><tr><td>E6</td><td>Multi-item: cognitive &amp; emotional</td><td>8</td></tr><tr><td>E5</td><td>Single label/rating: cognitive &amp; emotional empathy</td><td>3</td></tr><tr><td>E4</td><td>Single label/rating: only emotional empathy</td><td>19</td></tr><tr><td>E3</td><td>Single label/rating: no specification</td><td>8</td></tr><tr><td>E2</td><td>Heuristic empathy labels/ratings</td><td>4</td></tr><tr><td>E1</td><td>Target-observer role labeling</td><td>2</td></tr><tr><td>E0</td><td>No manual evaluation or only automatic</td><td>4</td></tr></table>
106
+
107
+ Table 3: Themes of evaluations and annotations of empathetic language in NLP literature with counts of papers described the theme. Detailed descriptions of the themes are provided in Appendix B.
108
+
109
+ While empathy is indeed complex, we demonstrated in Section 2.1 a way to disambiguate these empathetic behaviors by considering the variables of how the observer perceives the target's vulnerability and needs. These are just some possible manifestations of emotional empathy that NLP researchers could investigate using more detailed study designs to control such variables. To alleviate the problem of abstractness and potential inconsistencies, we recommend that researchers disambiguate their objectives from simply "empathy" by considering such concepts as sub-areas of the empathy research direction.
110
+
111
+ Nearly thirty studies in our review provide vague or no empathy descriptions (D0-2). Without the ability to focus on individual aspects or an understanding of the nature of empathetic interactions and thus what constitutes "appropriate" responses, the issue of what we are measuring arises. Figure 1 displays the interaction between empathy definitions and evaluation designs. There is a clear correspondence between under-specified empathy descriptions and under-specified evaluations. Even when a study provides a multi-dimensional definition of empathy (D4-5), it often is not reflected in the evaluation design. Rather the evaluations default intuitively to the emotional components (E4).
112
+
113
+ The undefined nature of empathy ultimately manifests in poor operationalization and may contribute to inconsistency in how empathy is measured from observations, resulting in poor construct validity (Coll et al., 2017). With unstable validity, these works may not be deemed reliable for clinical applications, such as counselor training, which has been noted as a critical application of artificial intelligence to the field of psychotherapy (Imel et al., 2017). Aside from such applications, poor validity and lack of definitions lead to reproducibility issues
114
+
115
+ ![](images/cfaf6a70ad7964d72ab385af8bebda531244df435e2a3f7eec2f561d814b8fa3.jpg)
116
+ Figure 1: Heatmap of definition and evaluation themes.
117
+
118
+ (Cuff et al., 2016). Lacking a shared understanding and robust conceptualization of empathy makes it difficult to interpret findings and compare studies.
119
+
120
+ # 5 The Measurement Problem
121
+
122
+ Given the overwhelmingly abstract portrayal of empathy, the effectiveness and validity of our approaches for measuring, annotating, and evaluating it are questionable. Simply put, we do not know if we are investigating the same thing or doing so consistently. Following, we outline where NLP needs to improve in its measurements to move forward.
123
+
124
+ Construct validity implications for resource construction and evaluation. Yalçın (2019) reviews several scales developed in psychology and suggests potential adaptations for evaluating empathy in NLP. However, issues with measurement, construct, and predictive validity also persist in psychology (Ickes, 2001) and reflect similarly in the limitations of NLP research, e.g., ambiguous evaluation (Coll et al., 2017).
125
+
126
+ Established psychological scales can help measure empathy, such as the often used Empathic Concern and Personal Distress Scale (Batson et al.,
127
+
128
+ 1987; Buechel et al., 2018). One could, for instance, ask participants to read a passage written by an individual and ask the reader about their emotions. However, various non-empathetic factors can affect how someone feels after reading, not resulting from perspective-taking or understanding the target's emotional state. Psychological studies that use these scales devise experimental controls to promote an other-focus state (Fabi et al., 2019; Batson et al., 1997). One method is to instruct participants to "imagine how the person ... feels about what has happened and how it has affected his or her life" (Toi and Batson, 1982). Another is to control how similar the observer perceives themselves to be to the target (Batson et al., 1981). Self-report measures such as the Davis IRI scale (Davis, 1983) can measure empathic capabilities, but empathy still varies across situations (Litvak et al., 2016; Cuff et al., 2016).
129
+
130
+ Empathy involves an accurate understanding of another's mental state. As such, one can compare descriptions given by an observer and a target (Ickes, 2001). These first-person assessments are more accurate than annotations by a third party attempting to judge mental states from language, behaviors, or situational factors.
131
+
132
+ Empathy and domain shift: What exactly are we trying to transfer? The scarcity and diversity of existing datasets motivate investigating knowledge transfer between them (Wu et al., 2021b; Lahnala et al., 2022). Domains vary across datasets, but datasets also vary in how they capture empathy. These differences make larger-scale efforts that combine datasets more difficult. Some studies leverage heuristics to curate empathy data (see Appendix B), such as bootstrapping with pretrained models, selecting interactions from particular contexts (e.g., particular subreddits), and crowdsourcing conversations grounded on particular emotions. While heuristic approaches can help mitigate curation costs (Hosseini and Caragea, 2021b; Welivita et al., 2021; Wu et al., 2021b), the effectiveness is subject to their validity.
133
+
134
+ In a study motivated to minimize annotation effort through domain transfer, for instance, Wu et al. (2021b)'s experimental results demonstrated that the heuristically curated datasets EMPATHETIC DIALOGUES and PEC (Zhong et al., 2020) were insufficient for predicting empathy in the Motivational Interviewing domain. They suggest that more fine-grained empathy annotation labels would
135
+
136
+ help smooth the domain gaps by distinguishing the empathy aspects expected to be present. Such efforts are needed for the community to build off of each other's work. Meanwhile, further investigations of knowledge transfer techniques between existing resources could have positive contributions.
137
+
138
+ # 6 The Narrow Task Problem
139
+
140
+ While empathy is widely recognized as a construct NLP should be modeling, we argue that the field has focused on a narrow set of tasks that have held back progress. By bridging current work with a perspective of cognitive empathy, we motivate a series of new and reimagined tasks that would advance the field's ability to model empathy.
141
+
142
+ Empathy is more than emotions. The predominant empathy conceptualization requires the observer to understand the target's emotions and respond appropriately. We recommended that researchers disambiguate sympathy, compassion, and tenderness and view these as subareas of empathy research. However, we argue that this research would still suffer by ignoring cognitive empathy and its interaction with emotion understanding and empathetic responses. Some perspectives from psychology emphasize that the distinction between emotional and cognitive empathy is less important than the interaction between the two components (Cuff et al., 2016) and we can draw inspiration from that. Cognitive empathy processes are necessary considerations that can more effectively achieve goals for the emotional empathy subareas.
143
+
144
+ Cognitive empathy processes can improve methods for understanding emotions. In NLP, we often consider emotions first, assuming they can be inferred directly from what the target expresses. However, the target could minimize expressions, making the emotions harder to perceive (refer to the discussion on empathic accuracy §2.2). Furthermore, most papers only operate on text (e.g., transcripts), which misses opportunities for empathetic cues in other modalities.
145
+
146
+ Cognitive processes help with better emotional understanding yielding higher empathic accuracy of inferences about the target's affective state. Cuff et al. (2016) argue that the observer can infer emotionality through perspective-taking, imagination, and retrieval of relevant memories. In this way, a cognitive approach first leads to an understanding and more accurate perception of emotions.
147
+
148
+ We argue that cognitive empathy can benefit
149
+
150
+ from approaches such as common sense reasoning (Sabour et al., 2021), external knowledge (Li et al., 2020b), and abductive NLI (Bhagavatula et al., 2020). For example, Shen et al. (2021) integrated external knowledge to improve counselor reflection generation. Tu et al. (2022) presented an approach to understanding the target's emotional state using commonsense knowledge that improves over more emotion-focused models. Their approach was inspired by Ma et al. (2020)'s survey of empathetic dialogue systems, which argued that future work should go beyond emotional empathy by pursuing personalization and knowledge (both contextual and general external knowledge). Integrating these types of knowledge supports reasoning about emotion cause, as opposed to only recognizing emotions, which itself is insufficient (Gao et al., 2021; Kim et al., 2021). Models that incorporate reasoning about the cause of emotion were shown to outperform emotion recognition and mirroring counterparts, notably including those that included external commonsense and emotional lexical knowledge, in human-evaluated empathy scores (Lin et al., 2019; Li et al., 2020a; Majumder et al., 2020).
151
+
152
+ Studies may even be able to specify and investigate what type of information is necessary in order to understand the user's emotions and model empathically accurate responses. This can be done with a controlled selection of samples from knowledge bases. For instance, (Shen et al., 2020, 2022)'s work uses particular aspects of common sense and domain-specific knowledge. Researchers must become more intimate with data and understand the nature of dialogue to design empathetic systems.
153
+
154
+ Cognitive empathic processes can improve methods to select appropriate response strategies. Approaches inspired by cognitive processes could thus result in not only valuable representations of emotion but also enhanced methods for response strategies. Though we critiqued the generally-lacking specificity about response appropriateness earlier, the most salient idea we grasp is that observers should mirror the target's emotion or express a similar sentiment valence. Contrary to this idea, Xie and Pu (2021) find that empathetic listeners often respond to negative emotions such as sadness and anger with questions rather than expressing similar or opposite emotions. In addition, specific empathetic question intents of observers may play more significant roles in regulating the target's emotions (Svikhnushina et al., 2022). Approaches designed both on emotional and cognitive schemes also had effective results in assisting students write peer-reviews that are perceived more empathetic (Wambsganss et al., 2021, 2022).
157
+
158
+ # 7 Refocusing our efforts
159
+
160
+ We overlook research opportunities that better align with described motivations, and these problems necessitate cognitive empathy. Most empathy research in NLP focuses on emotional chatbots or social media analysis. This focus probably drives the overemphasis on emotional empathy. Neither of the domains, we argue, is helpful for the purposes where empathetic NLP systems would be needed the most; for clinical or educational praxis or other rather formal contexts. Such scarce applications include research on Motivational Interviewing (Pérez-Rosas et al., 2016, 2017, 2018, 2019; Wu et al., 2021b; Shen et al., 2022), assistive writing systems for peer-to-peer mental health support (Sharma et al., 2021, 2022), and other counseling conversations (Althoff et al., 2016; Zhang and Danescu-Niculescu-Mizil, 2020; Zhang et al., 2020; Sun et al., 2021). Work in these areas make the need for cognitive empathy even more apparent.
161
+
162
+ Empathic capacities vary across individuals, as some experience more intense affective responses to certain target perceptions than others (Eisenberg, 2014). Without techniques to manage affective responses, people can experience significant distress. Cognitive empathy skills strengthen observers' ability to regulate or manage their affective responses. Thus, these skills are critical for individuals in caregiving roles (e.g., counselors and doctors) who interpersonally engage with others who are vulnerable, suffering, or in crisis daily. At the same time, the ability to empathize with the target is essential for their roles in providing effective treatment, making the role of cognitive empathy significant both for self-care and care for others.
163
+
164
+ NLP research is highly needed for virtual standardized patient (VSP) systems. VSPs provide an effective way for learners to develop cognitive empathy and clinical skills (Lok and Foster, 2019). The need for VSPs has grown from the necessity for methods that allow learners to practice rare and realistic crisis scenarios and concern for the safety and expenses of standardized patients. Needs in this area include but are not limited to conversational models for practice in intercultural communication, end-of-life discussions, and breaking bad news. Development of such systems naturally requires the skills of NLP researchers, and these research efforts would benefit the field.
167
+
168
+ So far, the task of modeling and simulating the empathy target has received little attention in NLP, leaving a significant research gap that our community is positioned to fill, with exciting challenges that align with the motivations of research on empathy and that can have a broad impact within and beyond our field. One work, for instance, focused on simulating individuals in crisis for the use case of counselor training (Demasi et al., 2019). Similarly, modeling and simulating the target could support broader educational objectives, such as training students' emotionally and cognitively empathic feedback skills (Wambsganss et al., 2021). This direction of simulating a mental health support seeker aligns with the goals of related work for developing tools to assist with evaluating and training counselors (Pérez-Rosas et al., 2018, 2019; Imel et al., 2017; Zhang and Danescu-Niculescu-Mizil, 2020). The ethics section further discusses our perspective on this approach compared to developing empathetic response generation models for support-related motivations.
169
+
170
+ # 8 A Forward Perspective
171
+
172
+ Here we summarize the main obstacles we identified and our recommendations for moving forward.
173
+
174
+ The Definition Problem: Define research goals and design methodologies based on empathy's more concrete and measurable aspects. The abstractness of empathy has negative implications for the construct validity of measurement approaches and, thereby, for scientific effectiveness and comparability.
175
+
176
+ The Measurement Problem: Draw inspiration from measurements established in psychology and, in addition, learn from investigations and critiques of their construct validity. As well as referencing the psychology literature to aid the development of measurement techniques (Yalçın, 2019), we should be familiar with existing evaluations of and discussions on their construct validity. Future work is needed to methodologically investigate the validity of our current approaches.
177
+
178
+ The Narrow Task Problem: Draw on lessons from our own field. The interaction between cognitive and emotional processes is significant. Cognitive processes can enable higher empathic accuracy. Perspective-taking is a method of cognitive empathy that enables better empathic understanding, and the appraisal theory of empathy provides a framework specific to the situational aspects an observer can consider. These processes relate clearly to reasoning processes explored by NLP tasks. We recommend future work that intersects with the newer area of abductive commonsense reasoning (Bhagavatula et al., 2020).
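+
+ As a toy illustration of this recommendation, the sketch below casts a perspective-taking judgment as an abductive-reasoning-style instance, loosely following the two-observations-plus-hypotheses format popularized by Bhagavatula et al. (2020); the example texts and the data structure are invented for illustration and are not drawn from any existing dataset.
+
+ ```python
+ # Hypothetical framing of perspective-taking as an abductive reasoning instance;
+ # the example texts and this data structure are invented for illustration.
+ from dataclasses import dataclass
+ from typing import List
+
+
+ @dataclass
+ class AbductiveEmpathyInstance:
+     observation_before: str   # what the target reports first
+     observation_after: str    # how the target reacts later
+     hypotheses: List[str]     # candidate appraisals of the target's situation
+
+
+ instance = AbductiveEmpathyInstance(
+     observation_before="My sister finally called me back yesterday.",
+     observation_after="I have been crying on and off since then.",
+     hypotheses=[
+         "The call reopened an unresolved family conflict.",
+         "The target is simply tired from a long week at work.",
+     ],
+ )
+
+ # An observer model would rank the hypotheses by plausibility given both
+ # observations, i.e., infer the situational appraisal behind the reaction.
+ for h in instance.hypotheses:
+     print(h)
+ ```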
181
+
182
+ Refocusing our efforts: Supporting clinical and educational domains aligns better with the motivations of the empathy research area. Much of what motivates the empathy research area is a desire to support those in need (i.e., compassionate motivations, to employ the new terminology). Our position is that current efforts in empathetic dialogue generation should be reallocated to support the needs of clinical domains, e.g., systems that support communicative skill training for people in care-providing roles and, more generally, systems that support the development of cognitive empathic skills. The need for cognitive empathy is even more pressing in these applications, so NLP research can best support them by emphasizing the cognitive aspects in our empathy conceptualizations going forward.
183
+
184
+ # 9 Conclusion
185
+
186
+ Language technology is deployed increasingly in interpersonal settings that require an empathetic understanding of the speaker. Our field's underspecified construct of empathy represents a significant obstacle to advancing our ability to develop empathetic language technologies. While multiple NLP works have attempted to incorporate empathy into their models and applications, we argue that these works have underspecified empathy, focusing primarily on emotion, and have overlooked a major component: cognitive empathy. However, we argue that this gap represents a significant opportunity for developing new models that better reflect cognitive processes and theory of mind while supporting much-needed human-centered applications, such as those in clinical settings.
187
+
188
+ # Ethical Perspectives
189
+
190
+ While our paper is largely an argument for increased depth and specificity in the study of empathy in NLP, our call to overcome these obstacles still comes with ethical considerations. In the following, we outline two main ethical points.
193
+
194
+ The conversational settings used for empathetic communication often feature highly personal dialog from the target that requires special care, e.g., in the medical domain. Advocating for more work on empathy therefore carries a potential risk for these targets: their potentially sensitive data must be kept private outside of valid uses. The sensitive nature of the data also likely increases the cost and difficulty of developing empathy resources, rendering such efforts rather infeasible for many (Hosseini and Caragea, 2021b; Welivita et al., 2021). Furthermore, some datasets cannot be made public in order to uphold license agreements that protect the rights and privacy of the stakeholders, which is especially the case in counseling domains (Althoff et al., 2016; Pérez-Rosas et al., 2017; Sharma et al., 2020; Zhang and Danescu-Niculescu-Mizil, 2020; Zhang et al., 2020). While these efforts to preserve privacy are rightly stringent, they also create a two-tier system in which only researchers with access to subjects can use the data to participate in such research. Thus, we call for consideration of possible initiatives to enable researchers to utilize datasets crafted for domains beyond publishable social media data. For instance, shared tasks (such as WASSA 2021 and 2022 (Tafreshi et al., 2021; Barriere et al., 2022)) may enable many to experiment on carefully crafted datasets. Initiatives may draw inspiration from recent CLPsych shared tasks that enabled approved researchers to work on sensitive data under signed data use and ethical practice agreements.[4]
195
+
196
+ Prior work has, for the most part, assumed that empathy must be prosocial and that improved empathy models would therefore offer societal benefits. However, emotional and cognitive understanding is not always employed for prosocial purposes. For instance, understanding emotions can be used for abuse and manipulation (Hart et al., 1995; Hodges and Biswas-Diener, 2007). Empathy is also capable of motivating hostile acts and fueling hostility toward out-groups (Breithaupt, 2012, 2018); or, if an observer engages with a target with whom they have a bad relationship, empathy for the target's negative experiences may lead to "malicious gloating" on the observer's behalf (Bischof-Köhler, 1991, p. 259). Thus, the development of new empathetic systems and data could lead to uses that are decidedly antisocial. Consider also the role of empathy in persuasion. In a prosocial context, one study on advice-giving robots showed that students were more likely to be persuaded by the robot's advice when it used empathetic strategies (Langedijk and Ham, 2021). Emotional appeals are an effective rhetorical tool for persuasion, for example through personal narratives that can increase understanding of other perspectives (Wachsmuth et al., 2017; Vecchi et al., 2021). On the other hand, emotional understanding can also be used for manipulative appeals to emotion. Huffaker et al. (2020) introduced a task of detecting emotionally manipulative language, i.e., language intended to induce an emotional reaction in the reader (e.g., fear-mongering rhetoric). Their study reviews how adversarial actors have strategically used emotionally manipulative rhetoric in media manipulation. NLP attention currently devoted to abstract ideas of empathy could instead be dedicated to such problems. For instance, one could imagine a new setup for anticipating emotional empathy in response to emotionally manipulative stimuli, or one that focuses more on the target's language that leads to pathogenic empathy in observers.
199
+
200
+ # Limitations
201
+
202
+ We presented theoretical groundwork on empathy from psychology and neuroscience and provided distinct descriptions of empathy-related concepts in a way we view as practical for aligning the NLP community's research methodologies with more precise conceptualizations. The disambiguation approach we presented is informed by our efforts to thoroughly review and understand multiple perspectives from psychology and neuroscience. We ultimately constructed this approach with references to selected studies from these fields that helped us identify meaningful delineations between the concepts. However, there is no single conceptualization of empathy or related concepts in those areas, so our construct will differ from some. We hope our paper as a whole inspires collective efforts from our community to scrutinize, with more informed perspectives, the empathy frameworks that ground the empathy research direction.
203
+
204
+ This work presented themes about empathy descriptions and evaluations based on a systematic literature review. The search methodology is limited in that it focuses primarily on papers in the ACL Anthology. Our review of related works includes literature outside of ACL venues, and these works are consistent with the themes we identify. However, a more extensive study, including works from a broader set of publication venues and using standard survey methodology, would provide a more comprehensive outlook on empathetic technology research.
207
+
208
+ # Acknowledgements
209
+
210
+ This work has been supported by the German Federal Ministry of Education and Research (BMBF) as a part of the Junior AI Scientists program under the reference 01-S20060, the Alexander von Humboldt Foundation, and by Hessian.AI. Any opinions, findings, conclusions, or recommendations in this material are those of the authors and do not necessarily reflect the views of the BMBF, Alexander von Humboldt Foundation, or Hessian.AI.
211
+
212
+ # References
213
+
214
+ Muhammad Abdul-Mageed, Anneke Buffone, Hao Peng, Johannes Eichstaedt, and Lyle Ungar. 2017. Recognizing pathogenic empathy in social media. In Proceedings of the International AAAI Conference on Web and Social Media, volume 11, pages 448-451. 19
215
+ Aseel Addawood, Rezvaneh Rezapour, Omid Abdar, and Jana Diesner. 2017. Telling apart tweets associated with controversial versus non-controversial topics. In Proceedings of the Second Workshop on NLP and Computational Social Science, pages 32-41, Vancouver, Canada. Association for Computational Linguistics. 20
216
+ Firoj Alam, Shammur Absar Chowdhury, Morena Danieli, and Giuseppe Riccardi. 2016a. How interlocutors coordinate with each other within emotional segments? In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 728-738, Osaka, Japan. The COLING 2016 Organizing Committee. 19
217
+ Firoj Alam, Morena Danieli, and Giuseppe Riccardi. 2016b. Can we detect speakers' empathy?: A real-life case study. In 2016 7th IEEE International Conference on Cognitive Infocommunications (CogInfoCom). IEEE. 19
218
+ Firoj Alam, Morena Danieli, and Giuseppe Riccardi. 2018. Annotating and modeling empathy in spoken conversations. Computer Speech & Language, 50. 19, 20
219
+ Tim Althoff, Kevin Clark, and Jure Leskovec. 2016. Large-scale analysis of counseling conversations: An application of natural language processing to mental health. Transactions of the Association for Computational Linguistics, 4:463-476. 7, 9
222
+ Valentin Barriere, Shabnam Tafreshi, João Sedoc, and Sawsan Alqahtani. 2022. WASSA 2022 shared task: Predicting empathy, emotion and personality in reaction to news stories. In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, Dublin, Ireland. Association for Computational Linguistics. 1, 9, 20
223
+ C Daniel Batson, Bruce D Duncan, Paula Ackerman, Terese Buckley, and Kimberly Birch. 1981. Is empathic emotion a source of altruistic motivation? Journal of personality and Social Psychology, 40(2):290. 6
224
+ C Daniel Batson, Shannon Early, and Giovanni Salvarani. 1997. Perspective taking: Imagining how another feels versus imaging how you would feel. *Personality and social psychology bulletin*, 23(7):751-758. 6
225
+ C Daniel Batson, Jim Fultz, and Patricia A Schoenrade. 1987. Distress and empathy: Two qualitatively distinct vicarious emotions with different motivational consequences. Journal of personality, 55(1). 5
226
+ Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. 2020. Abductive commonsense reasoning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. 7, 8
227
+ Vitthal Bhandari and Poonam Goyal. 2022. bitsa_nlp@LT-EDI-ACL2022: Leveraging pretrained language models for detecting homophobia and transophobia in social media comments. In Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion, pages 149–154, Dublin, Ireland. Association for Computational Linguistics. 20
228
+ Shweta Bhargava, Srinivasan Janarthanam, Helen Hastie, Amol Deshmukh, Ruth Aylett, Lee Corrigan, and Ginevra Castellano. 2013. Demonstration of the EmoteWizard of Oz interface for empathic robotic tutors. In Proceedings of the SIGDIAL 2013 Conference, pages 363-365, Metz, France. Association for Computational Linguistics. 20
229
+ Doris Bischof-Köhler. 1991. The development of empathy in infants. Infant development: Perspectives from German-speaking countries, pages 245-273. 9
230
+ Robert James R Blair. 2005. Responding to the emotions of others: Dissociating forms of empathy through the study of typical and psychiatric populations. Consciousness and cognition, 14(4):698-718. 2
231
+ Fritz Breithaupt. 2012. A three-person model of empathy. Emotion Review, 4(1). 9
232
+
233
+ Fritz Breithaupt. 2018. The bad things we do because of empathy. *Interdisciplinary Science Reviews*, 43(2). 9
234
+ Sven Buechel, Anneke Buffone, Barry Slaff, Lyle Ungar, and João Sedoc. 2018. Modeling empathy and distress in reaction to news stories. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4758-4765, Brussels, Belgium. Association for Computational Linguistics. 1, 4, 6, 19
235
+ Yash Butala, Kanishk Singh, Adarsh Kumar, and Shrey Shrivastava. 2021. Team phoenix at WASSA 2021: Emotion analysis on news stories with pre-trained language models. In Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 274-280, Online. Association for Computational Linguistics. 20
236
+ Laurie Carr, Marco Iacoboni, Marie-Charlotte Dubeau, John C. Mazziotta, and Gian Luigi Lenzi. 2003. Neural mechanisms of empathy in humans: A relay from neural systems for imitation to limbic areas. Proceedings of the National Academy of Sciences, 100(9):5497-5502. 20
237
+ Ginevra Castellano, Ana Paiva, Arvid Kappas, Ruth Aylett, Helen Hastie, Wolmet Barendregt, Fernando Nabais, and Susan Bull. 2013. Towards empathic virtual and robotic tutors. In International conference on artificial intelligence in education. Springer. 20
238
+ Kezhen Chen, Qiuyuan Huang, Daniel McDuff, Xiang Gao, Hamid Palangi, Jianfeng Wang, Kenneth Forbus, and Jianfeng Gao. 2021. NICE: Neural image commenting with empathy. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, Punta Cana, Dominican Republic. Association for Computational Linguistics. 19
239
+ Yue Chen, Yingnan Ju, and Sandra Kübler. 2022. IUCL at WASSA 2022 shared task: A text-only approach to empathy and emotion detection. In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, Dublin, Ireland. Association for Computational Linguistics. 20
240
+ Michel-Pierre Coll, Essi Viding, Markus Rütgen, Giorgia Silani, Claus Lamm, Caroline Catmur, and Geoffrey Bird. 2017. Are we really measuring empathy? proposal for a new measurement framework. *Neuroscience & Biobehavioral Reviews*, 83:132-139. 5
241
+ Benjamin MP Cuff, Sarah J Brown, Laura Taylor, and Douglas J Howat. 2016. Empathy: A review of the concept. Emotion review, 8(2). 2, 5, 6
242
+ Mark Davis. 1980. A multidimensional approach to individual differences in empathy. JSAS Catalog Sel. Doc. Psychol., 10. 2, 19
243
+
244
+ Mark H Davis. 1983. Measuring individual differences in empathy: evidence for a multidimensional approach. Journal of personality and social psychology, 44(1). 2, 6, 19
245
+ Jean Decety and Philip L Jackson. 2006. A social-neuroscience perspective on empathy. Current directions in psychological science, 15(2):54-58. 2
246
+ Jean Decety and Claus Lamm. 2006. Human empathy through the lens of social neuroscience. The Scientific World Journal, 6:1146-1163. 20
247
+ Flor Miriam Del Arco, Jaime Collado-Montanez, L. Alfonso Ureña, and María-Teresa Martín-Valdivia. 2022. Empathy and distress prediction using transformer multi-output regression and emotion analysis with an ensemble of supervised and zero-shot learning models. In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, Dublin, Ireland. Association for Computational Linguistics. 20
248
+ Orianna Demasi, Marti A. Hearst, and Benjamin Recht. 2019. Towards augmenting crisis counselor training by improving message retrieval. In Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 1-11, Minneapolis, Minnesota. Association for Computational Linguistics. 1, 8, 20
249
+ Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020. GoEmotions: A dataset of fine-grained emotions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4040–4054, Online. Association for Computational Linguistics. 20
250
+ Alexandre Denis, Samuel Cruz-Lara, Nadia Bellalem, and Lotfi Bellalem. 2014. Synalp-empathic: A valence shifting hybrid system for sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 605-609, Dublin, Ireland. Association for Computational Linguistics. 20
251
+ Nancy Eisenberg. 2014. Altruistic emotion, cognition, and behavior (PLE: Emotion). Psychology Press. 3, 7
252
+ Sarah Fabi, Lydia Anna Weber, and Hartmut Leuthold. 2019. Empathic concern and personal distress depend on situational but not dispositional factors. *PloS one*, 14(11):e0225102. 6
253
+ Neele Falk and Gabriella Lapesa. 2022. Reports of personal experiences and stories in argumentation: datasets and analysis. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5530-5553, Dublin, Ireland. Association for Computational Linguistics. 20
254
+
255
+ Mauajama Firdaus, Asif Ekbal, and Pushpak Bhattacharyya. 2020. Incorporating politeness across languages in customer care responses: Towards building a multi-lingual empathetic dialogue agent. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4172-4182, Marseille, France. European Language Resources Association. 1, 19, 20
256
+ Tommaso Fornaciari, Federico Bianchi, Debora Nozza, and Dirk Hovy. 2021. MilaNLP @ WASSA: Does BERT feel sad when you cry? In Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 269-273, Online. Association for Computational Linguistics. 20
257
+ Pascale Fung, Anik Dey, Farhad Bin Siddique, Ruixi Lin, Yang Yang, Dario Bertero, Yan Wan, Ricky Ho Yin Chan, and Chien-Sheng Wu. 2016a. Zara: A virtual interactive dialogue system incorporating emotion, sentiment and personality recognition. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations, pages 278-281, Osaka, Japan. The COLING 2016 Organizing Committee. 20
258
+ Pascale Fung, Anik Dey, Farhad Bin Siddique, Ruixi Lin, Yang Yang, Yan Wan, and Ho Yin Ricky Chan. 2016b. Zara the Supergirl: An empathetic personality recognition system. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 87-91, San Diego, California. Association for Computational Linguistics. 20
259
+ Jun Gao, Yuhan Liu, Haolin Deng, Wei Wang, Yu Cao, Jiachen Du, and Ruifeng Xu. 2021. Improving empathetic response generation by recognizing emotion cause in conversations. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, Punta Cana, Dominican Republic. Association for Computational Linguistics. 7, 18, 19
260
+ Soumitra Ghosh, Dhirendra Maurya, Asif Ekbal, and Pushpak Bhattacharyya. 2022. Team IITP-AINLPML at WASSA 2022: Empathy detection, emotion classification and personality detection. In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, Dublin, Ireland. Association for Computational Linguistics. 20
261
+ Jennifer L Goetz, Dacher Keltner, and Emiliana Simon-Thomas. 2010. Compassion: an evolutionary analysis and empirical review. Psychological bulletin, 136(3):351. 2
262
+ Alvin I. Goldman. 1993. Ethics and cognitive science. Ethics, 103(2):337-360. 20
263
+ Bhanu Prakash Reddy Guda, Aparna Garimella, and Niyati Chhaya. 2021. EmpathBERT: A BERT-based framework for demographic-aware empathy prediction. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3072-3079, Online. Association for Computational Linguistics. 20
266
+ Marco Guerini, Sara Falcone, and Bernardo Magnini. 2018. A methodology for evaluating interaction strategies of task-oriented conversational agents. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI, pages 24-32, Brussels, Belgium. Association for Computational Linguistics. 20
267
+ Yuting Guo and Jinho D. Choi. 2021. Enhancing cognitive models of emotions with representation learning. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 141-148, Online. Association for Computational Linguistics. 20
268
+ Stephen David Hart, David Neil Cox, and Robert D Hare. 1995. Hare psychopathy checklist: Screening version (PCL: SV). Multi-Health Systems. 9
269
+ Helen Hastie, Mei Yii Lim, Srini Janarthanam, Amol Deshmukh, Ruth Aylett, Mary Ellen Foster, and Lynne Hall. 2016. I remember you! interaction with memory for an empathic virtual robotic tutor. 20
270
+ Devamanyu Hazarika, Soujanya Poria, Rada Mihalcea, Erik Cambria, and Roger Zimmermann. 2018a. ICON: Interactive conversational memory network for multimodal emotion detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2594-2604, Brussels, Belgium. Association for Computational Linguistics. 20
271
+ Devamanyu Hazarika, Soujanya Poria, Amir Zadeh, Erik Cambria, Louis-Philippe Morency, and Roger Zimmermann. 2018b. Conversational memory network for emotion recognition in dyadic dialogue videos. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2122-2132, New Orleans, Louisiana. Association for Computational Linguistics. 20
272
+ Sara D Hodges and Robert Biswas-Diener. 2007. Balancing the empathy expense account: Strategies for regulating empathic response. Empathy in mental illness, pages 389-407. 9
273
+ Mahshid Hosseini and Cornelia Caragea. 2021a. Distilling knowledge for empathy detection. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, Punta Cana, Dominican Republic. Association for Computational Linguistics. 20
274
+ Mahshid Hosseini and Cornelia Caragea. 2021b. It takes two to empathize: One to seek and one to provide. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):13018-13026. 6, 9, 20
275
+
276
+ Dou Hu, Lingwei Wei, and Xiaoyong Huai. 2021a. DialogueCRN: Contextual reasoning networks for emotion recognition in conversations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7042-7052, Online. Association for Computational Linguistics. 20
277
+ Jingwen Hu, Yuchen Liu, Jinming Zhao, and Qin Jin. 2021b. MMGCN: Multimodal fusion via deep graph convolution network for emotion recognition in conversation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5666-5675, Online. Association for Computational Linguistics. 20
278
+ Jordan S. Huffaker, Jonathan K. Kummerfeld, Walter S. Lasecki, and Mark S. Ackerman. 2020. Crowdsourced detection of emotionally manipulative language. In CHI '20: CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, April 25-30, 2020, pages 1-14. ACM. 9
279
+ William Ickes. 2001. Measuring empathic accuracy. Interpersonal sensitivity: Theory and measurement, 1:219-241. 3, 5, 6
280
+ William Ickes. 2011. Everyday mind reading is driven by motives and goals. *Psychological Inquiry*, 22(3):200-206. 3, 20
281
+ Tatsuya Ide and Daisuke Kawahara. 2022. Building a dialogue corpus annotated with expressed and experienced emotions. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 21-30, Dublin, Ireland. Association for Computational Linguistics. 20
282
+ Zac E Imel, Derek D Caperton, Michael Tanana, and David C Atkins. 2017. Technology-enhanced human interaction in psychotherapy. Journal of counseling psychology, 64(4):385. 5, 8
283
+ Koji Inoue, Divesh Lala, Kenta Yamamoto, Shizuka Nakamura, Katsuya Takanashi, and Tatsuya Kawahara. 2020. An attentive listening system with android ERICA: Comparison of autonomous and WOZ interactions. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 118-127, 1st virtual meeting. Association for Computational Linguistics. 20
284
+ Koji Inoue, Hiromi Sakamoto, Kenta Yamamoto, Divesh Lala, and Tatsuya Kawahara. 2021. A multi-party attentive listening robot which stimulates involvement from side participants. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 261-264, Singapore and Online. Association for Computational Linguistics. 20
285
+
286
+ Micah Iserman and Molly Ireland. 2017. A dictionary-based comparison of autobiographies by people and murderous monsters. In Proceedings of the Fourth Workshop on Computational Linguistics and Clinical Psychology — From Linguistic Signal to Clinical Reality, pages 74–84, Vancouver, BC. Association for Computational Linguistics. 20
287
+ Etsuko Ishii, Genta Indra Winata, Samuel Cahyawijaya, Divesh Lala, Tatsuya Kawahara, and Pascale Fung. 2021. ERICA: An empathetic android companion for Covid-19 quarantine. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 257–260, Singapore and Online. Association for Computational Linguistics. 20
288
+ Koichiro Ito, Masaki Murata, Tomohiro Ohno, and Shigeki Matsubara. 2020. Relation between degree of empathy for narrative speech and type of responsive utterance in attentive listening. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 696-701, Marseille, France. European Language Resources Association. 18, 19, 20
289
+ Jin Yea Jang, San Kim, Minyoung Jung, Saim Shin, and Gahgene Gweon. 2021. BPM_MT: Enhanced backchannel prediction model using multi-task learning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3447-3452, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. 20
290
+ Hamed Khanpour, Cornelia Caragea, and Prakhar Biyani. 2017. Identifying empathetic messages in online health communities. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 246-251, Taipei, Taiwan. Asian Federation of Natural Language Processing. 1, 18, 19
291
+ Hyunwoo Kim, Byeongchang Kim, and Gunhee Kim. 2021. Perspective-taking and pragmatics for generating empathetic responses focused on emotion causes. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. 7, 19
292
+ Atharva Kulkarni, Sunanda Somwase, Shivam Rajput, and Manisha Marathe. 2021. PVG at WASSA 2021: A multi-input, multi-task, transformer-based architecture for empathy and distress prediction. In Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 105-111, Online. Association for Computational Linguistics. 20
293
+ Allison Lahnala, Charles Welch, and Lucie Flek. 2022. CAISA at WASSA 2022: Adapter-tuning for empathy prediction. In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, Dublin, Ireland. Association for Computational Linguistics. 6, 20
294
+
295
+ Claus Lamm, C. Daniel Batson, and Jean Decety. 2007. The Neural Substrate of Human Empathy: Effects of Perspective-taking and Cognitive Appraisal. Journal of Cognitive Neuroscience, 19(1). 2
296
+ Rosalyn M Langedijk and Jaap Ham. 2021. More than advice: The influence of adding references to prior discourse and signals of empathy on the persuasiveness of an advice-giving robot. Interaction Studies, 22(3). 9, 20
297
+ Bin Li, Yixuan Weng, Qiya Song, Bin Sun, and Shutao Li. 2022. Continuing pre-trained model with multiple training strategies for emotional classification. In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, pages 233-238, Dublin, Ireland. Association for Computational Linguistics. 20
298
+ Qintong Li, Hongshen Chen, Zhaochun Ren, Pengjie Ren, Zhaopeng Tu, and Zhumin Chen. 2020a. EmpDG: Multi-resolution interactive empathetic dialogue generation. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4454-4466, Barcelona, Spain (Online). International Committee on Computational Linguistics. 7, 19
299
+ Qintong Li, Piji Li, Zhumin Chen, and Zhaochun Ren. 2020b. Towards empathetic dialogue generation over multi-type knowledge. arXiv preprint arXiv:2009.09708. 7
300
+ Zhaojiang Lin, Andrea Madotto, Jamin Shin, Peng Xu, and Pascale Fung. 2019. MoEL: Mixture of empathetic listeners. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 121-132, Hong Kong, China. Association for Computational Linguistics. 7, 18, 19
301
+ Marina Litvak, Jahna Otterbacher, Chee Siang Ang, and David Atkins. 2016. Social and linguistic behavior and its correlation to trait empathy. In Proceedings of the Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media (PEOPLES), pages 128-137, Osaka, Japan. The COLING 2016 Organizing Committee. 6, 19
302
+ Benjamin Lok and Adriana E. Foster. 2019. Can Virtual Humans Teach Empathy?, pages 143-163. Springer International Publishing, Cham. 7
303
+ Xin Lu, Yijian Tian, Yanyan Zhao, and Bing Qin. 2021. Retrieve, discriminate and rewrite: A simple and effective framework for obtaining affective response in retrieval-based chatbots. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 1956-1969, Punta Cana, Dominican Republic. Association for Computational Linguistics. 20
304
+ Yukun Ma, Khanh Linh Nguyen, Frank Z. Xing, and Erik Cambria. 2020. A survey on empathetic dialogue systems. Information Fusion, 64. 7
305
+
306
+ Khyati Mahajan and Samira Shaikh. 2019. Emoji usage across platforms: A case study for the charlottesville event. In Proceedings of the 2019 Workshop on Widening NLP, pages 160-162, Florence, Italy. Association for Computational Linguistics. 20
307
+ Himanshu Maheshwari and Vasudeva Varma. 2022. An ensemble approach to detect emotions at an essay level. In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, pages 276-279, Dublin, Ireland. Association for Computational Linguistics. 20
308
+ Navonil Majumder, Pengfei Hong, Shanshan Peng, Jiankun Lu, Deepanway Ghosal, Alexander Gelbukh, Rada Mihalcea, and Soujanya Poria. 2020. MIME: MIMicking emotions for empathetic response generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8968-8979, Online. Association for Computational Linguistics. 7, 19, 20
309
+ William R Miller and Stephen Rollnick. 2012. Motivational interviewing: Helping people change. Guilford press. 3
310
+ Jay Mundra, Rohan Gupta, and Sagnik Mukherjee. 2021. WASSA@IITK at WASSA 2021: Multi-task learning and transformer finetuning for emotion classification and empathy prediction. In Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 112-116, Online. Association for Computational Linguistics. 20
311
+ Tarek Naous, Wissam Antoun, Reem Mahmoud, and Hazem Hajj. 2021. Empathetic BERT2BERT conversational model: Learning Arabic language generation with little data. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 164-172, Kyiv, Ukraine (Virtual). Association for Computational Linguistics. 19, 20
312
+ Tarek Naous, Christian Hokayem, and Hazem Hajj. 2020. Empathy-driven Arabic conversational chatbot. In Proceedings of the Fifth Arabic Natural Language Processing Workshop, pages 58-68, Barcelona, Spain (Online). Association for Computational Linguistics. 19
313
+ Verónica Pérez-Rosas, Rada Mihalcea, Kenneth Resnicow, Satinder Singh, and Lawrence An. 2016. Building a motivational interviewing dataset. In Proceedings of the Third Workshop on Computational Linguistics and Clinical Psychology, pages 42-51, San Diego, CA, USA. Association for Computational Linguistics. 7
314
+ Verónica Pérez-Rosas, Rada Mihalcea, Kenneth Resnicow, Satinder Singh, and Lawrence An. 2017. Understanding and predicting empathic behavior in counseling therapy. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426-1435, Vancouver, Canada. Association for Computational Linguistics. 1, 7, 9, 18, 19
315
+
316
+ Verónica Pérez-Rosas, Xuetong Sun, Christy Li, Yuchen Wang, Kenneth Resnicow, and Rada Mihalcea. 2018. Analyzing the quality of counseling conversations: the tell-tale signs of high-quality counseling. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). 7, 8
317
+ Verónica Pérez-Rosas, Xinyi Wu, Kenneth Resnicow, and Rada Mihalcea. 2019. What makes a good counselor? learning to distinguish between high-quality and low-quality counseling conversations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 926-935, Florence, Italy. Association for Computational Linguistics. 7, 8
318
+ Vitou Phy, Yang Zhao, and Akiko Aizawa. 2020. Deconstruct to reconstruct a configurable evaluation metric for open-domain dialogue systems. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4164-4178, Barcelona, Spain (Online). International Committee on Computational Linguistics. 19
319
+ Yada Pruksachatkun, Sachin R. Pendse, and Amit Sharma. 2019. Moments of change: Analyzing peer-based cognitive support in online mental health forums. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 64. ACM. 20
320
+ Shenbin Qian, Constantin Orasan, Diptesh Kanojia, Hadeel Saadany, and Félix Do Carmo. 2022. SURREY-CTS-NLP at WASSA2022: An experiment of discourse and sentiment analysis for the prediction of empathy, distress and emotion. In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, Dublin, Ireland. Association for Computational Linguistics. 20
321
+ Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic open-domain conversation models: A new benchmark and dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5370-5381, Florence, Italy. Association for Computational Linguistics. 1, 18, 19, 20
322
+ Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Eric Michael Smith, Y-Lan Boureau, and Jason Weston. 2021. Recipes for building an open-domain chatbot. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 300-325, Online. Association for Computational Linguistics. 1, 20
323
+ Sahand Sabour, Chujie Zheng, and Minlie Huang. 2021. Cem: Commonsense-aware empathetic response generation. arXiv preprint arXiv:2109.05739. 7, 18, 19, 20
324
+
325
+ Manuela Sanguinetti, Alessandro Mazzei, Viviana Patti, Marco Scalerandi, Dario Mana, and Rossana Simeoni. 2020. Annotating errors and emotions in humanchatbot interactions in Italian. In Proceedings of the 14th Linguistic Annotation Workshop, pages 148-159, Barcelona, Spain. Association for Computational Linguistics. 1, 19, 20
326
+ João Sedoc, Sven Buechel, Yehonathan Nachmany, Anneke Buffone, and Lyle Ungar. 2020. Learning word ratings for empathy and distress from document-level user responses. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 1664-1673, Marseille, France. European Language Resources Association. 20
327
+ Simone G. Shamay-Tsoory. 2011. The neural bases for empathy. *The Neuroscientist*, 17(1):18-24. PMID: 21071616. 2, 20
328
+ Ashish Sharma, Inna W Lin, Adam S Miner, David C Atkins, and Tim Althoff. 2021. Towards facilitating empathic conversations in online mental health support: A reinforcement learning approach. In Proceedings of the Web Conference 2021. 7
329
+ Ashish Sharma, Inna W. Lin, Adam S. Miner, David C. Atkins, and Tim Althoff. 2022. Human-ai collaboration enables more empathic conversations in text-based peer-to-peer mental health support. 7
330
+ Ashish Sharma, Adam Miner, David Atkins, and Tim Althoff. 2020. A computational approach to understanding empathy expressed in text-based mental health support. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5263-5276, Online. Association for Computational Linguistics. 1, 9, 18, 19
331
+ Lei Shen and Yang Feng. 2020. CDL: Curriculum dual learning for emotion-controllable response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 556-566, Online. Association for Computational Linguistics. 20
332
+ Lei Shen, Jinchao Zhang, Jiao Ou, Xiaofang Zhao, and Jie Zhou. 2021. Constructing emotional consensus and utilizing unpaired data for empathetic dialogue generation. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, Punta Cana, Dominican Republic. Association for Computational Linguistics. 7, 18, 19
333
+ Siqi Shen, Veronica Perez-Rosas, Charles Welch, Soujanya Poria, and Rada Mihalcea. 2022. Knowledge enhanced reflection generation for counseling dialogues. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3096-3107, Dublin, Ireland. Association for Computational Linguistics. 7
334
+ Siqi Shen, Charles Welch, Rada Mihalcea, and Verónica Pérez-Rosas. 2020. Counseling-style reflection generation using generative pretrained transformers with augmented context. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 10-20, 1st virtual meeting. Association for Computational Linguistics. 7
337
+ Kurt Shuster, Da Ju, Stephen Roller, Emily Dinan, Y-Lan Boureau, and Jason Weston. 2020. The dialogue dodecathlon: Open-domain knowledge and image grounded conversational agents. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2453-2470, Online. Association for Computational Linguistics. 1, 20
338
+ Farhad Bin Siddique, Onno Kampman, Yang Yang, Anik Dey, and Pascale Fung. 2017. Zara returns: Improved personality induction and adaptation by an empathetic virtual agent. In Proceedings of ACL 2017, System Demonstrations, pages 121-126, Vancouver, Canada. Association for Computational Linguistics. 20
339
+ Eric Michael Smith, Mary Williamson, Kurt Shuster, Jason Weston, and Y-Lan Boureau. 2020. Can you put it all together: Evaluating conversational agents' ability to blend skills. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2021-2030, Online. Association for Computational Linguistics. 19
340
+ Achim Stephan. 2015. Empathy for artificial agents. International Journal of Social Robotics, 7(1). 20
341
+ Linda Stinson and William Ickes. 1992. Empathic accuracy in the interactions of male friends versus male strangers. Journal of personality and social psychology, 62(5):787. 3
342
+ E Stotland, KE Mathews Jr, S Sherman, R.O. Hanson, and B.Z. Richardson. 1978. Empathy, fantasy, and helping. Beverly Hills: Sage. 3
343
+ Merlin Teodosia Suarez, Jocelynn Cu, and Madelene Sta. Maria. 2012. Building a multimodal laughter database for emotion recognition. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 2347-2350, Istanbul, Turkey. European Language Resources Association (ELRA). 20
344
+ Hao Sun, Zhenru Lin, Chujie Zheng, Siyang Liu, and Minlie Huang. 2021. PsyQA: A Chinese dataset for generating long counseling text for mental health support. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 1489-1503, Online. Association for Computational Linguistics. 7, 20
345
+ Ekaterina Svikhnushina, Iuliana Voinea, Anuradha Welivita, and Pearl Pu. 2022. A taxonomy of empathetic questions in social dialogs. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland. Association for Computational Linguistics. 7, 19
346
+
347
+ Shabnam Tafreshi, Orphee De Clercq, Valentin Barriere, Sven Buechel, João Sedoc, and Alexandra Balahur. 2021. WASSA 2021 shared task: Predicting empathy and emotion in reaction to news stories. In Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 92-104, Online. Association for Computational Linguistics. 1, 9, 20
348
+ Miho Toi and C Daniel Batson. 1982. More evidence that empathy is a source of altruistic motivation. Journal of personality and social psychology, 43(2):281. 6
349
+ Alicia Tsai, Shereen Oraby, Vittorio Perera, Jiun-Yu Kao, Yuheng Du, Anjali Narayan-Chen, Tagyoung Chung, and Dilek Hakkani-Tur. 2021. Style control for schema-guided natural language generation. In Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI, Online. Association for Computational Linguistics. 20
350
+ Quan Tu, Yanran Li, Jianwei Cui, Bin Wang, Ji-Rong Wen, and Rui Yan. 2022. MISC: A mixed strategy-aware model integrating COMET for emotional support conversation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland. Association for Computational Linguistics. 7, 19, 20
351
+ Lindsey Vanderlyn, Gianna Weber, Michael Neumann, Dirk Väth, Sarina Meyer, and Ngoc Thang Vu. 2021. "it seemed like an annoying woman": On the perception and ethical considerations of affective language in text-based conversational agents. In Proceedings of the 25th Conference on Computational Natural Language Learning, Online. Association for Computational Linguistics. 20
352
+ Deeksha Varshney, Asif Ekbal, and Pushpak Bhattacharyya. 2021. Modelling context emotions using multi-task learning for emotion controlled dialog generation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2919-2931, Online. Association for Computational Linguistics. 20
353
+ Himil Vasava, Pramegh Uikey, Gaurav Wasnik, and Raksha Sharma. 2022. Transformer-based architecture for empathy prediction and emotion classification. In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, Dublin, Ireland. Association for Computational Linguistics. 20
354
+ Eva Maria Vecchi, Neele Falk, Iman Juni, and Gabriella Lapesa. 2021. Towards argument mining for social good: A survey. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1338-1352, Online. Association for Computational Linguistics. 9
355
+
356
+ Giuseppe Vettigli and Antonio Sorgente. 2021. EmpNa at WASSA 2021: A lightweight model for the prediction of empathy, distress and emotions from reactions to news stories. In Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 264-268, Online. Association for Computational Linguistics. 20
357
+ Henning Wachsmuth, Nona Naderi, Yufang Hou, Yonatan Bilu, Vinodkumar Prabhakaran, Tim Alberdingk Thijm, Graeme Hirst, and Benno Stein. 2017. Computational argumentation quality assessment in natural language. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 176-187, Valencia, Spain. Association for Computational Linguistics. 9
358
+ Thiemo Wambsganss, Christina Niklaus, Matthias Söllner, Siegfried Handschuh, and Jan Marco Leimeister. 2021. Supporting cognitive and emotional empathic writing of students. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4063-4077, Online. Association for Computational Linguistics. 7, 8, 18, 19, 20
359
+ Thiemo Wambsganss, Matthias Soellner, Kenneth R Koedinger, and Jan Marco Leimeister. 2022. Adaptive empathy learning support in peer review scenarios. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI '22, New York, NY, USA. Association for Computing Machinery. 7
360
+ Yan Wang, Jiayu Zhang, Jun Ma, Shaojun Wang, and Jing Xiao. 2020. Contextualized emotion recognition in conversation as sequence tagging. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 186-195, 1st virtual meeting. Association for Computational Linguistics. 20
361
+ Anuradha Welivita and Pearl Pu. 2020. A taxonomy of empathetic response intents in human social conversations. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4886-4899, Barcelona, Spain (Online). International Committee on Computational Linguistics. 19
362
+ Anuradha Welivita, Yubo Xie, and Pearl Pu. 2021. A large-scale dataset for empathetic response generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. 6, 9, 20
363
+ Joshua D Wondra and Phoebe C Ellsworth. 2015. An appraisal theory of empathy and other vicarious emotional experiences. *Psychological review*, 122(3):411. 3
364
+
365
+ Chen Henry Wu, Yinhe Zheng, Xiaoxi Mao, and Minlie Huang. 2021a. Transferable persona-grounded dialogues via grounded minimal edits. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2368-2382, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. 19
366
+ Zixiu Wu, Rim Helaoui, Diego Reforgiato Recupero, and Daniele Riboni. 2021b. Towards low-resource real-time assessment of empathy in counselling. In Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology: Improving Access, pages 204-216, Online. Association for Computational Linguistics. 6, 7, 18, 19
367
+ Yubo Xie and Pearl Pu. 2021. Empathetic dialog generation with fine-grained intents. In Proceedings of the 25th Conference on Computational Natural Language Learning, Online. Association for Computational Linguistics. 7, 19
368
+ Özge Nilay Yalçın. 2019. Evaluating empathy in artificial agents. In 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), pages 1-7. 5, 8
369
+ Clay H. Yoo, Shriphani Palakodety, Rupak Sarkar, and Ashiqur KhudaBukhsh. 2021. Empathy and hope: Resource transfer to model inter-country social media dynamics. In Proceedings of the 1st Workshop on NLP for Positive Impact, pages 125-134, Online. Association for Computational Linguistics. 20
370
+ Chengkun Zeng, Guanyi Chen, Chenghua Lin, Ruizhe Li, and Zhi Chen. 2021. Affective decoding for empathetic response generation. In Proceedings of the 14th International Conference on Natural Language Generation, pages 331-340, Aberdeen, Scotland, UK. Association for Computational Linguistics. 19
371
+ Justine Zhang and Cristian Danescu-Niculescu-Mizil. 2020. Balancing objectives in counseling conversations: Advancing forwards or looking backwards. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5276-5289, Online. Association for Computational Linguistics. 7, 8, 9, 18, 19
372
+ Justine Zhang, Sendhil Mullainathan, and Cristian Danescu-Niculescu-Mizil. 2020. Quantifying the causal effects of conversational tendencies. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2). 7, 9
373
+ Chujie Zheng, Yong Liu, Wei Chen, Yongcai Leng, and Minlie Huang. 2021. CoMAE: A multi-factor hierarchical framework for empathetic response generation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 813-824, Online. Association for Computational Linguistics. 19
374
+ Peixiang Zhong, Chen Zhang, Hao Wang, Yong Liu, and Chunyan Miao. 2020. Towards persona-based empathetic conversational models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6556-6566, Online. Association for Computational Linguistics. 6, 20
377
+ Li Zhou, Jianfeng Gao, Di Li, and Heung-Yeung Shum. 2020. The design and implementation of XiaoIce, an empathetic social chatbot. Computational Linguistics, 46(1):53-93. 1, 20
378
+ Naitian Zhou and David Jurgens. 2020. Condolence and empathy in online communities. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 609-626, Online. Association for Computational Linguistics. 19
379
+ Xianda Zhou and William Yang Wang. 2018. MojiTalk: Generating emotional responses at scale. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1128-1137, Melbourne, Australia. Association for Computational Linguistics. 20
380
+ Ling.Yu Zhu, Zhengkun Zhang, Jun Wang, Hongbin Wang, Haiying Wu, and Zhenglu Yang. 2022. Multi-party empathetic dialogue generation: A new task for dialog systems. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland. Association for Computational Linguistics. 18, 19
381
+
382
+ # A Definition Themes
383
+
384
+ (5) Thorough, theory-based description of empathy. Empathy is extensively described with the inclusion of perceptive insights from psychology, neuroscience, or other related fields. There are clear distinctions between empathy and concepts such as emotion recognition and mirroring, sympathy, and compassion. There may be explicit efforts to describe and distinguish between emotional and cognitive empathy and the significance of the distinction. Examples: Sharma et al. (2020); Wambsganss et al. (2021); Sabour et al. (2021).
385
+ (4) Description is succinct yet explicitly theory-grounded and adheres to the theoretical foundation. Empathy is explicitly defined or described in a way grounded in a complex psychological perspective. As opposed to (5), longer, more specified descriptions of empathy and its dimensions are not provided. Nevertheless, these works remain consistent with the selected perspective through their analyses and interpretations of their findings. Example: Pérez-Rosas et al. (2017).
386
+ (3) Thorough descriptions of other concepts and their relations to empathy are consistent with multi-dimensional theories. Empathy itself is not explicitly defined or described. It may be unclear whether the idea is rooted in psychology, as no specific references are given. However, behaviors associated with empathy are described that are consistent with multiple aspects of empathy covered in the Theoretical Guide (§2). This category can also include studies that do not specifically describe (or explore) the concept of empathy, but relate empathy to other behaviors (e.g., counselor strategies) in a way that is based on psychology literature. There may also be descriptions of concepts that are not specifically conveyed as empathy, but suggest an independent thought process that arrived at perspectives consistent with multiple aspects reviewed in §2. Examples: Ito et al. (2020); Zhang and Danescu-Niculescu-Mizil (2020); Wu et al. (2021b); Gao et al. (2021).
389
+ (2) Descriptions provided are vague or ambiguous with other concepts. The descriptions may go briefly beyond the "understanding the user and responding emotionally" concept by mentioning other aspects (e.g., perspective-taking) in passing or by describing empathetic responses as those that mimic or mirror the target's response. Other concepts may be mentioned without being clearly distinguished from empathy. Some papers reference a psychology definition but leave one or more (usually cognitive) components untouched in their work (Khanpour et al., 2017). This theme typically emerges when a study derives a conception based on its own reasoning (Shen et al., 2021; Zhu et al., 2022), which could benefit from incorporating the terminology and distinctions we provide in §2.
390
+ (1) No empathy description, but an abstract conceptualization may be inferred through the task or system description. No definition or description of empathy or empathetic behaviors is explicitly stated; however, some abstract conceptualization of empathy can be inferred through the description of the task or the requirements of an empathetic system. This includes describing the empathetic response generation task as "to understand the user emotion and respond appropriately" (Lin et al., 2019; Rashkin et al., 2019). Elaboration on the process of understanding the user's emotion in an empathetic way or responding appropriately in an empathetic way is not provided.
391
+ (0) No empathy description, and the conceptualization is not inferrable through other information. No definition of empathy is provided, despite the apparent relevance of empathy to the study.
392
+
393
+ Such works may perform a human evaluation of empathy, and an emotional perspective on empathy may be inferred through a description of the evaluation if provided (Phy et al., 2020). Generally, the conceptualization cannot be inferred. This label does not include works that reflect abstract conceptualization in their task description as in Theme 1. This category includes cases where empathy may be ambiguous with concepts such as "politeness" and "courtesy" (Firdaus et al., 2020).
394
+
395
+ # B Evaluation Themes
396
+
397
+ Here we categorize papers by how they approached evaluating, labeled, or rated empathy in their research results or when constructing datasets.
398
+
399
+ Multi-item: cognitive & emotional (8). These studies report manual tasks for labeling or rating multiple items (e.g., behaviors, strategies) of empathy and include cognitive and emotional aspects in these items. This category includes studies that developed a new scheme for labeling or rating response intents and behaviors, or labeled counselor behaviors based on schemes designed for a counseling style. In a setup unique among the NLP literature, one study recruited participants to complete a multi-item self-report scale for trait empathy (Davis' Interpersonal Reactivity Index (IRI) (Davis, 1980, 1983)) and then analyzed linguistic behaviors with respect to those scales (Litvak et al., 2016). All papers: Litvak et al. (2016); Pérez-Rosas et al. (2017); Sharma et al. (2020); Welivita and Pu (2020); Ito et al. (2020); Zhang and Danescu-Niculescu-Mizil (2020); Wambsganss et al. (2021); Svikhnushina et al. (2022).
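+
+ As a concrete illustration of what a multi-item scheme covering both cognitive and emotional aspects can look like, here is a minimal, hypothetical annotation record. The item names loosely echo the kinds of communication mechanisms such schemes annotate (e.g., emotional reactions, interpretations, explorations), but the exact items and the 0-2 scale are a simplification rather than the scheme of any paper listed above.
+
+ ```python
+ # Hypothetical multi-item empathy annotation record with cognitive and emotional
+ # items; the item names and the 0-2 scale are an illustrative simplification.
+ from dataclasses import dataclass, asdict
+
+
+ @dataclass
+ class EmpathyAnnotation:
+     response_id: str
+     emotional_reaction: int  # emotional item: 0 = none, 1 = weak, 2 = strong
+     interpretation: int      # cognitive item: communicates understanding of the situation
+     exploration: int         # cognitive item: probes the target's perspective further
+
+     def total(self) -> int:
+         # A single aggregate score loses the per-item information that makes
+         # multi-item schemes valuable; kept here only for comparison.
+         return self.emotional_reaction + self.interpretation + self.exploration
+
+
+ ann = EmpathyAnnotation("resp_001", emotional_reaction=1, interpretation=2, exploration=0)
+ print(asdict(ann), "total =", ann.total())
+ ```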
400
+
401
+ Single label/rating: cognitive & emotional empathy (3). These studies report manual tasks for labeling or rating empathy as a single item or score based on multiple items or aspects representing both cognitive and emotional empathy. Two are based on theoretical and practical psychology conceptualizations (Appraisal Theory of Empathy (Zhou and Jurgens, 2020) and MISC (Wu et al., 2021b)). The other is based on a description of empathy that includes intents or acts by the observer to help regulate the target's emotions (Sanguinetti et al., 2020). All papers: Zhou and Jurgens (2020); Sanguinetti et al. (2020); Wu et al. (2021b).
402
+
403
+ Single label/rating: emotional empathy (19). These studies report manual tasks for labeling or rating empathy as a single item or score based on
404
+
405
+ a description of empathy (reported to be provided to the annotators) representing only emotional aspects of empathy. Task setups include comparisons between two items (which item is more empathetic based on the provided description of empathy), binary labels (empathetic or not), and ratings on different Likert scales. The typical descriptions of empathy provided are defined by whether or the degree to which the observer item shows, demonstrates, or expresses an ability to infer or an understanding/awareness of the target's emotions or feelings. Often these descriptions include that the observer should respond in a way that is appropriate, emotionally or otherwise, without guidelines on appropriate vs. inappropriate empathetic responses. Some descriptions refer to emotion sharing, such as saying the observer should manifest, share, or experience the target's emotions (Zhu et al., 2022). Some studies appear to use sympathy and empathy interchangeably (Rashkin et al., 2019; Lin et al., 2019). We determined this category to be the best fit for Buechel et al. (2018)'s study, in which the single empathy ratings are based on a multi-item questionnaire focused on emotions. All papers: Alam et al. (2016b); Alam et al. (2016a); Alam et al. (2018); Buechel et al. (2018); Rashkin et al. (2019); Lin et al. (2019); Smith et al. (2020); Majumder et al. (2020); Naous et al. (2020); Phy et al. (2020); Li et al. (2020a); Zeng et al. (2021); Wu et al. (2021a); Shen et al. (2021); Xie and Pu (2021); Sabour et al. (2021); Zheng et al. (2021); Naous et al. (2021); Zhu et al. (2022).
406
+
407
+ Single label/rating: no specification (8). These studies refer to or report a manual task performed to label or rate empathy without a clear description of empathy upon which the task was based. Task setups include asking annotators to label items as "empathic responses" and to compare two items of observer text (empathetic response generation output) to judge which is "more empathetic." Four of these studies ask annotators to rate empathy on Likert scales (3-point, 4-point, 5-point, and 7-point). A few studies indicated they recruited annotators with linguistics or psychology backgrounds (Tu et al., 2022). One study describes a procedure for familiarizing the annotators with the concept based on selected papers from psychology and unifying their understanding with the help of psychologists (Khanpour et al., 2017). All papers: Abdul-Mageed et al. (2017); Khanpour et al. (2017); Kim et al. (2021); Gao et al. (2021); Chen et al. (2021);
408
+
409
+ Ishii et al. (2021); Jang et al. (2021); Tu et al. (2022).
410
+
411
+ Heuristic empathy labels or ratings (4). These studies include heuristic methods to gather or label empathetic data automatically. One study created an "empathy lexicon" based on associations with single-item empathy ratings on a prior dataset (Sedoc et al., 2020). Another built a dataset by training a model on data labeled with a scheme of empathetic response intents (Welivita et al., 2021). One study selected conversations from two subreddits (r/happy and r/offmychest) and manually labeled samples from them and a control group (r/CausalConversations) as empathetic or non-empathetic (1 or 0), and compared the averages of the selected subreddits to the control to demonstrate that they were more empathetic on average (Zhong et al., 2020). Another considered an "empathetic language style" to be the language style of the "more empathetic" of two Myers-Briggs Type Indicator (MBTI) personality types (Vanderlyn et al., 2021). We labeled Rashkin et al. (2019)'s study based on their human evaluation of the empathetic response generation model. However, we also consider their data curation, in which crowdworkers have conversations grounded on specific emotions, to be a heuristic-based approach. All papers: Sedoc et al. (2020); Zhong et al. (2020); Welivita et al. (2021); Vanderlyn et al. (2021).
412
+
413
+ Role labeling (2). Instead of labeling, rating, or associating behaviors with empathy, these studies label target and observer roles (seeker vs. provider (Hosseini and Caragea, 2021a) and seeking empathy vs. providing empathy (Hosseini and Caragea, 2021b)). All papers: Hosseini and Caragea (2021b); Hosseini and Caragea (2021a).
414
+
415
+ Only automatic or no manual evaluation (4). This category includes studies that report empathetic systems or results without reporting manual evaluation (Siddique et al., 2017; Zhou et al., 2020; Firdaus et al., 2020) or that evaluate empathy automatically only (Tsai et al., 2021). All papers: Siddique et al. (2017); Zhou et al. (2020); Firdaus et al. (2020); Tsai et al. (2021).
416
+
417
+ Buechel/WASSA (15). All papers: Fornaciari et al. (2021); Tafreshi et al. (2021); Butala et al. (2021); Vettigli and Sorgente (2021); Mundra et al. (2021); Kulkarni et al. (2021); Guda et al. (2021); Qian et al. (2022); Chen et al. (2022); Del Arco et al. (2022); Vasava et al. (2022); Ghosh et al. (2022);
418
+
419
+ Lahnala et al. (2022); Barriere et al. (2022).
420
+
421
+ Not categorized (38). Not empathy task or does not evaluate. All papers: Suarez et al. (2012); Castellano et al. (2013); Bhargava et al. (2013); Denis et al. (2014); Fung et al. (2016a); Hastie et al. (2016); Fung et al. (2016b); Addawood et al. (2017); Iserman and Ireland (2017); Hazarika et al. (2018a); Hazarika et al. (2018b); Guerini et al. (2018); Zhou and Wang (2018); Mahajan and Shaikh (2019); Demasi et al. (2019); Demszky et al. (2020); Shen and Feng (2020); Shuster et al. (2020); Inoue et al. (2020); Wang et al. (2020); Roller et al. (2021); Hu et al. (2021a); Varshney et al. (2021); Yoo et al. (2021); Inoue et al. (2021); Guo and Choi (2021); Lu et al. (2021); Hu et al. (2021b); Li et al. (2022); Maheshwari and Varma (2022); Falk and Lapesa (2022); Ide and Kawahara (2022); Bhandari and Goyal (2022); Stephan (2015); Langedijk and Ham (2021); Pruksachatkun et al. (2019); Sabour et al. (2021).
422
+
423
+ # C Emotion recognition, contagion, and mimicry
424
+
425
+ Two underlying processes of emotional empathy, emotion recognition and emotional contagion, are closely related (Shamay-Tsoory, 2011). Emotional contagion ("catching feelings") refers to when an observer's brain activates similarly to a target's (e.g., perception of pain), where the observer lacks awareness of the origin of their emotion (Decety and Lamm, 2006; Ickes, 2011). Research on the human mechanism of imitating an affective state (Goldman, 1993; Carr et al., 2003) relates to some response generation approaches in NLP (Majumder et al., 2020). However, mimicking emotions without experiencing contagion is a separate concept referred to as "mimopathy" (Ickes, 2011).
426
+
427
+ # D Non-English Datasets
428
+
429
+ Arabic (Naous et al., 2021), Italian (Alam et al., 2018; Sanguinetti et al., 2020), Chinese (Sun et al., 2021), Japanese (Ito et al., 2020; Sanguinetti et al., 2020), German (Wambsganss et al., 2021).
acriticalreflectionandforwardperspectiveonempathyandnaturallanguageprocessing/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8330d6d831b62989cbc70575bb4fe4aa00a13f5aea70a0de84de332c79a9e94a
3
+ size 219668
acriticalreflectionandforwardperspectiveonempathyandnaturallanguageprocessing/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:759f2365340cddd91d42185f7dc4655d64545f5512df3a80a50ef3a6f9f77141
3
+ size 509260
activelearningforabstractivetextsummarization/fde33ad9-4bb7-4be9-8522-7653659dd8a5_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:92c3bbf251081478b8d60b830643f56173f64ff2d01d89359c3cbb47eda8f951
3
+ size 134432
activelearningforabstractivetextsummarization/fde33ad9-4bb7-4be9-8522-7653659dd8a5_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b06985b20715c432db235aaa6e12112523defef496f954df44df0247111af515
3
+ size 172353
activelearningforabstractivetextsummarization/fde33ad9-4bb7-4be9-8522-7653659dd8a5_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d6bc46d769824f93d2f2d7fc3f16051e77aeee75972616b55cb458ad9d904e37
3
+ size 1949949
activelearningforabstractivetextsummarization/full.md ADDED
@@ -0,0 +1,631 @@
 
 
 
 
1
+ # Active Learning for Abstractive Text Summarization
2
+
3
+ Akim Tsvigun $^{1,2}$ , Ivan Lysenko $^{2}$ , Danila Sedashov $^{2}$ , Ivan Lazichny $^{1}$ , Eldar Damirov $^{2,4}$ , Vladimir Karlov $^{2}$ , Artemy Belousov $^{2}$ , Leonid Sanochkin $^{1,2}$ , Maxim Panov $^{7}$ , Alexander Panchenko $^{3}$ , Mikhail Burtsev $^{1,5}$ , Artem Shelmanov $^{1,6,8}$
4
+
5
+ $^{1}$ AIRI, $^{2}$ HSE, $^{3}$ Skoltech, $^{4}$ SberDevices, $^{5}$ MIPT, $^{6}$ MBZUAI, $^{7}$ TII,
6
+
7
+ $^{8}$ ISP RAS Research Center for Trusted Artificial Intelligence
8
+
9
+ {tsvigun, shelmanov}@airi.net, artem.shelmanov@mbzuai.ac.ae
10
+
11
+ # Abstract
12
+
13
+ Construction of human-curated annotated datasets for abstractive text summarization (ATS) is very time-consuming and expensive because creating each instance requires a human annotator to read a long document and compose a shorter summary that would preserve the key information relayed by the original document. Active Learning (AL) is a technique developed to reduce the amount of annotation required to achieve a certain level of machine learning model performance. In information extraction and text classification, AL can reduce the amount of required labor several-fold. Despite its potential for aiding expensive annotation, as far as we know, there have been no effective AL query strategies for ATS. This stems from the fact that many AL strategies rely on uncertainty estimation, while, as we show in our work, uncertain instances are usually noisy, and selecting them can degrade the model performance compared to passive annotation. We address this problem by proposing the first effective query strategy for AL in ATS based on diversity principles. We show that given a certain annotation budget, using our strategy in AL annotation helps to improve the model performance in terms of ROUGE and consistency scores. Additionally, we analyze the effect of self-learning and show that it can further increase the performance of the model.
14
+
15
+ # 1 Introduction
16
+
17
+ Abstractive text summarization (ATS) aims to compress a document into a brief yet informative and readable summary, which would retain the key information of the original document. State-of-the-art results in this task are achieved by neural seq-to-seq models (Lewis et al., 2020; Zhang et al., 2020; Qi et al., 2020; Guo et al., 2021; Liu and Liu, 2021) based on the Transformer architecture (Vaswani et al., 2017). Training a model for ATS requires a dataset that contains pairs of original documents and their short summaries, which are usually written by human annotators.
18
+
19
+ Manually composing a summary is a very tedious task, which requires one to read a long original document, select crucial information, and finally write a small text. Each of these steps is very time-consuming, resulting in the fact that constructing each instance in annotated corpora for text summarization is very expensive.
20
+
21
+ Active Learning (AL; Cohn et al. (1996)) is a well-known technique that helps to substantially reduce the amount of annotation required to achieve a certain level of machine learning model performance. For example, in tasks related to named entity recognition, researchers report annotation reduction by 2-7 times with a loss of only $1\%$ of F1-score (Settles and Craven, 2008a). This makes AL especially important when annotation is expensive, which is the case for ATS.
22
+
23
+ AL works iteratively: on each iteration, (1) a model is trained on the so far annotated dataset; (2) the model is used to select some informative instances from a large unlabeled pool using a query strategy; (3) informative instances are presented to human experts, who provide gold-standard annotations; (4) finally, the instances with annotations are added to the labeled dataset, and a new iteration begins. Traditional AL query strategies are based on uncertainty estimation techniques (Lewis and Gale, 1994; Scheffer et al., 2002). The hypothesis is that the most uncertain instances for the model trained on the current iteration are informative for training the model on the next iteration. We argue that uncertain predictions of ATS models (uncertain summaries) are not more useful than randomly selected instances. Moreover, they usually introduce more noise and are detrimental to the performance of summarization models. Therefore, it is not possible to straightforwardly adapt the uncertainty-based approach to AL in text summarization.
24
+
25
+ In this work, we present the first effective query strategy for AL in ATS, which we call in-domain diversity sampling (IDDS). It is based on the idea
26
+
27
+ of the selection of diverse instances that are semantically dissimilar from already annotated documents but at the same time similar to the core documents of the considered domain. The empirical investigation shows that while techniques based on uncertainty cannot outperform the random sampling baseline, IDDS substantially increases the performance of summarization models. We also experiment with the self-learning technique that leverages a training dataset expanded with summaries automatically generated by an ATS model trained only on the human-annotated dataset. This approach shows improvements when one needs to generate short summaries. The code for reproducing the experiments is available online<sup>1</sup>. The contributions of this paper are the following:
28
+
29
+ - We propose the first effective AL query strategy for ATS that beats the random sampling baseline.
30
+ - We conduct a vast empirical investigation and show that in contrast to such tasks as text classification and information extraction, in ATS, uncertainty-based AL query strategies cannot outperform the random sampling baseline.
31
+ - To our knowledge, we are the first to investigate the effect of self-learning in conjunction with AL for ATS and demonstrate that it can substantially improve results on the datasets with short summaries.
32
+
33
+ # 2 Related Work
34
+
35
+ Abstractive Text Summarization. The advent of seq2seq models (Sutskever et al., 2014) along with the development of the attention mechanism (Bahdanau et al., 2015) consolidated neural networks as a primary tool for ATS. The attention-based Transformer (Vaswani et al., 2017) architecture has formed the basis of many large-scale pre-trained language models that achieve state-of-the-art results in ATS (Lewis et al., 2020; Zhang et al., 2020; Qi et al., 2020; Guo et al., 2021). Recent efforts in this area mostly focus on minor modifications of the existing architectures (Liu and Liu, 2021; Aghajanyan et al., 2021; Liu et al., 2022).
36
+
37
+ Active Learning in Natural Language Generation. While many recent works leverage AL for text classification or sequence-tagging tasks (Yuan et al., 2020; Zhang and Plank, 2021; Shelmanov
38
+
39
+ et al., 2021; Margatina et al., 2021), little attention has been paid to natural language generation tasks. Among the works in this area, it is worth mentioning (Haffari et al., 2009; Ambati, 2012; Ananthakrishnan et al., 2013). These works focus on neural machine translation (NMT) and suggest several uncertainty-based query strategies for AL. Peris and Casacuberta (2018) successfully apply AL in the interactive machine translation. Liu et al. (2018) exploit reinforcement learning to train a policy-based query strategy for NMT. Although there is an attempt to apply AL in ATS (Gidiotis and Tsoumakas, 2021), to the best of our knowledge, there is no published work on this topic yet.
40
+
41
+ Uncertainty Estimation in Natural Language Generation. A simple yet effective approach for uncertainty estimation in generation is proposed by Wang et al. (2019). They use a combination of expected translation probability and variance of the translation probability, demonstrating that it can handle noisy instances better and noticeably improve the quality of back-translation. Malinin and Gales (2021) investigate the ensemble-based measures of uncertainty for NMT. Their results demonstrate the superiority of these methods for OOD detection and for identifying generated translations of low quality. Xiao et al. (2020) propose a method for uncertainty estimation of long sequences of discrete random variables, which they dub "BLEU Variance", and apply it for OOD sentence detection in NMT. It is also shown to be useful for identifying instances of questionable quality in ATS (Gidiotis and Tsoumakas, 2022). In this work, we investigate these uncertainty estimation techniques in AL and show that they do not provide any benefits over annotating randomly selected instances.
42
+
43
+ Diversity-based Active Learning. Along with the uncertainty-based query strategies, a series of diversity-based methods have been suggested for AL (Kim et al., 2006; Sener and Savarese, 2018; Ash et al., 2019; Citovsky et al., 2021). The most relevant work among them is (Kim et al., 2006), where the authors propose to use a Maximal Marginal Relevance (MMR; Carbonell and Goldstein (1998))-based function as a query strategy in AL for named entity recognition. This function aims to capture uncertainty and diversity and selects instances for annotation based on these two perspectives. We adapt this strategy for the ATS task and compare the proposed method with it.
44
+
45
+ # 3 Uncertainty-based Active Learning for Text Generation
46
+
47
+ In this section, we give a brief formal definition of the AL procedure for text generation and uncertainty-based query strategies. Here and throughout the rest of the paper, we denote an input sequence as $\mathbf{x} = (x_{1}\dots x_{m})$ and the output sequence as $\mathbf{y} = (y_{1}\dots y_{n})$ , with $m$ and $n$ being lengths of $\mathbf{x}$ and $\mathbf{y}$ respectively.
48
+
49
+ Let $\mathcal{D} = \{(\mathbf{x}^{(k)},\mathbf{y}^{(k)})\}_{k = 1}^{K}$ be a dataset of pairs (documents, summaries). Consider a probabilistic model $p_{\mathbf{w}}(\mathbf{y}\mid \mathbf{x})$ parametrized by a vector w. Usually, $p_{\mathbf{w}}(\mathbf{y}\mid \mathbf{x})$ is a neural network, while the parameter estimation is done via the maximum likelihood approach:
50
+
51
+ $$
52
+ \hat {\mathbf {w}} = \underset {\mathbf {w}} {\arg \max } L (\mathcal {D}, \mathbf {w}), \tag {1}
53
+ $$
54
+
55
+ where $L(\mathcal{D},\mathbf{w}) = \sum_{k = 1}^{K}\log p_{\mathbf{w}}(\mathbf{y}^{(k)}\mid \mathbf{x}^{(k)})$ is log-likelihood.
56
+
57
+ Many AL methods are based on greedy query strategies that select instances for annotation, optimizing a certain criterion $\mathcal{A}(\mathbf{x}\mid \mathcal{D},\hat{\mathbf{w}})$ called an acquisition function:
58
+
59
+ $$
60
+ \mathbf {x} ^ {*} = \arg \max _ {\mathbf {x}} \mathcal {A} (\mathbf {x} \mid \mathcal {D}, \hat {\mathbf {w}}). \tag {2}
61
+ $$
62
+
63
+ The selected instance $\mathbf{x}^*$ is then annotated with a target value $\mathbf{y}^*$ (document summary) and added to the training dataset: $\mathcal{D} := \mathcal{D} \cup (\mathbf{x}^*, \mathbf{y}^*)$ . Subsequently, the model parameters $\mathbf{w}$ are updated and the instance selection process continues until the desired model quality is achieved or the available annotation budget is depleted.
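
The following toy sketch illustrates this greedy selection loop; the acquisition and annotation functions are illustrative stand-ins (in the actual setup, the acquisition score comes from one of the strategies below and the annotation is a human-written summary):

```python
# A self-contained toy sketch of greedy acquisition (Eq. 2): repeatedly pick the
# unlabeled document with the highest acquisition score, "annotate" it, and add it
# to the labeled set. The scoring and annotation functions here are stand-ins.
def greedy_active_learning(unlabeled, acquisition, annotate, budget):
    labeled = []
    for _ in range(budget):
        if not unlabeled:
            break
        best = max(unlabeled, key=acquisition)   # x* = argmax A(x | D, w_hat)
        labeled.append((best, annotate(best)))   # obtain the gold-standard summary y*
        unlabeled.remove(best)
        # in practice, the model would be re-trained here and `acquisition` updated
    return labeled

# Toy usage: score documents by length, "annotate" with their first sentence.
docs = ["short note.", "a much longer document. it has two sentences."]
print(greedy_active_learning(docs, acquisition=len,
                             annotate=lambda d: d.split(".")[0], budget=1))
```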
64
+
65
+ The right choice of an acquisition function is crucial for AL. A common heuristic for acquisition is selecting instances with high uncertainty. Below, we consider several measures of uncertainty used in text generation.
66
+
67
+ Normalized Sequence Probability (NSP) was originally proposed by Ueffing and Ney (2007) and has been used in many subsequent works (Haffari et al., 2009; Wang et al., 2019; Xiao et al., 2020; Lyu et al., 2020). This measure is given by
68
+
69
+ $$
70
+ \operatorname {N S P} (\mathbf {x}) = 1 - \bar {p} _ {\hat {\mathbf {w}}} (\mathbf {y} \mid \mathbf {x}), \tag {3}
71
+ $$
72
+
73
+ where we define the geometric mean of probabilities of tokens predicted by the model as: $\bar{p}_{\hat{\mathbf{w}}}(\mathbf{y}|\mathbf{x}) = \exp \left\{\frac{1}{n}\log p_{\hat{\mathbf{w}}}(\mathbf{y}|\mathbf{x})\right\}$
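
For illustration, a minimal sketch of computing NSP, assuming the per-token probabilities of the generated summary are already available (the function name and inputs are ours):

```python
import math

def nsp(token_probs):
    """NSP: 1 minus the geometric mean of the predicted token probabilities (Eq. 3)."""
    if not token_probs:
        return 1.0
    mean_log_prob = sum(math.log(p) for p in token_probs) / len(token_probs)
    return 1.0 - math.exp(mean_log_prob)

# A confidently generated summary yields a low NSP value (low uncertainty):
print(nsp([0.9, 0.8, 0.85]))  # ~0.15
```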
74
+
75
+ A wide family of uncertainty measures can be derived using the Bayesian approach to modeling.
76
+
77
+ Under the Bayesian approach, it is assumed that model parameters have a prior distribution $\pi (\mathbf{w})$ . Optimization of the log-likelihood $L(\mathcal{D},\mathbf{w})$ in this case leads to the optimization of the posterior distribution of the model parameters:
78
+
79
+ $$
80
+ \pi (\mathbf {w} \mid \mathcal {D}) \propto \exp \{L (\mathcal {D}, \mathbf {w}) \} \cdot \pi (\mathbf {w}). \tag {4}
81
+ $$
82
+
83
+ Usually, the exact computation of the posterior is intractable, and to perform training and inference, a family of distributions $q_{\theta}(\mathbf{w})$ parameterized by $\theta$ is introduced. The parameter estimate $\hat{\theta}$ minimizes the KL-divergence between the true posterior $\pi (\mathbf{w}\mid \mathcal{D})$ and the approximation $q_{\hat{\theta}}(\mathbf{w})$ . Given such an approximation, several uncertainty measures can be constructed.
84
+
85
+ Expected Normalized Sequence Probability (ENSP) is proposed by Wang et al. (2019) and is also used in (Xiao et al., 2020; Lyu et al., 2020):
86
+
87
+ $$
88
+ \operatorname {E N S P} (\mathbf {x}) = 1 - \mathbb {E} _ {\mathbf {w} \sim q _ {\hat {\theta}}} \bar {p} _ {\mathbf {w}} (\mathbf {y} \mid \mathbf {x}). \tag {5}
89
+ $$
90
+
91
+ In practice, the expectation is approximated via Monte Carlo dropout (Gal and Ghahramani, 2016), i.e. averaging multiple predictions obtained with activated dropout layers in the network.
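
A hedged sketch of how ENSP could be computed with Monte Carlo dropout is shown below; it assumes a Hugging Face-style seq2seq model (e.g., BART) and its tokenizer, and is an illustration rather than the authors' exact implementation:

```python
import torch

def sequence_prob(model, tokenizer, document: str, summary: str) -> float:
    """Geometric mean of token probabilities of `summary` given `document`."""
    enc = tokenizer(document, return_tensors="pt", truncation=True)
    labels = tokenizer(summary, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        loss = model(**enc, labels=labels).loss  # mean negative log-likelihood per token
    return torch.exp(-loss).item()

def ensp(model, tokenizer, document: str, summary: str, n_samples: int = 10) -> float:
    """ENSP (Eq. 5): 1 minus the sequence probability averaged over stochastic passes."""
    model.train()  # keep dropout layers active for Monte Carlo sampling
    probs = [sequence_prob(model, tokenizer, document, summary) for _ in range(n_samples)]
    model.eval()
    return 1.0 - sum(probs) / len(probs)
```

The variance of the same Monte Carlo samples of the sequence probability gives ENSV (Eq. 6).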
92
+
93
+ Expected Normalized Sequence Variance (ENSV; Wang et al. (2019)) measures the variance of the sequence probabilities obtained via Monte Carlo dropout:
94
+
95
+ $$
96
+ \operatorname {E N S V} (\mathbf {x}) = \operatorname {V a r} _ {\mathbf {w} \sim q _ {\hat {\theta}}} \bar {p} _ {\mathbf {w}} (\mathbf {y} \mid \mathbf {x}). \tag {6}
97
+ $$
98
+
99
+ BLEU Variance (BLEUVar) is proposed by Xiao et al. (2020). It treats documents as points in some high dimensional space and uses the BLEU metric (Papineni et al., 2002) for measuring the difference between them. In such a setting, it is possible to calculate the variance of generated texts in the following way:
100
+
101
+ $$
102
+ \begin{array}{l} \operatorname {B L E U V a r} (\mathbf {x}) = \tag {7} \\ = \mathbb {E} _ {\mathbf {w} \sim q _ {\hat {\boldsymbol {\theta}}}} \mathbb {E} _ {\mathbf {y}, \mathbf {y} ^ {\prime} \sim p _ {\mathbf {w}} (\cdot | \mathbf {x})} \big (1 - \mathrm {B L E U} (\mathbf {y}, \mathbf {y} ^ {\prime}) \big) ^ {2}. \\ \end{array}
103
+ $$
104
+
105
+ The BLEU metric is calculated as a geometric mean of n-gram overlaps up to 4-grams. Consequently, when summaries consist of fewer than 4 tokens, the metric is equal to zero since there will be no common 4-grams. This problem can be mitigated with the SacreBLEU metric (Post, 2018), which smoothes the n-grams with zero counts. When we use this query strategy with the SacreBLEU metric, we refer to it as SacreBLEUvar.
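
A hedged sketch of BLEUVar over a set of summaries sampled for a single document (e.g., with Monte Carlo dropout) is given below; it uses the sacrebleu package for smoothed sentence-level BLEU, so it corresponds to the SacreBLEUvar variant, and the function name is ours:

```python
import sacrebleu

def bleuvar(sampled_summaries):
    """Average squared BLEU-based dissimilarity over ordered pairs of sampled summaries (Eq. 7)."""
    pairs = [(a, b) for i, a in enumerate(sampled_summaries)
                    for j, b in enumerate(sampled_summaries) if i != j]
    if not pairs:
        return 0.0
    total = 0.0
    for hyp, ref in pairs:
        bleu = sacrebleu.sentence_bleu(hyp, [ref]).score / 100.0  # rescale to [0, 1]
        total += (1.0 - bleu) ** 2
    return total / len(pairs)

# Diverse samples yield a high BLEUVar (high uncertainty); near-identical samples, a low one.
print(bleuvar(["the meeting is moved to friday",
               "budget review scheduled for q3",
               "the meeting is moved to friday"]))
```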
106
+
107
+ ![](images/f3fb62b86d1ce65568c1562f9f3fc91169501a773e39076007a8ac17a2f4d974.jpg)
108
+ Figure 1: The visualization of the idea behind the IDDS algorithm on the synthetic data: select instances located far from labeled data while close on average to unlabeled data.
109
+
110
+ # 4 Proposed Methods
111
+
112
+ # 4.1 In-Domain Diversity Sampling
113
+
114
+ We argue that uncertainty-based query strategies tend to select noisy instances that have little value for training ATS models. To alleviate this issue, we propose a novel query strategy named in-domain diversity sampling (IDDS). It aims to maximize the diversity of the annotated instances by selecting instances that are dissimilar from the already annotated ones. At the same time, it avoids selecting noisy outliers. Such noisy documents, which are harmful for training an ATS model, are usually semantically dissimilar from the core documents of the domain represented by the unlabeled pool. Therefore, IDDS queries instances that are dissimilar to the annotated instances but at the same time are similar to unannotated ones (Figure 1).
115
+
116
+ We propose the following acquisition function that implements the aforementioned idea (the higher the value - the higher the priority for the annotation):
117
+
118
+ $$
119
+ \operatorname {I D D S} (\mathbf {x}) = \lambda \frac {\sum_ {j = 1} ^ {| U |} s \left(\mathbf {x} , \mathbf {x} _ {j}\right)}{| U |} - (1 - \lambda) \frac {\sum_ {i = 1} ^ {| L |} s \left(\mathbf {x} , \mathbf {x} _ {i}\right)}{| L |}, \tag {8}
120
+ $$
121
+
122
+ where $s(\mathbf{x},\mathbf{x}^{\prime})$ is a similarity function between texts, $U$ is the unlabeled set, $L$ is the labeled set, and $\lambda \in [0;1]$ is a hyperparameter.
123
+
124
+ Below, we formalize the resulting algorithm of the IDDS query strategy.
125
+
126
+ 1. For each document in the unlabeled pool $\mathbf{x}$ , we obtain an embedding vector $\mathbf{e}(\mathbf{x})$ . For this purpose, we suggest using the [CLS] pooled sequence embeddings from BERT. We note that using a pre-trained checkpoint straightforwardly may lead to unreasonably high similarity scores between instances since they all belong to the same domain, which can be quite specific. We mitigate this problem by using the task-adaptive pre-training (TAPT; Gururangan et al. (2020)) on the unlabeled pool. TAPT performs several epochs of self-supervised training of the pre-trained model on the target dataset to acquaint it with the peculiarities of the data.
127
+
128
+ 2. Deduplicate the unlabeled pool. Instances with duplicates will have an overrated similarity score with the unlabeled pool.
129
+ 3. Calculate the informativeness scores using the IDDS acquisition function (8). As a similarity function, we suggest using a scalar product between document representations: $s(\mathbf{x},\mathbf{x}^{\prime}) = \langle \mathbf{e}(\mathbf{x}),\mathbf{e}(\mathbf{x}^{\prime})\rangle$ (a minimal sketch of this computation is given below).
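
A minimal sketch of the IDDS scores from Eq. (8), assuming the document embeddings (e.g., [CLS] vectors of the TAPT-ed BERT) are precomputed and the pool is deduplicated; the function and variable names are ours:

```python
import numpy as np

def idds_scores(unlabeled_emb: np.ndarray, labeled_emb: np.ndarray, lam: float = 0.67) -> np.ndarray:
    """unlabeled_emb: (|U|, d); labeled_emb: (|L|, d), possibly empty.
    Returns one score per unlabeled document; higher means higher annotation priority."""
    # Average dot-product similarity to the unlabeled pool (representativeness term).
    sim_pool = unlabeled_emb @ unlabeled_emb.mean(axis=0)
    # Average dot-product similarity to the labeled set (diversity penalty).
    if labeled_emb.shape[0] > 0:
        sim_labeled = unlabeled_emb @ labeled_emb.mean(axis=0)
    else:
        sim_labeled = np.zeros(unlabeled_emb.shape[0])
    return lam * sim_pool - (1.0 - lam) * sim_labeled

# Toy usage: query the 10 highest-scoring documents for annotation.
rng = np.random.default_rng(0)
U_emb, L_emb = rng.normal(size=(100, 8)), rng.normal(size=(5, 8))
query_idx = np.argsort(-idds_scores(U_emb, L_emb))[:10]
```

Note that with the dot product, averaging the similarities is equivalent to taking the similarity with the mean embedding of the corresponding pool, which is what the sketch exploits.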
130
+
131
+ The idea of IDDS is close to the MMR-based strategy proposed in (Kim et al., 2006). Yet, despite the resemblance, IDDS differs from it in several crucial aspects. The MMR-based strategy focuses on the uncertainty and diversity components. However, as shown in Section 6.1, selecting instances by uncertainty leads to worse results compared to random sampling. Consequently, instead of using uncertainty, IDDS leverages the unlabeled pool to capture the representativeness of the instances. Furthermore, IDDS differs from the MMR-based strategy in how the diversity component is calculated. MMR directly specifies the usage of the "max" aggregation function for calculating the similarity with the already annotated data, while IDDS uses "average" similarity instead and achieves better results as shown in Section 6.2.
132
+
133
+ We note that IDDS does not require retraining an acquisition model in contrast to uncertainty-based strategies since document vector representations and document similarities can be calculated before starting the AL annotation process. This results in the fact that no heavy computations during AL are required. Consequently, IDDS does not harm the interactiveness of the annotation process, which is a common bottleneck (Tsvigun et al., 2022).
134
+
135
+ # 4.2 Self-learning
136
+
137
+ Pool-based AL assumes that there is a large unlabeled pool of data. We propose to use this data source during AL to improve text summarization models with the help of self-learning. We train the model on the labeled data and generate summaries for the whole unlabeled pool. Then, we concatenate the generated summaries with the labeled set and use this data to fine-tune the final model. We note that generated summaries can be noisy: irrelevant, grammatically incorrect, or factually inconsistent, and can harm the model performance. We detect such instances using the uncertainty estimates obtained via NSP scores and exclude the $k_l\%$ of instances with the lowest scores and the $k_h\%$ of instances with the highest scores. We choose this uncertainty metric because, according to our experiments in Section 6.1, high NSP scores correspond to the noisiest instances. We note that adding the filtration step does not introduce additional computational overhead, since the NSP scores are calculated simultaneously with the summary generation for self-learning.
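
A hedged sketch of this filtering step is shown below: it drops the $k_l\%$ of pseudo-labeled summaries with the lowest NSP scores and the $k_h\%$ with the highest ones, keeping the rest as additional training data (the function name is ours):

```python
import numpy as np

def filter_pseudo_labels(nsp_scores, k_l: float = 10.0, k_h: float = 1.0) -> np.ndarray:
    """Return indices of generated summaries kept for self-learning."""
    scores = np.asarray(nsp_scores, dtype=float)
    low_cut = np.percentile(scores, k_l)           # boundary of the lowest-score tail
    high_cut = np.percentile(scores, 100.0 - k_h)  # boundary of the highest-score tail
    return np.where((scores >= low_cut) & (scores <= high_cut))[0]

# Example: with k_l=10 and k_h=1, roughly 11% of pseudo-labeled instances are dropped.
keep = filter_pseudo_labels(np.random.default_rng(0).random(1000))
print(len(keep))  # around 890
```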
138
+
139
+ # 5 Experimental Setup
140
+
141
+ # 5.1 Active Learning Setting
142
+
143
+ We evaluate IDDS and other query strategies using the conventional scheme of AL annotation emulation applied in many previous works (Settles and Craven, 2008b; Shen et al., 2017; Siddhant and Lipton, 2018; Shelmanov et al., 2021; Dor et al., 2020). For uncertainty-based query strategies and random sampling, we start from a small annotated seeding set selected randomly. This set is used for fine-tuning the summarization model on the first iteration. For IDDS, the seeding set is not used, since this query strategy does not require fine-tuning the model to make a query. On each AL iteration, we select the top-k instances from the unlabeled pool according to the informativeness score obtained with a query strategy. The selected instances with their gold-standard summaries are added to the so-far annotated set and are excluded from the unlabeled pool. On each iteration, we fine-tune a summarization model from scratch and evaluate it on a held-out test set. We report the performance of the model on each iteration to demonstrate the dynamics of the model performance depending on the invested annotation effort.
144
+
145
+ The query size (the number of instances selected for annotation on each iteration) is set to 10 documents. We repeat each experiment 9 times with
146
+
147
+ different random seeds and report the mean and the standard deviation of the obtained scores. For the WikiHow and PubMed datasets, on each iteration, we use a random subset from the unlabeled pool since generating predictions for the whole unlabeled dataset is too computationally expensive. In the experiments, the subset size is set to 10,000 for WikiHow and 1,000 for PubMed.
148
+
149
+ # 5.2 Baselines
150
+
151
+ We use random sampling as the main baseline. To our knowledge, in the ATS task, this baseline has not been outperformed by any other query strategy yet. In this baseline, an annotator is given randomly selected instances from the unlabeled pool, which means that AL is not used at all. We also report results of uncertainty-based query strategies and an MMR-based query strategy (Kim et al., 2006) that is shown to be useful for named entity recognition.
152
+
153
+ # 5.3 Metrics
154
+
155
+ Quality of Text Summarization. To measure the quality of the text summarization model, we use the commonly adopted ROUGE metric (Lin, 2004). Following previous works (See et al., 2017; Nallapati et al., 2017; Chen and Bansal, 2018; Lewis et al., 2020; Zhang et al., 2020), we report ROUGE-1, ROUGE-2, and ROUGE-L. Since we found the dynamics of these metrics coinciding, for brevity, in the main part of the paper, we keep only the results with the ROUGE-1 metric. The results with other metrics are presented in the appendix.
156
+
157
+ Factual Consistency. Inconsistency (hallucination) of the generated summaries is one of the most crucial problems in summarization (Kryscinski et al., 2020; Huang et al., 2021; Nan et al., 2021; Goyal et al., 2022). Therefore, in addition to the ROUGE metrics, we measure the factual consistency of the generated summaries with the original documents. We use the SummaC-ZS (Laban et al., 2022) - a state-of-the-art model for inconsistency detection. We set granularity = "sentence" and model_name = "vitc".
158
+
159
+ # 5.4 Datasets
160
+
161
+ We experiment with three datasets widely-used for evaluation of ATS models: AESLC (Zhang and Tetreault, 2019), PubMed (Cohan et al., 2018), and WikiHow (Koupaee and Wang, 2018). AESLC consists of emails with their subject lines as summaries. WikiHow contains articles from WikiHow pages
162
+
163
+ ![](images/e7e382cc159664ac50c97d665f06b87b5df397f197817d7d3fe3c608d5d37ed8.jpg)
164
+ a) AESLC dataset
165
+
166
+ ![](images/1d34aaf91e1bb6c3d8cb88d7de96e3d74d0cd6923fdb87653a6058345d84c986.jpg)
167
+ b) WikiHow dataset
168
+
169
+ ![](images/07c398b89d0e3b6fb88f683ab14851f7a4d313c94a392685ea9054fabc6af47f.jpg)
170
+ c) PubMed dataset
171
+
172
+ ![](images/2eb1ed70e9568be1d3850cdeafe25fe67e734b2ca8e98a8645ede1219fc69402.jpg)
173
+ Figure 2: ROUGE-1 scores of BART-base with various uncertainty-based strategies compared with random sampling (baseline) on various datasets. Full results are provided in Figures 6, 8, 9, respectively.
174
+ a) AESLC dataset
175
+ Figure 3: ROUGE-1 scores of BART-base with the IDDS strategy compared with random sampling (baseline) and NSP (uncertainty-based strategy) on various datasets. Full results are provided in Figures 10, 12 and 13, respectively.
176
+
177
+ ![](images/83a0fd4e3aa84880f4925eee8ed6c3c2c5461480701565a4935e1ac77e31bac1.jpg)
178
+ b) WikiHow dataset
179
+
180
+ ![](images/0eef6df3a53258606848e7593eed533386c77364a2510ce1088bbd73e5b2add7.jpg)
181
+ c) PubMed dataset
182
+
183
+ with their headlines as summaries. PubMed (Cohan et al., 2018) is a collection of scientific articles from the PubMed archive with their abstracts. The choice of datasets is motivated by the fact that AESLC contains short documents and summaries, WikiHow contains medium-sized documents and summaries, and PubMed contains long documents and summaries. We also use two non-intersecting subsets of the Gigaword dataset (Graff et al., 2003; Rush et al., 2015) of sizes 2,000 and 10,000 for hyperparameter optimization of ATS models and additional experiments with self-learning, respectively. Gigaword consists of news articles and their headlines representing summaries. The dataset statistics are presented in Table 2 in Appendix A.
184
+
185
+ # 5.5 Models and Hyperparameters
186
+
187
+ We conduct experiments using the state-of-the-art text summarization models: BART (Lewis et al., 2020) and PEGASUS (Zhang et al., 2020). In all experiments, we use the "base" pre-trained version of BART and the "large" version of PEGASUS. Most of the experiments are conducted with the BART model, while PEGASUS is only used for the AESLC dataset (results are presented in Appendices B, C) since running it on two other datasets in AL introduces a computational bottleneck.
188
+
189
+ We tune hyperparameter values of ATS models using the ROUGE-L score on the subset of the
190
+
191
+ Gigaword dataset. The hyperparameter values are provided in Table 3 in Appendix A.
192
+
193
+ For the IDDS query strategy, we use $\lambda = 0.67$ . We analyze the effect of different values of this parameter in Section 6.2.
194
+
195
+ # 6 Results and Discussion
196
+
197
+ # 6.1 Uncertainty-based Query Strategies
198
+
199
+ In this series of experiments, we demonstrate that selected uncertainty-based query strategies are not suitable for AL in ATS. Figure 2a and Figures 6, 7 in Appendix B present the results on the AESLC dataset. As we can see, none of the uncertainty-based query strategies outperform the random sampling baseline for both BART and PEGASUS models. NSP and ENSP strategies demonstrate the worst results with the former having the lowest performance for both ATS models. Similar results are obtained for the WikiHow and PubMed datasets (Figures 2b and 2c).
200
+
201
+ In some previous work on NMT, uncertainty-based query strategies outperform the random sampling baseline (Haffari et al., 2009; Ambati, 2012; Ananthakrishnan et al., 2013). Their low results for ATS compared to NMT might stem from the differences between these tasks. Both NMT and ATS are seq2seq tasks and can be solved via similar models. However, NMT is somewhat easier, since the
202
+
203
+ <table><tr><td>AL Strat.</td><td>Document</td><td>Golden Summary</td><td>Gener. Summ.</td></tr><tr><td>NSP</td><td>Aquarius - Horoscope Friday, September 8, 2000 by Astronet.com. Powerful forces are at work to challenge you (...) Don’t let hurt feelings prevent you from (...)</td><td>These things are beginning to scare me...</td><td>Invitation – Aquarius</td></tr><tr><td>NSP</td><td>Prod Area and Long Haul k# Volume Rec Del 3.6746 5000 St 62 (...) #6563 PPL (Non NY) should have this contract tomorrow. (...) 3.5318 6500 Leidy PSE&amp;G</td><td>TRCO capacity for Sep</td><td>Prod Area</td></tr><tr><td>IDDS</td><td>Greg, I wanted to forward this letter to you that I received from a good friend of mine who is interested in discussing (...) with Enron. (...) set up a meeting (...) Sincerely,</td><td>Meeting with Enron Networks</td><td>n/a</td></tr><tr><td>IDDS</td><td>Larry, Could I have the price for a 2 day swing peaker option at NGI Chicago, that can be exercised on any day in February 2002. Strike is FOM February, (...)</td><td>Peaker price for NGI Chicago Feb</td><td>n/a</td></tr></table>
204
+
205
+ Table 1: Examples of instances selected with the NSP and IDDS strategies. Tokens overlapping with the source document are highlighted with green. Tokens that refer to paraphrasing a part of the document and the corresponding part are highlighted with blue. Tokens that cannot be derived from the document are highlighted with red.
206
+
207
+ output is usually of similar length as the input and its variability is smaller. It is much easier to train a model to reproduce an exact translation rather than make it generate an exact summary. Therefore, uncertainty estimates of ATS models are way less reliable than estimates of translation models. These estimates often select for annotation noisy documents that are useless or even harmful for training summarization models. Table 1 reveals several documents selected by the worst-performing strategy NSP on AESLC. We can see that NSP selects domain-irrelevant documents or very specific ones. Their summaries can hardly be restored from the source documents, which means that they most likely have little positive impact on the generalization ability of the model. More examples of instances selected by different query strategies are presented in Table 5 in Appendix E.
208
+
209
+ # 6.2 In-Domain Diversity Sampling
210
+
211
+ In this series of experiments, we analyze the proposed IDDS query strategy. Figure 3a and Figures 10, 11 in Appendix C show the performance of the models with various query strategies on AESLC. We can see that the proposed strategy outperforms random sampling on all iterations for both ATS models and, consequently, outperforms the uncertainty-based strategy NSP. IDDS demonstrates similar results on the WikiHow and PubMed datasets, outperforming the baseline by a large margin as depicted in Figures 3b and 3c. We also report the improvement of IDDS over random sampling in percentage on several AL iterations in Table 4. We can see that IDDS provides an especially large improvement in the cold-start AL scenario when the amount of labeled data is very small.
212
+
213
+ We carry out several ablation studies for the proposed query strategy. First, we investigate
214
+
215
+ the effect of various models for document embeddings construction and the necessity of performing TAPT. Figures 17 and 18 in Appendix F illustrate that TAPT substantially enhances the performance of IDDS. Figure 17 also shows that the BERT-base encoder appears to be better than SentenceBERT (Reimers and Gurevych, 2019) and LongFormer (Beltagy et al., 2020).
216
+
217
+ Second, we try various functions for calculating the similarity between instances. Figures 19, 20 in Appendix F compare the originally used dot product with Mahalanobis and Euclidean distances on AESLC and WikiHow. On AESLC, IDDS with Mahalanobis distance persistently demonstrates lower performance. IDDS with the Euclidean distance shows a performance drop on the initial AL iterations compared to the dot product. On WikiHow, however, all the variants perform roughly the same. Therefore, we suggest keeping the dot product for computing the document similarity in IDDS since it provides the most robust results across the datasets.
218
+
219
+ We also compare the dot product with its normalized version - cosine similarity on AESLC and PubMed, see Figures 21 and 22 in Appendix F. On both datasets, adding normalization leads to substantially worse results on the initial AL iterations. We deem that this happens because normalization may damage the representativeness component since the norm of the embedding can be treated as a measure of the representativeness of the corresponding document.
220
+
221
+ Third, we investigate how different values for the lambda coefficient influence the performance of IDDS. Table 7 and Figure 23 in Appendix F shows that smaller values of $\lambda \in \{0,0.33,0.5\}$ substantially deteriorate the performance. Smaller values correspond to selecting instances that are highly
222
+
223
+ dissimilar from the documents in the unlabeled pool, which leads to picking many outliers. Higher values lead to the selection of instances from the core of the unlabeled dataset, but also very similar to the annotated part. This also results in a lower quality on the initial AL iterations. The best and most stable results are obtained with $\lambda = 0.67$ .
224
+
225
+ Fourth, we compare IDDS with the MMR-based strategy suggested in (Kim et al., 2006). Since it uses uncertainty, it requires a trained model to calculate the scores. Consequently, the initial query is taken randomly as no trained model is available on the initial AL iteration. Therefore, we use a modification in which the initial query is done with IDDS, because it provides substantially better results on the initial iteration. We also experiment with different values of the $\lambda$ hyperparameter of the MMR-based strategy. Figure 24 illustrates a large gap in performance between IDDS and the MMR-based strategy regardless of the initialization or $\lambda$ on AESLC. We believe that this is attributed to the fact that strategies incorporating uncertainty are harmful to AL in ATS as shown in Section 6.1.
226
+
227
+ Finally, we compare "aggregation" functions for estimating the similarity between a document and a collection of documents (labeled and unlabeled pools). Following the MMR-based strategy (Kim et al., 2006), instead of calculating the average similarity between the embedding of the target document and the embeddings of documents from the labeled set, we calculate the maximum similarity. We also try replacing the "average" aggregation function with "maximum" in both IDDS components in (8). Figures 25 and 26 in Appendix F show that the average leads to better performance on both AESLC and WikiHow datasets.
228
+
229
+ The importance of diversity sampling is illustrated in Table 6 in Appendix E. We can see that NSP-based query batches contain a large number of overlapping instances. This may partly explain the poor performance of the NSP strategy since almost $9\%$ of labeled instances are redundant. IDDS, on the contrary, does not have instances with overlapping summaries inside batches at all.
230
+
231
+ # 6.3 Self-learning
232
+
233
+ In this section, we investigate the effect of self-learning in the AL setting. Figures 4a, 4b illustrate the effect of self-learning on the AESLC and Gigaword datasets. For this experiment, we use $k_{l} = 10$ , $k_{h} = 1$ , filtering out $11\%$ of automatically generated summaries.
234
+
235
+ In both cases: with AL and without, adding automatically generated summaries of documents from the unlabeled pool to the training set improves the performance of the summarization model. On AESLC, the best results are obtained with both AL and self-learning: their combination achieves up to $58\%$ improvement in all ROUGE metrics compared to using passive annotation without self-learning.
236
+
237
+ The same experiment on the WikiHow dataset is presented in Figure 4c. To make sure that the quality is not deteriorated due to the addition of noisy uncertain instances, we use $k_{l} = 38$ , $k_{h} = 2$ for this experiment, filtering out $40\%$ of automatically generated summaries. On this dataset, self-learning reduces the performance for both cases (with AL and without). We deem that the benefit of self-learning depends on the length of the summaries in the dataset. AESLC and Gigaword contain very short summaries (less than 13 tokens on average, see Table 2). Since the model is capable of generating short texts that are grammatically correct and logically consistent, such data augmentation does not introduce much noise into the dataset, resulting in performance improvement. WikiHow, on the contrary, contains long summaries (77 tokens on average). Generation of long, logically consistent, and grammatically correct summaries is still a challenging task even for the state-of-the-art ATS models. Therefore, the generated summaries are of low quality, and using them as an additional training signal deteriorates the model performance. Consequently, we suggest using self-learning only if the dataset consists of relatively short texts. We leave a more detailed investigation of this topic for future research.
238
+
239
+ # 6.4 Consistency
240
+
241
+ We analyze how various AL strategies and self-learning affect the consistency of model output in two ways. We measure the consistency of the generated summaries with the original documents on the test set on each AL iteration. Figure 5 shows that the model trained on instances queried by IDDS generates the most consistent summaries across all considered AL query strategies on AESLC. On the contrary, the model trained on the instances selected by the uncertainty-based NSP query strategy generates summaries with the lowest consistency.
242
+
243
+ Figure 28 in Appendix G demonstrates that on AESLC, self-learning also improves consistency
244
+
245
+ ![](images/850bfd43ffec04824fd574980916e1e6f6bb0590681c70bef5de4d6ee0627313.jpg)
246
+ a) AESLC dataset $k_{l} = 10, k_{h} = 1$ .
247
+
248
+ ![](images/a4a0d528647e85bad5f95bbdb671bbd8f20a33ef94e7c6c768cf8f2f88e02421.jpg)
249
+ b) Gigaword dataset $k_{l} = 10,k_{h} = 1$
250
+
251
+ ![](images/7a9ae5610f0606cb7be55a971ecc92d4d24a3eb3200968f0a970ae3b62379e00.jpg)
252
+ c) WikiHow dataset $k_{l} = 38, k_{h} = 2$
253
+
254
+ ![](images/65bff0040e08e03a689b6a54e80a120894a3ed18f4f1875f4a12e258f38d5647.jpg)
255
+ Figure 4: ROUGE-1 scores of the BART-base model with IDDS and random sampling strategies with and without self-learning on AESLC, Gigaword, and WikiHow. Full results are provided in Figures 14, 15, and 16, respectively.
256
+ Figure 5: The consistency score calculated via SummaC with BART-base on AESLC with various AL strategies.
257
+
258
+ regardless of the AL strategy. The same trend is observed on Gigaword (Figure 27 in Appendix G).
259
+
260
+ However, for WikiHow, there is no clear trend. Figure 29 in Appendix G shows that all query strategies lead to similar consistency results, with NSP producing slightly higher consistency, and BLEU-Var - slightly lower. We deem that this may be due to the fact that summaries generated by the model on WikiHow are of lower quality than the golden summaries regardless of the strategy. Therefore, this leads to biased scores of the SummaC model with similar results on average.
261
+
262
+ # 6.5 Query Duration
263
+
264
+ We compare the average duration of AL iterations for various query strategies. Figure 30 in the Appendix H presents the average training time and the average duration of making a query. We can see that training a model takes considerably less time than selecting the instances from the unlabeled pool for annotation. Therefore, the duration of AL iterations is mostly determined by the efficiency of the query strategy. The IDDS query strategy does not
265
+
266
+ require any heavy computations during AL, which makes it also the best option for keeping the AL process interactive.
267
+
268
+ # 7 Conclusion
269
+
270
+ In this work, we conduct the first study of AL in ATS and propose the first active learning query strategy that outperforms the baseline random sampling. The query strategy aims at selecting for annotation instances with high similarity to the documents in the unlabeled pool and low similarity to the already annotated documents. It outperforms random sampling in terms of ROUGE metrics on all considered datasets. It also outperforms random sampling in terms of the consistency score calculated via the SummaC model on the AESLC dataset. We also demonstrate that uncertainty-based query strategies fail to outperform random sampling, resulting in the same or even worse performance. Finally, we show that self-learning can improve the performance of an ATS model in terms of both the ROUGE metrics and consistency. This is especially favorable in AL since there is always a large unlabeled pool of data. We show that combining AL and self-learning can give an improvement of up to $58\%$ in terms of ROUGE metrics.
271
+
272
+ In future work, we look forward to investigating IDDS in other sequence generation tasks. This query strategy might be beneficial for tasks with highly variable output when uncertainty estimates of model predictions are unreliable and cannot outperform the random sampling baseline. IDDS facilitates the representativeness of instances in the training dataset without leveraging uncertainty scores.
273
+
274
+ # Limitations
275
+
276
+ Despite the benefits, the proposed methods require some conditions to be met to be successfully applied in practice. The IDDS strategy requires performing TAPT for the model that generates embeddings, which may be computationally demanding for a large dataset. Self-learning, in turn, may harm the performance when the summaries are too long, as shown in Section 6.3. Consequently, its application requires a detailed analysis of the properties of the target domain summaries.
277
+
278
+ # Ethical Considerations
279
+
280
+ It is important to note that active learning is a method of biased sampling, which can lead to biased annotated corpora. Therefore, active learning can be used to deliberately increase the bias in the datasets. Our research improves the active learning performance; hence, our contribution would also make it more efficient for introducing more bias as well. We also note that our method uses the pre-trained language models, which usually contain different types of biases by themselves. Since bias affects all applications of pre-trained models, this can also unintentionally facilitate the biased selection of instances for annotation during active learning.
281
+
282
+ # Acknowledgements
283
+
284
+ We thank anonymous reviewers for their insightful suggestions to improve this paper. The work was supported by a grant for research centers in the field of artificial intelligence (agreement identifier 000000D730321P5Q0002 dated November 2, 2021 No. 70-2021-00142 with ISP RAS). This research was supported in part by computational resources of the HPC facilities at the HSE University (Kostenetskiy et al., 2021).
285
+
286
+ # References
287
+
288
+ Armen Aghajanyan, Akshit Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, and Sonal Gupta. 2021. Better fine-tuning by reducing representational collapse. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
289
+ Vamshi Ambati. 2012. Active Learning and Crowdsourcing for Machine Translation in Low Resource Scenarios. Ph.D. thesis, Language Technologies Institute School of Computer Science Carnegie Mellon University, USA. AAI3528171.
290
+
291
+ Sankaranarayanan Ananthakrishnan, Rohit Prasad, David Stallard, and Prem Natarajan. 2013. Batchmode semi-supervised active learning for statistical machine translation. Computer Speech & Language, 27(2):397-406.
292
+ Jordan T Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. 2019. Deep batch active learning by diverse, uncertain gradient lower bounds. In International Conference on Learning Representations.
293
+ Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
294
+ Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. CoRR, abs/2004.05150.
295
+ Jaime G. Carbonell and Jade Goldstein. 1998. The use of mmr, diversity-based reranking for reordering documents and producing summaries. In SIGIR '98: Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, August 24-28 1998, Melbourne, Australia, pages 335-336. ACM.
296
+ Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 675-686. Association for Computational Linguistics.
297
+ Gui Citovsky, Giulia DeSalvo, Claudio Gentile, Lazaros Karydas, Anand Rajagopalan, Afshin Rostamizadeh, and Sanjiv Kumar. 2021. Batch active learning at scale. Advances in Neural Information Processing Systems, 34:11933-11944.
298
+ Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 615-621, New Orleans, Louisiana. Association for Computational Linguistics.
299
+ David A Cohn, Zoubin Ghahramani, and Michael I Jordan. 1996. Active learning with statistical models. Journal of artificial intelligence research, 4:129-145.
300
+ Liat Ein Dor, Alon Halfon, Ariel Gera, Eyal Shnarch, Lena Dankin, Leshem Choshen, Marina Danilevsky, Ranit Aharonov, Yoav Katz, and Noam Slonim. 2020. Active learning for bert: an empirical study. In Proceedings of the 2020 Conference on Empirical
301
+
302
+ Methods in Natural Language Processing (EMNLP), pages 7949-7962.
303
+ Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of JMLR Workshop and Conference Proceedings, pages 1050-1059. JMLR.org.
304
+ Alexios Gidiotis and Grigorios Tsoumakas. 2021. Bayesian active summarization. CoRR, abs/2110.04480.
305
+ Alexios Gidiotis and Grigorios Tsoumakas. 2022. Should we trust this summary? bayesian abstractive summarization to the rescue. In *Findings of the Association for Computational Linguistics: ACL* 2022, Dublin, Ireland, May 22-27, 2022, pages 4119-4131. Association for Computational Linguistics.
306
+ Tanya Goyal, Jiacheng Xu, Junyi Jessy Li, and Greg Durrett. 2022. Training dynamics for text summarization models. In *Findings of the Association for Computational Linguistics: ACL* 2022, Dublin, Ireland, May 22-27, 2022, pages 2061-2073. Association for Computational Linguistics.
307
+ David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2003. English gigaword. Linguistic Data Consortium, Philadelphia, 4(1):34.
308
+ Mandy Guo, Joshua Ainslie, David C. Uthus, Santiago Ontañón, Jianmo Ni, Yun-Hsuan Sung, and Yinfei Yang. 2021. Longt5: Efficient text-to-text transformer for long sequences. CoRR, abs/2112.07916.
309
+ Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8342-8360. Association for Computational Linguistics.
310
+ Gholamreza Haffari, Maxim Roy, and Anoop Sarkar. 2009. Active learning for statistical phrase-based machine translation. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, May 31 - June 5, 2009, Boulder, Colorado, USA, pages 415-423. The Association for Computational Linguistics.
311
+ Yichong Huang, Xiachong Feng, Xiaocheng Feng, and Bing Qin. 2021. The factual inconsistency problem in abstractive text summarization: A survey.
312
+ Seokhwan Kim, Yu Song, Kyungduk Kim, Jeongwon Cha, and Gary Geunbae Lee. 2006. Mmr-based active machine learning for bio named entity recognition. In Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, June 4-9, 2006, New York, New York, USA. The Association for Computational Linguistics.
315
+ P. S. Kostenetskiy, R. A. Chulkevich, and V. I. Kozyrev. 2021. HPC Resources of the Higher School of Economics. Journal of Physics: Conference Series, 1740(1):012050.
316
+ Mahnaz Koupaee and William Yang Wang. 2018. WikiHow: A large scale text summarization dataset. CoRR, abs/1810.09305.
317
+ Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 9332-9346. Association for Computational Linguistics.
318
+ Philippe Laban, Tobias Schnabel, Paul N. Bennett, and Marti A. Hearst. 2022. SummaC: Re-visiting NLI-based models for inconsistency detection in summarization. Transactions of the Association for Computational Linguistics, 10:163-177.
319
+ David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. Dublin, Ireland, 3-6 July 1994 (Special Issue of the SIGIR Forum), pages 3-12. ACM/Springer.
320
+ Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871-7880. Association for Computational Linguistics.
321
+ Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
322
+ Ming Liu, Wray L. Buntine, and Gholamreza Haffari. 2018. Learning to actively learn neural machine translation. In Proceedings of the 22nd Conference on Computational Natural Language Learning, CoNLL 2018, Brussels, Belgium, October 31 - November 1, 2018, pages 334-344. Association for Computational Linguistics.
323
+ Yixin Liu and Pengfei Liu. 2021. Simcls: A simple framework for contrastive learning of abstractive summarization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 2: Short Papers), Virtual Event, August 1-6, 2021, pages 1065-1072. Association for Computational Linguistics.
324
+
325
+ Yixin Liu, Pengfei Liu, Dragomir R. Radev, and Graham Neubig. 2022. BRIO: bringing order to abstractive summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 2890-2903. Association for Computational Linguistics.
326
+ Zhihao Lyu, Danier Duolikun, Bowei Dai, Yuan Yao, Pasquale Minervini, Tim Z Xiao, and Yarin Gal. 2020. You need only uncertain answers: Data efficient multilingual question answering. Workshop on Uncertainty and Robustness in Deep Learning.
327
+ Andrey Malinin and Mark J. F. Gales. 2021. Uncertainty estimation in autoregressive structured prediction. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
328
+ Katerina Margatina, Loic Barrault, and Nikolaos Aletras. 2021. Bayesian active learning with pretrained language models. arXiv preprint arXiv:2104.08320.
329
+ Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 3075-3081. AAAI Press.
330
+ Feng Nan, Cicero Nogueira dos Santos, Henghui Zhu, Patrick Ng, Kathleen R. McKeown, Ramesh Nallapati, Dejiao Zhang, Zhiguo Wang, Andrew O. Arnold, and Bing Xiang. 2021. Improving factual consistency of abstractive summarization via question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 6881-6894. Association for Computational Linguistics.
331
+ Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
332
+ Álvaro Peris and Francisco Casacuberta. 2018. Active learning for interactive neural machine translation of data streams. In Proceedings of the 22nd Conference on Computational Natural Language Learning, CoNLL 2018, Brussels, Belgium, October 31 - November 1, 2018, pages 151-160. Association for Computational Linguistics.
333
+ Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.
334
+
335
+ Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. 2020. Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of *Findings of ACL*, pages 2401-2410. Association for Computational Linguistics.
336
+ Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3980-3990. Association for Computational Linguistics.
337
+ Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.
338
+ Tobias Scheffer, Stefan Wrobel, Borislav Popov, Damyan Ognianov, Christian Decomain, and Susanne Hoche. 2002. Learning hidden markov models for information extraction actively from partially labeled text. Künstliche Intell., 16(2):17-22.
339
+ Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1073-1083. Association for Computational Linguistics.
340
+ Ozan Sener and Silvio Savarese. 2018. Active learning for convolutional neural networks: A core-set approach. In International Conference on Learning Representations.
341
+ Burr Settles and Mark Craven. 2008a. An analysis of active learning strategies for sequence labeling tasks. In 2008 Conference on Empirical Methods in Natural Language Processing, EMNLP 2008, Proceedings of the Conference, 25-27 October 2008, Honolulu, Hawaii, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1070-1079. Association for Computational Linguistics.
342
+ Burr Settles and Mark Craven. 2008b. An analysis of active learning strategies for sequence labeling tasks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 1070-1079, Honolulu, Hawaii. Association for Computational Linguistics.
343
+ Artem Shelmanov, Dmitri Puzyrev, Lyubov Kupriyanova, Denis Belyakov, Daniil Larionov, Nikita Khromov, Olga Kozlova, Ekaterina Artemova, Dmitry V. Dylov, and Alexander Panchenko. 2021. Active learning for sequence tagging with deep pre-trained models and Bayesian uncertainty estimates. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1698-1712, Online. Association for Computational Linguistics.
346
+ Yanyao Shen, Hyokun Yun, Zachary Lipton, Yakov Kronrod, and Animashree Anandkumar. 2017. Deep active learning for named entity recognition. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 252-256, Vancouver, Canada. Association for Computational Linguistics.
347
+ Aditya Siddhant and Zachary C. Lipton. 2018. Deep bayesian active learning for natural language processing: Results of a large-scale empirical study. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2904-2909. Association for Computational Linguistics.
348
+ Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104-3112.
349
+ Akim Tsvigun, Artem Shelmanov, Gleb Kuzmin, Leonid Sanochkin, Daniil Larionov, Gleb Gusev, Manvel Avetisian, and Leonid Zhukov. 2022. Towards computationally feasible deep active learning. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1198-1218, Seattle, United States. Association for Computational Linguistics.
350
+ Nicola Ueffing and Hermann Ney. 2007. Word-level confidence estimation for machine translation. Comput. Linguistics, 33(1):9-40.
351
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
352
+ Shuo Wang, Yang Liu, Chao Wang, Huanbo Luan, and Maosong Sun. 2019. Improving back-translation with uncertainty-based confidence estimation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 791–802, Hong Kong, China. Association for Computational Linguistics.
353
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. CoRR, abs/1910.03771.
356
+ Tim Z. Xiao, Aidan N. Gomez, and Yarin Gal. 2020. Wat zei je? detecting out-of-distribution translations with variational transformers. CoRR, abs/2006.08344.
357
+ Michelle Yuan, Hsuan-Tien Lin, and Jordan Boyd-Graber. 2020. Cold-start active learning through self-supervised language modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7935-7948, Online. Association for Computational Linguistics.
358
+ Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2020. PEGASUS: pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 11328-11339. PMLR.
359
+ Mike Zhang and Barbara Plank. 2021. Cartography active learning. In Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 16-20 November, 2021, pages 395-406. Association for Computational Linguistics.
360
+ Rui Zhang and Joel R. Tetreault. 2019. This email could save your life: Introducing the task of email subject line generation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 446-456. Association for Computational Linguistics.
361
+
362
+ # A Dataset Statistics and Model Hyperparameters
363
+
364
+ Table 2: Dataset statistics. We report the number of instances in the training and test sets and the average lengths of documents / summaries in tokens. All the datasets are English-language. We filter the WikiHow dataset since it contains many noisy instances: we exclude instances whose documents have 10 or fewer tokens and instances whose summaries have 3 or fewer tokens.
365
+
366
+ <table><tr><td>Dataset</td><td>Subset</td><td>Num. instances</td><td>Av. document len.</td><td>Av. summary len.</td></tr><tr><td rowspan="2">AESLC</td><td>Train</td><td>14.4K</td><td>142.4</td><td>7.8</td></tr><tr><td>Test</td><td>1.9K</td><td>143.8</td><td>7.9</td></tr><tr><td rowspan="2">WikiHow</td><td>Train</td><td>184.6K</td><td>377.5</td><td>77.2</td></tr><tr><td>Test</td><td>1K</td><td>386.9</td><td>77.0</td></tr><tr><td rowspan="2">Pubmed</td><td>Train</td><td>119.1K</td><td>495.4</td><td>263.9</td></tr><tr><td>Test</td><td>6.7K</td><td>509.5</td><td>268.0</td></tr><tr><td rowspan="2">Gigaword (self-learning)</td><td>Train</td><td>10K</td><td>38.9</td><td>11.9</td></tr><tr><td>Test</td><td>2K</td><td>37.1</td><td>12.8</td></tr><tr><td rowspan="2">Gigaword (hyperparam. optimiz.)</td><td>Train</td><td>200</td><td>40.8</td><td>13.3</td></tr><tr><td>Test</td><td>2K</td><td>38.6</td><td>12.5</td></tr></table>
367
+
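+ For illustration, the filtering rule above can be expressed as a short Python check; the whitespace tokenization and the `document` / `summary` field names below are assumptions made for this sketch, not a description of the actual preprocessing code.
+
+ ```python
+ # Hypothetical WikiHow-style instances; the field names and toy texts are assumed.
+ raw_wikihow = [
+     {"document": "Open the editor. " * 10, "summary": "Edit the file and save it."},
+     {"document": "Too short.", "summary": "Noise"},
+ ]
+
+ def keep_instance(document: str, summary: str) -> bool:
+     """Noise filter from Table 2: drop instances whose document has 10 or fewer
+     tokens or whose summary has 3 or fewer tokens (whitespace tokenization)."""
+     return len(document.split()) > 10 and len(summary.split()) > 3
+
+ filtered = [ex for ex in raw_wikihow if keep_instance(ex["document"], ex["summary"])]
+ print(len(filtered))  # 1: the second toy instance is removed
+ ```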
368
+ For the WikiHow and PubMed datasets, we reduce the batch size and increase the number of gradient accumulation steps by the same factor (keeping the effective batch size unchanged) due to a computational bottleneck.
369
+
370
+ Hardware configuration: 2 Intel Xeon Platinum 8168 CPUs (2.7 GHz, 24 cores each); an NVIDIA Tesla V100 GPU with 32 GB of VRAM.
371
+
372
+ Table 3: Hyperparameter values and HuggingFace repository checkpoints (Wolf et al., 2019) of the models. We imitate the low-resource case by randomly selecting 200 instances from the Gigaword training set as a training sample and 2,000 instances from the validation set as a test sample for evaluation consistency. For each model, we find the optimal hyperparameters according to the evaluation scores on the sampled subset. The maximum generation length is set to the maximum summary length in the available labeled set.
373
+
374
+ <table><tr><td>Hparam</td><td>BART</td><td>PEGASUS</td></tr><tr><td>Checkpoint</td><td>facebook/bart-base</td><td>google/pegasus-large</td></tr><tr><td>#Param.</td><td>139M</td><td>570M</td></tr><tr><td>Number of epochs</td><td>6</td><td>4</td></tr><tr><td>Batch size</td><td>16</td><td>2</td></tr><tr><td>Gradient accumulation steps</td><td>1</td><td>8</td></tr><tr><td>Min. number of training steps</td><td>350</td><td>200</td></tr><tr><td>Max. sequence length</td><td>1024</td><td>1024</td></tr><tr><td>Optimizer</td><td>AdamW</td><td>AdamW</td></tr><tr><td>Learning rate</td><td>2e-5</td><td>5e-4</td></tr><tr><td>Weight decay</td><td>0.028</td><td>0.03</td></tr><tr><td>Gradient clipping</td><td>0.28</td><td>0.3</td></tr><tr><td>Scheduler</td><td>STLR</td><td>STLR</td></tr><tr><td>% warm-up steps</td><td>10</td><td>10</td></tr><tr><td>Num. beams at evaluation</td><td>4</td><td>4</td></tr><tr><td>Generation max. length</td><td>Adapt.</td><td>Adapt.</td></tr></table>
375
+
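+ For concreteness, the BART-base column of Table 3 roughly corresponds to the following HuggingFace `Seq2SeqTrainingArguments` configuration; this is a sketch under the assumption that STLR can be approximated by the built-in linear warmup/decay schedule, and the output directory is a placeholder.
+
+ ```python
+ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, Seq2SeqTrainingArguments
+
+ tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
+ model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
+
+ args = Seq2SeqTrainingArguments(
+     output_dir="bart_base_al",           # placeholder path
+     num_train_epochs=6,
+     per_device_train_batch_size=16,
+     gradient_accumulation_steps=1,
+     learning_rate=2e-5,
+     weight_decay=0.028,
+     max_grad_norm=0.28,                  # gradient clipping
+     warmup_ratio=0.1,                    # 10% warm-up steps
+     lr_scheduler_type="linear",          # stand-in for STLR
+     predict_with_generate=True,
+     generation_num_beams=4,
+ )
+ # Inputs are truncated to the 1024-token maximum sequence length during tokenization.
+ ```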
376
+ # B Full Results for Uncertainty-based Methods
377
+
378
+ ![](images/10ce48d5221334430cc242f8931821dca447f843e5d2ccaa10cd73281155d442.jpg)
379
+ a) ROUGE-1
380
+
381
+ ![](images/fb96b2ede568d9e85814fbd27e7faed5eee799b2ed869783031248f6f32cb3bd.jpg)
382
+ b) ROUGE-2
383
+
384
+ ![](images/c4625964ad7cc25eb7428e744455b9580ca5c2b7f1180577b70ca56821dc7353.jpg)
385
+ c) ROUGE-L
386
+
387
+ ![](images/ba8ff13852537293f2eb3a7e7c41f1bbfe06d295bcfd7973306852a73702b264.jpg)
388
+ a) ROUGE-1
389
+
390
+ ![](images/1ffb86d24d856dac0115ccbfa37665dc07fdce94d1010fe7aaf69100af67c5ac.jpg)
391
+ Figure 6: The performance of the BART-base model with various uncertainty-based strategies compared with random sampling (baseline) on AESLC.
392
+ b) ROUGE-2
393
+
394
+ ![](images/aeacadd7ebd7b35d14ddb2fe6a8f06db8eee418f969d1ca8c68e8f092dd8a4cc.jpg)
395
+ c) ROUGE-L
396
+
397
+ ![](images/daa67022249740b3c07ec0ab14e7887f1e242a82e0c0ea937aa444c119484363.jpg)
398
+ Figure 7: The performance of the PEGASUS-large model with various uncertainty-based strategies compared with random sampling (baseline) on AESLC.
399
+ a) ROUGE-1
400
+
401
+ ![](images/d85b97e6bea28d22fdad51b3c6cedca1629f8f08223bc4fced4f86c30edcede0.jpg)
402
+ b) ROUGE-2
403
+
404
+ ![](images/5998cbdba849308e6b00e3d8d780d9fea12f4673526f14a50f0f0bfdebf0a1eb.jpg)
405
+ c) ROUGE-L
406
+
407
+ ![](images/d47ac69aacdb20c59dc41e299a073542801681520bb47292f471726efc75e193.jpg)
408
+ Figure 8: The performance of the BART-base model with various uncertainty-based strategies compared with random sampling (baseline) on WikiHow.
409
+ a) ROUGE-1
410
+ Figure 9: The performance of the BART-base model with various uncertainty-based strategies compared with random sampling (baseline) on PubMed.
411
+
412
+ ![](images/312f1f3b44e2506d26164475c45b90a090e7556b12727dfeb235355d3ac54bce.jpg)
413
+ b) ROUGE-2
414
+
415
+ ![](images/2aca31854f0605d356df21b5ab08cb39042131848a583720f1b317be4fe3c792.jpg)
416
+ c) ROUGE-L
417
+
418
+ # C Full Results for IDDS
419
+
420
+ <table><tr><td>Iter. 0
421
+ R-1/R-2/R-L</td><td>Iter. 5
422
+ R-1/R-2/R-L</td><td>Iter. 10
423
+ R-1/R-2/R-L</td><td>Iter. 15
424
+ R-1/R-2/R-L</td><td>Average
425
+ R-1/R-2/R-L</td></tr><tr><td colspan="5">AESLC + BART-base</td></tr><tr><td>48.8 / 52.5 / 48.4</td><td>11.2 / 14.9 / 11.4</td><td>5.2 / 5.4 / 5.0</td><td>4.1 / 2.6 / 3.8</td><td>10.2 / 11.9 / 10.0</td></tr><tr><td colspan="5">AESLC + PEGASUS-large</td></tr><tr><td>-24.8 / -19.7 / -24.5</td><td>6.9 / 7.3 / 7.4</td><td>1.6 / 0.4 / 2.0</td><td>4.8 / 3.5 / 4.7</td><td>7.6 / 6.7 / 8.0</td></tr><tr><td colspan="5">WikiHow + BART-base</td></tr><tr><td>6.3 / 12.5 / 5.4</td><td>1.9 / 2.7 / 1.3</td><td>3.0 / 4.2 / 2.5</td><td>2.6 / 2.9 / 1.8</td><td>2.3 / 3.2 / 1.5</td></tr><tr><td colspan="5">PubMed + BART-base</td></tr><tr><td>8.0 / 10.4 / 5.8</td><td>12.0 / 11.7 / 8.0</td><td>8.1 / 6.4 / 4.9</td><td>9.5 / 6.7 / 5.1</td><td>8.9 / 7.7 / 5.5</td></tr></table>
426
+
427
+ Table 4: Percentage increase in ROUGE F-scores of IDDS over the baseline on different AL iterations. Average refers to the average increase throughout the whole AL cycle.
428
+
429
+ ![](images/a382fac8038b8074655f019e14cab3e67a401182b6ebcb57b0906ca27c354bb7.jpg)
430
+ a) ROUGE-1
431
+
432
+ ![](images/309bfaa59213a3842c098c1f6352dfa2d65802d80760b044001959dfdadb282e.jpg)
433
+ b) ROUGE-2
434
+ Figure 10: The performance of the BART-base model with the IDDS strategy compared with random sampling (baseline) and NSP (uncertainty-based strategy) on AESLC.
435
+
436
+ ![](images/5777968da6953b82dd1b08d27fd90caa3a6de30d1875e52565a3bfd0dd4e16fc.jpg)
437
+ c) ROUGE-L
438
+
439
+ ![](images/eea399f39f479b96641de68d92bac9ab98ebaad0b854365274ee4b9bd0388f30.jpg)
440
+ a) ROUGE-1
441
+
442
+ ![](images/55734c121491288b26840b1fe4e7b09de6d8d5c902d74187af6018334860dbbe.jpg)
443
+ b) ROUGE-2
444
+
445
+ ![](images/40445fbe4bb0339f11a6264d09c4a17e8ab5be45aa2872dc60a1b4db61577200.jpg)
446
+ c) ROUGE-L
447
+ Figure 11: The performance of the PEGASUS-large model with the IDDS strategy compared with random sampling (baseline) and NSP (uncertainty-based strategy) on AESLC.
448
+
449
+ ![](images/26196d23ed69cef025e4b6b6bc1da29d52698c8c4880be1f6af2f5072a086864.jpg)
450
+ a) ROUGE-1
451
+
452
+ ![](images/0f59d16e60fb14047c220acd66710284e814b6755ccd8a903bcbb69a4ae63b7c.jpg)
453
+ b) ROUGE-2
454
+ Figure 12: The performance of the BART-base model with the IDDS strategy compared with random sampling (baseline) and NSP (uncertainty-based strategy) on WikiHow.
455
+
456
+ ![](images/642a11bf7999c67c48df32dc6e9e6e5ca19499a55670022e39ddc2f4a9c4026f.jpg)
457
+ c) ROUGE-L
458
+
459
+ ![](images/a6256bc3b39b0c56d0a5ba015cafa74d9efe3466a93252d2712b20892d2486df.jpg)
460
+ a) ROUGE-1
461
+ Figure 13: The performance of the BART-base model with the IDDS strategy compared with random sampling (baseline) and NSP (uncertainty-based strategy) on PubMed.
462
+
463
+ ![](images/87cb8b42d65f6e10557c804fca9db7aadadc793bd1bcf68acf4d652494e60b6f.jpg)
464
+ b) ROUGE-2
465
+
466
+ ![](images/70cd82daf66733ad9b540ab45eb37ba4d67a400191ad643885ec5c4893e0f793.jpg)
467
+ c) ROUGE-L
468
+
469
+ # D Full Results for Self-learning
470
+
471
+ ![](images/4e43b84bc168782236f87f3b98651979713e74396dfb1cad8278e6bad80fc18c.jpg)
472
+ a) ROUGE-1
473
+
474
+ ![](images/58511bdf696d9290f527436c9613e55b48575bcabaf5a8cf9485075fabd5d02a.jpg)
475
+ b) ROUGE-2
476
+ Figure 14: The performance of the BART-base model with the IDDS and random sampling strategies with and without pseudo-labeling of the unlabeled data on AESLC ( $k_{l} = 0.1$ , $k_{h} = 0.01$ ).
477
+
478
+ ![](images/fcd3d6c3a0b611fc325bcc574d21b46a20f60a706e148e39179eed23bba4ffc5.jpg)
479
+ c) ROUGE-L
480
+
481
+ ![](images/2b28a6a7ed57ec9568cd403249a5437d6c458483adfbe463bed2e9e1078362d6.jpg)
482
+ a) ROUGE-1
483
+
484
+ ![](images/71854f8d63bb441dfc901cd309aaee388bdaa4a7799bb4751d118dacc1e84b5a.jpg)
485
+ b) ROUGE-2
486
+
487
+ ![](images/e02fe53b91f368cdb155d033827012a1ed87ce1349b7e6b016d100e756122cd7.jpg)
488
+ c) ROUGE-L
489
+
490
+ ![](images/d349a6f5f8006c7efa91d68d235ac737cce5b4f2073c5668883224ce8cc55063.jpg)
491
+ Figure 15: The performance of the BART-base model without AL (random sampling) with and without pseudolabeling of the unlabeled data on the randomly sampled subset of Gigaword ( $k_{l} = 0.1$ , $k_{h} = 0.01$ ).
492
+ a) ROUGE-1
493
+ Figure 16: The performance of the BART-base model with the IDDS and random sampling strategies with and without self-learning on WikiHow ( $k_{l} = 0.38$ , $k_{h} = 0.02$ ).
494
+
495
+ ![](images/a94e5a8c063f2379cce84566615e1738e2b9daa032d55c6bdf7ecf8cece3c712.jpg)
496
+ b) ROUGE-2
497
+
498
+ ![](images/7742b38af8fb52b6894bb68ef8ef32269d1367a9adc8f4a32e7482e2ba645920.jpg)
499
+ c) ROUGE-L
500
+
501
+ # E Diversity Statistics and Query Examples
502
+
503
+ <table><tr><td>AL Strat.</td><td>Document</td><td>Golden summary</td><td>Gen. summary</td></tr><tr><td>IDDS</td><td>&quot;Here&#x27;s the latest info regarding Bloomberg&#x27;s ability to accept deals with Pinnacle West (formerly Arizona Public Service Co.) (...)</td><td>Bloomberg-Pinnacle/APS deals</td><td>n/a</td></tr><tr><td>IDDS</td><td>Hi. Nice to see you in Houston. I&#x27;m giving a presentation on gas issues on Tuesday. I&#x27;ve got a draft of (...)</td><td>Gas Presentation</td><td>n/a</td></tr><tr><td>IDDS</td><td>Kelley, I am writing to you to (...) Can you give me the name and contact information for the person within your company that would work with us to put a Confidentiality Agreement in place (...)</td><td>confidentiality agreement</td><td>n/a</td></tr><tr><td>NSP</td><td>tantivy (tan-TIV-ee) adverb At full gallop; at full speed. noun A fast gallop; rush.adjective Swift.interjection A hunting cry by a hunter riding a horse at full speed(...)</td><td>A.Word.A.Day-tantivy</td><td>tricky (tan-TIV-ee) adjective</td></tr><tr><td>NSP</td><td>Prod Area and Long Haul k# Volume Rec Del 3.6746 5000 St 62 Con Ed 3.4358 15000 St 65 Con Ed 3.5049 10000 St (...)</td><td>TRCO capacity for Sep</td><td>Prod Area and Long Haul k# Volume</td></tr><tr><td>NSP</td><td>This is a list of RisktRAC book-ids corresponding to what has been created in ERMS. Let me know if the book-id naming is ok with you. Regards</td><td>Book2.xls</td><td>RisktRAC</td></tr><tr><td>ENSP</td><td>Fred, I suggest a phone call among the team today to make sure we are all on the same wave length. What is your schedule? Thanks</td><td>PSEG</td><td>Firm schedule</td></tr><tr><td>ENSP</td><td>Stephanie - When you get a chance, could you finalize the attached (also found in Tana&#x27;s O drive). I am not sure where the originals need to go after signed by Enron, but I have a request for that information currently out to Hess. Thanks.</td><td>Hess NDA</td><td>Enron O Drive</td></tr><tr><td>ENSP</td><td>Current Notes User: To ensure that you experience a successful migration from Notes to Outlook, it is necessary to gather individual user information prior to your date of migration. Please take a few minutes to completely fill out the following survey (...)</td><td>2-SURVEY /INFORMATION EMAIL 5-17-01</td><td>Office 2000 Migration Survey</td></tr><tr><td>Sacre-BleuVAR</td><td>Sheri, We are going to NO for JazzFest at the end of April. April 27th-29th to be exact. Let me know if you&#x27;re going. DG</td><td>southwest.com weekly specials</td><td>JazzFest</td></tr><tr><td>Sacre-BleuVAR</td><td>This warning is sent automatically to inform you that your mailbox is approaching the maximum size limit. Your mailbox size is currently 78515 KB. Mailbox size limits (...)</td><td>WARNING: Your mailbox is approaching the size limit</td><td>Mailbox size limit</td></tr></table>
504
+
505
+ Table 5: Examples of the instances queried with different AL strategies. Tokens overlapping with the source document are highlighted in green. Tokens that paraphrase a part of the document, together with the corresponding part, are highlighted in blue. Tokens that cannot be derived from the document are highlighted in red. Tokens whose usage depends on the peculiarities of the dataset are not highlighted. Generated summaries for IDDS are not presented because IDDS does not require model inference.
506
+
507
+ <table><tr><td>AL Iter.</td><td>SP</td><td>ESP</td><td>SacreBleuVAR</td><td>Random</td><td>IDDS</td></tr><tr><td>1</td><td>33.3% / 0%</td><td>30.0% / 4.4%</td><td>0% / 0%</td><td>0% / 0%</td><td>0% / 0%</td></tr><tr><td>6</td><td>15.6% / 0%</td><td>0% / 1.1%</td><td>0% / 2.2%</td><td>0% / 0%</td><td>0% / 0%</td></tr><tr><td>15</td><td>3.3% / 0%</td><td>0% / 0%</td><td>0% / 0%</td><td>0% / 2.2%</td><td>0% / 0%</td></tr><tr><td>Mean</td><td>7.8% / 1.0%</td><td>2.1% / 0.8%</td><td>0.1% / 0.3%</td><td>0% / 0.7%</td><td>0% / 0%</td></tr></table>
508
+
509
+ Table 6: Share of fully / partly overlapping summaries among the batches of instances queried with various AL strategies during AL with the BART-base model on AESLC. We consider two summaries to be partly overlapping if their ROUGE-1 score is $>0.66$ . The results are averaged across 9 seeds for all the strategies except IDDS, whose queries are constant and seed-independent.
510
+
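+ A possible reading of the partial-overlap criterion above, sketched with the `rouge_score` package; the use of the ROUGE-1 F-measure and the per-summary counting are assumptions of this sketch.
+
+ ```python
+ from rouge_score import rouge_scorer
+
+ scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=False)
+
+ def partly_overlapping_share(summaries, threshold=0.66):
+     """Share of summaries in a queried batch whose ROUGE-1 F1 with at least
+     one other summary of the same batch exceeds the threshold."""
+     if not summaries:
+         return 0.0
+     overlapping = 0
+     for i, a in enumerate(summaries):
+         if any(scorer.score(a, b)["rouge1"].fmeasure > threshold
+                for j, b in enumerate(summaries) if j != i):
+             overlapping += 1
+     return overlapping / len(summaries)
+
+ print(partly_overlapping_share(["gas presentation", "gas presentation draft", "mailbox size limit"]))
+ ```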
511
+ ![](images/22e3fab0c5acc9b569e6ba3c4d4f46021e167aa9f8a109f6e588f7a66f49984d.jpg)
512
+ # F Ablation Studies of IDDS
513
+ a) ROUGE-1
514
+
515
+ ![](images/d9bf49fb8bba4047a0196f5c75efaf77536c5dc821277a0a0723836afd4777e7.jpg)
516
+ b) ROUGE-2
517
+
518
+ ![](images/0b6ff651257956679e025f9aa24177c7db5bbe3759f95bac80a3b8de7ab657af.jpg)
519
+ c) ROUGE-L
520
+
521
+ ![](images/b9141b02cd3dfb4fab3641151bc94b2eebaa1a575e035f075f9379c838100a13.jpg)
522
+ a) ROUGE-1
523
+
524
+ ![](images/7bdfd38ec2b4e036ace5f755e1002bccad89fb60078495b2cc7ea36ce9b4d971.jpg)
525
+ b) ROUGE-2
526
+
527
+ ![](images/9f8d95c4c2770ce88de50d3cc6c91acb938e3ef71cff9e97f0863e18511c77e2.jpg)
528
+ Figure 17: Ablation study of the document embeddings model & the necessity of performing TAPT for it in the IDDS strategy with BART-base on AESLC.
529
+ c) ROUGE-L
530
+
531
+ ![](images/ff34c745cff056b81dc02e9e9935b9ab060ff2db602934f2f30161b1932469ca.jpg)
532
+ Figure 18: Ablation study of the necessity of performing TAPT for the model that generates embeddings in the IDDS strategy with BART-base on WikiHow.
533
+ a) ROUGE-1
534
+
535
+ ![](images/3c13446d30d73b5526a3ffbd9681a3b4a8e84a65361c8f25add7b3f0df2d3661.jpg)
536
+ b) ROUGE-2
537
+
538
+ ![](images/faae0026f898249601aa93e3acee0d02af7bbd860a723451a80f4220b0acf328.jpg)
539
+ c) ROUGE-L
540
+
541
+ ![](images/49671c2a5be1aba21ebd8e537bb2add083e9a9a9bf0f2474f1a9b2f945269307.jpg)
542
+ Figure 19: The performance of IDDS with different similarity functions with BART-base on AESLC.
543
+ a) ROUGE-1
544
+ Figure 20: The performance of IDDS with different similarity functions with BART-base on WikiHow.
545
+
546
+ ![](images/b96abb48b8182d533b085c56a495e4fc002e593f5ce51c38678d55022c0e725d.jpg)
547
+ b) ROUGE-2
548
+
549
+ ![](images/0ccd57094390d497ccb7eb506ba88e419db927fa5b044fda2d1b884636eda75e.jpg)
550
+ c) ROUGE-L
551
+
552
+ ![](images/0a3a939d92c6777294528166cd7bbc731c7e287567ddec24a5a2c9fd8c403dd1.jpg)
553
+ a) ROUGE-1
554
+
555
+ ![](images/f0f5197215bfd26d7a6ab06460aeba31dc98841a768cb4fe8fadf33b89eed8ee.jpg)
556
+ b) ROUGE-2
557
+
558
+ ![](images/cb998d8d2e682db56a11bf8ae8dec0ae8d9779f3c273da958951117a4797f66c.jpg)
559
+ c) ROUGE-L
560
+
561
+ ![](images/9a5c492549183a1a0f1419ebb2f7adff5619b1bd59b06b49fc6bc867f50bee84.jpg)
562
+ Figure 21: The performance of the BART-base model with the standard IDDS strategy compared with its modification when embeddings are normalized on AESLC.
563
+ a) ROUGE-1
564
+
565
+ ![](images/4707b09ca8cc060856eea08e7147e3d60fec515855656f039e4932332c2f877f.jpg)
566
+ b) ROUGE-2
567
+
568
+ ![](images/6daf6f8e4618229c653789b3ce928b8abb743304bf50f52e67debc51bf535b93.jpg)
569
+ c) ROUGE-L
570
+
571
+ ![](images/766470c2e1a93697beb18bcdd06429e97dc47fe18ffa405a0f39942924a6b28f.jpg)
572
+ Figure 22: The performance of the BART-base model with the standard IDDS strategy compared with its modification when embeddings are normalized on PubMed.
573
+ a) ROUGE-1
574
+ Figure 23: Ablation study for the hyperparameter $\lambda$ in the IDDS strategy with BART-base on AESLC.
575
+
576
+ ![](images/3e959552da48aacf7adc99c94aedc5913189aa7e77f106b14fb42fa7aa193012.jpg)
577
+ b) ROUGE-2
578
+
579
+ ![](images/34257a36064bf055c7e8e9932a9c976bbafc458baffd3dd7681b7930072af76e.jpg)
580
+ c) ROUGE-L
581
+
582
+ <table><tr><td>AL Strategy</td><td>Iter. 0</td><td>Iter. 5</td><td>Iter. 10</td><td>Iter. 15</td><td>Average</td></tr><tr><td>λ = 0.</td><td>9.08 / 4.8 / 8.79</td><td>19.6 / 10.87 / 19.29</td><td>22.6 / 12.58 / 22.1</td><td>23.68 / 13.32 / 23.23</td><td>21.25 / 11.72 / 20.88</td></tr><tr><td>λ = 0.33</td><td>15.77 / 7.67 / 15.46</td><td>22.47 / 12.18 / 22.07</td><td>23.98 / 13.54 / 23.51</td><td>24.68 / 13.81 / 24.21</td><td>23.19 / 12.88 / 22.78</td></tr><tr><td>λ = 0.5</td><td>12.15 / 6.07 / 12.03</td><td>23.82 / 12.97 / 23.3</td><td>25.69 / 14.33 / 25.06</td><td>26.81 / 14.77 / 26.17</td><td>23.84 / 12.94 / 23.31</td></tr><tr><td>λ = 0.67 (orig.)</td><td>17.8 / 8.97 / 17.52</td><td>26.4 / 14.4 / 25.86</td><td>27.25 / 14.55 / 26.55</td><td>28.72 / 15.56 / 27.97</td><td>26.7 / 14.43 / 26.07</td></tr><tr><td>λ = 0.75</td><td>10.84 / 4.93 / 10.61</td><td>26.7 / 14.62 / 26.26</td><td>27.42 / 14.62 / 26.72</td><td>28.29 / 15.36 / 27.61</td><td>26.54 / 14.31 / 26.0</td></tr><tr><td>λ = 0.83</td><td>16.47 / 7.84 / 16.03</td><td>26.06 / 14.42 / 25.59</td><td>26.57 / 14.12 / 25.92</td><td>28.7 / 15.22 / 28.0</td><td>26.0 / 13.93 / 25.46</td></tr><tr><td>λ = 1.</td><td>16.41 / 8.66 / 16.23</td><td>25.2 / 13.66 / 24.72</td><td>26.74 / 14.4 / 26.04</td><td>27.44 / 14.66 / 26.67</td><td>25.73 / 13.81 / 25.16</td></tr></table>
583
+
584
+ Table 7: ROUGE scores across AL iterations for different values of the $\lambda$ hyperparameter in IDDS. The largest values w.r.t. the confidence intervals are marked in bold.
585
+
586
+ ![](images/7c447fa574121085768926d906b40f9b618d9da518200e5f5a15ce6b86aa7751.jpg)
587
+ a) ROUGE-1
588
+
589
+ ![](images/5e8c93275113fcb71a10ca5730c17daad7ba9d80fc2aa91274d95d8b5385cc68.jpg)
590
+ b) ROUGE-2
591
+ Figure 24: Comparison of IDDS with the MMR-based strategy suggested by Kim et al. (2006) with BART-base on AESLC. We experiment with different $\lambda$ values in MMR and the initialization schemes.
592
+
593
+ ![](images/dddea10f2f733cf0a2523637b05acae99056396e1edf9a94a2655f411346ff13.jpg)
594
+ c) ROUGE-L
595
+
596
+ ![](images/d8d128272d10baf4104905d6fd0613ae596e27778f5adb4a4dbe7cd07a2c566d.jpg)
597
+ a) ROUGE-1
598
+ Figure 25: Comparison of the average and maximum aggregation functions in IDDS with BART-base on AESLC.
599
+
600
+ ![](images/d49107f6995292c389c384186290b6626216f7da0941de99cd15f8d0475c1426.jpg)
601
+ b) ROUGE-2
602
+
603
+ ![](images/3ab67e0b23d04ee1748b940b5ce785704b740b284f02e38a75118af82e57a7c4.jpg)
604
+ c) ROUGE-L
605
+
606
+ ![](images/1b5413bc19812827e4c9299d20f9cb9a9c0b4c6b2647ef2c20211a308d81c292.jpg)
607
+ a) ROUGE-1
608
+ Figure 26: Comparison of the average and maximum aggregation functions in IDDS with BART-base on WikiHow.
609
+
610
+ ![](images/01fc2b1c1f9d6b0040b523e798c8f1146d9a9ae70331abf0244a69c60b517eb3.jpg)
611
+ b) ROUGE-2
612
+
613
+ ![](images/bc31c0f534c8351528a08547f544cb25b683f99ab829f1c92eacb89018c6d45a.jpg)
614
+ c) ROUGE-L
615
+
616
+ # G Additional Experiments with Consistency Analysis
617
+
618
+ ![](images/80a2a9aee8d23ab3922e686b804add9672822d0d83850afcfe70c9191dc98920.jpg)
619
+ Figure 27: The consistency score calculated via SummaC with BART-base on Gigaword without AL (random sampling) with and without self-learning.
620
+
621
+ ![](images/3373b2397cda19b7e9df931818fea4ed0309cc5ccd2a285211f5de754e7e7a91.jpg)
622
+ Figure 28: The consistency score calculated via SummaC on the test sample on AESLC for BART-base with the IDDS and random sampling strategies with and without self-learning.
623
+
624
+ ![](images/712880dd7c19434329fa7443baf70e621241f496f12026eca48cf756753e2546.jpg)
625
+ Figure 29: The consistency score calculated via SummaC on the test subset of WikiHow for the BART-base model with various AL strategies.
626
+
627
+ # H Query Duration
628
+
629
+ ![](images/30360bd6248b2955b89f7df2884c5a4ce4ee8aca92f63f594c29c37f87f90297.jpg)
630
+ Query Strategy
631
+ Figure 30: Average duration in seconds of one AL query of 10 instances with different strategies on the AESLC dataset with BART-base as an acquisition model. Train refers to the average time required for training the model throughout the AL cycle. Hardware configuration: 2 Intel Xeon Platinum 8168 CPUs (2.7 GHz, 24 cores each); an NVIDIA Tesla V100 GPU with 32 GB of VRAM.
activelearningforabstractivetextsummarization/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0a3de84ec9d301bed95fb1c5600d4644b362a4a6ee31b11275638f3a37ccbcc0
3
+ size 1899137
activelearningforabstractivetextsummarization/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:72ef1df7517170d0777f1c1eaa5723c38c386d6b8dd99899fd082bd63a6805c5
3
+ size 726694
adapromptadaptivemodeltrainingforpromptbasednlp/739ece06-1173-464f-9b64-c71543c35cb4_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9ba276b46f827d21da74925d509c17752fd358e590319f608758f35171d6329d
3
+ size 84955
adapromptadaptivemodeltrainingforpromptbasednlp/739ece06-1173-464f-9b64-c71543c35cb4_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3337d18179d73494dca272054a34c764c5109505f2c0507fe32008106f0660b6
3
+ size 102087
adapromptadaptivemodeltrainingforpromptbasednlp/739ece06-1173-464f-9b64-c71543c35cb4_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:704c7b2ec3558c009571f5310d1074c0922dac3cb4ef32210a4944503971bc07
3
+ size 2842916
adapromptadaptivemodeltrainingforpromptbasednlp/full.md ADDED
@@ -0,0 +1,337 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # AdaPrompt: Adaptive Model Training for Prompt-based NLP
2
+
3
+ Yulong Chen\*, Yang Liu\*, Li Dong\*, Shuohang Wang\*, Chenguang Zhu\*, Michael Zeng\*, Yue Zhang
4
+
5
+ $\spadesuit$ Zhejiang University $\quad \text{♥}$ Westlake University
6
+
7
+ Microsoft Research Westlake Institute for Advanced Study
8
+
9
+ yulongchen1010@gmail.com yaliu10@microsoft.com yue.zhang@wias.org.cn
10
+
11
+ # Abstract
12
+
13
+ Prompt-based learning, with its capability to tackle zero-shot and few-shot NLP tasks, has gained much attention in the community. The main idea is to bridge the gap between NLP downstream tasks and language modeling (LM), by mapping these tasks into natural language prompts, which are then filled by pretrained language models (PLMs). However, for prompt learning, there are still two salient gaps between NLP tasks and pretraining. First, prompt information is not necessarily sufficiently present during LM pretraining. Second, task-specific data are not necessarily well represented during pretraining. We address these two issues by proposing AdaPrompt, which adaptively retrieves external data for continual pretraining of PLMs by making use of both task and prompt characteristics. In addition, we make use of knowledge in Natural Language Inference models for deriving adaptive verbalizers. Experimental results on five NLP benchmarks show that AdaPrompt can improve over standard PLMs in few-shot settings. In addition, in zero-shot settings, our method outperforms standard prompt-based methods by up to $26.35\%$ relative error reduction.
14
+
15
+ # 1 Introduction
16
+
17
+ Prompt-based methods (Brown et al., 2020; Liu et al., 2021; Schick and Schütze, 2021a; Li and Liang, 2021) have received increasing attention in Natural Language Processing (NLP) recently. The main idea is to make the most use of pretrained language models (PLMs) by adapting an NLP task into a natural language prompt, which can then be filled by PLMs. Take sentiment classification (Socher et al., 2013; Bai et al., 2021) for example. Given the sentence "I love the movie," the standard task is to make a binary classification on its sentiment polarity (i.e., positive or negative).
18
+
19
+ ![](images/42bf3b41783d6573fdb4792a9fd4187bd974f4707ebbee8ada227da7fb0385c1.jpg)
20
+ Figure 1: The distributions of data in prompt-based models. Task data, domain data, prompt data, and general data (for LM pretraining) are usually sampled from different distributions while retaining a certain overlap (target data for prompt training). We aim to explore data from the overlapping area to bridge the gap between PLM and downstream tasks in prompt-based systems.
21
+
22
+ Prompt-based methods first transform the sentence into "I love the movie. The movie is $\langle \text{mask}\rangle$ ." (the underlined text is called the prompt), and then identify its polarity by checking whether the PLM tends to predict "good" or "bad" for the $\langle \text{mask}\rangle$ token (where the predicted words are then verbalized into class labels). The prompt-based task formulation is close to masked language modeling (Schick and Schütze, 2021a,b), which is the mainstream pretraining strategy, allowing PLMs to provide rich language knowledge seamlessly. Prompt-based methods have been shown particularly useful in zero-shot and few-shot settings (Petroni et al., 2019; Yin et al., 2019; Min et al., 2022), where, with limited direct task data, prompt-based inference benefits more from large-scale pretraining than task-oriented fine-tuning.
23
+
24
+ Existing methods, however, still suffer from several potential limitations. First, large raw text data used for pretraining do not necessarily contain sufficient patterns that are directly related to
25
+
26
+ task-specific prompts (illustrated in Figure 1). For instance, the prompt for a question classification task is "Can you tell me the ⟨mask⟩: What are the twin cities?", where ⟨mask⟩ should be a class label word, e.g., location, person, etc. (the correct label for this sample is definition). However, LM pretraining data are typically the BOOKCORPUS (Zhu et al., 2015) plus WIKIPEDIA corpus, where such prompts occur only rarely in literal or paraphrased form. As a result, directly using PLMs to fill such handcrafted prompts across domains can lead to poor performance. Second, to project label words to task labels, most existing work (Schick and Schütze, 2021a,b; Cui et al., 2021) uses a pre-defined verbalizer. However, it often requires expert knowledge to build a verbalizer that thoroughly covers candidate words, and a poorly-designed verbalizer limits the accuracy of predictions. These problems become even more serious under zero-shot or very-few-shot settings, where prompt-based models rely heavily on the generalization ability of PLMs to new tasks and domains.
27
+
28
+ We propose AdaPrompt, a framework that adapts PLMs for end tasks considering both the prompts and the verbalizer. We are interested in addressing the above issues under a zero-shot setting, where little or no labeled training data are available for a particular task. The main idea is to adapt a PLM to a strong prompt-based model for an end task by exploring knowledge from its raw input data. In particular, as shown in Figure 2, given a raw test set without labels, we first ask a PLM to fill a prompt template for each input (e.g., "In summary, the movie is great.", where "great" is filled by the PLM). Then, we use the resulting text (input text + prompt + PLM output) as a prompt-aware query to retrieve relevant data from a large unlabeled corpus. In this manner, we can obtain a large dataset that contains both task and prompt characteristics, and we adaptively perform continual pretraining (Gururangan et al., 2020) of the PLM on the retrieved data, which can substantially benefit prompt-based methods on downstream NLP tasks.
29
+
30
+ Meanwhile, we find that the current way of building verbalizers is also not optimal. Given a specific task, different words can be verbalized into the same class labels. For example, a large number of adjectives can express the positive sentiment, and the best-performing candidates depend on the domain, the PLM and the context. In AdaPrompt, we propose to adaptively augment verbalizers by making
31
+
32
+ use of knowledge from PLMs and Natural Language Inference (NLI) models. Take sentiment analysis for example: given "good" and "bad" as seed verbalizers, we first let PLMs predict more candidate words, such as "amazing" and "great". Then, to identify whether these candidates are suitable for the verbalizer, we refer to an NLI model to predict whether "This movie is amazing." entails the meaning of "This movie is good". In this way, we can automatically expand the verbalizers.
33
+
34
+ Experiments on five text classification tasks show that AdaPrompt outperforms baseline prompt-based methods by $2.29\% - 5.79\%$ in very-few-shot setting and $2.46\% - 15.00\%$ in zero-shot setting on accuracy. To our knowledge, we are the first to consider how to bridge the gap between LM pretraining and NLP downstream tasks for prompt-based NLP. We release our code and data at https://github.com/cylnlp/AdaPrompt.
35
+
36
+ # 2 Related work
37
+
38
+ # 2.1 Zero/Few-shot Prompt-based NLP
39
+
40
+ Although prompt-based methods have been used for multiple NLP tasks (Brown et al., 2020; Raffel et al., 2020; Brown et al., 2020; Cui et al., 2021), most existing work focuses on text classification (Shin et al., 2020; Gao et al., 2021; Min et al., 2022; Hu et al., 2022). A typical related work is PET (Schick and Schütze, 2021a), where Schick and Schütze (2021a) formally define pattern-verbalizer pairs that have been widely adopted by subsequent work. By using such pairs, Schick and Schütze (2021a,b) develop a series of work to explore the potential of PLMs, including annotating soft labels for raw training data and iterative data augmentation. However, different from PET, which assumes the availability of a large silver training set for downstream tasks, we focus on zero- and very-few-shot settings, where even unannotated task-relevant data are limited (Perez et al., 2021). Therefore, following Hu et al. (2022), we simply focus on standard pattern-verbalizer pairs for text classification.
41
+
42
+ Prompt engineering (Jiang et al., 2020; Gao et al., 2021) focuses on how to create prompts that can better induce PLMs to make correct predictions. Discrete prompt engineering works by replacing, deleting, inserting or paraphrasing parts of the prompt (Wallace et al., 2019; Yuan et al., 2021). Those methods can efficiently adapt PLMs to end tasks, but they rely heavily on annotated data for
43
+
44
+ tuning parameters. Different from the above studies, we are interested in narrowing the gap between LM pretraining and NLP tasks for prompt learning in zero- or very-few-shot settings.
45
+
46
+ It has been shown that using different verbalizers can also be a key factor for prompt learning (Hu et al., 2022; Cui et al., 2021). However, manually exploring label words is time-consuming and may neglect potential candidates. Recently, Hu et al. (2022) uses multiple external knowledge bases, such as related words and sentiment dictionaries, to augment verbalizers for corresponding tasks. Different from them, we focus on exploring knowledge in PLMs themselves. By making use of external NLI models AdaPrompt can select verbalizers automatically without the need of labeled task data, which is useful in zero-shot settings.
47
+
48
+ # 2.2 Continual Pretraining for Domain Adaptation
49
+
50
+ Continual pretraining (Gururangan et al., 2020) has shown benefit of optimizing a PLM to a target domain before further fine-tuning. It can be categorised into domain adaptive continual pretraining and task adaptive continual pretraining. The difference is that, domain adaptive pretraining (DAPT) uses domain relevant data while task adaptive pretraining (TAPT) uses task-specific data.
51
+
52
+ Similar to continual pretraining, many recent methods highlight the merits of relying on language modeling objectives for domain adaptation. Chronopoulou et al. (2019) and Radford et al. (2018) propose to train task-specific parameters for PLMs by using an auxiliary LM loss on target domains. Models like SciBERT (Beltagy et al., 2019), DialogLM (Zhong et al., 2021), AMRBART (Bai et al., 2022a), SARA-BERT (Bai et al., 2022b) and Dict-BERT (Yu et al., 2022) are PLMs that are continually pretrained on large amounts of domain/task-specific corpora.
53
+
54
+ Data selection is a common practice in domain adaptation for NLP models (Moore and Lewis, 2010; Ruder and Plank, 2017; van der Wees et al., 2017). It has been used in machine translation (van der Wees et al., 2017; Wang et al., 2018), parsing (Plank and van Noord, 2011; Ruder and Plank, 2017) and sentiment analysis (Ruder et al., 2017). The main idea is to have a selection model that can distinguish in-domain and out-of-domain data. The selection model can be a supervised classifier (Aharoni and Goldberg, 2020), a similarity-
55
+
56
+ based metric (Plank and van Noord, 2011) or language model perplexity (Moore and Lewis, 2010). Very recently, Yao et al. (2021) propose to retrieve a small set of training data from general corpora with labeled task data as queries, finding that using LM objective on this data as an auxiliary loss can help train task-specific NLP models without pretraining.
57
+
58
+ # 3 Method
59
+
60
+ Our method is based on prompt-based text classification methods (Section 3.1). The overall procedure of AdaPrompt is shown in Figure 2, which can be divided into two parts: PLM adaptation (Section 3.2) and verbalizer adaptation (Section 3.4). In Section 3.3, we introduce a method that adapts both PLMs and verbalizers in an iterative way for continual improvements.
61
+
62
+ # 3.1 Prompt-based Text Classification
63
+
64
+ Given an input text, $\mathbf{x} = (x_0, x_1, \dots, x_n)$ , we consider various tasks to classify the sentence into a class label $l \in \mathcal{L}$ . As mentioned in Section 1, the standard prompt-based method reformulates the input into a cloze-style question and identifies its label by checking PLMs' predictions. Table 1 shows the prompt templates and verbalizer patterns for the SST-2 (Socher et al., 2013), Yelp (Zhang et al., 2015), AGNews (Zhang et al., 2015), TREC (Voorhees and Tice, 2000) and DBPedia (Lehmann et al., 2015) datasets, which cover sentiment classification, topic classification and question classification tasks. Formally, let $\mathcal{M}$ be a language model pretrained on large-scale general data, and $\langle \mathfrak{mask} \rangle$ be the mask token. The prompt-based method first defines a pattern function, Prompt, that converts $\mathbf{x}$ into a cloze-style question containing $\langle \mathfrak{mask} \rangle$ . Then, it defines a verbalizer function $v$ , which maps a small set of pre-defined verbalizer words $(\mathcal{Y})$ predicted at the position of $\langle \mathfrak{mask} \rangle$ into class labels, i.e., $v: \mathcal{Y} \mapsto \mathcal{L}$ .
65
+
66
+ Take sentiment classification for movie reviews for instance. The task is to classify the sentiment polarity, where $\mathcal{L} = \{\text{positive}, \text{negative}\}$ . For an input $\mathbf{x}$ , we choose the pattern:
67
+
68
+ $$
69
+ \mathrm{Prompt}(\mathbf{x}) = \text{"} \mathbf{x} \text{. In summary, the movie is } \langle \text{mask} \rangle \text{."}
70
+ $$
71
+
72
+ $$
73
+ \begin{array}{l} \text{Then we define a verbalizer that maps } \mathcal{Y} = \{\text{"good"}, \text{"bad"}\} \text{ into } \mathcal{L}: \\ v(\text{"good"}) = \text{positive} \\ v(\text{"bad"}) = \text{negative} \end{array}
74
+ $$
75
+
76
+ ![](images/ea2c5f966d6a7ea24cd3a31a2ea9ed9a74f03eff31720c6667c0247e1d9d9aa5.jpg)
77
+ Figure 2: Overall framework of AdaPrompt.
78
+
79
+ Given an example:
80
+
81
+ $\mathbf{x} =$ "It's a charming journey."
82
+
83
+ we can convert the input into a cloze-style question using Prompt:
84
+
85
+ Prompt(x) = "It's a charming journey. In summary, the movie is ⟨mask⟩."
86
+
87
+ Using such pattern-verbalizer pairs, we ask $\mathcal{M}$ to directly give scores $s$ for each label $l\in \mathcal{L}$ as:
88
+
89
+ $$
90
+ s(l \mid \mathbf{x}) = \Pr [\langle \text{mask} \rangle = y \mid \mathrm{Prompt}(\mathbf{x}), \mathcal{M}] \tag{1}
91
+ $$
92
+
93
+ where $l = v(y)$ . The predicted label is:
94
+
95
+ $$
96
+ \hat {l} = \underset {l \in \mathcal {L}} {\arg \max } s (l | \mathbf {x}) \tag {2}
97
+ $$
98
+
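+ For illustration, the scoring in Eqs. 1 and 2 can be implemented directly with a masked language model; in the sketch below, the roberta-large checkpoint, the movie-review pattern, and the treatment of verbalizer words as single RoBERTa tokens are assumptions made for the example.
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForMaskedLM
+
+ tokenizer = AutoTokenizer.from_pretrained("roberta-large")
+ model = AutoModelForMaskedLM.from_pretrained("roberta-large")
+ model.eval()
+
+ verbalizer = {"good": "positive", "bad": "negative"}  # v: Y -> L
+
+ def classify(x: str) -> str:
+     # Prompt(x): append the pattern with a mask token.
+     prompt = f"{x} In summary, the movie is {tokenizer.mask_token}."
+     inputs = tokenizer(prompt, return_tensors="pt")
+     with torch.no_grad():
+         logits = model(**inputs).logits
+     mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
+     probs = logits[0, mask_pos].softmax(dim=-1)
+     # s(l | x) = Pr[<mask> = y | Prompt(x), M]  (Eq. 1), then argmax over labels (Eq. 2).
+     scores = {label: probs[tokenizer.convert_tokens_to_ids("Ġ" + word)].item()
+               for word, label in verbalizer.items()}
+     return max(scores, key=scores.get)
+
+ print(classify("It's a charming journey."))  # expected to prefer "positive"
+ ```
+
+ With the augmented verbalizer of Section 3.4, the single word probability in this sketch would simply be replaced by the average over all words mapped to the same label, as in Eq. 5.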
99
+ # 3.2 Adaptively Retrieve Data for Continual Pretraining
100
+
101
+ As discussed in Section 1, the lack of domain adaptation can be a potential challenge for prompt-based NLP models, especially under zero-shot and very-few-shot settings. To tackle this problem, we propose to build a continual pretraining dataset by retrieving from general corpora, with unannotated test texts, designed prompts and label words as queries. In this way, we can obtain task-relevant data for any task or domain, using only the test input. Meanwhile, prompt and verbalizer information is also considered during the retrieval process, leading to a more comprehensive dataset for prompt-aware continual pretraining.
102
+
103
+ Formally, given a retrieval query $q$ , a retrieval engine $\mathcal{E}_D$ indexed on a large general dataset $\mathcal{D}$ can
104
+
105
+ return a set of similar texts $d_{q} = \mathcal{E}_{D}(q)$ . To obtain prompt-aware data that can not only adapt PLMs to target domains but also make PLMs more sensitive to prompts, we include both task and prompt characteristics when building queries. As shown in Figure 2, for a raw input text $\mathbf{x}$ in the test data, we first convert it into Prompt(x), and obtain a set of predicted label words using a PLM $\mathcal{M}$ :
106
+
107
+ $$
108
+ \mathcal{O} = \mathcal{M}(\mathrm{Prompt}(\mathbf{x})) \tag{3}
109
+ $$
110
+
111
+ where $\mathcal{O} = \{o_1,o_2,\dots,o_{|\mathcal{O}|}\}$ are the top- $|\mathcal{O}|$ predictions. We replace the mask token in $\mathrm{Prompt}(\mathbf{x})$ with $o_i$ , to form a list $Q$ of queries. For example:
112
+
113
+ $$
114
+ Q = \left\{q _ {1}, \dots , q _ {| \mathcal {O} |} \right\}, \tag {4}
115
+ $$
116
+
117
+ where $q_{i} =$ " $\mathbf{x}$ . In summary, the movie is $o_i$ ."
118
+
119
+ With this set of prompt-based queries, we retrieve prompt-aware data $\mathcal{D}_p$ , which is a small subset of the general data. In this work, we use ElasticSearch<sup>1</sup> indexed on a large general corpus as the search engine and we ask it to return a list of top- $k$ texts that match the query. As shown in Figure 2, one test input can lead to multiple prompt-aware queries because the masked token in the prompt can be replaced by the $|\mathcal{O}|$ predictions. In addition, given one query, ElasticSearch can also give multiple returns with demanded $k$ .
120
+
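+ A sketch of the query construction and retrieval step is shown below; the index name, the `text` field, and the use of the Elasticsearch 8.x Python client are assumptions, since the exact index layout is not specified here.
+
+ ```python
+ from elasticsearch import Elasticsearch
+
+ es = Elasticsearch("http://localhost:9200")  # assumed local index of the general corpus
+
+ def prompt_aware_queries(x, predictions):
+     # Replace the mask in Prompt(x) with each of the top-|O| predicted words (Eq. 4).
+     return [f"{x} In summary, the movie is {o}." for o in predictions]
+
+ def retrieve(x, predictions, k=10):
+     """Collect the top-k matches for every prompt-aware query, forming D_p."""
+     texts = []
+     for q in prompt_aware_queries(x, predictions):
+         hits = es.search(index="general_corpus",
+                          query={"match": {"text": q}},
+                          size=k)["hits"]["hits"]
+         texts.extend(hit["_source"]["text"] for hit in hits)
+     return texts
+ ```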
121
+ We continue to pretrain the PLM $\mathcal{M}$ on $\mathcal{D}_p$ with masked language modeling loss and obtain
122
+
123
+ Algorithm 1 Verbalizer Adaptation
124
+ Input: prompt $P$ , seed verbalizer words $y \in \mathcal{Y}_l$ , candidate words $c \in \mathcal{C}$ and an NLI system $\mathcal{N}$
125
+ for $c$ in $\mathcal{C}$ do
126
+ if $\mathcal{N}(\mathrm{fill}(P, y), \mathrm{fill}(P, c)) = \mathrm{Entail}$ or $\mathcal{N}(\mathrm{fill}(P, c), \mathrm{fill}(P, y)) = \mathrm{Entail}$ then add $c$ to $\mathcal{Y}_l$
127
+ end if
128
+ end for
129
+ Return $\mathcal{Y}_l$
130
+
131
+ an adapted PLM $\mathcal{M}_{\mathcal{D}_p}$ . $\mathcal{M}_{\mathcal{D}_p}$ now contains richer knowledge of both the target domain and the prompts. It can be used to replace $\mathcal{M}$ in Eq. 1 for zero-shot text classification.
132
+
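+ The continual pretraining step itself can be sketched with the standard masked language modeling machinery; the masking probability, batch size, epoch count, and placeholder texts below are illustrative assumptions rather than values reported here.
+
+ ```python
+ from datasets import Dataset
+ from transformers import (AutoModelForMaskedLM, AutoTokenizer,
+                           DataCollatorForLanguageModeling, Trainer, TrainingArguments)
+
+ tokenizer = AutoTokenizer.from_pretrained("roberta-large")
+ model = AutoModelForMaskedLM.from_pretrained("roberta-large")
+
+ # Placeholder for the retrieved prompt-aware data D_p.
+ retrieved_texts = ["The movie is good overall.", "In summary, the film is great."]
+ dp = Dataset.from_dict({"text": retrieved_texts})
+ dp = dp.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=256),
+             batched=True, remove_columns=["text"])
+
+ trainer = Trainer(
+     model=model,
+     args=TrainingArguments(output_dir="adapted_plm", num_train_epochs=1,
+                            per_device_train_batch_size=16),
+     train_dataset=dp,
+     data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
+ )
+ trainer.train()  # yields the adapted PLM M_{D_p}
+ ```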
133
+ # 3.3 Iterative Adaptation
134
+
135
+ After obtaining $\mathcal{M}_{\mathcal{D}_p}$ , we can iterate the process by replacing $\mathcal{M}$ with $\mathcal{M}_{\mathcal{D}_p}$ in Eq. 3, and obtain an iterative set of predicted words and a list of queries marked as $\mathcal{O}'$ and $Q'$ . Given that $\mathcal{O}'$ contains more in-domain knowledge, we can retrieve higher quality pretraining data with more task relevant information, using $Q'$ to query the $\mathcal{E}_D$ . In this way, we obtain a new version of $\mathcal{D}_p'$ , and a new continual pretrained PLM $\mathcal{M}_{\mathcal{D}_p}'$ , which can also be used for zero-shot predictions using Eq. 1. In this work, we conduct this procedure twice.
136
+
137
+ # 3.4 Adaptive Verbalizer Augmentation
138
+
139
+ As described in Section 3.1, the regular prompt-based method defines a verbalizer that maps predicted label words into task classes, such as "good" for positive and "bad" for negative. However, a predefined verbalizer can be limited. To expand this verbalizer, we first infer the top- $|\mathcal{O}|$ label words at the mask token position over all inputs in the test set. We filter the predicted words and obtain a set of high-frequency words $\mathcal{C}$ as candidates for verbalizer augmentation. Then, we propose a new method for exploring useful verbalizer words by using knowledge from a Natural Language Inference model.
140
+
141
+ Specifically, given a seed verbalizer word $y_{l} \in \mathcal{Y}_{l}$ for label $l$ and a candidate word $c \in \mathcal{C}$ , we check whether the prompt filled with $y_{l}$ and the prompt filled with $c$ entail each other. The pseudo code is shown in Algorithm 1. If the entailment relation holds for this pair, we add $c$ to $\mathcal{Y}_{l}$ , and the resulting $\mathcal{Y}$ can be considered an augmented verbalizer.
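A minimal sketch of Algorithm 1 is given below, assuming the publicly available `roberta-large-mnli` checkpoint as the NLI system $\mathcal{N}$ and a hypothetical prompt-filling helper `fill`; the entailment-probability threshold $t = 0.4$ follows Section 4.2.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

nli_tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli_model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def entail_prob(premise, hypothesis):
    """Probability that `premise` entails `hypothesis` under the NLI model."""
    enc = nli_tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(nli_model(**enc).logits, dim=-1)[0]
    return probs[nli_model.config.label2id.get("ENTAILMENT", 2)].item()

def augment_verbalizer(fill, seed_words, candidates, t=0.4):
    """Algorithm 1: keep candidate c if either filled prompt entails the other."""
    augmented = list(seed_words)
    for c in candidates:
        if any(entail_prob(fill(y), fill(c)) > t or entail_prob(fill(c), fill(y)) > t
               for y in seed_words):
            augmented.append(c)
    return augmented

# Example for SST-2 with a hypothetical prompt-filling helper:
# fill = lambda w: f"In summary, this movie is {w}."
# augment_verbalizer(fill, ["good"], ["great", "decent", "boring"])
```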
142
+
143
+ After obtaining the augmented set of verbalizer
144
+
145
+ words, Eq. 1 can be rewritten as:
146
+
147
+ $$
148
+ s(l \mid \mathbf{x}) = \frac{1}{|\mathcal{Y}_{l}|} \sum_{y \in \mathcal{Y}_{l}} \Pr[\langle\text{mask}\rangle = y \mid Prompt(\mathbf{x}), \mathcal{M}] \tag{5}
149
+ $$
150
+
151
+ and we can still use Eq. 2 for prediction.
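As an illustration of Eq. 5 and the final prediction step, the sketch below scores each class by averaging the mask-token probabilities of its (augmented) verbalizer words and returns the highest-scoring label. The prompt template and the treatment of each verbalizer word as a single space-prefixed BPE token are simplifying assumptions.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-large")
mlm = AutoModelForMaskedLM.from_pretrained("roberta-large")

def predict(x, verbalizer):
    """Eq. 5: average mask probabilities per label, then pick the best label (Eq. 2)."""
    prompt = f"{x} In summary, this movie is {tok.mask_token}."
    enc = tok(prompt, return_tensors="pt")
    mask_pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero(as_tuple=True)[0][0]
    with torch.no_grad():
        probs = torch.softmax(mlm(**enc).logits[0, mask_pos], dim=-1)
    scores = {}
    for label, words in verbalizer.items():
        # Assumes each verbalizer word maps to a single (space-prefixed) BPE token.
        ids = [tok(" " + w, add_special_tokens=False)["input_ids"][0] for w in words]
        scores[label] = sum(probs[i].item() for i in ids) / len(words)
    return max(scores, key=scores.get)

# verbalizer = {"positive": ["good", "great"], "negative": ["bad", "boring"]}
# predict("An engaging and heartfelt drama.", verbalizer)
```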
152
+
153
+ # 4 Experiments
154
+
155
+ # 4.1 Datasets and Prompts
156
+
157
+ To evaluate our methods, we conduct experiments on five benchmarks: SST-2 (Socher et al., 2013), Yelp (Zhang et al., 2015), AGNews (Zhang et al., 2015), TREC (Voorhees and Tice, 2000) and DBPedia (Lehmann et al., 2015). Table 1 shows the prompt templates and seed verbalizer words that we use for each dataset. For AGNews and YELP, we adapt patterns and verbalizers from PET (Schick and Schütze, 2021a), since it is the basic prompt-based method that has been most widely used.
158
+
159
+ AGNews is a text classification dataset in the domain of News. Given a headline and a main text body, the model is required to classify the news into one of the classes: (1) World, (2) Sports, (3) Business or (4) Science/Tech.
160
+
161
+ YELP is a sentiment analysis dataset. Given a restaurant review, the task is to predict whether the review is positive or negative.
162
+
163
+ SST-2 is a sentiment analysis dataset similar to YELP, but its domain is movie reviews. Thus, we use the same seed prompt and verbalizer words as for YELP, but change "restaurant" in the prompt template to "movie".
164
+
165
+ DBPedia 2014 is an ontology classification dataset, extracted from DBPedia 2014, with 14 non-overlapping classes, such as Educational Institution and Office Holder. We define two patterns for this task:
166
+
167
+ $P1(\mathbf{x}) =$ "Description to the $\langle \mathrm{mask}\rangle \pmb {x}^{\prime \prime}$
168
+
169
+ $P2(\mathbf{x}) =$ "Introduction to the $\langle \mathrm{mask}\rangle \pmb {x}^{\prime \prime}$ d we use $P2$ as the seed pattern.
170
+
171
+ TREC-10 is a question classification dataset. Given a question, the task is to identify the objective that the question asks, and classify it into one of six classes, such as a definition question or a numeric question. We define two patterns for this task:
172
+
173
+ $P1(\mathbf{x}) = \text{"Tell me the} \langle \mathrm{mask} \rangle \mathbf{x}$
174
+
175
+ $P2(\mathbf{x}) =$ "Can you tell me the $\langle \mathrm{mask}\rangle$ .. $x$ d $P2$ as the seed prompt.
176
+
177
+ # 4.2 Settings
178
+
179
+ In this work, we take ROBERTA-large (Liu et al., 2019) as our foundation PLM and adopt pattern-verbalizer pairs from (Schick and Schütze, 2021a)
180
+
181
+ <table><tr><td>Dataset</td><td>Class</td><td>Objective</td><td>Prompt Template</td><td>Verbalizer</td></tr><tr><td>SST-2</td><td>2</td><td>sentiment</td><td>Text In summary, this movie is ⟨mask⟩.</td><td>“good”, “bad”</td></tr><tr><td>Yelp</td><td>2</td><td>sentiment</td><td>Text In summary, this restaurant is ⟨mask⟩.</td><td>“good”, “bad”</td></tr><tr><td>AGNews</td><td>4</td><td>news topic</td><td>[Category: ⟨mask⟩] Title , Body</td><td>“Sport”, “Tech”, “Business”, “World”</td></tr><tr><td>TREC</td><td>6</td><td>question</td><td>Can you tell me the ⟨mask⟩ Text</td><td>“explanation”, “description”, “person”, “location”, “number”, “entity”</td></tr><tr><td>DBPedia</td><td>14</td><td>ontology</td><td>Introduction to the ⟨mask⟩ Text</td><td>“company”, “school”, “artist”, “film”, “book”, “plan”, “building”, “village”, “animal”, “sport”, “album”, “officer”, “scenery”, “transportation”</td></tr></table>
182
+
183
+ Table 1: Datasets used in this paper with seed prompts and verbalizer words. Each seed verbalizer word corresponds to a class label.
184
+
185
+ <table><tr><td>Dataset</td><td>Test set size</td><td>Top-|O|</td><td>$E_{space}$ (k)</td><td>Resulting Data</td></tr><tr><td>TREC</td><td>500</td><td>20</td><td>100</td><td>60k</td></tr><tr><td>SST-2</td><td>872</td><td>20</td><td>100</td><td>205k</td></tr><tr><td>AGNews</td><td>7,600</td><td>10</td><td>50</td><td>414k</td></tr><tr><td>YELP</td><td>38,000</td><td>1</td><td>50</td><td>267k</td></tr><tr><td>DBPedia</td><td>70,000</td><td>1</td><td>50</td><td>1,301k</td></tr></table>
186
+
187
+ Table 2: Data statistics for datasets. $E_{space}$ corresponds to the ElasticSearch space. Note that the resulting data size is calculated after data de-duplication.
188
+
189
+ (Section 3.1) as the baseline setting which is widely used and can be easily extended to other methods (Shin et al., 2020).
190
+
191
+ We conduct experiments in zero-shot and few-shot settings. In the zero-shot setting, we directly use PLMs to infer label words at masked positions. Under the few-shot setting, we follow Schick and Schütze (2021a) and Hu et al. (2022) and use prompt-tuning, which directly fine-tunes an LM given a small set of annotated data and prompts.
192
+
193
+ For the zero-shot setting, the choice of hyperparameters is based on previous work (Gao et al., 2021; Schick and Schütze, 2021a,b). For all continual pretraining, we use a learning rate of $1e^{-5}$ and a batch size of 96. We train each model for 3 epochs and use the checkpoint at 500 steps for evaluation.
194
+
195
+ For few-shot settings, we evaluate our models with 10, 50, and 100 training samples. We follow previous work (Hu et al., 2022; Schick and Schütze, 2021a; Gao et al., 2021), repeat training and evaluation 5 times using different seeds, and report the averaged scores for each dataset.
196
+
197
+ Prompt-Aware Data Retrieval We take the pretraining data of the ROBERTA model (BOOKCORPUS (Zhu et al., 2015), WIKIPEDIA, CCNEWS (Nagel, 2016), STORIES (Trinh and Le, 2018), and OPENWEBTEXT (Gokaslan and Cohen, 2019)) as the general dataset to query from. We index it at the sentence level with ElasticSearch and use TF-IDF as the similarity metric.
198
+
199
+ Table 2 presents the statistics of the evaluation datasets used in this paper. TREC and SST-2 have smaller test sets, while YELP and DBPedia have much larger ones. To balance the retrieved data size, we set different values of top- $|\mathcal{O}|$ for the predicted words and of the ElasticSearch space $(k)$ for different datasets based on our practical experience. In other words, given one test input, we retrieve $|\mathcal{O}| \times k$ texts. After de-duplication, the resulting retrieved data sizes are shown in Table 2.
200
+
201
+ Verbalizer Augmentation To obtain possible verbalizer words that can better represent classes, we first obtain the top- $N$ predicted words for each test sample ( $N = 20$ for SST-2 and TREC, $N = 10$ for AGNews, and $N = 5$ for YELP and DBPedia, considering their test set sizes). We set the number of candidate words to $|\mathcal{C}| = 20 \times |\mathcal{L}|$ , where $|\mathcal{L}|$ is the number of classes. We use a ROBERTA-large model fine-tuned on MNLI (Williams et al., 2018) as the entailment model for identifying potential verbalizer words for augmentation. Candidates with entailment probability higher than a threshold $t$ are then added to the augmented verbalizer. We set $t = 0.4$ based on our experiments.
202
+
203
+ For comparison, we also use Word2Vec (Mikolov et al., 2013) to obtain word vectors and explore potential verbalizer words by their similarity with the seed verbalizer words.
204
+
205
+ # 4.3 Results
206
+
207
+ # 4.3.1 Main Results
208
+
209
+ Zero-shot Performance In the zero-shot setting, we compare AdaPrompt with prompt-based methods using ROBERTA (Schick and Schütze, 2021a), GPT-2 (Gao et al., 2021) and GPT-3 (Zhao et al., 2021), respectively. Channel refers to the noisy channel model (Min et al., 2022) based on GPT-2. Table 3 presents the results under the zero-shot setting. Following previous work (Schick and Schütze,
210
+
211
+ <table><tr><td>Models</td><td colspan="2">SST-2</td><td>Yelp</td><td colspan="2">AGNEWS</td><td colspan="2">DBPedia</td><td colspan="2">TREC</td><td>Avg.</td></tr><tr><td>GPT-2</td><td>63.00/</td><td>NA(NA)</td><td>--</td><td>59.80/</td><td>NA(NA)</td><td>32.30/</td><td>NA(NA)</td><td>38.70/</td><td>NA(NA)</td><td>--</td></tr><tr><td>Channel</td><td>77.10/</td><td>NA(NA)</td><td>--</td><td>61.80/</td><td>NA(NA)</td><td>51.40/</td><td>NA(NA)</td><td>30.50/</td><td>NA(NA)</td><td>--</td></tr><tr><td>GPT-3</td><td>75.80/</td><td>0.00(75.80)</td><td>--</td><td colspan="2">73.90/0.00(73.90)</td><td colspan="2">59.70/0.00(59.70)</td><td colspan="2">57.40/0.00(57.40)</td><td>--</td></tr><tr><td>R.</td><td>64.56/</td><td>16.77(88.99)</td><td>72.63/</td><td>6.34(87.97)</td><td>69.52/6.96(78.76)</td><td colspan="2">56.32/0.49(56.67)</td><td colspan="2">45.50/0.14(45.60)</td><td>61.71</td></tr><tr><td>Ada</td><td>75.92/</td><td>17.36(91.28)</td><td>75.09/</td><td>17.57(89.25)</td><td>76.55/7.28(84.95)</td><td>70.95/</td><td>8.80(77.17)</td><td>60.50/</td><td>3.54(63.00)</td><td>71.80</td></tr><tr><td>iAda</td><td>77.18/</td><td>17.96(91.74)</td><td>75.81/</td><td>18.05(90.41)</td><td>74.28/9.00(83.37)</td><td>73.01/</td><td>6.70(77.92)</td><td>61.10/</td><td>1.27(62.00)</td><td>72.28</td></tr></table>
212
+
213
+ Table 3: Zero-shot results. We report the average accuracy and standard deviation over different patterns. Results of the best patterns are shown in brackets. Avg. reports the overall averaged results. R. stands for ROBERTA-large. Ada and iAda denote AdaPrompt and iterative AdaPrompt based on ROBERTA-large, respectively. The results of GPT-2 large and Channel are from (Min et al., 2022), and Channel is based on GPT-2 large. GPT-3 results are reported by Zhao et al. (2021), using GPT-3 (175B). NA denotes that results are not reported. For GPT-3 (Zhao et al., 2021), only a fixed prompt format is used.
214
+
215
+ <table><tr><td>|T|</td><td>Models</td><td>SST-2</td><td>Yelp</td><td>AGNEWS</td><td>DBPedia</td><td>TREC</td><td>Avg.</td></tr><tr><td rowspan="2">10</td><td>ROBERTA</td><td>84.97 ± 9.88</td><td>86.84 ± 16.08</td><td>78.42 ± 6.23</td><td>86.78 ± 1.10</td><td>45.56 ± 9.55</td><td>76.51</td></tr><tr><td>AdaPrompt</td><td>90.42 ± 1.63</td><td>89.13 ± 13.30</td><td>84.21 ± 2.00</td><td>91.68 ± 1.84</td><td>57.56 ± 7.85</td><td>82.60</td></tr><tr><td rowspan="2">50</td><td>ROBERTA</td><td>92.56 ± 1.31</td><td>95.87 ± 0.57</td><td>85.50 ± 1.36</td><td>94.72 ± 0.49</td><td>73.88 ± 3.13</td><td>88.51</td></tr><tr><td>AdaPrompt</td><td>92.75 ± 1.03</td><td>95.74 ± 0.89</td><td>86.29 ± 0.80</td><td>94.59 ± 0.71</td><td>78.42 ± 6.17</td><td>89.56</td></tr><tr><td rowspan="2">100</td><td>ROBERTA</td><td>92.40 ± 1.04</td><td>95.89 ± 0.68</td><td>87.29 ± 1.31</td><td>95.59 ± 0.52</td><td>86.30 ± 2.14</td><td>91.49</td></tr><tr><td>AdaPrompt</td><td>92.75 ± 0.68</td><td>95.93 ± 0.95</td><td>87.98 ± 0.65</td><td>95.60 ± 0.51</td><td>87.58 ± 1.38</td><td>91.97</td></tr></table>
216
+
217
+ Table 4: Average accuracy and standard deviation on SST-2, YELP, AGNews, DBPedia and TREC under few-shot settings. $|T|$ is the training set size. Each experiment is repeated 5 times using different seeds.
218
+
219
+ 2021a,b), we report average accuracy, standard deviation and accuracy of the best pattern over different patterns.
220
+
221
+ First, compared with our foundation model, ROBERTA-large, we see that AdaPrompt consistently outperforms the regular prompt-based method on all datasets in both average and best-pattern performance, bringing a $2.46 \sim 14.63$ point improvement. Notably, AdaPrompt outperforms GPT-3, a huge model with $175B$ parameters pretrained on a gigantic corpus, in the zero-shot setting. This confirms the effectiveness of AdaPrompt in domain adaptation. We observe that iterative AdaPrompt brings further improvements on most datasets (SST-2, YELP and DBPedia). This directly demonstrates that PLMs continually pretrained on the retrieved data become more adaptive to downstream tasks and thus generate more task-relevant label words, which in turn serve as a source for finding better texts. The performance of iterative AdaPrompt (iAda) decreases on AGNEWS; we believe this is because this news dataset is similar to the general data used for pretraining ROBERTA, so continual pretraining on such retrieved data is less useful. Finally, AdaPrompt improves the overall average accuracy by over 10 points (10.09).
222
+
223
+ Few-shot Performance Table 4 reports the experimental results in the few-shot setting. Each experiment
224
+
225
+ is repeated 5 times using different seeds and we report the average accuracy and standard deviation. To explore whether AdaPrompt can consistently bring improvement to ROBERTA, we conduct experiments using 10, 50, 100 samples, respectively.
226
+
227
+ Compared with the ROBERTA-large baseline, AdaPrompt still improves model performance under the few-shot setting. Although the relative improvement decreases as the training set size increases, AdaPrompt outperforms ROBERTA on all tasks in all few-shot settings. In particular, AdaPrompt outperforms standard ROBERTA models by $2.29 \sim 5.79\%$ in the 10-shot setting, showing that it is especially useful in the very-few-shot setting.
228
+
229
+ # 4.3.2 Ablation Study
230
+
231
+ To study the effectiveness of continual pretraining on prompt-aware data and of verbalizer augmentation, we conduct ablation experiments by removing continual pretraining (CP) or verbalizer augmentation (va). As shown in Table 5, compared with the foundation model (-CP-va, 61.71 acc. on average), continual pretraining and verbalizer augmentation both improve model performance (5.31 and 5.89 acc. on average, respectively), and the model performs best when the two methods are combined (AdaPrompt), suggesting that they benefit each other.
232
+
233
+ <table><tr><td>Models</td><td>SST-2</td><td>Yelp</td><td>AGNEWS</td><td>DBPedia</td><td>TREC</td><td>Avg.</td></tr><tr><td>AdaPrompt</td><td>75.92 ± 17.36</td><td>75.09 ± 17.57</td><td>76.55 ± 07.28</td><td>70.95 ± 08.80</td><td>60.50 ± 03.54</td><td>71.80</td></tr><tr><td>-va</td><td>71.07 ± 13.58</td><td>71.04 ± 15.57</td><td>72.16 ± 05.78</td><td>65.90 ± 02.71</td><td>45.40 ± 01.13</td><td>65.11</td></tr><tr><td>-CP</td><td>72.16 ± 16.35</td><td>75.72 ± 17.79</td><td>75.70 ± 07.88</td><td>50.95 ± 00.09</td><td>58.70 ± 03.25</td><td>66.65</td></tr><tr><td>-PR</td><td>71.22 ± 15.55</td><td>74.85 ± 17.51</td><td>75.12 ± 05.71</td><td>70.40 ± 07.48</td><td>58.60 ± 00.57</td><td>70.04</td></tr><tr><td>-CP-va</td><td>64.56 ± 16.77</td><td>72.63 ± 16.34</td><td>69.52 ± 06.96</td><td>56.32 ± 00.49</td><td>45.50 ± 00.14</td><td>61.71</td></tr></table>
234
+
235
+ Table 5: Experimental results of the ablation study. "-" means "without" here. va: verbalizer augmentation, CP: Continual Pretraining, PR: Prompt-aware Retrieval. Note that -PR means we do not use prompt-aware retrieval, but simply use raw test input data for retrieval and continual pretraining, referred to as in-domain adaptation.
236
+
237
+ <table><tr><td>Model</td><td>SST-2</td><td>DBPedia</td></tr><tr><td>ROBERTA</td><td>64.82 ± 11.62</td><td>56.49 ± 00.41</td></tr><tr><td>AdaPrompt</td><td>73.05 ± 13.08</td><td>70.97 ± 08.87</td></tr></table>
238
+
239
+ Table 6: Model performance tested on unseen test set. We report averaged accuracy and standard deviation.
240
+
241
+ <table><tr><td colspan="5">SST-2</td></tr><tr><td>\(E_{space}\)</td><td>1</td><td>10</td><td>50</td><td>100</td></tr><tr><td>Size</td><td>3k</td><td>23k</td><td>98k</td><td>205k</td></tr><tr><td>Accuracy</td><td>73.54 ±16.77</td><td>75.06 ±17.34</td><td>75.95 ±17.73</td><td>75.92 ±17.36</td></tr><tr><td colspan="5">DBPedia</td></tr><tr><td>\(E_{space}\)</td><td>1</td><td>5</td><td>25</td><td>50</td></tr><tr><td>Size</td><td>58k</td><td>235k</td><td>708k</td><td>1,301k</td></tr><tr><td>Accuracy</td><td>70.64 ±9.66</td><td>71.39 ±10.78</td><td>74.13 ±7.51</td><td>70.95 ±8.80</td></tr></table>
242
+
243
+ Table 7: Analysis on retrieved data size. Data sizes are calculated after de-duplication.
244
+
245
+ In addition, we investigate the influence of prompt-aware retrieval on model performance by removing it and retrieving with raw texts only. From the table we can see that on all datasets, using prompt-augmented queries (AdaPrompt) gives substantially stronger results. Take SST-2 as an example: the accuracy is 71.22 (SST-2 -PR) given only raw input queries, but 75.92 with prompt-augmented queries, a 4.7-point absolute improvement. This shows that continual pretraining using prompt-aware data is highly beneficial for zero-shot prompt-based NLP.
246
+
247
+ # 4.4 Analysis
248
+
249
+ Generalization Capability For the experiments in Section 4.3.1, we use the task test set as the source for building queries to retrieve pretraining data. However, in a more general setting, we want to know whether AdaPrompt can still generalize when the query data and the test set are different. To this end, we build an unseen test set from the original training sets of SST-2 and DBPedia. We then evaluate models (trained using queries from the original test set) on this unseen test set. As shown in Table 6, AdaPrompt achieves 73.05 and 70.97 accuracy on SST-2 and DBPedia,
250
+
251
+ respectively. Compared with the performance on the original test set (Table 3), although AdaPrompt's accuracy slightly decreases when evaluated on the SST-2 unseen test set, it still outperforms ROBERTA by a large margin (+8.23). This demonstrates that AdaPrompt has strong generalization ability when the query data and the test set are different.
252
+
253
+ Size of Retrieved Data As stated, ElasticSearch returns the top- $k$ texts in order of matching score. With a smaller $k$ , the retrieved data are more textually related to the query, while with a larger $k$ , the retrieved data can contain more noise. To compare the effects of different sizes of retrieved data for continual pretraining, we set $k$ to 1, 10, 50, 100 for SST-2 and to 1, 5, 25, 50 for DBPedia, respectively. As shown in Table 7, accuracy rises at first as the retrieval size increases, but as the retrieval size grows larger, accuracy starts to decrease slightly. This can be explained by the fact that lower-ranked retrieved data have lower relevance to the target task, which introduces more noise into continual pretraining. We use a fixed $k$ for our experiments in the zero-shot setting (Section 4.2), due to the lack of a validation set. In few-shot settings, $k$ can in practice be treated as a hyperparameter and tuned on validation data.
254
+
255
+ The Effect of Verbalizer Strategies Table 8 compares model performance when using different verbalizer augmentation strategies, namely using an NLI model and using word similarity (Section 4.2). Additionally, we compare AdaPrompt with a verbalizer augmentation method using a knowledge base (KB) (Hu et al., 2022) $^{2}$ . For a fair comparison, we limit the verbalizer word set for each label to at most 5 words. We report average accuracy and standard deviation.
256
+
257
+ <table><tr><td>Dataset</td><td>SST-2</td><td>YELP</td><td>AGNEWS</td><td>DBPedia</td><td>TREC</td><td>Avg.</td></tr><tr><td>\( va_w \)</td><td>74.91 ± 11.71</td><td>75.39 ± 17.47</td><td>69.07 ± 06.70</td><td>55.32 ± 11.33</td><td>60.60 ± 03.39</td><td>67.06</td></tr><tr><td>\( va_m \)</td><td>75.92 ± 17.36</td><td>75.09 ± 17.57</td><td>76.55 ± 07.28</td><td>70.95 ± 08.80</td><td>60.50 ± 03.54</td><td>71.80</td></tr><tr><td>\( va_k \)</td><td>69.07 ± 15.80</td><td>74.64 ± 17.55</td><td>60.15 ± 07.79</td><td>74.85 ± 17.50</td><td>24.00 ± 00.57</td><td>60.54</td></tr></table>
258
+
259
+ Table 8: Model performance of AdaPrompt using different verbalizer augmentation strategies. $va_{w}$ : using word2vec similarity. $va_{m}$ : using ROBERTA trained on MNLI. $va_{k}$ : using most related words/sentiment dictionary. Avg. refers to overall averaged results.
260
+
261
+ <table><tr><td>Model</td><td>Size</td><td>SST-2</td></tr><tr><td>Albert</td><td>17M</td><td>54.67 ± 3.30(58.94)</td></tr><tr><td>Albert+AdaPrompt</td><td>17M</td><td>58.51 ± 5.79(63.99)</td></tr><tr><td>Bert</td><td>340M</td><td>58.03 ± 6.18(63.53)</td></tr><tr><td>Bert+AdaPrompt</td><td>340M</td><td>68.89 ± 16.11(85.67)</td></tr><tr><td>ROBERTA</td><td>355M</td><td>64.56 ± 16.77(88.99)</td></tr><tr><td>ROBERTA+AdaPrompt</td><td>355M</td><td>77.18 ± 17.96(91.74)</td></tr></table>
262
+
263
+ Table 9: Zero-shot results on SST-2 with different PLMs. We report average accuracy and standard deviation; results of the best patterns are shown in brackets.
264
+
265
+ Results show that, compared with using word similarity to select candidate words or directly using KBs to augment verbalizer words, using NLI to augment verbalizer words gives better performance on most tasks and is also more stable. We also find that using KBs to augment verbalizer words gives better performance on the DBPedia task, but much worse performance on the TREC task. This may be because TREC is less close to topic classification (Min et al., 2022), and directly using the most related words can be noisy. This also suggests that a more sophisticated strategy that takes task and prompt information into account could be useful, which we leave for future work.
266
+
267
+ AdaPrompt with different PLMs We apply AdaPrompt to different PLMs (Bert-large, Albert-large and ROBERTA-large). We report experimental results on the SST-2 dataset in Table 9. Although the performance of the different models varies, we observe that AdaPrompt consistently brings large improvements for all of them. We also find that model performance increases with model size. AdaPrompt with ROBERTA-large outperforms the other models by a large margin $(8.29\sim 18.67)$ and achieves 91.74 accuracy with the best pattern.
268
+
269
+ # 5 Conclusion
270
+
271
+ We investigated AdaPrompt, a zero-shot prompt-based method for NLP that makes use of test input data and prompts for adaptive continual pretraining and verbalizer selection. Results on five classification datasets show that AdaPrompt improves
272
+
273
+ over a standard prompt method by large margins. In particular, retrieving relevant data for continual pretraining of a language model can serve to warm-up the model for both domain adaptation and prompt-filling tasks. In addition, an NLI model allows effective selection of filled tokens to achieve improved performance.
274
+
275
+ # Limitation
276
+
277
+ We acknowledge two major limitations of this work:
278
+
279
+ 1. We only tested AdaPrompt on text classification tasks. The intention is to use this clear setting to compare with other prompt-based models. However, it is possible to extend AdaPrompt to other natural language understanding tasks or languages, which we leave for future exploration.
280
+ 2. We only tested ElasticSearch as the search method. However, there are signs that the quality of the retrieved text is constrained by the search engine. A better configuration or model for the search method might further improve AdaPrompt.
281
+
282
+ # Acknowledgements
283
+
284
+ Yue Zhang is the corresponding author. We thank all reviewers for their comments. This work is supported by the Pioneer and "Leading Goose" R&D Program of Zhejiang under Grant Number 2022SDXHDX0003.
285
+
286
+ # References
287
+
288
+ Roee Aharoni and Yoav Goldberg. 2020. Unsupervised domain clusters in pretrained language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7747-7763, Online. Association for Computational Linguistics.
289
+ Xuefeng Bai, Yulong Chen, and Yue Zhang. 2022a. Graph pre-training for AMR parsing and generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6001-6015, Dublin, Ireland. Association for Computational Linguistics.
290
+ Xuefeng Bai, Pengbo Liu, and Yue Zhang. 2021. Investigating typed syntactic dependencies for targeted sentiment classification using graph attention neural network. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:503-514.
291
+ Xuefeng Bai, Linfeng Song, and Yue Zhang. 2022b. Semantic-based pre-training for dialogue understanding. In Proceedings of the 29th International Conference on Computational Linguistics, pages 592-607, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
292
+ Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615-3620, Hong Kong, China. Association for Computational Linguistics.
293
+ Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
294
+ Alexandra Chronopoulou, Christos Baziotis, and Alexandros Potamianos. 2019. An embarrassingly simple approach for transfer learning from pretrained language models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2089-2095, Minneapolis, Minnesota. Association for Computational Linguistics.
295
+
296
+ Leyang Cui, Yu Wu, Jian Liu, Sen Yang, and Yue Zhang. 2021. Template-based named entity recognition using BART. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1835-1845, Online. Association for Computational Linguistics.
297
+ Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816-3830, Online. Association for Computational Linguistics.
298
+ Aaron Gokaslan and Vanya Cohen. 2019. Openweb-text corpus.
299
+ Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics.
300
+ Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Jingang Wang, Juanzi Li, Wei Wu, and Maosong Sun. 2022. Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2225-2240, Dublin, Ireland. Association for Computational Linguistics.
301
+ Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423-438.
302
+ Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, Soren Auer, et al. 2015. Dbpedia-a large-scale, multilingual knowledge base extracted from wikipedia. Semantic web, 6(2):167-195.
303
+ Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190.
304
+ Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586.
305
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. ArXiv preprint, abs/1907.11692.
306
+
307
+ Tomas Mikolov, Kai Chen, Greg S Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space.
308
+ Sewon Min, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Noisy channel language model prompting for few-shot text classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5316-5330, Dublin, Ireland. Association for Computational Linguistics.
309
+ Robert C. Moore and William Lewis. 2010. Intelligent selection of language model training data. In Proceedings of the ACL 2010 Conference Short Papers, pages 220-224, Uppsala, Sweden. Association for Computational Linguistics.
310
+ Sebastian Nagel. 2016. Cc-news.
311
+ Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True few-shot learning with language models. *ArXiv preprint*, abs/2105.11447.
312
+ Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473, Hong Kong, China. Association for Computational Linguistics.
313
+ Barbara Plank and Gertjan van Noord. 2011. *Effective measures of domain similarity for parsing*. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1566–1576, Portland, Oregon, USA. Association for Computational Linguistics.
314
+ Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
315
+ Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21:1-67.
316
+ Sebastian Ruder, Parsa Ghaffari, and John G Breslin. 2017. Data selection strategies for multi-domain sentiment analysis. ArXiv preprint, abs/1702.02426.
317
+ Sebastian Ruder and Barbara Plank. 2017. Learning to select data for transfer learning with Bayesian optimization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 372-382, Copenhagen, Denmark. Association for Computational Linguistics.
318
+
319
+ Timo Schick and Hinrich Schütze. 2021a. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269, Online. Association for Computational Linguistics.
320
+ Timo Schick and Hinrich Schütze. 2021b. It's not just size that matters: Small language models are also few-shot learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339-2352.
321
+ Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. 2020. Eliciting knowledge from language models using automatically generated prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4222-4235.
322
+ Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.
323
+ Trieu H Trinh and Quoc V Le. 2018. A simple method for commonsense reasoning. ArXiv preprint, abs/1806.02847.
324
+ Marlies van der Wees, Arianna Bisazza, and Christof Monz. 2017. Dynamic data selection for neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1400-1410, Copenhagen, Denmark. Association for Computational Linguistics.
325
+ Ellen M Voorhees and Dawn M Tice. 2000. Building a question answering test collection. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, pages 200-207.
326
+ Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153-2162, Hong Kong, China. Association for Computational Linguistics.
327
+ Wei Wang, Taro Watanabe, Macduff Hughes, Tetsuji Nakagawa, and Ciprian Chelba. 2018. Denoising neural machine translation training with trusted data and online data selection. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 133-143, Brussels, Belgium. Association for Computational Linguistics.
328
+
329
+ Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
330
+ Xingcheng Yao, Yanan Zheng, Xiaocong Yang, and Zhilin Yang. 2021. Nlp from scratch without largescale pretraining: A simple and efficient framework. ArXiv preprint, abs/2111.04130.
331
+ Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3914-3923, Hong Kong, China. Association for Computational Linguistics.
332
+ Wenhao Yu, Chenguang Zhu, Yuwei Fang, Donghan Yu, Shuohang Wang, Yichong Xu, Michael Zeng, and Meng Jiang. 2022. Dict-BERT: Enhancing language model pre-training with dictionary. In *Findings of the Association for Computational Linguistics: ACL* 2022, pages 1907-1918, Dublin, Ireland. Association for Computational Linguistics.
333
+ Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. Bartscore: Evaluating generated text as text generation. ArXiv preprint, abs/2106.11520.
334
+ Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 649-657.
335
+ Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 12697-12706. PMLR.
336
+ Ming Zhong, Yang Liu, Yichong Xu, Chenguang Zhu, and Michael Zeng. 2021. Dialoglm: Pre-trained model for long dialogue understanding and summarization. ArXiv preprint, abs/2109.02492.
337
+ Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 19-27. IEEE Computer Society.
adapromptadaptivemodeltrainingforpromptbasednlp/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0eea58e2d43e4a2d43da325ebeac196c7202577eae9bc49d7c17047164cca41d
3
+ size 534605
adapromptadaptivemodeltrainingforpromptbasednlp/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:81fd6493ee603bb48c2c1b14d930d164c79fd15529c980bd197b5d482375e6e4
3
+ size 407138
adaptersforenhancedmodelingofmultilingualknowledgeandtext/7b86874c-c8d8-473f-89fd-815cbc935167_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:00f9cdbedf4f925f3d5e8363b68f8e234db96b868fc71ee12eb93083eb4effa3
3
+ size 117061
adaptersforenhancedmodelingofmultilingualknowledgeandtext/7b86874c-c8d8-473f-89fd-815cbc935167_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f504a51f942db6fd6b9de61daaa3d1042ea184634099cc35e488d4cf4289c085
3
+ size 136017
adaptersforenhancedmodelingofmultilingualknowledgeandtext/7b86874c-c8d8-473f-89fd-815cbc935167_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aaa2651c390806efb29a3720fa70fa90d5197e558f559cfa21d58df322780c3d
3
+ size 781560
adaptersforenhancedmodelingofmultilingualknowledgeandtext/full.md ADDED
@@ -0,0 +1,377 @@
1
+ # Adapters for Enhanced Modeling of Multilingual Knowledge and Text
2
+
3
+ Yifan Hou $^{1}$ , Wenxiang Jiao $^{2}$ , Meizhen Liu $^{3}$ , Carl Allen $^{1}$ , Zhaopeng Tu $^{2}$ , Mrinmaya Sachan $^{1}$
4
+
5
+ $^{1}$ ETH Zürich, $^{2}$ Tencent AI Lab, $^{3}$ Shandong University
6
+
7
+ $^{1}\{yifan.hou, carl.allen, mrinmaya.sachan\} @inf.ethz.ch$
8
+
9
+ $^{2}$ {joelwxjiao, zptu}@tencent.com, $^{3}$ meizhen.liu@mail.sdu.edu.cn
10
+
11
+ # Abstract
12
+
13
+ Large language models appear to learn facts from the large text corpora they are trained on. Such facts are encoded implicitly within their many parameters, making it difficult to verify or manipulate what knowledge has been learned. Language models have recently been extended to multilingual language models (MLLMs), enabling knowledge to be learned across hundreds of languages. Meanwhile, knowledge graphs contain facts in an explicit triple format, which require careful and costly curation and are only available in a few high-resource languages, restricting their research and application. To address these issues, we propose to enhance MLLMs with knowledge from multilingual knowledge graphs (MLKGs) so as to tackle language and knowledge graph tasks across many languages, including low-resource ones. Specifically, we introduce a lightweight adapter set to enhance MLLMs with cross-lingual entity alignment and facts from MLKGs for many languages. Experiments on common benchmarks show that such enhancement benefits both MLLMs and MLKGs, achieving: (1) comparable or improved performance for knowledge graph completion and entity alignment relative to baselines, especially for low-resource languages (for which knowledge graphs are unavailable); and (2) improved MLLM performance on language understanding tasks that require multilingual factual knowledge; all while maintaining performance on other general language tasks. $^{1}$
14
+
15
+ # 1 Introduction
16
+
17
+ Knowledge graphs serve as a source of explicit factual information for various NLP tasks. However, language models (Devlin et al., 2019; Brown et al., 2020), which capture implicit knowledge from vast text corpora, are already being used in knowledge-intensive tasks. Recently, language models have
18
+
19
+ ![](images/4eeca23f281089cff2028cd52ec860b6d3aaaf328718624553bbb5908c424aa1.jpg)
+ ![](images/c4d6d0695c4e89ac5c04332566bd36426145013d51cdb8ce958314214a48611f.jpg)
+ Figure 1: Combining MLLMs and MLKGs benefits both: MLKGs suffer from incompleteness and are limited to few languages, which MLLMs can supplement. MLLMs lack entity alignment and firm facts, which MLKGs can provide. (The panels illustrate an MLLM relation-extraction example: the sentence "Alain de Botton has recently been living in Switzerland", the use of the MLKG, and the extracted knowledge triple (Alain de Botton, lives in, Switzerland).)
30
+
31
+ been successfully extended to multilingual language models (MLLMs) that integrate information sourced across hundreds of languages (Devlin et al., 2019; Conneau and Lample, 2019; Conneau et al., 2020). However, as with most neural networks, the information is encoded in a diffused and opaque manner that is difficult to interpret, verify or utilize (AlKhamissi et al., 2022).
32
+
33
+ Meanwhile, multilingual knowledge graphs (MLKGs) require careful curation of explicit facts and annotation of entities that occur in languages (cross-lingual entity alignment), making knowledge graphs expensive and time-consuming to extend to new languages, restricting knowledge graph research to a few high-resource languages. Further, open-source MLKGs such as WordNet (Bond and Foster, 2013) and Wikidata (Vrandecic and Krötzsch, 2014) suffer from incompleteness as many true facts (or triples) and entity alignments are missing (Chen et al., 2017, 2020).
34
+
35
+ In this work, we propose to overcome the above limitations of each knowledge source by integrating MLKGs into MLLMs (as shown in Figure 1),
36
+
37
+ to enable (i) the transfer of MLKG knowledge from high-resource languages to low-resource languages; and (ii) explicit knowledge of MLKGs to supplement MLLMs for knowledge-intensive language tasks, one of the key challenges in MLLMs (AlKhamissi et al., 2022).
38
+
39
+ While this idea seems intuitive, there is no easy way to incorporate the explicit knowledge of MLKGs into the parametrically stored information of MLLMs. Existing knowledge integration methods utilize language models and knowledge graphs in two ways: (1) training knowledge graph embeddings individually and combining the embeddings corresponding to linked entities in sentences with the language model representations (e.g., KnowBERT (Peters et al., 2019) and ERNIE (Zhang et al., 2019)); or (2) absorbing the knowledge in knowledge graphs into the language model's parameters via joint training (e.g., K-BERT (Liu et al., 2020) and K-Adapter (Wang et al., 2021)).
40
+
41
+ The first method requires embedding knowledge graph entities and accurately extracting entities in sentences across hundreds of languages, which is highly challenging. The second method typically suffers from the curse of multilinguality (Conneau et al., 2020; Doddapaneni et al., 2021; Jiao et al., 2022) and catastrophic forgetting (Kirkpatrick et al., 2016) due to limited model capacity. Most importantly, both methods integrate knowledge implicitly such that it is difficult to access and extend to low-resource languages (AlKhamissi et al., 2022). Furthermore, both methods require large sets of aligned sentences and knowledge triples, which is costly to gather and accurately annotate across hundreds of languages.
42
+
43
+ To address above issues, we first collect and clean multilingual data from Wikidata² and Wikipedia³ for the enhancement, where rich factual knowledge and cross-lingual alignments are available. Then, we propose to enhance MLLMs with the MLKG information by using a set of adapters (Houlsby et al., 2019), which are lightweight, collectively having only around $0.5\%$ extra parameters than the MLLM. Each adapter integrates information from either MLKG Triples (i.e. facts) or cross-lingual Entity alignments, and is trained on either Phrase or Sentence level data. Each of the resulting four adapters (EP/TP/ES/TS) is trained individually to learn information sup
44
+
45
+ plemental to that already learned by the MLLM. Adapter outputs are combined by a fusion mechanism (Pfeiffer et al., 2021). The training objectives are similar to those used for MLKG embedding (Chen et al., 2017) rather than masked language modeling, which makes them more efficient on a large corpus.
46
+
47
+ We conduct experiments on various downstream tasks to demonstrate the effectiveness of our approach. For MLKG tasks, following the data collection methods of two existing benchmarks (Chen et al., 2020, 2017), we extended them from 2-5 languages to 22 languages, including two rare languages. Results show that our method obtains comparable performance to existing state-of-the-art baselines on the knowledge graph completion benchmark, and significantly better performance on the entity alignment benchmark. More importantly, we can perform these knowledge graph tasks in low-resource languages for which no knowledge graph exists, and achieve comparable results to the high-resource languages. Improvements over baseline MLLMs are significant. The results demonstrate that our proposed method integrates the explicit knowledge from MLKGs into MLLMs that can be used across many languages. Our method also improves existing MLLMs noticeably on knowledge-intensive language tasks, such as cross-lingual relation classification, whilst maintaining performance on general language tasks such as named entity recognition (NER) and question answering (QA).
48
+
49
+ # 2 Multilingual Knowledge Integration
50
+
51
+ In this paper, we fuse knowledge from a MLKG into a MLLM. Following previous works (Wang et al., 2021; Liu et al., 2021), we make use of an entity tagged corpus of text (called a knowledge integration corpus) for knowledge integration. We formally introduce these concepts below.
52
+
53
+ MLLM. A multilingual LM can be thought of as an encoder that can represent text in any language $l$ in a set of languages $\mathcal{L}$ . Let $\mathcal{V}$ denote the shared vocabulary over all languages. Let $t^l \in \mathcal{V}$ denote a token in language $l$ . A sentence $s^l$ in a language $l$ can be denoted as a sequence of tokens: $s^l = (t_1^l, t_2^l, \ldots)$ . The output representations of the MLLM for $s^l$ can be denoted by a sequence of vectors: $\mathrm{LM}(s^l) = (h_1, h_2, \ldots)$ . These vectors correspond to representations for each token in the
54
+
55
+ sentence, one representation per input token. Various tokenization schemes such as wordpiece or BPE might be considered here. We use the average of the token representations as the representation of the sentence: $\overline{\mathrm{LM}(s^l)} = \mathrm{mean}(h_1,h_2,\ldots)$ . Similarly, for a phrase $s_{ij}^{l}$ (starting from the $i$ -th token and ending in the $j$ -th token in the sentence), we can obtain its contextualized representation as $\overline{\mathrm{LM}(s_{ij}^{l})} = \mathrm{mean}(h_i,h_{i + 1},\dots h_j)$ .
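For concreteness, the following sketch computes the mean-pooled sentence and span representations described above with an off-the-shelf MLLM; the checkpoint name and the span token indices in the usage comment are illustrative only.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
mllm = AutoModel.from_pretrained("xlm-roberta-base")

def mean_pooled(sentence, span=None):
    """Mean of token representations; `span` = (i, j) inclusive token indices."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = mllm(**enc).last_hidden_state[0]          # (seq_len, dim)
    if span is None:
        return hidden[enc["attention_mask"][0].bool()].mean(dim=0)   # sentence rep.
    i, j = span
    return hidden[i:j + 1].mean(dim=0)                     # contextualized phrase rep.

# s_vec = mean_pooled("De Botton spent his early years in Zurich")
# e_vec = mean_pooled("De Botton spent his early years in Zurich", span=(8, 9))  # "Zurich"
```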
56
+
57
+ MLKG. A multilingual knowledge graph is a graph with entities and knowledge triples in each language $l \in \mathcal{L}$ . Let $\mathcal{E}$ denote the set of entities and $\mathcal{T}$ denote the set of knowledge triples. In a MLKG, each entity indexed $i$ might appear in several languages. Let $e_i^l$ denote the entity label of the $i$ -th entity in language $l$ . Furthermore, we denote a knowledge triple in the MLKG as $(e_i^l, r_k^{l''}, e_j^{l'}) \in \mathcal{T}$ , where $r_k^{l''}$ is the $k^{th}$ relation. Note that since entities (as well as relations) may appear in various languages under different labels, knowledge triples can be defined across languages.
58
+
59
+ Knowledge Integration Corpus. For knowledge integration, besides the MLKG, we make use of a corpus of text $\mathcal{C}$ (as shown in the right part of Figure 2). The corpus $\mathcal{C}$ comprises of two kinds of texts. First, we have a set of texts $\mathcal{C}_1$ for the cross-lingual entity alignment, which comprise of sentences with mentions of entities in the MLKG. For example in Figure 2, given the sentence De Botton spent his early years in Zurich, we have the aligned entity Zurich and its cross-lingual labels. The second set of texts $\mathcal{C}_2$ is for the knowledge triple, which comprises of sentences aligned with knowledge triples in the MLKG. For example in Figure 2, given the sentence Zurich is the largest city in Switzerland, we have its aligned knowledge triple (Zurich, is located in, Switzerland).
60
+
61
+ # 3 Adapters and Adapter Fusion
62
+
63
+ In this section, we first describe how we incorporate adapters into language models and how they can be used to enhance them with different sources of knowledge from knowledge graphs.
64
+
65
+ Adapter. Adapters have become a popular choice for parameter-efficient finetuning of language models on downstream tasks (Houlsby et al., 2019) due to their flexibility, effectiveness, low cost and scalability (Pfeiffer et al., 2021). Adapters are new modules that are added between layers of language
66
+
67
+ models $^5$ , the parameters of which are updated only during finetuning while the language model parameters are frozen. An adapter is a bottleneck layer composed of two feed-forward layers with one non-linear activation function. For $h^m$ , the hidden representation of token $t_i^l$ at layer $m$ , the adapter acts as
68
+
69
+ $$
70
+ \mathrm{A}\left(\boldsymbol{h}^{m}\right) = \boldsymbol{W}_{\text{up}} \cdot \sigma\left(\boldsymbol{W}_{\text{down}} \cdot \boldsymbol{h}^{m} + \boldsymbol{b}_{\text{down}}\right) + \boldsymbol{b}_{\text{up}}. \tag{1}
71
+ $$
72
+
73
+ Here, $W_{\mathrm{down}}$ and $W_{\mathrm{up}}$ are weight matrices, which map the hidden representations to the low-dimensional space and then map them back. $b_{\mathrm{down}}$ and $b_{\mathrm{up}}$ are bias parameters, and $\sigma$ is a nonlinear activation function.
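A minimal PyTorch sketch of this bottleneck adapter is shown below; the hidden size, bottleneck size, and choice of ReLU for $\sigma$ are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter of Eq. 1: down-project, apply sigma, up-project."""
    def __init__(self, hidden_dim=768, bottleneck_dim=48):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # W_down, b_down
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # W_up, b_up
        self.act = nn.ReLU()                               # sigma (illustrative choice)

    def forward(self, h):
        return self.up(self.act(self.down(h)))

# h = torch.randn(2, 16, 768)      # (batch, seq_len, hidden)
# Adapter()(h).shape               # torch.Size([2, 16, 768])
```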
74
+
75
+ Adapter Fusion. We follow the architecture of Pfeiffer et al. (2021), but instead of using adapters for finetuning, we use them to enhance MLLMs with knowledge. Our approach is similar to Wang et al. (2021), but our adapters supplement and augment the existing implicit knowledge of MLLMs (encoding it into the explicit geometric properties of hidden representations). Our approach is also more lightweight, with only c. 0.5% additional parameters (cf. >10% in Wang et al. (2021)).
76
+
77
+ As shown in Figure 2 (left), still considering the $m$ -th layer, the output representations of the feedforward layer (denoted $\pmb{h}^{m}$ as in Eq. 1) are input to the adapters. A fusion layer aggregates all adapter outputs $\mathrm{A}_n(\pmb{h}^m)$ ( $n \in \{1 \dots N\}$ indexes each adapter) and the un-adapted representations with a multiplicative attention mechanism:
78
+
79
+ $$
80
+ \begin{aligned} \mathrm{A}_{\text{fusion}}\left(\boldsymbol{h}^{m}\right) &= \sum_{n=0}^{N} a_{n}^{m} \cdot \boldsymbol{V}^{m} \cdot \mathrm{A}_{n}\left(\boldsymbol{h}^{m}\right), \\ a_{n}^{m} &= \operatorname{softmax}\left(\boldsymbol{h}^{m} \boldsymbol{Q}^{m} \otimes \mathrm{A}_{n}\left(\boldsymbol{h}^{m}\right) \boldsymbol{K}^{m}\right). \end{aligned}
81
+ $$
82
+
83
+ Here, $\mathrm{A}_0(\cdot)$ is the identity function; $Q^{m},K^{m},V^{m}$ are parameters in the multiplicative attention mechanism; and $\otimes$ is the Hadamard product.
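The fusion layer can be sketched as follows, building on the `Adapter` sketch above. Scoring each candidate by the feature-wise product of query and key, summed over the hidden dimension, is one concrete reading of the Hadamard-product formulation; the dimensions are illustrative.

```python
import torch
import torch.nn as nn

class AdapterFusion(nn.Module):
    """Attention over the identity (A_0) and the N adapter outputs."""
    def __init__(self, hidden_dim=768):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim, bias=False)  # Q^m
        self.key = nn.Linear(hidden_dim, hidden_dim, bias=False)    # K^m
        self.value = nn.Linear(hidden_dim, hidden_dim, bias=False)  # V^m

    def forward(self, h, adapter_outputs):
        # Candidates: identity plus adapter outputs -> (batch, seq, N+1, dim)
        cands = torch.stack([h] + list(adapter_outputs), dim=2)
        scores = (self.query(h).unsqueeze(2) * self.key(cands)).sum(-1)  # (b, s, N+1)
        attn = torch.softmax(scores, dim=-1)                             # a_n^m
        return (attn.unsqueeze(-1) * self.value(cands)).sum(dim=2)       # fused h^m

# adapters = [Adapter() for _ in range(4)]     # EP, TP, ES, TS (see Section 3)
# h = torch.randn(2, 16, 768)
# fused = AdapterFusion()(h, [a(h) for a in adapters])
```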
84
+
85
+ The additional knowledge to be learned by the adapters comes from knowledge Triples and Entity alignments, each provided in both Phrase and Sentence format (hence $N = 2 \times 2 = 4$ ). As shown in Figure 2 (center), for a given entity in two languages $l$ and $l'$ , Adapter-EP learns to align the two (multilingual) representations of $e_i^l$ and $e_i^{l'}$ , e.g., Zurich is aligned with Zurigo. Adapter-TP learns knowledge triples, e.g., predicting Switzerland given entity and relation (Zurich, is located
86
+
87
+ ![](images/10287a3a6dc455f4a202c89bea36df24795b8ab079239c2902e03a63c60fd6d7.jpg)
88
+ Figure 2: The architecture of MLLMs with adapters and their roles. We enhance multilingual and factual knowledge at the phrase and sentence levels using different knowledge integration corpora.
89
+
90
+ in,). Besides these non-contextualized settings, entities can also be considered within context (the MLLM corpus). Thus, Adapter-ES and Adapter-TS have similar objectives but use contextualized representations from input sentences.
91
+
92
+ # 4 Knowledgeable Adapters
93
+
94
+ Next, we design objectives with corresponding knowledge integration datasets to train a set of adapters. Similar to MLKG embedding (Chen et al., 2017), we aim to encode knowledge into the geometric properties of the adapted MLLM representations, i.e., the MLLM and adapters collectively act as an MLKG embedding model. Specifically, we use cosine distance within the contrastive learning loss of InfoNCE (van den Oord et al., 2018):
95
+
96
+ $$
97
+ \operatorname{INCE}(\boldsymbol{x}, \boldsymbol{x}^{\prime}) = \log \frac{\cos(\boldsymbol{x}, \boldsymbol{x}^{\prime})}{\sum_{\boldsymbol{x}^{\prime\prime} \in X} \cos(\boldsymbol{x}, \boldsymbol{x}^{\prime\prime})},
98
+ $$
99
+
100
+ where $X$ is a batch that includes the positive sample $\pmb{x}^{\prime}$ and a number of negative samples.
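A common way to implement this objective is the in-batch InfoNCE shown below, where each row of `anchors` is paired with the corresponding row of `positives` and all other rows serve as negatives. The softmax-over-cosine form and the temperature value are standard implementation choices and assumptions, not taken verbatim from the paper.

```python
import torch
import torch.nn.functional as F

def info_nce(anchors, positives, temperature=0.05):
    """In-batch InfoNCE over cosine similarities between aligned representation pairs."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    sims = a @ p.t() / temperature                       # (batch, batch) cosine matrix
    targets = torch.arange(a.size(0), device=a.device)   # positives on the diagonal
    return F.cross_entropy(sims, targets)

# Adapter-EP style use: align the two language-specific embeddings of each entity.
# loss = info_nce(embeddings_lang_l, embeddings_lang_l_prime)
```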
101
+
102
+ Adapter-EP. We use Wikidata (Vrandecic and Krötzsch, 2014) to enhance MLLMs with the knowledge of cross-lingual entity alignments. Inspired by the idea that languages are aligned implicitly in a universal space in MLLMs (Wu and Dredze, 2019; Wei et al., 2021), we train the aligned entities to have closer representations. Denoting the MLLM with this adapter as $\mathrm{LM}(\cdot)$ , the objective used to train EP is:
103
+
104
+ $$
105
+ \mathcal{L}_{\mathrm{EP}} = \sum_{(e_{i}^{l}, e_{i}^{l^{\prime}}) \in \mathcal{E}} \operatorname{INCE}\Big(\overline{\operatorname{LM}(e_{i}^{l})}, \overline{\operatorname{LM}(e_{i}^{l^{\prime}})}\Big),
106
+ $$
107
+
108
+ where $\overline{\mathrm{LM}(\cdot)}$ means we take the mean of token representations as the entity representation vector.
109
+
110
+ Adapter-TP. We train this adapter using the knowledge triples in Wikidata. Inspired by previous knowledge graph embedding algorithms (e.g. Bordes et al., 2013), for a given fact triple, we train the (adapted) object entity embedding to be close to the (adapted) joint embedding of the subject entity and relation. The objective used to train TP is quite different from existing mask language modeling-based ones:
111
+
112
+ $$
113
+ \mathcal{L}_{\mathrm{TP}} = \sum_{(e_{i}^{l},r_{k}^{l^{\prime \prime}},e_{j}^{l^{\prime}})}\mathrm{INCE}\Big(\overline{\mathrm{LM}([e_{i}^{l};r_{k}^{l^{\prime \prime}}])},\overline{\mathrm{LM}(e_{j}^{l^{\prime}})}\Big),
114
+ $$
115
+
116
+ where $[\cdot ,]$ denotes text concatenation. Note that we apply code-switching (Liu et al., 2021), and thus entities and relations can be in different languages. This is helpful to capture knowledge triples for low-resource languages.
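Reusing the `mean_pooled` and `info_nce` helpers sketched above, the TP objective can be illustrated as follows. Treating the concatenated subject and relation labels as a single text is one simple reading of $[e_i^l; r_k^{l''}]$, and code-switching would be applied when sampling the label languages upstream.

```python
import torch

def tp_loss(triples):
    """L_TP sketch: pull LM([e_i; r_k]) towards LM(e_j) with in-batch negatives.

    `triples` is a list of (subject, relation, object) label strings, possibly
    code-switched across languages. Uses mean_pooled and info_nce from above.
    """
    heads = torch.stack([mean_pooled(f"{s} {r}") for s, r, _ in triples])
    tails = torch.stack([mean_pooled(o) for _, _, o in triples])
    return info_nce(heads, tails)

# loss = tp_loss([("Zurich", "is located in", "Switzerland"),
#                 ("Zurigo", "si trova in", "Svizzera")])
```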
117
+
118
+ Adapter-ES. Entity alignment can also be applied to contextualized embeddings produced by the MLLM when entities are input within natural language sentences. For this purpose, we use summaries taken from multilingual Wikipedia. Specifically, we first align the entity in Wikidata with the Wikipedia title, and extract sentences that contain the entity label in its summary. As described earlier, we denoted this corpus as $\mathcal{C}_1$ . Thus, similar to Adapter-EP, we train ES by aligning contextualized entity representations of cross-lingually aligned entities with the objective:
119
+
120
+ $$
121
+ \mathcal{L}_{\mathrm{ES}} = \sum_{(e^{l^{\prime}}, s^{l}) \in \mathcal{C}_{1}} \operatorname{INCE}\left(\overline{\operatorname{LM}(s_{ij}^{l})}, \overline{\operatorname{LM}(e^{l^{\prime}})}\right),
122
+ $$
123
+
124
+ ![](images/f30ab3413a990061ce58c0df226166da84c6effad2c0ab25391e743829ca3b51.jpg)
125
+ Figure 3: Four stages of using the knowledge adapter set in MLLMs. The dashed outlines mean the parameters are frozen.
126
+
127
+ where $s_{ij}^{l}$ means that we input sentence $s^l$ into the MLLM but keep only the representation of the entity label $e^l$ (spanning the $i$-th to the $j$-th token). In the example of Figure 2 (right), $s^l$ is: De Botton spent his early years in Zurich, and $s_{ij}^{l}$ corresponds to the label of $e^l$: Zurich. The difference between this adapter and Adapter-EP is that contextual information is included within the entity representation.
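+
+ The snippet below is a sketch, with assumed helper names, of extracting the contextualized entity-span representation $s_{ij}^{l}$: the full sentence is encoded and only the hidden states of the entity's token span are averaged. For some subword tokenizers the standalone label may tokenize differently from its in-sentence occurrence, which a real implementation would need to handle.
+
+ ```python
+ def entity_span_representation(sentence, entity, tokenizer, model):
+     """Encode the whole sentence, then average only the hidden states of the
+     entity label's token span (the i-th to j-th tokens)."""
+     enc = tokenizer(sentence, return_tensors="pt")
+     ent_ids = tokenizer(entity, add_special_tokens=False)["input_ids"]
+     ids = enc["input_ids"][0].tolist()
+     for i in range(len(ids) - len(ent_ids) + 1):
+         if ids[i:i + len(ent_ids)] == ent_ids:                    # locate the span
+             hidden = model(**enc).last_hidden_state[0]            # (seq, dim)
+             return hidden[i:i + len(ent_ids)].mean(dim=0)         # contextualized entity vector
+     raise ValueError("entity label not found in the tokenized sentence")
+
+ # vec = entity_span_representation("De Botton spent his early years in Zurich",
+ #                                  "Zurich", tokenizer, model)
+ ```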
128
+
129
+ Adapter-TS. Knowledge triples can also be learned with contextualized embeddings. This requires paired data in which triples (entities and relations) are annotated in natural sentences. However, no such multilingual corpus exists. Thus, we use the T-REx-RC dataset (Elsahar et al., 2018)<sup>7</sup>, which provides aligned data in English and contains sentence and triple pairs. The objective used to train TS is:
130
+
131
+ $$
132
+ \mathcal{L}_{\mathrm{TS}} = \sum_{(s_{k}, (e_{i}, r, e_{j})) \in \mathcal{C}_{2}} \operatorname{INCE}\Big(\overline{\mathrm{LM}(s_{k} \setminus e_{j})}, \overline{\mathrm{LM}(e_{j})}\Big),
133
+ $$
134
+
135
+ where $s_k \backslash e_j$ represents the sentence $s_k$ with entity label $e_j$ masked. As in the example in Figure 2 (right), $s_k \backslash e_j$ is: [MASK] is the largest city in Switzerland, and the aligned triple is: (Zurich, is located in, Switzerland). In contrast to Adapter-TP, subject entities and relations occur in natural sentences.
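+
+ A minimal sketch of constructing the training pair for this objective, i.e. the sentence with the entity label masked and the label itself, is given below; the `ts_pair` helper and the mask token string are illustrative.
+
+ ```python
+ def ts_pair(sentence, entity, mask_token="[MASK]"):
+     """Return the sentence with the entity label masked out, paired with the label."""
+     return sentence.replace(entity, mask_token), entity
+
+ masked_sentence, target = ts_pair("Zurich is the largest city in Switzerland", "Zurich")
+ # masked_sentence == "[MASK] is the largest city in Switzerland", target == "Zurich"
+ # loss_ts = info_nce(mean_pool([masked_sentence]), mean_pool([target]))  # helpers from earlier sketches
+ ```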
136
+
137
+ # 4.1 Enhancement Workflow
138
+
139
+ We introduce our overall enhancement workflow, which contains four stages. In the first stage, an MLLM is pretrained on a large amount of data. In the second stage, the MLLM is frozen while each adapter is trained separately on its particular dataset (knowledge integration corpus) to extract additional information. Adapter outputs are aggregated in the fusion layer to enable their collective knowledge to be pooled (Pfeiffer et al., 2021). For example, we lack knowledge graph data for low-resource languages; however, we have two adapters (TP, TS) that learn facts in a particular language (English) and two adapters (EP, ES) that learn cross-lingual alignment. By aggregating them, we can effectively integrate factual knowledge into the representations of low-resource languages. In the third
140
+
141
+ and final stages, all parameters of the MLLM, the adapters, and the fusion module are finetuned on the training set of a specific downstream task, resulting in a specialized model for that task (see Figure 3).
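+
+ A minimal sketch of the second-stage freezing scheme is given below, assuming adapter parameters can be identified by name; the naming convention and optimizer settings are illustrative, not the released implementation.
+
+ ```python
+ import torch
+
+ def configure_stage_two(model, lr=1e-4):
+     """Freeze the MLLM backbone; only parameters whose names contain 'adapter'
+     (an illustrative naming convention) remain trainable."""
+     for name, param in model.named_parameters():
+         param.requires_grad = "adapter" in name
+     trainable = [p for p in model.parameters() if p.requires_grad]
+     return torch.optim.Adam(trainable, lr=lr)
+ ```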
142
+
143
+ # 5 Experiments
144
+
145
+ This section first introduces the general experimental settings (§5.1). We then show that our adapter set can enhance MLLMs with the knowledge of MLKGs and, in particular, that the enhanced MLLMs generalize well to perform MLKG-related tasks in low-resource languages (§5.2). We also show that enhancing MLLMs with MLKGs improves their performance on knowledge-intensive language tasks (§5.3). We compare our approach with the only existing MLKG integration work (§5.4). Finally, we present an ablation study of the adapter set to demonstrate the effectiveness of each adapter (§5.5).
146
+
147
+ # 5.1 MLLMs and Integration Corpus
148
+
149
+ We select three representative MLLMs implemented by Huggingface and train a set of adapters for each: the base version of mBERT (Devlin et al., 2019), and both the base and large versions of XLMR (i.e., XLM-RoBERTa) (Conneau et al., 2020). Since mBERT and XLMR cover different sets of languages, we consider the intersecting 84 languages supported by both models. All adapters are trained with the same hyperparameters (see Appendix A for details).
150
+
151
+ Table 1: Statistics of knowledge integration corpora for training adapters. Align.: all aligned multilingual entities; Relat.: all relations in triples; Sent.: sentences.
152
+
153
+ <table><tr><td>Module</td><td>Source</td><td>Statistics</td></tr><tr><td>Adapter-EP</td><td>Wikidata (MLKG)</td><td>Entity / Align.: 1.55M / 63.25M</td></tr><tr><td>Adapter-TP</td><td>Wikidata (MLKG)</td><td>Triple / Relat.: 9.42M / 1422</td></tr><tr><td>Adapter-ES</td><td>Wikipedia (C1)</td><td>Entity / Sent.: 0.20M / 1.93M</td></tr><tr><td>Adapter-TS</td><td>T-REx-RC (C2)</td><td>Triple-Sent. Pair: 0.97M</td></tr></table>
154
+
155
+ The statistics of the knowledge integration corpora are summarized in Table 1. Next, we introduce their preprocessing steps. The set of entity alignments used to train Adapter-EP is extracted from Wikidata by keeping only entities that have more
156
+
157
+ than 10 multilingual entity labels among the 84 considered languages. Knowledge graph triples are used to train Adapter-TP if both entities are in that entity set (see Table 8 of Appendix B for further details). For the Wikipedia dataset, we use entities in the Wikidata subset and query their descriptions (the first sentence in the Wikipedia summary that contains the entity label). We remove entities that have fewer than 2 multilingual descriptions, which results in 1.93 million multilingual sentences to train Adapter-ES. For Adapter-TS, we use the monolingual dataset T-REx-RC (Elsahar et al., 2018), which has 0.97 million alignments between knowledge triples and sentences in English.
158
+
159
+ # 5.2 MLKG Benchmarks
160
+
161
+ We show that our knowledge adapter set can enhance MLLM performance on MLKG-related tasks. We select two popular MLKG benchmarks for evaluation: DBP5L (Chen et al., 2020) for the knowledge graph completion task, and WK3L (Chen et al., 2017) for the cross-lingual entity alignment task. These tasks require the MLLM to identify the correct entity, which is performed by maximizing the similarity of output representations.
162
+
163
+ ![](images/ac1df09513fed5a5c62812bc474246248a45164556e42bdda73920911e235b2a.jpg)
164
+
165
+ ![](images/000f395dfaf49f473f69896535d379fe3e8d525c5601cf9f635120e0950bf1c9.jpg)
166
+ Figure 4: Statistics of the size of test sets for MLKG completion and entity alignment tasks. We can see that the extended test sets for zero-shot languages have a comparable number of samples to the original test sets.
167
+
168
+ To evaluate MLLMs in a more comprehensive setting, we extend their test sets (from 2-5 languages) to 22 languages following their data construction settings<sup>9</sup>, where languages that contain
169
+
170
+ the most entity labels are selected. Statistics are in Figure 4. We split these languages into three categories to show the generalizability of enhanced MLLMs: Sup.: supervised languages, which are used to train adapters and for finetuning; ZS-In: zero-shot languages, which are used for adapter training but not for finetuning; ZS-Un.: unseen languages, which are unseen in both adapter training and finetuning.
171
+
172
+ # 5.2.1 Knowledge Graph Completion
173
+
174
+ The knowledge graph completion task tests if the model can find the missing triples in different languages. Specifically, for each test triple of a given language, the model is asked to retrieve the correct object entity from the entity set of that language given the subject entity and relation.
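+
+ The following is a minimal sketch of this retrieval-style evaluation: queries (subject plus relation) and candidate entities are embedded, candidates are ranked by cosine similarity, and Hit@1 and MRR are computed over the gold entities. All function and variable names are illustrative.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def hit1_and_mrr(query_vecs, candidate_vecs, gold_indices):
+     """query_vecs: (n, d) embeddings of (subject, relation) queries;
+     candidate_vecs: (m, d) embeddings of all entities in the target language;
+     gold_indices: (n,) indices of the correct object entities."""
+     sims = F.normalize(query_vecs, dim=-1) @ F.normalize(candidate_vecs, dim=-1).t()
+     order = sims.argsort(dim=-1, descending=True)                        # ranked candidate ids
+     gold_pos = (order == gold_indices.unsqueeze(1)).nonzero()[:, 1] + 1  # 1-based rank of gold
+     hit1 = (gold_pos == 1).float().mean().item()
+     mrr = (1.0 / gold_pos.float()).mean().item()
+     return hit1, mrr
+ ```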
175
+
176
+ Settings. We follow the settings of DBP5L. $^{10}$ Specifically, we use the training set of knowledge triples of the five languages (i.e., the Sup. set) to finetune the model, and then use the provided test sets, as well as our extended test sets, to evaluate it. For comparison, we select two typical knowledge graph embedding methods, TransE (Bordes et al., 2013) and DistMult (Yang et al., 2015), as baselines and compare the performance of MLLMs and MLLMs- $\mathbf{A}_{\mathrm{Fusion}}$ , enhanced with the knowledge adapter set and fusion mechanism (see Appendix A for further implementation details).
177
+
178
+ Table 2: Results on the knowledge graph completion task. We attach the number of languages to each type. We can see that for zero-shot languages and unseen languages, using our adapters can significantly improve the performance of LMs on knowledge graph completion.
179
+
180
+ <table><tr><td rowspan="2">Model</td><td colspan="2">Sup. (5)</td><td colspan="2">ZS-In (15)</td><td colspan="2">ZS-Un. (2&#x27;)</td></tr><tr><td>Hit@1↑</td><td>MRR↑</td><td>Hit@1↑</td><td>MRR↑</td><td>Hit@1↑</td><td>MRR↑</td></tr><tr><td>TransE</td><td>14.5</td><td>23.7</td><td>-/-</td><td>-/-</td><td>-/-</td><td>-/-</td></tr><tr><td>DistMult</td><td>8.1</td><td>14.3</td><td>-/-</td><td>-/-</td><td>-/-</td><td>-/-</td></tr><tr><td>mBERT</td><td>11.2</td><td>13.8</td><td>12.8</td><td>15.7</td><td>48.2</td><td>49.1</td></tr><tr><td>mBERT-A_Fusion</td><td>13.1</td><td>15.7</td><td>16.1</td><td>18.8</td><td>51.8</td><td>52.4</td></tr><tr><td>XLMRbase</td><td>5.9</td><td>7.8</td><td>6.7</td><td>9.1</td><td>8.2</td><td>11.8</td></tr><tr><td>XLMRbase-A_Fusion</td><td>9.1</td><td>11.8</td><td>10.6</td><td>13.5</td><td>16.6</td><td>19.6</td></tr><tr><td>XLMRlarge</td><td>7.3</td><td>9.7</td><td>8.9</td><td>11.5</td><td>16.8</td><td>20.8</td></tr><tr><td>XLMRlarge-A_Fusion</td><td>13.1</td><td>15.6</td><td>14.3</td><td>17.3</td><td>23.9</td><td>27.4</td></tr></table>
181
+
182
+ Results. Results are summarized in Table 2 (with further detail in Table 9 of Appendix C). We report both the Hit@1 score and Mean Reciprocal Rank (MRR) for evaluation. We find that enhancing MLLMs with adapters can improve performance for the supervised languages, which is comparable to existing knowledge graph embedding methods.
183
+
184
+ For the zero-shot languages and unseen languages, existing (transductive) knowledge graph embedding methods cannot perform the task since entities must be in the training set. Here we find that MLLMs still perform comparably to the supervised languages $^{11}$ , and the enhanced MLLMs- $\mathrm{A}_{\mathrm{Fusion}}$ models outperform MLLMs on zero-shot languages by significant margins. This indicates that the adapters allow factual knowledge to be transferred across languages.
185
+
186
+ # 5.2.2 Entity Alignment
187
+
188
+ The entity alignment task is to align entities in different languages. Specifically, given a target language and an entity in a source language (typically English), the model should retrieve that entity from the set of all entities in the target language.
189
+
190
+ Settings. We follow the settings of WK3L. $^{12}$ Specifically, we train models using the entity alignments English to German, and English to French. We test models on those two supervised languages, as well as our extended 18 zero-shot languages and 2 unseen languages. $^{13}$ We select one typical MLKG embedding method, MTransE (Chen et al., 2017), and a state-of-the-art method, JEANS (Chen et al., 2021), as baselines (see Appendix A for details).
191
+
192
+ Table 3: Results on multilingual entity alignment tasks. We can find that using our adapters significantly enhances MLLMs' performance on entity alignment tasks, also outperforming existing MLKG embedding baselines.
193
+
194
+ <table><tr><td rowspan="2">Model</td><td colspan="2">Sup. (2)</td><td colspan="2">ZS-In (18)</td><td colspan="2">ZS-Un (2&#x27;)</td></tr><tr><td>Hit@1</td><td>MRR</td><td>Hit@1</td><td>MRR</td><td>Hit@1</td><td>MRR</td></tr><tr><td>MTransE</td><td>8.7</td><td>12.5</td><td>-/-</td><td>-/-</td><td>-/-</td><td>-/-</td></tr><tr><td>JEANS</td><td>40.0</td><td>47.5</td><td>-/-</td><td>-/-</td><td>-/-</td><td>-/-</td></tr><tr><td>mBERT</td><td>83.6</td><td>83.2</td><td>31.8</td><td>32.2</td><td>50.5</td><td>50.8</td></tr><tr><td>mBERT-A\( _{Fusion} \)</td><td>88.9</td><td>88.4</td><td>77.6</td><td>76.2</td><td>91.7</td><td>89.3</td></tr><tr><td>XLMR\( _{base} \)</td><td>54.8</td><td>54.8</td><td>9.4</td><td>9.7</td><td>10.9</td><td>11.1</td></tr><tr><td>XLMR\( _{base}-A_{Fusion} \)</td><td>88.6</td><td>88.0</td><td>82.4</td><td>81.8</td><td>85.4</td><td>84.3</td></tr><tr><td>XLMR\( _{large} \)</td><td>65.0</td><td>65.1</td><td>23.9</td><td>24.1</td><td>28.9</td><td>28.9</td></tr><tr><td>XLMR\( _{large}-A_{Fusion} \)</td><td>90.2</td><td>89.5</td><td>90.8</td><td>89.0</td><td>89.8</td><td>88.5</td></tr></table>
195
+
196
+ Results. The results are summarized in Table 3 (with further detail in Table 10 of Appendix C). Performance is evaluated again by Hit@1 and MRR. As previously, the (transductive) baselines cannot be extended to languages not in the training set. For the supervised languages, we can find that existing MLLMs often outperform classic baselines.
197
+
198
+ However, the performance of MLLMs on zero-shot languages is noticeably worse. This indicates that existing MLLMs do not transfer entity alignment knowledge well to other languages. In contrast, MLLMs enhanced with the adapter set, MLLMs- $\mathrm{A}_{\text {Fusion }}$ , generally achieve the best performance, often with significant improvement. The results indicate that our adapter set successfully enhances MLLMs with multilingual knowledge.
199
+
200
+ # 5.3 MLLM Benchmarks
201
+
202
+ The above results show that our adapter set can enhance MLLMs to perform well on MLKG-related tasks in both previously seen and unseen languages. Here, we show that our knowledge adapter set also allows MLKGs to enhance MLLM performance on language tasks. In particular, the enhanced MLLMs achieve improved performance on knowledge-intensive language tasks while maintaining performance on other general language tasks.
203
+
204
+ # 5.3.1 Cross-Lingual Relation Classification
205
+
206
+ We select a popular relation classification benchmark: RELX (Köksal and Özgür, 2020), for which MLLMs must extract relations from sentences in a cross-lingual setting. Models are finetuned on a high-resource corpus, and tested on low-resource languages in a zero-shot setting. For this task, MLLMs are required to transfer the knowledge across languages, as well as capture factual knowledge for the relation classification.
207
+
208
+ Settings. Our training data is only in English, and test data contains 4 more (zero-shot) languages. We follow the exact setting of Köksal and Özgür (2020) and use the same provided set of hyperparameters to evaluate all MLLMs. We also report the performance of the enhanced BERT model of Köksal and Özgür (2020) called Matching the Multilingual Blanks (MTMB) as a baseline.
209
+
210
+ Table 4: Results on the multilingual relation classification task (F1 score). We can find that our adapters effectively enhance MLLMs on knowledge-intensive downstream tasks, especially for zero-shot languages.
211
+
212
+ <table><tr><td>Model</td><td>Sup. (En)</td><td>ZS-In (4)</td><td>Ave.</td></tr><tr><td>mBERT</td><td>61.8</td><td>57.4</td><td>58.3</td></tr><tr><td>MTMB</td><td>63.6</td><td>59.6</td><td>60.4</td></tr><tr><td>mBERT-AFusion</td><td>64.0</td><td>60.9</td><td>61.5</td></tr><tr><td>XLMRbase</td><td>61.4</td><td>56.1</td><td>57.1</td></tr><tr><td>XLMRbase-AFusion</td><td>61.3</td><td>58.0</td><td>58.6</td></tr><tr><td>XLMRlarge</td><td>63.1</td><td>59.1</td><td>59.9</td></tr><tr><td>XLMRlarge-AFusion</td><td>64.2</td><td>60.4</td><td>61.1</td></tr></table>
213
+
214
+ Results. Results are summarized in Table 4 (see Table 11 of Appendix D for further detail). We
215
+
216
+ find that for supervised languages, mBERT- $\mathbf{A}_{\mathrm{Fusion}}$ outperforms both the base version of mBERT as well as the knowledge-enhanced version (MTMB), whereas XLMR with adapters obtains comparable performance. As for zero-shot languages, MLLMs- $\mathbf{A}_{\mathrm{Fusion}}$ achieve consistent and significant improvements over baselines. This demonstrates that our knowledge adapter set can enhance MLLMs for knowledge-intensive tasks.
217
+
218
+ # 5.3.2 General Language Tasks
219
+
220
+ Besides the above knowledge-intensive tasks, we show that our knowledge adapter set maintains the performance of MLLMs on general multilingual language tasks. We select the popular multilingual benchmark XTREME (Hu et al., 2020) to evaluate the enhanced MLLMs, which are finetuned on English training data and tested on many other languages. We select cross-lingual NER and QA as two general tasks and follow the settings of the XTREME benchmark.
221
+
222
+ Table 5: Results on the multilingual NER task (F1 score). We can find that our adapters enhance MLLM performance on the NER task for zero-shot languages.
223
+
224
+ <table><tr><td>Model</td><td>Sup. (En)</td><td>ZS-In (39)</td><td>Ave.</td></tr><tr><td>mBERT</td><td>85.2</td><td>61.6</td><td>62.2</td></tr><tr><td>mBERT-A\( _{Fusion} \)</td><td>84.0</td><td>62.3</td><td>62.9</td></tr><tr><td>\( XLMR_{large} \)</td><td>84.7</td><td>64.9</td><td>65.4</td></tr><tr><td>\( XLMR_{large}-A_{Fusion} \)</td><td>85.0</td><td>65.3</td><td>65.8</td></tr></table>
225
+
226
+ NER. We select the WikiAnn dataset (Pan et al., 2017) (under the setting of XTREME) for the NER task, where 40 languages are included for evaluation. The results are summarized in Table 5, and detailed results can be found in Table 12 in Appendix D. We find that MLLMs with our adapter set perform as well as the baseline MLLMs with slight improvements on the zero-shot languages.
227
+
228
+ Table 6: Results on the multilingual QA tasks. Using our adapters does not reduce performance on language modeling tasks, and a marginal improvement can even be achieved.
229
+
230
+ <table><tr><td rowspan="2">Model</td><td colspan="2">Sup. (En)</td><td colspan="2">ZS-In (10)</td><td colspan="2">Ave.</td></tr><tr><td>F1</td><td>EM</td><td>F1</td><td>EM</td><td>F1</td><td>EM</td></tr><tr><td>mBERT</td><td>83.5</td><td>72.2</td><td>62.6</td><td>47.2</td><td>64.5</td><td>49.4</td></tr><tr><td>mBERT-A\( _{Fusion} \)</td><td>83.5</td><td>72.0</td><td>62.1</td><td>47.2</td><td>62.2</td><td>49.5</td></tr><tr><td>\( XLMR_{large} \)</td><td>86.5</td><td>75.7</td><td>75.6</td><td>59.3</td><td>76.6</td><td>60.8</td></tr><tr><td>\( XLMR_{large}-A_{Fusion} \)</td><td>88.0</td><td>77.6</td><td>75.7</td><td>59.7</td><td>76.8</td><td>61.3</td></tr></table>
231
+
232
+ Question Answering. Following the setting of XTREME, we finetune the models on the SQuAD (Rajpurkar et al., 2016) dataset (in English), and evaluate on the test sets of XQuAD (Artetxe et al., 2020) involving 11 languages. Detailed results are in Table 13 in Appendix D. We find that mBERT- $\mathbf{A}_{\mathrm{Fusion}}$ maintains
233
+
234
+ the performance of its original version, while $\mathrm{XLMR}_{\text {large }} - \mathrm{A}_{\text {Fusion }}$ is boosted slightly. In general, MLLMs- $\mathrm{A}_{\text {Fusion }}$ with our adapters obtain comparable or slightly better performance across different language tasks. For those tasks requiring rich knowledge about triples and entity alignments, our adapter set can indeed enhance the MLLMs.
235
+
236
+ # 5.4 Comparison with Existing Methods
237
+
238
+ We compare our approach with the only existing related work (Liu et al., 2021) that attempts to integrate MLKGs into MLLMs. However, it only considers a relatively small set of 10 languages and finetunes the entire MLLM with a joint objective, which is computationally expensive. In contrast, as shown below, our knowledge adapter set can achieve better performance at a much lower cost.
239
+
240
+ Settings. We follow the settings and metrics in Liu et al. (2021), which are slightly different from the original settings of the RELX and WikiAnn (XTREME) datasets. We only report the performance for MLLMs that are implemented in their study.
241
+
242
+ Table 7: Comparison with Liu et al. (2021) (denoted by $\triangle$) on the RELX, WikiAnn, and XQuAD datasets involving 4, 10, and 11 languages, respectively. We can find that our light adapter-based knowledge enhancement method significantly outperforms the previous finetuning-based enhancement method.
243
+
244
+ <table><tr><td rowspan="2">Model</td><td>RELX (4)</td><td>WikiAnn (10)</td><td colspan="2">XQuAD (11)</td></tr><tr><td>Acc.</td><td>F1</td><td>F1</td><td>EM</td></tr><tr><td>mBERT</td><td>60.1</td><td>-/-</td><td>-/-</td><td>-/-</td></tr><tr><td>mBERT△</td><td>61.1</td><td>-/-</td><td>-/-</td><td>-/-</td></tr><tr><td>mBERT-MLKG</td><td>64.7</td><td>-/-</td><td>-/-</td><td>-/-</td></tr><tr><td>XLMRbase</td><td>56.7</td><td>-/-</td><td>-/-</td><td>-/-</td></tr><tr><td>XLMR△base</td><td>58.3</td><td>-/-</td><td>-/-</td><td>-/-</td></tr><tr><td>XLMRbase-Afusion</td><td>61.7</td><td>-/-</td><td>-/-</td><td>-/-</td></tr><tr><td>XLMRlarge</td><td>61.3</td><td>68.5</td><td>76.6</td><td>60.8</td></tr><tr><td>XLMR△large</td><td>61.9</td><td>66.9</td><td>76.5</td><td>60.6</td></tr><tr><td>XLMRlarge-Afusion</td><td>64.6</td><td>67.6</td><td>76.8</td><td>61.3</td></tr></table>
245
+
246
+ Results. In Table 7, for the relation classification task, where Liu et al. (2021) outperforms the MLLM baseline, our method achieves significant further improvement. For NER, only 10 popular zero-shot languages (instead of the 40 languages in XTREME) are selected for their knowledge integration and evaluation. Although our method generally achieves better performance for $\mathrm{XLMR}_{\mathrm{large}} - \mathrm{A}_{\mathrm{Fusion}}$ (40 languages) in Table 5, it performs slightly worse than the original version here (10 popular languages). However, the performance of Liu et al. (2021) is worse still. For QA, similar performance is achieved by all three MLLMs, although our enhanced MLLM slightly outperforms the other methods.
247
+
248
+ ![](images/454c576de76347f7273676723a5c41de59ca43631802a6946cd325a19995e0d3.jpg)
249
+ Figure 5: Ablation study results. We select two MLKG-related tasks and the relation classification task for evaluation. We can find that adapters that integrate factual knowledge into MLLMs achieve better performance than others on the MLKG completion task, while adapters that integrate cross-lingual alignments outperform others on the entity alignment task. For the relation classification task, sentence-level adapters achieve better performance. Our adapter set achieves roughly the best performance under all conditions.
250
+
251
+ ![](images/23e766a7080197311ba3bed8a21dba7abd6614602473e855e333ed433737274b.jpg)
252
+
253
+ ![](images/37b84a4e59c9b111f4727995132746173bae3ea385b23c1ba9cdda94a0a77e28.jpg)
254
+
255
+ # 5.5 Ablation Study
256
+
257
+ We conduct ablation studies to understand our knowledge adapters and show that they work as expected. $^{14}$ We also compare against a large adapter $\left(\mathrm{A}_{\text {Large }}\right)$ with a comparable total number of parameters (including fusion layers). The large adapter is trained with the same settings as our adapter set and has one set of parameters that integrate all knowledge types at once. As previously, we finetune the original mBERT, mBERT- $\mathrm{A}_{\text {Large }}$ , and mBERT with our adapters on each downstream task.
258
+
259
+ In Figure 5, for the knowledge graph completion task (left), mBERT- $\mathbf{A}_{\mathrm{TP}}$ and mBERT- $\mathbf{A}_{\mathrm{TS}}$ perform better than their entity-based counterparts. While mBERT- $\mathbf{A}_{\mathrm{Large}}$ also performs well, mBERT- $\mathbf{A}_{\mathrm{Fusion}}$ outperforms it significantly. For the entity alignment task (center), the situation is reversed such that better performance is achieved by mBERT- $\mathbf{A}_{\mathrm{EP}}$ and mBERT- $\mathbf{A}_{\mathrm{ES}}$ . Our mBERT- $\mathbf{A}_{\mathrm{Fusion}}$ also achieves comparable performance, which is much better than mBERT- $\mathbf{A}_{\mathrm{Large}}$ with shared parameters. As for the relation classification task (right), sentence-level adapters outperform phrase-level adapters, which is intuitive since the task requires sentence-level context. Fusing all four adapters (i.e., mBERT- $\mathbf{A}_{\mathrm{Fusion}}$ ) gives the best performance, while mBERT- $\mathbf{A}_{\mathrm{Large}}$ performs worse than single smaller adapters. In summary, with our method, we learn different types of knowledge in separate adapters, which can be fused in different proportions according to the downstream task at hand, typically performing better and more consistently than any single adapter-enhanced MLLM. A simplified sketch of such a fusion layer is given below.
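+
+ The snippet below is a simplified, illustrative sketch of such an attention-based fusion layer in the spirit of AdapterFusion (Pfeiffer et al., 2021); the module structure and dimensions are assumptions of this sketch rather than the actual implementation.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class SimpleAdapterFusion(nn.Module):
+     """Attend over the adapter outputs with the layer input as the query."""
+     def __init__(self, dim):
+         super().__init__()
+         self.q = nn.Linear(dim, dim)
+         self.k = nn.Linear(dim, dim)
+         self.v = nn.Linear(dim, dim)
+
+     def forward(self, layer_input, adapter_outputs):
+         # layer_input: (batch, dim); adapter_outputs: (batch, n_adapters, dim)
+         q = self.q(layer_input).unsqueeze(1)                    # (batch, 1, dim)
+         k = self.k(adapter_outputs)                             # (batch, n_adapters, dim)
+         v = self.v(adapter_outputs)
+         attn = torch.softmax((q * k).sum(dim=-1), dim=-1)       # one weight per adapter
+         return (attn.unsqueeze(-1) * v).sum(dim=1)              # fused representation
+ ```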
260
+
261
+ # 6 Other Related Work
262
+
263
+ MLLM for MLKG. Several works use the implicit knowledge in language models to improve knowledge graph-related tasks (Yao et al., 2019; Niu et al., 2022). However, these approaches are for monolingual knowledge triples and cannot easily incorporate cross-lingual entity alignment. Huang et al. (2022) use MLLMs for knowledge graph completion, but the language models only encode entities, and the task itself is performed by graph neural networks. Previous MLKG embedding methods consider entity alignment (Chen et al., 2017, 2020), but are designed for existing MLKGs and cannot generalize to other, e.g. low-resource, languages without the multilingual knowledge in MLLMs (Pires et al., 2019; Wu and Dredze, 2019).
264
+
265
+ MLKG for MLLM. Liu et al. (2021) propose to synthesize code-switched sentences to solve the problem, but the resulting MLKG-enhanced MLLMs achieve minimal improvement on language understanding tasks, as shown in our experiments, and the approach does not benefit the MLKG side. In summary, our work is the first to combine MLKGs and MLLMs, showing that combining them using our light knowledge adapter set can effectively improve downstream task performance on both sides.
266
+
267
+ # 7 Conclusion
268
+
269
+ In this paper we propose an approach to enhance MLLMs with MLKGs using a set of knowledge adapters, where explicit knowledge from MLKGs is integrated into the implicit knowledge learned by MLLMs. In experiments, we show that the enhanced MLLMs can perform MLKG-related tasks and achieve better performance on knowledge-intensive tasks, especially on low-resource languages where knowledge graphs are not available.
270
+
271
+ # Limitations
272
+
273
+ We point out some limitations of our work. First, although the adapter set can enhance MLLMs to perform well on various downstream tasks, it is not suitable for fully zero-shot settings (without any training data), since the fusion module has to be tuned to suit the task. Second, as shown in our results, the fusion module does not always outperform every single adapter. For some tasks, a better fusion mechanism could be proposed to improve this.
274
+
275
+ # Reproducibility Statement
276
+
277
+ We elaborate on the experimental settings and hyperparameters in the paper and in Appendix A. We have published our preprocessed multilingual knowledge integration data, extended MLKG-related task datasets, and our code.
278
+
279
+ # Ethics Statement
280
+
281
+ We do not foresee any significant ethical concerns in this work.
282
+
283
+ # Acknowledgments
284
+
285
+ We are grateful to the anonymous reviewers for their insightful comments and suggestions. Yifan Hou is supported by the Swiss Data Science Center PhD Grant (P22-05). Carl Allen is supported by an ETH AI Centre Postdoctoral Fellowship. We also acknowledge support from an ETH Zurich Research grant (ETH-19 21-1) and a grant from the Swiss National Science Foundation (project #201009) for this work.
286
+
287
+ # References
288
+
289
+ Badr AlKhamissi, Millicent Li, Asli Celikyilmaz, Mona Diab, and Marjan Ghazvininejad. 2022. A review on language models as knowledge bases. CoRR, abs/2204.06031.
290
+ Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623–4637, Online. Association for Computational Linguistics.
291
+ Francis Bond and Ryan Foster. 2013. Linking and extending an open multilingual Wordnet. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1352-1362, Sofia, Bulgaria. Association for Computational Linguistics.
292
+
293
+ Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2787-2795.
294
+ Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
295
+ Muhao Chen, Weijia Shi, Ben Zhou, and Dan Roth. 2021. Cross-lingual entity alignment with incidental supervision. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 645–658, Online. Association for Computational Linguistics.
296
+ Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 1511-1517. ijcai.org.
297
+ Xuelu Chen, Muhao Chen, Changjun Fan, Ankith Uppunda, Yizhou Sun, and Carlo Zaniolo. 2020. Multilingual knowledge graph completion via ensemble knowledge transfer. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 3227-3238, Online. Association for Computational Linguistics.
298
+ Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
299
+ Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 7057-7067.
300
+
301
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
302
+ Sumanth Doddapaneni, Gowtham Ramesh, Anoop Kunchukuttan, Pratyush Kumar, and Mitesh M. Khapra. 2021. A primer on pretrained multilingual language models. CoRR, abs/2107.00676.
303
+ Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon Hare, Frederique Laforest, and Elena Simperl. 2018. T-REx: A large scale alignment of natural language with knowledge base triples. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
304
+ Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 2790-2799. PMLR.
305
+ Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. CoRR, abs/2003.11080.
306
+ Zijie Huang, Zheng Li, Haoming Jiang, Tianyu Cao, Hanqing Lu, Bing Yin, Karthik Subbian, Yizhou Sun, and Wei Wang. 2022. Multilingual knowledge graph completion with self-supervised adaptive graph alignment. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 474-485. Association for Computational Linguistics.
307
+ Wenxiang Jiao, Zhaopeng Tu, Jiarui Li, Wenxuan Wang, Jen-tse Huang, and Shuming Shi. 2022. Tencent's multilingual machine translation system for WMT22 large-scale african languages. In Proceedings of the Seventh Conference on Machine Translation, Online. Association for Computational Linguistics.
308
+ James Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. 2016. Overcoming catastrophic forgetting in neural networks. CoRR, abs/1612.00796.
309
+
310
+ Abdullatif Köksal and Arzucan Özgür. 2020. The RELX dataset and matching the multilingual blanks for cross-lingual relation classification. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 340–350, Online. Association for Computational Linguistics.
311
+ Linlin Liu, Xin Li, Ruidan He, Lidong Bing, Shafiq R. Joty, and Luo Si. 2021. Knowledge based multilingual language model. CoRR, abs/2111.10962.
312
+ Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2020. K-BERT: enabling language representation with knowledge graph. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 2901-2908. AAAI Press.
313
+ Guanglin Niu, Bo Li, Yongfei Zhang, and Shiliang Pu. 2022. CAKE: A scalable commonsense-aware framework for multi-view knowledge graph completion. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 2867-2877. Association for Computational Linguistics.
314
+ Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross-lingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946–1958, Vancouver, Canada. Association for Computational Linguistics.
315
+ Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 43-54, Hong Kong, China. Association for Computational Linguistics.
316
+ Jonas Pfeiffer, Aishwarya Kamath, Andreas Rückle, Kyunghyun Cho, and Iryna Gurevych. 2021. AdapterFusion: Non-destructive task composition for transfer learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 487-503, Online. Association for Computational Linguistics.
317
+ Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy. Association for Computational Linguistics.
318
+
319
+ Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: $100,000+$ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
320
+ Aäron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. CoRR, abs/1807.03748.
321
+ Denny Vrandecic and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10):78-85.
322
+ Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2021. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1405-1418, Online. Association for Computational Linguistics.
323
+ Xiangpeng Wei, Rongxiang Weng, Yue Hu, Luxi Xing, Heng Yu, and Weihua Luo. 2021. On learning universal representations across languages. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
324
+ Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833–844, Hong Kong, China. Association for Computational Linguistics.
325
+ Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
326
+ Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. KG-BERT: BERT for knowledge graph completion. CoRR, abs/1909.03193.
327
+ Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced language representation with informative entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1441-1451, Florence, Italy. Association for Computational Linguistics.
328
+
329
+ # A Implementation Details
330
+
331
+ We implement the adapters using the AdapterHub library<sup>15</sup>, inserting adapters into all Transformer layers of the MLLMs.
332
+
333
+ Adapters in Knowledge Enhancement. To train these knowledgeable adapters, we use 8 GPUs (Tesla V100) with a batch size of 128. The learning rate is set to $1e-4$. We use the Adam optimizer with $1e4$ warm-up steps. We train Adapter-EP by randomly sampling entity alignments in different languages; the number of sampled alignments is around 94.2 million. The number of training epochs for Adapter-TP, Adapter-ES, and Adapter-TS is set to 10. For the InfoNCE loss, we use negative sampling within the batch. Since we train adapters with a sampling strategy and use the contrastive learning loss instead of masked language modeling, it takes only a few hours (1-10) to train one adapter. The whole enhancement procedure takes around half a day.
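+
+ A hedged sketch of this optimization setup is shown below; keeping the learning rate constant after warm-up and the helper name are assumptions of the sketch.
+
+ ```python
+ import torch
+ from transformers import get_constant_schedule_with_warmup
+
+ def build_adapter_optimizer(adapter_params):
+     """Adam with lr 1e-4 and 1e4 warm-up steps, as stated above."""
+     optimizer = torch.optim.Adam(adapter_params, lr=1e-4)
+     scheduler = get_constant_schedule_with_warmup(optimizer, num_warmup_steps=10_000)
+     return optimizer, scheduler
+ ```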
334
+
335
+ Adapters in Knowledge Graph Completion. For MLLM-based methods, we set all hyperparameters the same to ensure a fair comparison $^{16}$ . We use the average of the word(-piece) representations as the entity embedding. Specifically, we train MLLMs as well as MLLMs-AF (including adapters and the fusion mechanism) to embed entities, where the output representations of the object entities should be close to the output representations of the context (subject entities with relations). The similarity is measured by cosine $^{17}$ . During training, the learning rate is set to $1e-8$, the number of epochs is set to 10, and the batch size is set to 8. We train MLLMs using the contrastive learning loss, similar to the knowledge integration process.
336
+
337
+ Adapters in Entity Alignment. Similarly, we set all hyperparameters the same for all MLLM-based methods. Specifically, we set the number of epochs to 1, since overfitting occurs easily with training data in only 2 languages. Other hyperparameters and settings are the same as those of the MLKG completion task.
338
+
339
+ Adapters in Language Tasks. We evaluate our adapter set with MLLMs on the XTREME benchmark.
340
+
341
+ The evaluation settings are the same as theirs.
342
+
343
+ # B Knowledge Integration Dataset Statistics
344
+
345
+ The detailed statistics can be found in Table 8 below.
346
+
347
+ # C MLKG Dataset Statistics and Detailed Results
348
+
349
+ The detailed statistics and results can be found in Table 9 and Table 10.
350
+
351
+ # D MLLM Dataset Statistics and Detailed Results
352
+
353
+ The detailed statistics and results can be found in Table 11 (relation classification), Table 12 (named entity recognition), and Table 13 (question answering).
354
+
355
+ Table 8: Distribution of Wikidata for adapter training. We report the full name and ISO code for all languages. For entities, relations, and triples, we report the ratio of labels in that specific language to the total number.
356
+
357
+ <table><tr><td>ISO</td><td>Lang.</td><td>Entity (%)</td><td>Relation (%)</td><td>Triple (%)</td><td>ISO</td><td>Lang.</td><td>Entity (%)</td><td>Relation (%)</td><td>Triple (%)</td><td>ISO</td><td>Lang.</td><td>Entity (%)</td><td>Relation (%)</td><td>Triple (%)</td></tr><tr><td>af</td><td>Afrikaans</td><td>56.4</td><td>20.5</td><td>31.8</td><td>gu</td><td>Gujarati</td><td>12.2</td><td>14.2</td><td>2.1</td><td>nn</td><td>Norwegian Nynorsk</td><td>70.6</td><td>44.4</td><td>57.9</td></tr><tr><td>an</td><td>Aragonese</td><td>59.8</td><td>0.7</td><td>10.7</td><td>he</td><td>Hebrew</td><td>25.3</td><td>62.2</td><td>29.7</td><td>no</td><td>Norwegian</td><td>0.0</td><td>-</td><td>-</td></tr><tr><td>ar</td><td>Arabic</td><td>33.5</td><td>91.0</td><td>42.3</td><td>hi</td><td>Hindi</td><td>14.8</td><td>13.3</td><td>4.4</td><td>oc</td><td>Occitan</td><td>60.9</td><td>23.9</td><td>32.1</td></tr><tr><td>ast</td><td>Asturian</td><td>84.2</td><td>28.3</td><td>71.3</td><td>hr</td><td>Croatian</td><td>57.8</td><td>23.1</td><td>33.5</td><td>pl</td><td>Polish</td><td>92.5</td><td>73.0</td><td>85.9</td></tr><tr><td>az</td><td>Azerbaijani</td><td>19.3</td><td>19.3</td><td>9.8</td><td>hu</td><td>Hungarian</td><td>71.9</td><td>64.6</td><td>70.2</td><td>pt</td><td>Portuguese</td><td>96.4</td><td>80.8</td><td>91.0</td></tr><tr><td>bar</td><td>Bavarian</td><td>52.4</td><td>1.8</td><td>10.0</td><td>hy</td><td>Armenian</td><td>21.5</td><td>21.4</td><td>17.0</td><td>ro</td><td>Romanian</td><td>81.7</td><td>32.8</td><td>59.3</td></tr><tr><td>be</td><td>Belarusian</td><td>18.0</td><td>52.7</td><td>11.6</td><td>id</td><td>Indonesian</td><td>65.2</td><td>48.2</td><td>47.7</td><td>ru</td><td>Russian</td><td>54.4</td><td>88.6</td><td>64.1</td></tr><tr><td>bg</td><td>Bulgarian</td><td>31.9</td><td>22.6</td><td>19.4</td><td>is</td><td>Icelandic</td><td>52.3</td><td>7.6</td><td>15.0</td><td>scn</td><td>Sicilian</td><td>39.6</td><td>25.5</td><td>17.7</td></tr><tr><td>bn</td><td>Bengali</td><td>18.3</td><td>34.6</td><td>11.5</td><td>it</td><td>Italian</td><td>97.8</td><td>78.2</td><td>97.0</td><td>sco</td><td>Scots</td><td>56.8</td><td>27.3</td><td>27.3</td></tr><tr><td>br</td><td>Breton</td><td>54.5</td><td>18.8</td><td>30.0</td><td>ja</td><td>Japanese</td><td>37.5</td><td>77.5</td><td>48.3</td><td>sh</td><td>Serbo-Croatian</td><td>21.7</td><td>9.2</td><td>6.9</td></tr><tr><td>bs</td><td>Bosnian</td><td>44.7</td><td>27.3</td><td>18.6</td><td>jv</td><td>Javanese</td><td>41.3</td><td>1.6</td><td>7.1</td><td>sk</td><td>Slovak</td><td>62.4</td><td>25.8</td><td>38.4</td></tr><tr><td>ca</td><td>Catalan</td><td>87.2</td><td>99.3</td><td>88.9</td><td>ka</td><td>Georgian</td><td>16.2</td><td>23.8</td><td>9.3</td><td>sl</td><td>Slovenian</td><td>69.1</td><td>24.8</td><td>56.0</td></tr><tr><td>ceb</td><td>Cebuano</td><td>51.5</td><td>0.3</td><td>0.2</td><td>kk</td><td>Kazakh</td><td>16.7</td><td>4.0</td><td>2.2</td><td>sq</td><td>Albanian</td><td>73.0</td><td>28.1</td><td>47.2</td></tr><tr><td>cs</td><td>Czech</td><td>73.5</td><td>68.4</td><td>66.4</td><td>kn</td><td>Kannada</td><td>13.7</td><td>7.8</td><td>2.1</td><td>sr</td><td>Serbian</td><td>23.3</td><td>92.6</td><td>17.5</td></tr><tr><td>cy</td><td>Welsh</td><td>61.4</td><td>35.4</td><td>43.3</td><td>ko</td><td>Korean</td><td>26.2</td><td>58.2</td><td>25.3</td><td>sv</td><td>Swedish</td><td>91.7</td><td>73.8</td><td>90.9</td></tr><tr><td>da</td><td>Danish</td><td>77.3</td><td>57.5</td><td>75.4</td><td>la</td><td>Latin</td><td>59.9</td><td>9.4</td><td>23.1</td><td>sw</td><td>Swhili</t
d><td>50.4</td><td>0.6</td><td>6.2</td></tr><tr><td>de</td><td>German</td><td>98.5</td><td>90.7</td><td>98.5</td><td>lb</td><td>Luxembourgish</td><td>55.3</td><td>25.2</td><td>33.5</td><td>ta</td><td>Tamil</td><td>14.8</td><td>18.8</td><td>4.8</td></tr><tr><td>el</td><td>Greek</td><td>19.5</td><td>46.5</td><td>16.5</td><td>lt</td><td>Lithuanian</td><td>52.5</td><td>15.7</td><td>27.4</td><td>te</td><td>Telugu</td><td>13.2</td><td>17.7</td><td>3.1</td></tr><tr><td>en</td><td>English</td><td>100.0</td><td>100.0</td><td>100.0</td><td>lv</td><td>Latvian</td><td>38.2</td><td>40.2</td><td>25.6</td><td>th</td><td>Thai</td><td>16.2</td><td>20.5</td><td>7.3</td></tr><tr><td>es</td><td>Spanish</td><td>98.7</td><td>94.0</td><td>98.6</td><td>mk</td><td>Macedonian</td><td>16.5</td><td>95.1</td><td>9.3</td><td>tl</td><td>Tagalog</td><td>16.4</td><td>7.2</td><td>5.3</td></tr><tr><td>et</td><td>Estonian</td><td>60.8</td><td>25.9</td><td>40.9</td><td>ml</td><td>Malayalam</td><td>15.9</td><td>14.6</td><td>4.7</td><td>tr</td><td>Turkish</td><td>64.4</td><td>81.2</td><td>50.9</td></tr><tr><td>eu</td><td>Basque</td><td>74.0</td><td>37.4</td><td>54.4</td><td>mn</td><td>Mongolian</td><td>11.5</td><td>1.3</td><td>0.2</td><td>tt</td><td>Tatar</td><td>19.0</td><td>35.7</td><td>12.4</td></tr><tr><td>fa</td><td>Persian</td><td>32.5</td><td>51.1</td><td>33.7</td><td>mr</td><td>Marathi</td><td>13.4</td><td>17.4</td><td>3.7</td><td>uk</td><td>Ukrainian</td><td>45.2</td><td>97.7</td><td>44.4</td></tr><tr><td>fi</td><td>Finnish</td><td>89.9</td><td>56.3</td><td>78.8</td><td>ms</td><td>Malay</td><td>56.3</td><td>40.9</td><td>35.0</td><td>ur</td><td>Urdu</td><td>16.7</td><td>28.1</td><td>7.8</td></tr><tr><td>fr</td><td>French</td><td>98.5</td><td>97.3</td><td>99.1</td><td>my</td><td>Burmese</td><td>11.7</td><td>5.3</td><td>0.9</td><td>uz</td><td>Uzbek</td><td>17.1</td><td>3.7</td><td>4.6</td></tr><tr><td>fy</td><td>Western Frisian</td><td>41.6</td><td>4.7</td><td>7.6</td><td>nds</td><td>Low Saxon</td><td>54.1</td><td>23.1</td><td>29.3</td><td>vi</td><td>Vietnamese</td><td>74.7</td><td>32.8</td><td>44.4</td></tr><tr><td>ga</td><td>Irish</td><td>78.4</td><td>25.2</td><td>57.3</td><td>ne</td><td>Nepali</td><td>11.3</td><td>7.7</td><td>1.1</td><td>war</td><td>Waray-Waray</td><td>61.1</td><td>0.1</td><td>0.0</td></tr><tr><td>gl</td><td>Galician</td><td>65.2</td><td>38.5</td><td>45.9</td><td>nl</td><td>Dutch</td><td>98.3</td><td>100.0</td><td>98.2</td><td>zh</td><td>Chinese</td><td>41.1</td><td>64.9</td><td>49.7</td></tr></table>
358
+
359
+ Table 9: The performance of various models for the MLKG completion task (Hit@1/MRR) across different languages. We also report the number of entities in the test set to show the general difficulty of the completion task in that language.
360
+
361
+ <table><tr><td>Language</td><td># of test set</td><td>TransE</td><td>DisMult</td><td>mBERT</td><td>mBERT-MLKG</td><td>XLM</td><td>XLM-MLKG</td><td>XLM-R</td><td>XLM-R-MLKG</td></tr><tr><td>el</td><td>1082</td><td>13.1 / 24.3</td><td>8.9 / 9.8</td><td>9.2 / 11.6</td><td>8.5 / 11.2</td><td>4.8 / 6.9</td><td>6.9 / 9.7</td><td>5.0 / 7.5</td><td>9.3 / 12.8</td></tr><tr><td>en</td><td>5984</td><td>7.3 / 16.9</td><td>8.8 / 18.3</td><td>15.2 / 17.7</td><td>18.5 / 21.3</td><td>8.2 / 10.0</td><td>11.7 / 14.8</td><td>10.4 / 12.5</td><td>17.5 / 19.9</td></tr><tr><td>es</td><td>4101</td><td>13.5 / 24.4</td><td>7.4 / 13.2</td><td>14.3 / 17.2</td><td>17.7 / 20.5</td><td>7.0 / 9.4</td><td>11.7 / 14.9</td><td>9.7 / 12.2</td><td>18.0 / 20.7</td></tr><tr><td>fr</td><td>4436</td><td>17.5 / 27.6</td><td>6.1 / 14.5</td><td>12.7 / 15.4</td><td>17.4 / 19.9</td><td>6.3 / 8.5</td><td>12.1 / 14.5</td><td>9.2 / 11.6</td><td>16.2 / 18.5</td></tr><tr><td>ja</td><td>2569</td><td>21.1 / 25.3</td><td>9.3 / 15.8</td><td>4.6 / 6.9</td><td>3.6 / 5.8</td><td>3.1 / 4.4</td><td>3.0 / 5.0</td><td>2.4 / 4.6</td><td>4.7 / 7.4</td></tr><tr><td>ast</td><td>2823</td><td>-</td><td>-</td><td>13.9 / 16.8</td><td>19.1 / 21.8</td><td>7.1 / 9.5</td><td>13.7 / 16.5</td><td>10.6 / 12.9</td><td>17.5 / 20.5</td></tr><tr><td>ca</td><td>2959</td><td>-</td><td>-</td><td>14.8 / 17.6</td><td>19.1 / 21.5</td><td>7.9 / 10.4</td><td>13.8 / 16.5</td><td>11.1 / 13.4</td><td>17.4 / 20.2</td></tr><tr><td>da</td><td>2566</td><td>-</td><td>-</td><td>16.1 / 19.2</td><td>19.9 / 23.0</td><td>8.7 / 11.6</td><td>13.3 / 16.9</td><td>11.5 / 14.1</td><td>17.6 / 21.4</td></tr><tr><td>de</td><td>4059</td><td>-</td><td>-</td><td>14.1 / 16.8</td><td>17.4 / 20.4</td><td>8.3 / 11.2</td><td>11.4 / 14.6</td><td>9.8 / 12.6</td><td>15.6 / 18.7</td></tr><tr><td>fa</td><td>2329</td><td>-</td><td>-</td><td>5.0 / 7.1</td><td>5.3 / 6.9</td><td>3.9 / 4.8</td><td>4.1 / 5.8</td><td>5.1 / 7.3</td><td>5.2 / 7.2</td></tr><tr><td>fi</td><td>2582</td><td>-</td><td>-</td><td>11.2 / 14.6</td><td>16.1 / 19.1</td><td>6.2 / 8.6</td><td>9.9 / 13.0</td><td>8.2 / 11.1</td><td>13.7 / 17.0</td></tr><tr><td>hu</td><td>2558</td><td>-</td><td>-</td><td>13.7 / 16.7</td><td>18.4 / 21.4</td><td>6.4 / 9.2</td><td>11.4 / 14.8</td><td>10.0 / 12.5</td><td>15.7 / 18.7</td></tr><tr><td>it</td><td>3614</td><td>-</td><td>-</td><td>14.4 / 17.0</td><td>17.3 / 19.8</td><td>7.6 / 9.8</td><td>12.2 / 15.2</td><td>10.4 / 12.8</td><td>15.7 / 18.6</td></tr><tr><td>nb</td><td>2717</td><td>-</td><td>-</td><td>16.4 / 19.4</td><td>19.5 / 23.3</td><td>8.9 / 11.6</td><td>13.5 / 17.0</td><td>11.3 / 13.9</td><td>18.0 / 21.4</td></tr><tr><td>nl</td><td>4316</td><td>-</td><td>-</td><td>14.0 / 16.8</td><td>19.1 / 21.7</td><td>7.3 / 9.8</td><td>13.3 / 15.9</td><td>8.6 / 11.5</td><td>17.4 / 20.2</td></tr><tr><td>pl</td><td>2998</td><td>-</td><td>-</td><td>13.4 / 17.2</td><td>18.6 / 21.8</td><td>6.1 / 8.5</td><td>9.7 / 13.3</td><td>8.7 / 11.5</td><td>14.6 / 18.0</td></tr><tr><td>pt</td><td>3184</td><td>-</td><td>-</td><td>15.4 / 18.4</td><td>18.0 / 20.6</td><td>7.3 / 9.7</td><td>12.3 / 15.4</td><td>9.6 / 12.1</td><td>17.5 / 20.6</td></tr><tr><td>ru</td><td>2887</td><td>-</td><td>-</td><td>9.4 / 11.8</td><td>10.3 / 12.1</td><td>3.5 / 5.5</td><td>4.6 / 6.6</td><td>4.8 / 7.4</td><td>6.3 / 8.6</td></tr><tr><td>sv</td><td>2993</td><td>-</td><td>-</td><td>15.7 / 18.5</td><td>18.7 / 22.0</td><td>9.2 / 11.7</td><td>13.0 / 16.4</td><td>11.0 / 13.6</td><td>17.8 / 21.3</td></tr><tr><td>zh</td><td>2591</td><td>-</td><td>-</td><td>5.1 / 
7.4</td><td>4.1 / 6.4</td><td>2.2 / 4.2</td><td>2.7 / 5.1</td><td>3.4 / 5.3</td><td>4.3 / 6.8</td></tr><tr><td>eo</td><td>963</td><td>-</td><td>-</td><td>-</td><td>-</td><td>8.2 / 11.8</td><td>16.6 / 19.6</td><td>16.8 / 20.8</td><td>23.9 / 27.4</td></tr><tr><td>vo</td><td>164</td><td>-</td><td>-</td><td>48.1 / 49.1</td><td>51.8 / 52.4</td><td>-</td><td>-</td><td>-</td><td>-</td></tr></table>
362
+
363
+ Table 10: The performance of various models for the entity alignment task (Hit@1/MRR) across different languages. We also report the number of entities in the test set to show the general difficulty of the completion task in that language.
364
+
365
+ <table><tr><td>Language</td><td># of test set</td><td>MTransE</td><td>JEANS</td><td>mBERT</td><td>mBERT-MLKG</td><td>XLM</td><td>XLM-MLKG</td><td>XLM-R</td><td>XLM-R-MLKG</td></tr><tr><td>en-&gt;fr</td><td>39155</td><td>14.0 / 17.7</td><td>46.3 / 53.8</td><td>87.1 / 86.4</td><td>92.6 / 92.1</td><td>55.3 / 55.3</td><td>92.1 / 91.4</td><td>65.2 / 65.2</td><td>93.5 / 92.8</td></tr><tr><td>en-&gt;de</td><td>41018</td><td>3.4 / 7.2</td><td>33.7 / 41.2</td><td>80.1 / 79.9</td><td>85.2 / 84.7</td><td>54.3 / 54.3</td><td>85.1 / 84.6</td><td>64.8 / 64.9</td><td>86.8 / 86.2</td></tr><tr><td>en-&gt;ar</td><td>16818</td><td>-</td><td>-</td><td>8.9 / 10.0</td><td>68.6 / 67.4</td><td>0.7 / 0.9</td><td>63.4 / 62.5</td><td>0.9 / 1.1</td><td>81.8 / 80.0</td></tr><tr><td>en-&gt;ast</td><td>19834</td><td>-</td><td>-</td><td>41.8 / 41.9</td><td>85.2 / 83.9</td><td>13.0 / 13.2</td><td>93.6 / 92.6</td><td>33.5 / 33.8</td><td>97.3 / 96.3</td></tr><tr><td>en-&gt;ca</td><td>22567</td><td>-</td><td>-</td><td>38.2 / 38.3</td><td>81.5 / 80.2</td><td>10.8 / 11.1</td><td>90.2 / 88.8</td><td>29.9 / 30.2</td><td>94.5 / 93.4</td></tr><tr><td>en-&gt;cs</td><td>16570</td><td>-</td><td>-</td><td>40.0 / 40.3</td><td>82.5 / 81.1</td><td>11.9 / 12.2</td><td>89.8 / 88.6</td><td>30.4 / 30.5</td><td>93.9 / 92.8</td></tr><tr><td>en-&gt;da</td><td>20093</td><td>-</td><td>-</td><td>39.2 / 39.4</td><td>82.4 / 81.3</td><td>12.7 / 12.9</td><td>91.7 / 90.5</td><td>33.0 / 33.2</td><td>95.5 / 94.4</td></tr><tr><td>en-&gt;es</td><td>28288</td><td>-</td><td>-</td><td>40.6 / 40.3</td><td>81.8 / 80.2</td><td>11.5 / 11.7</td><td>90.1 / 88.6</td><td>33.2 / 32.3</td><td>94.3 / 92.7</td></tr><tr><td>en-&gt;fa</td><td>16120</td><td>-</td><td>-</td><td>10.1 / 11.3</td><td>69.4 / 68.2</td><td>1.0 / 1.2</td><td>67.6 / 66.9</td><td>1.8 / 2.2</td><td>83.1 / 81.8</td></tr><tr><td>en-&gt;fi</td><td>20608</td><td>-</td><td>-</td><td>39.4 / 39.4</td><td>81.3 / 79.9</td><td>12.4 / 12.6</td><td>90.0 / 88.8</td><td>32.2 / 32.4</td><td>94.2 / 93.1</td></tr><tr><td>en-&gt;hu</td><td>18896</td><td>-</td><td>-</td><td>36.3 / 36.7</td><td>80.5 / 79.4</td><td>11.3 / 11.4</td><td>89.2 / 88.0</td><td>29.6 / 29.9</td><td>93.6 / 92.7</td></tr><tr><td>en-&gt;it</td><td>26393</td><td>-</td><td>-</td><td>39.4 / 39.5</td><td>80.2 / 78.7</td><td>11.5 / 11.8</td><td>88.4 / 86.9</td><td>31.2 / 31.2</td><td>92.4 / 91.0</td></tr><tr><td>en-&gt;ja</td><td>22012</td><td>-</td><td>-</td><td>8.9 / 10.1</td><td>64.3 / 63.4</td><td>0.7 / 0.8</td><td>60.9 / 60.0</td><td>1.4 / 1.5</td><td>77.8 / 76.4</td></tr><tr><td>en-&gt;nb</td><td>20748</td><td>-</td><td>-</td><td>39.2 / 39.3</td><td>82.5 / 81.1</td><td>11.5 / 11.8</td><td>91.8 / 90.4</td><td>32.2 / 32.5</td><td>95.6 / 94.4</td></tr><tr><td>en-&gt;nl</td><td>29378</td><td>-</td><td>-</td><td>41.3 / 41.3</td><td>82.4 / 80.5</td><td>12.2 / 12.4</td><td>90.8 / 89.0</td><td>34.1 / 34.1</td><td>94.6 / 92.8</td></tr><tr><td>en-&gt;pl</td><td>21535</td><td>-</td><td>-</td><td>38.7 / 38.9</td><td>80.0 / 78.7</td><td>11.2 / 11.4</td><td>87.6 / 86.3</td><td>30.2 / 30.3</td><td>92.6 / 91.2</td></tr><tr><td>en-&gt;pt</td><td>23001</td><td>-</td><td>-</td><td>41.5 / 41.5</td><td>82.5 / 81.0</td><td>12.3 / 12.6</td><td>90.6 / 89.2</td><td>33.1 / 33.2</td><td>94.4 / 93.1</td></tr><tr><td>en-&gt;ru</td><td>22665</td><td>-</td><td>-</td><td>19.2 / 20.2</td><td>74.2 / 72.6</td><td>3.6 / 3.7</td><td>78.0 / 76.2</td><td>10.0 / 10.3</td><td>87.9 / 85.9</td></tr><tr><td>en-&gt;sv</td><td>22986</td><td>-</td><td>-</td><td>39.7 / 39.7</td><td>81.6 / 
80.1</td><td>11.8 / 12.1</td><td>90.6 / 89.2</td><td>32.2 / 32.4</td><td>94.4 / 93.1</td></tr><tr><td>en-&gt;zh</td><td>20891</td><td>-</td><td>-</td><td>10.2 / 11.1</td><td>55.1 / 54.8</td><td>9.0 / 10.6</td><td>49.8 / 49.5</td><td>1.9 / 1.9</td><td>67.3 / 66.2</td></tr><tr><td>en-&gt;eo</td><td>8913</td><td>-</td><td>-</td><td>-</td><td>-</td><td>10.9 / 11.1</td><td>85.4 / 84.3</td><td>28.9 / 28.9</td><td>89.8 / 88.5</td></tr><tr><td>en-&gt;vo</td><td>2954</td><td>-</td><td>-</td><td>50.5 / 50.8</td><td>91.7 / 89.3</td><td>-</td><td>-</td><td>-</td><td>-</td></tr></table>
366
+
367
+ Table 11: Detailed results of the cross-lingual relation classification task (RELX) evaluated by F1 score.
368
+
369
+ <table><tr><td>Language</td><td>mBERT</td><td>mBERT-MLKG</td><td>XLM</td><td>XLM-MLKG</td><td>XLM-R</td><td>XLM-R-MLKG</td></tr><tr><td>en</td><td>61.8</td><td>64.0</td><td>61.4</td><td>61.3</td><td>63.1</td><td>64.2</td></tr><tr><td>de</td><td>57.5</td><td>60.0</td><td>57.5</td><td>56.1</td><td>58.0</td><td>60.2</td></tr><tr><td>es</td><td>57.9</td><td>63.1</td><td>56.9</td><td>59.7</td><td>59.8</td><td>60.7</td></tr><tr><td>fr</td><td>58.3</td><td>61.1</td><td>55.7</td><td>58.0</td><td>59.5</td><td>61.5</td></tr><tr><td>tr</td><td>55.8</td><td>59.3</td><td>54.1</td><td>58.0</td><td>59.1</td><td>59.0</td></tr><tr><td>average</td><td>58.3</td><td>61.5</td><td>57.1</td><td>58.6</td><td>59.9</td><td>61.1</td></tr></table>
370
+
371
+ Table 12: Detailed results of the NER task (Wikiann) evaluated by F1 score.
372
+
373
+ <table><tr><td>Language</td><td>mBERT</td><td>mBERT-MLKG</td><td>XLM-R</td><td>XLM-R-MLKG</td><td>Language</td><td>mBERT</td><td>mBERT-MLKG</td><td>XLM-R</td><td>XLM-R-MLKG</td></tr><tr><td>en</td><td>85.2</td><td>84.0</td><td>84.7</td><td>85.0</td><td>ka</td><td>64.6</td><td>66.9</td><td>71.6</td><td>69.3</td></tr><tr><td>af</td><td>77.4</td><td>77.2</td><td>78.9</td><td>79.2</td><td>kk</td><td>45.8</td><td>49.1</td><td>56.2</td><td>53.3</td></tr><tr><td>ar</td><td>41.1</td><td>40.5</td><td>53.0</td><td>51.8</td><td>ko</td><td>59.6</td><td>60.2</td><td>60.0</td><td>61.0</td></tr><tr><td>bg</td><td>77.0</td><td>76.2</td><td>81.4</td><td>80.6</td><td>ml</td><td>52.3</td><td>53.1</td><td>67.8</td><td>61.0</td></tr><tr><td>bn</td><td>70.0</td><td>72.8</td><td>78.8</td><td>78.1</td><td>mr</td><td>58.2</td><td>55.0</td><td>68.1</td><td>67.2</td></tr><tr><td>de</td><td>78.0</td><td>78.6</td><td>78.8</td><td>78.5</td><td>ms</td><td>72.7</td><td>68.1</td><td>57.1</td><td>74.6</td></tr><tr><td>el</td><td>72.5</td><td>70.8</td><td>79.5</td><td>79.6</td><td>my</td><td>45.2</td><td>55.5</td><td>54.3</td><td>56.8</td></tr><tr><td>es</td><td>77.4</td><td>74.8</td><td>79.6</td><td>75.8</td><td>nl</td><td>81.8</td><td>82.3</td><td>84.0</td><td>83.5</td></tr><tr><td>et</td><td>75.4</td><td>78.6</td><td>79.1</td><td>78.1</td><td>pt</td><td>80.8</td><td>78.7</td><td>81.9</td><td>82.5</td></tr><tr><td>eu</td><td>66.3</td><td>68.3</td><td>60.9</td><td>59.0</td><td>ru</td><td>64.0</td><td>66.8</td><td>69.1</td><td>70.5</td></tr><tr><td>fa</td><td>46.2</td><td>38.6</td><td>61.9</td><td>48.9</td><td>sw</td><td>67.5</td><td>70.1</td><td>70.5</td><td>70.0</td></tr><tr><td>fi</td><td>77.2</td><td>78.3</td><td>79.2</td><td>79.0</td><td>ta</td><td>50.7</td><td>53.8</td><td>59.5</td><td>60.8</td></tr><tr><td>fr</td><td>79.6</td><td>78.9</td><td>80.5</td><td>80.2</td><td>te</td><td>48.5</td><td>48.2</td><td>55.8</td><td>50.9</td></tr><tr><td>he</td><td>56.6</td><td>54.4</td><td>56.8</td><td>57.9</td><td>th</td><td>3.6</td><td>0.1</td><td>1.3</td><td>2.9</td></tr><tr><td>hi</td><td>65.0</td><td>66.3</td><td>73.0</td><td>73.0</td><td>tl</td><td>71.7</td><td>74.6</td><td>73.2</td><td>78.0</td></tr><tr><td>hu</td><td>76.4</td><td>78.0</td><td>79.8</td><td>80.6</td><td>tr</td><td>71.8</td><td>74.4</td><td>76.1</td><td>80.6</td></tr><tr><td>id</td><td>53.5</td><td>54.6</td><td>53.0</td><td>55.9</td><td>ur</td><td>36.9</td><td>43.9</td><td>56.4</td><td>63.2</td></tr><tr><td>it</td><td>81.5</td><td>81.6</td><td>81.3</td><td>81.2</td><td>vi</td><td>71.8</td><td>70.7</td><td>79.4</td><td>78.9</td></tr><tr><td>ja</td><td>29.0</td><td>29.2</td><td>23.2</td><td>23.1</td><td>yo</td><td>44.9</td><td>50.9</td><td>33.6</td><td>45.4</td></tr><tr><td>jv</td><td>66.4</td><td>65.3</td><td>62.5</td><td>66.9</td><td>zh</td><td>42.7</td><td>45.0</td><td>33.1</td><td>28.9</td></tr></table>
374
+
375
+ Table 13: Detailed results of the QA task (XQuAD) evaluated by F1/EM score.
376
+
377
+ <table><tr><td>Language</td><td>mBERT</td><td>mBERT-MLKG</td><td>XLM-R</td><td>XLM-R-MLKG</td></tr><tr><td>en</td><td>83.5 / 72.2</td><td>83.5 / 72.0</td><td>86.5 / 75.7</td><td>88.0 / 77.6</td></tr><tr><td>ar</td><td>61.5 / 45.1</td><td>61.3 / 44.5</td><td>68.6 / 49.0</td><td>76.2 / 58.9</td></tr><tr><td>de</td><td>70.6 / 54.0</td><td>70.6 / 54.8</td><td>80.4 / 63.4</td><td>79.6 / 62.8</td></tr><tr><td>el</td><td>62.6 / 44.9</td><td>63.5 / 47.5</td><td>79.8 / 61.7</td><td>79.1 / 61.3</td></tr><tr><td>es</td><td>75.5 / 56.9</td><td>74.4 / 57.2</td><td>82.0 / 63.9</td><td>82.4 / 64.4</td></tr><tr><td>hi</td><td>59.2 / 46.0</td><td>57.2 / 42.9</td><td>76.7 / 59.7</td><td>75.6 / 59.3</td></tr><tr><td>ru</td><td>71.3 / 53.3</td><td>70.5 / 54.4</td><td>80.1 / 64.3</td><td>79.7 / 63.6</td></tr><tr><td>th</td><td>42.7 / 33.5</td><td>43.6 / 36.8</td><td>74.2 / 62.8</td><td>73.3 / 61.2</td></tr><tr><td>tr</td><td>55.4 / 40.1</td><td>53.7 / 38.0</td><td>75.9 / 59.3</td><td>74.9 / 58.9</td></tr><tr><td>vi</td><td>69.5 / 49.6</td><td>67.7 / 47.9</td><td>79.1 / 59.0</td><td>80.0 / 60.6</td></tr><tr><td>zh</td><td>58.0 / 48.3</td><td>58.0 / 48.3</td><td>59.3 / 50.0</td><td>56.0 / 46.7</td></tr><tr><td>average</td><td>64.5 / 49.4</td><td>62.2 / 49.5</td><td>76.6 / 60.8</td><td>76.8 / 61.3</td></tr></table>
adaptersforenhancedmodelingofmultilingualknowledgeandtext/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bec99ca2958ec20eb0b58fc5db24b4292556be551c771c26e6463a5c2d2c3a65
3
+ size 1090026
adaptersforenhancedmodelingofmultilingualknowledgeandtext/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c134594010caa0a5f83010a3beef91abc9c5f7b453766c37d7758c50d25e2ea6
3
+ size 488673
adaptingmultilingualmodelsforcodemixedtranslation/34deb24e-6aa2-4be4-be3e-63c105f3ceda_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7bc93a26798ee2a47922315d4cb74301630427b877a2b75ed46eaf20b1f925d3
3
+ size 61197
adaptingmultilingualmodelsforcodemixedtranslation/34deb24e-6aa2-4be4-be3e-63c105f3ceda_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c8f0037512a091ccd82cbff4af44f4966963b493dd107b82f9087b56c4df8856
3
+ size 74650
adaptingmultilingualmodelsforcodemixedtranslation/34deb24e-6aa2-4be4-be3e-63c105f3ceda_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:14718f9531c8854fba4da6850de2a72a284595b4edcde6bfcdebcb2dfca6e2a3
3
+ size 515896
adaptingmultilingualmodelsforcodemixedtranslation/full.md ADDED
@@ -0,0 +1,194 @@
1
+ # Adapting Multilingual Models for Code-Mixed Translation
2
+
3
+ Aditya Vavre<sup>1</sup>, Abhirut Gupta<sup>2</sup>, and Sunita Sarawagi<sup>1</sup>
4
+
5
+ $^{1}$ IIT Bombay $^{2}$ Google Research
6
+
7
+ {adityavavre,sunita}@cse.iitb.ac.in,abhirut@google.com
8
+
9
+ # Abstract
10
+
11
+ The scarcity of gold-standard code-mixed to pure-language parallel data makes it difficult to train translation models reliably. Prior work has addressed the paucity of parallel data with data augmentation techniques. Such methods rely heavily on external resources, making systems difficult to train and scale effectively for multiple languages. We present a simple yet highly effective two-stage back-translation based training scheme for adapting multilingual models to the task of code-mixed translation, which eliminates the dependence on external resources. We show a substantial improvement in translation quality (measured through BLEU), beating prior work by up to $+3.8$ BLEU on code-mixed $\mathrm{Hi}\rightarrow \mathrm{En}$, $\mathrm{Mr}\rightarrow \mathrm{En}$, and $\mathrm{Bn}\rightarrow \mathrm{En}$ tasks. On the LinCE Machine Translation leaderboard, we achieve the highest score for code-mixed $\mathrm{Es}\rightarrow \mathrm{En}$, beating the existing best baseline by $+6.5$ BLEU and our own stronger baseline by $+1.1$ BLEU.
12
+
13
+ # 1 Introduction
14
+
15
+ As code-mixing (Diab et al., 2014; Winata et al., 2019; Khanuja et al., 2020; Aguilar et al., 2020) becomes widespread in an increasingly digitized bilingual community, it becomes important to extend translation systems to handle code-mixed input. A major challenge for training code-mixed translation models is the lack of parallel data. Recent work on generating synthetic parallel data from available non-code-mixed parallel data depends on language-specific tools for transliteration, word alignment, and language identification (Gupta et al., 2021). This makes the approach difficult to scale to new languages and increases software complexity. Back-translation (BT) is another effective and popular strategy for handling the non-availability of parallel data (Sennrich et al., 2016; Edunov et al., 2018). However, for the code-mixed-to-English translation task, simple BT is not an option since we cannot
16
+
17
+ assume the presence of an English to code-mixed translation model.
18
+
19
+ Meanwhile the mainstream translation community is converging on frameworks based on multilingual models for translation between multiple language pairs (Johnson et al., 2017; Aharoni et al., 2019; Arivazhagan et al., 2019; Zhang et al., 2020; Fan et al., 2021). Going forward, code-mixed translation needs to be integrated within these frameworks to impact practical systems.
20
+
21
+ We propose a novel two-stage back-translation methodology called Back-to-Back Translation (B2BT), targeted at adapting multilingual models to code-mixed translation. Our approach is simple and integrates easily with existing multilingual translation models, without any need for special models or language-specific tools. We compare B2BT with six other baselines on both standalone and mBART-based models across four benchmarks and show significant gains. For example, on code-mixed Hindi-to-English translation, B2BT improves state-of-the-art accuracy by $+3.8$ BLEU and beats default back-translation by $+6.3$ BLEU. We analyze the reasons for the gains via both human evaluation and the impact on downstream models. We release a new dataset and will publicly release our code.
22
+
23
+ # 2 Our Approach
24
+
25
+ Our objective is to train a model that can translate a sentence from the code-mixed language $\mathcal{C}$, which contains words from English and an additional language $S$, to monolingual English $\mathcal{E}$. Following Myers-Scotton (1997), we refer to $S$ as the matrix language, since it lends its grammar in a code-mixed utterance, and to English as the embedded language, since it lends only its words. We are given a parallel $S$-to-English corpus $(S,E)\subset (\mathcal{S},\mathcal{E})$ and a non-parallel code-mixed corpus $C\subset \mathcal{C}$. Since code-mixing appears more in domains like social media, which differ from formal domains like news in which parallel data $(S,E)$ is available, we additionally use a domain-specific monolingual English corpus $E_{MD}\subset \mathcal{E}$.
26
+
27
+ ![](images/cc92cba99d09cf93295b4ca47c4146319d2462c7790f9b73f68d7e1bd51fd2de.jpg)
28
+ Figure 1: B2BT training pipeline, showing the two-stage back-translation based adaptation of an initial multilingual model. $(\widetilde{\cdot})$ indicates source side masking during training.
29
+
30
+ ![](images/0850c681c04d82025d4deca9952437a3ed31da56eb181e73a3dc77b980bd6845.jpg)
31
+
32
+ ![](images/b248a690a21079e173e1206c7f3608113570c0805571f3c7a2e99f16fa08f244.jpg)
33
+
34
+ Optionally, we can also exploit monolingual data $S_{M} \subset \mathcal{S}$ and $E_{M} \subset \mathcal{E}$. Our method, called B2BT, for training a $\mathcal{C} \rightarrow \mathcal{E}$ translator without parallel data is summarised in Figure 1 and comprises an initial training of a multilingual model followed by two stages of back-translation-based fine-tuning, which we elaborate on next.
35
+
36
+ Training Base Multilingual Model The first step is to train a multilingual model $(\mathcal{M})$ on the parallel matrix-language-to-English corpus $(S,E)$ in both directions and on non-parallel data in English $E_{M}$, the matrix language $S_{M}$, and code-mixed $C$. Following Johnson et al. (2017), we prefix source sentences with one of $<2\text{en}>$, $<2\text{cm}>$, and $<2\text{xx}>$, directing the target to be English, code-mixed, or $S$ respectively. For the non-parallel corpora, we train the model to copy the source to the target, masking out $20\%$ of the source tokens as in Song et al. (2019b).
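The following is a minimal sketch (our illustration, not the authors' released code) of how the tagged training pairs for $\mathcal{M}$ could be assembled; the function and argument names are hypothetical, while the $<2\text{en}>$/$<2\text{xx}>$/$<2\text{cm}>$ tags, the $<\mathsf{M}>$ mask symbol, and the $20\%$ masking rate follow the description above.

```python
import random

MASK = "<M>"  # mask symbol, listed alongside the language tags in Appendix B

def mask_tokens(tokens, p=0.2):
    """Randomly replace a fraction p of tokens with the mask symbol."""
    return [MASK if random.random() < p else tok for tok in tokens]

def make_examples(parallel_s_en, mono_en, mono_s, mono_cm, mask_p=0.2):
    """Assemble tagged (source, target) pairs for training the base model M.

    parallel_s_en          : list of (matrix-language sentence, English sentence) pairs
    mono_en/mono_s/mono_cm : monolingual English, matrix-language, and code-mixed sentences
    """
    examples = []
    # Parallel S <-> English data, used in both directions.
    for s, e in parallel_s_en:
        examples.append((f"<2en> {s}", e))  # S -> English
        examples.append((f"<2xx> {e}", s))  # English -> S
    # Non-parallel corpora: copy the partially masked source to the target.
    for tag, corpus in (("<2en>", mono_en), ("<2xx>", mono_s), ("<2cm>", mono_cm)):
        for sent in corpus:
            noisy = " ".join(mask_tokens(sent.split(), mask_p))
            examples.append((f"{tag} {noisy}", sent))
    return examples
```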
37
+
38
+ The above training exposes $\mathcal{M}$ to all three languages in both the encoder and decoder, and a baseline is to just use this bidirectional model for our task. We will show that such a model provides only marginal gains over a simple $S\rightarrow \mathcal{E}$ model. However, we adapt $\mathcal{M}$ further using synthetic parallel data for the $\mathcal{C}\to \mathcal{E}$ task. Back-translation (BT) of $\mathcal{E}$ to $\mathcal{C}$ using $\mathcal{M}$ to generate synthetic parallel data yields very poor quality, as we show in Section 4. This motivates our two-stage BT approach. A key insight of the B2BT method is that $\mathcal{M}$, trained with parallel $S\rightarrow \mathcal{E}$ data, gives better-quality outputs when translating $\mathcal{C}$ to $\mathcal{E}$ than in the reverse direction. The reason is that $\mathcal{C}$ shares the grammatical structure of $S$ and $\mathcal{M}$ is trained to handle noise in the input. We describe the two-step BT next.
39
+
40
+ Fine-tune for $\mathcal{E}\to \mathcal{C}$ Here we prepare $\mathcal{M}$ to back-translate pure English sentences into code-mixed sentences, so that the resulting synthetic parallel data can be used to train a better code-mixed-to-English translation model.
41
+
42
+ We first back-translate the monolingual code-mixed corpus $C$ to English $E_{B}$ using $\mathcal{M}$. The back-translation is done by prefixing $<2\mathsf{en}>$ to the code-mixed input and sampling English output from $\mathcal{M}$. This provides us with a synthetic English-to-code-mixed parallel corpus $(E_B,C)$. We fine-tune $\mathcal{M}$ on $(E_B,C)$ to produce a model $\mathcal{M}'$, where source sentences are prefixed with $<2\mathsf{cm}>$. Since the target distribution $C$ is preserved during training, we can now generate high-quality in-domain code-mixed sentences using $\mathcal{M}'$.
43
+
44
+ Fine-tune for $\mathcal{C} \to \mathcal{E}$ In the final step we realise our objective of $\mathcal{C} \to \mathcal{E}$ translation. We start by back-translating the in-domain monolingual English corpus $E_{MD}$ to code-mixed $C_B$ using $\mathcal{M}'$ . This is done by prefixing English sentences with the $<2\text{cm}>$ tag, and sampling code-mixed outputs from $\mathcal{M}'$ . We now have a synthetic code-mixed to English parallel corpus $(C_B, E_{MD})$ . We fine-tune $\mathcal{M}$ to obtain our final model $\mathcal{M}^*$ on this synthetic parallel corpus where all the source sentences in $C_B$ are prefixed with the $<2\text{en}>$ token.
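The full schedule can be summarised in a few lines. This is a sketch under our own assumptions: `translate` and `fine_tune` are hypothetical helpers standing in for the sampling and fine-tuning runs described above (fairseq jobs in practice), and only the ordering and the tags mirror the description of $\mathcal{M}$, $\mathcal{M}'$, and $\mathcal{M}^*$.

```python
def b2bt(M, C, E_MD, translate, fine_tune):
    """Two-stage back-translation schedule (B2BT) sketched from Section 2.

    M         : base multilingual model
    C         : non-parallel code-mixed corpus
    E_MD      : in-domain monolingual English corpus
    translate : translate(model, tag, sentences) -> sampled translations (hypothetical helper)
    fine_tune : fine_tune(model, sources, targets) -> fine-tuned model   (hypothetical helper)
    """
    # Stage 1: back-translate C to English with M, then fine-tune M for English -> code-mixed.
    E_B = translate(M, "<2en>", C)
    M_prime = fine_tune(M, [f"<2cm> {e}" for e in E_B], C)

    # Stage 2: back-translate in-domain English to code-mixed with M', then
    # fine-tune M on the synthetic (code-mixed, English) pairs to get the final model.
    C_B = translate(M_prime, "<2cm>", E_MD)
    M_star = fine_tune(M, [f"<2en> {c}" for c in C_B], E_MD)
    return M_star
```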
45
+
46
+ # 3 Related Work
47
+
48
+ Code-mixing is receiving increasing interest in the NLP community (Khanuja et al., 2020; Diab et al., 2014; Aguilar et al., 2018; Solorio et al., 2021; Song et al., 2019a). A primary focus area is training code-switched language models for applications like speech recognition (Winata et al., 2019; Gonen and Goldberg, 2019) under limited code-mixed (CM) data. Pratapa et al. (2018); Chang et al. (2019); Gao et al. (2019); Samanta et al. (2019); Winata et al. (2019) all propose different methods for creating synthetic CM data to augment training data. Tarunesh et al. (2021) generate CM sentences by extending a translation model. The above papers are designed for LM training and do not generate $(\mathcal{C},\mathcal{E})$ parallel data.
49
+
50
+ The biggest challenge in translation of code-mixed sentences is the lack of large parallel training data (Mahesh et al., 2005; Menacer et al., 2019; Nakayama et al., 2019; Srivastava and Singh, 2020). Gupta et al. (2021) propose to create synthetic parallel CM data via these two steps: (1) train an mBERT model to identify a word set $W$ to switch in a sentence from $S$ to $\mathcal{E}$, effectively creating a sentence from $\mathcal{C}$; (2) align parallel sentences from $(S, E)$ and replace words in $W$ with their aligned English words. We call this the mBertAln method in this paper. This pipeline for a new language $S$ requires the following four external tools: (1) mBERT pre-trained on $S$, (2) a language identifier tool to spot English tokens in a CM sentence, (3) a word alignment model, and (4) a translator $\mathcal{E} \rightarrow S$ for BT. For low-resource languages such tools may not exist. In contrast, B2BT is entirely standalone. Even when external tools exist, we show empirically that the synthetic sentences thus generated tend to be of lower quality than ours because of errors in either of the two steps. The CALCS 2021 workshop (Solorio et al., 2021) also released a shared task for CM translation, but the submissions so far are straightforward applications of multilingual BART models, with which we also compare our method.
51
+
52
+ B2BT is reminiscent of dual learning NMT methods (He et al., 2016; Artetxe et al., 2018; Hoang et al., 2018; Cheng et al., 2016) but these methods were designed for two generic languages whereas B2BT for code-mixed translation handles three languages related in specific asymmetric ways. We exploit that asymmetry to design our training schedule. For example, since $\mathcal{C} \to \mathcal{E}$ translations are more accurate than the reverse we insert the intermediate BT stage.
53
+
54
+ # 4 Experiments
55
+
56
+ We use the notation $\mathrm{SoEn} \rightarrow \mathrm{En}$ to indicate translation from a code-mixed matrix language with code 'So' to English. We evaluate on four code-mixed datasets: Hindi (HiEn $\rightarrow$ En) from Gupta et al. (2021), Spanish (EsEn $\rightarrow$ En) on the LinCE leaderboard $^{1}$, Bengali (BnEn $\rightarrow$ En) from Gupta et al. (2021) but augmented with the newly released Samanantar data to create a stronger baseline (evaluation is done on the splits released by the authors), and a new Marathi (MrEn $\rightarrow$ En) dataset that we
57
+
58
+ <table><tr><td>Lang Pair</td><td>Method</td><td>ST-Test</td><td>ST-OOV</td><td>ST-Hard</td></tr><tr><td rowspan="7">HiEn→En</td><td>Hi→En Model</td><td>36.9</td><td>33.9</td><td>2.1</td></tr><tr><td>Hi→En Model + BT</td><td>43.9</td><td>41.4</td><td>18.6</td></tr><tr><td>mBertAln</td><td>46.4</td><td>44.6</td><td>23.4</td></tr><tr><td>Multilingual</td><td>38.0</td><td>37.7</td><td>17.5</td></tr><tr><td>Multilingual + E → S BT</td><td>44.0</td><td>40.9</td><td>22.6</td></tr><tr><td>Multilingual + E → C BT</td><td>35.7</td><td>35.8</td><td>20.6</td></tr><tr><td>B2BT</td><td>50.2</td><td>49.9</td><td>30.7</td></tr><tr><td rowspan="6">BnEn→En</td><td>Bn→En Model</td><td>30.8</td><td>31.1</td><td>14.1</td></tr><tr><td>Bn→En Model + BT</td><td>40.9</td><td>41.2</td><td>21.2</td></tr><tr><td>mBertAln</td><td>41.4</td><td>41.9</td><td>22.3</td></tr><tr><td>Multilingual</td><td>30.9</td><td>31.4</td><td>13.8</td></tr><tr><td>Multilingual + E → S BT</td><td>41.7</td><td>42.0</td><td>22.0</td></tr><tr><td>B2BT</td><td>44.2</td><td>43.4</td><td>23.4</td></tr><tr><td rowspan="6">MrEn→En</td><td>Mr→En Model</td><td>26.6</td><td>25.7</td><td>0.9</td></tr><tr><td>Mr→En Model + BT</td><td>39.3</td><td>39.2</td><td>16.5</td></tr><tr><td>mBertAln</td><td>40.6</td><td>40.5</td><td>17.8</td></tr><tr><td>Multilingual</td><td>29.1</td><td>29.7</td><td>9.0</td></tr><tr><td>Multilingual + E → S BT</td><td>41.4</td><td>41.5</td><td>18.9</td></tr><tr><td>B2BT</td><td>41.2</td><td>41.3</td><td>18.7</td></tr></table>
59
+
60
+ Table 1: Comparing BLEU scores for B2BT trained from scratch against other baselines, including mBertAln of Gupta et al. (2021). ST-OOV and ST-Hard are subsets of the test set (ST-Test) containing, respectively, sentences with at least two OOV words and the 2,000 sentences on which the base model performed poorest.
61
+
62
+ introduce ${}^{2}$. A summary of the training data used and of our model setup is given in Appendices A and B.
63
+
64
+ Baselines We compare our method, B2BT against the mBertAln model (Gupta et al., 2021) and these baselines: (1) the base bi-lingual $S \to \mathcal{E}$ model, (2) base model fine-tuned with $\mathcal{E} \to S$ BT on domain data $E_{MD}$ , (3) base multilingual model $\mathcal{M}$ obtained after first stage of B2BT, (4) $\mathcal{M}$ fine-tuned with $\mathcal{E} \to S$ BT on domain data $E_{MD}$ , (5) $\mathcal{M}$ fine-tuned with $\mathcal{E} \to \mathcal{C}$ BT on $E_{MD}$ .
65
+
66
+ Results Table 1 compares the B2BT approach against these baselines on $\mathrm{HiEn} \rightarrow \mathrm{En}$, $\mathrm{BnEn} \rightarrow \mathrm{En}$, and $\mathrm{MrEn} \rightarrow \mathrm{En}$. Observe how B2BT significantly outperforms mBertAln and the multilingual model adapted with existing single-step back-translation across all language pairs. We also see substantial improvements on the two adversarial subsets ST-OOV and ST-Hard. This establishes the importance of our two-stage back-translation approach. Note in particular that when we fine-tune with $E_{MD}$ back-translated to code-mixed with $\mathcal{M}$, we observe a huge drop in accuracy! This is because the base multilingual model $(\mathcal{M})$, trained to denoise CM data and translate $S \rightarrow \mathcal{E}$, is much worse for $\mathcal{E} \rightarrow \mathcal{C}$ translations than for $\mathcal{C} \rightarrow \mathcal{E}$.
67
+
68
+ <table><tr><td>Lang Pair</td><td>Method</td><td>BLEU</td></tr><tr><td rowspan="3">HiEn→En</td><td>mBART Multilingual</td><td>35.1</td></tr><tr><td>mBART Multilingual + E → S BT</td><td>43.4</td></tr><tr><td>mBART Multilingual B2BT</td><td>48.0</td></tr><tr><td rowspan="4">EsEn→En</td><td>mBART (leaderboard)</td><td>43.9</td></tr><tr><td>mBART Multilingual</td><td>49.3</td></tr><tr><td>mBART Multilingual + E → S BT</td><td>50.0</td></tr><tr><td>mBART Multilingual B2BT</td><td>50.4</td></tr></table>
69
+
70
+ Table 2: Results comparing B2BT fine-tuned on an mBART checkpoint against baselines and best existing models on the LinCE leaderboard.
71
+
72
+ <table><tr><td>Fine-tuning Dataset for Final Model</td><td>ST-Test</td></tr><tr><td>B2BT (M*)</td><td>50.2</td></tr><tr><td>M + synthetic data from Gupta et al. (2021)</td><td>45.3</td></tr></table>
73
+
74
+ This underlines the importance of the intermediate model $(\mathcal{M}^{\prime})$ that is fine-tuned to produce good code-mixed data from English.
75
+
76
+ Our approach can also complement existing multilingual pre-trained models such as mBART. Table 2 presents results with base multilingual model $\mathcal{M}$ trained by fine-tuning an mBART checkpoint. Here again we observe gains beyond simple BT-based fine-tuning of the multilingual model.
77
+
78
+ Why does B2BT outperform mBertAln? We hypothesize that the reason our model performs substantially better is that the synthetic data generated by our model is of higher quality. To test this hypothesis we replace the synthetic code-mixed parallel data of B2BT with synthetic data from mBertAln (Gupta et al., 2021) while keeping the rest of the training of $\mathcal{M}^*$ unchanged. Table 3 presents this result. It is important to note that all the fine-tuning sets have the exact same size and all fine-tuning is performed on the same multilingual base model, $\mathcal{M}$ . The only difference is in the method used to create the synthetic side of the fine-tuning dataset. The improvement of almost +4.9 BLEU points on ST-Test over using mBertAln
79
+
80
+ Table 3: Comparing BLEU on HiEn→En when using synthetic code-mixed data generated from $\mathcal{M}'$ in B2BT vs synthetic data from mBertAln
81
+
82
+ <table><tr><td>English Sentence</td><td>mBERT Synth Code-Mixed</td><td>B2BT Synth Code-Mixed</td></tr><tr><td>open layer properties dialog box again.</td><td>प्रदेश properties dialog किंति से अयोल. layer again open</td><td>प्रदेश से layer properties किंति अयोल से अयोल! again dialog box open</td></tr><tr><td>click on open button.</td><td>बाल्दी किंति वर्षा अयोल का! Open button अर्षा open</td><td>open किंति वर्षा अयोल का! button अर्षा click</td></tr></table>
83
+
84
+ Figure 2: Examples of synthetic sentences from mBertAln vs B2BT. English translations of Devanagari words are provided.
85
+
86
+ <table><tr><td>Metric</td><td>ST-Test</td><td>mBERTAln</td><td>B2BT</td></tr><tr><td>Human eval rating</td><td>-</td><td>3.74</td><td>4.27</td></tr><tr><td>Human eval win %</td><td>-</td><td>17%</td><td>39%</td></tr><tr><td>Code-Mixing Index</td><td>28.3</td><td>20.7</td><td>27.2</td></tr><tr><td>Common En tokens</td><td>0.16</td><td>0.20</td><td>0.18</td></tr><tr><td>Code switch probability</td><td>0.27</td><td>0.24</td><td>0.27</td></tr></table>
87
+
88
+ Table 4: Comparing the synthetic data generated through mBertAln against B2BT.
89
+
90
+ data, clearly shows that the synthetic data from our model has better quality.
91
+
92
+ To directly quantify this fact, we performed human evaluation of data quality. Human raters were asked to rate fluency and intent preservation for source-target pairs (similar to Wu et al. (2016)) on a scale of 0 (irrelevant) to 6 (perfect). Across 500 examples, we observe that synthetic data from B2BT is rated as 4.27 out of 6 on average compared to 3.74 for mBertAln. In $39\%$ of examples B2BT is rated higher than mBertAln, $45\%$ of examples get the same score, and only in $17\%$ examples is mBertAln better (Table 4). In mBertAln the quality of synthetic data could suffer because of poor back-translation, mBERT failing to capture the code-switching pattern, or the alignment model failing to predict the aligned English token. Figure 2 presents examples of synthetic sentences generated by B2BT vs mBertAln. The mBertAln method has word repetition like "open" in row 2, which could be an alignment mistake, and word omissions like "box" in row 1 which could be caused by poor back-translation or alignment.
93
+
94
+ Finally, we compare code-mixing statistics between the synthetic data generated by B2BT and mBERT in Table 4. The data generated from B2BT is closer to the test data in terms of Code-Mixing Index, fraction of English tokens common in the source and target, and the average probability of switching at a given word.
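For reference, the sketch below shows one common per-utterance formulation of the Code-Mixing Index of Gambäck and Das (2016); this is our paraphrase of the metric rather than code from the paper, and the `lang_tags` input with an `"other"` label for language-independent tokens is our own convention.

```python
from collections import Counter

def cmi(lang_tags):
    """Utterance-level Code-Mixing Index from per-token language tags.

    Tokens tagged "other" (punctuation, named entities, ...) count as
    language-independent. Returns a value in [0, 100]; 0 means monolingual.
    """
    n = len(lang_tags)
    counts = Counter(tag for tag in lang_tags if tag != "other")
    u = n - sum(counts.values())  # language-independent tokens
    if not counts or n == u:
        return 0.0
    return 100.0 * (1.0 - max(counts.values()) / (n - u))

# A toy Hinglish utterance: 3 Hindi tokens, 2 English tokens, 1 "other" token.
print(cmi(["hi", "hi", "en", "hi", "en", "other"]))  # -> 40.0
```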
95
+
96
+ Varying degree of code-mixing Following Gupta et al. (2021), we also evaluate the effectiveness of our model across different splits of the test set with varying Code-Mixing Index (Gamback and Das, 2016) (CMI). Figure 3 presents the improvements from our model on the three splits of the test set. We see improvements across all splits, but the largest improvements are on the split with the highest degree of code-mixing. On the high CMI split, we see about +8.7 BLEU point improvement over the mBERT approach, and +14.5 BLEU point improvement over the baseline.
97
+
98
+ ![](images/521898f4bdd4a8cc158ea6a48a7dd3057a4a3d936826cc1fd0bda86df69cd181.jpg)
99
+ Figure 3: Improvements in BLEU with B2BT against the mBERT based model and the domain-adapted bilingual model baseline across three splits of the test set with varying degree of code-mixing in the source.
100
+
101
+ <table><tr><td>Lang Pair</td><td>Fine-tuning Approach</td><td>BLEU</td></tr><tr><td rowspan="2">HiEn→En</td><td>Un-masked</td><td>50.1</td></tr><tr><td>Masked</td><td>50.2</td></tr><tr><td rowspan="2">BnEn→En</td><td>Un-masked</td><td>42.8</td></tr><tr><td>Masked</td><td>44.2</td></tr><tr><td rowspan="2">MrEn→En</td><td>Un-masked</td><td>40.6</td></tr><tr><td>Masked</td><td>41.2</td></tr></table>
102
+
103
+ Table 5: Comparing BLEU on ST-Test between masked vs un-masked fine-tuning to train $\mathcal{M}^*$ in the B2BT approach.
104
+
105
+ Masking during fine-tuning in B2BT A distinctive property of code-mixed translation is word overlap between the source and target sentences. Such overlap makes the fine-tuned model overly biased towards the easier copy action. We alleviate this bias by introducing random masking of words in the source sentence (with masking probability 0.2). Unlike prior work (Song et al., 2019b), which applies such masking only for pre-training with monolingual corpora, we propose to mask tokens even when training with parallel data. We evaluate the impact of this source-side masking in B2BT's fine-tuning stages. Table 5 compares model performance with and without source-side masking during fine-tuning. We observe noticeable gains, with the highest for BnEn at +1.5.
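As a small illustration of this source-side masking (a sketch with our own function name; in the paper the masking is presumably applied in the data pipeline before fine-tuning), the same 0.2 masking rate is applied to the synthetic parallel pairs while targets are left untouched:

```python
import random

def mask_source_side(pairs, p=0.2, mask="<M>"):
    """Randomly mask source tokens of (source, target) pairs; targets stay unchanged."""
    masked = []
    for src, tgt in pairs:
        toks = [mask if random.random() < p else tok for tok in src.split()]
        masked.append((" ".join(toks), tgt))
    return masked

# e.g. applied to the Stage-2 synthetic corpus before fine-tuning M*:
# train_pairs = mask_source_side(list(zip(C_B, E_MD)), p=0.2)
```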
106
+
107
+ # 5 Conclusion
108
+
109
+ We present a simple two-stage back-translation approach (B2BT) for adapting multilingual models to code-switched translation. B2BT shows remarkable improvements on four datasets compared to recent methods and default back-translation baselines. Our approach fits naturally with existing multilingual translation frameworks, which is crucial for expanding coverage to low-resource languages without building per-language-pair models.
110
+
111
+ We demonstrate with ablation studies and human evaluations that the synthetic data created through the two-step process in B2BT is of objectively higher quality than that used by existing work.
112
+
113
+ # 6 Limitations
114
+
115
+ Our method depends on code-mixed monolingual data, which may not always be available. Additionally, for low-resource languages, we might not have access to enough non-code-mixed parallel data, which also forms a crucial component of our approach.
116
+
117
+ # References
118
+
119
+ Gustavo Aguilar, Fahad AlGhamdi, Victor Soto, Thamar Solorio, Mona Diab, and Julia Hirschberg, editors. 2018. Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching. Association for Computational Linguistics, Melbourne, Australia.
120
+ Gustavo Aguilar, Sudipta Kar, and Thamar Solorio. 2020. LinCE: A Centralized Benchmark for Linguistic Code-switching Evaluation. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 1803-1813, Marseille, France. European Language Resources Association.
121
+ Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3874-3884, Minneapolis, Minnesota. Association for Computational Linguistics.
122
+ Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. 2019. Massively multilingual neural machine translation in the wild: Findings and challenges. arXiv preprint arXiv:1907.05019.
123
+ Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632-3642, Brussels, Belgium. Association for Computational Linguistics.
124
+ Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Ales Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation,
125
+
126
+ pages 12-58, Baltimore, Maryland, USA. Association for Computational Linguistics.
127
+ Ching-Ting Chang, Shun-Po Chuang, and Hung-Yi Lee. 2019. Code-Switching Sentence Generation by Generative Adversarial Networks and its Application to Data Augmentation. In Proc. Interspeech 2019, pages 554-558.
128
+ Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Semi-supervised learning for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1965-1974, Berlin, Germany. Association for Computational Linguistics.
129
+ Mona Diab, Julia Hirschberg, Pascale Fung, and Thamar Solorio, editors. 2014. Proceedings of the First Workshop on Computational Approaches to Code Switching. Association for Computational Linguistics, Doha, Qatar.
130
+ Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489-500, Brussels, Belgium. Association for Computational Linguistics.
131
+ Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Michael Auli, and Armand Joulin. 2021. Beyond english-centric multilingual machine translation. Journal of Machine Learning Research, 22(107):1-48.
132
+ Björn Gambäck and Amitava Das. 2016. Comparing the level of code-switching in corpora. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1850-1855, Portorož, Slovenia. European Language Resources Association (ELRA).
133
+ Yingying Gao, Junlan Feng, Ying Liu, Beijing Hou, Xin Pan, and Yong Ma. 2019. Code-Switching Sentence Generation by Bert and Generative Adversarial Networks. In Proc. Interspeech 2019, pages 3525-3529.
134
+ Hila Gonen and Yoav Goldberg. 2019. Language modeling for code-switching: Evaluation, integration of monolingual data, and discriminative training. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4175-4185, Hong Kong, China. Association for Computational Linguistics.
135
+ Abhirut Gupta, Aditya Vavre, and Sunita Sarawagi. 2021. Training data augmentation for code-mixed translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for
136
+
137
+ Computational Linguistics: Human Language Technologies, pages 5760-5766, Online. Association for Computational Linguistics.
138
+ Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc.
139
+ Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative backtranslation for neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 18-24, Melbourne, Australia. Association for Computational Linguistics.
140
+ Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.
141
+ Simran Khanuja, Sandipan Dandapat, Anirudh Srinivasan, Sunayana Sitaram, and Monojit Choudhury. 2020. GLUECoS: An evaluation benchmark for code-switched NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3575-3585, Online. Association for Computational Linguistics.
142
+ Anoop Kunchukuttan. 2020. The IndicNLP Library. https://github.com/anoopkunchukuttan/indic_nlp_library/blob/master/docs/indicnlp.pdf.
143
+ Anoop Kunchukuttan, Pratik Mehta, and Pushpak Bhattacharyya. 2018. The IIT Bombay English-Hindi parallel corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
144
+ R. Mahesh, K. Sinha, and Anil Thakur. 2005. Machine translation of bi-lingual Hindi-English (Hinglish) text. In MT Summit.
145
+ Mohamed Menacer, David Langlois, Denis Jouvet, Dominique Fohr, Odile Mella, and Kamel Smaili. 2019. Machine Translation on a parallel Code-Switched Corpus. In *Canadian AI* 2019 - 32nd Conference on Canadian Artificial Intelligence, Lecture Notes in Artificial Intelligence, Ontario, Canada.
146
+ Carol Myers-Scotton. 1997. Duelling languages: Grammatical structure in codeswitching. Oxford University Press.
147
+ Sahoko Nakayama, Takatomo Kano, Andros Tjandra, Sakriani Sakti, and Satoshi Nakamura. 2019. Recognition and translation of code-switching speech utterances. In 2019 22nd Conference of the Orien
148
+
149
+ tal COCOSDA International Committee for the Coordination and Standardisation of Speech Databases and Assessment Techniques (O-COCOSDA), pages 1-6.
150
+ Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.
151
+ Adithya Pratapa, Gayatri Bhat, Monojit Choudhury, Sunayana Sitaram, Sandipan Dandapat, and Kalika Bali. 2018. Language modeling for code-mixing: The role of linguistic theory based synthetic data. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1543-1553, Melbourne, Australia. Association for Computational Linguistics.
152
+ Gowtham Ramesh, Sumanth Doddapaneni, Aravinth Bheemaraj, Mayank Jobanputra, Raghavan AK, Ajitesh Sharma, Sujit Sahoo, Harshita Diddee, Mahalakshmi J, Divyanshu Kakwani, Navneet Kumar, Aswin Pradeep, Kumar Deepak, Vivek Raghavan, Anoop Kunchukuttan, Pratyush Kumar, and Mitesh Shantadevi Khapra. 2021. Samanantar: The largest publicly available parallel corpora collection for 11 indic languages. CoRR, abs/2104.05596.
153
+ Bidisha Samanta, Sharmila Reddy, Hussain Jagirdar, Niloy Ganguly, and Soumen Chakrabarti. 2019. A deep generative model for code switched text. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 5175-5181. International Joint Conferences on Artificial Intelligence Organization.
154
+ Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics.
155
+ Thamar Solorio, Shuguang Chen, Alan W. Black, Mona Diab, Sunayana Sitaram, Victor Soto, Emre Yilmaz, and Anirudh Srinivasan, editors. 2021. Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching. Association for Computational Linguistics, Online.
156
+ Kai Song, Yue Zhang, Heng Yu, Weihua Luo, Kun Wang, and Min Zhang. 2019a. Code-switching for enhancing NMT with pre-specified translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 449-459, Minneapolis, Minnesota. Association for Computational Linguistics.
157
+
158
+ Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019b. MASS: Masked sequence to sequence pre-training for language generation. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 5926-5936. PMLR.
159
+ Vivek Srivastava and Mayank Singh. 2020. PHINC: A parallel Hinglish social media code-mixed corpus for machine translation. In Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020), pages 41-49, Online. Association for Computational Linguistics.
160
+ Ishan Tarunesh, Syamantak Kumar, and Preethi Jyothi. 2021. From machine translation to code-switching: Generating high-quality code-switched text. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3154-3169, Online. Association for Computational Linguistics.
161
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
162
+ Genta Indra Winata, Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2019. Code-switched language models using neural based synthetic data from parallel sentences. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 271-280, Hong Kong, China. Association for Computational Linguistics.
163
+ Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.
164
+ Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. 2020. Improving massively multilingual neural machine translation and zero-shot translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1628-1639, Online. Association for Computational Linguistics.
165
+
166
+ <table><tr><td>Dataset</td><td>Source</td><td>Size</td><td>Avg. tokens/sentence</td></tr><tr><td colspan="4">HiEn→En</td></tr><tr><td>Test (S,E)</td><td>ST-Test</td><td>30K</td><td>HiEn-14.46, En-13.09</td></tr><tr><td>C</td><td>IITB Parallel</td><td>1.5M</td><td>Hi-15.47, En-14.47</td></tr><tr><td>EMD</td><td>ST CM mono</td><td>40K</td><td>14.49</td></tr><tr><td>SM</td><td>ST En mono</td><td>53K</td><td>12.59</td></tr><tr><td></td><td>News Crawl</td><td>2M</td><td>18.95</td></tr><tr><td colspan="4">BnEn→En</td></tr><tr><td>Test (S,E)</td><td>ST-Test</td><td>29K</td><td>BnEn-11.32, En-13.31</td></tr><tr><td>C</td><td>Samanantar</td><td>2M</td><td>Bn-12.14, En-13.56</td></tr><tr><td>EMD</td><td>ST CM mono</td><td>31K</td><td>11.23</td></tr><tr><td>SM</td><td>ST En mono</td><td>57K</td><td>12.31</td></tr><tr><td></td><td>IndicCorp</td><td>2M</td><td>21.15</td></tr><tr><td colspan="4">MrEn→En</td></tr><tr><td>Test (S,E)</td><td>ST-Test</td><td>28K</td><td>MrEn-11.32, En-13.00</td></tr><tr><td>C</td><td>Samanantar</td><td>2M</td><td>Mr-10.86, En-12.43</td></tr><tr><td>EMD</td><td>ST CM mono</td><td>38K</td><td>11.14</td></tr><tr><td>SM</td><td>ST En mono</td><td>57K</td><td>12.58</td></tr><tr><td></td><td>IndicCorp</td><td>2M</td><td>16.22</td></tr><tr><td colspan="4">EsEn→En</td></tr><tr><td>Test (S,E)</td><td>LinCE</td><td>6.5K</td><td>EsEn-19.72, En-UNK</td></tr><tr><td>C</td><td>WMT 2013</td><td>2M</td><td>Es-33.32, En-29.74</td></tr><tr><td>EMD</td><td>LinCE</td><td>15K</td><td>19.67</td></tr><tr><td>SM</td><td>LinCE</td><td>15K</td><td>15.36</td></tr><tr><td></td><td>News Crawl</td><td>2M</td><td>28.19</td></tr><tr><td>EM</td><td>News Crawl</td><td>2M</td><td>23.90</td></tr></table>
167
+
168
+ Table 6: Brief statistics of the datasets used for each language pair. The English target for EsEn $\rightarrow$ En is private and results are obtained through submission to the leaderboard.
169
+
170
+ # A Datasets
171
+
172
+ We describe the evaluation sets and all the different types of training datasets used for our experiments.
173
+
174
+ Code-Mixed Parallel Test Corpus The Spoken Tutorial test sets are created by scraping and aligning transcripts for video lectures in multiple languages including English from the educational website Spoken Tutorial<sup>3</sup>. The video transcripts for Indian languages (like Hindi, Bengali, and Marathi) are heavily code-mixed, containing a large number of English words.
175
+
176
+ The Computational Approaches to Linguistic Code-Switching workshop (CALCS) 2021 released a code-mixed translation shared task. The code-mixed machine translation test sets are a part of the LinCE Benchmark (Aguilar et al., 2020). We conduct experiments with the EsEn $\rightarrow$ En test set (referred to as the Spanglish-English task on the leaderboard), as this exactly matches our setting.
177
+
178
+ Parallel Corpus $(S,E)$ For HiEn $\rightarrow$ En experiments, we use the IIT Bombay English-Hindi Parallel Corpus (Kunchukuttan et al., 2018) as the base parallel training data $(S,E)$ for our models.
179
+
180
+ Test and validation splits are from the WMT 2014 English-Hindi shared task (Bojar et al., 2014). We move about 2,000 randomly selected sentences from the training set to augment the small (500 sentences) validation set. For $\mathrm{BnEn}\rightarrow \mathrm{En}$ and $\mathrm{MrEn}\rightarrow \mathrm{En}$ , we use 2M randomly sampled parallel sentences from Samanantar (Ramesh et al., 2021) as our parallel data $(S,E)$ for training and 2000 randomly sampled pairs each for validation and testing. For EsEn $\rightarrow$ En, we use 2M randomly sampled sentence pairs from the Common Crawl corpus released by WMT 2013.
181
+
182
+ Non-Parallel Code-Mixed Corpus $(C)$ We collect all code-mixed sentences from the Spoken Tutorial Project that are not a part of the parallel test data. For the EsEn→En task on the LinCE leaderboard, a set of 15K code-mixed Spanish sentences are provided as a part of the setup.
183
+
184
+ Monolingual Corpora $(E_{MD}, E_{M}, S_{M})$ For the in-domain English corpus $(E_{MD})$ , we collect sentences from Spoken Tutorial transcripts which are not a part of the parallel test data. For the EsEn→En task on the LinCE leaderboard, we use the monolingual English tweets provided for the reverse translation task as the in-domain monolingual corpus.
185
+
186
+ We use the News Crawl corpus of WMT 2014 as the additional monolingual English data $(E_M)$ for all experiments. For the monolingual matrix language $(S_M)$ , we use the News Crawl corpus of WMT 2014 for HiEn→En. For BnEn→En and MrEn→En, we use the IndicCorp Bengali and Marathi monolingual corpus respectively. For EsEn→En, we use the News Crawl corpus from WMT 2013.
187
+
188
+ # B Model Setup
189
+
190
+ All models are trained with the Fairseq toolkit (Ott et al., 2019). We experiment with two types of multilingual models: (1) standalone models that we train only on the given corpus above, and (2) mBART initialized models. During decoding we use a beam size of 5 in all experiments. The BLEU scores are computed using the mosesdecoder script $^{5}$ .
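As a hedged illustration of the same kind of corpus-level check, sacrebleu (a different tool, so its scores are not numerically identical to the Moses script used here) can be called as follows:

```python
import sacrebleu

hypotheses = ["open the layer properties dialog box again ."]
references = ["open layer properties dialog box again ."]

# corpus_bleu takes a list of hypotheses and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(round(bleu.score, 1))
```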
191
+
192
+ Standalone Multilingual Models For training all non-mBART models, we use the standard transformer architecture from Vaswani et al. (2017) with six encoder and decoder layers. In the data pre-processing step, we first tokenize with the IndicNLP tokenizer (Kunchukuttan, 2020) for Indic-language and code-mixed sentences and the Moses tokenizer $^{6}$ for pure English sentences. Next, we apply BPE with codes learned jointly on the monolingual English and monolingual non-code-mixed datasets, using 20,000 merge operations (the resulting dictionary is manually appended with the special tokens $<2\mathsf{en}>$, $<2\mathsf{xx}>$, $<2\mathsf{cm}>$, and $<\mathsf{M}>$). We use the Adam optimizer with a learning rate of 5e-4 and 4000 warmup steps. We train all models for up to 100 epochs and select the best checkpoint based on loss on the validation split. For the two BT-based fine-tuning stages in B2BT we use a constant learning rate of 1e-4 and a random 2K subset of the BT data as the validation split.
193
+
194
+ Pre-trained mBART-based Multilingual Models The mBART models are trained by fine-tuning the CC25 mBART checkpoint. The model has 12 encoder and decoder layers, with a model dimension of 1024 and 16 attention heads ($\sim$610M parameters). We modify the existing SentencePiece model by adding the three special tokens $<2\mathsf{en}>$, $<2\mathsf{xx}>$, and $<2\mathsf{cm}>$ so they are not tokenized, and also add them to the dictionary by replacing three tokens of a language we are not currently experimenting with. The multilingual model is trained for 100K steps, while the fine-tuning stages of B2BT are trained for up to 25K steps.
adaptingmultilingualmodelsforcodemixedtranslation/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8c2584ca072652f377dde54f0ea04b86e5af7c3a07dbb26a3ca05436e55e0b03
3
+ size 326813
adaptingmultilingualmodelsforcodemixedtranslation/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:67389a189656622cff533386ac11156dae4b8adaf6415cda6c6e63805c172af5
3
+ size 346003
adaptivegraphconvolutionalnetworkforknowledgegraphentityalignment/ebf05760-0f16-42b6-b028-14c066a2ffb3_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d6800de05cb90996608c03a0e2bf68baaa95948aa941ddee39e49007fd769f22
3
+ size 78081
adaptivegraphconvolutionalnetworkforknowledgegraphentityalignment/ebf05760-0f16-42b6-b028-14c066a2ffb3_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:15cfd8fa6094ea86b4fcd77d207cf396d994fc6e974b7ddf3b54665eedfd9cd6
3
+ size 94031