Add Batch b60c91b8-8934-48a1-959b-3bfdf27cd0b3
This view is limited to 50 files because it contains too many changes. See the raw diff for the full list of added files.
- abstractivesummarizationguidedbylatenthierarchicaldocumentstructure/e750f412-b989-43ea-8c30-4b219c7fdc6b_content_list.json +3 -0
- abstractivesummarizationguidedbylatenthierarchicaldocumentstructure/e750f412-b989-43ea-8c30-4b219c7fdc6b_model.json +3 -0
- abstractivesummarizationguidedbylatenthierarchicaldocumentstructure/e750f412-b989-43ea-8c30-4b219c7fdc6b_origin.pdf +3 -0
- abstractivesummarizationguidedbylatenthierarchicaldocumentstructure/full.md +399 -0
- abstractivesummarizationguidedbylatenthierarchicaldocumentstructure/images.zip +3 -0
- abstractivesummarizationguidedbylatenthierarchicaldocumentstructure/layout.json +3 -0
- abstractvisualreasoningwithtangramshapes/be53ce45-a171-4498-a723-e5dd579ce31c_content_list.json +3 -0
- abstractvisualreasoningwithtangramshapes/be53ce45-a171-4498-a723-e5dd579ce31c_model.json +3 -0
- abstractvisualreasoningwithtangramshapes/be53ce45-a171-4498-a723-e5dd579ce31c_origin.pdf +3 -0
- abstractvisualreasoningwithtangramshapes/full.md +394 -0
- abstractvisualreasoningwithtangramshapes/images.zip +3 -0
- abstractvisualreasoningwithtangramshapes/layout.json +3 -0
- acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/57a5fae5-dacd-40b3-98b8-04cf2deeee18_content_list.json +3 -0
- acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/57a5fae5-dacd-40b3-98b8-04cf2deeee18_model.json +3 -0
- acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/57a5fae5-dacd-40b3-98b8-04cf2deeee18_origin.pdf +3 -0
- acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/full.md +334 -0
- acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/images.zip +3 -0
- acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/layout.json +3 -0
- acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/d92cb11c-2d95-4641-9dad-f4f9fd8a5b1f_content_list.json +3 -0
- acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/d92cb11c-2d95-4641-9dad-f4f9fd8a5b1f_model.json +3 -0
- acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/d92cb11c-2d95-4641-9dad-f4f9fd8a5b1f_origin.pdf +3 -0
- acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/full.md +297 -0
- acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/images.zip +3 -0
- acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/layout.json +3 -0
- activeexampleselectionforincontextlearning/7df1d58e-95f9-4891-9f80-ce9df75cf31a_content_list.json +3 -0
- activeexampleselectionforincontextlearning/7df1d58e-95f9-4891-9f80-ce9df75cf31a_model.json +3 -0
- activeexampleselectionforincontextlearning/7df1d58e-95f9-4891-9f80-ce9df75cf31a_origin.pdf +3 -0
- activeexampleselectionforincontextlearning/full.md +402 -0
- activeexampleselectionforincontextlearning/images.zip +3 -0
- activeexampleselectionforincontextlearning/layout.json +3 -0
- adamixmixtureofadaptationsforparameterefficientmodeltuning/4d2f7f55-bfd8-4bb9-b067-0fabeb8655d7_content_list.json +3 -0
- adamixmixtureofadaptationsforparameterefficientmodeltuning/4d2f7f55-bfd8-4bb9-b067-0fabeb8655d7_model.json +3 -0
- adamixmixtureofadaptationsforparameterefficientmodeltuning/4d2f7f55-bfd8-4bb9-b067-0fabeb8655d7_origin.pdf +3 -0
- adamixmixtureofadaptationsforparameterefficientmodeltuning/full.md +434 -0
- adamixmixtureofadaptationsforparameterefficientmodeltuning/images.zip +3 -0
- adamixmixtureofadaptationsforparameterefficientmodeltuning/layout.json +3 -0
- adaptersharetaskcorrelationmodelingwithadapterdifferentiation/3f8d551b-99cc-46c1-a866-b4c2c94b841d_content_list.json +3 -0
- adaptersharetaskcorrelationmodelingwithadapterdifferentiation/3f8d551b-99cc-46c1-a866-b4c2c94b841d_model.json +3 -0
- adaptersharetaskcorrelationmodelingwithadapterdifferentiation/3f8d551b-99cc-46c1-a866-b4c2c94b841d_origin.pdf +3 -0
- adaptersharetaskcorrelationmodelingwithadapterdifferentiation/full.md +187 -0
- adaptersharetaskcorrelationmodelingwithadapterdifferentiation/images.zip +3 -0
- adaptersharetaskcorrelationmodelingwithadapterdifferentiation/layout.json +3 -0
- adaptingalanguagemodelwhilepreservingitsgeneralknowledge/3daf2795-b1f5-40a0-84ed-04f630dbdc4f_content_list.json +3 -0
- adaptingalanguagemodelwhilepreservingitsgeneralknowledge/3daf2795-b1f5-40a0-84ed-04f630dbdc4f_model.json +3 -0
- adaptingalanguagemodelwhilepreservingitsgeneralknowledge/3daf2795-b1f5-40a0-84ed-04f630dbdc4f_origin.pdf +3 -0
- adaptingalanguagemodelwhilepreservingitsgeneralknowledge/full.md +385 -0
- adaptingalanguagemodelwhilepreservingitsgeneralknowledge/images.zip +3 -0
- adaptingalanguagemodelwhilepreservingitsgeneralknowledge/layout.json +3 -0
- adaptivecontrastivelearningonmultimodaltransformerforreviewhelpfulnessprediction/ca62769c-92be-4c2a-9c9c-9cf26e3d6c1d_content_list.json +3 -0
- adaptivecontrastivelearningonmultimodaltransformerforreviewhelpfulnessprediction/ca62769c-92be-4c2a-9c9c-9cf26e3d6c1d_model.json +3 -0
abstractivesummarizationguidedbylatenthierarchicaldocumentstructure/e750f412-b989-43ea-8c30-4b219c7fdc6b_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:58aae37a7b715fbb48bc51ca71d78bc14cfbfce62358ebdf19111da90b559a4f
+size 104506
abstractivesummarizationguidedbylatenthierarchicaldocumentstructure/e750f412-b989-43ea-8c30-4b219c7fdc6b_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fa5eafd4509e0a379823faa94e240fc216aa71e4d4a8ee747c127d77ca435438
+size 127132
abstractivesummarizationguidedbylatenthierarchicaldocumentstructure/e750f412-b989-43ea-8c30-4b219c7fdc6b_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d78152818c46592f2b5f0b5f3a905a98ec25ec8c256148b131931c6c0ecddb0b
+size 782253
abstractivesummarizationguidedbylatenthierarchicaldocumentstructure/full.md
ADDED
@@ -0,0 +1,399 @@
# Abstractive Summarization Guided by Latent Hierarchical Document Structure
Yifu Qiu, Shay B. Cohen
Institute for Language, Cognition and Computation
School of Informatics, University of Edinburgh
10 Crichton Street, Edinburgh, EH8 9AB
Y.QIU-20@sms.ed.ac.uk, scohen@inf.ed.ac.uk
# Abstract
Sequential abstractive neural summarizers often do not use the underlying structure in the input article or dependencies between the input sentences. This structure is essential to integrate and consolidate information from different parts of the text. To address this shortcoming, we propose a hierarchy-aware graph neural network (HierGNN) which captures such dependencies through three main steps: 1) learning a hierarchical document structure through a latent structure tree learned by a sparse matrix-tree computation; 2) propagating sentence information over this structure using a novel message-passing node propagation mechanism to identify salient information; 3) using graph-level attention to concentrate the decoder on salient information. Experiments confirm HierGNN improves strong sequence models such as BART, with a 0.55 and 0.75 margin in average ROUGE-1/2/L for CNN/DM and XSum. Further human evaluation demonstrates that summaries produced by our model are more relevant and less redundant than the baselines, into which HierGNN is incorporated. We also find HierGNN synthesizes summaries by fusing multiple source sentences more, rather than compressing a single source sentence, and that it processes long inputs more effectively. $^{1}$
# 1 Introduction
Sequential neural network architectures in their various forms have become the mainstay in abstractive summarization (See et al., 2017; Lewis et al., 2020). However, the quality of machine-produced summaries still lags far behind the quality of human summaries (Huang et al., 2020a; Xie et al., 2021; Cao et al., 2022; Lebanoff et al., 2019). Due to their sequential nature, a challenge with neural summarizers is to capture hierarchical and inter-sentential dependencies in the summarized document.
Article Sentences:
1. The town is home to the prestigious Leander Club, which has trained more than 100 Olympic medal-winning rowers.
- 2 sentences are abbreviated here.
4. The Royal Mail has painted more than 50 postboxes gold following Team GB's gold medal haul at London 2012.
5. Originally it said it was only painting them in winners' hometowns, or towns with which they are closely associated.
6. Town mayor Elizabeth Hodgkin said: "We are the home of rowing ... I feel very excited about it."
- 5 sentences are abbreviated here.
12. The Henley-on-Thames postbox was painted on Friday.
- one sentence is abbreviated here.

Reference Summary: The Royal Mail has painted a postbox gold in the Oxfordshire town of Henley-on-Thames - in recognition of its medal-winning rowing club.

BART's Summary: A postbox in Henley-on-Thames has been painted gold as part of the Royal Mail's "Olympic gold" campaign.

Our HierGNN's Summary: A Royal Mail postbox in Henley-on-Thames has been painted gold in honour of the town's Olympic rowing success.

Table 1: Example of an article from XSum with summaries given by the human-written reference, BART (Lewis et al., 2020) and our HierGNN equipped with BART. BART's summary fails to capture all the information pieces in the reference, while HierGNN is better at combining information from multiple locations in the source.
Progress in cognitive science suggests that humans construct and reason over a latent hierarchical structure of a document when reading the text in it (Graesser et al., 1994; Goldman et al., 1999). Such reasoning behavior includes uncovering the salient contents and effectively aggregating all related clues spread across the document. Lebanoff et al. (2019) found that human editors usually prefer writing a summary by fusing information from multiple article sentences and reorganizing the information in summaries (sentence fusion), rather than dropping non-essential elements in an original sentence such as prepositional phrases and adjectives (sentence compression). Different summarization benchmarks show that between $60-85\%$ of summary sentences are generated by sentence fusion. These recent findings support our motivation to make use of hierarchical document structure when summarizing a document.
We present a document hierarchy-aware graph neural network (HierGNN), a neural encoder with a reasoning functionality that can be effectively incorporated into any sequence-to-sequence (seq2seq) neural summarizer. Our HierGNN first learns a latent hierarchical graph via a sparse variant of the matrix-tree computation (Koo et al., 2007; Liu et al., 2019a). It then formulates sentence-level reasoning as a graph propagation problem via a novel message passing mechanism. During decoding, a graph-selection attention mechanism serves as a source sentence selector, hierarchically indicating the attention module which tokens in the input sentences to focus on.
Our experiments with HierGNN, incorporated into both pointer-generator networks (See et al., 2017) and BART (Lewis et al., 2020), confirm that HierGNN substantially improves both the non-pretrained and pretrained seq2seq baselines in producing high-quality summaries. Specifically, our best HierGNN-BART achieves an average improvement of 0.55 and 0.75 points in ROUGE-1/2/L on CNN/DM and XSum. Compared with a plain seq2seq model, HierGNN encourages the summarizers to favor sentence fusion over sentence compression when generating summaries. Modeling the hierarchical document structure via our sparse matrix-tree computation also enables HierGNN to treat long sequences more effectively. In addition, our sparse adaptive variant of the matrix-tree computation demonstrates greater expressive power than the original formulation (Koo et al., 2007; Liu et al., 2019a). We summarize our contributions as follows:
- We present a novel encoder architecture for improving seq2seq summarizers. This architecture captures the hierarchical document structure via an adaptive sparse matrix-tree computation, with a new propagation rule for achieving inter-sentence reasoning.
- We design a graph-selection attention mechanism to fully leverage the learned structural information during decoding, rather than using it only in encoding.
- Results on CNN/DM and XSum demonstrate the effectiveness of HierGNN in improving the quality of summaries for both non-pretrained and pretrained baselines. An in-depth analysis confirms our module improves the integration of information from multiple sites in the input article and that it is more effective in processing long sequence inputs.
# 2 Related Work
Neural Abstractive Summarization. Rush et al. (2015) first proposed to use a sequence-to-sequence model with an attention mechanism to perform sentence compression. Mendes et al. (2019) demonstrated the advantages and limitations of neural methods based on sentence compression. The pointer-generator network (PGN; See et al. 2017) enhances the attention model with a copying functionality. PGN has been further extended to create summarization systems by incorporating topic information (Liu et al., 2019b), document structural information (Song et al., 2018) and semantic information (Hardy and Vlachos, 2018), and was improved by replacing the plain LSTM module with the more advanced Transformer model to overcome the difficulty of modeling long sequence inputs (Pilault et al., 2020; Wang et al., 2021; Fonseca et al., 2022). For the pretrained models, BERTSum (Liu and Lapata, 2019) adopted the BERT encoder for the summarizer, with a randomly initialized decoder. Lewis et al. (2020) presented BART, which pre-trains both the underlying encoder and decoder. Dou et al. (2021) investigated "guidance signals" (e.g., keywords, salient sentences) for further boosting the performance.
Graph Neural Approach for Summarization. Graph neural networks have demonstrated their ability to capture rich dependencies in documents to be summarized. Wang et al. (2020) use a "heterogeneous graph" with sentence nodes and co-occurring word nodes to capture the sentence dependencies. Jin et al. (2020) use two separate encoders to encode the input sequence with a parsed dependency graph. Cui et al. (2020) use a bipartite graph with a topic model to better capture the inter-sentence relationships. Kwon et al. (2021) capture both intra- and inter-sentence relationships via a nested tree structure. Zhu et al. (2021) use entity-relation information from the knowledge graph to increase the factual consistency of summaries.
Our approach is related to the structural attention model (Balachandran et al., 2021; Liu et al., 2019a), but differs in two major ways: (i) we introduce an adaptive sparse matrix-tree construction to learn a latent hierarchical graph, together with a novel propagation rule; (ii) we use the structural information both in the encoder and in the decoder for abstractive summarization, not just in the encoder. This proves more effective for unsupervised learning of the latent hierarchical structure, and it outperforms the approach that relies on an external graph constructor (Balachandran et al., 2021).
# 3 Hierarchy-aware Graph Neural Encoder
HierGNN learns the document structure in an end-to-end fashion without any direct structure supervision, and does not need an external parser to construct the structure, unlike previous work (Balachandran et al., 2021; Huang et al., 2020b; Wang et al., 2020; Cardenas et al., 2022). In addition, it empirically improves over supervised graph construction, which has been a challenge (Balachandran et al., 2021).
Sequential summarizers encode an $N$-token article, $X = (x_{1}, \dots, x_{N})$, as $d$-dimensional latent vectors using an encoding function $\mathbf{h}_{enc}(x_t) \in \mathbb{R}^d$ and then decode them into the target summary $Y$. (We denote by $\mathbf{h}_{enc}(X)$ the sequence of $x_t$ encodings for $t \leq N$.) Our model includes four modules in addition to this architecture: (i) a sparse matrix-tree computation for inferring the document hierarchical structure; (ii) a novel message-passing layer to identify inter-sentence dependencies; (iii) a reasoning fusion layer aggregating the outputs of the message-passing module; and (iv) a graph-selection attention module to leverage the encoded structural information.
# 3.1 Learning the Latent Hierarchical Structure
We first introduce our latent structure learning algorithm that makes use of a sparse variant of the matrix-tree theorem (Tutte, 1986; Koo et al., 2007).
Latent Document Hierarchical Graph. We represent the document as a complete weighted graph, with each node representing a sentence. The edge weights are defined as the marginal probability of a directional dependency between two sentences. In addition, each sentence node has an extra probability value, the "root probability" which indicates the hierarchical role of the sentence, such as the roles of the lead, most important facts, or other information defined based on the inverted pyramid model for news articles (Pottker, 2003; Ytreberg, 2001).
Intuitively, a sentence with a high root probability (high hierarchical position) conveys more general information; namely, it is a connector, while a sentence with a lower root probability (information node) carries details supporting its higher connectors. The underlying graph structure is latent and not fixed, summed out in our overall probability model using the matrix-tree theorem.
Sparse Matrix-Tree Computation. For an article with $M$ sentences, we start from the sentence embeddings as the node initialization $H^{(0)} = [\mathbf{s}_1,\dots,\mathbf{s}_i,\dots,\mathbf{s}_M]$ . We then use two independent non-linear transformations to obtain a pair of parent and child representation for each sentence,
$$
\begin{aligned}
\mathbf{s}_i^{(p)} &= \sigma(W_p \mathbf{s}_i + b_p), \\
\mathbf{s}_i^{(c)} &= \sigma(W_c \mathbf{s}_i + b_c),
\end{aligned}
$$
where $W_{p}, W_{c}, b_{p}, b_{c}$ are parameters, $\sigma$ is the ReLU activation function (Dahl et al., 2013).
The standard use of the matrix-tree theorem (Tutte, 1986) computation (MTC; Smith and Smith 2007; Koo et al. 2007; McDonald and Satta 2007) includes the exponential function to calculate a matrix $F \in \mathbb{R}^{M \times M}$ with positive values with each element $f_{ij}$ representing the weight of the directional edge from a node $s_i$ to $s_j$ ; and a positive vector of root scores $\mathbf{f}^{(root)} \in \mathbb{R}^M$ . However, having a dense matrix degrades our graph reasoning module by including irrelevant information from redundant $M$ sentence nodes. Inspired by the work about sparse self-attention (Zhang et al., 2021; Correia et al., 2019), we introduce an adaptive solution to inject sparsity into MTC. We replace the exponential scoring function with the ReLU function $(\mathrm{ReLU}(x \in \mathbb{R}) = \max\{x, 0\}$ and similarly coordinate-wise when $x$ is a vector) and calculate the root $f_i^{(root)}$ and edge scores $f_{ij}$ by a fully-connected layer and a bi-linear attention layer, respectively,
$$
\begin{aligned}
f_i^{(root)} &= \mathrm{ReLU}(W_r \mathbf{s}_i^{(p)} + b_r) + \varepsilon, \\
f_{ij} &= \mathrm{ReLU}\!\left(\mathbf{s}_i^{(p)\top} W_{bi}\, \mathbf{s}_j^{(c)}\right) + \varepsilon,
\end{aligned}
$$
where $W_{bi}, W_r, b_r$ are learnable. (We use $\varepsilon = 10^{-6}$ to avoid matrix non-invertibility issues.) Compared to the exponential function, ReLU relaxes $F$ and $\mathbf{f}^{(root)}$ to be non-negative, and is thus capable of assigning zero probability, pruning dependency edges and roots. We finally plug these quantities into the standard MTC (Tutte, 1986) and marginalize the edge and root probabilities to obtain the adjacency matrix $A(i,j) = P(z_{ij} = 1)$ and the root probability $p_i^r$ representing the hierarchical role (i.e., the likelihood of being a connector) of each sentence.

Figure 1: Architecture for the sequence-to-sequence model with HierGNN reasoning encoder.
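To make this construction concrete, below is a minimal PyTorch sketch of the adaptive sparse matrix-tree computation: ReLU-scored root and edge weights with an $\varepsilon$ offset, followed by matrix-tree marginalization. The class name, layer shapes and the unbatched single-document interface are our own illustrative assumptions, and the marginal formulas follow the MTC formulation of Koo et al. (2007) as used in earlier structured-attention work; this is a sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SparseMatrixTree(nn.Module):
    """Sketch of the adaptive sparse matrix-tree computation (Section 3.1)."""

    def __init__(self, d: int, eps: float = 1e-6):
        super().__init__()
        self.parent = nn.Linear(d, d)        # s_i^(p) projection
        self.child = nn.Linear(d, d)         # s_i^(c) projection
        self.root_scorer = nn.Linear(d, 1)   # produces f_i^(root)
        self.bilinear = nn.Parameter(torch.empty(d, d))  # W_bi
        nn.init.xavier_uniform_(self.bilinear)
        self.eps = eps

    def forward(self, s):                    # s: (M, d) sentence embeddings
        sp = torch.relu(self.parent(s))      # parent representations
        sc = torch.relu(self.child(s))       # child representations
        # ReLU scores (instead of exp) can be exactly zero, pruning edges and roots.
        f_root = torch.relu(self.root_scorer(sp)).squeeze(-1) + self.eps   # (M,)
        f_edge = torch.relu(sp @ self.bilinear @ sc.t()) + self.eps        # (M, M), i -> j
        m = s.size(0)
        A = f_edge * (1.0 - torch.eye(m, device=s.device))  # remove self-loops
        L = torch.diag(A.sum(dim=0)) - A                    # Laplacian: L_jj = sum_i A_ij
        L_bar = L.clone()
        L_bar[0] = f_root                                   # first row replaced by root scores
        L_inv = torch.inverse(L_bar)
        # Edge marginal P(z_ij = 1) and root marginal p_i^r (Koo et al., 2007).
        not_first = torch.ones(m, device=s.device)
        not_first[0] = 0.0
        P_edge = (A * not_first.unsqueeze(0) * torch.diagonal(L_inv).unsqueeze(0)
                  - A * not_first.unsqueeze(1) * L_inv.t())
        p_root = f_root * L_inv[:, 0]
        return P_edge, p_root
```

The returned `P_edge` plays the role of the adjacency matrix $A(i,j)$ and `p_root` the role of $p_i^r$ in the propagation rule of Section 3.2.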
# 3.2 Reasoning by Hierarchy-aware Message Passing
We present a novel message-passing mechanism over the learned hierarchical graph. This mechanism realizes inter-sentence reasoning, where connectors aggregate information from their related information nodes while propagating the information to others. For the $i$-th sentence node, the edge marginals control the aggregation from its $K$ information nodes, and the root probability controls how the neighbouring information is combined into the $i$-th node's update $\mathbf{u}_i^{(l)}$ in the $l$-th reasoning layer,

$$
\mathbf{u}_i^{(l)} = (1 - p_i^r)\, \mathcal{F}_r(\mathbf{s}_i^{(l)}) + p_i^r \sum_{k=1}^{K} A_{ik}\, \mathcal{F}_n(\mathbf{s}_k^{(l)}),
$$
where $\mathcal{F}_r$ and $\mathcal{F}_n$ are parametric functions. Intuitively, if a sentence is a connector, it should have strong connectivity with the related information nodes, and aggregate more details. Each information node learns to either keep the uniqueness of its information or fuse the information from the connectors. To filter out the unnecessary information, we adopt a gated mechanism as the information gatekeeper in the node update,
$$
\begin{aligned}
\mathbf{g}_i^{(l)} &= \sigma(\mathcal{F}_g([\mathbf{u}_i^{(l)}; \mathbf{h}_i^{(l)}])), \\
\mathbf{h}_i^{(l+1)} &= \mathrm{LN}\big(\mathbf{g}_i^{(l)} \odot \phi(\mathbf{u}_i^{(l)}) + (\mathbf{1} - \mathbf{g}_i^{(l)}) \odot \mathbf{h}_i^{(l)}\big),
\end{aligned}
$$
where $\mathcal{F}_g$ is a parametric function and $\odot$ is the element-wise product. We use layer normalization (LN) to stabilize the output of the update function. The function $\sigma$ is the sigmoid function, and $\phi$ can be any non-linear function.
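A minimal sketch of one hierarchy-aware message-passing block follows, combining the node update and the gated, layer-normalized state transition above. The concrete choices of $\mathcal{F}_r$, $\mathcal{F}_n$, $\mathcal{F}_g$ and $\phi$ (linear layers and tanh) are assumptions for illustration.

```python
import torch
import torch.nn as nn


class HierGNNLayer(nn.Module):
    """Sketch of one hierarchy-aware message-passing hop (Section 3.2)."""

    def __init__(self, d: int):
        super().__init__()
        self.F_r = nn.Linear(d, d)      # transform of the node itself
        self.F_n = nn.Linear(d, d)      # transform of neighbouring information nodes
        self.F_g = nn.Linear(2 * d, d)  # gate over [update; current state]
        self.phi = nn.Tanh()            # non-linearity applied to the candidate update
        self.ln = nn.LayerNorm(d)

    def forward(self, h, P_edge, p_root):
        # h: (M, d) node states; P_edge: (M, M) edge marginals A_ik; p_root: (M,) root marginals.
        p = p_root.unsqueeze(-1)                                 # (M, 1)
        # u_i = (1 - p_i^r) F_r(h_i) + p_i^r * sum_k A_ik F_n(h_k)
        u = (1.0 - p) * self.F_r(h) + p * (P_edge @ self.F_n(h))
        g = torch.sigmoid(self.F_g(torch.cat([u, h], dim=-1)))   # information gate
        return self.ln(g * self.phi(u) + (1.0 - g) * h)          # gated, layer-normalized update
```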
# 3.3 Reasoning Fusion Layer
We construct reasoning chains that consist of $L$ hops by stacking $L$ HierGNN blocks together. To handle cases where fewer than $L$ hops are needed, we add a fusion layer to aggregate the output from each reasoning hop to produce the final output of HierGNN. A residual connection is also introduced to pass the node initialization directly to the output,
$$
\mathbf{h}_i^{(G)} = \big(W_g[\mathbf{h}_i^{(1)}, \dots, \mathbf{h}_i^{(L)}] + b_g\big) + \mathbf{h}_i^{(0)},
$$
where $W_{g}, b_{g}$ are learnable parameters. We consider two ways of using the layers: (a) Layer-Shared Reasoning (LSR): we construct a shared reasoning graph first, followed by $L$ message-passing layers for reasoning; (b) Layer-Independent Reasoning (LIR): we learn the layer-wise latent hierarchical graphs independently, where each message-passing layer uses its own graph.
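The following sketch shows how the $L$ hops and the fusion layer could be wired together, and where LSR and LIR differ. It reuses the `SparseMatrixTree` and `HierGNNLayer` sketches above; the `share_graph` flag and the overall wiring are our own assumptions.

```python
import torch
import torch.nn as nn


class HierGNNEncoder(nn.Module):
    """Sketch of stacking L reasoning hops with the fusion layer (Section 3.3)."""

    def __init__(self, d: int, num_hops: int, share_graph: bool = True):
        super().__init__()
        self.share_graph = share_graph                      # True = LSR, False = LIR
        n_graphs = 1 if share_graph else num_hops
        self.mtc = nn.ModuleList(SparseMatrixTree(d) for _ in range(n_graphs))
        self.hops = nn.ModuleList(HierGNNLayer(d) for _ in range(num_hops))
        self.fuse = nn.Linear(num_hops * d, d)              # W_g, b_g

    def forward(self, s):                                   # s: (M, d) node initialization h^(0)
        h, outputs = s, []
        if self.share_graph:                                # LSR: one graph from the initial nodes
            P_edge, p_root = self.mtc[0](s)
        for l, hop in enumerate(self.hops):
            if not self.share_graph:                        # LIR: a fresh graph per hop
                P_edge, p_root = self.mtc[l](h)
            h = hop(h, P_edge, p_root)
            outputs.append(h)
        # h^(G) = (W_g [h^(1), ..., h^(L)] + b_g) + h^(0)   (residual connection)
        return self.fuse(torch.cat(outputs, dim=-1)) + s
```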
# 3.4 Graph-selection Attention Mechanism
In addition to token-level decoding attention, we propose a graph-selection attention mechanism (GSA) to inform the decoder with the learned hierarchical information, while realizing sentence-level content selection. In each decoding step $t$, our decoder first obtains a graph context vector, $\mathbf{c}_G^t$, which entails the global information of the latent hierarchical graph. We first compute the graph-level attention distribution $\mathbf{a}_G^t$ by

$$
e_{v_i}^t = \mathrm{ATTN}^{(G)}(\mathbf{h}^{(L)}, \mathbf{z}_t), \qquad
\mathbf{a}_G^t = \mathrm{SOFTMAX}(\mathbf{e}^t),
$$
where $\mathrm{ATTN}^{(G)}$ is a graph attention function. The vectors $\mathbf{h}_i^{(L)}\in \mathbb{R}^d,\mathbf{z}_t\in \mathbb{R}^d$ are the $L$ -th layer node embeddings for sentence $i$ and decoding state at time $t$ , respectively. The graph context vector $\mathbf{c}_G^t\in \mathbb{R}^d$ is finally obtained by summing all $\mathbf{h}_i^{(L)}$ weighted by $\mathbf{a}_G^t$ . The value of $\mathbf{c}_G^t$ is used as an additional input for computing token-level attention,
$$
e_i^t = \mathrm{ATTN}^{(T)}(\mathbf{h}_{enc}(X), \mathbf{z}_t, \mathbf{c}_G^t), \qquad
\mathbf{a}_T^t = \mathrm{SOFTMAX}(\mathbf{e}^t),
$$
where $\mathrm{ATTN}^{(T)}$ is a token-level attention function (Luong et al., 2015; Vaswani et al., 2017). Again, the token-attentional context vector $\mathbf{c}_T^t$ is computed by summing the encoder outputs weighted by $\mathbf{a}_T^t$. The final context vector $\mathbf{c}_f^t$ is fused from the graph context vector $\mathbf{c}_G^t$ and the token context vector $\mathbf{c}_T^t$ with a parametric function $g_{f}$: $\mathbf{c}_f^t = g_f(\mathbf{c}_G^t,\mathbf{c}_T^t)$.
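A minimal sketch of one decoding step with graph-selection attention is given below. Simple dot-product forms are assumed for both $\mathrm{ATTN}^{(G)}$ and $\mathrm{ATTN}^{(T)}$, the graph context is injected by adding it to the decoder state, and $g_f$ is taken to be a linear layer; these are illustrative choices, not the paper's exact parameterization.

```python
import torch
import torch.nn as nn


class GraphSelectionAttention(nn.Module):
    """Sketch of the graph-selection attention step at decoding time (Section 3.4)."""

    def __init__(self, d: int):
        super().__init__()
        self.g_f = nn.Linear(2 * d, d)   # fuses graph and token context vectors

    def forward(self, h_nodes, h_tokens, z_t):
        # h_nodes: (M, d) final-layer sentence nodes h^(L); h_tokens: (N, d) encoder
        # token states h_enc(X); z_t: (d,) current decoder state.
        a_G = torch.softmax(h_nodes @ z_t, dim=0)           # graph-level attention over sentences
        c_G = a_G @ h_nodes                                  # graph context vector c_G^t
        # Token-level attention additionally conditioned on the graph context.
        a_T = torch.softmax(h_tokens @ (z_t + c_G), dim=0)   # one simple way to inject c_G^t
        c_T = a_T @ h_tokens                                  # token context vector c_T^t
        c_f = self.g_f(torch.cat([c_G, c_T], dim=-1))        # fused context c_f^t
        return c_f, a_G, a_T
```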
# 4 Experimental Setting
Benchmarks. We evaluate our model on two common document summarization benchmarks. The first is the CNN/Daily Mail dataset (Hermann et al., 2015) in the news domain, with an average input of 45.7 sentences and 766.1 words, and a reference with an average length of 3.59 sentences and 58.2 words. We use the non-anonymized version of See et al. (2017), which has 287,084/13,367/11,490 instances for training, validation and testing. The second dataset we use is XSum (Narayan et al., 2018), a more abstractive benchmark consisting of one-sentence human-written summaries for BBC news. The average lengths for input and reference are 23.26 sentences with 430.2 words and 1 sentence with 23.3 words, respectively. We follow the standard split of Narayan et al. (2018) for training, validation and testing (203,028/11,273/11,332).
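For readers who want to reproduce the data setup, both benchmarks are available with their standard splits through the `datasets` library; the Hugging Face dataset identifiers below are assumptions about where the data is hosted, not part of the paper.

```python
# pip install datasets
from datasets import load_dataset

cnndm = load_dataset("cnn_dailymail", "3.0.0")  # non-anonymized version; fields: article, highlights
xsum = load_dataset("xsum")                     # fields: document, summary

print({split: len(cnndm[split]) for split in cnndm})
print({split: len(xsum[split]) for split in xsum})
```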
Implementations. We experiment with the non-pretrained PGN of See et al. (2017) and the pretrained BART model (Lewis et al., 2020). The implementation details are in Appendix A.
Baselines. We compare HierGNN with three types of baselines: 1) the base models on which HierGNN is developed; 2) several strong non-pretrained and pretrained baselines; and 3) abstractive summarizers boosted with hierarchical information.
<table><tr><td>Non-pretrained</td><td>R-1</td><td>R-2</td><td>R-L</td><td>BS</td></tr><tr><td>LEAD-3</td><td>40.34</td><td>17.70</td><td>36.57</td><td>-</td></tr><tr><td>PGN</td><td>39.53</td><td>17.28</td><td>36.38</td><td>-</td></tr><tr><td>StructSum ES</td><td>39.63</td><td>16.98</td><td>36.72</td><td>-</td></tr><tr><td>StructSum LS</td><td>39.52</td><td>16.94</td><td>36.71</td><td>-</td></tr><tr><td>StructSum (LS + ES)</td><td>39.62</td><td>17.00</td><td>36.95</td><td>21.70</td></tr><tr><td>PGN - Ours</td><td>39.07</td><td>16.97</td><td>35.87</td><td>23.74</td></tr><tr><td>HierGNN-PGN (LSR)</td><td>39.87</td><td>17.77</td><td>36.85</td><td>25.64</td></tr><tr><td>HierGNN-PGN (LIR)</td><td>39.34</td><td>17.39</td><td>36.44</td><td>25.26</td></tr><tr><td>Pretrained</td><td>R-1</td><td>R-2</td><td>R-L</td><td>BS</td></tr><tr><td>BERTSUMABS</td><td>41.72</td><td>19.39</td><td>38.76</td><td>29.05</td></tr><tr><td>BERTSUMEXTABS</td><td>42.13</td><td>19.60</td><td>39.18</td><td>28.72</td></tr><tr><td>T5-Large</td><td>42.50</td><td>20.68</td><td>39.75</td><td>-</td></tr><tr><td>BART</td><td>44.16</td><td>21.28</td><td>40.90</td><td>-</td></tr><tr><td>Hie-BART</td><td>44.35</td><td>21.37</td><td>41.05</td><td>-</td></tr><tr><td>HAT-BART</td><td>44.48</td><td>21.31</td><td>41.52</td><td>-</td></tr><tr><td>BART - Ours</td><td>44.62</td><td>21.49</td><td>41.34</td><td>33.98</td></tr><tr><td>BART + SentTrans.</td><td>44.44</td><td>21.44</td><td>41.27</td><td>33.90</td></tr><tr><td>HierGNN-BART (LSR)</td><td>44.93</td><td>21.7</td><td>41.71</td><td>34.43</td></tr><tr><td>HierGNN-BART (LIR)</td><td>45.04</td><td>21.82</td><td>41.82</td><td>34.59</td></tr></table>
Table 2: Automatic evaluation results in ROUGE scores and BERTScore (BS) on CNN/DM. The top and bottom blocks compare non-pretrained and pretrained models separately. We use **bold** to mark the best abstractive model.
We compare HierGNN-PGN with the non-pretrained baselines. We first include the LEAD-3 (Nallapati et al., 2017) that simply selects the top three sentences in the article as the summary. StructSum (Balachandran et al., 2021) is a PGN-based model, which incorporates structure information by an explicit attention mechanism (ES Attn) on a coreference graph and implicit attention mechanism (IS Attn) on an end-to-end learned document structure. StructSum ES+IS Attn uses both implicit and explicit structures.
We compare HierGNN-BART with the pretrained baselines. BERTSumAbs and BERTSumExtAbs are two abstractive models by Liu and Lapata (2019) based on the BERT encoder. We also include a strong multitask sequence generation model, T5-Large. Hie-BART (Akiyama et al., 2021) enhances BART by jointly modeling the sentence- and token-level information in the self-attention layer. HAT-BART (Rohde et al., 2021) appends a sentential Transformer block on top of BART's encoder to model the sentence-level dependencies. We also develop a baseline, BART+SentTrans., replacing our MTC block with a Transformer block. This baseline uses a comparable number of parameters to our HierGNN. We aim to verify the advantage of modeling the document's hierarchical information by MTC over just increasing the model size.
<table><tr><td>Non-pretrained</td><td>R-1</td><td>R-2</td><td>R-L</td><td>BS</td></tr><tr><td>LEAD-3</td><td>16.30</td><td>1.60</td><td>11.95</td><td>-</td></tr><tr><td>Seq2Seq (LSTM)</td><td>28.42</td><td>8.77</td><td>22.48</td><td>-</td></tr><tr><td>Pointer-Generator</td><td>29.70</td><td>9.21</td><td>23.24</td><td>23.16</td></tr><tr><td>PGN + Coverage</td><td>28.10</td><td>8.02</td><td>21.72</td><td>-</td></tr><tr><td>HierGNN-PGN (LSR)</td><td>30.14</td><td>10.21</td><td>24.32</td><td>27.24</td></tr><tr><td>HierGNN-PGN (LIR)</td><td>30.24</td><td>10.43</td><td>24.20</td><td>27.36</td></tr><tr><td>Pretrained</td><td>R-1</td><td>R-2</td><td>R-L</td><td>BS</td></tr><tr><td>BERTSUMABS</td><td>38.76</td><td>16.33</td><td>31.15</td><td>37.60</td></tr><tr><td>BERTSUMEXTABS</td><td>38.81</td><td>16.50</td><td>31.27</td><td>38.14</td></tr><tr><td>T5 (Large)</td><td>40.9</td><td>17.3</td><td>33.0</td><td>-</td></tr><tr><td>BART</td><td>45.14</td><td>22.27</td><td>37.25</td><td>-</td></tr><tr><td>HAT-BART</td><td>45.92</td><td>22.79</td><td>37.84</td><td>-</td></tr><tr><td>BART - Ours</td><td>44.97</td><td>21.68</td><td>36.47</td><td>52.89</td></tr><tr><td>BART + SentTrans.</td><td>45.12</td><td>21.62</td><td>36.46</td><td>52.95</td></tr><tr><td>HierGNN-BART (LSR)</td><td>45.19</td><td>21.71</td><td>36.59</td><td>52.94</td></tr><tr><td>HierGNN-BART (LIR)</td><td>45.39</td><td>21.89</td><td>36.81</td><td>53.15</td></tr></table>
Table 3: Automatic evaluation results in ROUGE scores, BERTScore (BS) on XSum. All of our HierGNN-PGN models are trained without a coverage mechanism. We use **bold** for the best model.
<table><tr><td>Model</td><td>Rel.</td><td>Inf.</td><td>Red.</td><td>Overall</td></tr><tr><td>BERTSUMABS</td><td>*-0.43</td><td>*-0.33</td><td>-0.11</td><td>*-0.29</td></tr><tr><td>T5</td><td>0.08</td><td>-0.09</td><td>0.05</td><td>0.01</td></tr><tr><td>BART</td><td>0.15</td><td>0.24</td><td>-0.04</td><td>0.12</td></tr><tr><td>HierGNN-BART</td><td>0.20</td><td>0.19</td><td>0.09</td><td>0.16</td></tr></table>
# 5 Results
Automatic Evaluation. We evaluate the quality of summaries with ROUGE F-1 scores (Lin and Och, 2004) by counting the unigram (R-1), bigram (R-2) and longest common subsequence (R-L) overlaps. To avoid relying purely on lexical overlap (Huang et al., 2020a), we also report BERTScore (Zhang et al., 2020).
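As a hedged illustration of how such scores can be computed, the snippet below uses the `rouge-score` and `bert-score` packages on a pair of sentences shortened from the Table 1 example; the exact evaluation scripts behind the reported numbers may differ.

```python
# pip install rouge-score bert-score
from rouge_score import rouge_scorer
from bert_score import score as bert_score

reference = "The Royal Mail has painted a postbox gold in the Oxfordshire town of Henley-on-Thames."
prediction = "A Royal Mail postbox in Henley-on-Thames has been painted gold."

# ROUGE-1/2/L F-1 between a reference and a system summary.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, prediction)
print({name: round(s.fmeasure, 4) for name, s in rouge.items()})

# BERTScore F-1: model-based semantic similarity rather than pure lexical overlap.
P, R, F1 = bert_score([prediction], [reference], lang="en")
print("BERTScore F1:", float(F1[0]))
```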
We summarize the results for non-pretrained and pretrained models on CNN/DM and XSum in the upper and bottom blocks of Table 2 and Table 3, respectively.
Table 4: Results of the human evaluation based on i) Relevance (Rel.), ii) Informativeness (Inf.), and iii) Redundancy (Red.). * indicates a statistically significant difference from our model (pair-wise t-test with $p < 0.05$, corrected using the Benjamini-Hochberg method to control the False Discovery Rate (Benjamini and Hochberg, 1995) for multiple comparisons). We bold the best results for each criterion and for the overall evaluation. Detailed results are given in Appendix C.
<table><tr><td></td><td>R-1</td><td>R-2</td><td>R-L</td><td>BS</td></tr><tr><td>Full Model</td><td>30.24</td><td>10.43</td><td>24.20</td><td>27.36</td></tr><tr><td>w/o HierGNN Module</td><td>-0.54</td><td>-1.22</td><td>-0.96</td><td>-4.20</td></tr><tr><td>w/o Graph-select (GSA)</td><td>-0.41</td><td>-0.41</td><td>-0.17</td><td>-0.27</td></tr><tr><td>w/o Sparse MTC</td><td>-0.14</td><td>-0.25</td><td>+0.05</td><td>-0.41</td></tr><tr><td>w/o Graph Fusion</td><td>-0.94</td><td>-0.81</td><td>-0.77</td><td>-1.39</td></tr></table>
Table 5: Ablation study of each module in our HierGNN-PGN (LIR) model on XSum.
<table><tr><td>Model</td><td>Coverage (↗)</td><td>Copy Length (↘)</td></tr><tr><td>Reference</td><td>20.27 %</td><td>5.10</td></tr><tr><td>Pointer-Generator</td><td>11.78 %</td><td>18.82</td></tr><tr><td>Ours w/o Graph Select Attn.</td><td>13.74 %</td><td>18.88</td></tr><tr><td>Ours w/ Graph Select Attn.</td><td>15.22 %</td><td>16.80</td></tr></table>
Table 6: Average copy length and coverage of the source sentences on the CNN/DM dataset. Arrows ($\nearrow$ or $\searrow$) indicate whether larger or lower scores are better, respectively.
Our HierGNN module improves the performance over PGN and BART for both CNN/DM and XSum, demonstrating the effectiveness of our reasoning encoder for both non-pretrained and pretrained summarizers. Secondly, the best HierGNN-PGN model achieves higher scores than StructSum ES and ES+IS, which explicitly construct the document-level graph representation using an external parser in pre-processing. This indicates our learned hierarchical structure can be effective and beneficial for downstream summarization without any supervision. HierGNN-BART also outperforms Hie-BART, HAT-BART and BART+SentTrans., which indicates that the MTC encoder's inductive bias is effective in modeling useful structure.
Human Evaluations. We also invited human referees from Amazon Mechanical Turk to assess our model and three additional purely abstractive baselines, BERTSUMABS, T5-Large and BART, on the CNN/DM test set. Our assessment focuses on three criteria: i) Relevance (is the information conveyed in the candidate summary relevant to the article?), ii) Informativeness (how accurate and faithful is the information the candidate summary conveys?), and iii) Redundancy (are the sentences in each candidate summary non-redundant with each other?). The detailed settings for the human evaluation are presented in Appendix B. We ask the referees to choose the best and worst summaries from the four candidates for each criterion. The overall scores in Table 4 are computed as the fraction of times a summary was chosen as best minus the fraction it was selected as worst.
<table><tr><td></td><td>R-1</td><td>R-2</td><td>BS</td></tr><tr><td>BART</td><td>49.41</td><td>21.70</td><td>19.12</td></tr><tr><td>HierGNN-BART</td><td>49.62</td><td>21.74</td><td>20.32</td></tr></table>
Figure 2: Performance gap on PubMed between HierGNN-BART and BART when summarizing articles truncated at different lengths. The gap between HierGNN and BART consistently increases with input length.
The results show that our HierGNN-BART achieves the best overall performance. Moreover, while BART has a slightly better informativeness score, HierGNN-BART produces better summaries in terms of Relevance and Redundancy.
Ablations. We conduct an ablation study (Table 5) of the HierGNN encoder, graph-selection attention, sparse MTC and graph fusion layer. The ablation is done on our HierGNN-PGN LIR model trained on XSum. Ablating the HierGNN reasoning module significantly degrades the model, which suggests the positive contribution of its cross-sentence reasoning functionality. The scores without GSA also confirm that the guidance of graph-level information is beneficial. Removing the graph fusion layer again decreases performance, which confirms the benefit of fusing neighbour features from multiple hopping distances. Finally, the results also confirm the superiority of the sparse MTC over the dense MTC for learning an effective hierarchical structure for summarization.
# 6 Discussion
Coverage and Copy Length. We report two metrics introduced by See et al. (2017) in Table 6. The coverage rate measures how much information in the source article is covered by the summary, while the average copy length indicates to what extent the summarizer directly copies tokens from the source article as its output.
Table 7: Summarization performance on PubMed. We test BART and HierGNN-BART with the same hyperparameter settings.
<table><tr><td>CNN/DM</td><td>Comp.</td><td>2-hop</td><td>3-hop</td><td>4-hop</td></tr><tr><td>Reference</td><td>63.03</td><td>32.08</td><td>4.59</td><td>0.31</td></tr><tr><td>BART</td><td>79.52</td><td>17.81</td><td>2.43</td><td>0.24</td></tr><tr><td>HierGNN-BART</td><td>78.13(↓)</td><td>19.29(↑)</td><td>2.36(↓)</td><td>0.21(↓)</td></tr><tr><td>XSum</td><td>Comp.</td><td>2-hop</td><td>3-hop</td><td>4-hop</td></tr><tr><td>Reference</td><td>34.87</td><td>42.50</td><td>18.79</td><td>3.83</td></tr><tr><td>BART</td><td>28.47</td><td>42.51</td><td>23.05</td><td>5.98</td></tr><tr><td>HierGNN-BART</td><td>27.27(↓)</td><td>42.53(↑)</td><td>24.31(↑)</td><td>5.89(↓)</td></tr></table>
Table 8: Percentages of summary sentences synthesized by compression (information is extracted from a single source sentence) and by fusion (information is combined from two or more source sentences). We use $\downarrow$ and $\uparrow$ to mark the changes from BART to HierGNN.
The higher coverage rate achieved by our HierGNN indicates that it can produce summaries covering much richer information from the source article. Balachandran et al. (2021) find that PGN tends to over-copy content from the source article, thus degenerating into an extractive model, particularly on more extractive datasets such as CNN/DM. We find that the graph-selection attention significantly reduces the average copy length, indicating that it informs the decoder to stop copying by leveraging the structural information learned in the encoder, and that it reduces the reliance on PGN's copying functionality (See et al., 2017). We show a qualitative example of the graph-selection attention outcome in Appendix D.
In Tables 2 and 3, we observe that the layer-shared reasoning (LSR) architecture for HierGNN-PGN outperforms the layer-independent reasoning (LIR) architecture on CNN/DM, with the opposite being true for XSum. We attribute this difference to the inductive bias of the base model and the essential difference between the CNN/DM and XSum datasets. PGN-based models tend to copy, degenerating into an extractive summarizer (Balachandran et al., 2021). With a more extractive dataset like CNN/DM, a complex reasoning procedure for the PGN-based model may not be necessary; instead, learning a single hierarchical structure and selecting the sentences to be copied accordingly is sufficient. However, XSum summaries are abstractive, and the dataset emphasizes combining information from multiple document sites (see the discussion by Narayan et al. 2019). LIR then shows its advantage by learning a separate hierarchical structure in each layer. For an abstractive base model (BART), LIR consistently outperforms LSR on both CNN/DM and XSum.

Figure 3: Layer-wise intra-layer diversity (top) and inter-layer diversity (bottom) for BART with 2-layer HierGNN equipped with Sparse and Dense MTC.
Compression or Fusion? To assess whether sentence fusion happens often, we quantify in Table 8 the proportions of sentence compression and sentence fusion that each model uses to generate summaries (Lebanoff et al., 2019). In comparison to BART, HierGNN reduces the proportion of sentence compression on both CNN/DM and XSum. Furthermore, the summarization models tend to adopt sentence compression more often than the human-written references do for CNN/DM, while more sentence fusion is used for XSum. This observation reveals that the mechanism neural summarizers learn end-to-end for producing summaries differs from the one humans use: human editors can flexibly switch between compression and fusion, whereas the summarization models tend to adopt one of them to produce the output.
Effectiveness for Longer Sequences. The performance of sequence-to-sequence models decays as the length of the input sequence increases (Liu et al., 2018) because they do not capture long-range dependencies. We hypothesize that HierGNN has a better capability of capturing such dependencies via its learned document hierarchical structure, thus enhancing the performance for long-sequence inputs. To verify this, we further conduct experiments on PubMed (Cohan et al., 2018), a long-document summarization dataset with scientific articles in the medical domain. We summarize the performance in Table 7. We notice that HierGNN improves BART by a large margin. We further evaluate the advantages of HierGNN over vanilla BART with respect to inputs of various lengths. As shown in Figure 2, when the input is longer than 1.6K tokens, HierGNN has a positive advantage over BART. As the input length increases, the advantage of HierGNN consistently becomes larger.
Top-3 sentences with the highest root probabilities (our Sparse MTC):
- 8th sent. (9.77): A lunar eclipse happens when the sun, Earth and moon form a straight line in space, with the Earth smack in the middle.
- 6th sent. (9.40): The sun shines on the Earth and creates a shadow.
- 10th sent. (7.79): Parts of South America, India, China and Russia also will be able to see the eclipse, but it won't be visible in Greenland, Iceland, Europe, Africa or the Middle East.
Top-3 sentences with the lowest root probabilities (our Sparse MTC):
- 20th sent. (sparsified): Share your photos with CNN iReport.
- 18th sent. (sparsified): If you want to learn more about the eclipse, NASA astronomer Mitzi Adams will take questions on Twitter NASA Marshall.
- 19th sent. (0.02): Did you see the total lunar eclipse?
Reference: The total eclipse will only last 4 minutes and 43 seconds. People west of the Mississippi River will have the best view. Parts of South America, India, China and Russia also will see the eclipse.
Ours: A total lunar eclipse started at 3:16 a.m. Pacific Daylight Time. People west of the Mississippi River will have the best view. Parts of South America, India, China and Russia also will be able to see the eclipse. The total eclipse will only last four minutes and 43 seconds.
Figure 4: Top: the top-3 sentences with the highest/lowest root probabilities, the reference and the summaries for article 23 in the CNN/DM test split. We underline the relevant contents. Bottom: visualizations of our sparse (left) and the dense (right) MTC layer for HierGNN-BART.
Sparse MTC or Dense MTC? We also study the expressive ability of our adaptive sparse variant of the matrix-tree computation. We design two quantitative metrics: 1) Intra-layer diversity measures the diversity of the marginal distributions of roots and edges within each MTC layer, calculated as the range of the probability distribution; 2) Inter-layer diversity measures the diversity of the marginal distributions of roots and edges between MTC layers, calculated as the average Jensen-Shannon (JS) divergence between the marginal distributions of roots and edges in different layers (Zhang et al., 2021; Correia et al., 2019). We compare both intra-layer and inter-layer diversity for our adaptively sparse MTC and the original dense MTC (Koo et al., 2007; Liu et al., 2019a; Balachandran et al., 2021).
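Under our reading of these definitions, both metrics are straightforward to compute from a layer's root (or edge) marginals; the short sketch below uses the range for intra-layer diversity and SciPy's Jensen-Shannon distance (squared to recover the divergence) for inter-layer diversity, with toy numbers.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon


def intra_layer_diversity(p: np.ndarray) -> float:
    """Range (max - min) of one layer's root or edge marginal distribution."""
    return float(p.max() - p.min())


def inter_layer_diversity(p1: np.ndarray, p2: np.ndarray) -> float:
    """JS divergence between two layers' marginals.

    scipy's jensenshannon returns the square root of the divergence,
    so it is squared here.
    """
    return float(jensenshannon(p1, p2, base=2) ** 2)


# Toy root marginals from a 2-layer model over 4 sentences.
layer1 = np.array([0.70, 0.20, 0.05, 0.05])
layer2 = np.array([0.25, 0.25, 0.25, 0.25])
print(intra_layer_diversity(layer1))          # peaked distribution -> larger range
print(inter_layer_diversity(layer1, layer2))  # divergence between the two layers
```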
Figure 3 shows that our sparse variant of MTC has higher diversity on both the intra-layer (top) and inter-layer (bottom) metrics for CNN/DM and XSum, indicating that our sparse MTC has more expressive power than the dense MTC. We find that the sparsity of HierGNN differs across layers and datasets: 1) $99.66\%$ of HierGNN's predictions for XSum instances have at least one element that is sparsified to zero, while this proportion is $24.22\%$ for CNN/DM; 2) almost all the sparsified elements in HierGNN's predictions for XSum are edges, while for CNN/DM they are mostly roots; 3) $90.32\%$ of the elements of the edge distribution in the second MTC layer are sparsified in XSum, but there are no sparsified elements in the first layer, whereas in CNN/DM the proportions of sparsified elements in the first and second layers are almost identical. These observations reveal that sparse MTC can adaptively choose whether to sparsify elements in the root or edge distributions, thus boosting the richness of the structural information represented by MTC.
We finally show a qualitative case with the three sentences per article having the highest or lowest root probabilities (see Figure 4), together with the heatmap visualization of the hierarchical structures learned by the sparse and dense MTC. We observe that the highest-probability root sentences tend to be summary-worthy while scattering across different positions of the article, whereas the lowest-probability sentences are irrelevant. The structure learned by the sparse MTC tends to be more diverse and can successfully sparsify out the sentence nodes with irrelevant contents, e.g., the 18th and 20th sentences.
# 7 Conclusion
We propose HierGNN that can be used in tandem with existing generation models. The module learns the document hierarchical structure while being able to integrate information from different parts of the text as a form of reasoning. Our experiments verify that HierGNN is effective in improving the plain sequential summarization models.
# Limitations
The inductive bias of our HierGNN model assumes that the source article follows an "inverted pyramid" style of writing. This may pose limitations in the generalization of our model to other categories of input documents with no or a weak hierarchical structure. Future work includes understanding the limitations of HierGNN in different input domains (e.g., conversation summarization). Additionally, as with other large-scale pretrained neural summarizers, our approach with an additional HierGNN encoder increases model complexity. To train our BART-based system, GPUs with at least 32GB of memory are required. Future work may focus on distilling the large HierGNN model into a much smaller size while retaining its original performance.
# Ethical and Other Considerations
Human evaluations. Human workers were informed of the intended use of the provided assessments of summary quality and complied with the terms and conditions of the experiment, as specified by Amazon Mechanical Turk. In regards to payment, workers were compensated fairly with the wage of £9 hourly (higher than the maximum minimum wage in the United Kingdom) i.e. £4.50 per HIT at 2 HITs per hour.
Computing time. We first report the computing time for our most computationally intensive model, HierGNN-BART (471 million parameters), using an NVIDIA Tesla A100 with 40GB of memory: on CNN/DM, training takes around 81 GPU hours and inference takes 9.39 GPU hours; on XSum, training takes around 32 GPU hours and inference takes 4.41 GPU hours.
Additionally, training HierGNN-PGN (32 million parameters) on CNN/DM takes 0.79 seconds per iteration using one NVIDIA V100 GPU with 16GB of memory. We estimate the inference speed at 4.02 documents per second.
# Acknowledgements
We thank Zheng Zhao, Marcio Fonseca and the anonymous reviewers for their valuable comments. The human evaluation was funded by a grant from the Scottish Informatics and Computer Science Alliance (SICSA). This work was supported by computational resources provided by the EPCC Cirrus service (University of Edinburgh) and the Baskerville service (University of Birmingham).
# References
Kazuki Akiyama, Akihiro Tamura, and Takashi Ninomiya. 2021. Hie-BART: Document summarization with hierarchical BART. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 159-165, Online. Association for Computational Linguistics.
Vidhisha Balachandran, Artidoro Pagnoni, Jay Yoon Lee, Dheeraj Rajagopal, Jaime Carbonell, and Yulia Tsvetkov. 2021. StructSum: Summarization via structured representations. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2575-2585, Online. Association for Computational Linguistics.
Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal statistical society: series B (Methodological), 57(1):289-300.
Meng Cao, Yue Dong, and Jackie Cheung. 2022. Hallucinated but factual! inspecting the factuality of hallucinations in abstractive summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3340-3354, Dublin, Ireland. Association for Computational Linguistics.
Ronald Cardenas, Matthias Galle, and Shay B Cohen. 2022. On the trade-off between redundancy and local coherence in summarization. ArXiv preprint, abs/2205.10192.
Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 615-621, New Orleans, Louisiana. Association for Computational Linguistics.
Gonçalo M. Correia, Vlad Niculae, and André F. T. Martins. 2019. Adaptively sparse transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2174-2184, Hong Kong, China. Association for Computational Linguistics.
Peng Cui, Le Hu, and Yuanchao Liu. 2020. Enhancing extractive text summarization with topic-aware graph neural networks. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5360-5371, Barcelona, Spain (Online). International Committee on Computational Linguistics.
George E Dahl, Tara N Sainath, and Geoffrey E Hinton. 2013. Improving deep neural networks for lvcsr using rectified linear units and dropout. In 2013 IEEE international conference on acoustics, speech and signal processing, pages 8609-8613. IEEE.
Zi-Yi Dou, Pengfei Liu, Hiroaki Hayashi, Zhengbao Jiang, and Graham Neubig. 2021. GSum: A general framework for guided neural abstractive summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4830-4842, Online. Association for Computational Linguistics.
|
| 289 |
+
Marcio Fonseca, Yftah Ziser, and Shay B. Cohen. 2022. Factorizing content and budget decisions in abstractive summarization of long documents by sampling summary views. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
|
| 290 |
+
Susan R Goldman, Arthur C Graesser, and Paul van den Broek. 1999. Narrative comprehension, causality, and coherence: Essays in honor of Tom Trabasso. Routledge.
|
| 291 |
+
Arthur C Graesser, Murray Singer, and Tom Trabasso. 1994. Constructing inferences during narrative text comprehension. Psychological review, 101(3):371.
|
| 292 |
+
Hardy Hardy and Andreas Vlachos. 2018. Guided neural language generation for abstractive summarization using Abstract Meaning Representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 768-773, Brussels, Belgium. Association for Computational Linguistics.
|
| 293 |
+
Karl Moritz Hermann, Tomás Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1693-1701.
|
| 294 |
+
Dandan Huang, Leyang Cui, Sen Yang, Guangsheng Bao, Kun Wang, Jun Xie, and Yue Zhang. 2020a. What have we achieved on text summarization? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 446-469, Online. Association for Computational Linguistics.
|
| 295 |
+
Luyang Huang, Lingfei Wu, and Lu Wang. 2020b. Knowledge graph-augmented abstractive summarization with semantic-driven cloze reward. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5094-5107, Online. Association for Computational Linguistics.
|
| 296 |
+
Hanqi Jin, Tianming Wang, and Xiaojun Wan. 2020. SemSum: Semantic dependency guided neural abstractive summarization. In The Thirty-Fourth AAAI
|
| 297 |
+
|
| 298 |
+
Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8026-8033. AAAI Press.
|
| 299 |
+
Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Linguistics.
|
| 300 |
+
Terry Koo, Amir Globerson, Xavier Carreras, and Michael Collins. 2007. Structured prediction models via the matrix-tree theorem. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 141-150, Prague, Czech Republic. Association for Computational Linguistics.
|
| 301 |
+
Jingun Kwon, Naoki Kobayashi, Hidetaka Kamigaito, and Manabu Okumura. 2021. Considering nested tree structure in sentence extractive summarization with pre-trained transformer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4039-4044, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 302 |
+
Logan Lebanoff, Kaiqiang Song, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, and Fei Liu. 2019. Scoring sentence singletons and pairs for abstractive summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2175-2189, Florence, Italy. Association for Computational Linguistics.
|
| 303 |
+
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
|
| 304 |
+
Chin-Yew Lin and Franz Josef Och. 2004. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 605–612, Barcelona, Spain.
|
| 305 |
+
Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
|
| 306 |
+
|
| 307 |
+
Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3730-3740, Hong Kong, China. Association for Computational Linguistics.
|
| 308 |
+
Yang Liu, Ivan Titov, and Mirella Lapata. 2019a. Single document summarization as tree induction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1745-1755, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 309 |
+
Zhengyuan Liu, Angela Ng, Sheldon Lee, Ai Ti Aw, and Nancy F. Chen. 2019b. Topic-aware pointer-generator networks for summarizing spoken conversations. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 814-821.
|
| 310 |
+
Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics.
|
| 311 |
+
Ryan McDonald and Giorgio Satta. 2007. On the complexity of non-projective data-driven dependency parsing. In Proceedings of the Tenth International Conference on Parsing Technologies, pages 121-132, Prague, Czech Republic. Association for Computational Linguistics.
|
| 312 |
+
Afonso Mendes, Shashi Narayan, Sebastião Miranda, Zita Marinho, Andre F. T. Martins, and Shay B. Cohen. 2019. Jointly extracting and compressing documents with summary state representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3955-3966, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 313 |
+
Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 3075-3081. AAAI Press.
|
| 314 |
+
Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797-1807, Brussels, Belgium. Association for Computational Linguistics.
|
| 315 |
+
|
| 316 |
+
Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2019. What is this article about? extreme summarization with topic-aware convolutional neural networks. Journal of Artificial Intelligence Research, 66:243-278.
|
| 317 |
+
Jonathan Pilault, Raymond Li, Sandeep Subramanian, and Chris Pal. 2020. On extractive and abstractive neural document summarization with transformer language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9308-9319, Online. Association for Computational Linguistics.
|
| 318 |
+
Horst Pottker. 2003. News and its communicative quality: the inverted pyramid—when and why did it appear? Journalism Studies, 4(4):501-511.
|
| 319 |
+
Tobias Rohde, Xiaoxia Wu, and Yinhan Liu. 2021. Hierarchical learning for generation with long source sequences. ArXiv preprint, abs/2104.07545.
|
| 320 |
+
Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379-389, Lisbon, Portugal. Association for Computational Linguistics.
|
| 321 |
+
Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada. Association for Computational Linguistics.
|
| 322 |
+
David A. Smith and Noah A. Smith. 2007. Probabilistic models of nonprojective dependency trees. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 132-140, Prague, Czech Republic. Association for Computational Linguistics.
|
| 323 |
+
Kaiqiang Song, Lin Zhao, and Fei Liu. 2018. Structure-infused copy mechanisms for abstractive summarization. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1717-1729, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
|
| 324 |
+
Simeng Sun, Ori Shapira, Ido Dagan, and Ani Nenkova. 2019. How to compare summarizers without target length? pitfalls, solutions and re-examination of the neural summarization literature. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 21-29, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 325 |
+
W. T. Tutte. 1986. Graph theory, by W. T. Tutte, Encyclopedia of Mathematics and its Applications, volume 21, Addison-Wesley Publishing Company, Menlo Park, CA, 1984, 333 pp. Networks, 16:107-108.
|
| 326 |
+
|
| 327 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
|
| 328 |
+
Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu, and Xuanjing Huang. 2020. Heterogeneous graph neural networks for extractive document summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6209-6219, Online. Association for Computational Linguistics.
|
| 329 |
+
Haonan Wang, Yang Gao, Yu Bai, Mirella Lapata, and Heyan Huang. 2021. Exploring explainable selection to control abstractive summarization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13933-13941.
|
| 330 |
+
Wenhao Wu, Wei Li, Xinyan Xiao, Jiachen Liu, Ziqiang Cao, Sujian Li, Hua Wu, and Haifeng Wang. 2021. BASS: Boosting abstractive summarization with unified semantic graph. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6052-6067, Online. Association for Computational Linguistics.
|
| 331 |
+
Yuexiang Xie, Fei Sun, Yang Deng, Yaliang Li, and Bolin Ding. 2021. Factual consistency evaluation for text summarization via counterfactual estimation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 100-110, Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 332 |
+
Espen Ytreberg. 2001. Moving out of the inverted pyramid: narratives and descriptions in television news. Journalism Studies, 2(3):357-371.
|
| 333 |
+
Biao Zhang, Ivan Titov, and Rico Sennrich. 2021. Sparse attention with linear units. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6507-6520, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 334 |
+
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
|
| 335 |
+
Chunting Zhou, Chonglin Sun, Zhiyuan Liu, and Francis Lau. 2015. A C-LSTM neural network for text classification. ArXiv preprint, abs/1511.08630.
|
| 336 |
+
Chenguang Zhu, William Hinthorn, Ruochen Xu, Qingkai Zeng, Michael Zeng, Xuedong Huang, and Meng Jiang. 2021. Enhancing factual consistency
|
| 337 |
+
|
| 338 |
+
of abstractive summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 718-733, Online. Association for Computational Linguistics.
|
| 339 |
+
|
| 340 |
+
# A Implementation Details
|
| 341 |
+
|
| 342 |
+
HierGNN-PGN is developed based on the Pointer-Generator Network (See et al., 2017). To obtain the sentence representations, we use a CNN-LSTM encoder that captures both $n$ -gram features and sequential features (Kim, 2014; Zhou et al., 2015). The CNN's filter window sizes are set to $\{1,2,3,4,5,7,9\}$ with 50 feature maps each. We set the dimension of the representations to 512. The number of reasoning layers $L$ is set to 3 after a development-set search over $\{1,2,3,5,10\}$ . Other settings follow the best hyperparameters for CNN/DM reported by See et al. (2017), and we train the coverage mechanism for 60K iterations. For XSum, we discard coverage training because it is redundant for extreme summarization (Narayan et al., 2018), and we use a beam of size 6. We select the best model by validation ROUGE scores on both datasets, with one search trial per hyperparameter.
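For readers who want a concrete picture of the sentence encoder described above, the sketch below is a minimal PyTorch rendition of a CNN-over-words encoder with the stated filter window sizes and 50 feature maps per size, followed by a BiLSTM over sentence vectors; the embedding size and all module names are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class CNNSentenceEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, window_sizes=(1, 2, 3, 4, 5, 7, 9),
                 feature_maps=50, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # One 1-D convolution per window size, each producing `feature_maps` channels.
        self.convs = nn.ModuleList([
            nn.Conv1d(emb_dim, feature_maps, kernel_size=w, padding=w // 2)
            for w in window_sizes
        ])
        # BiLSTM over the per-sentence CNN vectors captures cross-sentence order.
        self.lstm = nn.LSTM(feature_maps * len(window_sizes), hidden_dim // 2,
                            batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        # token_ids: (num_sentences, max_tokens) for a single document.
        x = self.embed(token_ids).transpose(1, 2)                 # (S, emb_dim, T)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        sent_vecs = torch.cat(pooled, dim=1)                      # (S, 50 * 7)
        out, _ = self.lstm(sent_vecs.unsqueeze(0))                # (1, S, hidden_dim)
        return out.squeeze(0)                                     # one 512-d vector per sentence


# Toy document: 4 sentences, 12 tokens each.
enc = CNNSentenceEncoder(vocab_size=1000)
print(enc(torch.randint(1, 1000, (4, 12))).shape)  # torch.Size([4, 512])
```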
|
| 343 |
+
|
| 344 |
+
<table><tr><td>#Layer</td><td>Val. PPL (↘)</td><td>R-1 (↗)</td><td>R-2 (↗)</td><td>R-L (↗)</td></tr><tr><td>1</td><td>8.61</td><td>30.06</td><td>10.09</td><td>24.23</td></tr><tr><td>2</td><td>8.58</td><td>29.94</td><td>10.00</td><td>24.13</td></tr><tr><td>3</td><td>8.51</td><td>30.24</td><td>10.43</td><td>24.20</td></tr><tr><td>5</td><td>8.54</td><td>30.14</td><td>10.23</td><td>24.32</td></tr><tr><td>10</td><td>8.61</td><td>29.99</td><td>9.93</td><td>24.13</td></tr></table>
|
| 345 |
+
|
| 346 |
+
Table 9: Performance of HierGNN-PGN (LIR) on XSum with respect to the number of reasoning layers. $(\nearrow)$ and $(\searrow)$ indicate that larger and lower values are better, respectively.
|
| 347 |
+
|
| 348 |
+
HierGNN-BART builds on the pretrained BART architecture (Lewis et al., 2020). We obtain sentence representations with the same approach as Akiyama et al. (2021). On top of the sentence encoder, we add a two-layer HierGNN to boost the sentence representations. The GSA for HierGNN-BART is implemented as the cross-attention in the Transformer decoder, which first attends to the output of the reasoning encoder and then to the token encoder. For both CNN/DM and XSum, we follow the same fine-tuning settings as Lewis et al. (2020), except that we use 40K and 20K training steps, respectively. We select the best model by label-smoothed cross-entropy loss on the validation set, with one search trial per hyperparameter.
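The two-stage cross-attention can be pictured with the hedged sketch below (not the authors' implementation): decoder states first attend over sentence-level reasoning-encoder outputs, then over token-level encoder outputs; dimensions and names are assumptions.

```python
import torch
import torch.nn as nn


class GraphThenTokenAttention(nn.Module):
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.sent_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.token_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, dec_states, sent_memory, token_memory):
        # dec_states:   (B, T_dec, d)  decoder hidden states
        # sent_memory:  (B, S, d)      reasoning-encoder sentence representations
        # token_memory: (B, T_src, d)  token-encoder outputs
        h, _ = self.sent_attn(dec_states, sent_memory, sent_memory)   # graph-selection step
        h, _ = self.token_attn(h, token_memory, token_memory)         # token-level step
        return h


attn = GraphThenTokenAttention()
out = attn(torch.randn(2, 5, 768), torch.randn(2, 8, 768), torch.randn(2, 40, 768))
print(out.shape)  # torch.Size([2, 5, 768])
```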
|
| 349 |
+
|
| 350 |
+
Evaluation Metrics. We use the ROUGE (Lin and Och, 2004) implementation from
|
| 351 |
+
|
| 352 |
+
Google Research. We use the official implementation of BERTScore (Zhang et al., 2020), with the suggested model setting roberta-large_L17_noidf_version=0.3.9.
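For reference, a typical call to the official bert-score package with the roberta-large, layer-17, no-IDF setting mentioned above might look as follows; the exact flags are an assumption based on the package interface, not the authors' evaluation script.

```python
from bert_score import score

cands = ["the cat sat on the mat ."]
refs = ["a cat was sitting on the mat ."]
# roberta-large, layer 17, no IDF weighting, as in the setting string above.
P, R, F1 = score(cands, refs, model_type="roberta-large", num_layers=17, idf=False)
print(F1.mean().item())
```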
|
| 353 |
+
|
| 354 |
+
Datasets. We describe the pre-processing applied to each dataset below:
|
| 355 |
+
|
| 356 |
+
- CNN/DM: For HierGNN-PGN, we directly use the data processed by See et al. For HierGNN-BART, we keep all preprocessing steps the same as Lewis et al.
|
| 357 |
+
- XSum: Following Lewis et al., we do not preprocess the XSum dataset, and use the original version in (Narayan et al., 2018).<sup>10</sup>
|
| 358 |
+
- PubMed: We use the same pre-processing script as in https://github.com/HHousen/ArXiv-PubMed-Sum. We remove instances whose article has fewer than 3 sentences or whose abstract has fewer than 2 sentences. We also remove three special tokens: newlines, $<S>$ , and $</S>$ (a filtering sketch follows this list).
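A minimal sketch of the PubMed filtering rule in the last bullet, assuming each instance is a dict with sentence-tokenized article and abstract fields (the field names are hypothetical):

```python
def clean_instance(instance):
    special = {"<S>", "</S>"}
    article = [s for s in instance["article"] if s.strip()]
    abstract = [s for s in instance["abstract"] if s.strip()]
    # Drop instances with fewer than 3 article sentences or 2 abstract sentences.
    if len(article) < 3 or len(abstract) < 2:
        return None

    def drop_special(sentence):
        # Splitting on whitespace also removes embedded newlines.
        return " ".join(tok for tok in sentence.split() if tok not in special)

    return {"article": [drop_special(s) for s in article],
            "abstract": [drop_special(s) for s in abstract]}


kept = clean_instance({"article": ["a b <S> c", "d e", "f g"], "abstract": ["x y", "z </S>"]})
print(kept)
```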
|
| 359 |
+
|
| 360 |
+
# B Details for Human Evaluation
|
| 361 |
+
|
| 362 |
+
We adopt several settings to control the quality of the human evaluation: 1) we only use data instances whose length difference between candidate summaries does not exceed 35 tokens (Sun et al., 2019; Wu et al., 2021). 2) When publishing the tasks on MTurk, we require all referees to be professional English speakers located in one of the following countries: i) Australia, ii) Canada, iii) Ireland, iv) New Zealand, v) the United Kingdom and vi) the United States, with a HIT Approval Rate above $98\%$ and more than 1,000 HITs approved. 3) We evaluate 25 instances from the CNN/DM test set in total, and each task is judged by three workers on MTurk. These settings yield an average inter-annotator agreement of $58.96\%$ , $64.92\%$ and $51.52\%$ for Relevance, Informativeness and Redundancy, respectively.
|
| 363 |
+
|
| 364 |
+
# C Detailed Results for Human Evaluation
|
| 365 |
+
|
| 366 |
+
We show the detailed proportions for each choice in human evaluation in Table 10.
|
| 367 |
+
|
| 368 |
+
<table><tr><td>Rel.</td><td>Best(↗)</td><td>Worst(↘)</td><td>Score(↗)</td></tr><tr><td>HierGNN-BART</td><td>0.40</td><td>0.20</td><td>0.20</td></tr><tr><td>BART</td><td>0.29</td><td>0.15</td><td>0.14</td></tr><tr><td>T5-Large</td><td>0.25</td><td>0.17</td><td>0.08</td></tr><tr><td>BERTSUMABS</td><td>0.04</td><td>0.48</td><td>*-0.44</td></tr><tr><td>Inf.</td><td>Best(↗)</td><td>Worst(↘)</td><td>Score(↗)</td></tr><tr><td>HierGNN-BART</td><td>0.35</td><td>0.16</td><td>0.19</td></tr><tr><td>BART</td><td>0.43</td><td>0.19</td><td>0.24</td></tr><tr><td>T5-Large</td><td>0.17</td><td>0.27</td><td>-0.09</td></tr><tr><td>BERTSUMABS</td><td>0.05</td><td>0.39</td><td>*-0.34</td></tr><tr><td>Red.</td><td>Best(↗)</td><td>Worst(↘)</td><td>Score(↗)</td></tr><tr><td>HierGNN-BART</td><td>0.31</td><td>0.21</td><td>0.10</td></tr><tr><td>BART</td><td>0.21</td><td>0.25</td><td>-0.04</td></tr><tr><td>T5-Large</td><td>0.31</td><td>0.25</td><td>0.06</td></tr><tr><td>BERTSUMABS</td><td>0.17</td><td>0.28</td><td>-0.11</td></tr></table>
|
| 369 |
+
|
| 370 |
+
Table 10: Detailed summary of the human evaluation in terms of Relevance (Rel.), Informativeness (Inf.) and Redundancy (Red.). We show the proportion of each option being selected as the Best/Worst among the four candidates. $(\nearrow)$ and $(\searrow)$ indicate that larger and lower values are better, respectively. $*$ : HierGNN-BART's scores are significantly better than the corresponding system (pair-wise t-test with $p < 0.05$ , corrected with the Benjamini-Hochberg method to control the False Discovery Rate (Benjamini and Hochberg, 1995) for multiple comparisons).
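As an illustration of the scoring and significance procedure referenced in the caption, the sketch below computes best-worst scores (Score = P(best) - P(worst)) and applies pairwise t-tests with Benjamini-Hochberg correction; the per-item judgments are synthetic stand-ins (75 = 25 instances x 3 judges is an assumption about aggregation), so this only shows the mechanics.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Per-item scores in {-1, 0, 1}: 1 if picked best, -1 if picked worst, else 0.
hiergnn = rng.choice([-1, 0, 1], size=75, p=[0.20, 0.40, 0.40])
bertsumabs = rng.choice([-1, 0, 1], size=75, p=[0.48, 0.48, 0.04])
t5 = rng.choice([-1, 0, 1], size=75, p=[0.17, 0.58, 0.25])

# Score column = mean of the {-1, 0, 1} judgments, i.e. P(best) - P(worst).
pvals = [stats.ttest_rel(hiergnn, other).pvalue for other in (bertsumabs, t5)]
reject, corrected, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(hiergnn.mean(), corrected, reject)
```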
|
| 371 |
+
|
| 372 |
+
# D Qualitative Case for Graph-Selection Attention
|
| 373 |
+
|
| 374 |
+
To demonstrate the effectiveness of the graph-selection attention (GSA) in HierGNN, we visualize the graph-selection attention and compare the token-level attention with and without GSA (see Figure 5). GSA mostly focuses on the top sentences but still captures critical information from later in the article. In this case, GSA successfully captures fifth title in Miami and Andy Murray from the middle part of the article during decoding (marked in blue). In contrast, the model without GSA continuously produces content about the event Novak Djokovic beat John Isner (marked in red).
|
| 375 |
+
|
| 376 |
+
# Article 4384:
|
| 377 |
+
|
| 378 |
+
Two hours before the Miami open semifinal, Novak Djokovic practiced his returns in an empty stadium, the ball coming at him quickly because his hitting partner stood three feet inside the baseline to emulate big-serving John Isner. The drill helped. Djokovic achieved a breakthrough service break against Isner and won Friday night, 7-6 (3), 6-2. 'He's probably the best server we have in the game,' Djokovic said. (2 sentences are abbreviated here) Novak Djokovic beat John Isner in straight sets to reach the final of the Miami Open on Friday night. (4 sentences are abbreviated here) The No. 1-seeded Djokovic closed to within one win of his fifth Key Biscayne title. His opponent Sunday will be two-time champion andy Murray, who defeated Tomas Berdych 6-4, 6-4. (6 sentences are abbreviated here) Djokovic is aiming to win his fifth title in Miami and will take on Scotsman Murray in Sunday's Final. (3 sentences are abbreviated here)
|
| 379 |
+
|
| 380 |
+
# Summaries:
|
| 381 |
+
|
| 382 |
+
# Reference:
|
| 383 |
+
|
| 384 |
+
Novak Djokovic beat John Isner 7-6. The world No. 1 will take on Andy Murray in Sunday's Final. Djokovic is bidding to win his fifth title at Key Biscayne.
|
| 385 |
+
|
| 386 |
+
# HierGNN-PGN LIR w/ GSA:
|
| 387 |
+
|
| 388 |
+
Novak Djokovic beat John Isner in straight sets to reach the Miami Open. The No.1-seeded Djokovic closed to within one win of his fifth Key Biscayne title. Djokovic will be two-time champion andy Murray, who defeated Tomas Berdych 6-4.
|
| 389 |
+
|
| 390 |
+
# HierGNN-PGN LIR w/o GSA:
|
| 391 |
+
|
| 392 |
+
Novak Djokovic beat John Isner in straight sets to reach the final of the Miami Open on Friday night. Djokovic achieved a breakthrough service break against Isner and won Friday night, 7-6 (3), 6-2. His opponent Andy Murray defeated Tomas Berdych 6-4, 6-4.
|
| 393 |
+
|
| 394 |
+

|
| 395 |
+
Figure 5: Top Table: CNN/DM testing article 4384 and produced summaries; Bottom Figure: visualization for GSA (left) and HierGNN LIR's token-level attention w/ GSA (right-bottom), and HierGNN-PGN LIR w/o GSA (right-top). X-axis, Y-axis are the encoding and decoding steps, respectively.
|
| 396 |
+
|
| 397 |
+

|
| 398 |
+
|
| 399 |
+

|
abstractivesummarizationguidedbylatenthierarchicaldocumentstructure/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:27c184b390feb9c4c41fbf457824d2a2ed7963279f50462e89195e95ec9e953c
|
| 3 |
+
size 560441
|
abstractivesummarizationguidedbylatenthierarchicaldocumentstructure/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b5371d4ca7051e5e66e5c15dc905b96f9a3127c47f21284f10881ef7529704e4
|
| 3 |
+
size 467505
|
abstractvisualreasoningwithtangramshapes/be53ce45-a171-4498-a723-e5dd579ce31c_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:fb9c7145b382da2583708cb94f3d13bbad9301ee49329e12af0d8d235370f293
|
| 3 |
+
size 103450
|
abstractvisualreasoningwithtangramshapes/be53ce45-a171-4498-a723-e5dd579ce31c_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:4693b794f1a314b55d146c65b9d374021de459cb30e4f433fa0d9baf1ca72775
|
| 3 |
+
size 127106
|
abstractvisualreasoningwithtangramshapes/be53ce45-a171-4498-a723-e5dd579ce31c_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d9288647e5310ae1e16cb0ffa78333ec8d9d5933ed0a07893a9dda2cca9d729f
|
| 3 |
+
size 5439855
|
abstractvisualreasoningwithtangramshapes/full.md
ADDED
|
@@ -0,0 +1,394 @@
|
| 1 |
+
# Abstract Visual Reasoning with Tangram Shapes
|
| 2 |
+
|
| 3 |
+
Anya Ji $^{1}$ , Noriyuki Kojima $^{1*}$ , Noah Rush $^{1*}$ , Alane Suhr $^{1,3*}$ , Wai Keen Vong $^{2}$ , Robert D. Hawkins $^{4}$ , and Yoav Artzi $^{1}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ Cornell University $^{2}$ New York University $^{3}$ Allen Institute for AI $^{4}$ Princeton University
|
| 6 |
+
|
| 7 |
+
{aj592, nk654}@cornell.edu noahjrush@gmail.com
|
| 8 |
+
|
| 9 |
+
waikeen.vong@nyu.edu suhr@cs.cornell.edu
|
| 10 |
+
|
| 11 |
+
rdhawkins@princeton.edu yoav@cs.cornell.edu
|
| 12 |
+
|
| 13 |
+
# Abstract
|
| 14 |
+
|
| 15 |
+
We introduce KILOGRAM, a resource for studying abstract visual reasoning in humans and machines. Drawing on the history of tangram puzzles as stimuli in cognitive science, we build a richly annotated dataset that, with $>1\mathrm{k}$ distinct stimuli, is orders of magnitude larger and more diverse than prior resources. It is both visually and linguistically richer, moving beyond whole shape descriptions to include segmentation maps and part labels. We use this resource to evaluate the abstract visual reasoning capacities of recent multi-modal models. We observe that pre-trained weights demonstrate limited abstract reasoning, which dramatically improves with fine-tuning. We also observe that explicitly describing parts aids abstract reasoning for both humans and models, especially when jointly encoding the linguistic and visual inputs.
|
| 16 |
+
|
| 17 |
+
# 1 Introduction
|
| 18 |
+
|
| 19 |
+
Reference is a core function of natural language that relies on shared conventions and visual concepts. For example, in English, a speaker may use the term dog to refer to a particular animal of the species canis familiaris, or, through abstraction, to an object with a less strongly conventionalized name, such as the shape at the top of Figure 1. A speaker might refer to such a shape as looking like a dog, and even point to its parts, like its head and tail, despite having few visual features in common with the ordinary referent.
|
| 20 |
+
|
| 21 |
+
Comprehension and generation of references are critical for systems to engage in natural language interaction, and have been studied extensively with focus on ordinary references (e.g., Viethen and Dale, 2008; Mitchell et al., 2010; Fitzgerald et al., 2013; Mao et al., 2016; Yu et al., 2016), in contrast to the visual abstraction illustrated in Figure 1.
|
| 22 |
+
|
| 23 |
+

|
| 24 |
+
|
| 25 |
+

|
| 26 |
+
|
| 27 |
+

|
| 28 |
+
Figure 1: Two example tangrams, each with two different annotations. Each annotation includes a whole-shape description (bold), segmentation to parts (in color), and naming of parts (linked to each part). The top example shows low variability with near-perfect agreement, while the bottom shows high variability with divergence of language and segmentation.
|
| 29 |
+
|
| 30 |
+

|
| 31 |
+
|
| 32 |
+
We address this gap by adopting an influential paradigm for probing human coordination in the cognitive science literature: reference games with abstract tangram shapes (e.g. Clark and Wilkes-Gibbs, 1986; Fox Tree, 1999; Hawkins et al., 2020).
|
| 33 |
+
|
| 34 |
+
Unlike photographs of natural objects, where there is often a single canonical label, tangrams are fundamentally ambiguous. While some shapes fall under strong existing conventions and elicit consensus about appropriate names (e.g., Figure 1, top), others are characterized by weaker conventions (e.g., Figure 1, bottom) and every speaker may arrive at a distinct but valid description (Zettersten and Lupyan, 2020; Hupet et al., 1991). While such diversity is a key consideration motivating their use as stimuli, existing behavioral studies have typically been limited to a relatively small set of 10-20 shapes, highly restricting the overall diversity of the stimulus class. It also limits their applicability for training and analyzing vision and language models, where significantly more data is necessary.
|
| 35 |
+
|
| 36 |
+
In this paper, we significantly expand this resource. We introduce KILOGRAM, $^{1}$ a large collection
|
| 37 |
+
|
| 38 |
+
of tangrams with rich language annotations. KILOGRAM dramatically improves on existing resources along two dimensions. First, we curate and digitize 1,016 shapes, creating a set that is two orders of magnitude larger than collections used in existing work. This set dramatically increases coverage over the full range of naming variability, providing a more comprehensive view of human naming behavior. Second, rather than treating each tangram as a single whole shape, our images are vector graphics constructed from the original component puzzle pieces. This decomposition enables reasoning about both whole shapes and their parts.
|
| 39 |
+
|
| 40 |
+
We use this new collection of digitized tangram shapes to collect a large dataset of textual descriptions, reflecting a high diversity of naming behaviors. While existing work has focused on naming the complete shape, we also ask participants to segment and name semantically meaningful parts. We use crowdsourcing to scale our annotation process, collecting multiple annotations for each shape, thereby representing the distribution of annotations it elicits, rather than a single sample. In total, we collect 13,404 annotations, each describing a complete object and its segmented parts.
|
| 41 |
+
|
| 42 |
+
The potential of KILOGRAM is broad. For example, it enables the data-driven scaling of studies of human interactions and models of whole-part reasoning in language and vision models. In this paper, we use KILOGRAM to evaluate the visual reasoning capacities of recent pre-trained multi-modal models, focusing on generalizing concepts to abstract shapes. We observe limited generalization of this type in pre-trained models, but significant improvements following fine-tuning with our data. We also see how explicitly referring to and visualizing parts can help reference resolution. Data and code, as well as a data viewer are available at: https://lil.nlp.cornell.edu/kilogram/.
|
| 43 |
+
|
| 44 |
+
# 2 Background and Related Work
|
| 45 |
+
|
| 46 |
+
Abstract or ambiguous visual stimuli have been widely used to investigate how human partners coordinate when talking about things in the absence of strong naming conventions going back to Krauss and Weinheimer (1964). Tangrams as stimuli were introduced by Clark and Wilkes-Gibbs (1986). These shapes are all built from the same seven primitives, but elicit a wide range of figurative descriptions that conceptualize shapes in different ways (Schober and Clark, 1989; Horton
|
| 47 |
+
|
| 48 |
+
and Gerrig, 2002; Duff et al., 2006; Holler and Wilkin, 2011; Horton and Slaten, 2012; Ibarra and Tanenhaus, 2016; Shore et al., 2018; Atkinson et al., 2019; Castillo et al., 2019; Bangerter et al., 2020). It has been observed that some shapes are easier or harder to describe (Hupet et al., 1991; Zettersten and Lupyan, 2020; Brashears and Minda, 2020), a property known as nameability or codability, which has also been studied with non-tangram shapes (e.g., line drawings; Snodgrass and Vanderwart, 1980; Cycowicz et al., 1997; Dunabeitia et al., 2018). Even though diversity is a key consideration in working with tangrams, existing stimuli sets are relatively small, limiting their usefulness as NLP benchmarks, where scale is critical. Even the largest studies of variability in naming (e.g., Murfitt and McAllister, 2001) have used a relatively small set of 60 tangrams. Fasquel et al. (2022) present a resource that is related and complementary to ours, including 332 PNG-formatted tangrams with whole-shape naming annotations in French.
|
| 49 |
+
|
| 50 |
+
Contemporary pre-trained vision and language approaches can be categorized along an axis characterizing how they encode the data, from jointly encoding the two inputs (Lu et al., 2019; Chen et al., 2020; Kim et al., 2021) to encoding them separately (Radford et al., 2021; Jia et al., 2021). Joint encoding aims to capture tighter interaction between the input modalities compared to separate encoding, but is generally more computationally expensive, and can only operate on multi-modal input. We study recent models on both ends: ViLT (Kim et al., 2021) for joint encoding and CLIP (Radford et al., 2021) for separate encoding.
|
| 51 |
+
|
| 52 |
+
These models are typically evaluated on image captioning (e.g., Chen et al., 2015) or visual question answering (e.g., Antol et al., 2015) benchmarks. Several benchmarks, such as NLVR (Suhr et al., 2017, 2019) and Winoground (Thrush et al., 2022), aim for more focused evaluations of compositionality. We build on these efforts, but target generalization through abstraction using visually ambiguous stimuli. This is inspired by the role of abstraction in human cognition. Abstraction is a key step in human perception (Biederman, 1987) that is critical for generalization (Gentner and Markman, 1997; Medin et al., 1993; Shepard, 1987), and forms the shared foundation on which human language communication is layered (Lupyan and Winter, 2018; McCarthy et al., 2021; Wong et al., 2022). Our focus on part decomposition
|
| 53 |
+
|
| 54 |
+

|
| 55 |
+
Figure 2: The two phases of our annotation task.
|
| 56 |
+
|
| 57 |
+
is aligned with how part identification plays an important role in human abstraction (Tversky and Hemenway, 1984).
|
| 58 |
+
|
| 59 |
+
# 3 Data Collection
|
| 60 |
+
|
| 61 |
+
We scan a large set of tangram puzzles to vector graphics, and crowdsource annotations of natural language descriptions and part segmentations.
|
| 62 |
+
|
| 63 |
+
# 3.1 Collecting Tangram Puzzles
|
| 64 |
+
|
| 65 |
+
Tangram puzzles are made of seven primitive shapes (Elffers, 1977), which can be combined in a large variety of configurations evoking different concepts. We scan 1,004 tangrams depicting a broad set of concepts to vector graphic SVGs from Slocum (2003). Appendix A.1 shows example tangrams, and Appendix A.2 details our process. We also manually add 12 tangrams commonly used in previous studies (Hawkins et al., 2020).
|
| 66 |
+
|
| 67 |
+
# 3.2 Whole-Part Annotation
|
| 68 |
+
|
| 69 |
+
We design a two-stage crowdsourcing task to elicit natural language English descriptions for each tangram, both of the whole shape and of its parts (Figure 2). First, in the whole-shape description stage, the worker is shown a tangram image in grayscale and asked to complete the prompt "This shape, as a whole, looks like _____." In the part annotation stage, the worker is asked to select one or more puzzle pieces, and complete the prompt "The part(s) you selected look(s) like _____." These pieces are then colored and the annotation appears in the corresponding color. The annotator can delete annotations, annotate a part as UNKNOWN when they are not sure about its semantics, and add pieces to existing
|
| 70 |
+
|
| 71 |
+
<table><tr><td colspan="2">Mean Description Length</td></tr><tr><td>Whole-shape description</td><td>2.28±1.62</td></tr><tr><td>Part description</td><td>1.31±0.77</td></tr><tr><td colspan="2">Vocabulary Size</td></tr><tr><td>Whole-shape description</td><td>3,031</td></tr><tr><td>Part description</td><td>3,110</td></tr><tr><td>Overall</td><td>4,522</td></tr><tr><td colspan="2">Part Segmentation</td></tr><tr><td>Mean parts per shape</td><td>3.63±1.28</td></tr><tr><td>Mean pieces per part</td><td>1.93±1.20</td></tr></table>
|
| 72 |
+
|
| 73 |
+
Table 1: Data statistics for the complete dataset.
|
| 74 |
+
|
| 75 |
+
parts. All pieces must be annotated to submit the task, yielding a complete segmentation map.
|
| 76 |
+
|
| 77 |
+
We use Amazon Mechanical Turk for data collection. Workers are required to be located in the United States with at least a $98\%$ HIT acceptance rate, must pass a qualification task, and complete a survey about their language proficiency (see Appendix A.3 for further details). To prevent a small group of workers from dominating the data, each annotator is only allowed to annotate each tangram once, and cannot annotate more than 200 distinct tangrams. Workers are paid 0.14 USD per task. $^{3}$
|
| 78 |
+
|
| 79 |
+
We first collect 10,053 annotations for the 1,004 scanned tangrams, at least 10 annotations for each tangram (mean=10.01). Following this stage of annotation, we collect additional annotations for a subset of the tangrams to create a set with denser language and part segmentation annotation. We sample 62 tangrams to be representative of the different levels of diversity in annotations we observe in the initially collected data. Appendix A.4 describes the sampling procedure. We also add the 12 tangrams from previous studies for a total of 74 tangrams for dense annotation. We conduct additional annotation tasks to have at least 50 annotations for each of the 74 tangrams selected for dense annotation (mean=53.66). The dense annotation gives us a better estimate of the distribution of language for the 74 selected tangrams, for example to use as reference texts in generation tasks.
|
| 80 |
+
|
| 81 |
+
In total, we collect 13,404 annotations for 1,016 tangrams at a total cost of 2,172.94 USD. We lowercase and stem to compute vocabulary size, and tokenize on white spaces to compute description length. Table 1 shows basic data statistics.
|
| 82 |
+
|
| 83 |
+
<table><tr><td></td><td>FULL</td><td>DENSE</td><td>DENSE10</td></tr><tr><td>SND</td><td>0.91 ±0.11</td><td>0.93±0.06</td><td>0.90±0.15</td></tr><tr><td>PND</td><td>0.76±0.19</td><td>0.79±0.15</td><td>0.73±0.20</td></tr><tr><td>PSA</td><td>5.30±0.62</td><td>5.09±0.53</td><td>5.34±0.77</td></tr></table>
|
| 84 |
+
|
| 85 |
+
Table 2: Mean and standard deviation of our analysis measures on the three sets.
|
| 86 |
+
|
| 87 |
+
A total of 297 MTurk workers participate in the annotation, with $98.0\%$ of the workers speaking English as their first language. Those who do not speak English as their first language still rate their English proficiency level as native or close to native. $1.0\%$ of the workers speak more than one language, among which the most common are Spanish, German, Japanese, and Chinese.
|
| 88 |
+
|
| 89 |
+
# 3.3 Standard Data Splits
|
| 90 |
+
|
| 91 |
+
We split the dataset for analysis and learning experiments. For analysis, we create two overlapping sets: FULL and DENSE. FULL includes 1,016 tangrams, each with 10-11 annotations (mean=10.11). It includes the 10,053 annotations initially collected for the scanned 1,004 tangrams. For the 12 commonly used tangrams, we sample 10 annotations from the later collection effort. DENSE includes all annotations for the 74 densely annotated tangrams, with at least 50 annotations per tangram (53.66 on average). We also define the set DENSE10 to include only the annotations from the sparse set for the densely annotated tangrams. For learning experiments, we split according to tangrams to create training (692 tangrams), development (125), test (125), and test-dense sets (74). All densely annotated tangrams are in test-dense. The other three sets are split randomly.
|
| 92 |
+
|
| 93 |
+
# 4 Data Analysis
|
| 94 |
+
|
| 95 |
+
The language and concepts annotators use reflect varying degrees of consensus around conventions for describing the appearance of shapes and their parts. For analysis, we preprocess the annotations by lowercasing, tokenizing, lemmatizing, and removing stop words using NLTK (Bird, 2004). We use the larger FULL set for our analyses (Section 3.3), unless otherwise noted.
|
| 96 |
+
|
| 97 |
+
For a broad overview of the types of concepts evoked, we manually tag 250 randomly sampled annotations: $30.8\%$ use human-like concepts (e.g., dancer), $31.2\%$ animate but non-human concepts (e.g., dog), and $38.0\%$ non-animate concepts (e.g.,
|
| 98 |
+
|
| 99 |
+
house). We examine how part words differ across whole-shape concepts by extracting head words from whole-shape and part descriptions. Figure 3 shows the distribution of part head words for each of 272 whole-shape head words with $>10$ occurrences, ranked in order of frequency. Figure A.2 in the appendix illustrates how the most common part word head is used in different tangrams.
|
| 100 |
+
|
| 101 |
+
A central problem of visual abstraction is the degree of ambiguity or subjectivity that a shape evokes across different people (Murthy et al., 2022): some descriptions have higher consensus than others. We define three measures of variability along different dimensions: shape naming divergence (SND), part naming divergence (PND), and part segmentation agreement (PSA). Table 2 lists the mean and standard deviation for these three measures over the sparsely and densely annotated data.
|
| 102 |
+
|
| 103 |
+
Shape Naming Divergence (SND) A tangram's SND quantifies the variability among whole-shape annotations. SND is an operationalization of nameability, a criterion commonly used to measure how consistently an object is named across individuals (e.g., Zettersten and Lupyan, 2020).
|
| 104 |
+
|
| 105 |
+
Formally, a whole-shape annotation is a sequence of $M$ tokens $\bar{x} = \langle x_1,\ldots ,x_M\rangle$ . Given a tangram with $N$ annotations $\bar{x}^{(j)}, j = 1,\dots ,N$ , each of length $M^{(j)}$ , we define $w_{i}^{(j)}$ for each token $x_{i}^{(j)}$ in annotation $\bar{x}^{(j)}$ as the proportion of other annotations of that tangram that do not contain $x_{i}^{(j)}$ :
|
| 106 |
+
|
| 107 |
+
$$
|
| 108 |
+
w_{i}^{(j)} = \frac{1}{N-1} \sum_{j^{\prime}=1}^{N} \mathbb{1}\left[ x_{i}^{(j)} \notin \bar{x}^{(j^{\prime})} \right], \tag{1}
|
| 109 |
+
$$
|
| 110 |
+
|
| 111 |
+
where $\mathbb{1}$ is an indicator function. The divergence of annotation $\bar{x}^{(j)}$ is $W^{(j)} = \frac{1}{M^{(j)}}\sum_{i=1}^{M^{(j)}}w_{i}^{(j)}$ . The divergence of a tangram is $W = \frac{1}{N}\sum_{j=1}^{N}W^{(j)}$ . For example, the SNDs of the tangrams in Figure 1, computed only with the two annotations displayed, are 0.00 (top) and 1.00 (bottom).
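The following is a direct transcription of Eq. (1) and the SND definition above, omitting the preprocessing (lowercasing, lemmatization, stop-word removal) described earlier in this section:

```python
def shape_naming_divergence(annotations):
    """annotations: list of token lists, one whole-shape description per annotator."""
    n = len(annotations)
    per_annotation = []
    for j, ann in enumerate(annotations):
        others = [set(a) for k, a in enumerate(annotations) if k != j]
        # w_i: fraction of the other annotations that do not contain token x_i.
        w = [sum(tok not in other for other in others) / (n - 1) for tok in ann]
        per_annotation.append(sum(w) / len(ann))
    return sum(per_annotation) / n


# Identical descriptions give 0.0 and fully disjoint descriptions give 1.0,
# matching the two extremes illustrated in Figure 1 (tokens here are generic).
print(shape_naming_divergence([["dog"], ["dog"]]))               # 0.0
print(shape_naming_divergence([["angel"], ["flying", "bird"]]))  # 1.0
```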
|
| 112 |
+
|
| 113 |
+
Mean SND is relatively high in our data, with 0.91 on FULL (Table 2). We observe relatively similar values for DENSE and DENSE10, albeit with lower standard deviation for DENSE, as expected with more annotations. Annotators often use words that are unique to their annotation. We observe perfect consensus for only one tangram, and mostly similar annotations with relatively few deviations for a few others. Figure 5 shows several examples.
|
| 114 |
+
|
| 115 |
+
Part Naming Divergence (PND) PND measures annotation divergence for the part name annotations
|
| 116 |
+
|
| 117 |
+

|
| 118 |
+
Figure 3: Part distributions for different head words. Whole-shape head words (shown in descending order of frequency from left) elicit a variety of part head word distributions. Colors are randomly assigned to part head words, but are fixed across all bars. Grey indicates part head words with $< 0.005$ frequency.
|
| 119 |
+
|
| 120 |
+

|
| 121 |
+
Figure 4: Per tangram SND, PND, and PSA mean values and $95\%$ confidence interval. Tangrams are ordered along the $x$ -axis in ascending order according to the plotted measure. Values are calculated by bootstrapping with 1,000 resamplings. In the FULL plots, the 74 densely annotated tangrams are colored red.
|
| 122 |
+
|
| 123 |
+
collected in the second step of the annotation task. PND is computed identically to SND, but with the concatenation of all part names of an annotation as the input text $\bar{x}$ . For example, the PNDs of the two tangrams in Figure 1 computed with only the two annotations displayed are 0.19 (top) and 1.00 (bottom). In general, part descriptions are more similar than whole-shape descriptions with mean PND of 0.76 (Table 2).
|
| 124 |
+
|
| 125 |
+
Part Segmentation Agreement (PSA) Annotators segment the tangrams into parts by grouping the tangram puzzle pieces. PSA quantifies the agreement between part segmentations as the maximum number of pieces that do not need to be moved to another group in order to edit one segmentation into the other. We compute PSA as a linear
|
| 126 |
+
|
| 127 |
+
sum assignment problem with maximum weight matching. For each pair of segmentations, we create a cost matrix, where the number of rows is the number of parts in one annotation and the number of columns is the number of parts in the second annotation. The value of each matrix element is the number of matching puzzle pieces between the two corresponding parts in the two annotations. The tangram PSA is the mean of costs for all annotation pairs. For example, the PSAs of the two tangrams in Figure 1 computed with only the two annotations displayed are 6.00 (top) and 3.00 (bottom).
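A sketch of the pairwise PSA computation as a maximum-weight assignment over parts, using SciPy's linear sum assignment; representing each segmentation as a list of piece-id sets is an assumed encoding for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def pairwise_psa(seg_a, seg_b):
    # Overlap matrix: number of matching puzzle pieces between every pair of parts.
    overlap = np.array([[len(pa & pb) for pb in seg_b] for pa in seg_a])
    rows, cols = linear_sum_assignment(overlap, maximize=True)
    # Pieces that keep their group under the best one-to-one part alignment.
    return overlap[rows, cols].sum()


# Identical segmentations score 7 (all seven pieces stay in place).
seg = [{0, 1}, {2, 3, 4}, {5, 6}]
print(pairwise_psa(seg, seg))                              # 7
print(pairwise_psa(seg, [{0, 1}, {2, 3}, {4, 5, 6}]))      # 6
```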
|
| 128 |
+
|
| 129 |
+
The mean PSA in our data is 5.30 (Table 2), with an approximately normal distribution of values. Some tangrams have strong segmentation cues, such that annotators reach perfect consensus, while others elicit significant segmentation disagreement.
|
| 130 |
+
|
| 131 |
+
Dense Annotations The comparison of FULL, DENSE, and DENSE10 illustrates how well our data approximates the real distribution of annotations for each tangram, and the advantage of DENSE. Figure 4 shows the complete distribution of values. Comparing DENSE10 and DENSE, the rankings of the tangrams are largely the same with the additional annotations: for SND, Spearman's rank correlation coefficient is $r(72) = .78$ , $p \ll .001$ ; for PND, $r(72) = .87$ , $p \ll .001$ ; for PSA, $r(72) = .76$ , $p \ll .001$ . The tangrams sampled for DENSE represent well the distribution of tangrams along the different measures, as illustrated by the red highlights in Figure 4.
|
| 132 |
+
|
| 133 |
+
Inter-measure Correlations Figure 5 illustrates the correlations between the three measures. The divergences of the two types of language annotations, whole-shape and part descriptions, show moderate positive correlation $r(1014) = .531$ , $p \ll .001$ . This indicates that tangrams that are annotated with similar whole-shape descriptions are often annotated with similar part descriptions.
|
| 134 |
+
|
| 135 |
+

|
| 136 |
+
|
| 137 |
+

|
| 138 |
+
Figure 5: SND, PND, and PSA correlations computed over the FULL set. Representative examples of different SND and PSA values are illustrated on the right. Densely annotated examples are highlighted in red.
|
| 139 |
+
|
| 140 |
+

|
| 141 |
+
|
| 142 |
+
Nevertheless, many tangrams with similar whole shape descriptions have diverse part descriptions. The correlations between language annotation divergence and PSA are lower, $r(1014) = -.216$ , $p \ll .001$ for SND and PSA and $r(1014) = -.165$ , $p \ll .001$ for PND and PSA.
|
| 143 |
+
|
| 144 |
+
# 5 Visual Reasoning with Tangrams
|
| 145 |
+
|
| 146 |
+
We use KILOGRAM to evaluate the reasoning of CLIP (Radford et al., 2021) and ViLT (Kim et al., 2021) through a reference game task, where the model is given a textual description and selects the corresponding image from a set of images. Formally, given a textual description $\bar{x}$ and a set of $k$ images $\mathcal{I} = \{I_1,\dots,I_k\}$ , the task is to select the image $I_{i}\in \mathcal{I}$ corresponding to $\bar{x}$ . We cast the task as computing a similarity score $f(\bar{x},I_i)$ between the description $\bar{x}$ and an image $I_{i}$ . We select the corresponding image as $I^{*} = \arg \max_{I_{i}\in \mathcal{I}}f(\bar{x},I_{i})$ .
|
| 147 |
+
|
| 148 |
+
# 5.1 Reference Game Generation
|
| 149 |
+
|
| 150 |
+
We randomly generate reference games for an annotated text-image pair $(\bar{x},I)$ by sampling additional $k - 1$ images from data under several constraints. We do not include repeating images in the set of $k$ images or images that have identical whole-shape text annotations. This avoids obvious ambiguity that is impossible to resolve in the target selection. We also require all images to be annotated with the
|
| 151 |
+
|
| 152 |
+
same number of parts. This reduces the chance of the model relying on simple part counting to discriminate between target images when including parts in the text (condition PARTS below). Appendix A.8 shows the impact of these constraints through analyzing experiments not using them.
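A hypothetical distractor-sampling routine implementing the three constraints above; the record fields (tangram_id, whole_text, num_parts) and the assumption of one record per tangram in the pool are illustrative, not the released generation code.

```python
import random


def sample_reference_game(target, pool, k=10, seed=0):
    rng = random.Random(seed)
    candidates = [
        ex for ex in pool
        if ex["tangram_id"] != target["tangram_id"]      # no repeated images
        and ex["whole_text"] != target["whole_text"]     # no identical whole-shape texts
        and ex["num_parts"] == target["num_parts"]       # equal part counts
    ]
    game = [target] + rng.sample(candidates, k - 1)
    rng.shuffle(game)
    return game
```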
|
| 153 |
+
|
| 154 |
+
# 5.2 Models
|
| 155 |
+
|
| 156 |
+
We instantiate $f$ using CLIP or ViLT, two models based on the Transformer architecture (Vaswani et al., 2017). We provide a brief review of the models, and refer the reader to the respective papers for further details.
|
| 157 |
+
|
| 158 |
+
CLIP uses two separate encoders to generate separate fixed-dimension representations of the text and images. It uses contrastive pre-training with a symmetric cross entropy loss on a large amount of aligned, but noisy web image-text data. We implement the scoring function $f$ with CLIP by encoding the text $\bar{x}$ and all images $I \in \mathcal{I}$ separately, and then computing the dot-product similarity score of the text with each image. This is identical to the CLIP pre-training objective, which potentially makes CLIP suitable for our task out of the box.
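For concreteness, zero-shot scoring of a single reference game with an off-the-shelf CLIP checkpoint could look like the following; the checkpoint name and the blank stand-in images are placeholders, and this mirrors the argmax-over-similarity selection described above rather than the authors' evaluation code.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

description = "a dog with a head, a body, and a tail"
# Blank images stand in for the ten candidate tangram renders.
images = [Image.new("RGB", (224, 224), color="white") for _ in range(10)]

inputs = processor(text=[description], images=images, return_tensors="pt", padding=True)
outputs = model(**inputs)
# logits_per_text: (1, 10) similarity of the description to each candidate image.
predicted = outputs.logits_per_text.argmax(dim=-1).item()
print(predicted)
```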
|
| 159 |
+
|
| 160 |
+
ViLT uses a single encoder that jointly encodes the text and image inputs. ViLT pre-training also uses aligned image-text data, but from existing benchmarks (Lin et al., 2014; Krishna et al., 2016; Ordonez et al., 2011; Sharma et al., 2018).
|
| 161 |
+
|
| 162 |
+
Figure 6: Illustration of the language and vision modalities under the different experimental conditions.
|
| 163 |
+
|
| 164 |
+
It is pre-trained using multiple self-supervised objectives, including image-text matching via a binary classification head, which is suitable for our task out of the box. We implement $f$ using this classification head. Given a text $\bar{x}$ and an image $I \in \mathcal{I}$ , we compute their similarity using the matching classification head.
|
| 165 |
+
|
| 166 |
+
# 5.3 Experimental Conditions
|
| 167 |
+
|
| 168 |
+
We study several input variants. Figure 6 illustrates the modalities under the different conditions, and Appendix A.5 shows complete example inputs. For the textual description $\bar{x}$ , we experiment with including the whole-shape description only (WHOLE) or adding part names (PARTS) by combining with the whole-shape description using the template <whole shape> with <part>, ..., and <part>. This tests the ability of models to benefit from part names. We consider two image $I$ conditions: coloring all parts with the same color (BLACK) or coloring parts differently (COLOR). The color choice in COLOR corresponds to the position of the part name in $\bar{x}$ , when the text includes part names (PARTS).
|
| 169 |
+
|
| 170 |
+
We experiment with the original pre-trained model weights, and with contrastive fine-tuning on our data using a symmetric cross entropy loss (Radford et al., 2021). During fine-tuning only, we consider a data augmentation condition (AUG), where we augment the data by creating examples that include only a subset of the part names in the text and coloring only the parts corresponding to the included parts names in the image, while all other parts remain black. We generate partial part examples for all possible subsets of parts for each example. Appendix A.5 illustrates the generated examples. When generating reference games for the augmented data, we constrain all the examples within a reference game to have the same number of parts in their full annotation, otherwise the task could be solved by counting parts. Part names are shuffled when creating the augmented data, and part colors correspond to the sequential position of the part name in the templated text.
|
| 171 |
+
|
| 172 |
+
# 5.4 Implementation Details
|
| 173 |
+
|
| 174 |
+
We set the size of the reference game context to $k = 10$ throughout our experiments. During contrastive fine-tuning, we create a text-image matching matrix of size $k \times k$ for each generated reference game in our training data by randomly selecting a text description for each tangram distractor from its annotations. We compute matching loss in both directions, from text to images and vice versa. In practice, this is equivalent to creating $2k$ reference games in both directions, and provides more informative updates. For all experiments, we use an ensemble of three models combined by element-wise multiplication of their outputs. Appendix A.7 provides model-specific implementation details. Appendix A.9 provides a reproducibility list.
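A minimal sketch of the symmetric cross-entropy objective over a $k \times k$ text-image similarity matrix, with correct pairs on the diagonal; the random logits below stand in for model similarity scores.

```python
import torch
import torch.nn.functional as F


def symmetric_contrastive_loss(logits):
    # logits[i, j]: similarity of text i with image j within one reference game.
    k = logits.size(0)
    targets = torch.arange(k)
    loss_text_to_image = F.cross_entropy(logits, targets)
    loss_image_to_text = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_text_to_image + loss_image_to_text)


print(symmetric_contrastive_loss(torch.randn(10, 10)))
```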
|
| 175 |
+
|
| 176 |
+
# 5.5 Estimating Human Performance
|
| 177 |
+
|
| 178 |
+
We conduct an initial estimation of expected human performance on the same evaluation task by recruiting an independent group of 217 human participants. Each participant is randomly assigned to one of the four conditions and shown a random sequence of 20 trials from that condition, preventing leakage across conditions. On each trial, we present an annotation from our development set along with the corresponding context of ten tangrams and ask the participant to click the tangram that was being described. We randomly sample one referential context per annotation, which provides coverage over all 125 tangrams and over 600 unique descriptions in each condition. Before the actual test trials, each participant is provided with a fixed set of 10 practice trials with feedback indicating whether they have selected the correct tangram, and if not, we highlight the correct answer. Performance in the practice trials is not considered in our analysis. Appendix A.6 provides further details.
|
| 179 |
+
|
| 180 |
+
# 5.6 Results and Analysis
|
| 181 |
+
|
| 182 |
+
Table 3 shows development and test reference game accuracies under different experimental setups, including for human studies. Figure 7 shows the accuracy distribution for human participants.
|
| 183 |
+
|
| 184 |
+
<table><tr><td rowspan="2">Condition</td><td colspan="2">CLIP</td><td colspan="2">ViLT</td><td rowspan="2">Human</td></tr><tr><td>PT</td><td>FT</td><td>PT</td><td>FT</td></tr><tr><td colspan="6">Development Results</td></tr><tr><td>WHOLE+BLACK</td><td>16.1</td><td>43.3</td><td>12.9</td><td>40.9</td><td>47.7</td></tr><tr><td>PARTS+BLACK</td><td>16.4</td><td>45.3</td><td>12.5</td><td>45.7</td><td>49.1</td></tr><tr><td>WHOLE+COLOR</td><td>15.9</td><td>40.8</td><td>11.7</td><td>41.0</td><td>49.5</td></tr><tr><td>PARTS+COLOR</td><td>15.0</td><td>45.4</td><td>10.7</td><td>75.2</td><td>63.0</td></tr><tr><td>PARTS+COLOR+AUG</td><td>-</td><td>47.6</td><td>-</td><td>72.2</td><td></td></tr><tr><td colspan="6">Held-out Test Results</td></tr><tr><td>WHOLE+BLACK</td><td>17.9</td><td>42.5</td><td>13.1</td><td>44.5</td><td></td></tr><tr><td>PARTS+BLACK</td><td>18.6</td><td>45.8</td><td>13.3</td><td>50.3</td><td></td></tr><tr><td>WHOLE+COLOR</td><td>18.1</td><td>41.4</td><td>12.8</td><td>44.8</td><td></td></tr><tr><td>PARTS+COLOR</td><td>17.0</td><td>46.5</td><td>11.7</td><td>77.3</td><td></td></tr><tr><td>PARTS+COLOR+AUG</td><td>-</td><td>50.2</td><td>-</td><td>74.4</td><td></td></tr></table>
|
| 185 |
+
|
| 186 |
+
Table 3: Reference game accuracies (\%) for the different experimental conditions with pre-trained (PT) or fine-tuned (FT) models, as well as for human subjects.
|
| 187 |
+
|
| 188 |
+
While both models perform better than a random baseline (10%) out of the box, we generally observe poor performance with the pre-trained weights (PT). CLIP slightly outperforms ViLT throughout, potentially because it is trained with a contrastive objective similar to a reference game. Whereas ViLT's matching loss is aligned with our goal, it is only one of several losses in its objective. We observe no reliable improvement from adding part information, either textual or visual. The low performance on WHOLE+BLACK indicates that the models fail to generalize familiar concepts to abstract shapes, and the lack of consistent improvement with part information indicates an inability to reason about the correspondence between text and colored parts.
|
| 189 |
+
|
| 190 |
+
Fine-tuning (FT) dramatically improves performance for both models. Adding part names to the text description improves both models (PARTS+BLACK). However, segmentation information in the form of part coloring without part names (WHOLE+COLOR) shows no benefit. Although ViLT does not benefit from color information alone, the combination with part names (PARTS+COLOR) shows significant added improvement over having access to part information in only one of the modalities. Overall, we observe small, consistent differences in performance between the two models, except when both part names and colors are available (PARTS+COLOR), which ViLT effectively uses following fine-tuning. This may be because ViLT's tight integration of the modalities in its single encoder allows it to take advantage of the part correspondence information provided when both part names and colors are given.
|
| 191 |
+
|
| 192 |
+

|
| 193 |
+
Figure 7: The distribution of each human participant's mean accuracy in the four conditions. The white dashed lines are the estimated means of a two-component Gaussian mixture model.
|
| 194 |
+
|
| 195 |
+
|
| 196 |
+
|
| 197 |
+
Human performance follows a similar trend to the fine-tuned models: adding part names and segmentation helps performance, and their benefit is most pronounced when both are provided. Human performance is significantly higher than that of the pre-trained (PT) models across all four conditions. Fine-tuning (FT) closes this gap. Indeed, in the PARTS+COLOR condition, ViLT significantly outperforms mean human performance. To better analyze the human results, we fit a two-component Gaussian mixture model to the distribution of individual participants' accuracies (Figure 7). We observe two components for all conditions except WHOLE+BLACK, indicating two distinct sub-populations. For example, for PARTS+COLOR, the low-performing sub-population has a mean accuracy of $52.5\%$, while the high-performing sub-population has a mean of $83.8\%$, significantly outperforming the fine-tuned ViLT. It is possible that the lower-performing sub-population is not making full use of the additional information.
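The mixture-model analysis can be reproduced with an off-the-shelf Gaussian mixture fit; the accuracies below are synthetic stand-ins, not the collected data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical per-participant mean accuracies for one condition (values in [0, 1]).
rng = np.random.default_rng(0)
accuracies = np.concatenate([rng.normal(0.52, 0.05, 30), rng.normal(0.84, 0.05, 25)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(accuracies.reshape(-1, 1))
print(gmm.means_.ravel())   # estimated sub-population means (cf. dashed lines in Figure 7)
print(gmm.weights_)         # estimated mixing proportions
```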
|
| 198 |
+
|
| 199 |
+
Data augmentation (AUG) improves performance for CLIP, but not for ViLT, which even shows a small decrease in performance, although still significantly outperforming CLIP. We hypothesize that the presence of training examples with partial part information complicates resolving the correspondence between parts and their name, resulting in overall lower ViLT performance. We leave further study of this hypothesis for future work.
|
| 200 |
+
|
| 201 |
+
The augmentation condition fine-tunes the models to handle examples with partial part information, and allows us to study the impact of gradually adding part information.
|
| 202 |
+
|
| 203 |
+

|
| 204 |
+
Figure 8: Mean probability assigned to the correct image using fine-tuned CLIP (left) or fine-tuned ViLT (right) on the development set, by number of parts included in text and colored in the images. Curves are separated by total number of parts in the annotation of the target example. Error bands are bootstrapped $95\%$ confidence intervals.
|
| 205 |
+
|
| 206 |
+

|
| 207 |
+
|
| 208 |
+
We apply the augmentation process to the development data to generate the data for this analysis. Figure 8 shows the effect of gradually adding part information on the probability of the correct prediction, separated by the total number of parts in the example. Overall, part information is beneficial, but with a diminishing return as more part information is added. We observe this for both models, but with a much faster rate for CLIP, which overall shows much lower performance. ViLT is able to benefit from increasing part information, with the benefit diminishing only after four parts are provided.
|
| 209 |
+
|
| 210 |
+
# 6 Discussion
|
| 211 |
+
|
| 212 |
+
KILOGRAM provides a new window into the visual abstraction capacity of grounded language models and their ability to generalize concepts beyond their photographic appearance, an integral component of human concept representations (Fan et al., 2015). Our experiments show that there is significant room to improve pre-trained models, which should be able to perform zero-shot reference game tasks without fine-tuning as well as humans do (Clark and Wilkes-Gibbs, 1986). The improved performance after fine-tuning indicates the multi-modal architecture itself has the potential for higher performance, which current pre-training regimes likely do not support. In particular, ViLT's improved performance as a function of additional part information suggests that more structured concept alignment may play a role in this effort (e.g., between parts expressed as lexical items and the corresponding elements of the image).
|
| 213 |
+
|
| 214 |
+
While we focused on the task of reference resolution, KILOGRAM is also well-suited for production tasks (e.g., generating human-like distributions of descriptions or coloring named parts on a blank tangram) as well as instruction-following tasks (e.g., placing pieces in the described configuration to reconstruct a tangram). More broadly, our data emphasizes the need for maintaining well-calibrated distributions over the many different possible ways that people may conceptualize or talk about things, rather than collapsing to a "best" prediction.
|
| 215 |
+
|
| 216 |
+
|
| 217 |
+
|
| 218 |
+
# 7 Limitations
|
| 219 |
+
|
| 220 |
+
Although randomly constructed reference games provide an interpretable evaluation metric, they also pose several limitations. Performance is limited by the fact that descriptions were elicited for isolated images. These descriptions do not reflect the kind of pragmatic reasoning commonly deployed by human speakers in reference games to resolve ambiguities (Goodman and Frank, 2016). In other words, annotators were not able to anticipate the level of detail necessary to disambiguate the object from a specific context of distractors, hence the descriptions may be underinformative. Randomly generated reference games may include ambiguities that make them impossible to solve (e.g., two objects that could both plausibly be described as a bird). The possible performance ceiling on these games is likely below $100\%$. Extending the data through interactive reference games is an important direction for future work. Likewise, our studies of baseline human performance on this task are preliminary. We found that participants clustered into higher- and lower-performing groups, likely reflecting attentional and motivational factors (e.g., some participants may not have fully attended to the provided part information). A better understanding of human behavior is critical before drawing any clear conclusions comparing human and model performance.
|
| 221 |
+
|
| 222 |
+
Ultimately, models significantly outperformed mean human performance only after fine-tuning on approximately 6,600 example reference games.
|
| 223 |
+
|
| 224 |
+
Our resource contribution and analysis are focused on English. While the data collection design does not make language-specific assumptions, it depends on the availability of proficient speakers, which is limited in contemporary crowdsourcing services for certain languages. Our large collection of visual stimuli is well suited to extend our data collection to other languages and cultures, which may display different abstractions. This is an important direction for future work. Extending our analysis to other languages depends on the availability of pre-trained models in these languages, which may be limited by the availability of aligned language vision data and the computational resources required for pre-training.
|
| 225 |
+
|
| 226 |
+
# Acknowledgements
|
| 227 |
+
|
| 228 |
+
This research was supported by ARO W911NF21-1-0106, NSF under grant No. 1750499, and a gift from Open Philanthropy. NK is supported by Masason Fellowship, AS by a Facebook PhD Fellowship and an NSF GRF under grant No. 1650441, and RDH by a CV Starr Fellowship. We thank Rob Goldstone, Judith Fan, Cathy Wong, and the anonymous reviewers for their helpful comments and suggestions. We are grateful for the contributions of the workers on Mechanical Turk.
|
| 229 |
+
|
| 230 |
+
# References
|
| 231 |
+
|
| 232 |
+
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In IEEE International Conference on Computer Vision, pages 2425-2433.
|
| 233 |
+
Mark Atkinson, Gregory J Mills, and Kenny Smith. 2019. Social group effects on the emergence of communicative conventions and language complexity. Journal of Language Evolution, 4(1):1-18.
|
| 234 |
+
Adrian Bangerter, Eric Mayor, and Dominique Knutsen. 2020. Lexical entrainment without conceptual pacts? revisiting the matching task. Journal of Memory and Language, 114:104129.
|
| 235 |
+
Irving Biederman. 1987. Recognition-by-components: a theory of human image understanding. Psychological review, 94 2:115-147.
|
| 236 |
+
Steven Bird. 2004. Nltk: The natural language toolkit. ArXiv, cs.CL/0205028.
|
| 237 |
+
|
| 238 |
+
G. Bradski. 2000. The OpenCV Library. Dr. Dobb's Journal of Software Tools.
|
| 239 |
+
Bailey Brashears and John Paul Minda. 2020. The effects of feature verbalizability on category learning. In Proceedings of the 42nd Conference of the Cognitive Science Society.
|
| 240 |
+
Lucía Castillo, Kenny Smith, and Holly P Branigan. 2019. Interaction promotes the adaptation of referential conventions to the communicative context. Cognitive science, 43(8):e12780.
|
| 241 |
+
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C. Lawrence Zitnick. 2015. Microsoft COCO captions: Data collection and evaluation server. CoRR, abs/1504.00325.
|
| 242 |
+
Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Uniter: Universal image-text representation learning. In European conference on computer vision, pages 104-120.
|
| 243 |
+
Herbert H Clark and Deanna Wilkes-Gibbs. 1986. Referring as a collaborative process. Cognition, 22(1):1-39.
|
| 244 |
+
Yael M. Cycowicz, D Friedman, M Rothstein, and Joan Gay Snodgrass. 1997. Picture naming by young children: norms for name agreement, familiarity, and visual complexity. Journal of experimental child psychology, 65 2:171-237.
|
| 245 |
+
Melissa C Duff, Julie Hengst, Daniel Tranel, and Neal J Cohen. 2006. Development of shared information in communication despite hippocampal amnesia. Nature neuroscience, 9(1):140-146.
|
| 246 |
+
Jon Andoni Duñabeitia, Davide Crepaldi, Antje S. Meyer, Boris New, Christos Pliatsikas, Eva Smolka, and Marc Brysbaert. 2018. Multipic: A standardized set of 750 drawings with norms for six European languages. Quarterly Journal of Experimental Psychology, 71:808-816.
|
| 247 |
+
Joost Elffers. 1977. Tangram: The Ancient Chinese Puzzle. Penguin Books.
|
| 248 |
+
Judith E. Fan, Daniel Yamins, and Nicholas B. Turk-Browne. 2015. Common object representations for visual recognition and production. Cognitive Science.
|
| 249 |
+
Alicia Fasquel, Angèle Brunellière, and Dominique Knutsen. 2022. A modified procedure for naming 332 pictures and collecting norms: Using tangram pictures in psycholinguistic studies. Behavior research methods.
|
| 250 |
+
Nicholas FitzGerald, Yoav Artzi, and Luke Zettlemoyer. 2013. Learning distributions over logical forms for referring expression generation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1914-1925.
|
| 251 |
+
|
| 252 |
+
Jean E Fox Tree. 1999. Listening in on monologues and dialogues. Discourse processes, 27(1):35-53.
|
| 253 |
+
Dedre Gentner and Arthur B. Markman. 1997. Structure mapping in analogy and similarity. American Psychologist, 52:45-56.
|
| 254 |
+
Noah D. Goodman and Michael C. Frank. 2016. Pragmatic language interpretation as probabilistic inference. Trends in Cognitive Sciences, 20:818-829.
|
| 255 |
+
Chris Harris, Mike Stephens, et al. 1988. A combined corner and edge detector. In *Alvey vision conference*, volume 15, pages 10-5244. CiteSeer.
|
| 256 |
+
Robert D. Hawkins, Michael C. Frank, and Noah D. Goodman. 2020. Characterizing the dynamics of learning in repeated reference games. Cognitive science, 44(6):e12845.
|
| 257 |
+
Judith Holler and Katie Wilkin. 2011. Co-speech gesture mimicry in the process of collaborative referring during face-to-face dialogue. Journal of Nonverbal Behavior, 35(2):133-153.
|
| 258 |
+
William S Horton and Richard J Gerrig. 2002. Speakers' experiences and audience design: Knowing when and knowing how to adjust utterances to addressees. Journal of Memory and Language, 47(4):589-606.
|
| 259 |
+
William S Horton and Daniel G Slater. 2012. Anticipating who will say what: The influence of speaker-specific memory associations on reference resolution. Memory & cognition, 40(1):113-126.
|
| 260 |
+
Michel Hupet, Xavier Seron, and Yves Chantraine. 1991. The effects of the codability and discriminability of the referents on the collaborative referring procedure. British Journal of Psychology, 82(4):449-462.
|
| 261 |
+
Alyssa Ibarra and Michael K Tanenhaus. 2016. The flexibility of conceptual pacts: Referring expressions dynamically shift to accommodate new conceptualizations. Frontiers in psychology, 7:561.
|
| 262 |
+
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904-4916. PMLR.
|
| 263 |
+
Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt: Vision-and-language transformer without convolution or region supervision. In International Conference on Machine Learning, pages 5583-5594. PMLR.
|
| 264 |
+
Robert M Krauss and Sidney Weinheimer. 1964. Changes in reference phrases as a function of frequency of usage in social interaction: A preliminary study. Psychonomic Science, 1(1):113-114.
|
| 265 |
+
|
| 266 |
+
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. 2016. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123:32-73.
|
| 267 |
+
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In European conference on computer vision, pages 740-755.
|
| 268 |
+
Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. Advances in neural information processing systems, 32.
|
| 269 |
+
Gary Lupyan and Bodo Winter. 2018. Language is more abstract than you think, or, why aren't languages more iconic? Philosophical Transactions of the Royal Society B: Biological Sciences, 373.
|
| 270 |
+
Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L. Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous object descriptions. In Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 11-20. IEEE.
|
| 271 |
+
William McCarthy, Robert X. D. Hawkins, Haoliang Wang, Cameron Holdaway, and Judith E. Fan. 2021. Learning to communicate about shared procedural abstractions. ArXiv, abs/2107.00077.
|
| 272 |
+
Douglas L. Medin, Robert L. Goldstone, and Dedre Gentner. 1993. *Respects for similarity*. Psychological Review, 100:254-278.
|
| 273 |
+
Margaret Mitchell, Kees van Deemter, and Ehud Reiter. 2010. Natural reference to objects in a visual domain. In Proceedings of the International Natural Language Generation Conference.
|
| 274 |
+
Tara Murfitt and Jan McAllister. 2001. The effect of production variables in monolog and dialog on comprehension by novel listeners. Language and Speech, 44(3):325-350.
|
| 275 |
+
Sonia K Murthy, Thomas L Griffiths, and Robert D Hawkins. 2022. Shades of confusion: Lexical uncertainty modulates ad hoc coordination in an interactive communication task. Cognition, 225:105152.
|
| 276 |
+
Vicente Ordonez, Girish Kulkarni, and Tamara Berg. 2011. Im2text: Describing images using 1 million captioned photographs. Advances in neural information processing systems, 24.
|
| 277 |
+
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR.
|
| 278 |
+
|
| 279 |
+
Michael F Schober and Herbert H Clark. 1989. Understanding by addressees and overhearers. Cognitive psychology, 21(2):211-232.
|
| 280 |
+
Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556-2565.
|
| 281 |
+
Roger N. Shepard. 1987. Toward a universal law of generalization for psychological science. Science, 237 4820:1317-23.
|
| 282 |
+
Todd Shore, Theofronia Androulakaki, and Gabriel Skantze. 2018. KTH tangrams: A dataset for research on alignment and conceptual pacts in task-oriented dialogue. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
|
| 283 |
+
J Slocum. 2003. The Tangram Book: The Story of the Chinese Puzzle with over 2000 Puzzles to Solve. Sterling Publishing, New York.
|
| 284 |
+
Joan Gay Snodgrass and Mary Vanderwart. 1980. A standardized set of 260 pictures: norms for name agreement, image agreement, familiarity, and visual complexity. Journal of experimental psychology. Human learning and memory, 6 2:174-215.
|
| 285 |
+
Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi. 2017. A corpus of natural language for visual reasoning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 217-223.
|
| 286 |
+
Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A corpus for reasoning about natural language grounded in photographs. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 6418-6428.
|
| 287 |
+
Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, and Candace Ross. 2022. Winoground: Probing vision and language models for visio-linguistic compositionality. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5238-5248.
|
| 288 |
+
Barbara Tversky and Kathleen Hemenway. 1984. Objects, parts, and categories. Journal of Experimental Psychology: General, 113:169-193.
|
| 289 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems.
|
| 290 |
+
|
| 291 |
+
Jette Viethen and Robert Dale. 2008. The use of spatial relations in referring expression generation. In Proceedings of the International Conference on Natural Language Generation.
|
| 292 |
+
Catherine Wong, William McCarthy, Gabriel Grand, Yoni Friedman, Joshua B. Tenenbaum, Jacob Andreas, Robert D. Hawkins, and Judith E. Fan. 2022. Identifying concept libraries from language about object structure. ArXiv, abs/2205.05666.
|
| 293 |
+
Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg. 2016. Modeling context in referring expressions. In The European Conference on Computer Vision, pages 69-85.
|
| 294 |
+
Martin Zettersten and Gary Lupyan. 2020. Finding categories through words: More nameable features improve category learning. Cognition, 196:104135.
|
| 295 |
+
|
| 296 |
+
# A Appendix
|
| 297 |
+
|
| 298 |
+
# A.1 Examples from KILOGRAM
|
| 299 |
+
|
| 300 |
+
Figure A.1 shows example tangrams from our data. Figure A.2 shows examples of the use of the part name head, the most common part head word in the data. All data can be browsed on the data visualization dashboard: https://lil.nlp.cornell.edu/kilogram/.
|
| 301 |
+
|
| 302 |
+
# A.2 Collecting Tangrams
|
| 303 |
+
|
| 304 |
+
We scan all the pages of tangram solutions from Slocum (2003) into JPEG files to extract SVG files of individual tangrams. We use heuristics based on edge and corner detection (Harris et al., 1988) to extract individual tangrams into separate files by detecting the four corners of each puzzle and adding padding. We heuristically detect the individual standard pieces in each tangram using corner detection. Because the shapes are standard, we can test whether an extracted shape is one of the expected puzzle pieces and whether we obtain the expected number of such shapes. We resize each tangram and all its pieces to a standard size, and label the ID of each puzzle piece consistently across all tangrams. We heuristically and manually validate the outputs, and prune solutions that fail to vectorize properly, for example if the process fails to recover exactly seven pieces.
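The corner-detection step can be sketched with OpenCV's Harris detector as below; the threshold value and the downstream grouping and validation heuristics are illustrative assumptions, not the exact pipeline.

```python
import cv2
import numpy as np

def detect_corners(image_path, rel_threshold=0.01):
    """Return (x, y) locations with a strong Harris corner response."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
    # Keep locations whose response exceeds a fraction of the maximum response.
    ys, xs = np.where(response > rel_threshold * response.max())
    return list(zip(xs.tolist(), ys.tolist()))
```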
|
| 305 |
+
|
| 306 |
+
# A.3 Crowdsourcing Qualifications and Survey
|
| 307 |
+
|
| 308 |
+
The qualifier includes three multiple choice questions aimed at ensuring that (a) the annotator describes the abstract shape meaningfully instead of simply describing its geometry; (b) each part description contains only one part (body and arms instead of body with arms); and (c) the part descriptions correspond to the description of the whole shape. We provide a short video tutorial of the task and examples of invalid annotations for workers to view before completing the qualifier. We also collect basic non-identifying demographic data from each worker, including the languages that they speak and their proficiency, whether English is their first language, and where they learned English. We retain the correspondence of anonymized hashed worker IDs to the annotations and language information they provide.
|
| 309 |
+
|
| 310 |
+
# A.4 Dense Annotation Sampling
|
| 311 |
+
|
| 312 |
+
The set DENSE is made of 62 tangrams sampled from FULL and 12 tangrams commonly used in prior work.
|
| 313 |
+
|
| 314 |
+
We sample the 62 tangrams from FULL to represent the diversity of tangrams using the first set of annotations we collect. We plot the annotated tangrams by average log perplexity of whole-shape descriptions with $\frac{1}{100}$ smoothing and by PSA, and apply a $5 \times 5$ grid to the plot (Figure A.3). Using perplexity and PSA allows us to sample a set of tangrams with diverse degrees of annotation and segmentation agreement. With a relatively high smoothing factor, we are able to spread out the data points, because the majority of the data set has high divergence in descriptions. We randomly pick 12 periphery points to collect more annotations for outliers, uniformly sample 25 from all the 1004 tangrams, and randomly sample 25, one from each grid cell, to represent the entire distribution.
|
| 315 |
+
|
| 316 |
+
We calculate average log perplexity of whole shape annotations for each tangram. Let $\bar{x}^{(1)},\ldots ,\bar{x}^{(N)}$ be annotations for a tangram, where each annotation is a sequence of tokens $\bar{x}^{(j)} = \langle x_1,\dots,x_{M^{(j)}}\rangle$ of length $M^{(j)}$ . We create a language model $p^{(j)}$ for every annotation $\bar{x}^{(j)}$ using all other $N - 1$ annotations for the tangram:
|
| 317 |
+
|
| 318 |
+
$$
|
| 319 |
+
p^{(j)}(x) = \frac{C_{x \in \bar{x}^{(j' \neq j)}} + k}{\mathrm{total}_{j' \neq j} + kV} \tag{2}
|
| 320 |
+
$$
|
| 321 |
+
|
| 322 |
+
where $C_{x \in \bar{x}^{(j' \neq j)}}$ is the number of occurrences of $x$ in the other annotations for the tangram, $k$ is the smoothing factor, $\text{total}_{j' \neq j}$ is the total number of words used in the other annotations for the tangram, and $V$ is the vocabulary size of all whole-shape annotations across all tangrams. The log perplexity for annotation $\bar{x}^{(j)}$ is $\log PP^{(j)} = -\frac{1}{M^{(j)}} \sum_{i=1}^{M^{(j)}} \log_2 p^{(j)}(x_i^{(j)})$. The log perplexity for the tangram is the average over all its annotations, $\log PP = \frac{1}{N} \sum_{j=1}^{N} \log PP^{(j)}$. We lowercase, stem, and remove stop words before computing the log perplexity.
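A direct translation of Equation 2 and the averaging step into code, assuming annotations are already lowercased, stemmed, and stripped of stop words; the function and variable names are illustrative.

```python
import math
from collections import Counter

def avg_log_perplexity(annotations, k, vocab_size):
    """annotations: list of token lists for one tangram; k: smoothing factor."""
    per_annotation = []
    for j, ann in enumerate(annotations):
        others = [tok for jj, other in enumerate(annotations) if jj != j for tok in other]
        counts = Counter(others)
        denom = len(others) + k * vocab_size          # total_{j' != j} + kV
        log_pp = -sum(math.log2((counts[x] + k) / denom) for x in ann) / len(ann)
        per_annotation.append(log_pp)
    return sum(per_annotation) / len(per_annotation)  # average over the N annotations
```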
|
| 323 |
+
|
| 324 |
+
# A.5 Example Inputs for Experimental Conditions
|
| 325 |
+
|
| 326 |
+
Figure A.4 shows how one annotation, including both text and image, appears under the different experimental conditions. For conditions with PARTS annotations, we generate simple English sentences combining the whole shape description with part descriptions using the template <whole shape> with <part>, <part>, ..., and <part>. We add an indefinite article to each singular part description. BLACK images are tangrams with all pieces colored black with white borders. COLOR images are tangrams with each part colored with one of the CSS preset colors coral, gold, lightskyblue, lightpink, mediumseagreen, darkgrey, and lightgrey, in the order corresponding to the parts in the annotation.
|
| 327 |
+
|
| 328 |
+
For the augmented condition (AUG), text inputs are whole annotations combined with each possible subset of the part descriptions. Image inputs are tangrams colored in the same way as colored images, but the parts excluded from the subset of part descriptions are colored black instead. All part descriptions in the annotations are randomly shuffled and not consistently associated with any particular color in the images, so that the coloring solely serves as an indication of the ordering of parts in the combined text.
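A minimal sketch of the PARTS template and the color assignment described above; the singular/plural check is a crude simplification and the helper name is illustrative.

```python
CSS_COLORS = ["coral", "gold", "lightskyblue", "lightpink",
              "mediumseagreen", "darkgrey", "lightgrey"]

def build_parts_text(whole, parts):
    """Template: <whole shape> with <part>, <part>, ..., and <part>."""
    with_articles = [p if p.endswith("s") else f"a {p}" for p in parts]
    if len(with_articles) == 1:
        joined = with_articles[0]
    else:
        joined = ", ".join(with_articles[:-1]) + ", and " + with_articles[-1]
    # each part gets the color at its position in the (shuffled) part list
    return f"{whole} with {joined}", dict(zip(parts, CSS_COLORS))

text, colors = build_parts_text("a dog", ["head", "body", "legs"])
print(text)    # a dog with a head, a body, and legs
print(colors)  # {'head': 'coral', 'body': 'gold', 'legs': 'lightskyblue'}
```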
|
| 329 |
+
|
| 330 |
+
# A.6 Human Performance Baseline Details
|
| 331 |
+
|
| 332 |
+
We recruited an independent group of 233 human participants from the Prolific crowdsourcing platform (https://www.prolific.co/), and asked them to perform the same reference game task we used for model evaluation. Each participant was randomly assigned to one of the four conditions and shown a random sequence of 20 trials from that condition. On each trial, we showed a text annotation from the development set along with the corresponding context of ten tangrams and asked the participant to click the tangram that was being described. The information that was available varied across conditions, just as in the model evaluations. The tangrams were either presented to participants in black-and-white (BLACK) or colored according to their segmentation map (COLOR), and the language was either the whole-shape description alone (WHOLE) or with the parts included (PARTS). In the PARTS+COLOR condition, the parts text was colored to match the image to facilitate visual comparison, providing the same alignment information available to the models.
|
| 333 |
+
|
| 334 |
+
We took several steps to ensure high-quality responses. First, participants began with a fixed set of 10 practice trials to familiarize themselves with the task. For these practice trials, we provided feedback indicating whether they had selected the correct tangram and, if not, highlighted the correct answer. To assess whether participants were paying attention as opposed to responding randomly, we inserted an unambiguous "catch trial" where the target was the square tangram and the description was square. We excluded 16 participants who failed to select the correct target on this trial, yielding a final sample size of 217 participants out of the 233 recruited.
|
| 335 |
+
|
| 336 |
+
Because our aim was to obtain overall accuracy estimates for each condition, we did not require judgements for every individual annotation and context in the test set. However, we were able to ensure good coverage of the dataset, including annotations from all 125 tangrams and over 600 unique descriptions in each condition.
|
| 337 |
+
|
| 338 |
+
# A.7 Model-specific Implementation Details
|
| 339 |
+
|
| 340 |
+
For experiments with CLIP, we use the ViT-B/32 variant. We fine-tune using an Adam optimizer with learning rate 5e-8 and weight decay 1e-6. At the end of each epoch, the training data is shuffled and rebatched. We train the models for up to 200 epochs and use a patience of 50 epochs to select the model with the highest image prediction accuracy on a non-augmented validation set taken from the training data. All images are resized to CLIP's default input resolution of $224 \times 224$, with white padding to make rectangular images square. The total number of trainable parameters in CLIP is 151.2M. CLIP models are fine-tuned with either a single GeForce RTX 2080 Ti GPU with 11GB memory or a single Titan RTX GPU with 24GB memory. Fine-tuning takes approximately 40 minutes per epoch for augmented setups (AUG) and roughly 3 minutes for other setups.
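The white-padding step can be sketched with PIL as below; centering the tangram on the square canvas is an assumption, as is the helper name.

```python
from PIL import Image

def pad_to_square_and_resize(img, size=224, fill=(255, 255, 255)):
    """Pad a rectangular image with white to make it square, then resize."""
    side = max(img.size)
    canvas = Image.new("RGB", (side, side), fill)
    # Center the original image on the white square canvas (placement is an assumption).
    canvas.paste(img, ((side - img.width) // 2, (side - img.height) // 2))
    return canvas.resize((size, size))
```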
|
| 341 |
+
|
| 342 |
+
For ViLT experiments, we fine-tune with an AdamW optimizer with learning rate 1e-4 and weight decay 1e-2. We use a cosine learning rate schedule with warm-up over the first epoch. We train the models up to 30 epochs with a patience of 10 epochs and follow the same model selection criterion as for CLIP. All images are resized to $384 \times 384$ . The total number of trainable parameters in ViLT is 87.4M. ViLT models are fine-tuned with a single Titan RTX GPU with 24 GB memory. Fine-tuning takes up to 5.5 hours per epoch for augmented setups (AUG) and roughly 15 minutes for other setups.
|
| 343 |
+
|
| 344 |
+
# A.8 Random Generation of Reference Games
|
| 345 |
+
|
| 346 |
+
In our main experiments (Section 5), we randomly generate reference games subject to constraints (Section 5.1). In particular, we ensure that the target and all distractors have the same total number of parts. We explore the impact of these constraints by repeating our experiments on reference games generated without the constraints. Without the constraints, part counting can help the model disqualify distractors and significantly narrow down the set of likely referents.
|
| 347 |
+
|
| 348 |
+
<table><tr><td rowspan="2">Condition</td><td colspan="2">CLIP</td><td colspan="2">ViLT</td></tr><tr><td>PT</td><td>FT</td><td>PT</td><td>FT</td></tr><tr><td>WHOLE+BLACK</td><td>17.3</td><td>46.2</td><td>13.2</td><td>41.3</td></tr><tr><td>PARTS+BLACK</td><td>16.8</td><td>47.4</td><td>12.6</td><td>47.0</td></tr><tr><td>WHOLE+COLOR</td><td>15.9</td><td>48.0</td><td>12.4</td><td>46.2</td></tr><tr><td>PARTS+COLOR</td><td>15.9</td><td>71.3</td><td>12.1</td><td>89.0</td></tr><tr><td>PARTS+COLOR+AUG</td><td>-</td><td>74.0</td><td>-</td><td>86.0</td></tr></table>
|
| 349 |
+
|
| 350 |
+
Table A.1: Reference game development accuracies $(\%)$ for the different experimental conditions with pretrained (PT) or fine-tuned (FT) models for games generated without constraints.
|
| 351 |
+
|
| 352 |
+
This is because images with a number of colored parts different from the number of parts in the text description can be easily ignored without considering the semantics of the text or images. Table A.1 shows development accuracies for games generated without constraints, both for training and testing. Generally, the success rate achieved on unconstrained contexts is much higher than on contexts generated with constraints (Table 3). However, when analyzing the performance of this model on part-controlled contexts (Figure A.5), we observe roughly similar performance to the games generated with constraints (Figure 8), even though we would expect a significant performance increase given the results in Table A.1. We even observe a more pronounced decrease in performance when more parts are added, illustrating further difficulty generalizing. We conclude that the model trained on games generated without constraints (Table A.1) likely learns to rely on part-counting heuristics and may be less reliable in other settings.
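For reference, the part-count constraint used in the main experiments can be sketched as a simple filter on the distractor pool; the data structures and names here are illustrative, not the released game-generation code.

```python
import random

def sample_reference_game(target_id, part_counts, k=10, rng=random):
    """part_counts: dict mapping tangram id to the number of parts in its annotation."""
    n_parts = part_counts[target_id]
    pool = [t for t, c in part_counts.items() if c == n_parts and t != target_id]
    distractors = rng.sample(pool, k - 1)   # every context image shares the part count
    context = distractors + [target_id]
    rng.shuffle(context)
    return context
```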
|
| 353 |
+
|
| 354 |
+
# A.9 Reproducibility Checklist
|
| 355 |
+
|
| 356 |
+
For all reported experimental results:
|
| 357 |
+
|
| 358 |
+
- A clear description of the mathematical setting, algorithm, and/or model: yes; see Section 5.
|
| 359 |
+
- Submission of a zip file containing source code, with specification of all dependencies, including external libraries, or a link to such resources: yes; attached to our submission.
|
| 360 |
+
- Description of computing infrastructure used: yes; see Appendix A.7.
|
| 361 |
+
- The average runtime for each model or algorithm (e.g., training, inference, etc.) or estimated energy cost: yes; see Appendix A.7.
|
| 362 |
+
|
| 363 |
+
- Number of parameters in each model: yes; see Appendix A.7.
|
| 364 |
+
- Corresponding validation performance for each reported test result: yes; see Table 3 and Table A.1 for results on the development set.
|
| 365 |
+
- Explanation of evaluation metrics, with links to code used: yes; see Section 5 for an explanation of the reference game metric. An implementation is included in the attached code zipfile.
|
| 366 |
+
|
| 367 |
+
For all experiments with hyperparameter search:
|
| 368 |
+
|
| 369 |
+
- We performed a minimal manual search for learning rate and weight decay, and used the same values for all experiments (described in Appendix A.7).
|
| 370 |
+
|
| 371 |
+
For all datasets used:
|
| 372 |
+
|
| 373 |
+
- Relevant details such as languages, and number of examples and label distributions: yes; see Section 3.
|
| 374 |
+
- Details of train/test/validation splits: yes; see Section 3.3.
|
| 375 |
+
- Explanation of any data that were excluded, and all pre-processing steps: yes; see Section 3 and Section A.2.
|
| 376 |
+
- A zip file containing data or link to a downloadable version of the data: yes; attached to our submission.
|
| 377 |
+
- For new data collected, a complete description of the data collection process, such as instructions to annotators and methods for quality control: yes; see Section 3.2 and Section A.3.
|
| 378 |
+
|
| 379 |
+

|
| 380 |
+
Figure A.1: Example tangrams from our dataset.
|
| 381 |
+
|
| 382 |
+

|
| 383 |
+
Figure A.2: Example tangrams containing the part description head. Each example includes a tangram and its whole-shape description. We highlight the segmentation corresponding to head in each tangram.
|
| 384 |
+
|
| 385 |
+

|
| 386 |
+
Figure A.3: Sampled tangrams for dense annotation collection: 12 purple points picked from the periphery, 25 red points randomly sampled from each grid, and 25 green points uniformly sampled from all points.
|
| 387 |
+
|
| 388 |
+

|
| 389 |
+
Figure A.4: An example of one annotation across the different experimental conditions. The augmentation condition (AUG) creates multiple examples from the same annotation.
|
| 390 |
+
|
| 391 |
+

|
| 392 |
+
|
| 393 |
+

|
| 394 |
+
Figure A.5: Mean development probabilities of predicting the correct image in reference games generated without constraints using fine-tuned CLIP (top) or fine-tuned ViLT (bottom) by number of parts included in text and colored in the images. We separate the curves by the total number of parts in the annotation of the target example. The error bands show the $95\%$ confidence interval of the expected mean at each point by bootstrapping with 1000 resamplings.
|
abstractvisualreasoningwithtangramshapes/images.zip
ADDED
|
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3eb0a4e0b67386dc8cc894a2e55dc321b94b5032fa1aff65f03339daeab94559
size 758902
|
abstractvisualreasoningwithtangramshapes/layout.json
ADDED
|
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9ada7369fc7af4709ca0904dc7839b97083cddbac473219877ef9a633c8f4ff5
size 493462
|
acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/57a5fae5-dacd-40b3-98b8-04cf2deeee18_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e3d33e03d551233d42f955679bcfd81194bcbf61dd303a7adbcfc4202a401903
size 76696
|
acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/57a5fae5-dacd-40b3-98b8-04cf2deeee18_model.json
ADDED
|
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:11303ed6614b3afe98136577453c39b437fdf43e5ff2bbe8f766773e757ba969
size 92202
|
acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/57a5fae5-dacd-40b3-98b8-04cf2deeee18_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5fa5cc9b3cee17d63458056b8b8b1c50af25ef13da32a847a2b0dc561866f51b
size 2562157
|
acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/full.md
ADDED
|
@@ -0,0 +1,334 @@
| 1 |
+
# ACENet: Attention Guided Commonsense Reasoning on Hybrid Knowledge Graph
|
| 2 |
+
|
| 3 |
+
Chuzhan Hao, Minghui Xie, and Peng Zhang*
|
| 4 |
+
|
| 5 |
+
College of Intelligence and Computing, Tianjin University
|
| 6 |
+
|
| 7 |
+
{chuzhanhao, minghuixie, pzhang}@tju.edu.cn
|
| 8 |
+
|
| 9 |
+
# Abstract
|
| 10 |
+
|
| 11 |
+
Augmenting pre-trained language models (PLMs) with knowledge graphs (KGs) has demonstrated superior performance on commonsense reasoning. Given a commonsense based QA context (question and multiple choices), existing approaches usually estimate the plausibility of candidate choices separately based on their respective retrieved KGs, without considering the interference among different choices. In this paper, we propose an Attention guided Commonsense rEasoning Network (ACENet)<sup>1</sup> to endow the neural network with the capability of integrating hybrid knowledge. Specifically, our model applies the multi-layer interaction of answer choices to continually strengthen correct choice information and guide the message passing of GNN. In addition, we also design a mix attention mechanism of nodes and edges to iteratively select supporting evidence on a hybrid knowledge graph. Experimental results demonstrate the effectiveness of our proposed model through considerable performance gains across the CommonsenseQA and OpenbookQA datasets.
|
| 12 |
+
|
| 13 |
+
# 1 Introduction
|
| 14 |
+
|
| 15 |
+
Commonsense question answering (CSQA) aims to answer questions based on an understanding of context and some background knowledge, a critical gap between human intelligence and machine intelligence (Talmor et al., 2019). This capability of possessing prior knowledge and reasoning over it is a foundation for communication and interaction with the world. Therefore, commonsense reasoning has become an important research task, with various datasets and models proposed in this field (Mihaylov et al., 2018; Talmor et al., 2019; Bhagavatula et al., 2020; Feng et al., 2020; Yasunaga et al., 2021; Zhang et al., 2022).
|
| 16 |
+
|
| 17 |
+
Question: What room is likely to have a sideboard on the counter?
|
| 18 |
+
|
| 19 |
+
A. home B. serve food buffet C. dining room (X) D. living room E. kitchen (√)
|
| 20 |
+
|
| 21 |
+

|
| 22 |
+
(Other models: subgraph w/o interaction)
|
| 23 |
+
Figure 1: Through the interaction between subgraphs, the correct choice information is continuously reinforced. The subgraph is retrieved from ConceptNet (Speer et al., 2017). The nodes with letter are the q-c pairs and connect to other nodes of their respective subgraphs. Yellow nodes correspond to entities mentioned in the question, green nodes correspond to those in the answer. The other nodes are some associated entities introduced when extracting the subgraph.
|
| 24 |
+
|
| 25 |
+

|
| 26 |
+
(Our model: subgraph w/ interaction)
|
| 27 |
+
|
| 28 |
+
Recently, PLMs (Devlin et al., 2019) have made significant progress on many question answering tasks because of their powerful representation capability. Nevertheless, since commonsense knowledge is rarely stated explicitly in natural language (Gunning, 2018), it is hard for PLMs to learn commonsense knowledge from the pre-training corpus. Therefore, many CSQA models augment PLMs with various external knowledge sources (e.g., structured knowledge such as ConceptNet (Speer et al., 2017) and unstructured knowledge such as Wikipedia). Compared with unstructured knowledge, structured knowledge sources have the advantage of being easier to train on and of recovering explicit evidence, which leads many researchers to leverage KGs for reasoning.
|
| 29 |
+
|
| 30 |
+
A straightforward approach to leverage a KG is to directly model its relational paths (Santoro et al., 2017; Lin et al., 2019; Feng et al., 2020). Although path-based models have strong interpretability, they are easily affected by the sparsity and scale of KGs. In addition, graph neural networks (GNNs) have achieved promising performance on modeling KGs.
|
| 31 |
+
|
| 32 |
+
Hence, GNNs are widely used to implicitly capture commonsense knowledge from KGs (Feng et al., 2020; Yan et al., 2021; Yasunaga et al., 2021; Zhang et al., 2022).
|
| 33 |
+
|
| 34 |
+
However, these approaches have two main issues. First, they lack consideration of the interference effects between choices. In common KG-augmented models, the probability scores of candidate choices are calculated separately based on their respective reasoning subgraphs or paths, which makes it difficult to capture the nuance between the correct choice and distractors in commonsense questions. Second, the retrieved KGs contain a lot of noisy knowledge, which can mislead reasoning. QAGNN (Yasunaga et al., 2021) and JointLK (Sun et al., 2022) filter out noisy knowledge based on node features, but ignore the different significance of various edges, which contain rich semantics. Wang et al. (2021) also demonstrates the importance of edge features for commonsense reasoning. Therefore, we should capture the important features from many aspects (e.g., node, edge, graph and QA context).
|
| 35 |
+
|
| 36 |
+
In response, we propose ACENet to capture the nuance of multiple choices by integrating the QA context and the external commonsense knowledge graphs. Given a QA context and multiple retrieved subgraphs of choices, we encode each q-c pair using a PLM. Then the q-c pair is introduced into its respective subgraph as a global node (Ying et al., 2021). Knowledge is transmitted between subgraphs to construct a complete hybrid knowledge graph for reasoning (see § 3.2). First, we apply a knowledge interaction layer to carry out information interaction between subgraphs and guide GNN message passing. The layer is stacked to form a hierarchy that enables multi-layer interactions to recursively reinforce the important choice information in message passing (see Figure 1). Additionally, in order to further aggregate key features in the reasoning graph, we design a mix attention mechanism of nodes and edges to iteratively select supporting evidence based on the global node. Our model simultaneously leverages the hybrid knowledge of the PLM, KGs, and different choices to augment commonsense reasoning ability. In summary, our contributions are as follows:
|
| 37 |
+
|
| 38 |
+
- We propose a knowledge interaction layer to fuse the knowledge of PLM and different choices. The multi-layer interactions continuously strengthen correct choice information in the hybrid knowledge graph.
|
| 39 |
+
|
| 40 |
+
- We design a mix attention mechanism of nodes and edges to iteratively select relevant knowledge over multiple layers of GNN. The global information of q-c pair is also introduced to enhance evidence selection.
|
| 41 |
+
|
| 42 |
+
- Experimental results show that ACENet is superior to current KG-augmented methods. Through multi-layer interactions and multi-head attention guidance over hybrid knowledge graph, ACENet exhibits stronger performance in complex reasoning, such as solving questions with negation or more prepositions.
|
| 43 |
+
|
| 44 |
+
# 2 Related Work
|
| 45 |
+
|
| 46 |
+
Graph Neural Networks (GNNs). GNNs have been widely used to model knowledge graphs due to their strong ability to process graph-structured data. GNNs often follow a neighborhood aggregation and message passing scheme (Gilmer et al., 2017). Recently, many works on CSQA use GNNs to model external KGs. MHGRN (Feng et al., 2020) transforms single-hop propagation into multi-hop propagation based on RGCN (Schlichtkrull et al., 2018), but it does not take into account the different importance of various nodes. QAGNN (Yasunaga et al., 2021), GreaseLM (Zhang et al., 2022), and JointLK (Sun et al., 2022) use the Graph Attention Network (GAT) (Velickovic et al., 2018) to represent the knowledge graph. GAT is a commonly used variant of GNN, which performs attention-based message passing of node features. According to GSC (Wang et al., 2021), edge features play an essential role in commonsense reasoning. Hence, we design a mix attention mechanism of nodes and edges based on GAT.
|
| 47 |
+
|
| 48 |
+
Question Answering with LM+KG. Although pre-trained language models have achieved great success in many NLP domains, they do not yet perform well on reasoning questions. Therefore, many works propose LM+KG methods for CSQA, which use a knowledge graph as an external knowledge source for PLMs. JAKET (Yu et al., 2020) aligns the entities and relations between questions and the knowledge graph and fuses the two kinds of representations. QAGNN (Yasunaga et al., 2021) introduces a context node as the bridge between PLMs and the knowledge graph; the context node is initialized with the encoding of the PLM. GreaseLM (Zhang et al., 2022) designs an interactive scheme to bidirectionally transfer information from both the LM and the KG in multiple layers.
|
| 49 |
+
|
| 50 |
+
JointLK (Sun et al., 2022) calculates a fine-grained attention weight between each question token and each KG node to strengthen joint reasoning ability. These methods all focus on enhancing the fusion of the two knowledge sources, but lack consideration for the interference effects of different choices in the QA context.
|
| 51 |
+
|
| 52 |
+
# 3 Methodology
|
| 53 |
+
|
| 54 |
+
The diagram of the proposed ACENet is shown in Figure 2. We assume a setting where each example in our dataset contains a question $q$ and a set of answer choices $\{c_1, c_2, \dots, c_n\}$. We derive the gold answer from the QA context and relevant commonsense knowledge. Therefore, we retrieve a KG $\mathcal{G}$ as the source of commonsense knowledge, following prior work (Feng et al., 2020).
|
| 55 |
+
|
| 56 |
+

|
| 57 |
+
Figure 2: Overall architecture of our proposed ACENet.
|
| 58 |
+
|
| 59 |
+
# 3.1 Knowledge Interaction Layer
|
| 60 |
+
|
| 61 |
+
As shown in Figure 2, given a question and $n$ answer choices, we concatenate them to get $n$ q-c pairs $[q; c_i]$ ($i \in [1, n]$). Each q-c pair is fed through the PLM, and we use the "[CLS]" token output from the PLM as a summary vector for each choice.
|
| 62 |
+
|
| 63 |
+
Although PLMs can learn general language representations well (Qiu et al., 2020) for each choice, they encode each q-c pair separately, without considering the inter-choice interference effects that are essential for the downstream commonsense question answering task. Our model starts from the representation of each q-c pair to integrate external commonsense knowledge into the respective subgraphs (see Figure 3).
|
| 64 |
+
|
| 65 |
+
How to initialize the summary representation of each choice is crucial in minimizing the distracting information passed to the downstream supporting evidence selection and answer prediction tasks.
|
| 66 |
+
|
| 67 |
+
Therefore, we propose a knowledge interaction layer (KIL, shown in Figure 3) to strengthen the correct choice information. First, we add a multi-head attention (Vaswani et al., 2017) KIL on top of the "[CLS]" tokens. This layer is defined as:
|
| 68 |
+
|
| 69 |
+
$$
|
| 70 |
+
\boldsymbol{\alpha}_{ij} = \operatorname{MHA}\left(\boldsymbol{Q}^{t}, \boldsymbol{K}^{t}, \boldsymbol{V}^{t}\right) \tag{1}
|
| 71 |
+
$$
|
| 72 |
+
|
| 73 |
+
$$
|
| 74 |
+
\boldsymbol{H}^{t} = \boldsymbol{\eta} \odot \tilde{\boldsymbol{H}}^{t} + (1 - \boldsymbol{\eta}) \odot (\boldsymbol{\alpha}_{ij} \boldsymbol{V}^{t}) \tag{2}
|
| 75 |
+
$$
|
| 76 |
+
|
| 77 |
+
where $Q, K, V$ are interactive representations of all q-c pairs, which are linear projections from the stacked embeddings of q-c pairs, MHA is the multi-head attention mechanism, and $\alpha_{ij}$ is the attention weight between choices. $\eta = \sigma (\tilde{H}^t W + b)$, where $\sigma$ denotes the sigmoid activation function, $\odot$ represents the element-wise product, and $\tilde{H}^t$ are the choice representations before passing through the $t$-th KIL layer. Our motivation for adding attention across the q-c pairs generated from different choices is to encourage inter-choice interactions. By allowing choice representations to interact with each other, the model is able to train on a better input signal for message aggregation and passing.
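A minimal PyTorch sketch of the computation in Eqs. (1)-(2): multi-head attention across the $n$ stacked q-c summaries followed by a sigmoid gate over the pre-layer states; the module and argument names are illustrative, not the authors' released code.

```python
import torch
import torch.nn as nn

class KnowledgeInteractionLayer(nn.Module):
    def __init__(self, dim, num_heads=4):
        super().__init__()
        # dim must be divisible by num_heads
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Linear(dim, dim)

    def forward(self, h):
        # h: (batch, n_choices, dim) stacked q-c pair representations.
        attended, _ = self.attn(h, h, h)       # Eq. (1): attention across choices
        eta = torch.sigmoid(self.gate(h))      # gate computed from the pre-layer states
        return eta * h + (1 - eta) * attended  # Eq. (2): gated fusion
```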
|
| 78 |
+
|
| 79 |
+
# 3.2 Hybrid Knowledge Graph
|
| 80 |
+
|
| 81 |
+
To unify the knowledge of the PLM and KGs into the same reasoning space and take advantage of both, we introduce the q-c pair into the extracted subgraphs $\mathcal{G}_i$. Inspired by Gilmer et al. (2017) and Yasunaga et al. (2021), in the hybrid knowledge graph we add the q-c pair as a special node called [CNode] to $\mathcal{G}_i$, and connect [CNode] to each node individually. Each node in $\mathcal{G}_i$ is divided into four types based on its information source: the q-c pair node, question entity nodes, answer entity nodes, and retrieved entity nodes, referred to as $\mathcal{T} = \{\mathcal{C}, \mathcal{Q}, \mathcal{A}, \mathcal{R}\}$.
|
| 82 |
+
|
| 83 |
+
To further leverage the interference effects of different choices, the [CNode] node replaces various graph pooling functions to represent the global information of each subgraph $\mathcal{G}_i$. In the BERT model (Devlin et al., 2019), there is a similar token, [CLS], a special token attached at the beginning of each sequence to represent the sequence-level feature in downstream tasks. Thus, we use the [CNode] node as a medium of interaction between subgraphs to achieve information transmission between choices.
|
| 84 |
+
|
| 85 |
+

|
| 86 |
+
Figure 3: The schematic diagram of Hybrid Knowledge Graph and Knowledge Interaction Layer. The retrieved nodes have been marked in the graph, where the correspondence between knowledge sources and graph nodes has been highlighted in the same color. The grey nodes are some associated entities in subgraph.
|
| 87 |
+
|
| 88 |
+
We initialize the embedding of [CNode] with the representation of the q-c pair ($\mathcal{C}_i^0 = f_{KIL}(f_{LM}([q;c_i]))$), and other nodes on $\mathcal{G}_i$ with their pre-trained entity embeddings prepared by Feng et al. (2020). In the message aggregation and passing stage, the representation of [CNode] is updated like a normal node in the subgraph, and the [CNode] aggregates information from all nodes. Inspired by this, we can realize knowledge interaction between different subgraphs $\mathcal{G}_i$ and define the importance of evidence on $\mathcal{G}_i$ relying on $[\mathrm{CNode}]_i$. Hence, the global node can serve as a hub to help node communication and subgraph interaction, which makes each node more aware of non-local information. Combining PLM, KGs, and inter-choice interaction information, we construct a novel hybrid knowledge graph (see Figure 3).
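Structurally, attaching the global node can be sketched as appending one extra node per subgraph and wiring it to every existing node; relation typing and feature initialization are omitted and all names are illustrative simplifications.

```python
def add_cnode(num_nodes, edges):
    """edges: list of (src, dst) pairs; returns the id of the new [CNode] and new edges."""
    cnode = num_nodes                      # id of the appended global node
    new_edges = list(edges)
    for v in range(num_nodes):
        new_edges.append((cnode, v))       # connect [CNode] to each node individually
        new_edges.append((v, cnode))       # and back, so it can aggregate from all nodes
    return cnode, new_edges
```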
|
| 89 |
+
|
| 90 |
+
In the following subsections, we will conduct GNN message aggregation and passing over hybrid knowledge graph to score each choice.
|
| 91 |
+
|
| 92 |
+
# 3.3 GNN Architecture
|
| 93 |
+
|
| 94 |
+
Structured data like knowledge graphs are much more efficient at representing commonsense than unstructured text (Xu et al., 2021). Therefore, we design a mix attention mechanism of nodes and edges to achieve iterative supporting evidence selection on the reasoning graph $\mathcal{G}_i$. Meanwhile, we also add the KIL between the layers of the GNN to enhance global information interaction among choices (see KIL-GNN in Figure 2).
|
| 95 |
+
|
| 96 |
+
Edge Encoding. To leverage edge information in supporting evidence selection and in the representation of the whole graph, we should capture the source/target node types and the edge types.
|
| 97 |
+
|
| 98 |
+
Following Yasunaga et al. (2021), we first obtain the type embedding $u_{t}$ of each node $t$, as well as the edge embedding $r_{st}$ from node $s$ to node $t$, by
|
| 99 |
+
|
| 100 |
+
$$
|
| 101 |
+
\boldsymbol{r}_{st} = f_{r}\left(e_{st}, u_{s}, u_{t}\right) \tag{3}
|
| 102 |
+
$$
|
| 103 |
+
|
| 104 |
+
where $u_{s}, u_{t} \in \mathbb{R}^{|\mathcal{T}|}$ are one-hot embeddings indicating the node types of $s$ and $t$, and $e_{st} \in \mathbb{R}^{|\mathcal{R}|}$ is a one-hot embedding indicating the relation type of the edge $s \to t$. Here we add self-loops for all nodes. $f_{r}: \mathbb{R}^{|\mathcal{R}| + 2|\mathcal{T}|} \to \mathbb{R}^{D}$ is a 2-layer MLP. We then compute the importance of each edge in the reasoning process, depending on the [CNode] node.
|
| 105 |
+
|
| 106 |
+
Edge-Weighted Message Updating. Wang et al. (2021) points out that edge encoding is of vital importance for commonsense reasoning. To better incorporate effective edge features into message aggregation, each edge's weight is used to rescale the information flow on that edge. Intuitively, an edge's weight signifies the edge's relevance for reasoning about the given task instance. Thus, we also use the global node [CNode] as global context to compute the edge attention weights.
|
| 107 |
+
|
| 108 |
+
Formally, the update rule of edges at layer $\ell$ is:
|
| 109 |
+
|
| 110 |
+
$$
|
| 111 |
+
\boldsymbol {w} _ {(i, j)} ^ {\ell} = f _ {w} ^ {\ell} \left(\left[ \mathcal {C} ^ {\ell}, \boldsymbol {r} _ {i j} ^ {\ell} \right]\right) \tag {4}
|
| 112 |
+
$$
|
| 113 |
+
|
| 114 |
+
$$
|
| 115 |
+
\boldsymbol {A} _ {(i, j)} ^ {\ell} = \frac {e ^ {w _ {(i , j)} ^ {\ell}}}{\sum_ {(s , t) \in \epsilon} e ^ {w _ {(s , t)} ^ {\ell}}} \tag {5}
|
| 116 |
+
$$
|
| 117 |
+
|
| 118 |
+
$$
|
| 119 |
+
\tilde {\boldsymbol {r}} _ {s t} ^ {\ell} = \sum_ {s \in \mathcal {N} _ {t} \cup \{t \}} \boldsymbol {A} _ {(s, t)} ^ {\ell} \boldsymbol {r} _ {s t} ^ {\ell} \tag {6}
|
| 120 |
+
$$
|
| 121 |
+
|
| 122 |
+
where $f_{w}^{\ell}$ is a 2-layer MLP. $\mathcal{N}_t$ is the set of node $t$ 's incoming neighbors. We then compute the complete node message from $s$ to $t$ as
|
| 123 |
+
|
| 124 |
+
$$
|
| 125 |
+
\tilde {h} _ {s} ^ {\ell} = f _ {m} \left(h _ {s} ^ {\ell}, \tilde {r} _ {s t} ^ {\ell}\right) \tag {7}
|
| 126 |
+
$$
|
| 127 |
+
|
| 128 |
+
where $f_{m}$ denotes a linear fully connected layer. $h_s^0$ is the initial embedding for node $s$ .
|
| 129 |
+
|
| 130 |
+
The embedding of each node $s$ is updated to $\tilde{h}_s^\ell$ , which depends on the edges adjacent to node $s$ . For each neighboring edge, the edge weight $A_{(i,j)}^\ell$ rescales that edge's influence on the message update of node $s$ . Through this soft pruning method, we integrate the essential edge information into the node features. In the subsequent message aggregation and passing, the node features on the hybrid subgraph are strongly contextualized.
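The edge-weighted updating of Eqs. 4-7 can be sketched as follows (an illustration under our own assumptions: edge scores are normalized over all edges of the subgraph as in Eq. 5, the rescaled edge feature is combined per edge, and names such as `EdgeWeightedMessage` are not from the paper):

```python
import torch
import torch.nn as nn

class EdgeWeightedMessage(nn.Module):
    """Sketch of Eqs. 4-7: [CNode]-conditioned edge attention rescales edge
    features before they are fused into the per-edge node messages."""

    def __init__(self, dim):
        super().__init__()
        self.f_w = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.f_m = nn.Linear(2 * dim, dim)  # linear fully connected layer (Eq. 7)

    def forward(self, h, r, edge_index, cnode):
        # h: (N, D) node states; r: (E, D) edge features; cnode: (D,) [CNode] state.
        src, tgt = edge_index
        # Eq. 4: edge score from the global context [CNode] and the edge feature.
        w = self.f_w(torch.cat([cnode.expand(r.size(0), -1), r], dim=-1)).squeeze(-1)
        # Eq. 5: normalize edge scores over the edges of the subgraph.
        a = torch.softmax(w, dim=0)
        # Eq. 6: attention-rescaled edge feature for each edge.
        r_tilde = a.unsqueeze(-1) * r
        # Eq. 7: fuse the source node state with the rescaled edge feature.
        return self.f_m(torch.cat([h[src], r_tilde], dim=-1))
```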
|
| 131 |
+
|
| 132 |
+
Message Aggregation and Passing. For message passing, we use the multi-head attention GAT (Velickovic et al., 2018), which induces node representation through iterative message passing between neighbors on the graph. Specifically, in the $\ell$ -th layer of ACENet, we update the representation of each node $t$ to:
|
| 133 |
+
|
| 134 |
+
$$
|
| 135 |
+
\boldsymbol{h}_{t}^{\ell+1} = \big\Vert_{k=1}^{K} f_{n}\left(\sum_{s \in \mathcal{N}_{t} \cup \{t\}} \alpha_{st}^{k} \tilde{\boldsymbol{h}}_{s}^{\ell}\right) \tag {8}
|
| 136 |
+
$$
|
| 137 |
+
|
| 138 |
+
where $||$ represents concatenation, $\alpha_{st}^{k}$ are normalized attention coefficients computed by the $k$ -th attention mechanism $(\alpha^{k})$ , $\mathcal{N}_t$ represents the neighborhood of an arbitrary node $t$ , and $f_{n}$ is a 2-layer MLP. Note that, in this setting, the final returned output, $h_t$ , will consist of the important edge-wise and node-wise features for each node.
|
| 139 |
+
|
| 140 |
+
Then, we use multi-head attention to compute the attention weight $\alpha_{st}$ from node $s$ to node $t$ . The query and key vectors are obtained by
|
| 141 |
+
|
| 142 |
+
$$
|
| 143 |
+
\boldsymbol {q} _ {s} = f _ {q} \left(\tilde {\boldsymbol {h}} _ {s} ^ {\ell}\right), \boldsymbol {k} _ {t} = f _ {k} \left(\tilde {\boldsymbol {h}} _ {t} ^ {\ell}\right) \tag {9}
|
| 144 |
+
$$
|
| 145 |
+
|
| 146 |
+
where $f_{q}$ and $f_{k}$ are linear transformations. Experimental results show that the degree feature of nodes is also crucial, so we incorporate the degree feature $d_{s}$ into the local node attention weight, which is computed as follows:
|
| 147 |
+
|
| 148 |
+
$$
|
| 149 |
+
\alpha_ {s t} = \frac {\exp \left(\gamma_ {s t}\right)}{\sum_ {t ^ {\prime} \in \mathcal {N} _ {s} \cup \{s \}} \exp \left(\gamma_ {s t ^ {\prime}}\right)} \cdot d _ {s}, \gamma_ {s t} = \frac {\boldsymbol {q} _ {s} \boldsymbol {k} _ {t}}{\sqrt {D}} \tag {10}
|
| 150 |
+
$$
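A rough sketch of one aggregation layer (Eqs. 8-10), assuming $\tilde{h}$ holds the per-node, edge-fused features from Eq. 7, `degree` is a float tensor of node degrees $d_s$, `f_q`, `f_k`, `f_n` are per-head modules (e.g., an `nn.ModuleList`), and the softmax is taken over each node's incoming edges, which is one possible reading of Eq. 10:

```python
import torch

def gat_aggregate(h_tilde, edge_index, degree, f_q, f_k, f_n, num_heads, dim):
    """One multi-head attention layer: degree-scaled scaled-dot-product attention
    over incoming edges (self-loops assumed present), concatenated over heads."""
    src, tgt = edge_index
    outs = []
    for k in range(num_heads):
        q = f_q[k](h_tilde)                                  # Eq. 9: queries
        key = f_k[k](h_tilde)                                # Eq. 9: keys
        gamma = (q[src] * key[tgt]).sum(-1) / dim ** 0.5     # Eq. 10: scores
        alpha = torch.zeros_like(gamma)
        for t in tgt.unique():                               # softmax per target node
            mask = tgt == t
            alpha[mask] = torch.softmax(gamma[mask], dim=0)
        alpha = alpha * degree[src]                          # Eq. 10: degree rescaling
        agg = torch.zeros_like(h_tilde)
        agg.index_add_(0, tgt, alpha.unsqueeze(-1) * h_tilde[src])  # Eq. 8 sum
        outs.append(f_n[k](agg))
    return torch.cat(outs, dim=-1)                           # Eq. 8: || over heads
```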
|
| 151 |
+
|
| 152 |
+
Subgraph Information Interaction. In the above process, we execute the message aggregation and passing of a single GAT layer, in which [CNode] aggregates information from the other nodes of its subgraph. To further strengthen the correct-choice information and the perception of the overall QA context, we add a knowledge interaction layer between consecutive GAT layers to fuse the global representations of the subgraphs $\mathcal{G}_i$ (shown in Figure 2).
|
| 155 |
+
|
| 156 |
+
# 3.4 Answer and Explain
|
| 157 |
+
|
| 158 |
+
We now discuss the learning and inference process of ACENet instantiated for commonsense QA tasks. By integrating the knowledge of the PLM, the retrieved KGs, and the interaction information among choices, we compute the probability of $c_{i}$ being the correct answer as:
|
| 159 |
+
|
| 160 |
+
$$
|
| 161 |
+
p\left(c_{i} \mid q, c\right) \propto \exp\left(\mathrm{MLP}\left(\mathcal{C}^{LM}, \mathcal{G}^{KIL}, \mathcal{G}\right)\right) \tag {11}
|
| 162 |
+
$$
|
| 163 |
+
|
| 164 |
+
where $\mathcal{C}^{LM}$ is the initial embedding of the q-c pair obtained from the PLM, $\mathcal{G}^{KIL}$ is the knowledge interaction representation of the q-c pair across the different subgraphs, and $\mathcal{G}$ denotes attention-based pooling over the last GNN layer's representation.
|
| 165 |
+
|
| 166 |
+
The whole model is trained end-to-end jointly with the PLM (e.g., RoBERTa (Liu et al., 2019)) using the cross-entropy loss. Finally, we select the choice with the highest probability score as the answer.
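A minimal sketch of the scoring head in Eq. 11 and the training objective (layer sizes and the name `AnswerScorer` are illustrative assumptions, not the paper's implementation):

```python
import torch
import torch.nn as nn

class AnswerScorer(nn.Module):
    """Sketch of Eq. 11: one scalar score per choice from the PLM embedding,
    the KIL representation, and the pooled last-layer GNN representation."""

    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, c_lm, g_kil, g_pooled):
        # Each input: (batch, num_choices, dim); returned logits: (batch, num_choices).
        return self.mlp(torch.cat([c_lm, g_kil, g_pooled], dim=-1)).squeeze(-1)

def qa_loss(logits, gold_idx):
    # The softmax over choices is implicit in the cross-entropy against the gold answer.
    return nn.functional.cross_entropy(logits, gold_idx)
```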
|
| 167 |
+
|
| 168 |
+
# 4 Experiments
|
| 169 |
+
|
| 170 |
+
In this section, we conduct experiments on two commonsense QA benchmarks to answer the following research questions.
|
| 171 |
+
|
| 172 |
+
- RQ1: Does ACENet outperform state-of-the-art baselines?
|
| 173 |
+
- RQ2: How do the individual model components and the training data affect ACENet?
|
| 174 |
+
- RQ3: What is the performance of ACENet on different types of complex questions?
|
| 175 |
+
- RQ4: How does ACENet behave, intuitively, during the reasoning process?
|
| 176 |
+
|
| 177 |
+
# 4.1 Experimental Settings
|
| 178 |
+
|
| 179 |
+
# 4.1.1 Datasets
|
| 180 |
+
|
| 181 |
+
We conduct experiments to evaluate our approach on two commonsense QA benchmarks: CommonsenseQA and OpenBookQA.
|
| 182 |
+
|
| 183 |
+
CommonsenseQA (Talmor et al., 2019) is a 5-way multiple-choice question answering dataset of 12,102 questions that require background commonsense knowledge beyond surface language understanding. The test set of CommonsenseQA is not publicly available, and model predictions can only be evaluated every two weeks via the official leaderboard. We perform our experiments using the in-house (IH) data split of Lin et al. (2019) to compare to baseline methods.
|
| 186 |
+
|
| 187 |
+
OpenBookQA (Mihaylov et al., 2018) is a 4-way multiple-choice question answering dataset that tests elementary scientific knowledge. It contains 5,957 questions along with an open book of scientific facts, and we use the official data split. OpenBookQA also provides a collection of background facts in textual form; we use the correspondence between these facts and their questions, prepared by Clark et al. (2020), as an additional input to the context module.
|
| 188 |
+
|
| 189 |
+
# 4.1.2 Implementation Details
|
| 190 |
+
|
| 191 |
+
Following previous work (Yasunaga et al., 2021), we use ConceptNet (Speer et al., 2017), a general-domain knowledge graph, as our structured knowledge source. Node embeddings are initialized with the entity embeddings prepared by Feng et al. (2020), which applies pre-trained LMs to all triples in ConceptNet and then obtains a pooled representation for each entity. Given each q-c pair (question and answer choice), we retrieve the top 200 nodes and their adjacent edges according to the node relevance score, following Yasunaga et al. (2021). We set the dimension to $\mathrm{D} = 200$ and the number of GNN layers to $\mathrm{L} = 5$ , with a dropout rate of 0.2 applied to each layer (Srivastava et al., 2014). The batch size on CommonsenseQA and OpenBookQA is chosen from $\{64, 128\}$ . We train the model with the RAdam optimizer (Liu et al., 2020) using two GPUs (Tesla V100), which takes about 20 hours on average. We use separate learning rates for the LM module and the GNN module, chosen from $\{1e-5, 2e-5, 3e-5\}$ and $\{5e-4, 1e-3, 2e-3\}$ , respectively. The above hyperparameters are tuned on the development set.
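For example, the separate learning rates can be realized with two optimizer parameter groups; a sketch assuming a recent PyTorch version that provides `torch.optim.RAdam`, with `lm_module`/`gnn_module` as illustrative names:

```python
import torch

def build_optimizer(lm_module, gnn_module, lm_lr=1e-5, gnn_lr=1e-3):
    """Two parameter groups so the PLM and the GNN are tuned at different rates."""
    return torch.optim.RAdam(
        [
            {"params": lm_module.parameters(), "lr": lm_lr},
            {"params": gnn_module.parameters(), "lr": gnn_lr},
        ]
    )
```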
|
| 192 |
+
|
| 193 |
+
# 4.1.3 Compared Methods
|
| 194 |
+
|
| 195 |
+
Although text corpora can provide knowledge complementary to knowledge graphs, our model focuses on exploiting the KG and on joint reasoning among the different choices, the LM, and the KG, so we choose LM+KG methods for comparison.
|
| 196 |
+
|
| 197 |
+
To further investigate the enhancement effects of KGs on CSQA tasks, we compare with a vanilla fine-tuned LM that does not use the KG. We use RoBERTa-large for CommonsenseQA, and RoBERTa-large and AristoRoBERTa for OpenBookQA. In addition, the LM+KG methods share a similar high-level framework with ours: they typically use an LM as the text encoder and a GNN or RN for KG message aggregation and passing, but they differ in the specific knowledge used and the joint reasoning method: (1) RN (Santoro et al., 2017), (2) RGCN (Schlichtkrull et al., 2018), (3) GconAttn (Wang et al., 2019), (4) KagNet (Lin et al., 2019), (5) MHGRN (Feng et al., 2020), (6) HGN (Yan et al., 2021), (7) JointLK (Sun et al., 2022), (8) QA-GNN (Yasunaga et al., 2021), (9) GREASELM (Zhang et al., 2022). (1), (2), and (3) are relation-aware GNNs for KGs, and (4) and (5) further model paths in KGs. (6) generates the missing edges of subgraphs for reasoning. (7), (8), and (9) construct a joint reasoning graph, which enhances the interaction of multi-modal knowledge. For fairness, we use the same LM for all comparison methods and our model. The key difference between ACENet and these methods is that they do not simultaneously consider the interference effects among choices or the importance of different edge and node features.
|
| 200 |
+
|
| 201 |
+
# 4.2 Main Results (RQ1)
|
| 202 |
+
|
| 203 |
+
The results on the CommonsenseQA in-house split are shown in Table 1, and the results on the OpenBookQA test set are shown in Table 2. We repeat each experiment four times and report the mean and standard deviation of accuracy.
|
| 204 |
+
|
| 205 |
+
<table><tr><td>Methods</td><td>IHdev-Acc. (%)</td><td>IHtest-Acc. (%)</td></tr><tr><td>RoBERTa-large (w/o KG)</td><td>73.07 (±0.45)</td><td>68.69 (±0.56)</td></tr><tr><td>+RGCN</td><td>72.69 (±0.19)</td><td>68.41 (±0.66)</td></tr><tr><td>+GconAttn</td><td>72.61 (±0.39)</td><td>68.59 (±0.96)</td></tr><tr><td>+RN</td><td>74.57 (±0.91)</td><td>69.08 (±0.21)</td></tr><tr><td>+KagNet</td><td>73.47 (±0.22)</td><td>69.01 (±0.76)</td></tr><tr><td>+MHGRN</td><td>74.45 (±0.10)</td><td>71.11 (±0.81)</td></tr><tr><td>+HGN</td><td>-</td><td>73.64 (±0.30)</td></tr><tr><td>+QA-GNN</td><td>76.54 (±0.21)</td><td>73.41 (±0.92)</td></tr><tr><td>+JointLK</td><td>77.88 (±0.25)</td><td>74.43 (±0.83)</td></tr><tr><td>+GREASELM</td><td>78.50 (±0.50)</td><td>74.20 (±0.40)</td></tr><tr><td>+ACENet (Ours)</td><td>78.54 (±0.45)</td><td>74.72 (±0.70)</td></tr></table>
|
| 206 |
+
|
| 207 |
+
Table 1: Performance comparison on CommonsenseQA in-house split. We follow the data division method of Lin et al. (2019) and report the in-house Dev (IHdev) and Test (IHtest) accuracy.
|
| 208 |
+
|
| 209 |
+
As shown on both datasets, our proposed model ACENet outperforms previous methods. We observe consistent improvements over fine-tuned LMs and existing LM+KG models. The boost over QA-GNN suggests that ACENet makes better use of inter-choice interaction information than existing LM+KG methods.
|
| 212 |
+
|
| 213 |
+
<table><tr><td>Methods</td><td>RoBERTa-Large</td><td>AristoRoBERTa</td></tr><tr><td>Fine-tuned LMs (w/o KG)</td><td>64.80 (±2.37)</td><td>78.40 (±1.64)</td></tr><tr><td>+RGCN</td><td>62.45 (±1.57)</td><td>74.60 (±2.53)</td></tr><tr><td>+GconAttn</td><td>64.75 (±1.48)</td><td>71.80 (±1.21)</td></tr><tr><td>+RN</td><td>65.20 (±1.18)</td><td>75.35 (±1.39)</td></tr><tr><td>+MHGRN</td><td>66.85 (±1.19)</td><td>80.60</td></tr><tr><td>+JointLK</td><td>70.34 (±0.75)</td><td>84.92 (±1.07)</td></tr><tr><td>+QA-GNN</td><td>67.80 (±2.75)</td><td>82.77 (±1.56)</td></tr><tr><td>+GREASELM</td><td>-</td><td>84.80</td></tr><tr><td>+ACENet (Ours)</td><td>70.47 (±0.12)</td><td>83.40 (±0.14)</td></tr></table>
|
| 214 |
+
|
| 215 |
+
# 4.3 Ablation Studies (RQ2)
|
| 216 |
+
|
| 217 |
+
We further conduct specific experiments to investigate the effectiveness of different components in our model.
|
| 218 |
+
|
| 219 |
+
Impact of Model Components. We add each model component individually and report the accuracy on the CommonsenseQA IHdev set in Table 3. Adding the edge&node attention mechanism leads to a $0.79\%$ improvement in performance, which shows that some nodes and edges are not conducive to reasoning. Additionally, adding the KIL (GNN) module yields a significant improvement, $76.33\% \rightarrow 77.56\% (+1.23\%)$ , suggesting that the interaction among different choices is essential during message passing. Meanwhile, our KIL (PLM) provides a better initial representation of the q-c pairs, which is also critical.
|
| 220 |
+
|
| 221 |
+
Table 2: Test accuracy comparison on OpenBookQA. Methods with AristoRoBERTa use the textual evidence by Clark et al. (2020) as an additional input to the QA context.
|
| 222 |
+
|
| 223 |
+
<table><tr><td>Model</td><td>Dev Acc.</td></tr><tr><td>None</td><td>76.33</td></tr><tr><td>(a) w/ KIL(PLM)</td><td>76.67</td></tr><tr><td>(b) w/ KIL(GNN)</td><td>77.56</td></tr><tr><td>(c) w/ Edge&Node Attention</td><td>77.12</td></tr><tr><td>(d) w/all (final)</td><td>78.54</td></tr></table>
|
| 224 |
+
|
| 225 |
+
Impact of Less Labeled Training Data. Table 4 shows the results of our model and the baselines when trained with less training data on CommonsenseQA. Even with less training data, our model still achieves the best test accuracy, which suggests that incorporating the knowledge of external KGs and of multiple choices is helpful for commonsense reasoning in the low-resource setting.
|
| 228 |
+
|
| 229 |
+
Table 3: Ablation study of our model components (adding one component each time), using the CommonsenseQA IHdev set.
|
| 230 |
+
|
| 231 |
+
<table><tr><td rowspan="2">Methods</td><td colspan="2">RoBERTa-Large</td></tr><tr><td>60%Train</td><td>100%Train</td></tr><tr><td>LM Finetuning</td><td>65.56 (±0.76)</td><td>68.69 (±0.56)</td></tr><tr><td>RN</td><td>66.16 (±0.28)</td><td>70.08 (±0.21)</td></tr><tr><td>MHGRN</td><td>68.84 (±1.06)</td><td>71.11 (±0.81)</td></tr><tr><td>HGN</td><td>71.10 (±0.11)</td><td>73.64 (±0.30)</td></tr><tr><td>QA-GNN</td><td>70.27 (±0.35)</td><td>73.41 (±0.92)</td></tr><tr><td>GREASELM</td><td>71.08 (±0.52)</td><td>74.20 (±0.40)</td></tr><tr><td>ACENet (Ours)</td><td>71.31 (±0.42)</td><td>74.72 (±0.70)</td></tr></table>
|
| 232 |
+
|
| 233 |
+
Table 4: Performance (accuracy) with different amounts of training data on the CommonsenseQA IHtest set (same split as Lin et al. (2019)).
|
| 234 |
+
|
| 235 |
+
Impact of Number of Layers $(L)$ and Heads $(H)$ . To give further insight into the factors behind the capacity of our model, we investigate the impact of the number of layers and heads in the reasoning process. Figure 4 shows the performance of our model with different numbers of layers and heads. We observe that increasing the number of layers and heads within a certain range improves the performance of our model. The intuitive explanation is that multiple heads help the model focus on multiple knowledge rules, while multiple layers help the model recursively select the relevant knowledge rules (Paul and Frank, 2020).
|
| 236 |
+
|
| 237 |
+

|
| 238 |
+
Figure 4: Performance of ACENet model with different numbers of Heads and numbers of GNN Layers on CommonsenseQA IHdev set.
|
| 239 |
+
|
| 240 |
+
However, performance begins to drop gradually when $\mathrm{H} = 1$ , 2 and $\mathrm{L} > 5$ , or $\mathrm{H} = 4$ and $\mathrm{L} > 4$ . A widely accepted explanation for the performance degradation with increasing GNN depth is the over-smoothing effect (Chien et al., 2020). Therefore, we set $\mathrm{L} = 5$ , $\mathrm{H} = 2$ to best balance their utility. Compared with the baselines, our model achieves better results across different numbers of layers
|
| 241 |
+
|
| 242 |
+
<table><tr><td rowspan="2">Model</td><td colspan="2">Negation Term</td><td colspan="3">Number of Question Prepositions</td><td colspan="2">Number of Question Entities</td></tr><tr><td>w/o negation</td><td>w/ negation</td><td>0</td><td>1</td><td>≥2</td><td>≤10 entities</td><td>>10 entities</td></tr><tr><td>Number</td><td>1107</td><td>114</td><td>551</td><td>464</td><td>206</td><td>1012</td><td>209</td></tr><tr><td>QA-GNN</td><td>77.78</td><td>71.93</td><td>77.86</td><td>76.51</td><td>77.18</td><td>76.98</td><td>78.47</td></tr><tr><td>GREASELM</td><td>79.31</td><td>74.56</td><td>79.31</td><td>76.94</td><td>80.58</td><td>77.57</td><td>83.73</td></tr><tr><td>ACENet (Ours)</td><td>79.49</td><td>75.44</td><td>79.49</td><td>77.59</td><td>81.56</td><td>78.66</td><td>81.34</td></tr></table>
|
| 243 |
+
|
| 244 |
+
Table 5: Performance on different types of complex questions. The questions are retrieved from the CommonsenseQA IHdev set.
|
| 245 |
+
|
| 246 |
+

|
| 247 |
+
Figure 5: Ablation study on stacked of GNN layers.
|
| 248 |
+
|
| 249 |
+
(shown in Figure 5).
|
| 250 |
+
|
| 251 |
+
# 4.4 Quantitative Analysis (RQ3)
|
| 252 |
+
|
| 253 |
+
Given these overall performance improvements, we further analyze whether they are reflected in questions that require more complex reasoning. We characterize the reasoning complexity of questions by properties such as the presence of negation and the number of prepositions and entities. We compare our model with the stronger prior baselines in Table 5.
|
| 254 |
+
|
| 255 |
+
First, our model exhibits a large boost $(+3.51\%, +0.88\%)$ over QA-GNN and GREASELM on questions with a negation term (e.g., no, not, never, etc.), suggesting its strength in negative reasoning. The number of prepositions (e.g., in, on, of, with, etc.) in a question usually reflects the number of explicit reasoning constraints; Table 5 shows that our model generally outperforms the baselines across questions with different numbers of prepositions. Additionally, the number of question entities approximately indicates the scale of the retrieved reasoning graph. Our model achieves better results $(+1.68\%, +1.09\%)$ than QA-GNN and GREASELM on most questions ( $\leq 10$ entities), while our model and the prior best model, GREASELM, perform comparably on larger retrieved graphs.
|
| 258 |
+
|
| 259 |
+
# 4.5 Qualitative Analysis (RQ4)
|
| 260 |
+
|
| 261 |
+
Figure 6 shows the choice-to-choice attention weights induced by the KIL layers of our model at different stages. Using external KGs, our model strengthens the correct-choice information over multiple layers of interaction and arrives at the right answer, while QA-GNN and GREASELM make incorrect predictions. We also analyze whether different heads focus on multiple knowledge rules. In Figure 6, we observe that the two heads attend to different choice-related knowledge during message aggregation and passing. First, the attention of both heads reflects the key reasoning information in the first several KILs but gradually averages out by the final layer. Head<sub>1</sub> primarily focuses on "pay bills" across the different KILs, which provides strong evidence for the correct answer. In addition, the attention weights on "buy food" and "get things" become higher in head<sub>2</sub>, which also helps our model select the relevant knowledge. As a whole, our model integrates the different knowledge rules mined by each head to make the correct prediction.
|
| 262 |
+
|
| 263 |
+
# 4.6 Analysis of Experimental Results
|
| 264 |
+
|
| 265 |
+
Our hypothesis for why ACENet outperforms the other baselines is that the receptive field of the subgraph nodes is expanded by the interaction across multiple Knowledge Interaction Layers, and that, through the aggregation and propagation of the multi-layer graph neural network, each node becomes more aware of non-local information. However, explaining the results of neural networks requires strenuous effort. More broadly, this method could be extended to more general settings in other tasks (e.g., document modeling, reading comprehension, information extraction, etc.).
|
| 266 |
+
|
| 267 |
+
Question: August needed money because he was afraid that he'd be kicked out of his house. What did he need money to do?
|
| 268 |
+
|
| 269 |
+
A. control people B. $\checkmark$ pay bills (Ours) C. hurt people D. $\times$ buy food (GREASELM) E. $\times$ get things (QA-GNN)
|
| 270 |
+
|
| 271 |
+

|
| 272 |
+
Figure 6: Qualitative analysis of ACENet's inter-choice attention weight changes across multiple knowledge interaction layers in different heads.
|
| 273 |
+
|
| 274 |
+
# 5 Conclusions
|
| 275 |
+
|
| 276 |
+
In this paper, we propose a multi-head attention knowledge interaction layer to strengthen correct-choice information and capture nuances among different choices. Meanwhile, a mixed attention mechanism over nodes and edges is introduced into message passing to iteratively select relevant knowledge in the hybrid knowledge graph. Experimental results on CommonsenseQA and OpenBookQA demonstrate the superiority of ACENet over other LM+KG methods and its strong performance in handling complex questions. In future work, we plan to further investigate the augmenting effects of knowledge graphs for reasoning, and to integrate neural and symbolic reasoning systems to achieve dual-system cognitive intelligence.
|
| 277 |
+
|
| 278 |
+
# Limitations
|
| 279 |
+
|
| 280 |
+
Although our model achieves competitive performance on commonsense question answering tasks, there are aspects that can still be improved. The limitations of our study are summarized as follows:
|
| 281 |
+
|
| 282 |
+
1) GNNs incorporate external knowledge implicitly in the process of message aggregation and passing. Therefore, existing KG-augmented methods are usually not interpretable enough.
|
| 283 |
+
2) The optimal number of GNN layers in our model is determined empirically. However, the scale of the knowledge graphs is often uncertain in real application scenarios, so we cannot guarantee that a specific number of GNN layers will achieve appropriate performance. How to design depth-adaptive GNNs that balance efficiency and effectiveness is a key challenge.
|
| 286 |
+
|
| 287 |
+
3) At present, our approach of using the interaction between choices to strengthen correct-choice information is only suitable for question answering tasks with a limited scope of answer choices.
|
| 288 |
+
|
| 289 |
+
# Ethics Statement
|
| 290 |
+
|
| 291 |
+
This paper proposes a general approach to fuse QA context, language models, and external knowledge graphs for commonsense reasoning. We work within the purview of acceptable privacy practices and strictly follow the data usage policy. In all experiments, we use public datasets in a manner consistent with their intended use. We have also described our experimental settings in detail, which ensures the reproducibility of our method. We neither introduce any social/ethical bias into the model nor amplify any bias in the data, so we do not foresee any direct social consequences or ethical issues.
|
| 292 |
+
|
| 293 |
+
# Acknowledgments
|
| 294 |
+
|
| 295 |
+
This work is supported in part by the Natural Science Foundation of China (grants No. 62276188 and No. 61876129), the Beijing Academy of Artificial Intelligence (BAAI), TJU-Wenge joint laboratory funding, and MindSpore.
|
| 296 |
+
|
| 297 |
+
# References
|
| 298 |
+
|
| 299 |
+
Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen-tau Yih, and Yejin Choi. 2020. Abductive commonsense reasoning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
|
| 302 |
+
Eli Chien, Jianhao Peng, Pan Li, and Olgica Milenkovic. 2020. Adaptive universal generalized pagerank graph neural network. ArXiv preprint, abs/2006.07988.
|
| 303 |
+
Peter Clark, Oren Etzioni, Tushar Khot, Daniel Khashabi, Bhavana Mishra, Kyle Richardson, Ashish Sabharwal, Carissa Schoenick, Oyvind Tafjord, Niket Tandon, et al. 2020. From 'F' to 'A' on the NY Regents science exams: An overview of the Aristo project. AI Magazine, 41(4):39-53.
|
| 304 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 305 |
+
Yanlin Feng, Xinyue Chen, Bill Yuchen Lin, Peifeng Wang, Jun Yan, and Xiang Ren. 2020. Scalable multihop relational reasoning for knowledge-aware question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1295-1309, Online. Association for Computational Linguistics.
|
| 306 |
+
Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. 2017. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1263-1272. PMLR.
|
| 307 |
+
David Gunning. 2018. Machine common sense concept paper. ArXiv preprint, abs/1810.07528.
|
| 308 |
+
Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. KagNet: Knowledge-aware graph networks for commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2829-2839, Hong Kong, China. Association for Computational Linguistics.
|
| 309 |
+
Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2020. On the variance of the adaptive learning rate and beyond. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
|
| 310 |
+
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv preprint, abs/1907.11692.
|
| 313 |
+
Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2381-2391, Brussels, Belgium. Association for Computational Linguistics.
|
| 314 |
+
Debjit Paul and Anette Frank. 2020. Social commonsense reasoning with multi-head knowledge attention. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2969-2980, Online. Association for Computational Linguistics.
|
| 315 |
+
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63(10):1872-1897.
|
| 316 |
+
Adam Santoro, David Raposo, David G. T. Barrett, Mateusz Malinowski, Razvan Pascanu, Peter W. Battaglia, and Tim Lillicrap. 2017. A simple neural network module for relational reasoning. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4967-4976.
|
| 317 |
+
Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In European semantic web conference, pages 593-607. Springer.
|
| 318 |
+
Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 4444-4451. AAAI Press.
|
| 319 |
+
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958.
|
| 320 |
+
Yueqing Sun, Qi Shi, Le Qi, and Yu Zhang. 2022. JointLK: Joint reasoning with language models and knowledge graphs for commonsense question answering. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5049-5060, Seattle, United States. Association for Computational Linguistics.
|
| 321 |
+
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149-4158, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 324 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
|
| 325 |
+
Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph attention networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
|
| 326 |
+
Kuan Wang, Yuyu Zhang, Diyi Yang, Le Song, and Tao Qin. 2021. Gnn is a counter? revisiting gnn for question answering. ArXiv preprint, abs/2110.03192.
|
| 327 |
+
Xiaoyan Wang, Pavan Kapanipathi, Ryan Musa, Mo Yu, Kartik Talamadupula, Ibrahim Abdelaziz, Maria Chang, Achille Fokoue, Bassem Makni, Nicholas Mattei, et al. 2019. Improving natural language inference using external knowledge in the science questions domain. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7208-7215.
|
| 328 |
+
Yichong Xu, Chenguang Zhu, Ruochen Xu, Yang Liu, Michael Zeng, and Xuedong Huang. 2021. Fusing context into knowledge graph for commonsense question answering. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1201-1207, Online. Association for Computational Linguistics.
|
| 329 |
+
Jun Yan, Mrigank Raman, Aaron Chan, Tianyu Zhang, Ryan Rossi, Handong Zhao, Sungchul Kim, Nedim Lipka, and Xiang Ren. 2021. Learning contextualized knowledge structures for commonsense reasoning. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4038-4051, Online. Association for Computational Linguistics.
|
| 330 |
+
Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, and Jure Leskovec. 2021. QA-GNN: Reasoning with language models and knowledge graphs for question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 535-546, Online. Association for Computational Linguistics.
|
| 331 |
+
Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and TieYan Liu. 2021. Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems, 34.
|
| 332 |
+
|
| 333 |
+
Donghan Yu, Chenguang Zhu, Yiming Yang, and Michael Zeng. 2020. Jaket: Joint pre-training of knowledge graph and language understanding. ArXiv preprint, abs/2010.00796.
|
| 334 |
+
Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D Manning, and Jure Leskovec. 2022. Greaselm: Graph reasoning enhanced language models for question answering. ArXiv preprint, abs/2201.08860.
|
acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ffdd5f0fc3513fdb6eed2019907db2a62e51849f6be75185f8b7cdf9c0a757cd
|
| 3 |
+
size 461277
|
acenetattentionguidedcommonsensereasoningonhybridknowledgegraph/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:487184a0c26e97832ef0bc5d62c1a28194325d2d0db39ce2137a5604ee4edf67
|
| 3 |
+
size 370418
|
acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/d92cb11c-2d95-4641-9dad-f4f9fd8a5b1f_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:21d34d9fa21af35e106de7ec7a9e55048a4b5b37c897d1f45a80945757897670
|
| 3 |
+
size 79845
|
acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/d92cb11c-2d95-4641-9dad-f4f9fd8a5b1f_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a398e2bc17c7a6c4b527a8ed9e90d1063a22f7db8d4dfba2cb61cdb62796b45c
|
| 3 |
+
size 92173
|
acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/d92cb11c-2d95-4641-9dad-f4f9fd8a5b1f_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:5a5fd2191ce6cd89dd7e635e99e0743ffad7080481b6490d106f29a82c353bcf
|
| 3 |
+
size 1072784
|
acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/full.md
ADDED
|
@@ -0,0 +1,297 @@
|
| 1 |
+
# A Comprehensive Comparison of Neural Networks as Cognitive Models of Inflection
|
| 2 |
+
|
| 3 |
+
Adam Wiemerslage and Shiran Dudy and Katharina Kann
|
| 4 |
+
|
| 5 |
+
University of Colorado Boulder
|
| 6 |
+
|
| 7 |
+
first.last@colorado.edu
|
| 8 |
+
|
| 9 |
+
# Abstract
|
| 10 |
+
|
| 11 |
+
Neural networks have long been at the center of a debate around the cognitive mechanism by which humans process inflectional morphology. This debate has gravitated into NLP by way of the question: Are neural networks a feasible account for human behavior in morphological inflection? We address that question by measuring the correlation between human judgments and neural network probabilities for unknown word inflections. We test a larger range of architectures than previously studied on two important tasks for the cognitive processing debate: English past tense, and German number inflection. We find evidence that the Transformer may be a better account of human behavior than LSTMs on these datasets, and that LSTM features known to increase inflection accuracy do not always result in more human-like behavior.
|
| 12 |
+
|
| 13 |
+
# 1 Introduction: The Past Tense Debate
|
| 14 |
+
|
| 15 |
+
Morphological inflection has historically been a proving ground for studying models of language acquisition. Rumelhart and McClelland (1985) famously presented a neural network that they claimed could learn English past tense inflection. However, Pinker and Prince (1988) proposed a dual-route theory for inflection, wherein regular verbs are inflected based on rules and irregular verbs are looked up in the lexicon. They highlighted several shortcomings of Rumelhart and McClelland (1985) that they claimed any neural network would suffer from.
|
| 16 |
+
|
| 17 |
+
This opened a line of work wherein cognitive theories of inflection are analyzed by implementing them as computational models and comparing their behavior to that of humans. A famous study in the area of morphology is the wug test (Berko, 1958), where human participants are prompted with a novel-to-them nonce word and asked to produce its plural form. Similarly, morphological inflection models are generally evaluated on words they have not seen during training.
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
Figure 1: Summary of the past tense debate as it pertains to this work, color coded by evidence for (blue) or against (red) neural networks as a cognitively plausible account for human behavior.
|
| 21 |
+
|
| 22 |
+
However, since they are evaluated on actual words, it is impossible to meaningfully ask a native speaker, who knows the words' inflected forms, how likely different reasonable inflections for the words in a model's test data are. Thus, in order to compare the behavior of humans and models on words unknown to both, prior work has created sets of made-up nonce words (Marcus et al., 1995; Albright and Hayes, 2003).
|
| 23 |
+
|
| 24 |
+
English Past Tense English verbs inflect to express the past and present tense distinction. Most verbs inflect for past tense by applying the /-d/, /-id/, or /-t/ suffix: allomorphs of the regular inflection class. Some verbs, however, express the past tense with a highly infrequent or completely unique inflection, forming the irregular inflection class. This distinction between regular and irregular inflection has motivated theories like the dual-route theory described above.
|
| 25 |
+
|
| 26 |
+
Prasada and Pinker (1993) performed a wug test for English past tense inflection in order to compare the model from Rumelhart and McClelland (1985) to humans, with special attention to how models behave with respect to regular vs. irregular forms, finding that it could not account for human generalizations. Albright and Hayes (2003, A&H) gathered production probabilities – i.e., the normalized frequencies of the inflected forms produced by participants – and ratings – i.e., the average rating assigned to a given past tense form on a well-formedness scale. They then implemented two computational models: a rule-based and an analogy-based model, and computed the correlation between the probabilities of past tense forms for nonce verbs under each model and according to humans. They found that the rule-based model more accurately accounts for nonce word inflection.
|
| 29 |
+
|
| 30 |
+
After several years of progress for neural networks, including state-of-the-art results on morphological inflection (Kann and Schütze, 2016; Cotterell et al., 2016), this debate was revisited by Kirov and Cotterell (2018, K&C), who examined modern neural networks. They trained a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) with attention (Bahdanau et al., 2015) on English past tense inflection and in experiments quantifying model accuracy on a held out set of real English verbs, they showed that it addresses many of the shortcomings pointed out by Pinker and Prince (1988). They concluded that the LSTM is, in fact, capable of modeling English past tense inflection. They also applied the model to the wug experiment from A&H and found a positive correlation with human production probabilities that was slightly higher than the rule-based model from A&H.
|
| 31 |
+
|
| 32 |
+
Corkery et al. (2019, C&al.) reproduced this experiment and additionally compared to the average human rating that each past tense form received in A&H's dataset. They found that the neural network from K&C produced probabilities that were sensitive to random initialization – showing high variance in the resulting correlations with humans – and typically did not correlate better than the rule-based model from A&H. They then designed an experiment where inflected forms were sampled from several different randomly initialized models, so that the frequencies of each form could be aggregated in a similar fashion to the adult production probabilities – but the results still favored A&H. They hypothesized that the model's overconfidence in the most likely inflection (i.e. the regular inflection class) leads to uncharacteristically low variance on predictions for unknown words.
|
| 33 |
+
|
| 34 |
+
German Noun Plural McCurdy et al. (2020a, M&al.) applied an LSTM to the task of German noun plural inflection to investigate a hypothesis from Marcus et al. (1995, M95), who attributed the outputs of neural models to their susceptibility to the most frequent pattern observed during training, stressing that, as a result, neural approaches fail to learn the patterns of infrequent groups.
|
| 37 |
+
|
| 38 |
+
German nouns inflect for the plural and singular distinction. There are five suffixes, none of which is considered a regular majority: /-(e)n/, /-e/, /-er/, /-s/, and /-Ø/. M95 had built a dataset of monosyllabic German noun wugs and investigated human behavior when inflecting the plural form, distinguishing between phonologically familiar environments (rhymes), and unfamiliar ones (non-rhymes). The German plural system, they argued, was an important test for neural networks since it presents multiple productive inflection rules, all of which are minority inflection classes by frequency. This is in contrast to the dichotomy of the regular and irregular English past tense. M&al. collected their own human production probabilities and ratings for these wugs, and then compared those to LSTM productions. Humans were prompted with each wug with the neuter determiner to control for the fact that neural inflection models of German noun plurals are sensitive to grammatical gender (Goebel and Indefrey, 2000), and because humans do not have a majority preference for monosyllabic, neuter nouns (Clahsen et al., 1992).
|
| 39 |
+
|
| 40 |
+
The /-s/ inflection class, which is highly infrequent, appears in a wide range of phonological contexts, which has led some researchers to suggest that it is the default class for German noun plurals, and thus the regular inflection, despite its infrequent use. M&al. found that humans preferred it more for non-rhymes than for rhymes, but the LSTM model showed the opposite preference, undermining the hypothesis that LSTMs model human generalization behavior. /-s/ was additionally predicted less accurately than the other inflection classes on a held-out test set of real noun inflections.
|
| 41 |
+
|
| 42 |
+
They found that the most frequent inflection class in the training data for monosyllabic neuter contexts, /-e/, was over-generalized by the LSTM compared to human productions. The most frequent class overall, /-(e)n/ (but infrequent in the neuter context), was applied by humans quite frequently to nonce nouns, but rarely by the LSTM. They additionally found that /-er/, which is as infrequent as /-s/, could be accurately predicted in the test set, and that the null inflection /-Ø/, which is generally frequent but extremely rare in the monosyllabic, neuter setting, was never predicted for the wugs. We refer to McCurdy et al. (2020a) for more details on the inflection classes and their frequencies, and additional discussion of their relevance to inflection behavior.
|
| 45 |
+
|
| 46 |
+
Ultimately, M&al. reported no correlation with human production probabilities for any inflection class. They concluded that modern neural networks still simply generalize the most frequent patterns to unfamiliar inputs.
|
| 47 |
+
|
| 48 |
+
Dankers et al. (2021) performed in-depth behavioral and structural analyses of German noun plural inflection by a unidirectional LSTM without attention. They argued that these modeling decisions made a more plausible model of human cognition. In a behavioral test they found that, like humans but unlike M&al., their model did predict $/-\mathrm{s}/$ more for non-rhymes than for rhymes, but the result was not statistically significant. They also found that $/-\mathrm{s}/$ was applied with a high frequency and attributed this to sensitivity to word length. For a visual of all studies discussed in this section, see Figure 1.
|
| 49 |
+
|
| 50 |
+
Our Contribution Most work on modern neural networks discussed here analyzes the same bidirectional LSTM with attention and draws a mixture of conclusions based on differing experimental setups. Dankers et al. (2021) changed the LSTM-based architecture, and found somewhat different results for German number inflection, though they did not investigate correlations with human ratings nor production probabilities in the same way as previous work. The limited variation of architectures in previous studies as well as inconsistent methods of comparison with human behavior prevent us from drawing definite conclusions about the adequacy of neural networks as models of human inflection.
|
| 51 |
+
|
| 52 |
+
Here, we present results on a wider range of LSTMs and a Transformer (Vaswani et al., 2017) model for both English past tense and German number inflection. We ask which architecture is the best account for human inflection behavior and, following M&al., investigate the actual model productions (and probabilities) for the German plural classes in order to qualitatively compare to human behavior. We additionally ask how architectural decisions for the LSTM encoder-decoder affect this correlation. Finally, we investigate the relationship between inflection accuracy on the test set and correlation with human wug ratings.
|
| 53 |
+
|
| 54 |
+
We find that the Transformer consistently correlates best with human ratings, producing probabilities that result in Spearman's $\rho$ in the range of 0.47-0.71 for several inflection classes, frequently higher than the LSTMs. However, when looking closely at the Transformer productions, it displays behavior that deviates from humans similarly to the LSTM in M&al., though to a lesser extent. While attention greatly increases LSTM accuracy on inflection, we also find that it does not always lead to better correlations with human wug ratings, and that the directionality of the encoder has more complicated implications. Finally, we find that there is no clear relationship between model accuracy and correlation with human ratings across all experiments, demonstrating that neural networks can solve the inflection task in its current setup without learning human-like distributions. While the Transformer experiments in this work demonstrate stronger correlations with human behavior, and some more human-like behaviors than before, our findings continue to cast doubt on the cognitive plausibility of neural networks for inflection.
|
| 57 |
+
|
| 58 |
+
# 2 Neural Morphological Inflection
|
| 59 |
+
|
| 60 |
+
# 2.1 Task Description
|
| 61 |
+
|
| 62 |
+
The experiments in this paper are centered around a natural language processing (NLP) task called morphological inflection, which consists of generating an inflected form for a given lemma and set of morphological features indicating the target form. It is typically cast as a character-level sequence-to-sequence task, where the characters of the lemma and the morphological features constitute the input, while the characters of the target inflected form are the output (Kann and Schütze, 2016):
|
| 63 |
+
|
| 64 |
+
$$
|
| 65 |
+
\mathrm{PST}\ \mathrm{cry} \rightarrow \mathrm{cried}
|
| 66 |
+
$$
|
| 67 |
+
|
| 68 |
+
Formally, let $\mathcal{S}$ be the set of paradigm slots expressed in a language and $l$ a lemma in that language. The set of all inflected forms – or paradigm – $\pi$ of $l$ is then defined as:
|
| 69 |
+
|
| 70 |
+
$$
|
| 71 |
+
\pi (l) = \left\{\left(f _ {k} [ l ], t _ {k}\right) \right\} _ {k \in \mathcal {S}} \tag {1}
|
| 72 |
+
$$
|
| 73 |
+
|
| 74 |
+
$f_{k}[l]$ denotes the inflection of $l$ which expresses tag $t_k$ , and $l$ and $f_{k}[l]$ represent strings consisting of letters from the language's alphabet $\Sigma$ .
|
| 75 |
+
|
| 76 |
+
The task of morphological inflection can then formally be described as predicting the form $f_{i}[l]$ from the paradigm of $l$ corresponding to tag $t_{i}$ .
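For illustration, a single training example can be represented as below (the UniMorph-style tag strings and variable names are ours, chosen for the example):

```python
# One morphological inflection example for the lemma "cry".
paradigm = {"PST": "cried", "V;PRS;3;SG": "cries"}   # tag -> inflected form (Eq. 1)

tag = "PST"
source = [tag] + list("cry")      # model input: target tag plus lemma characters
target = list(paradigm[tag])      # model output: characters of the inflected form
print(source, target)             # ['PST', 'c', 'r', 'y'] ['c', 'r', 'i', 'e', 'd']
```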
|
| 77 |
+
|
| 78 |
+
# 2.2 Models
|
| 79 |
+
|
| 80 |
+
Rumelhart and McClelland The original model of Rumelhart and McClelland (1985) preceded many of the features introduced by modern neural networks. For example, they use a feed-forward neural network to encode input sequences. This creates the requirement of coercing variable-length inputs into the fixed-size network. To solve this, they encode input words as fixed-length vectors representing the phonological distinctive feature sets for each trigram in that word. The neural network is then trained to map the features of an input form to a feature vector of a hypothesized output form. The loss is computed between the input feature sets and the feature set for an inflected output form encoded in the same way. At test time, they manually select candidate output forms for each input lemma in order to overcome the intractable decoding problem. The output form, then, is the candidate whose feature vector most closely resembles the model output. Beyond decoding problems, the order of input characters is not encoded, and unique words are represented with potentially identical phonological features.
|
| 83 |
+
|
| 84 |
+
LSTM The LSTM architecture (Hochreiter and Schmidhuber, 1997) overcomes several of the issues in Rumelhart and McClelland (1985), by way of a recurrent encoding and decoding mechanism, and reliance on character embeddings.
|
| 85 |
+
|
| 86 |
+
We experiment with several variations of the LSTM encoder-decoder (Sutskever et al., 2014; Cho et al., 2014) to test their behavior compared to humans. First, we vary directionality of the encoder under the assumption that bidirectional encoding leads to higher accuracy, but a unidirectional encoder may better resemble human processing. We additionally vary whether or not attention is used. Attention is typically a crucial feature to attaining high inflection accuracy. We expect that the same may also be true for assigning a cognitively plausible probability to a nonce inflection, by supplying the model with a mechanism to focus on only the relevant parts of the inflection.
|
| 87 |
+
|
| 88 |
+
This yields 4 LSTM-based variations. We refer to these models as BiLSTMAttn (BA; from K&C, C&al., and M&al.), UniLSTMAttn (UA), BiLSTMNoAttn (BN), and UniLSTMNoAttn (UN; from Dankers et al. (2021)).
|
| 89 |
+
|
| 90 |
+
Transformer Finally, we present results for a Transformer sequence-to-sequence model (Vaswani et al., 2017), following the implementation proposed for morphological inflection by Wu et al. (2021). Unlike LSTM-based models, the Transformer employs a self-attention mechanism such that each character representation can be computed in parallel as a function of all other characters. The position of each character is encoded with a special positional embedding. This means that the relation between each character in a word can be represented directly, rather than through a chain of functions via the LSTM recurrence. The Transformer is considered state-of-the-art for morphological inflection in terms of accuracy, which makes it an important comparison for this study. Some work has called into question the cognitive plausibility of Transformer self-attention in psycholinguistic experiments on word-level language models (Merkx and Frank, 2020), claiming that the direct access it provides to past input is cognitively implausible. It is not clear, though, that these arguments apply to character-level models for inflection, wherein words do not necessarily need to be processed one character at a time.
|
| 93 |
+
|
| 94 |
+
Hyperparameters We implement all LSTMs with pytorch (Paszke et al., 2019) and borrow hyperparameters from previous work on morphological inflection. For the LSTMs, we use the hyperparameters from K&C, which were based on the tuning done by Kann and Schütze (2016). For the Transformer, we follow the hyperparameters from the best model in Wu et al. (2021), but set label-smoothing to 0. In preliminary experiments, we found no significant impact of label smoothing on accuracy nor correlation with human behavior across inflection classes.
|
| 95 |
+
|
| 96 |
+
For all architectures, we follow C&al. and train 10 randomly initialized models. At test time, we decode with beam search with a width of 12. We train for up to 50 epochs because the architectures with fewer parameters tend to converge more slowly.
|
| 97 |
+
|
| 98 |
+
MGL A&H implement the Minimal Generalization Learner (MGL), which learns explicit rules (e.g. insertion of /-id/ if a verb ends in a /t/ or /d/) at varying levels of granularity. Each rule is associated with a confidence score for a given phonological environment based on its statistics in the train set. At test time, the rule with the highest confidence is applied to produce an inflection, and the confidences can be used to score various regular or irregular inflected forms. We compare to this model for English data, following previous work.
|
| 99 |
+
|
| 100 |
+
<table><tr><td></td><td>Dev Acc</td><td colspan="2">Test Acc</td><td colspan="2">Prod. Prob.</td><td colspan="2">Rating</td></tr><tr><td></td><td></td><td>reg</td><td>irreg</td><td>reg</td><td>irreg</td><td>reg</td><td>irreg</td></tr><tr><td>A&H MGL</td><td>-</td><td>99.7</td><td>38.0</td><td>.33</td><td>.30</td><td>.50</td><td>.49</td></tr><tr><td>K&C*</td><td>-</td><td>98.9</td><td>28.6</td><td>.48</td><td>.45</td><td>-</td><td>-</td></tr><tr><td>C&al. Agg.**</td><td>-</td><td>-</td><td>-</td><td>.45</td><td>.19</td><td>.43</td><td>.31</td></tr><tr><td>BiLSTMAttn</td><td>93.33</td><td>97.48 (.65)</td><td>9.05 (5.24)</td><td>.28</td><td>.36</td><td>.16</td><td>.46</td></tr><tr><td>BiLSTMNoAttn</td><td>76.37</td><td>82.72 (2.06)</td><td>7.62 (3.33)</td><td>.14</td><td>.44</td><td>.23</td><td>.35</td></tr><tr><td>UniLSTMAttn</td><td>92.45</td><td>96.53 (.68)</td><td>20.00 (4.38)</td><td>.35</td><td>.41</td><td>.40</td><td>.32</td></tr><tr><td>UniLSTMNoAttn</td><td>73.49</td><td>77.72 (1.64)</td><td>10.48 (10.24)</td><td>.22</td><td>.43</td><td>.28</td><td>.34</td></tr><tr><td>Transformer</td><td>94.88</td><td>99.21 (.53)</td><td>10.95 (11.46)</td><td>.38</td><td>.47</td><td>.58</td><td>.58</td></tr></table>
|
| 101 |
+
|
| 102 |
+
*Trained and tested on a different random split, **Trained and tested on all training data
|
| 103 |
+
|
| 104 |
+
Table 1: English results for both regular (reg) and irregular (irreg) inflections for all architectures and metrics. Along with accuracy, we report Spearman's $\rho$ between average model rating and our two human metrics. Standard deviations are given in parentheses.
|
| 105 |
+
|
| 106 |
+
# 3 Experiments
|
| 107 |
+
|
| 108 |
+
# 3.1 Languages and Data
|
| 109 |
+
|
| 110 |
+
We use the same data as previous work on English past tense, and German number inflection.
|
| 111 |
+
|
| 112 |
+
English We experiment with the English past tense data from A&H, following both K&C and C&al. For training, we split the CELEX (Baayen et al., 1996) subset produced by A&H, consisting of 4253 verbs (218 irregular), into an 80/10/10 random train/dev/test split following K&C. $^{1}$ We ensure that $10\%$ of the irregular verbs are in each of the development and test set.
|
| 113 |
+
|
| 114 |
+
The English nonce words from A&H, used for computing the correlation of model rating with human ratings and production probabilities, comprise 58 made-up verb stems, each of which has 1 regular and 1 irregular past tense inflection. 16 verbs have an additional irregular form (58 regulars and 74 irregulars total). All English data is in the phonetic transcription provided by A&H.
|
| 115 |
+
|
| 116 |
+
German We also experiment with the German dataset from McCurdy et al. (2020a), who released train/dev/test splits consisting of 11,243 pairs of singular and plural nouns in the nominative case taken from UniMorph (McCarthy et al., 2020). They added gender, the only inflection feature provided, by joining UniMorph with a Wiktionary scrape.
|
| 117 |
+
|
| 118 |
+
The German wugs come from M95, who built a set of 24 monosyllabic nonce nouns: 12 of which are rhymes – resembling real words in their phonology, and 12 of which are non-rhymes – representing atypical phonology. Human ratings and production probabilities, however, are taken from M&al., who
|
| 119 |
+
|
| 120 |
+
administered an online survey to 150 native German speakers. Each participant was prompted with the nouns from M95 with the neuter determiner, and then asked to generate the plural form. Similar to A&H, after producing a plural for each noun, participants were asked to rate the acceptability of each potential plural form on a 1-5 scale. In their analysis, M&al. compare human and model behavior on 5 productive inflection classes, shown for our experiments in Table 3.
|
| 121 |
+
|
| 122 |
+
# 3.2 Evaluation Metrics
|
| 123 |
+
|
| 124 |
+
We evaluate models with respect to four metrics.
|
| 125 |
+
|
| 126 |
+
Accuracy This refers to raw accuracy on a set of real inflections that the model has not seen during training. Crucially, only the top prediction of a given model is considered, and the model's probability distribution over all predictions does not affect the score.
|
| 127 |
+
|
| 128 |
+
F1 We report F1 instead of accuracy for the German plural experiments following M&al. Here we classify each inflected form with its suffix (e.g. /-s/), and classify inflections that do not conform to the 5 inflection classes from M&al. as "other."
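A rough sketch of this scoring follows, assuming a simplified string-based suffix matcher; the real evaluation classifies forms into the annotated inflection classes, and the matcher below ignores umlaut and stem changes.

```python
from sklearn.metrics import f1_score

CLASSES = ["/-(e)n/", "/-e/", "/-∅/", "/-er/", "/-s/", "other"]

def suffix_class(singular: str, plural: str) -> str:
    """Very simplified suffix matcher: ignores umlaut and other stem changes."""
    if plural == singular:
        return "/-∅/"
    if plural.endswith("er"):
        return "/-er/"
    if plural.endswith("en") or plural.endswith("n"):
        return "/-(e)n/"
    if plural.endswith("s"):
        return "/-s/"
    if plural.endswith("e"):
        return "/-e/"
    return "other"

def per_class_f1(singulars, gold_plurals, predicted_plurals):
    """Per-class F1 over the suffix classes, including the catch-all 'other'."""
    gold = [suffix_class(s, g) for s, g in zip(singulars, gold_plurals)]
    pred = [suffix_class(s, p) for s, p in zip(singulars, predicted_plurals)]
    return dict(zip(CLASSES, f1_score(gold, pred, labels=CLASSES, average=None)))
```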
|
| 129 |
+
|
| 130 |
+
Production Probability Correlation Like previous work (Kirov and Cotterell, 2018; Corkery et al., 2019; McCurdy et al., 2020a), we compare model output probabilities with production probabilities from humans. The production probability of a form is calculated by counting all forms produced for a given lemma, and then normalizing them to obtain a probability distribution of the human productions. In keeping with most previous work and because we do not expect a linear relationship with the model ratings, we report Spearman's $\rho$ . This is calculated within each inflection class,
|
| 131 |
+
|
| 132 |
+
meaning that, e.g., for English we report a regular and an irregular $\rho$ . For example, the regular $\rho$ for the set of lemmas {rife, drize, flidge} would be computed from the vector containing probabilities of the forms {rifed, drized, flidged} under the model, against the corresponding vector with human probabilities.
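For instance, the within-class computation amounts to something like the following; the probabilities below are invented purely to illustrate the shape of the calculation.

```python
from scipy.stats import spearmanr

# Hypothetical nonce lemmas with human production probabilities and model
# probabilities for the regular past-tense form (values are made up).
human_prod_prob = {"rife": 0.85, "drize": 0.60, "flidge": 0.92}
model_prob      = {"rife": 0.71, "drize": 0.44, "flidge": 0.80}

lemmas = sorted(human_prod_prob)
rho, p_value = spearmanr(
    [model_prob[l] for l in lemmas],
    [human_prod_prob[l] for l in lemmas],
)
print(f"regular Spearman's rho = {rho:.2f} (p = {p_value:.2f})")
```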
|
| 133 |
+
|
| 134 |
+
Rating Correlation Finally, we compare model ratings to the average human rating of each form, again reporting $\rho$ within inflection class. Here, rather than normalizing over production frequencies, humans were prompted with an inflection for a given lemma and asked to rate it on a scale that differed slightly between datasets. For each lemma, we thus get an average rating for a regular form, as well as for an irregular form.
|
| 135 |
+
|
| 136 |
+
# 3.3 Neural Network Wug Test
|
| 137 |
+
|
| 138 |
+
In order to compare our models to humans, we compute analogous values to the human ratings and production probabilities. We investigate two strategies: normalizing the inflected form counts output by our models, and computing the average probability of each form under our models.
|
| 139 |
+
|
| 140 |
+
Model Production Probability Previous work (Corkery et al., 2019; McCurdy et al., 2020a) decoded outputs from multiple models and aggregated the resulting forms: given a lemma and a set of $n$ models trained with different random seeds, an inflected form is sampled from each model, resulting in forms $f_{1},\ldots ,f_{n}$ , where forms need not be unique. The frequency of each form is then normalized to obtain a probability distribution. For example, given the nonce lemma rife, the probability of the past tense form rifed is computed as
|
| 141 |
+
|
| 142 |
+
$$
\frac{1}{n} \sum_{i=1}^{n} \begin{cases} 1, & \text{if } f_i = \text{rifed} \\ 0, & \text{otherwise} \end{cases}
$$
|
| 145 |
+
|
| 146 |
+
C&al. propose a version of this in their aggregate model, in which they sample 100 forms from each model, and normalize the resulting form frequencies. M&al., who instead train 25 randomly initialized models, perform the same aggregation over the top prediction of each model. We take the approach of M&al. (though we train only 10 models) to investigate model productions qualitatively. This metric is intuitively similar to quantifying human production probabilities if we consider one model to be one human participant.
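A minimal sketch of this aggregation, treating each random seed's top prediction as one "participant"; the listed outputs are hypothetical.

```python
from collections import Counter

def production_probabilities(top_predictions):
    """Normalize the top-1 predictions of n independently trained models
    into a probability distribution over forms (one model ~ one participant)."""
    counts = Counter(top_predictions)
    n = len(top_predictions)
    return {form: count / n for form, count in counts.items()}

# Hypothetical top-1 outputs of 10 random seeds for the nonce lemma "rife".
seed_outputs = ["rifed"] * 8 + ["rofe"] * 2
print(production_probabilities(seed_outputs))  # {'rifed': 0.8, 'rofe': 0.2}
```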
|
| 147 |
+
|
| 148 |
+
Model Rating Because the aggregate outputs method considers only the most likely prediction aggregated over the same architecture trained on the same dataset, we expect the prediction to typically be the same for each model. We instead report correlations with the probability of inflected forms under each model in Tables 1 and 3. K&C correlate this value with human production probabilities, and C&al. use this method in an experiment to compute individual model ratings.
|
| 149 |
+
|
| 150 |
+
More formally, given a lemma $l$ and an inflected form $f$ of length $k$ , we compute
|
| 151 |
+
|
| 152 |
+
$$
\begin{aligned} p(f \mid l) &= p(f_1, \dots, f_k \mid l) && (2) \\ &= \prod_{i=1}^{k} p(f_i \mid f_{i-1}, l) && (3) \end{aligned}
$$
|
| 155 |
+
|
| 156 |
+
Where $f_{i}$ is the $i$ th character of $f$ . We force the model to output each inflected form $f$ to get its probability. In practice, we modify Equation 3 to compute a length-normalized probability because $p(f \mid l)$ becomes smaller as $f$ increases in length. For $f$ of length $k$ , we have
|
| 157 |
+
|
| 158 |
+
$$
p(f \mid l) = \sqrt[k]{\prod_{i=1}^{k} p(f_i \mid f_{i-1}, l)} \tag{4}
$$
|
| 161 |
+
|
| 162 |
+
We expect computing ratings in this way to be similar to the aggregate model of C&al. described above. That is, the probability of a form $f$ computed by aggregating $n$ forms from a single model's probability distribution should approach $p(f \mid l)$ , as $n \to \infty$ . Finally, we compute the average probability of a form from all 10 randomly initialized models, and refer to it as the model rating.
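A small sketch of the rating computation under these definitions; `step_probs` is a hypothetical hook standing in for force-decoding the target form under one trained model.

```python
import math

def length_normalized_prob(step_probs):
    """Eq. (4): geometric mean of the per-character probabilities obtained by
    force-decoding the target form (computed in log space for stability)."""
    k = len(step_probs)
    return math.exp(sum(math.log(p) for p in step_probs) / k)

def model_rating(models, lemma, form):
    """Average the length-normalized probability of `form` over all seeds.
    `model.step_probs(lemma, form)` is a hypothetical hook returning
    p(f_i | f_{i-1}, lemma) for each character of the forced target."""
    ratings = [length_normalized_prob(m.step_probs(lemma, form)) for m in models]
    return sum(ratings) / len(ratings)
```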
|
| 163 |
+
|
| 164 |
+
# 4 Results
|
| 165 |
+
|
| 166 |
+
We present experimental results in Tables 1, 2, and 3 in terms of both inflection accuracy, and correlation with human behavior – our main focus. All correlations for neural models trained in this work are given with respect to model rating, and not the model production probability. We report results from training MGL on our data, and include the results reported by K&C, C&al., and M&al. in the appropriate tables for reference.
|
| 167 |
+
|
| 168 |
+
# 4.1 English
|
| 169 |
+
|
| 170 |
+
For English, many of our models correlate better for irregulars than regulars, unlike previous work for which the strongest correlations occurred for regular verbs. As we do not have the same train
|
| 171 |
+
|
| 172 |
+
<table><tr><td rowspan="2"></td><td rowspan="2">Dev Acc.</td><td colspan="6">Test F1</td></tr><tr><td>/-(e)n/</td><td>/-e/</td><td>/-∅/</td><td>/-er/</td><td>/-s/</td><td>other</td></tr><tr><td>M&al.</td><td>92.10</td><td>95.00</td><td>87.00</td><td>92.00</td><td>84.00</td><td>60.00</td><td>42.00</td></tr><tr><td>BiLSTMAttn</td><td>89.37</td><td>93.93 (0.6)</td><td>88.08 (0.9)</td><td>92.43 (0.6)</td><td>79.07 (5.1)</td><td>51.75 (4.6)</td><td>45.36 (4.0)</td></tr><tr><td>BiLSTMNoAttn</td><td>54.65</td><td>74.16 (1.9)</td><td>63.56 (2.4)</td><td>75.57 (2.1)</td><td>51.26 (3.7)</td><td>29.58 (7.4)</td><td>9.07 (0.6)</td></tr><tr><td>UniLSTMAttn</td><td>86.40</td><td>93.39 (0.6)</td><td>87.35 (1.0)</td><td>92.49 (1.1)</td><td>69.78 (5.3)</td><td>52.36 (4.5)</td><td>44.06 (5.8)</td></tr><tr><td>UniLSTMNoAttn</td><td>48.71</td><td>69.69 (2.2)</td><td>58.31 (2.4)</td><td>71.98 (1.7)</td><td>46.64 (5.2)</td><td>32.54 (7.7)</td><td>8.08 (0.4)</td></tr><tr><td>Transformer</td><td>91.04</td><td>92.93 (0.4)</td><td>87.81 (0.7)</td><td>93.86 (0.3)</td><td>65.44 (4.7)</td><td>57.89 (2.0)</td><td>57.47 (4.5)</td></tr></table>
|
| 173 |
+
|
| 174 |
+
Table 2: Average German F1s on all German plural inflections for all architectures. Standard deviations are given in parentheses. Dev accuracies for our experiments were computed with greedy decoding.
|
| 175 |
+
|
| 176 |
+
<table><tr><td rowspan="2"></td><td colspan="6">Prod. Prob.</td><td colspan="6">Rating</td></tr><tr><td>/-(e)n/</td><td>/-e/</td><td>/-∅/</td><td>/-er/</td><td>/-s/</td><td>avg.</td><td>/-(e)n/</td><td>/-e/</td><td>/-∅/</td><td>/-er/</td><td>/-s/</td><td>avg.</td></tr><tr><td>M&al.</td><td>.28</td><td>.13</td><td>-</td><td>.05</td><td>.33</td><td>.20</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>BiLSTMAttn</td><td>.11</td><td>.08</td><td>-.14</td><td>.24</td><td>.38</td><td>.20</td><td>.36</td><td>.44</td><td>.06</td><td>.36</td><td>.39</td><td>.32</td></tr><tr><td>BiLSTMNoAttn</td><td>.44</td><td>.08</td><td>-.12</td><td>.27</td><td>.39</td><td>.30</td><td>.51</td><td>.16</td><td>-.29</td><td>.30</td><td>.31</td><td>.20</td></tr><tr><td>UniLSTMAttn</td><td>.09</td><td>.16</td><td>-.13</td><td>.36</td><td>.39</td><td>.25</td><td>.22</td><td>.27</td><td>-.16</td><td>.46</td><td>.44</td><td>.25</td></tr><tr><td>UniLSTMNoAttn</td><td>.14</td><td>.15</td><td>.08</td><td>.17</td><td>.23</td><td>.15</td><td>.24</td><td>.16</td><td>-.17</td><td>.05</td><td>.20</td><td>.10</td></tr><tr><td>Transformer</td><td>.11</td><td>.30</td><td>-.13</td><td>.28</td><td>.50</td><td>.20</td><td>.48</td><td>.59</td><td>.15</td><td>.50</td><td>.71</td><td>.49</td></tr></table>
|
| 177 |
+
|
| 178 |
+
Table 3: German wugs Spearman's $\rho$ for the average rating of each model with human production probabilities (left) and average human ratings (right). We report the macro average (avg.) over all inflection classes for both.
|
| 179 |
+
|
| 180 |
+
and test splits, it is difficult to draw conclusions from this result. We predominantly focus on performance differences between the models trained in our experiments, including MGL.
|
| 181 |
+
|
| 182 |
+
Accuracy The accuracies from this experiment generally reflect our expectations from prior work. The Transformer attains the highest test accuracy, LSTMs with attention always achieve higher accuracy than those without, and bidirectional LSTMs show modest improvements over their unidirectional counterparts. However, the unidirectional LSTMs outperform bidirectional counterparts for irregular accuracy (+2.86 and +10.95). Additionally, the Transformer has a low irregular accuracy, though with a very high standard deviation over all 10 runs, indicating at least one run was an outlier with much higher accuracy.
|
| 183 |
+
|
| 184 |
+
Correlation The trend in accuracy for attentional LSTMs does not strictly hold for correlation. LSTMs without attention typically correlate with humans slightly better than their attentional counterparts for irregulars. Additionally, unidirectional models result in higher regular correlations, in contrast to their higher irregular accuracy. Irregular correlations are fairly similar across LSTMs with the exception of the BiLSTMAttn correlation with human ratings, which is much higher than the other LSTM correlations. We also reproduce previous results showing that A&H's rule-based model,
|
| 185 |
+
|
| 186 |
+
MGL, is better correlated than any LSTM model. The Transformer, however, correlates most highly with humans among all models that we ran.
|
| 187 |
+
|
| 188 |
+
# 4.2 German
|
| 189 |
+
|
| 190 |
+
We refer to F1 in Table 2, and correlation with humans in Table 3. Notably, all models typically correlate better with human ratings than with production probabilities, though those two metrics have a positive linear relationship $(r = 0.75)$ . Intuitively, the task of assigning a probability to a form is more like the human rating task than decoding the single most likely form. We present a graph of model production probabilities and model ratings for the German wugs in Figure 2.
|
| 191 |
+
|
| 192 |
+
F1 F1 scores follow a similar trend to English. In contrast to the very small performance gap in Dankers et al. (2021), LSTMs with attention clearly perform better in terms of F1 than without – though our training dataset from M&al. is much smaller than the one they used, which might amplify the gap. Directionality has much less effect on F1 than attention for German, with the unidirectional LSTMs actually outperforming the bidirectional ones for the infrequent /-s/ class in our experiments. The Transformer attains high (though not necessarily the highest) F1 scores for every class.
|
| 193 |
+
|
| 194 |
+
Correlation The Transformer clearly correlates most highly with human ratings, attaining a moderate correlation (0.48-0.59) for $/-\mathrm{e}/$, $/-(\mathrm{e})\mathrm{n}/$, and $/-\mathrm{er}/$,
|
| 195 |
+
|
| 196 |
+

|
| 197 |
+
Figure 2: German plural productions (left) and average probabilities (right) for each architecture in Rhyme (R) and Non-Rhyme (NR) contexts for all lemmas and all random initializations. Shorthands are used for architectures: UA refers to UniLSTMAttn, whereas BN refers to BiLSTMNoAttn, for example.
|
| 198 |
+
|
| 199 |
+

|
| 200 |
+
|
| 201 |
+
and a high correlation (0.71) for $/-\mathrm{s}/$. All architectures correlate poorly with $/-\emptyset/$, despite very high F1. Looking more closely at $/-\emptyset/$, it consistently receives very low ratings (as is the case for human ratings), and it was never produced as a model's best output, as can be seen in Figure 2. However, there are only 3 $/-\emptyset/$ inflections in the training data that fit the same phonological context as the wugs. Across all contexts though, $/-\emptyset/$ is a very common inflection in the training data, which explains its high accuracy on the test set.
|
| 202 |
+
|
| 203 |
+
There is no clear trend between LSTMs in terms of correlation with human production probability, with rather low $\rho$ overall. However in the case of human ratings, LSTMs with attention always correlate better than those without, with the exception of the most frequent class overall in the training data, $/-(\mathrm{e})\mathrm{n}/$ . BiLSTMNoAttn is most strongly correlated for $/-(\mathrm{e})\mathrm{n}/$ , in contrast to its lower F1 - demonstrating that removing attention leads to a lower F1, but also to a more human-like probability of $/-(\mathrm{e})\mathrm{n}/$ .
|
| 204 |
+
|
| 205 |
+
Regarding directionality, unidirectional LSTMs always outperform their bidirectional counterparts for the infrequent /-s/ class in our experiments. UniLSTMAttn correlates better with humans than any LSTM for the infrequent classes /-er/ and /-s/. However, BiLSTMAttn has the highest correlation for the frequent /-e/ and /-(e)n/.
|
| 206 |
+
|
| 207 |
+
# 5 Analysis
|
| 208 |
+
|
| 209 |
+
We mainly analyze the correlation between (average) model ratings and human ratings. We find that the Transformer correlates best with human ratings with few exceptions; indeed, it attains a statistically significant positive correlation for all inflection classes in both languages, with the exception of $/-\emptyset/$ in German. It is also highly accurate, as in previous work (Wu et al., 2021).
|
| 210 |
+
|
| 211 |
+
Regarding LSTM architectural decisions, unsurprisingly, attention and bidirectionality typically increase accuracy in both languages. The positive effect of attention is similar for correlations with some exceptions. Attention almost always leads to better correlations in German, with the interesting exception of $/-(\mathrm{e})\mathrm{n}/$ . Given that humans rate $/-(\mathrm{e})\mathrm{n}/$ most highly on average, the higher correlation could be because, without attention, LSTMs are very sensitive to the high $/-(\mathrm{e})\mathrm{n}/$ frequency in the training set. The attentional LSTMs might learn the monosyllabic, neuter context that applies to the wugs, for which there are very few $/-(\mathrm{e})\mathrm{n}/$ training examples. Despite slightly higher accuracy for bidirectional LSTMs, unidirectional LSTMs tend to attain higher correlations with both human metrics for English, especially for the more frequent regular inflections.
|
| 212 |
+
|
| 213 |
+
Conversely, in German, the bidirectional LSTMs correlate better for the more frequent $/-(\mathrm{e})\mathrm{n}/$ and $/-\mathrm{e}/$ classes, but UniLSTMAttn correlates better for the rarer $/-\mathrm{er}/$ and $/-\mathrm{s}/$ classes. The dichotomy between just one highly productive class in English and several productive classes in German may explain the first observation: if unidirectional LSTMs overfit to the frequent class, then they might appear to correlate better in English, but not German. However, this would not explain the German class correlations for infrequent inflections, which could be explored in future work.
|
| 214 |
+
|
| 215 |
+
German Model Productions The model production counts in Rhyme versus Non-Rhyme contexts were important for the conclusion in M&al. that BiLSTMAttn is not a good model of human behavior. We thus investigate this in Figure 2.
|
| 216 |
+
|
| 217 |
+
Most of the criticisms from M&al. apply to the productions in our experiments as well. One new observation is that, without attention, LSTMs pre
|
| 218 |
+
|
| 219 |
+
<table><tr><td></td><td>reg</td><td>irreg</td><td>/-(e)n/</td><td>/-e/</td><td>/-∅/</td><td>/-er/</td><td>/-s/</td></tr><tr><td>r</td><td>0.44</td><td>-0.31</td><td>0.01</td><td>0.80</td><td>0.73</td><td>0.70</td><td>0.83</td></tr></table>
|
| 220 |
+
|
| 221 |
+
Table 4: Pearson $r$ between model acc. (or F1), and correlation with human ratings within infl. class (n = 5).
|
| 222 |
+
|
| 223 |
+
<table><tr><td></td><td>BA</td><td>BN</td><td>UA</td><td>UN</td><td>Trm</td></tr><tr><td>r</td><td>-0.57</td><td>-0.33</td><td>-0.37</td><td>-0.39</td><td>-0.38</td></tr></table>
|
| 224 |
+
|
| 225 |
+
Table 5: Pearson $r$ between model acc. (or F1), and correlation with human ratings within model (n = 7).
|
| 226 |
+
|
| 227 |
+
dict many "other" forms for NR contexts, but not for R. This likely means that Non-Rhymes lead to decoding errors for these models due to the unfamiliar context. Additionally, despite several behaviors that differ from humans in the Transformer productions, its second most produced inflection class is $/-(\mathrm{e})\mathrm{n}/$, like humans, and unlike any LSTM model. The right side of Figure 2 instead displays the average model rating of each inflection class, on which we base our correlations in Tables 1 and 3.
|
| 228 |
+
|
| 229 |
+
The average model rating of an inflection class represents the probability assigned to it averaged over all 10 randomly initialized models and all 24 lemmas. The $/-\mathrm{e}/$ inflection accounts for a much smaller amount of the probability mass on average than its production probability. The preference for $/-\mathrm{e}/$ in the NR context, which diverges from human ratings, is smaller by this metric for the Transformer and LSTMs with attention. Furthermore, $/-(\mathrm{e})\mathrm{n}/$ has a more reasonable average probability for most models when compared to the human ratings in M&al., despite the preference for Rhymes, which diverges from human behavior. However, for $/-\mathrm{s}/$ the Transformer shows a much higher average probability for Non-Rhymes than for Rhymes, which is more in line with human ratings.
|
| 230 |
+
|
| 231 |
+
Overall, this means model ratings of German noun plurals look more similar to human ratings than model productions do to human productions. The Transformer is a better account for human behavior than the LSTM, though it still diverges in some ways. Dankers et al. (2021) warned that the /-s/ behavior may be explainable by a simple heuristic though, so this behavior may not actually indicate cognitive plausibility.
|
| 232 |
+
|
| 233 |
+
Accuracy vs. Correlation The task of predicting the most likely inflection for an unknown word (measured by accuracy or F1) is not the same as rating multiple inflections (measured by Spearman's
|
| 234 |
+
|
| 235 |
+
$\rho$ ). We thus investigate the relationship between these two tasks by measuring Pearson's $r$ between them to see if better inflection models in terms of accuracy are also more human-like. First, we consider the relationship for all models and inflection classes in both datasets and find no correlation ( $r = -0.17$ , $n = 35$ ). However, some inflection classes or models may behave differently than others. We refer to Table 5 to investigate this relationship within each architecture. In Table 4, we check the correlation within each inflection class. There is not sufficient data to draw statistically significant conclusions in either case, but the correlations that we report can still characterize the relationship in our experiments. We find that all architectures show a negative correlation. This implies that models are more accurate for inflection classes on which they correlate poorly with humans, and vice versa. However, Table 4 shows that all German inflection classes have a positive correlation between the two metrics, with the exception of $/-(e)n/$. This is likely because $/-(e)n/$ is highly frequent in the training set, but is less suitable for the monosyllabic, neuter wugs. Neither English inflection class shows a strong relationship, though.
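To make the within-model computation concrete, the German columns of Tables 2 and 3 for BiLSTMAttn, for example, would be paired up as in the sketch below; this is a simplified illustration restricted to the five German classes, whereas Table 5 additionally includes the two English classes.

```python
from scipy.stats import pearsonr

# BiLSTMAttn, German classes /-(e)n/, /-e/, /-∅/, /-er/, /-s/ (from Tables 2 and 3).
f1         = [93.93, 88.08, 92.43, 79.07, 51.75]  # test F1 per class
rating_rho = [0.36, 0.44, 0.06, 0.36, 0.39]       # correlation with human ratings

r, p = pearsonr(f1, rating_rho)
print(f"Pearson r = {r:.2f} over n = {len(f1)} classes (p = {p:.2f})")
```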
|
| 236 |
+
|
| 237 |
+
# 6 Conclusion
|
| 238 |
+
|
| 239 |
+
We ask which neural architecture most resembles human behavior in a wug test. We introduce results on a wider range of architectures than previous work and find that the Transformer, a state-of-the-art model for morphological inflection, frequently correlates best with human wug ratings. Despite this, a closer look at model ratings and productions on German plural inflection shows that neither model closely resembles human behavior. We also find that, while attention is crucial for LSTM inflection accuracy, it does not always lead to higher correlations with humans. Additionally, the often less accurate unidirectional model sometimes correlates better than its bidirectional counterpart, especially in the case of infrequent German plural classes. Finally, while for some inflection classes more accurate models correlate better with humans, there is no clear relationship between the two metrics overall. Future work might consider behavior when hyperparameters are tuned to maximize plausibility of the probability distribution rather than accuracy. Additionally, these results motivate a closer look at the effect of LSTM encoder directionality with respect to inflection class frequency.
|
| 240 |
+
|
| 241 |
+
# Limitations
|
| 242 |
+
|
| 243 |
+
This work is limited by the scope of languages and inflection categories that our models are tested on. We present results for two specific inflection categories in two languages. Previously, McCurdy et al. (2020b) ran experiments on neural network behavior for the German plural wugs used here, which brought into question some of the conclusions found in prior work for English past tense inflection. We thus believe that expanding this work to new inflection phenomena and new languages may yield results for which the findings here do not necessarily hold.
|
| 244 |
+
|
| 245 |
+
# Acknowledgments
|
| 246 |
+
|
| 247 |
+
We would like to thank Kate McCurdy and Yohei Oseki for their input to and feedback on early stages of this work. We would also like to thank the anonymous reviewers, Abteen Ebrahimi, and Ananya Ganesh for their feedback on drafts of this paper. This research was supported by the NSF National AI Institute for Student-AI Teaming (iSAT) under grant DRL 2019805. The opinions expressed are those of the authors, and do not represent views of the NSF.
|
| 248 |
+
|
| 249 |
+
# References
|
| 250 |
+
|
| 251 |
+
Adam Albright and Bruce Hayes. 2003. Rules vs. analogy in English past tenses: A computational/experimental study. Cognition, 90(2):119-161.
|
| 252 |
+
R Harald Baayen, Richard Piepenbrock, and Leon Gulikers. 1996. The CELEX lexical database (CD-ROM).
|
| 253 |
+
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.
|
| 254 |
+
Jean Berko. 1958. The child's learning of English morphology. Word, 14(2-3):150-177.
|
| 255 |
+
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar. Association for Computational Linguistics.
|
| 256 |
+
Harald Clahsen, Monika Rothweiler, Andreas Woest, and Gary F. Marcus. 1992. Regular and irregular inflection in the acquisition of German noun plurals. Cognition, 45(3):225-255.
|
| 257 |
+
|
| 258 |
+
Maria Corkery, Yevgen Matusevych, and Sharon Goldwater. 2019. Are we there yet? Encoder-decoder neural networks as cognitive models of English past tense inflection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3868-3877.
|
| 259 |
+
Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared task—morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 10-22, Berlin, Germany. Association for Computational Linguistics.
|
| 260 |
+
Verna Dankers, Anna Langedijk, Kate McCurdy, Adina Williams, and Dieuwke Hupkes. 2021. Generalising to German plural noun classes, from the perspective of a recurrent neural network. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 94-108.
|
| 261 |
+
Rainer Goebel and Peter Indefrey. 2000. A recurrent network with short-term memory capacity learning the German -s plural. Models of language acquisition: Inductive and deductive approaches, pages 177-200.
|
| 262 |
+
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
|
| 263 |
+
Katharina Kann and Hinrich Schütze. 2016. Single-model encoder-decoder with explicit morphological representation for reinflection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 555-560, Berlin, Germany. Association for Computational Linguistics.
|
| 264 |
+
Christo Kirov and Ryan Cotterell. 2018. Recurrent neural networks in linguistic theory: Revisiting Pinker and Prince (1988) and the past tense debate. Transactions of the Association for Computational Linguistics, 6:651-665.
|
| 265 |
+
Gary F Marcus, Ursula Brinkmann, Harald Clahsen, Richard Wiese, and Steven Pinker. 1995. German inflection: The exception that proves the rule. Cognitive psychology, 29(3):189-256.
|
| 266 |
+
Arya D. McCarthy, Christo Kirov, Matteo Grella, Amrit Nidhi, Patrick Xia, Kyle Gorman, Ekaterina Vylomova, Sabrina J. Mielke, Garrett Nicolai, Miikka Silfverberg, Timofey Arkhangelskiy, Nataly Krizhanovsky, Andrew Krizhanovsky, Elena Klyachko, Alexey Sorokin, John Mansfield, Valts Ernstreits, Yuval Pinter, Cassandra L. Jacobs, Ryan Cotterell, Mans Hulden, and David Yarowsky. 2020. UniMorph 3.0: Universal Morphology. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 3922-3931, Marseille, France. European Language Resources Association.
|
| 267 |
+
|
| 268 |
+
Kate McCurdy, Sharon Goldwater, and Adam Lopez. 2020a. Inflecting when there's no majority: Limitations of encoder-decoder neural networks as cognitive models for german plurals. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1745-1756.
|
| 269 |
+
Kate McCurdy, Adam Lopez, and Sharon Goldwater. 2020b. Conditioning, but on which distribution? grammatical gender in German plural inflection. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 59-65, Online. Association for Computational Linguistics.
|
| 270 |
+
Danny Merkx and Stefan L Frank. 2020. Human sentence processing: Recurrence or attention? arXiv preprint arXiv:2005.09471.
|
| 271 |
+
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.
|
| 272 |
+
Steven Pinker and Alan Prince. 1988. On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28(1-2):73-193.
|
| 273 |
+
Sandeep Prasada and Steven Pinker. 1993. Generalisation of regular and irregular morphological patterns. Language and cognitive processes, 8(1):1-56.
|
| 274 |
+
David E Rumelhart and James L McClelland. 1985. On learning the past tenses of English verbs. Technical report, Institute for Cognitive Science, University of California, San Diego.
|
| 275 |
+
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. Advances in neural information processing systems, 27.
|
| 276 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.
|
| 277 |
+
Shijie Wu, Ryan Cotterell, and Mans Hulden. 2021. Applying the transformer to character-level transduction. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1901-1907, Online. Association for Computational Linguistics.
|
| 278 |
+
|
| 279 |
+
<table><tr><td>Model</td><td>Params.</td></tr><tr><td>BiLSTMAttn</td><td>0.93M</td></tr><tr><td>BiLSTMNoAttn</td><td>0.90M</td></tr><tr><td>UniLSTMAttn</td><td>0.56M</td></tr><tr><td>UniLSTMNoAttn</td><td>0.54M</td></tr><tr><td>Transformer</td><td>7.41M</td></tr></table>
|
| 280 |
+
|
| 281 |
+
Table 6: Number of parameters in each model.
|
| 282 |
+
|
| 283 |
+
# A Individual Model Variance
|
| 284 |
+
|
| 285 |
+
In Figure A.2, we show the variance, via boxplots, when correlating with human ratings. Models typically have higher correlations with ratings than with production probabilities, but the two are linearly related in our results. Similar to the findings of C&al., who compared to production probabilities, we find that individual BiLSTMAttn models vary quite a bit with respect to correlation with humans. For English, some models vary far less: for example, BiLSTMNoAttn has a much lower variance with respect to both regulars and irregulars than BiLSTMAttn. Similarly, the Transformer often correlates the same across different random initializations, with the exception of a few outliers. Turning to the German boxplots in Figure A.2b, we see similarly low variance for the transformers, and typically higher variance for most LSTMs. For architectures that vary more, i.e. LSTMs, we often see a higher correlation when the ratings are first averaged (as reported in Tables 1 and 3), but the same is often not true for English.
|
| 286 |
+
|
| 287 |
+

|
| 288 |
+
Figure A.1: English past tense productions (left) and average probability (right) for each architecture for all lemmas and all random initializations.
|
| 289 |
+
|
| 290 |
+

|
| 291 |
+
|
| 292 |
+

|
| 293 |
+
(a) English past tense
|
| 294 |
+
|
| 295 |
+

|
| 296 |
+
(b) German plural
|
| 297 |
+
Figure A.2: Boxplots of Spearman's correlation for individual models with respect to average human ratings
|
acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b0c1f827dd2826243b1df0864dfeddd93e495bb45ef0d67e473d2e2b090a7a1d
|
| 3 |
+
size 327324
|
acomprehensivecomparisonofneuralnetworksascognitivemodelsofinflection/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:003260bc3c0f2ca4dcdab28d866adc08e26afe088748b3dbfbe919c49ab2aba9
|
| 3 |
+
size 341026
|
activeexampleselectionforincontextlearning/7df1d58e-95f9-4891-9f80-ce9df75cf31a_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:3bf7b144a8c50cf3b7bb42433c9fb0c988478ad36d2a3941dd50af3a899e4991
|
| 3 |
+
size 99264
|
activeexampleselectionforincontextlearning/7df1d58e-95f9-4891-9f80-ce9df75cf31a_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:788c23c3b54fbd283d41870c09414bbdf29c8ef0c9c7d0c5a5ee36bb7077dce2
|
| 3 |
+
size 119196
|
activeexampleselectionforincontextlearning/7df1d58e-95f9-4891-9f80-ce9df75cf31a_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d90b0909059a960db3217f130f062f29b2caee72d70ab6e17bf87d74ffed71ae
|
| 3 |
+
size 545424
|
activeexampleselectionforincontextlearning/full.md
ADDED
|
@@ -0,0 +1,402 @@
| 1 |
+
# Active Example Selection for In-Context Learning
|
| 2 |
+
|
| 3 |
+
Yiming Zhang and Shi Feng and Chenhao Tan {yimingz0, shif, chenhao}@uchicago.edu University of Chicago
|
| 4 |
+
|
| 5 |
+
# Abstract
|
| 6 |
+
|
| 7 |
+
With a handful of demonstration examples, large-scale language models show strong capability to perform various tasks by in-context learning from these examples, without any fine-tuning. We demonstrate that in-context learning performance can be highly unstable across samples of examples, indicating the idiosyncrasies of how language models acquire information. We formulate example selection for in-context learning as a sequential decision problem, and propose a reinforcement learning algorithm for identifying generalizable policies to select demonstration examples. For GPT-2, our learned policies demonstrate strong abilities of generalizing to unseen tasks in training, with a $5.8\%$ improvement on average. Examples selected from our learned policies can even achieve a small improvement on GPT-3 Ada. However, the improvement diminishes on larger GPT-3 models, suggesting emerging capabilities of large language models.
|
| 8 |
+
|
| 9 |
+
# 1 Introduction
|
| 10 |
+
|
| 11 |
+
Large language models demonstrate the capability to learn from just a few examples (Radford et al., 2019; Brown et al., 2020; Rae et al., 2022; Zhang et al., 2022). The possibility to train a model without any parameter update has inspired excitement about the in-context learning paradigm.
|
| 12 |
+
|
| 13 |
+
Intuitively, high in-context learning performance should require carefully chosen demonstration examples, but a recent line of work suggests otherwise — that demonstration examples are not as important as we expected, and that few-shot performance can be largely attributed to the model's zero-shot learning capacity (Min et al., 2022), across GPT-2 and GPT-3. This insight is corroborated by a parallel line of work that brings significant improvements to in-context learning performance without example selection, for example, by reordering randomly selected examples and using
|
| 14 |
+
|
| 15 |
+
calibration (Lu et al., 2022; Zhao et al., 2021; Kojima et al., 2022). Another notable approach is to use best-of- $n$ sampling, which requires a labeled set for validation (Nakano et al., 2022).
|
| 16 |
+
|
| 17 |
+
Our contribution in this paper is twofold. First, we revisit the effect of example selection on in-context learning. We show that even with reordering and calibration, we still observe a large variance across sets of demonstration examples, especially for GPT-2, while calibration reduces the variance for GPT-3 models. The high variance needs further investigation, as we take it as evidence that large language models are still not capable of efficiently and reliably acquiring new information in-context. Understanding what makes good demonstration examples sheds some light on the mechanisms that large language models use to process information.
|
| 18 |
+
|
| 19 |
+
Second, we seek to discover general trends in example selection for in-context learning across different tasks. Concretely, we use reinforcement learning to optimize example selection as a sequential decision making problem. We argue that active example selection from unlabeled datasets is the most appropriate setting for in-context learning because fine-tuning with an existing labeled set leads to great performance with low variance. For GPT-2, we validate our learned policy on a seen task with a labeled dataset and observe a $12.1\%$ improvement over a max-entropy active learning baseline. Moreover, our learned policy is able to generalize to new tasks with a $5.8\%$ improvement, suggesting that the policy is able to capture systematic biases in how GPT-2 acquires information. Examples selected from our learned policies can even achieve a small improvement on GPT-3 Ada. However, the improvement diminishes on larger GPT-3 models. We provide further analyses to understand the properties of useful examples.
|
| 20 |
+
|
| 21 |
+
Overall, our work explores how large language models process information through the perspective of example selection and formulates active ex
|
| 22 |
+
|
| 23 |
+
ample selection as a sequential decision making problem. We investigate divergent behaviors between GPT-2 and GPT-3, which echoes the emerging abilities of large language models, and suggest that researchers in the NLP community should collectively build knowledge and research practice in the era of large language models. $^{1}$
|
| 24 |
+
|
| 25 |
+
# 2 The Effect of Example Selection
|
| 26 |
+
|
| 27 |
+
In this section, we demonstrate the instability of in-context learning performance due to the selection of demonstration examples. We further show that existing methods (e.g., calibration, reordering) are insufficient for addressing this instability for GPT-2. In comparison, the variance of GPT-3 models can be mitigated with calibration.
|
| 28 |
+
|
| 29 |
+
# 2.1 In-context Text Classification with Demonstration Examples
|
| 30 |
+
|
| 31 |
+
We start by formally defining in-context learning. We focus on in-context learning for text classification with a left-to-right language model. All supervision is given through a "prompt" which we denote as $s$ . The prompt typically contains natural language instructions and a few demonstration examples. To make a prediction for a test example $x$ , we concatenate the prompt and the test example as prefix, and use the language model to predict the next token: $\arg \max_y \mathbf{P}_{\mathrm{LM}}(y|s + x)$ , where $+$ denotes concatenation. Typically, instead of taking the arg max from the whole vocabulary, we restrict the model's output to a set of special tokens which corresponds to the set of labels, e.g., with the word "positive" corresponding to the positive class in binary sentiment classification. In our formulation, we omit a separate variable for the special tokens, and use $\mathcal{V}$ to refer to both the label set and the set of proxy tokens for simplicity.
|
| 32 |
+
|
| 33 |
+
To summarize, a prompt in this paper is a sequence of $k$ labeled examples concatenated together: $s = (x_{1},y_{1}),(x_{2},y_{2}),\ldots ,(x_{k},y_{k})$ . And the prediction for a test input $x$ is the label with the highest likelihood under the language model: $\arg \max_{y\in \mathcal{V}}\mathbf{P}_{\mathrm{LM}}(y|s + x)$ .<sup>2</sup>
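As a rough sketch of this prediction rule with an off-the-shelf left-to-right language model, consider the snippet below; the prompt template and label words are illustrative only, not the exact format used in the experiments.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium").eval()

# Hypothetical 2-shot sentiment prompt followed by the test input.
prompt = (
    "Review: A moving and beautifully acted film.\nSentiment: positive\n\n"
    "Review: A tedious, overlong mess.\nSentiment: negative\n\n"
    "Review: An unexpected delight from start to finish.\nSentiment:"
)
# One proxy token per label (first BPE token of " positive" / " negative").
label_ids = {lab: tokenizer.encode(" " + lab)[0] for lab in ("positive", "negative")}

with torch.no_grad():
    logits = model(torch.tensor([tokenizer.encode(prompt)])).logits[0, -1]
prediction = max(label_ids, key=lambda lab: logits[label_ids[lab]])
print(prediction)
```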
|
| 34 |
+
|
| 35 |
+
Experiment setup. Following Zhao et al. (2021), we conduct our experiments on AGNews (Zhang
|
| 36 |
+
|
| 37 |
+
<table><tr><td>Dataset</td><td>Domain</td><td>#classes</td><td>avg. length</td></tr><tr><td>AGNews</td><td>Topic cls.</td><td>4</td><td>37.8</td></tr><tr><td>Amazon</td><td>Sentiment cls.</td><td>2</td><td>78.5</td></tr><tr><td>SST-2</td><td>Sentiment cls.</td><td>2</td><td>19.3</td></tr><tr><td>TREC</td><td>Question type cls.</td><td>6</td><td>10.2</td></tr></table>
|
| 38 |
+
|
| 39 |
+
Table 1: Dataset information.
|
| 40 |
+
|
| 41 |
+

|
| 42 |
+
Figure 1: Zero-centered in-context learning accuracy of GPT-2 on 30 random sets of 4 demonstration examples. Each dot indicates performance of the best permutation for one set of demonstration examples. $y$ -axis represents the accuracy difference with the mean accuracy of random demonstration examples.
|
| 43 |
+
|
| 44 |
+
et al., 2015), SST-2 (Socher et al., 2013) and TREC (Voorhees and Tice, 2000). We additionally include Amazon (Zhang et al., 2015) since it contains longer texts than the remaining datasets. Table 1 gives basic information about the tasks.
|
| 45 |
+
|
| 46 |
+
Using GPT-2 345M (GPT-2), GPT-3 Ada (ADA) and GPT-3 Babbage (BABBAGE) as the in-context learning models, we report 4-shot example selection performance across all experiments.
|
| 47 |
+
|
| 48 |
+
# 2.2 Sensitivity to Example Selection
|
| 49 |
+
|
| 50 |
+
We first highlight the sensitivity of GPT-2 to example selection. In Figure 1, we plot the in-context learning performance of 30 random sequences of demonstration examples with length 4. Across all 4 tasks, the maximum and minimum performance due to random sampling differ by $>30\%$ . Additionally, for 3 out of the 4 tasks (AGNews, SST-2 and TREC), the worst set of demonstration examples leads to in-context learning performance below random guessing (e.g., $10.0\%$ on TREC, below the $16.7\%$ accuracy of guessing randomly among the 6 labels in TREC).
|
| 51 |
+
|
| 52 |
+
Reordering the sequence alone cannot address the instability. Lu et al. (2022) identifies the ordering of demonstration examples as the cause of the variance, and proposes heuristics to reorder demonstra
|
| 53 |
+
|
| 54 |
+

|
| 55 |
+
Figure 2: In-context learning accuracy of 30 random sets of 4 demonstration examples with calibration. Each dot indicates performance of the best permutation for one set of demonstration examples. Accuracy over random examples (no calibration) is plotted.
|
| 56 |
+
|
| 57 |
+
tion examples. For such an approach to be effective, the underlying assumption is that there exists good orderings for most sets of demonstration examples.
|
| 58 |
+
|
| 59 |
+
In Figure 1, we additionally report the highest possible performance among $4! = 24$ permutations for each of the 30 sets using a validation set of 100 examples. The reordering performance reported here is highly optimistic for a true few-shot setting (Perez et al., 2021) since a validation set cannot be assumed available. As expected, taking the best permutation on a validation set improves test performance: we observe an $8.1\%$ increase on average over random demonstration examples.
|
| 60 |
+
|
| 61 |
+
However, these best orderings of examples still lead to a wide range of possible performance. On AGNews, we observe a maximum accuracy of $79.6\%$ and a minimum accuracy of $32.7\%$ after considering the best possible orderings. On TREC, the best ordering for 9 out of 30 sets of examples leads to performance below random examples. These observations suggest that there are simply no good orderings for considerable proportions of demonstration sets, motivating the need for selecting examples beyond merely reordering.
|
| 62 |
+
|
| 63 |
+
Calibration does not decrease variance for GPT-2, either. Zhao et al. (2021) finds that language models are poorly calibrated when used directly as in-context classifiers, and argues that calibration is the key missing piece to improve and stabilize in-context learning performance. It proposes using dummy examples (e.g., "N/A") as anchors for calibrating the language model since a calibrated language model should make neutral predictions for these content-free examples.
|
| 64 |
+
|
| 65 |
+
Figure 2 demonstrates the effectiveness of cali
|
| 66 |
+
|
| 67 |
+
<table><tr><td>Model</td><td>AGNews</td><td>Amazon</td><td>SST-2</td><td>TREC</td></tr><tr><td>GPT-2</td><td>44.5<sub>9.3</sub></td><td>87.5<sub>3.7</sub></td><td>61.7<sub>14.4</sub></td><td>29.4<sub>12.8</sub></td></tr><tr><td>GPT-2 (C)</td><td>55.2<sub>12.0</sub></td><td>76.3<sub>14.0</sub></td><td>66.2<sub>14.7</sub></td><td>40.8<sub>5.4</sub></td></tr><tr><td>ADA</td><td>62.9<sub>17.5</sub></td><td>87.0<sub>6.1</sub></td><td>65.0<sub>10.2</sub></td><td>21.2<sub>6.6</sub></td></tr><tr><td>ADA (C)</td><td>64.0<sub>4.0</sub></td><td>90.0<sub>1.2</sub></td><td>73.8<sub>9.7</sub></td><td>22.1<sub>5.3</sub></td></tr><tr><td>BABBAGE</td><td>68.0<sub>14.0</sub></td><td>93.4<sub>0.8</sub></td><td>92.2<sub>2.7</sub></td><td>27.4<sub>5.8</sub></td></tr><tr><td>BABBAGE (C)</td><td>78.1<sub>6.1</sub></td><td>92.7<sub>1.6</sub></td><td>90.8<sub>1.1</sub></td><td>36.0<sub>4.0</sub></td></tr></table>
|
| 68 |
+
|
| 69 |
+
Table 2: Performance of GPT-2, ADA and BABBAGE across 5 random sets of 4-shot demonstration examples. C indicates calibration. Standard deviation is reported as subscripts.
|
| 70 |
+
|
| 71 |
+
bration in improving few-shot performance. With calibration, we observe an increase in average performance of varying magnitude on 3 out of the 4 tasks (AGNews, SST-2 and TREC), but a marginal decrease of performance on Amazon. For example, on AGNews where calibration improves performance the most, we observe a maximum accuracy of $79.5\%$ and a minimum accuracy of $26.1\%$ , resulting in a gap of over $53.4\%$ .
|
| 72 |
+
|
| 73 |
+
Interestingly, we observe varying behavior when combining calibration with demonstration reordering. On the binary tasks (Amazon and SST-2), we observe prompt reordering to be quite effective, consistently leading to performance above random examples. On the other hand, for AGNews (4 labels) and TREC (6 labels), we observe much greater variance.
|
| 74 |
+
|
| 75 |
+
In summary, with GPT-2, existing methods do not provide satisfactory solutions to the sensitivity of in-context learning to demonstration examples. Reordering demonstrations requires a well-behaved demonstration set, which is often not the case, and does not reduce variance. Calibration, though it improves performance, does not reduce variance either, and its effectiveness deteriorates with a large label set. These findings motivate the need for identifying high quality demonstration examples for consistent and performant in-context learning.
|
| 76 |
+
|
| 77 |
+
Variance persists to some degree with GPT-3. In Table 2, we report the performance of GPT-2, ADA and BABBAGE on 5 random sets of demonstration examples.3 GPT-3 models are not immune to instability due to resampling demonstration examples. On multi-class tasks including AGNews and TREC, we observe that both ADA and BABBAGE demonstrate significant variance, and on binary
|
| 78 |
+
|
| 79 |
+
tasks such as Amazon and SST-2, much smaller variance is observed. This difference is potentially due to the difficulty of the task and the multi-class nature of AGNews and TREC. We will address the latter in §4.3. Another interesting observation is that variance diminishes with calibration. However, one may argue that calibration no longer reflects the model's innate ability to acquire information.
|
| 80 |
+
|
| 81 |
+
Overall, the differences in model behavior between GPT-2 and GPT-3 add evidence to the emergent ability of large language models (Wei et al., 2022; Bowman, 2022). We hypothesize that the variance will be even smaller with GPT-3 Davinci.
|
| 82 |
+
|
| 83 |
+
# 3 Active Example Selection by RL
|
| 84 |
+
|
| 85 |
+
Given a set of unlabeled examples, can we choose the right ones to be annotated as demonstration examples? In this section, we formulate the problem of active example selection for in-context learning. Following the definition of in-context learning in §2.1, constructing a prompt for in-context learning boils down to choosing a sequence of demonstration examples.
|
| 86 |
+
|
| 87 |
+
We emphasize that by selecting from unlabeled examples, our setup is analogous to active learning, where we select examples to label. We think that this is the most appropriate setting for in-context learning because fine-tuning can lead to great performance with low variance if we already have a moderately-sized labeled set (e.g., 100 instances).
|
| 88 |
+
|
| 89 |
+
As in-context learning uses a small number of examples, we formulate active example selection as a sequential decision making problem, where prompt is constructed by selecting and annotating one demonstration example at a time. We use a Markov Decision Process (MDP) to formalize the problem, discuss our design of the reward function, and introduce our solution to example selection using reinforcement learning (RL).
|
| 90 |
+
|
| 91 |
+
# 3.1 Active Example Selection as a MDP
|
| 92 |
+
|
| 93 |
+
Given a set of unlabeled examples, we want to maximize the expected accuracy on unseen test examples by getting up to $k$ annotations. The space of possible prompts grows exponentially with the number of unlabeled examples and is intractable to enumerate, so we treat it as a sequential decision making problem: given the pool of unlabeled examples $\mathbf{S}_{\mathcal{X}} = \{x_i\}$ , choose one example $x_{i}$ , obtain its ground-truth label $y_{i}$ , append the pair $(x_{i},y_{i})$ to our prompt, and repeat this process until either the
|
| 94 |
+
|
| 95 |
+
budget $k$ is exhausted or the policy takes a special action $\bot$ indicating early termination.
|
| 96 |
+
|
| 97 |
+
Action space and state space. The action space of the MDP is the set of unlabeled examples plus the special end-of-prompt action: $\mathcal{A} = \mathbf{S}_{\mathcal{X}}\cup \{\bot \}$ . After choosing an action $x_{i}$ we observe its label $y_{i}$ , and the state is defined by the prefix of the prompt $s = (x_{1},y_{1}),(x_{2},y_{2}),\ldots ,(x_{i},y_{i})$
|
| 98 |
+
|
| 99 |
+
Reward. The reward $r$ can be defined based on an arbitrary scoring function $f$ of the language model LM when conditioned on the prompt $s$ , denoted $r = f(\mathrm{LM}_s)$ . In practice, we use the accuracy on a labeled validation set as reward.
|
| 100 |
+
|
| 101 |
+
It follows that we need to have access to a validation set during training, which we refer to as reward set. Similarly, we also have a labeled set from which our policy learns to select examples. We refer to this labeled set as training set. Ideally, our learned policies identify generalizable qualities of demonstration examples and can select useful unlabeled examples in a task where the policy has not observed any labeled examples. We will explore different setups to evaluate our learned policies.
|
| 102 |
+
|
| 103 |
+
It is useful to emphasize how active example selection deviates from the standard reinforcement learning setting. First, the action space is the examples to be selected, which can be variable in size. Furthermore, the actions during test time can be actions that the policy has never observed during training. Similarly, the classification task can differ from training, analogous to a new environment. Such generalizations are not typically assumed in reinforcement learning, due to the challenging nature of the problem (Kirk et al., 2022).
|
| 104 |
+
|
| 105 |
+
# 3.2 Active Example Selection by Q-learning
|
| 106 |
+
|
| 107 |
+
Framing active example selection as a sequential problem allows us to use off-the-shelf RL algorithms to train a policy. We opt to use Q-learning (Mnih et al., 2013) for its simplicity and effectiveness.
|
| 108 |
+
|
| 109 |
+
The objective of Q-learning is to approximate the optimal action-value function $Q^{\star}(s,a)$ , i.e., the maximum (discounted) future reward after taking action $a$ in state $s$ . The Bellman equation (Bellman, 1957) allows a recursive formulation of the optimal action-value function $Q^{\star}$ as
|
| 110 |
+
|
| 111 |
+
$$
Q^{\star}(s, a) = \mathbb{E}_{s' \sim \mathcal{S}} \left[ r(s, a) + \gamma \max_{a'} Q^{\star}(s', a') \right].
$$
|
| 114 |
+
|
| 115 |
+
We collect off-policy training data in our implementation and thus use offline Q-learning to lever
|
| 116 |
+
|
| 117 |
+
age off-policy data (Prudencio et al., 2022). Specifically, we use conservative Q-learning (CQL) (Kumar et al., 2020), which uses regularization to prevent the overestimation of Q-values for unobserved actions in training data, contributing to a robust policy when evaluated in an unfamiliar environment. More details about CQL can be found in Appendix A.
|
| 118 |
+
|
| 119 |
+
Generation of off-policy data. Offline learning requires off-policy training data. We run a random policy for a fixed number (2,000) of episodes to create the off-policy data. For every episode, we randomly sample 4 demonstration examples, and compute features and intermediate rewards. Then, we store the trajectory as training data.
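A minimal sketch of this collection loop, assuming a `score` callable like the `accuracy_reward` sketch above and using the marginal-utility reward defined below:

```python
import random

def collect_offpolicy_episodes(train_pool, score, n_episodes=2000, k=4, seed=0):
    """Roll out a random selection policy and store shaped-reward trajectories.

    train_pool: list of (x, y) labeled training examples the policy picks from.
    score:      callable mapping a list of (x, y) demonstrations to reward-set
                accuracy (the scoring function f).
    """
    rng = random.Random(seed)
    episodes = []
    for _ in range(n_episodes):
        picks = rng.sample(train_pool, k)               # random policy, no repeats
        demos, trajectory, prev = [], [], score([])     # score([]) is the zero-shot f
        for action in picks:
            cur = score(demos + [action])
            trajectory.append((list(demos), action, cur - prev))  # marginal-utility reward
            demos.append(action)
            prev = cur
        episodes.append(trajectory)
    return episodes
```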
|
| 120 |
+
|
| 121 |
+
Feature-based representation of actions. In our framework, a state $s$ is a sequence of examples, and we simply use the number of already selected examples $|s|$ as the feature representation. To enable our method to be deployed in an active example selection process, we assume no access to labels prior to selecting an example. That is, when representing an example to be selected $a = (x,y)$ , we omit the label $y$ and simply use the predicted label probabilities conditioned on the current examples $\mathbf{P}_{\mathrm{LM}}(\cdot | s + x)$ . We additionally include the entropy of this prediction.
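A sketch of these features is given below; `label_probs` is a hypothetical wrapper that returns the LM's label distribution for input $x$ conditioned on the current prompt $s$.

```python
import numpy as np

def state_action_features(state_demos, candidate_x, label_probs):
    """Compute the state feature |s| and the label-free action features.

    state_demos: list of (x, y) pairs already in the prompt (the state s).
    candidate_x: unlabeled input whose selection we are scoring.
    label_probs: callable (state_demos, x) -> array over the label set, i.e. P_LM(.|s + x).
    Returns (state_feat, action_feat); the Q-network consumes their concatenation [s || a].
    """
    p = np.asarray(label_probs(state_demos, candidate_x), dtype=float)
    entropy = float(-np.sum(p * np.log(p + 1e-12)))
    state_feat = np.array([len(state_demos)], dtype=float)   # |s|
    action_feat = np.concatenate([p, [entropy]])              # predicted probabilities + entropy
    return state_feat, action_feat
```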
|
| 122 |
+
|
| 123 |
+
Reward shaping. The previously defined reward function only rewards a completed prompt, while intermediate states receive zero reward. Sparse reward schemes are known to make learning difficult (Pathak et al., 2017). Therefore, we propose an alternative reward function based on the marginal utility of actions (Von Wieser, 1893). At time step $t$ we define $r: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ as
|
| 124 |
+
|
| 125 |
+
$$
|
| 126 |
+
r(s, a) = f\left(\mathrm{LM}_{s + a}\right) - f\left(\mathrm{LM}_{s}\right).
|
| 127 |
+
$$
|
| 128 |
+
|
| 129 |
+
Intuitively, $r$ measures the "additional gain" on objective $f$ from acquiring the label of example $a$ . Notice that $f(\mathrm{LM}_{\emptyset})$ can be conveniently interpreted as the zero-shot performance of the language model. Maximizing this marginal utility reward function is equivalent to optimizing the true objective $f$ : the summation of rewards along a trajectory is a telescoping series, leaving only the final term $f(\mathrm{LM}_{s_{\bot}})$ minus a constant term that does not affect the learned policy. It turns out that $r$ is a shaped reward (Ng et al., 1999), a family of transformed reward functions that preserves the invariance of optimal policies.
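Written out, with $s_0 = \emptyset$ and $s_{t+1} = s_t + a_t$ , the telescoping argument is

$$
\sum_{t} r(s_t, a_t) = \sum_{t} \left[ f(\mathrm{LM}_{s_{t+1}}) - f(\mathrm{LM}_{s_t}) \right] = f(\mathrm{LM}_{s_{\bot}}) - f(\mathrm{LM}_{\emptyset}),
$$

so maximizing the cumulative shaped reward maximizes $f$ of the final prompt, up to the constant zero-shot term.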
|
| 132 |
+
|
| 133 |
+
Target network with replay buffer. Our algorithm uses separate policy and target networks (Hasselt, 2010) with a replay buffer (Lin, 1992). Both are standard extensions to vanilla DQN (Arulkumaran et al., 2017), and have been demonstrated to improve performance while alleviating certain optimization issues (Hessel et al., 2017). After concatenating state and action representations, we use a 3-layer MLP as the Q-network: $\hat{Q}(s,a) = \mathrm{MLP}([s\parallel a])$ . We report hyperparameter details in Appendix B.
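A minimal PyTorch sketch of this Q-network and its target copy is below; the feature dimensions are illustrative assumptions (the hidden dimension of 16 follows Appendix B).

```python
import copy
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Q_hat(s, a) = MLP([s || a]) with a 3-layer MLP."""

    def __init__(self, state_dim, action_dim, hidden_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, state_feat, action_feat):
        return self.net(torch.cat([state_feat, action_feat], dim=-1)).squeeze(-1)

# Separate policy and target networks; the target is a copy of the policy network
# whose weights are refreshed every few hundred training steps (Appendix B).
policy_net = QNetwork(state_dim=1, action_dim=7)   # e.g., |s| plus 6 label probs + entropy
target_net = copy.deepcopy(policy_net)
```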
|
| 134 |
+
|
| 135 |
+
# 4 Results
|
| 136 |
+
|
| 137 |
+
In this section, we investigate the performance of our learned policies for GPT-2. Due to the significant costs of generating episodes, we only apply the policies learned from GPT-2 and examine direct transfer results on GPT-3. Baselines, oracles and our method have access to the same underlying calibrated GPT-2 model.
|
| 138 |
+
|
| 139 |
+
# 4.1 Setup
|
| 140 |
+
|
| 141 |
+
Following our framework in §3, during training, we use a training set from which the trained policy picks 4 examples for demonstration, as well as a reward set, which is a validation set where we compute rewards for the learning agent. Each set has 100 examples and our training scheme uses a total of 200 examples.
|
| 142 |
+
|
| 143 |
+
Depending on the availability of a reward set, we consider three evaluation settings:
|
| 144 |
+
|
| 145 |
+
- SEEN EXAMPLES, SAME TASK. In this setting, we use the learned policy to pick demonstration examples from the training set. We expect our method to be competitive with oracle methods that select examples based on rewards.
|
| 146 |
+
- NEW EXAMPLES, SAME TASK. We consider a more challenging setting where the learned policy picks from an unlabeled set of 100 or 1000 previously unseen examples. The learned policy still benefits from access to the reward set during training as the classification task is the same, but it cannot perform well simply by memorizing good sequences.
|
| 147 |
+
- NEW EXAMPLES, NEW TASK. Finally, we ask the learned policy to pick examples on a new task that it has never seen. Specifically, we adopt a multi-task learning approach, allowing the policy
|
| 148 |
+
|
| 149 |
+
<table><tr><td>Method</td><td>Average</td><td>AGNews</td><td>Amazon</td><td>SST-2</td><td>TREC</td></tr><tr><td>random</td><td>59.6</td><td>55.210.5</td><td>76.312.3</td><td>66.212.9</td><td>40.84.7</td></tr><tr><td>max-entropy</td><td>59.3</td><td>58.811.3</td><td>74.85.1</td><td>65.710.7</td><td>37.86.7</td></tr><tr><td>reordering</td><td>63.5</td><td>63.36.8</td><td>89.83.8</td><td>67.911.1</td><td>33.04.2</td></tr><tr><td>best-of-10</td><td>72.5</td><td>72.11.9</td><td>91.10.6</td><td>81.14.4</td><td>45.63.5</td></tr><tr><td>greedy-oracle</td><td>78.0</td><td>80.61.7</td><td>91.81.1</td><td>81.73.9</td><td>58.07.5</td></tr><tr><td>our method (seen examples)</td><td>71.4</td><td>70.87.8</td><td>90.41.9</td><td>81.03.5</td><td>43.32.0</td></tr><tr><td>our method (100 new examples)</td><td>71.6</td><td>71.37.4</td><td>89.23.9</td><td>81.82.6</td><td>44.04.6</td></tr><tr><td>our method (1000 new examples)</td><td>69.0</td><td>65.57.4</td><td>88.54.2</td><td>76.77.5</td><td>45.45.0</td></tr></table>
|
| 150 |
+
|
| 151 |
+
Table 3: SAME TASK accuracy on AGNews, Amazon, SST-2 and TREC, across 5 random seeds. $95\%$ confidence intervals are reported as subscripts.
|
| 152 |
+
|
| 153 |
+
to simultaneously learn from all but one task. Then, we evaluate on the held-out task (e.g., train on AGNews, SST-2, TREC and test on Amazon). The learned policies use 600 examples during training ($3 \times 100$ each for the training set and the reward set). During evaluation, the policy picks examples from an unlabeled set of examples in the held-out task, and we experiment with either 100 or 1000 unlabeled examples.
|
| 154 |
+
|
| 155 |
+
SEEN EXAMPLES, SAME TASK and NEW EXAMPLES, SAME TASK serve as sanity checks of our learned policies, while NEW EXAMPLES, NEW TASK is the most appropriate setting for evaluating in-context learning.
|
| 156 |
+
|
| 157 |
+
Baselines and oracles. We consider three baseline methods for example selection. The random strategy simply picks demonstration examples randomly. Our second baseline (max-entropy) is a standard approach in active learning (Settles, 2009; Dagan and Engelson, 1995) which greedily picks the example maximizing classification entropy. We additionally consider a strong example reordering heuristic by Lu et al. (2022), dubbed reordering; reordering first uses the language model to generate a set of fake examples that resemble demonstrations, and then chooses an ordering that maximizes classification entropy on these fake examples. Intuitively, max-entropy and reordering both encourage class balance during prediction. All three baselines can be used in active example selection, namely, example selection that does not have access to labels before examples are selected.
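For reference, a sketch of the max-entropy baseline, reusing the hypothetical `label_probs` helper from the feature sketch above:

```python
import numpy as np

def max_entropy_pick(prompt, unlabeled_pool, label_probs):
    """Greedily pick the candidate whose predicted label distribution has the highest entropy."""
    def entropy(x):
        p = np.asarray(label_probs(prompt, x), dtype=float)
        return float(-np.sum(p * np.log(p + 1e-12)))
    return max(unlabeled_pool, key=entropy)
```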
|
| 158 |
+
|
| 159 |
+
We further consider two oracle methods that require a labeled candidate set and a reward set. The best-of-10 strategy randomly samples 10 times and keeps the sample that maximizes performance on the reward set as the final demonstration sequence. In addition, we use a greedy strategy to iteratively choose the example that results in the highest performance on the reward set, and we refer to this strategy as greedy-oracle. The oracles do not work for active example selection and cannot be used in NEW TASK, as the assumption there is that we do not have any labeled examples, so we do not compare our learned policies with oracles in NEW TASK.
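A sketch of the two oracles, assuming the same `score` function over labeled demonstrations as in the earlier sketches:

```python
import random

def best_of_10(labeled_pool, score, k=4, n_samples=10, seed=0):
    """Sample 10 random k-shot prompts and keep the one with the best reward-set score."""
    rng = random.Random(seed)
    candidates = [rng.sample(labeled_pool, k) for _ in range(n_samples)]
    return max(candidates, key=score)

def greedy_oracle(labeled_pool, score, k=4):
    """Iteratively add the example that most improves reward-set performance."""
    demos, remaining = [], list(labeled_pool)
    for _ in range(k):
        best = max(remaining, key=lambda ex: score(demos + [ex]))
        demos.append(best)
        remaining.remove(best)
    return demos
```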
|
| 162 |
+
|
| 163 |
+
We use baselines and our methods to select 4 demonstration examples for every task, and we average model performances across 5 random runs.
|
| 164 |
+
|
| 165 |
+
# 4.2 Main results
|
| 166 |
+
|
| 167 |
+
We analyze the effectiveness of applying our method in both SAME TASK and NEW TASK.
|
| 168 |
+
|
| 169 |
+
SAME TASK. Our method evaluated by picking from seen examples demonstrates strong performance. Across all 4 tasks, our method outperforms random, max-entropy and reordering baselines by an average of $11.8\%$ , $12.1\%$ and $7.9\%$ , respectively, as well as $>10\%$ improvements on 2 tasks.
|
| 170 |
+
|
| 171 |
+
Beyond performance gains, it is clear that our method helps reduce variance. We present $95\%$ confidence intervals as a proxy for variance. Across all 4 tasks, we observe a consistent decrease in variance compared to the baselines.
|
| 172 |
+
|
| 173 |
+
Picking from both 100 and 1000 new examples largely retains the performance gains and variance reductions. Interestingly, we notice a higher overall performance of picking from 100 over 1000 new examples. This can be attributed to the large variance (see Appendix C.1 for more results).
|
| 174 |
+
|
| 175 |
+
Compared with the oracle methods, our method performs relatively close to best-of-10, while greedy-oracle significantly outperforms the other methods. Since we want the policies to learn generalizable example selection strategies, we intentionally
|
| 176 |
+
|
| 177 |
+
<table><tr><td>Method</td><td>Average</td><td>AGNews</td><td>Amazon</td><td>SST-2</td><td>TREC</td></tr><tr><td>random</td><td>59.6</td><td>55.210.5</td><td>76.312.3</td><td>66.212.9</td><td>40.84.7</td></tr><tr><td>max-entropy</td><td>59.3</td><td>58.811.3</td><td>74.85.1</td><td>65.710.7</td><td>37.86.7</td></tr><tr><td>reordering</td><td>63.5</td><td>63.36.8</td><td>89.83.8</td><td>67.911.1</td><td>33.04.2</td></tr><tr><td>our method (100 examples)</td><td>63.8</td><td>63.410.4</td><td>86.86.7</td><td>65.913.4</td><td>38.95.1</td></tr><tr><td>our method (1000 examples)</td><td>65.4</td><td>66.75.7</td><td>89.91.6</td><td>61.97.7</td><td>43.34.4</td></tr></table>
|
| 178 |
+
|
| 179 |
+
Table 4: NEW TASK accuracy on AGNews, Amazon, SST-2 and TREC, across 5 random seeds. $95\%$ confidence intervals are reported as subscripts.
|
| 180 |
+
|
| 181 |
+
use simple features, which may explain why our method, even when picking from seen examples, does not outperform the oracles. Thanks to the high variance of random sampling, best-of-10 is a very performant strategy despite its simplicity, and a reasonable choice if validation is possible. At the cost of an exponential runtime, greedy-oracle shows the strong in-context learning performance attainable with just example selection, motivating the framing of in-context learning optimization as a pure example selection problem. In fact, the average performance of greedy-oracle with GPT-2 (345M) is better than that of GPT-3 Curie, a 20x larger model (see Appendix C.2).<sup>7</sup>
|
| 182 |
+
|
| 183 |
+
NEW TASK. We further evaluate our methods under the new task setting, where we train the example selection policy on 3 tasks and evaluate on a previously unseen task. On average, we observe smaller, but still significant, improvements over both the random and max-entropy baselines, suggesting the existence of learnable insights about good demonstration examples that generalize across tasks. On the other hand, we observe limited gains over reordering, signifying the challenge of finding good examples in an unknown task.
|
| 184 |
+
|
| 185 |
+
Interestingly, when picking from 1000 examples, we observe a much greater variance reduction effect compared to the baselines. In comparison, the variance reduction effect is minimal when picking from 100 examples, and the performance gain is slightly smaller, likely due to randomness.
|
| 186 |
+
|
| 187 |
+
We continue this discussion on the effect of size of selection set on transfer performance in Appendix C.1.
|
| 188 |
+
|
| 189 |
+
GPT-3 transfer. Training example selection policies directly on GPT-3 models is not viable since it requires sampling a significant number of trajectories while computing rewards. Therefore, we instead evaluate whether policies and examples trained on GPT-2 generalize to GPT-3. Overall, we find mixed transfer results. On the smaller GPT-3 ADA model, we observe small gains ( $\sim 1\%$ ) from transferring both policies and examples, which is impressive considering the architectural differences between GPT-2 and GPT-3. However, we observe mixed results in transfer to BABBAGE and CURIE. We report further details in Appendix C.2.
|
| 192 |
+
|
| 193 |
+
# 4.3 What Makes Good Examples?
|
| 194 |
+
|
| 195 |
+
To understand what makes good examples, we explore properties of the learned policy and design additional experiments based on our qualitative examination of the selected examples. In the interest of space, we focus on label balance and coverage, and present other results based on linear policies (C.3) and length (C.4) in the Appendix.
|
| 196 |
+
|
| 197 |
+
On Amazon and SST-2, both binary sentiment classification tasks, we focus on label balance, measured by the number of positive labels in the demonstration set. For AGNews (4 labels) and TREC (6 labels), we instead focus on the number of distinct labels covered by the demonstration set. We present the results in Figure 3 and Figure 4.
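Both statistics are straightforward to compute from a candidate demonstration set; a small sketch:

```python
def label_balance(demos, positive_label=1):
    """Number of positive labels among the demonstration examples (binary tasks)."""
    return sum(1 for _, y in demos if y == positive_label)

def label_coverage(demos):
    """Number of distinct labels covered by the demonstration set (multi-class tasks)."""
    return len({y for _, y in demos})
```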
|
| 198 |
+
|
| 199 |
+
Perhaps surprisingly, a well-balanced demonstration set does not consistently lead to greater performance or less variance. In Amazon, we notice that having all 4 examples be positive actually leads to good in-context learning performance, with an average accuracy of $87.8\%$ , $4.5\%$ higher than that of a perfectly balanced demonstration set $(83.3\%)$ . A similar trend appears in SST-2, where having all positive or all negative labels leads to much smaller variance compared to more balanced sets, while outperforming perfectly balanced sets on average.
|
| 200 |
+
|
| 201 |
+
In TREC, we again observe that the model does not need to observe the entire label space to perform well. The greatest performance occurs when
|
| 202 |
+
|
| 203 |
+

|
| 204 |
+
(a) Amazon
|
| 205 |
+
|
| 206 |
+

|
| 207 |
+
(b) SST-2
|
| 208 |
+
|
| 209 |
+

|
| 210 |
+
Figure 3: Accuracies of Amazon and SST-2 with varying label balance (number of positive examples in demonstration), across 100 total random samples of 4 demonstration examples.
|
| 211 |
+
(a) AGNews
|
| 212 |
+
|
| 213 |
+

|
| 214 |
+
(b) TREC
|
| 215 |
+
Figure 4: Accuracies of AGNews and TREC with varying label coverage (number of unique labels covered in demonstration), across 100 total random samples of 4 demonstration examples. A demonstration set that covers only 1 label is very unlikely and does not appear in our experiments.
|
| 216 |
+
|
| 217 |
+
exactly two labels are covered by the demonstration, and the performance deteriorates as label coverage increases. AGNews demonstrates a somewhat expected pattern. When 4 labels are covered, we observe the best performance along with a small variance. That said, covering three labels does not improve over covering two labels.
|
| 218 |
+
|
| 219 |
+
Overall, our analysis highlights the idiosyncrasies of how GPT-2 acquires information in in-context learning. The sequences that lead to strong performance may not align with human intuitions.
|
| 220 |
+
|
| 221 |
+
# 5 Related Work
|
| 222 |
+
|
| 223 |
+
Our paper builds on prior work that uses RL to solve the active learning problem (Fang et al., 2017; Liu et al., 2018), and is made possible by recent advances in pre-trained language models (Devlin et al., 2019; Liu et al., 2019; Raffel et al., 2020; Gao et al., 2021). In-context learning refers to the observation that LMs (Radford et al., 2019; Brown et al., 2020; Rae et al., 2022; Zhang et al., 2022) can "learn" to perform a task when conditioned on a prompt. Xie et al. (2022) explain the emergence of in-context learning as inference of a shared latent concept among demonstration examples, while Min et al. (2022) find that the success of in-context learning is largely independent of access to gold labels.
|
| 226 |
+
|
| 227 |
+
A variety of issues with in-context learning have been identified, including surface form competition, the phenomenon where multiple words referring to the same concept compete for probability mass (Holtzman et al., 2021), and the sensitivity of LMs to changes in the prompt (Lester et al., 2021), instruction (Mishra et al., 2022), or ordering of demonstration examples (Zhao et al., 2021; Lu et al., 2022). To optimize the performance of in-context learning, methods with varying levels of granularity have been proposed. Such methods include prompt tuning (Lester et al., 2021; Vu et al., 2022; Wu et al., 2022) and instruction optimization (Mishra et al., 2022; Kojima et al., 2022). Liu et al. (2021) approach the example selection problem by searching for nearest neighbors of test examples in the embedding space, while Rubin et al. (2022) use a scoring LM for example retrieval.
|
| 228 |
+
|
| 229 |
+
# 6 Discussion
|
| 230 |
+
|
| 231 |
+
Inspired by Pang and Lee (2005), we adopt a Q&A format to discuss the implications of our work.
|
| 232 |
+
|
| 233 |
+
Q: Are GPT-2 results still relevant?
|
| 234 |
+
|
| 235 |
+
A: We believe they are relevant for three reasons. First, GPT-2 is a public and economically feasible option for many researchers. Our knowledge about GPT-2 is far from complete, and expanding this understanding is useful on its own. Second, in the long term, it is unclear whether everyone will have access to large models or whether it is appropriate to use the largest available model in every use case. Models of moderate size are likely still useful depending on the use case. Third, it is important to highlight the emergent abilities across different sizes of language models. By understanding the phase change, i.e., when emergent abilities appear, we will better understand the behavior of large-scale language models.
|
| 236 |
+
|
| 237 |
+
That said, one should be cautious about making generalizing claims based on results from GPT-2, because the results may not generalize to GPT-3 (Bowman, 2022). This is why we also present negative results from GPT-3. Differing results between GPT-2 and GPT-3, or more generally between models of different sizes, will be a reality in NLP for a while. It is important for the NLP community to collectively build knowledge about such differences and develop the future ecosystem of models.
|
| 238 |
+
|
| 239 |
+
Q: Why did you not experiment with GPT-3-Davinci?
|
| 240 |
+
|
| 241 |
+
A: The goal of this work is twofold: 1) assessing the ability of large-scale language models to acquire new information and 2) exploring whether reinforcement learning can identify reliable strategies for actively selecting examples. Our results are generally positive on GPT-2. Meanwhile, we observe relatively small variance after calibration with GPT-3-Babbage, so it does not seem economically sensible to experiment with even bigger models.
|
| 242 |
+
|
| 243 |
+
Q: Why did you choose $k = 4$ ? Is this generalizable?
|
| 244 |
+
|
| 245 |
+
A: Our experiments are limited by the context window of GPT-2 (1024 tokens) and GPT-3 (2048 tokens). Using $k$ beyond 4 would frequently lead to demonstration examples overflowing the token limit and needing to be truncated. Additionally, prior work (Zhao et al., 2021; Brown et al., 2020) shows diminishing improvements in in-context learning performance from increasing the number of demonstration examples beyond 4. Therefore, we believe experimenting with $k = 4$ is a reasonable choice. We are optimistic that our framework and method can generalize to different numbers of shots.
|
| 248 |
+
|
| 249 |
+
# 7 Conclusion
|
| 250 |
+
|
| 251 |
+
In this work, we investigate how large language models acquire information through the perspective of example selection for in-context learning. In-context learning with GPT-2 and GPT-3 is sensitive to the selection of demonstration examples. In order to identify generalizable properties of useful demonstration examples, we study active example selection where unlabeled examples are iteratively selected, annotated, and added to the prompt. We use reinforcement learning to train policies for active example selection. The learned policy stabilizes in-context learning and improves accuracy when we apply it to a new pool of unlabeled examples or even completely new tasks unseen during training for GPT-2. Our analyses further reveal that properties of useful demonstration examples can deviate from human intuitions.
|
| 252 |
+
|
| 253 |
+
Examples selected from GPT-2 can still lead to a small improvement on GPT-3 Ada, however, the gain diminishes on larger models (i.e., Babbage and Curie). Our results highlight the challenges of generalization in the era of large-scale models due to their emerging capabilities. We believe that it is important for the NLP community to collectively build knowledge about such differences and develop the future ecosystem of models together.
|
| 254 |
+
|
| 255 |
+
# Ethics Statement
|
| 256 |
+
|
| 257 |
+
Our primary goal is to understand how large language models acquire new information in in-context learning through the perspective of example selection. A better understanding can help develop more effective strategies for in-context learning as well as better large-scale language models. However, these strategies could also be used in applications that may cause harm to society.
|
| 258 |
+
|
| 259 |
+
# Acknowledgments
|
| 260 |
+
|
| 261 |
+
We thank all anonymous reviewers for their insightful suggestions and comments. We thank all members of the Chicago Human+AI Lab for feedback on early versions of this work. This work was supported in part by an Amazon research award, a Salesforce research award, a UChicago DSI discovery grant, and an NSF grant IIS-2126602.
|
| 262 |
+
|
| 263 |
+
# References
|
| 264 |
+
|
| 265 |
+
Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, and Anil Anthony Bharath. 2017. A Brief Survey of Deep Reinforcement Learning. IEEE Signal Processing Magazine, 34(6):26-38.
|
| 266 |
+
Richard Bellman. 1957. Dynamic Programming, first edition. Princeton University Press, Princeton, NJ, USA.
|
| 267 |
+
Samuel Bowman. 2022. The dangers of underclaiming: Reasons for caution when reporting how NLP systems fail. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7484-7499, Dublin, Ireland. Association for Computational Linguistics.
|
| 268 |
+
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.
|
| 269 |
+
Ido Dagan and Sean P. Engelson. 1995. Committee-based sampling for training probabilistic classifiers. In Proceedings of the Twelfth International Conference on International Conference on Machine Learning, ICML'95, pages 150-157, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
|
| 270 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 271 |
+
Meng Fang, Yuan Li, and Trevor Cohn. 2017. Learning how to Active Learn: A Deep Reinforcement Learning Approach. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 595-605, Copenhagen, Denmark. Association for Computational Linguistics.
|
| 272 |
+
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making Pre-trained Language Models Better Few-shot Learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816-3830, Online. Association for Computational Linguistics.
|
| 273 |
+
|
| 274 |
+
Hado Hasselt. 2010. Double Q-learning. In Advances in Neural Information Processing Systems, volume 23. Curran Associates, Inc.
|
| 275 |
+
Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. 2017. Rainbow: Combining Improvements in Deep Reinforcement Learning.
|
| 276 |
+
Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, and Luke Zettlemoyer. 2021. Surface Form Competition: Why the Highest Probability Answer Isn't Always Right.
|
| 277 |
+
Robert Kirk, Amy Zhang, Edward Grefenstette, and Tim Rocktäschel. 2022. A Survey of Generalisation in Deep Reinforcement Learning.
|
| 278 |
+
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large Language Models are Zero-Shot Reasoners.
|
| 279 |
+
Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. 2020. Conservative Q-Learning for Offline Reinforcement Learning.
|
| 280 |
+
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The Power of Scale for Parameter-Efficient Prompt Tuning. arXiv:2104.08691 [cs].
|
| 281 |
+
Long-Ji Lin. 1992. Self-Improving Reactive Agents Based on Reinforcement Learning, Planning and Teaching. Machine Language, 8(3-4):293-321.
|
| 282 |
+
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2021. What Makes Good In-Context Examples for GPT-3?
|
| 283 |
+
Ming Liu, Wray Buntine, and Gholamreza Haffari. 2018. Learning How to Actively Learn: A Deep Imitation Learning Approach. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1874-1883, Melbourne, Australia. Association for Computational Linguistics.
|
| 284 |
+
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach.
|
| 285 |
+
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8086-8098, Dublin, Ireland. Association for Computational Linguistics.
|
| 286 |
+
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? arXiv:2202.12837 [cs].
|
| 287 |
+
|
| 288 |
+
Swaroop Mishra, Daniel Khashabi, Chitta Baral, Yejin Choi, and Hannaneh Hajishirzi. 2022. Reframing Instructional Prompts to GPTk's Language.
|
| 289 |
+
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. Playing Atari with Deep Reinforcement Learning.
|
| 290 |
+
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. 2022. WebGPT: Browser-assisted question-answering with human feedback.
|
| 291 |
+
Andrew Y. Ng, Daishi Harada, and Stuart J. Russell. 1999. Policy Invariance Under Reward Transformations: Theory and Application to Reward Shaping. In Proceedings of the Sixteenth International Conference on Machine Learning, ICML '99, pages 278-287, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
|
| 292 |
+
Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of ACL, pages 115-124.
|
| 293 |
+
Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, and Trevor Darrell. 2017. Curiosity-Driven Exploration by Self-Supervised Prediction. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 488-489, Honolulu, HI, USA. IEEE.
|
| 294 |
+
Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True Few-Shot Learning with Language Models. In Advances in Neural Information Processing Systems.
|
| 295 |
+
Rafael Figueiredo Prudencio, Marcos R. O. A. Maximo, and Esther Luna Colombini. 2022. A Survey on Offline Reinforcement Learning: Taxonomy, Review, and Open Problems.
|
| 296 |
+
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
|
| 297 |
+
Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena
|
| 298 |
+
|
| 299 |
+
Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d'Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew Johnson, Blake Hechtman, Laura Weidinger, Iason Gabriel, William Isaac, Ed Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving. 2022. Scaling Language Models: Methods, Analysis & Insights from Training Gopher.
|
| 300 |
+
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer.
|
| 301 |
+
Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2022. Learning To Retrieve Prompts for In-Context Learning.
|
| 302 |
+
Burr Settles. 2009. Active learning literature survey.
|
| 303 |
+
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.
|
| 304 |
+
Friedrich Freiherr Von Wieser. 1893. Natural Value. Macmillan and Company.
|
| 305 |
+
Ellen M. Voorhees and Dawn M. Tice. 2000. Building a question answering test collection. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '00, pages 200-207, New York, NY, USA. Association for Computing Machinery.
|
| 306 |
+
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, and Daniel Cer. 2022. SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer. arXiv:2110.07904 [cs].
|
| 307 |
+
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682.
|
| 308 |
+
Zhuofeng Wu, Sinong Wang, Jiatao Gu, Rui Hou, Yuxiao Dong, V. G. Vinod Vydiswaran, and Hao Ma. 2022. IDPG: An Instance-Dependent Prompt Generation Method. arXiv:2204.04497 [cs].
|
| 309 |
+
Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. 2022. An Explanation of In-context Learning as Implicit Bayesian Inference.
|
| 310 |
+
|
| 311 |
+
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. OPT: Open Pre-trained Transformer Language Models.
|
| 312 |
+
|
| 313 |
+
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level Convolutional Networks for Text Classification. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.
|
| 314 |
+
|
| 315 |
+
Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate Before Use: Improving Few-Shot Performance of Language Models.
|
| 316 |
+
|
| 317 |
+
# A Conservative Q-Learning
|
| 318 |
+
|
| 319 |
+
The objective of standard Q-learning is to minimize the Bellman Error (BE):
|
| 320 |
+
|
| 321 |
+
$$
|
| 322 |
+
\operatorname{BE}(Q) = \mathbb{E}_{s, a, s^{\prime} \sim \mathcal{D}} \left[ r(s, a) + \gamma \max_{a^{\prime}} Q(s^{\prime}, a^{\prime}) - Q(s, a) \right].
|
| 323 |
+
$$
|
| 324 |
+
|
| 325 |
+
An issue with offline Q-learning is that there are out-of-distribution (OOD) actions that do not appear in the training data. Learned Q-networks often overestimate the Q-values of these actions, which results in the policy taking unfamiliar actions during evaluation and hurts performance. To mitigate this issue, conservative Q-learning (CQL) adds a penalty term to regularize Q-values:
|
| 326 |
+
|
| 327 |
+
$$
|
| 328 |
+
\min_{Q} \; \alpha \, \mathbb{E}_{s \sim \mathcal{D}} \left[ \log \sum_{a} \exp \left(Q(s, a)\right) - \mathbb{E}_{a \sim \hat{\pi}_{\beta}} \left[ Q(s, a) \right] \right] + \frac{1}{2} \operatorname{BE}(Q)^{2},
|
| 329 |
+
$$
|
| 330 |
+
|
| 331 |
+
where $\alpha$ is a weight term, and $\hat{\pi}_{\beta}$ is the behavior policy, under which the offline transitions are collected for training. Notice this objective penalizes all unobserved actions under $\hat{\pi}_{\beta}$ . Intuitively, this regularizer leads to a policy that avoids unfamiliar actions during evaluation. We refer the interested reader to the original paper for theoretical guarantees and further details (Kumar et al., 2020).
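A sketch of this regularized objective in PyTorch, assuming batched Q-values `q_all` over all candidate actions in each state, `q_data` for the actions actually taken in the offline data, and a precomputed (detached) Bellman target from the target network:

```python
import torch
import torch.nn.functional as F

def cql_loss(q_all, q_data, q_target, alpha=0.1):
    """Conservative Q-learning loss (sketch).

    q_all:    (batch, num_actions) Q-values for all candidate actions in each state.
    q_data:   (batch,) Q-values of the actions logged by the behavior policy.
    q_target: (batch,) Bellman targets r + gamma * max_a' Q_target(s', a'), detached.
    alpha:    weight of the conservative penalty (0, 0.1 or 0.2 in Appendix B).
    """
    # logsumexp over all actions pushes down Q-values of unobserved actions,
    # while subtracting the logged action's Q-value keeps observed actions up.
    conservative = (torch.logsumexp(q_all, dim=-1) - q_data).mean()
    bellman = 0.5 * F.mse_loss(q_data, q_target)   # 1/2 * squared Bellman error, averaged
    return alpha * conservative + bellman
```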
|
| 332 |
+
|
| 333 |
+
# B Hyperparameters
|
| 334 |
+
|
| 335 |
+
We report the list of hyperparameters for the hyperparameter search in Table 5. We use grid search over these hyperparameters to determine the combination that maximizes validation performance.
|
| 336 |
+
|
| 337 |
+
<table><tr><td>Hyperparameter</td><td>Value</td></tr><tr><td>Train steps</td><td>8000</td></tr><tr><td>Batch size</td><td>16</td></tr><tr><td>Hidden dim (MLP)</td><td>16</td></tr><tr><td>Replay memory size</td><td>50000</td></tr><tr><td>Learning rate</td><td>1e-4, 3e-4, 5e-4</td></tr><tr><td>CQL regularization weight α</td><td>0, 0.1, 0.2</td></tr><tr><td>Target network update steps</td><td>100, 200, 400</td></tr><tr><td>Dropout rate</td><td>0, 0.25</td></tr></table>
|
| 338 |
+
|
| 339 |
+
Table 5: List of hyperparameters used in our experiments.
|
| 340 |
+
|
| 341 |
+

|
| 342 |
+
Figure 5: Average NEW TASK (transfer) accuracy on 4 tasks across 5 random seeds. $95\%$ confidence intervals are reported as error bars.
|
| 343 |
+
|
| 344 |
+
During validation, the policy picks from the reward set, and is evaluated on the training set, whereas in training, we pick from the training set and evaluate on the reward set. We point out that our validation scheme does not use extra data.
|
| 345 |
+
|
| 346 |
+
Table 6 further includes the performance of linear policies. The performance of linear policies is better than the baselines, but clearly worse than the MLP policy.
|
| 347 |
+
|
| 348 |
+
# C Additional Results
|
| 349 |
+
|
| 350 |
+
We present results on the effect of the unlabeled set size and on transfer to GPT-3. We also provide additional analysis towards understanding what makes good examples for in-context learning.
|
| 351 |
+
|
| 352 |
+
# C.1 Effect of Unlabeled Size
|
| 353 |
+
|
| 354 |
+
In §4.2, we noticed that the number of unlabeled examples available for selection plays a role in the performance of our policies. One might expect the transfer performance in the NEW TASK setting to scale with the unlabeled set size, simply because there are additional examples to pick from.
|
| 355 |
+
|
| 356 |
+
<table><tr><td>Method</td><td>Average</td><td>AGNews</td><td>Amazon</td><td>SST-2</td><td>TREC</td></tr><tr><td>random</td><td>59.6</td><td>55.210.5</td><td>76.312.3</td><td>66.212.9</td><td>40.84.7</td></tr><tr><td>max-entropy</td><td>59.3</td><td>58.811.3</td><td>74.85.1</td><td>65.710.7</td><td>37.86.7</td></tr><tr><td>best-of-10</td><td>72.5</td><td>72.11.9</td><td>91.10.6</td><td>81.14.4</td><td>45.63.5</td></tr><tr><td>greedy-oracle</td><td>78.0</td><td>80.61.7</td><td>91.81.1</td><td>81.73.9</td><td>58.07.5</td></tr><tr><td>Linear policy (seen examples)</td><td>65.6</td><td>62.87.8</td><td>82.78.6</td><td>74.25.8</td><td>42.82.9</td></tr><tr><td>Linear policy (1000 new examples)</td><td>65.9</td><td>69.56.0</td><td>83.76.2</td><td>65.24.9</td><td>45.22.8</td></tr><tr><td>MLP policy (seen examples)</td><td>71.4</td><td>70.87.8</td><td>90.41.9</td><td>81.03.5</td><td>43.32.0</td></tr><tr><td>MLP policy (1000 new examples)</td><td>69.0</td><td>65.57.4</td><td>88.54.2</td><td>76.77.5</td><td>45.45.0</td></tr></table>
|
| 357 |
+
|
| 358 |
+
Table 6: SAME TASK accuracy on AGNews, Amazon, SST-2 and TREC, across 5 random seeds, with our methods (using MLP and Linear networks as policies). $95\%$ confidence intervals are reported as subscripts.
|
| 359 |
+
|
| 360 |
+
In Figure 5, we plot average accuracies in the NEW TASK setting, where we train our policies on three datasets and evaluate on a held-out dataset. Here, we notice that the benefit of a larger unlabeled set is twofold, both increasing transfer performance and reducing variance. That said, the improvement is not necessarily monotonic due to the large variance. Interestingly, our learned policy is performant even when the unlabeled set is small. Picking from 50 unlabeled examples, our policies reach an average accuracy of $63.3\%$ , still managing to outperform random demonstrations $(59.6\%)$ .
|
| 361 |
+
|
| 362 |
+
# C.2 Transfer to GPT-3
|
| 363 |
+
|
| 364 |
+
Despite demonstrating the ability to generalize across tasks, it is not yet clear whether policies learned on GPT-2 can generalize to other models, such as GPT-3. In Table 7, we report the performance of transferring both learned policies and selected examples from GPT-2 to GPT-3 ADA, BABBAGE and CURIE.
|
| 365 |
+
|
| 366 |
+
We observe mixed results when transferring to GPT-3. With an uncalibrated ADA model, we observe a small but measurable improvement from transferring either the policy (1.1%) or the examples directly (0.9%). Such a trend holds for the calibrated ADA model too (0.4% and 1.9%). Despite the improved performance, the benefit of variance reduction is diminished. Perhaps surprising is the generalization of learned policies: it suggests that different models could indeed share similar preferences for demonstration examples.
|
| 367 |
+
|
| 368 |
+
On the other hand, we observe negative results when transferring to BABBAGE. When transferring the learned policy to an uncalibrated BABBAGE model, we notice that performance drops by $1.6\%$ . For cost considerations, we run CURIE experiments for one random set and do not report variance. Marginal gains are observed when transferring the policy to the uncalibrated model (1.8%) and the examples to the calibrated model (1.0%). In other scenarios, transfer results match or underperform the base models. As the observed results could be attributed to randomness, we stop short of drawing conclusions.
|
| 371 |
+
|
| 372 |
+
# C.3 Coefficients in Linear Policies
|
| 373 |
+
|
| 374 |
+
Although linear policies perform worse than the MLP, they are more interpretable. Figure 6 shows the coefficients of feature representations of actions for AGNews and SST-2. The average coefficient of entropy is indeed positive, suggesting that strategies encouraging class balance have some value. However, it is often not the most important feature. For example, positive examples in SST-2 matter more, which is consistent with our observation in the main paper. Moreover, the variance is large, highlighting the challenges in learning a generalizable policy.
|
| 375 |
+
|
| 376 |
+
# C.4 Effect of Length
|
| 377 |
+
|
| 378 |
+
We also examine the effect of length on in-context learning. Intuitively, one might expect longer examples to be more meaningful. However, we do not see a correlation between length and accuracy in AGNews and TREC, and only a non-significant negative correlation in SST-2. In Amazon, we observe a statistically significant (p-value = 0.019) but weak negative correlation between length and accuracy. Overall, there is no evidence suggesting longer examples improve in-context learning performance.
|
| 379 |
+
|
| 380 |
+
<table><tr><td>Model</td><td>Average</td><td>AGNews</td><td>Amazon</td><td>SST-2</td><td>TREC</td></tr><tr><td>ADA</td><td>59.0</td><td>62.915.3</td><td>87.05.3</td><td>65.08.9</td><td>21.25.8</td></tr><tr><td>ADA (C)</td><td>62.5</td><td>64.03.5</td><td>90.01.1</td><td>73.88.5</td><td>22.14.6</td></tr><tr><td>GPT-2 policy → ADA</td><td>60.1</td><td>51.815.5</td><td>89.11.7</td><td>73.315.0</td><td>26.23.9</td></tr><tr><td>GPT-2 policy → ADA (C)</td><td>62.9</td><td>55.65.9</td><td>89.72.2</td><td>86.71.6</td><td>19.51.4</td></tr><tr><td>GPT-2 examples → ADA</td><td>59.9</td><td>48.912.5</td><td>89.32.5</td><td>74.811.4</td><td>26.63.9</td></tr><tr><td>GPT-2 examples → ADA (C)</td><td>64.4</td><td>62.08.3</td><td>88.73.2</td><td>84.03.6</td><td>23.05.3</td></tr><tr><td>BABBAGE</td><td>70.3</td><td>68.012.3</td><td>93.40.7</td><td>92.22.4</td><td>27.45.1</td></tr><tr><td>BABBAGE (C)</td><td>74.4</td><td>78.15.3</td><td>92.71.4</td><td>90.81.0</td><td>36.03.5</td></tr><tr><td>GPT-2 policy → BABBAGE</td><td>68.7</td><td>58.05.9</td><td>93.62.2</td><td>90.61.6</td><td>32.51.4</td></tr><tr><td>GPT-2 policy → BABBAGE (C)</td><td>74.4</td><td>75.15.3</td><td>93.40.5</td><td>90.31.7</td><td>38.86.1</td></tr><tr><td>GPT-2 examples → BABBAGE</td><td>65.8</td><td>42.610.0</td><td>93.00.4</td><td>91.12.9</td><td>36.68.4</td></tr><tr><td>GPT-2 examples → BABBAGE (C)</td><td>73.6</td><td>73.97.3</td><td>93.10.5</td><td>91.11.8</td><td>36.22.6</td></tr><tr><td>CURIE</td><td>74.2</td><td>76.7</td><td>94.7</td><td>93.8</td><td>31.4</td></tr><tr><td>CURIE (C)</td><td>76.3</td><td>69.8</td><td>94.8</td><td>93.4</td><td>47.0</td></tr><tr><td>GPT-2 policy → CURIE</td><td>76.0</td><td>81.2</td><td>95.7</td><td>96.0</td><td>31.0</td></tr><tr><td>GPT-2 policy → CURIE (C)</td><td>75.4</td><td>75.8</td><td>95.4</td><td>93.0</td><td>38.2</td></tr><tr><td>GPT-2 examples → CURIE</td><td>74.4</td><td>77.7</td><td>93.8</td><td>94.3</td><td>31.8</td></tr><tr><td>GPT-2 examples → CURIE (C)</td><td>77.3</td><td>79.8</td><td>93.1</td><td>94.6</td><td>41.8</td></tr></table>
|
| 381 |
+
|
| 382 |
+
Table 7: Transfer of policies and examples learned on GPT-2 to various GPT-3 models across 5 random sets of 4-shot demonstration examples. C indicates calibration. $95\%$ confidence intervals are reported as subscripts. Due to resource constraints, we limit experiments with CURIE to 1 random set.
|
| 383 |
+
|
| 384 |
+

|
| 385 |
+
(a) AGNews
|
| 386 |
+
|
| 387 |
+

|
| 388 |
+
(b) SST-2
|
| 389 |
+
Figure 6: Average coefficients of linear policies trained on AGNews and SST-2 across 5 runs. Error bars show the standard deviation.
|
| 390 |
+
|
| 391 |
+

|
| 392 |
+
(a) AGNews $(r = -0.01)$
|
| 393 |
+
|
| 394 |
+

|
| 395 |
+
(b) Amazon $(r = -0.23^{*})$
|
| 396 |
+
|
| 397 |
+

|
| 398 |
+
(c) SST-2 $(r = -0.08)$
|
| 399 |
+
|
| 400 |
+

|
| 401 |
+
(d) TREC $(r = -0.00)$
|
| 402 |
+
Figure 7: Correlation between the length (number of words) of the demonstration prompt and in-context learning performance, across 100 randomly sampled sets of 4-shot demonstrations. * indicates a p-value $< 0.05$ .
|
activeexampleselectionforincontextlearning/images.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:154b223ff664b3aa73227f29d1d3752541eaa9a396cb1dae444ab24574642a62
size 585056

activeexampleselectionforincontextlearning/layout.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7c0e34e02e0cfab8c7f53322d80d14ca6577f47763885fc9a9fff3efc8b27ed0
size 456221

adamixmixtureofadaptationsforparameterefficientmodeltuning/4d2f7f55-bfd8-4bb9-b067-0fabeb8655d7_content_list.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:88eba0b943890474e18fdd9aa11f6a118919ed6c1ac8bfff9995d8838df7f4bc
size 109872

adamixmixtureofadaptationsforparameterefficientmodeltuning/4d2f7f55-bfd8-4bb9-b067-0fabeb8655d7_model.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:88622679c96c810a3f4edfc026c0540da550341c04e7009d87b20de61ce26fbe
size 130955

adamixmixtureofadaptationsforparameterefficientmodeltuning/4d2f7f55-bfd8-4bb9-b067-0fabeb8655d7_origin.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:71538fee8d64d4540bfae065ab58f909e816fb5f68b63994b7ef5d2fa0048aac
size 912766

adamixmixtureofadaptationsforparameterefficientmodeltuning/full.md ADDED
@@ -0,0 +1,434 @@
| 1 |
+
# AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning
|
| 2 |
+
|
| 3 |
+
Yaqing Wang*
|
| 4 |
+
|
| 5 |
+
Purdue University
|
| 6 |
+
|
| 7 |
+
wang5075@purdue.edu
|
| 8 |
+
|
| 9 |
+
Sahaj Agarwal
|
| 10 |
+
|
| 11 |
+
Microsoft
|
| 12 |
+
|
| 13 |
+
sahagar@microsoft.com
|
| 14 |
+
|
| 15 |
+
Subhabrata Mukherjee†
|
| 16 |
+
|
| 17 |
+
Microsoft Research
|
| 18 |
+
|
| 19 |
+
submukhe@microsoft.com
|
| 20 |
+
|
| 21 |
+
Xiaodong Liu
|
| 22 |
+
|
| 23 |
+
Microsoft Research
|
| 24 |
+
|
| 25 |
+
Jing Gao
|
| 26 |
+
|
| 27 |
+
Purdue University
|
| 28 |
+
|
| 29 |
+
Ahmed Hassan Awadallah
|
| 30 |
+
|
| 31 |
+
Microsoft Research
|
| 32 |
+
|
| 33 |
+
Jianfeng Gao
|
| 34 |
+
|
| 35 |
+
Microsoft Research
|
| 36 |
+
|
| 37 |
+
# Abstract
|
| 38 |
+
|
| 39 |
+
Standard fine-tuning of large pre-trained language models (PLMs) for downstream tasks requires updating hundreds of millions to billions of parameters, and storing a large copy of the PLM weights for every task resulting in increased cost for storing, sharing and serving the models. To address this, parameter-efficient fine-tuning (PEFT) techniques were introduced where small trainable components are injected in the PLM and updated during fine-tuning. We propose AdaMix as a general PEFT method that tunes a mixture of adaptation modules – given the underlying PEFT method of choice – introduced in each Transformer layer while keeping most of the PLM weights frozen. For instance, AdaMix can leverage a mixture of adapters like Houlsby (Houlsby et al., 2019) or a mixture of low rank decomposition matrices like LoRA (Hu et al., 2021) to improve downstream task performance over the corresponding PEFT methods for fully supervised and few-shot NLU and NLG tasks. Further, we design AdaMix such that it matches the same computational cost and the number of tunable parameters as the underlying PEFT method. By only tuning $0.1 - 0.2\%$ of PLM parameters, we show that AdaMix outperforms SOTA parameter-efficient fine-tuning and full model fine-tuning for both NLU and NLG tasks. Code and models are made available at https://aka.ms/AdaMix.
|
| 40 |
+
|
| 41 |
+
# 1 Introduction
|
| 42 |
+
|
| 43 |
+
Standard fine-tuning of large pre-trained language models (PLMs) (Devlin et al., 2019; Liu et al., 2019; Brown et al., 2020; Raffel et al., 2019) to downstream tasks requires updating all model parameters. Given the ever-increasing size of PLMs (e.g., 175 billion parameters for GPT-3 (Brown et al., 2020) and 530 billion parameters for MT-NLG (Smith et al., 2022)), even the fine-tuning step becomes expensive as it requires storing a full copy
|
| 44 |
+
|
| 45 |
+

|
| 46 |
+
Figure 1: Performance of different parameter-efficient fine-tuning methods on GLUE development set with RoBERTa-large encoder following a setup similar to (Houlsby et al., 2019) for fair comparison. We report the performance of Pfeiffer (Pfeiffer et al., 2021), Houlsby (Houlsby et al., 2019) and LoRA (Hu et al., 2021) with their default number of fine-tuned parameters as well as the number of fine-tuned parameters used in AdaMix with a mixture of adaptations. Red dash shows the performance of full model fine-tuning.
|
| 47 |
+
|
| 48 |
+
of model weights for every task. To address these challenges, recent works have developed parameter-efficient fine-tuning (PEFT) techniques. These approaches typically underperform standard full model fine-tuning, but significantly reduce the number of trainable parameters. There are many varieties of PEFT methods, including prefix-tuning (Li and Liang, 2021) and prompt-tuning (Lester et al., 2021) to condition frozen language models via natural language task descriptions, low dimensional projections using adapters (Houlsby et al., 2019; Pfeiffer et al., 2020, 2021) and more recently using low-rank approximation (Hu et al., 2021). Figure 1 shows the performance of some popular PEFT methods with varying number of tunable parameters. We observe a significant performance gap with respect to full model tuning where all PLM parameters are updated.
|
| 49 |
+
|
| 50 |
+
In this paper, we present AdaMix, a mixture of adaptation modules approach, and show that it outperforms SOTA PEFT methods and also full model fine-tuning while tuning only $0.1 - 0.2\%$ of PLM parameters.
|
| 51 |
+
|
| 52 |
+
In contrast to traditional PEFT methods that use a single adaptation module in every Transformer layer, AdaMix uses several adaptation modules that learn multiple views of the given task. In order to design this mixture of adaptations, we take inspiration from sparsely-activated mixture-of-experts (MoE) models. In traditional dense models (e.g., BERT (Devlin et al., 2019), GPT-3 (Brown et al., 2020)), all model weights are activated for every input example. MoE models induce sparsity by activating only a subset of the model weights for each incoming input.
|
| 53 |
+
|
| 54 |
+
Consider adapters (Houlsby et al., 2019), one of the most popular PEFT techniques, to illustrate our method. A feedforward layer (FFN) is introduced to down-project the hidden representation to a low dimension $d$ (also called the bottleneck dimension) followed by another up-project FFN to match the dimensionality of the next layer. Instead of using a single adapter, we introduce multiple project-up and project-down FFNs in each Transformer layer. We route input examples to one of the project-up and one of the project-down FFN's resulting in the same amount of computational cost (FLOPs) as that of using a single adapter. For methods like LoRA (Hu et al., 2021), that decomposes the gradient of pre-trained weights into low-rank matrices $(A$ and $B)$ , we introduce multiple low-rank decompositions and route the input examples to them similar to adapters.
|
| 55 |
+
|
| 56 |
+
We discuss different routing mechanisms and show that stochastic routing yields good performance while eliminating the need for introducing any additional parameters for module selection. To alleviate training instability that may arise from the randomness in selecting different adaptation modules in different training steps, we leverage consistency regularization and the sharing of adaptation modules during stochastic routing.
|
| 57 |
+
|
| 58 |
+
The introduction of multiple adaptation modules results in an increased number of adaptation parameters. This does not increase computational cost but increases storage cost. To address this, we develop a merging mechanism to combine weights from different adaptation modules into a single module in each Transformer layer. This allows us to keep the number of adaptation parameters the same as that of a single adaptation module. Our merging mechanism is inspired by model weight averaging in model soups (Wortsman et al., 2022) and MultiBERTs (Sellam et al., 2022). Weight averaging
|
| 59 |
+
|
| 60 |
+
of models with different random initialization has been shown to improve model performance in recent works (Matena and Raffel, 2021; Neyshabur et al., 2020; Frankle et al., 2020) that show the optimized models to lie in the same basin of error landscape. While the above works are geared towards fine-tuning independent models, we extend this idea to parameter-efficient fine-tuning with randomly initialized adaptation modules and a frozen language model.
|
| 61 |
+
|
| 62 |
+
Overall, our work makes the following contributions:
|
| 63 |
+
|
| 64 |
+
(a) We develop a new method AdaMix as a mixture of adaptations for parameter-efficient fine-tuning (PEFT) of large language models. Given any PEFT method of choice like adapters and low-rank decompositions, AdaMix improves downstream task performance over the underlying PEFT method.
|
| 65 |
+
(b) AdaMix is trained with stochastic routing and adaptation module merging to retain the same computational cost (e.g., FLOPs, #tunable adaptation parameters) and benefits of the underlying PEFT method. To better understand how AdaMix works, we demonstrate its strong connections to Bayesian Neural Networks and model ensembling.
(c) By tuning only $0.1 - 0.2\%$ of a pre-trained language model's parameters, AdaMix is the first PEFT method to outperform full model fine-tuning methods for all NLU tasks on GLUE, and outperforms other competing methods for NLG and few-shot NLU tasks.
Practical benefits of PEFT methods. The most significant benefit of PEFT methods comes from the reduction in memory and storage usage. For a Transformer, the VRAM consumption can be significantly reduced as we do not need to keep track of optimizer states for the frozen parameters. PEFT methods also allow multiple tasks to share the same copy of the full (frozen) PLM. Hence, the storage cost for introducing a new task can be reduced by up to 444x (from 355MB to 0.8MB with RoBERTa-large encoder in our setting).
We present background on Mixture-of-Experts (MoE) and adapters in Section A of Appendix.
# 2 Mixture-of-Adaptations
Consider a set of $M$ adaptation modules injected in each Transformer layer, where $A_{ij} : i \in \{1 \cdots L\}, j \in \{1 \cdots M\}$ represents the $j^{th}$ adaptation module in the $i^{th}$ Transformer layer. For illustration, we will consider adapters (Houlsby et al., 2019)

Figure 2: Mixture-of-Adaptations (AdaMix) with adapters (Houlsby et al., 2019) as the underlying PEFT mechanism. For illustration, we show $M = 4$ adaptation modules consisting of feedforward-up (FFN_U) and feedforward-down (FFN_D) projection matrices. The block shown for one Transformer layer is repeated across all layers. AdaMix stochastically routes instances from an input batch through randomly selected adaptation modules, matching the FLOPs of a single module, with consistency regularization and parameter sharing. Adaptation merging (Figure 3) collapses the multiple modules so that the parameters match those of a single module in each layer.
as the underlying parameter-efficient fine-tuning (PEFT) mechanism as a running example. Similar principles can be used for other PEFT mechanisms like LoRA (Hu et al., 2021) for low-rank decomposition, as we show in experiments.
We adopt the popularly used Transformer architecture (Vaswani et al., 2017) consisting of $L$ repeated Transformer blocks, where each block consists of a self-attention sub-layer, a fully connected feed-forward network (FFN) and residual connections around the sub-layers followed by layer normalization. Each adaptation module $A_{ij}$ corresponding to the adapters (Houlsby et al., 2019) consists of a feedforward up $\mathcal{W}_{ij}^{up}$ and a feedforward down $\mathcal{W}_{ij}^{down}$ projection matrices.
# 2.1 Routing Policy
Recent work like THOR (Zuo et al., 2021) has demonstrated that stochastic routing policies like random routing work as well as classical routing mechanisms like Switch routing (Fedus et al., 2021), with the following benefits. Since input examples are randomly routed to different experts, there is no need for additional load balancing, as each expert has an equal opportunity of being activated, which simplifies the framework. Further, there are no added parameters, and therefore no additional computation, at the Switch layer for expert selection. The latter is particularly important in our parameter-efficient fine-tuning setting to keep the parameters and FLOPs the same as those of a single adaptation module. To analyze how AdaMix works, we demonstrate connections from stochastic routing and model weight averaging to Bayesian Neural Networks and model ensembling in Section 2.5.
In the stochastic routing policy for AdaMix with adapters, at any training step, we randomly select a pair of feedforward up and feedforward down projection matrices in the $i^{th}$ Transformer layer as $A_{i} = \{\mathcal{W}_{ij}^{up},\mathcal{W}_{ik}^{down}\}$ and $B_{i} = \{\mathcal{W}_{ij^{\prime}}^{up},\mathcal{W}_{ik^{\prime}}^{down}\}$ respectively. Given this selection of adaptation modules $A_{i}$ and $B_{i}$ in each Transformer layer in every step, all the inputs in a given batch are processed through the same set of modules. Given an input representation $x$ in a given Transformer layer, the above pair of modules perform the following transformations:
$$
x \leftarrow x + f(x \cdot \mathcal{W}^{\text{down}}) \cdot \mathcal{W}^{\text{up}} \tag{1}
$$
Such stochastic routing enables adaptation modules to learn different transformations during training and obtain multiple views of the task. However, this also creates a challenge on which modules to use during inference due to random routing protocol during training. We address this challenge with the following two techniques that further allow us to collapse adaptation modules and obtain the same computational cost (FLOPs, #tunable adaptation parameters) as that of a single module.

Figure 3: Stochastic routing during training activates different adaptation modules to obtain multiple views of the task while matching the FLOPs of a single module. Merging the weights of the adaptation modules $\left(\{\mathrm{FFN\_U}_i\}, \{\mathrm{FFN\_D}_i\}: i \in \{1 \cdots 4\}\right)$ by averaging preserves the improved performance while matching the parameters of a single module.
# 2.2 Consistency regularization
Consider $\mathcal{A} = \{A_{i=1}^{L}\}$ and $\mathcal{B} = \{B_{i=1}^{L}\}$ to be the sets of adaptation modules (e.g., projection matrices) activated during two stochastic forward passes through the network for an input $x$ across $L$ layers of the Transformer. The objective of consistency regularization is to enable the adaptation modules to share information and prevent divergence. To this end, we add the following consistency loss as a regularizer to the task-specific optimization loss:
$$
\mathcal{L} = -\Big(\sum_{c=1}^{C} \mathcal{I}(x, c) \log \operatorname{softmax}\big(z_{c}^{\mathcal{A}}(x)\big) + \sum_{c=1}^{C} \mathcal{I}(x, c) \log \operatorname{softmax}\big(z_{c}^{\mathcal{B}}(x)\big)\Big) + \frac{1}{2}\Big(\mathcal{KL}\big(z_{(\cdot)}^{\mathcal{A}}(x) \,\big\|\, z_{(\cdot)}^{\mathcal{B}}(x)\big) + \mathcal{KL}\big(z_{(\cdot)}^{\mathcal{B}}(x) \,\big\|\, z_{(\cdot)}^{\mathcal{A}}(x)\big)\Big) \tag{2}
$$
where $\mathcal{I}(x,c)$ is a binary indicator (0 or 1) if class label $c$ is the correct classification for $x$ and $z_{(\cdot)}^{\mathcal{A}}(x)$ and $z_{(\cdot)}^{\mathcal{B}}(x)$ are the predicted logits while routing through two sets of adaptation modules $\mathcal{A}$ and $\mathcal{B}$ respectively with $\mathcal{KL}$ denoting the Kullback-Leibler divergence. $x$ is the input representation from the PLM with frozen parameters and only the parameters of modules $\{\mathcal{W}^{up},\mathcal{W}^{down}\}$ are updated during training.
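As a rough sketch of how this regularizer can be computed (assuming two stochastic forward passes have already produced logits for the same batch; the exact loss weighting in the released code may differ):

```python
# Minimal sketch of the consistency-regularized loss in Eq. (2).
import torch
import torch.nn.functional as F


def adamix_loss(z_a: torch.Tensor, z_b: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Task (cross-entropy) loss for both routed passes.
    ce = F.cross_entropy(z_a, labels) + F.cross_entropy(z_b, labels)
    # Symmetric KL divergence between the two predictive distributions.
    log_p_a = F.log_softmax(z_a, dim=-1)
    log_p_b = F.log_softmax(z_b, dim=-1)
    kl = (
        F.kl_div(log_p_a, log_p_b, log_target=True, reduction="batchmean")
        + F.kl_div(log_p_b, log_p_a, log_target=True, reduction="batchmean")
    )
    return ce + 0.5 * kl


logits_a = torch.randn(8, 3)            # pass through modules A
logits_b = torch.randn(8, 3)            # pass through modules B
labels = torch.randint(0, 3, (8,))
loss = adamix_loss(logits_a, logits_b, labels)
```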
# 2.3 Adaptation module merging
While the above regularization mitigates the inconsistency from random module selection during inference, it still results in increased serving cost to host several adaptation modules. Prior work on fine-tuning language models for downstream tasks has shown that averaging the weights of models fine-tuned with different random seeds outperforms a single fine-tuned model. Recent work (Wortsman et al., 2022) has also shown that differently fine-tuned models from the same initialization lie in the same error basin, motivating the use of weight aggregation for robust task summarization. We adopt and extend these techniques from language model fine-tuning to our parameter-efficient training of multi-view adaptation modules.
In contrast to the aforementioned techniques like stochastic routing and consistency regularization that are applied at the training phase, we employ adaptation merging only during inference. Given a set of adaptation modules, $\mathcal{W}_{ij}^{up}$ and $\mathcal{W}_{ik}^{down}$ for $i\in \{1\dots L\}$ and $\{j,k\} \in \{1\dots M\}$ , we simply average the weights of all the corresponding modules (e.g., project-up or project-down matrices) in every Transformer layer to collapse to a single module $\{\mathcal{W}_i^{\prime up},\mathcal{W}_i^{\prime down}\}$ , where:
$$
\mathcal{W}_{i}^{\prime\, up} \leftarrow \frac{1}{M} \sum_{j=1}^{M} \mathcal{W}_{ij}^{up} \qquad \mathcal{W}_{i}^{\prime\, down} \leftarrow \frac{1}{M} \sum_{j=1}^{M} \mathcal{W}_{ij}^{down} \tag{3}
$$
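A minimal sketch of this merging step, assuming each layer keeps its $M$ project-up (or project-down) matrices in a list of linear modules; the helper name `merge_adapters` is ours, not from the released implementation:

```python
# Collapse M adapter projections into a single one by weight averaging (Eq. 3).
import copy
import torch
import torch.nn as nn


@torch.no_grad()
def merge_adapters(modules: nn.ModuleList) -> nn.Linear:
    """Average the weights (and biases) of M linear modules into a single module."""
    merged = copy.deepcopy(modules[0])
    merged.weight.copy_(torch.stack([m.weight for m in modules]).mean(dim=0))
    merged.bias.copy_(torch.stack([m.bias for m in modules]).mean(dim=0))
    return merged


# At inference time, only the merged adapter per layer needs to be stored and served.
mixture_up = nn.ModuleList(nn.Linear(16, 768) for _ in range(4))
single_up = merge_adapters(mixture_up)
```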
# 2.4 Adaptation module sharing
While stochastic routing to multi-view adaptation modules increases the model capacity, it can also hurt downstream tasks with smaller amounts of labeled data, since several sets of adaptation modules have to be tuned. To address this challenge, we share some of the adaptation modules (e.g., the project-down or project-up operations) to improve training efficiency. In the standard setting for adapters, we share only the feedforward projection-up matrices, i.e., $\mathcal{W}_{ij}^{up} = \mathcal{W}_i^{up}$. We investigate these design choices via ablation studies in Section 3.3 and Section C of the Appendix.
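A small sketch of this sharing design under the standard AdaMix setting (one shared project-up matrix and $M$ separate project-down matrices); names and dimensions are illustrative:

```python
# Adapter mixture with a single shared project-up matrix (Section 2.4).
import random
import torch
import torch.nn as nn


class SharedUpAdapterMixture(nn.Module):
    def __init__(self, hidden_dim: int, bottleneck_dim: int, num_modules: int = 4):
        super().__init__()
        self.down = nn.ModuleList(
            nn.Linear(hidden_dim, bottleneck_dim) for _ in range(num_modules)
        )
        self.up = nn.Linear(bottleneck_dim, hidden_dim)  # shared across all modules
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        j = random.randrange(len(self.down))  # stochastic choice of project-down
        return x + self.up(self.act(self.down[j](x)))
```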
# 2.5 Connection to Bayesian Neural Networks and Model Ensembling
A Bayesian Neural Network (BNN) (Gal and Ghahramani, 2015) replaces a deterministic model's weight parameters by a distribution over the parameters. For inference, a BNN averages over all possible weights, also referred to as marginalization. Consider $f^{\mathcal{W}}(x) \in \mathbb{R}^d$ to be the $d$-dimensional output of such a neural network, where the model likelihood is given by $p(y|f^{\mathcal{W}}(x))$. In our setting, $\mathcal{W} = \langle \mathcal{W}^{up}, \mathcal{W}^{down} \rangle$, along with the frozen PLM parameters that are dropped from the notation for simplicity. For classification, we can further apply a softmax likelihood to the output to obtain $P(y = c|x,\mathcal{W}) = \operatorname{softmax}(f^{\mathcal{W}}(x))$. Given an instance $x$, the probability distribution over the classes is given by marginalization over the posterior distribution as $p(y = c|x) = \int_{\mathcal{W}} p(y = c|f^{\mathcal{W}}(x))\, p(\mathcal{W}|X,Y)\, d\mathcal{W}$.
This requires averaging over all possible model weights, which is intractable in practice. Therefore, several approximation methods have been developed based on variational inference methods and stochastic regularization techniques using dropouts. In this work, we leverage another stochastic regularization in the form of random routing. Here, the objective is to find a surrogate distribution $q_{\theta}(w)$ in a tractable family of distributions that can replace the true model posterior that is hard to compute. The ideal surrogate is identified by minimizing the Kullback-Leibler (KL) divergence between the candidate and the true posterior.
Consider $q_{\theta}(\mathcal{W})$ to be the stochastic routing policy which samples $T$ masked model weights $\{\widetilde{\mathcal{W}}_t\}_{t=1}^T \sim q_{\theta}(\mathcal{W})$ . For classification tasks, the approximate posterior can be now obtained by Monte-Carlo integration (Gal et al., 2017) as:
$$
\begin{aligned}
p(y = c \mid x) &\approx \int p\big(y = c \mid f^{\mathcal{W}}(x)\big)\, q_{\theta}(\mathcal{W})\, d\mathcal{W} \\
&\approx \frac{1}{T} \sum_{t=1}^{T} p\big(y = c \mid f^{\widetilde{\mathcal{W}}_{t}}(x)\big) \\
&= \frac{1}{T} \sum_{t=1}^{T} \operatorname{softmax}\big(f^{\widetilde{\mathcal{W}}_{t}}(x)\big)
\end{aligned} \tag{4}
$$
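A sketch of this Monte-Carlo approximation, assuming `model` is any classifier whose forward pass routes stochastically (for example, a mixture-of-adapters layer with random routing) and returns class logits:

```python
# Average the softmax outputs of T stochastic routing passes (Eq. 4).
import torch
import torch.nn.functional as F


@torch.no_grad()
def mc_predict(model, x: torch.Tensor, num_passes: int = 4) -> torch.Tensor:
    probs = [F.softmax(model(x), dim=-1) for _ in range(num_passes)]
    return torch.stack(probs).mean(dim=0)
```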
However, computing the approximate posterior above in our setting requires storing all the stochastic model weights $\mathcal{W}_t(x)$ which increases the serving cost during inference. To reduce this cost, we resort to the other technique for weight averaging via adaptation module merging during inference.
Let $\mathcal{L}_{\mathcal{W}}^{AM} = \mathbb{E}_{x,y}\,\mathcal{L}\big(\operatorname{softmax}(f^{\widetilde{\mathcal{W}}}(\boldsymbol{x})), \boldsymbol{y}\big)$ denote the expected loss after merging the stochastic adaptation weights as $\widetilde{\mathcal{W}} = \frac{1}{T}\sum_{t}\widetilde{\mathcal{W}}_{t}$ (from Equation 3), with $\mathcal{L}$ denoting the cross-entropy loss. Let $\mathcal{L}_{\mathcal{W}}^{Ens} = \mathbb{E}_{x,y}\,\mathcal{L}\big(\frac{1}{T}\sum_{t=1}^{T}\operatorname{softmax}(f^{\widetilde{\mathcal{W}}_t}(\boldsymbol{x})), \boldsymbol{y}\big)$ denote the expected loss from logit-level stochastic model ensembling (from Equation 4).
Prior work (Wortsman et al., 2022) shows that averaging the weights of multiple models fine-tuned with different hyper-parameters improves model performance. They analytically show the similarity in loss between weight averaging ($\mathcal{L}_{\mathcal{W}}^{AM}$ in our setting) and logit ensembling ($\mathcal{L}_{\mathcal{W}}^{Ens}$ in our setting) as a function of the flatness of the loss and the confidence of the predictions. While that analysis is geared towards averaging multiple independently fine-tuned model weights, we can apply a similar analysis in our setting to averaging multiple stochastically obtained adaptation weights to obtain a favorable loss $\mathcal{L}_{\mathcal{W}}^{AM}$. Further, adaptation merging reduces the serving cost during inference, since we need to retain only one copy of the merged weights, as opposed to logit ensembling, which requires copies of all the adaptation weights.
# 3 Experiments
# 3.1 Experimental Setup
Dataset. We perform experiments on a wide range of tasks including eight natural language understanding (NLU) tasks in the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019) and three natural language generation (NLG) tasks, namely, E2E (Novikova et al., 2017), WebNLG (Gardent et al., 2017) and DART (Nan et al., 2020). For the NLU and NLG tasks, we follow the same setup as (Houlsby et al., 2019) and (Li and Liang, 2021; Hu et al., 2021), respectively.
Baselines. We compare AdaMix to full model fine-tuning and several state-of-the-art parameter-efficient fine-tuning (PEFT) methods, namely, Pfeiffer Adapter (Pfeiffer et al., 2021), Houlsby Adapter (Houlsby et al., 2019), BitFit (Zaken et al., 2021), Prefix-tuning (Li and Liang, 2021), UNIPELT (Mao et al., 2021) and LoRA (Hu et al., 2021). We use BERT-base (Devlin et al., 2019) and RoBERTa-large (Liu et al., 2019) as encoders for NLU tasks (results in Table 1 and Table 2), and GPT-2 (Brown et al., 2020) for NLG tasks (results in Table 3).
AdaMix implementation details. We implement AdaMix in PyTorch and use Tesla V100 GPUs for experiments, with detailed hyper-parameter configurations presented in Section E of the Appendix. AdaMix with adapters uses an adapter dimension of 48 with the BERT-base encoder and 16 with the RoBERTa-large encoder, following the setup of (Hu et al., 2021; Mao et al., 2021) for fair comparison. AdaMix with LoRA uses rank $r = 4$ following the setup of (Hu et al., 2021) to keep the same number of adaptation parameters during inference. The number of adaptation modules in AdaMix is set to 4 for all tasks and encoders unless otherwise specified. The impact of the adapter dimension and the number of adaptation modules on NLU tasks is investigated in Tables 9 and 10. For most experiments and ablation analyses, we report results from AdaMix with adapters for NLU tasks. To demonstrate the generalizability of our framework, we report results from AdaMix with LoRA (Hu et al., 2021) as the underlying PEFT mechanism for NLG tasks.
<table><tr><td>Model</td><td>#Param.</td><td>MNLI Acc</td><td>QNLI Acc</td><td>SST2 Acc</td><td>QQP Acc</td><td>MRPC Acc</td><td>CoLA Mcc</td><td>RTE Acc</td><td>STS-B Pearson</td><td>Avg.</td></tr><tr><td>Full Fine-tuning†</td><td>355.0M</td><td>90.2</td><td>94.7</td><td>96.4</td><td>92.2</td><td>90.9</td><td>68.0</td><td>86.6</td><td>92.4</td><td>88.9</td></tr><tr><td>Pfeiffer Adapter†</td><td>3.0M</td><td>90.2</td><td>94.8</td><td>96.1</td><td>91.9</td><td>90.2</td><td>68.3</td><td>83.8</td><td>92.1</td><td>88.4</td></tr><tr><td>Pfeiffer Adapter†</td><td>0.8M</td><td>90.5</td><td>94.8</td><td>96.6</td><td>91.7</td><td>89.7</td><td>67.8</td><td>80.1</td><td>91.9</td><td>87.9</td></tr><tr><td>Houlsby Adapter†</td><td>6.0M</td><td>89.9</td><td>94.7</td><td>96.2</td><td>92.1</td><td>88.7</td><td>66.5</td><td>83.4</td><td>91.0</td><td>87.8</td></tr><tr><td>Houlsby Adapter†</td><td>0.8M</td><td>90.3</td><td>94.7</td><td>96.3</td><td>91.5</td><td>87.7</td><td>66.3</td><td>72.9</td><td>91.5</td><td>86.4</td></tr><tr><td>LoRA†</td><td>0.8M</td><td>90.6</td><td>94.8</td><td>96.2</td><td>91.6</td><td>90.2</td><td>68.2</td><td>85.2</td><td>92.3</td><td>88.6</td></tr><tr><td>AdaMix Adapter</td><td>0.8M</td><td>90.9</td><td>95.4</td><td>97.1</td><td>92.3</td><td>91.9</td><td>70.2</td><td>89.2</td><td>92.4</td><td>89.9</td></tr></table>
Table 1: Results for NLU tasks on GLUE development set with RoBERTa-large encoder. The best result on each task is in bold and “-” denotes missing measure. AdaMix with a mixture of adapters outperforms all competing methods as well as fully fine-tuned large model with only $0.23\%$ tunable parameters.† denotes results reported from (Hu et al., 2021). Mcc refers to Matthews correlation coefficient, and Pearson refers to Pearson correlation. #Param. denotes the number of tunable adaptation parameters used during inference.
lying PEFT mechanism for NLG tasks.
# 3.2 Key Results
# 3.2.1 NLU Tasks
Tables 1 and 2 show the performance comparison among PEFT models with RoBERTa-large and BERT-base encoders respectively. Fully fine-tuned RoBERTa-large and BERT-base provide the ceiling performance. We observe AdaMix with a mixture-of-adapters to significantly outperform other state-of-the-art baselines on most tasks with different encoders. AdaMix with adapters is the only PEFT method which outperforms full model fine-tuning on all the tasks and on average score.
<table><tr><td>Model</td><td>#Param.</td><td>Avg.</td></tr><tr><td>Full Fine-tuning†</td><td>110M</td><td>82.7</td></tr><tr><td>Houlsby Adapter†</td><td>0.9M</td><td>83.0</td></tr><tr><td>BitFit$^\diamond$</td><td>0.1M</td><td>82.3</td></tr><tr><td>Prefix-tuning†</td><td>0.2M</td><td>82.1</td></tr><tr><td>LoRA†</td><td>0.3M</td><td>82.2</td></tr><tr><td>UNIPELT (AP)†</td><td>1.1M</td><td>83.1</td></tr><tr><td>UNIPELT (APL)†</td><td>1.4M</td><td>83.5</td></tr><tr><td>AdaMix Adapter</td><td>0.9M</td><td>84.5</td></tr></table>
Table 2: Results for NLU tasks on GLUE development set with BERT-base encoder and AdaMix with a mixture-of-adapters. The best result on each task is in bold. $\dagger$ and $\diamond$ denote results reported from (Mao et al., 2021; Zaken et al., 2021). Detailed task-specific results are reported in Table 13 of Appendix. #Param. refers to the number of tunable adaptation parameters during inference.
# 3.2.2 NLG Tasks
AdaMix leverages a mixture of adaptations to improve over the underlying PEFT method, as demonstrated in Table 3 for E2E NLG: AdaMix with LoRA and AdaMix with adapters outperform LoRA (Hu et al., 2021) and adapters (Houlsby et al., 2019), respectively. We report results on DART and WebNLG in Tables 4 and 5 in the Appendix.
# 3.2.3 Few-shot NLU
In contrast to the fully supervised setting in the above experiments, we also perform few-shot experiments on six NLU tasks following the same setup (e.g., shots, train and test splits) and evaluation as in (Wang et al., 2021). The detailed experimental configuration is presented in Section B of the Appendix. AdaMix uses a mixture of adapters with prompt-based fine-tuning (Gao et al., 2021).
Table 6 shows the performance comparison among different PEFT methods with $|\mathcal{K}| = 30$ labeled examples and RoBERTa-large as the frozen encoder. We observe a significant performance gap between most PEFT methods and full-model prompt-based fine-tuning, i.e., with all model parameters being updated. AdaMix with adapters outperforms full model tuning for few-shot NLU, similar to the fully supervised setting. Note that AdaMix and LiST (Wang et al., 2021) use a similar adapter design with prompt-based fine-tuning.
# 3.3 Ablation Study
We perform all the ablation analysis on AdaMix with adapters for parameter-efficient fine-tuning.
Analysis of adaptation merging. In this ablation study, we do not merge adaptation modules and consider two different routing strategies at inference time: (a) randomly routing input to any adaptation module, and (b) fixed routing where we route all the input to the first adaptation module in AdaMix. From Table 7, we observe AdaMix with adaptation merging to perform better than any of the other variants without the merging mechanism.
<table><tr><td>Model</td><td>#Param.</td><td>BLEU</td><td>NIST</td><td>MET</td><td>ROUGE-L</td><td>CIDEr</td></tr><tr><td>Full Fine-tuning†</td><td>354.92M</td><td>68.2</td><td>8.62</td><td>46.2</td><td>71.0</td><td>2.47</td></tr><tr><td>Lin AdapterL†</td><td>0.37M</td><td>66.3</td><td>8.41</td><td>45.0</td><td>69.8</td><td>2.40</td></tr><tr><td>Lin Adapter†</td><td>11.09M</td><td>68.9</td><td>8.71</td><td>46.1</td><td>71.3</td><td>2.47</td></tr><tr><td>Houlsby Adapter†</td><td>11.09M</td><td>67.3</td><td>8.50</td><td>46.0</td><td>70.7</td><td>2.44</td></tr><tr><td>FTTop2†</td><td>25.19M</td><td>68.1</td><td>8.59</td><td>46.0</td><td>70.8</td><td>2.41</td></tr><tr><td>PreLayer†</td><td>0.35M</td><td>69.7</td><td>8.81</td><td>46.1</td><td>71.4</td><td>2.49</td></tr><tr><td>LoRA†</td><td>0.35M</td><td>70.4</td><td>8.85</td><td>46.8</td><td>71.8</td><td>2.53</td></tr><tr><td>LoRA (repr.)</td><td>0.35M</td><td>69.8</td><td>8.77</td><td>46.6</td><td>71.8</td><td>2.52</td></tr><tr><td>AdaMix Adapter</td><td>0.42M</td><td>69.8</td><td>8.75</td><td>46.8</td><td>71.9</td><td>2.52</td></tr><tr><td>AdaMix LoRA</td><td>0.35M</td><td>71.0</td><td>8.89</td><td>46.8</td><td>72.2</td><td>2.54</td></tr></table>
Table 3: Results on E2E NLG Challenge with GPT-2 medium backbone. Best result on each task is in bold. We report AdaMix results with both adapters and LoRA as underlying PEFT method. AdaMix outperforms all competing methods as well as fully fine-tuned large model with only $0.1\%$ tunable parameters.† denotes results reported from (Hu et al., 2021) and repr. denotes reproduced results. #Param. denotes the number of tunable adaptation parameters used during inference. Results on DART and WebNLG presented in Tables 4 and 5 in Appendix.
<table><tr><td>Model</td><td>#Param.</td><td>BLEU</td></tr><tr><td>Full Fine-tuning†</td><td>354.92M</td><td>46.2</td></tr><tr><td>Lin AdapterL†</td><td>0.37M</td><td>42.4</td></tr><tr><td>Lin Adapter†</td><td>11.09M</td><td>45.2</td></tr><tr><td>FTTop2†</td><td>25.19M</td><td>41.0</td></tr><tr><td>PrefLayer†</td><td>0.35M</td><td>46.4</td></tr><tr><td>LoRA†</td><td>0.35M</td><td>47.1</td></tr><tr><td>LoRA (repr.)</td><td>0.35M</td><td>47.35</td></tr><tr><td>AdaMix Adapter</td><td>0.42M</td><td>47.72</td></tr><tr><td>AdaMix LoRA</td><td>0.35M</td><td>47.86</td></tr></table>
Notably, all of the AdaMix variants outperform full model tuning.
Moreover, Figure 4 shows that the performance of merging mechanism is consistently better than the average performance of random routing and comparable to the best performance of random routing.
Averaging weights vs. ensembling logits. We compare AdaMix with a variant using logit ensembling, denoted as AdaMix-Ensemble. To this end, we make four random routing passes through the network for every input ($T = 4$) and average the logits from the different passes as the final predicted logits. Inference time for this ensembling method is $4\times$ that of AdaMix. We run repeated experiments with three different seeds and report mean performance in Table 7.
Table 4: Results on DART with GPT-2 backbone encoder. Best result on each task is in bold. We report AdaMix results with both adapters and LoRA as underlying PEFT method. AdaMix outperforms all competing methods as well as fully fine-tuned large model with only $0.1\%$ tunable parameters. $^{\dagger}$ denotes results reported from (Hu et al., 2021) and repr. denotes reproduced results. #Param. denotes the number of tunable adaptation parameters used during inference.
<table><tr><td>Model</td><td>#Param.</td><td>BLEU</td></tr><tr><td>Full Fine-tuning†</td><td>354.92M</td><td>46.5</td></tr><tr><td>Lin AdapterL†</td><td>0.37M</td><td>50.2</td></tr><tr><td>Lin Adapter†</td><td>11.09M</td><td>54.9</td></tr><tr><td>FTTop2†</td><td>25.19M</td><td>36.0</td></tr><tr><td>Prefix†</td><td>0.35M</td><td>55.1</td></tr><tr><td>LoRA†</td><td>0.35M</td><td>55.3</td></tr><tr><td>LoRA (repr.)</td><td>0.35M</td><td>55.37</td></tr><tr><td>AdaMix Adapter</td><td>0.42M</td><td>54.94</td></tr><tr><td>AdaMix LoRA</td><td>0.35M</td><td>55.64</td></tr></table>
Table 5: Results on WebNLG with GPT-2 medium backbone. The results are based on all categories in the test set of WebNLG. Best result on each task is in bold. We report AdaMix results with both adapters and LoRA as underlying PEFT method. AdaMix outperforms all competing methods as well as fully fine-tuned large model with only $0.1\%$ tunable parameters. $^{\dagger}$ denotes results reported from (Hu et al., 2021) and repr. denotes reproduced results. #Param. denotes the number of tunable adaptation parameters used during inference.
We observe AdaMix with adaptation weight averaging to outperform logit ensembling, following our analysis ($\mathcal{L}_{\mathcal{W}}^{AM}$ vs. $\mathcal{L}_{\mathcal{W}}^{Ens}$) in Section 2.5.
Analysis of consistency regularization. We drop consistency regularization during training for ablation and demonstrate significant performance degradation in Table 8.
Analysis of adaptation module sharing. We remove adaptation module sharing in AdaMix for ablation and keep four different copies of project-down and four of project-up FFN layers. From Table 8 we observe that the performance gap between AdaMix and AdaMix w/o sharing increases as the dataset size decreases, demonstrating the importance of parameter sharing for low-resource tasks (e.g., RTE, MRPC).
<table><tr><td>Model</td><td>MNLI</td><td>RTE</td><td>QQP</td><td>SST2</td><td>Subj</td><td>MPQA</td><td>Avg.</td></tr><tr><td>Full Prompt Fine-tuning*</td><td>62.8 (2.6)</td><td>66.1 (2.2)</td><td>71.1 (1.5)</td><td>91.5 (1.0)</td><td>91.0 (0.5)</td><td>82.7 (3.8)</td><td>77.5</td></tr><tr><td>Head-only*</td><td>54.1 (1.1)</td><td>58.8 (2.6)</td><td>56.7 (4.5)</td><td>85.6 (1.0)</td><td>82.1 (2.5)</td><td>64.1 (2.1)</td><td>66.9</td></tr><tr><td>BitFit*</td><td>54.4 (1.3)</td><td>59.8 (3.5)</td><td>58.6 (4.4)</td><td>87.3 (1.1)</td><td>83.9 (2.3)</td><td>65.8 (1.8)</td><td>68.3</td></tr><tr><td>Prompt-tuning*</td><td>47.3 (0.2)</td><td>53.0 (0.6)</td><td>39.9 (0.7)</td><td>75.7 (1.7)</td><td>51.5 (1.4)</td><td>70.9 (2.4)</td><td>56.4</td></tr><tr><td>Houlsby Adapter*</td><td>35.7 (1.1)</td><td>51.0 (3.0)</td><td>62.8 (3.0)</td><td>57.0 (6.2)</td><td>83.2 (5.4)</td><td>57.2 (3.5)</td><td>57.8</td></tr><tr><td>LiST Adapter*</td><td>62.4 (1.7)</td><td>66.6 (3.9)</td><td>71.2 (2.6)</td><td>91.7 (1.0)</td><td>90.9 (1.3)</td><td>82.6 (2.0)</td><td>77.6</td></tr><tr><td>AdaMix Adapter</td><td>65.6 (2.6)</td><td>69.6 (3.4)</td><td>72.6 (1.2)</td><td>91.8 (1.1)</td><td>91.5 (2.0)</td><td>84.7 (1.6)</td><td>79.3</td></tr></table>
Table 6: Average performance and standard deviation of several parameter-efficient fine-tuning strategies based on RoBERTa-large with $|\mathcal{K}| = 30$ training labels. The best performance is shown in **bold**. Prompt-tuning, Head-only and BitFit tune $1M$ model parameters during inference. Houlsby Adapter, LiST Adapter and AdaMix Adapter tune $14M$ model parameters. * denotes that the results are taken from (Wang et al., 2021).
<table><tr><td>Model</td><td>#Param.</td><td>Avg.</td></tr><tr><td>Full Fine-tuning</td><td>110M</td><td>82.7</td></tr><tr><td>AdaMix w/ Merging</td><td>0.9M</td><td>84.5</td></tr><tr><td>AdaMix w/o Merging + RandomRouting</td><td>3.6M</td><td>83.3</td></tr><tr><td>AdaMix w/o Merging + FixedRouting</td><td>0.9M</td><td>83.7</td></tr><tr><td>AdaMix w/o Merging + Ensemble</td><td>3.6M</td><td>83.2</td></tr></table>

Figure 4: Violin plot of AdaMix-RandomRouting performance distribution with RoBERTa-large encoders. Red dot denotes the performance of AdaMix.
Table 7: AdaMix without adaptation merging and different routing and ensembling strategies. Average results are presented on GLUE development set with BERT-base encoder. Detailed task results in Table 14 of Appendix for BERT-base and RoBERTa-large encoders.
<table><tr><td>Model/# Train</td><td>MNLI 393k</td><td>QNLI 108k</td><td>SST2 67k</td><td>MRPC 3.7k</td><td>RTE 2.5k</td></tr><tr><td>Full Fine-tuning</td><td>90.2</td><td>94.7</td><td>96.4</td><td>90.9</td><td>86.6</td></tr><tr><td>AdaMix</td><td>90.9</td><td>95.4</td><td>97.1</td><td>91.9</td><td>89.2</td></tr><tr><td>w/o Consistency</td><td>90.7</td><td>95.0</td><td>97.1</td><td>91.4</td><td>84.8</td></tr><tr><td>w/o Sharing</td><td>90.9</td><td>95.0</td><td>96.4</td><td>90.4</td><td>84.1</td></tr></table>
This is further demonstrated in Figure 7 in the Appendix, which shows faster convergence and a lower training loss for AdaMix with sharing compared to that without, given the same number of training steps. We explore which adaptation module to share (project-up vs. project-down) in Table 11 in the Appendix, which shows similar results for both choices.
Impact of the number of adaptation modules. In this study, we vary the number of adaptation modules in AdaMix as 2, 4 and 8 during training. Table 9 shows diminishing returns on aggregate task performance with increasing number of modules. As we increase sparsity and the number of tunable parameters by increasing the number of adaptation modules, low-resource tasks like RTE and SST-2 – with limited amount of labeled data for fine-tuning – degrade in performance compared to high-resource tasks like MNLI and QNLI.
Table 8: Ablation study demonstrating the impact of consistency regularization and sharing in AdaMix.
<table><tr><td>Adaptation Module</td><td>MNLI 393k</td><td>QNLI 108k</td><td>SST2 67k</td><td>MRPC 3.7k</td><td>RTE 2.5k</td></tr><tr><td>2</td><td>90.9</td><td>95.2</td><td>96.8</td><td>90.9</td><td>87.4</td></tr><tr><td>4*</td><td>90.9</td><td>95.4</td><td>97.1</td><td>91.9</td><td>89.2</td></tr><tr><td>8</td><td>90.9</td><td>95.3</td><td>96.9</td><td>91.4</td><td>87.4</td></tr></table>
Table 9: Varying the number of adaptation modules in AdaMix with RoBERTa-large encoder. * denotes the number of modules used in AdaMix with adapters.
Impact of adapter bottleneck dimension. Table 10 shows the impact of bottleneck dimension of adapters with different encoders in AdaMix. The model performance improves with increase in the number of trainable parameters by increasing the bottleneck dimension with diminishing returns after a certain point.
# 4 Related Work
Parameter-efficient fine-tuning of PLMs. Recent works on parameter-efficient fine-tuning (PEFT) can be roughly grouped into two categories: (1) tuning a subset of the existing parameters, including head fine-tuning (Lee et al., 2019) and bias-term tuning (Zaken et al., 2021); and
<table><tr><td>Adapter Dimension</td><td>#Param.</td><td>MNLI 393k</td><td>QNLI 108k</td><td>SST2 67k</td><td>MRPC 3.7k</td><td>RTE 2.5k</td></tr><tr><td>8</td><td>0.4M</td><td>90.7</td><td>95.2</td><td>96.8</td><td>91.2</td><td>87.7</td></tr><tr><td>16*</td><td>0.8M</td><td>90.9</td><td>95.4</td><td>97.1</td><td>91.9</td><td>89.2</td></tr><tr><td>32</td><td>1.5M</td><td>91.0</td><td>95.4</td><td>96.8</td><td>90.7</td><td>89.2</td></tr></table>
Table 10: Varying the bottleneck dimension of adapters in AdaMix with RoBERTa-large encoder. * denotes the bottleneck dimension used in AdaMix with adapters. Results with BERT-base encoder in Table 12 in Appendix.
(2) tuning newly-introduced parameters, including adapters (Houlsby et al., 2019; Pfeiffer et al., 2020), prompt-tuning (Lester et al., 2021), prefix-tuning (Li and Liang, 2021) and low-rank adaptation (Hu et al., 2021). As opposed to prior works operating on a single adaptation module, AdaMix introduces a mixture of adaptation modules with stochastic routing during training and adaptation module merging during inference to keep the same computational cost as with a single module. Further, AdaMix can be used on top of any PEFT method to further boost its performance.
Mixture-of-Expert (MoE). Shazeer et al., 2017 introduced the MoE model with a single gating network with $Top - k$ routing and load balancing across experts. Fedus et al., 2021 propose initialization and training schemes for $Top - 1$ routing. Zuo et al., 2021 propose consistency regularization for random routing; Yang et al., 2021 propose $k$ Top-1 routing with expert-prototypes, and Roller et al., 2021; Lewis et al., 2021 address other load balancing issues. All the above works study sparse MoE with pre-training the entire model from scratch. In contrast, we study parameter-efficient adaptation of pre-trained language models by tuning only a very small number of sparse adapter parameters.
Averaging model weights. Recent explorations (Szegedy et al., 2016; Matena and Raffel, 2021; Wortsman et al., 2022; Izmailov et al., 2018) study model aggregation by averaging all the model weights. Matena and Raffel (2021) propose merging pre-trained language models that are fine-tuned on various text classification tasks. Wortsman et al. (2022) explore averaging model weights from various independent runs on the same task with different hyper-parameter configurations. In contrast to the above works on full model fine-tuning, we focus on parameter-efficient fine-tuning. We explore weight averaging for merging the weights of adaptation modules consisting of small sets of tunable parameters that are updated during model tuning while keeping the large model parameters fixed.
# 5 Conclusions
We develop a new framework AdaMix for parameter-efficient fine-tuning (PEFT) of large pretrained language models (PLM). AdaMix leverages a mixture of adaptation modules to improve downstream task performance without increasing the computational cost (e.g., FLOPs, parameters) of the underlying adaptation method. We demonstrate AdaMix to work with and improve over different PEFT methods like adapters and low rank decompositions across NLU and NLG tasks.
By tuning only $0.1 - 0.2\%$ of PLM parameters, AdaMix outperforms full model fine-tuning that updates all the model parameters as well as other state-of-the-art PEFT methods.
# 6 Limitations
The proposed AdaMix method is somewhat compute-intensive as it involves fine-tuning large-scale language models. The training cost of AdaMix is higher than that of standard PEFT methods, since the training procedure involves multiple copies of adapters. Based on our empirical observations, the number of training iterations for AdaMix is usually between 1 and 2 times that of standard PEFT methods. This has a negative impact on the carbon footprint of training the described models.
AdaMix is orthogonal to most of the existing parameter-efficient fine-tuning (PEFT) studies and is able to potentially improve the performance of any PEFT method. In this work, we explore two representative PEFT methods like adapter and LoRA but we did not experiment with other combinations like prompt-tuning and prefix-tuning. We leave those studies to future work.
# 7 Acknowledgment
The authors would like to thank the anonymous referees for their valuable comments and helpful suggestions and would like to thank Guoqing Zheng and Ruya Kang for their insightful comments on the project. This work is supported in part by the US National Science Foundation under grants NSF-IIS 1747614 and NSF-IIS-2141037. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
# References
Armen Aghajanyan, Sonal Gupta, and Luke Zettlemoyer. 2021. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7319-7328, Online. Association for Computational Linguistics.
Roy Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second PASCAL recognising textual entailment challenge.
Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth PASCAL recognizing textual entailment challenge. In TAC.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual Entailment.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Volume 1 (Long and Short Papers), pages 4171-4186.
William Fedus, Barret Zoph, and Noam Shazeer. 2021. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. arXiv preprint arXiv:2101.03961.
Jonathan Frankle, Gintare Karolina Dziugaite, Daniel Roy, and Michael Carbin. 2020. Linear mode connectivity and the lottery ticket hypothesis. In International Conference on Machine Learning, pages 3259-3269. PMLR.
Yarin Gal and Zoubin Ghahramani. 2015. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. CoRR, abs/1506.02142.
Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep Bayesian active learning with image data. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1183-1192. PMLR.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Association for Computational Linguistics (ACL).
Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The webnlg challenge: Generating text from rdf data. In Proceedings of the 10th International Conference on Natural Language Generation, pages 124-133.
Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790-2799. PMLR.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. 2018. Averaging weights leads to wider optima and better generalization. arXiv preprint arXiv:1803.05407.
Jaejun Lee, Raphael Tang, and Jimmy Lin. 2019. What would elsa do? freezing layers during transformer fine-tuning. arXiv preprint arXiv:1911.03090.
Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. 2020. Gshard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint arXiv:2006.16668.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. CoRR, abs/2104.08691.
Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, and Luke Zettlemoyer. 2021. Base layers: Simplifying training of large, sparse models. In ICML.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. CoRR, abs/2101.00190.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
Yuning Mao, Lambert Mathias, Rui Hou, Amjad Alma-hairi, Hao Ma, Jiawei Han, Wen-tau Yih, and Madian Khabsa. 2021. Unipelt: A unified framework for parameter-efficient language model tuning. arXiv preprint arXiv:2110.07577.
Michael Matena and Colin Raffel. 2021. Merging models with fisher-weighted averaging. arXiv preprint arXiv:2111.09832.
Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, et al. 2020. Dart: Open-domain structured data record to text generation. arXiv preprint arXiv:2007.02871.
Behnam Neyshabur, Hanie Sedghi, and Chiyuan Zhang. 2020. What is being transferred in transfer learning? Advances in neural information processing systems, 33:512-523.
Jekaterina Novikova, Ondrej Dušek, and Verena Rieser. 2017. The e2e dataset: New challenges for end-to-end generation. arXiv preprint arXiv:1706.09254.
Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts.
Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True few-shot learning with language models. arXiv preprint arXiv:2105.11447.
Jonas Pfeiffer, Aishwarya Kamath, Andreas Rückle, Kyunghyun Cho, and Iryna Gurevych. 2021. Adapterfusion: Non-destructive task composition for transfer learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 487-503.
Jonas Pfeiffer, Andreas Rückle, Clifton Poth, Aishwarya Kamath, Ivan Vulic, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020. Adapterhub: A framework for adapting transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020): Systems Demonstrations, pages 46-54, Online. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.
Stephen Roller, Sainbayar Sukhbaatar, Arthur D. Szlam, and Jason Weston. 2021. Hash layers for large sparse models. ArXiv, abs/2106.04426.
Thibault Sellam, Steve Yadowsky, Ian Tenney, Jason Wei, Naomi Saphra, Alexander D'Amour, Tal Linzen, Jasmijn Bastings, Iulia Raluca Turc, Jacob Eisenstein, Dipanjan Das, and Ellie Pavlick. 2022. The multiBERTs: BERT reproductions for robustness analysis. In International Conference on Learning Representations.
Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538.
Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, et al. 2022. Using deep-speed and megatron to train megatron-turing nlg 530b, a large-scale generative language model. arXiv preprint arXiv:2201.11990.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818-2826.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding.
Yaqing Wang, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, and Jianfeng Gao. 2021. List: Lite self-training makes efficient few-shot learners. arXiv preprint arXiv:2110.06274.
Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. *Language resources and evaluation*, 39(2):165-210.
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference.
Mitchell Wortsman, Gabriel Ilharco, Samir Yitzhak Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, et al. 2022. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. arXiv preprint arXiv:2203.05482.
An Yang, Junyang Lin, Rui Men, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Jiamang Wang, Yong Li, et al. 2021. M6-t: Exploring sparse expert models and beyond. arXiv preprint arXiv:2105.15082.
Elad Ben Zaken, Shauli Ravfogel, and Yoav Goldberg. 2021. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. arXiv preprint arXiv:2106.10199.
Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q Weinberger, and Yoav Artzi. 2021. Revisiting few-sample BERT fine-tuning.
Simiao Zuo, Xiaodong Liu, Jian Jiao, Young Jin Kim, Hany Hassan, Ruofei Zhang, Tuo Zhao, and Jianfeng Gao. 2021. Taming sparsely activated transformer with stochastic experts. arXiv preprint arXiv:2110.04260.
# Appendix
# A Background
# A.1 Mixture-of-Experts
The objective of sparsely-activated model design is to support conditional computation and increase the parameter count of neural models like Transformers while keeping the floating point operations (FLOPs) for each input example constant. Mixture-of-Experts (MoE) Transformer models (Shazeer et al., 2017; Fedus et al., 2021; Lepikhin et al., 2020; Zuo et al., 2021) achieve this by using $N$ feed-forward networks (FFN), namely "experts" denoted as $\mathbb{E}_{i=1}^{N}$ , each with its own set of learnable weights that compute different representations of an input token $x$ based on context. In order to sparsify the network to keep the FLOPs constant, there is an additional gating network $\mathbb{G}$ whose output is a sparse $N$ -dimensional vector to route each token via a few of these experts. Note that, a sparse model with $N = 1$ corresponding to only one FFN layer in each Transformer block collapses to the traditional dense model.
Consider $x_{s}$ to be the input token representation at the $s^{th}$ position to the MoE layer comprising the $\{\mathbb{E}_i\}_{i = 1}^{N}$ expert FFNs. Also, consider $w_{i}^{in}$ and $w_{i}^{out}$ to be the input and output projection matrices of the $i^{th}$ expert. The expert output $\mathbb{E}_i(x_s)$ is given by:
$$
\mathbb{E}_{i}(x_{s}) = w_{i}^{\text{out}} \cdot \operatorname{GeLU}\big(w_{i}^{\text{in}} \cdot x_{s}\big) \tag{5}
$$
Consider $\mathbb{G}(x_s)$ to be output of the gating network. Output of the sparse MoE layer is given by:
$$
h(x_{s}) = \sum_{i} \mathbb{G}(x_{s})_{i}\, \mathbb{E}_{i}(x_{s}) \tag{6}
$$
where $\mathbb{G}(x_s)_i$, the $i^{th}$ logit of the output of $\mathbb{G}(x_s)$, denotes the probability of selecting expert $\mathbb{E}_i$.
In order to keep the number of FLOPs in the sparse Transformer the same as that of a dense one, the gating mechanism can be constrained to route each token to only one expert FFN, i.e., $\sum_{i}\mathbb{G}(x_{s})_{i} = 1$ with a single non-zero entry.
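For illustration, a compact top-1 MoE layer corresponding to Equations 5 and 6 could look as follows (a simplified sketch with illustrative dimensions, ignoring load balancing):

```python
# Sparse MoE layer with N expert FFNs and top-1 routing.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Top1MoE(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.w_in = nn.ModuleList(nn.Linear(d_model, d_ff) for _ in range(num_experts))
        self.w_out = nn.ModuleList(nn.Linear(d_ff, d_model) for _ in range(num_experts))
        self.gate = nn.Linear(d_model, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Gate probabilities G(x_s) over experts.
        g = F.softmax(self.gate(x), dim=-1)
        top_p, top_i = g.max(dim=-1)                    # top-1 routing
        out = torch.zeros_like(x)
        for i in range(len(self.w_in)):
            mask = top_i == i
            if mask.any():
                expert = self.w_out[i](F.gelu(self.w_in[i](x[mask])))   # Eq. (5)
                out[mask] = top_p[mask].unsqueeze(-1) * expert          # Eq. (6)
        return out


moe = Top1MoE(d_model=768, d_ff=3072, num_experts=8)
tokens = torch.randn(32, 768)
mixed = moe(tokens)
```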
# A.2 Adapters
The predominant methodology for task adaptation is to tune all of the trainable parameters of the PLM for every task. This raises significant resource challenges both during training and deployment. A recent study (Aghajanyan et al., 2021) shows that PLMs have a low intrinsic dimension that can match the performance of the full parameter space.

Figure 5: Conventional adapter design in standard Transformer architecture.
To adapt PLMs for downstream tasks with a small number of parameters, adapters (Houlsby et al., 2019) have recently been introduced as an alternative approach for lightweight tuning.
The adapter tuning strategy judiciously introduces new parameters into the original PLMs. During fine-tuning, only the adapter parameters are updated while keeping the remaining parameters of the PLM frozen. Adapters usually consist of two fully connected layers as shown in Figure 5, where the adapter layer uses a down projection $\mathcal{W}^{down} \in \mathcal{R}^{d \times r}$ to project input representation $x$ to a low-dimensional space $r$ (referred as the bottleneck dimension) with $d$ being the model dimension, followed by a nonlinear activation function $f(\cdot)$ , and a up-projection with $\mathcal{W}^{up} \in \mathcal{R}^{r \times d}$ to project the low-dimensional features back to the original dimension. The adapters are further surrounded by residual connections.
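A minimal sketch of this conventional adapter design (the choice of ReLU for the nonlinearity $f(\cdot)$ is an illustrative assumption):

```python
# Conventional adapter as in Figure 5: down-projection to a bottleneck of size r,
# nonlinearity f, up-projection back to d, plus a residual connection.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    def __init__(self, d: int, r: int):
        super().__init__()
        self.down = nn.Linear(d, r)   # W_down in R^{d x r}
        self.up = nn.Linear(r, d)     # W_up in R^{r x d}
        self.f = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.f(self.down(x)))
```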
Given the above adapter design with parameters $\psi$ , the dataset $\mathcal{D}_K$ , a pre-trained language model encoder enc with parameters $\Theta_{\mathrm{PLM}}$ , where $\Theta_{\mathrm{PLM}} \gg \psi$ , we want to perform the following optimization for efficient model adaptation:
$$
\psi \leftarrow \operatorname*{arg\,min}_{\psi}\; \mathcal{L}\left(\mathcal{D}_{K}; \Theta_{\mathrm{PLM}}, \psi\right) \tag{7}
$$
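In practice, this optimization amounts to freezing $\Theta_{\mathrm{PLM}}$ and passing only the adapter parameters $\psi$ to the optimizer; a toy sketch under assumed module names (not from the released training script):

```python
# Freeze pre-trained weights and optimize only adapter parameters.
import torch
import torch.nn as nn

# Toy stand-in for one frozen PLM sub-layer plus an injected adapter.
model = nn.ModuleDict({
    "attention": nn.Linear(768, 768),     # pre-trained weight (part of Theta_PLM, frozen)
    "adapter_down": nn.Linear(768, 16),   # adapter parameters psi (trained)
    "adapter_up": nn.Linear(16, 768),
})

for name, param in model.named_parameters():
    param.requires_grad = name.startswith("adapter")   # freeze Theta_PLM, keep psi

# Optimizer state is kept only for the small set of adapter parameters.
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)
```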
# B Few-shot NLU Datasets
Data. In contrast to the fully supervised setting in the above experiments, we also perform few-shot experiments following the prior study (Wang et al., 2021) on six tasks including MNLI (Williams et al., 2018), RTE (Dagan et al., 2005; Bar Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009), $\mathrm{QQP}^1$ and SST-2 (Socher et al.). The results are reported on their development sets following (Zhang et al., 2021). MPQA (Wiebe et al., 2005) and Subj (Pang and Lee, 2004) are used for polarity and subjectivity detection, where we follow (Gao et al., 2021) to keep 2,000 examples for testing. The few-shot model only has access to $|\mathcal{K}|$ labeled samples for any task. Following the true few-shot learning setting (Perez et al., 2021; Wang et al., 2021), we do not use any additional validation set for hyper-parameter tuning or early stopping. The performance of each model is reported after a fixed number of training epochs. For a fair comparison, we use the same set of few-shot labeled instances for training as in (Wang et al., 2021). We train each model with 5 different seeds and report the average performance with standard deviation across runs. In the few-shot experiments, we follow (Wang et al., 2021) to train AdaMix via the prompt-based fine-tuning strategy. In contrast to (Wang et al., 2021), we do not use any unlabeled data.
# C Ablation Study
<table><tr><td>Model</td><td>MNLI Acc</td><td>SST2 Acc</td></tr><tr><td>Sharing Project-up</td><td>90.9</td><td>97.1</td></tr><tr><td>Sharing Project-down</td><td>90.8</td><td>97.1</td></tr></table>
Table 11: Ablation study demonstrating the impact of parameter sharing in AdaMix adapter framework.
<table><tr><td>Adapter Dim</td><td>#Param.</td><td>MNLI 393k</td><td>QNLI 108k</td><td>SST2 67k</td><td>MRPC 3.7k</td><td>RTE 2.5k</td></tr><tr><td colspan="7">BERTBASE</td></tr><tr><td>8</td><td>0.1M</td><td>82.2</td><td>91.1</td><td>92.2</td><td>87.3</td><td>72.6</td></tr><tr><td>16</td><td>0.3M</td><td>83.0</td><td>91.5</td><td>92.2</td><td>88.2</td><td>72.9</td></tr><tr><td>32</td><td>0.6M</td><td>83.6</td><td>91.3</td><td>92.2</td><td>88.5</td><td>73.6</td></tr><tr><td>48*</td><td>0.9M</td><td>84.7</td><td>91.5</td><td>92.4</td><td>89.5</td><td>74.7</td></tr><tr><td>64</td><td>1.2M</td><td>84.4</td><td>91.8</td><td>92.3</td><td>88.2</td><td>75.1</td></tr><tr><td colspan="7">RoBERTaLARGE</td></tr><tr><td>8</td><td>0.4M</td><td>90.7</td><td>95.2</td><td>96.8</td><td>91.2</td><td>87.7</td></tr><tr><td>16*</td><td>0.8M</td><td>90.9</td><td>95.4</td><td>97.1</td><td>91.9</td><td>89.2</td></tr><tr><td>32</td><td>1.5M</td><td>91.0</td><td>95.4</td><td>96.8</td><td>90.7</td><td>89.2</td></tr></table>
Table 12: Varying the bottleneck dimension of adapters in AdaMix with BERT-base and RoBERTa-large encoder. * denotes the bottleneck dimension used in AdaMix with adapters.
# D Detailed Results on NLU Tasks
|
| 394 |
+
|
| 395 |
+
The results on NLU tasks are included in Table 1 and Table 13. AdaMix with the
|
| 396 |
+
|
| 397 |
+
RoBERTa-large encoder achieves the best performance in terms of different task metrics on the GLUE benchmark. AdaMix with adapters is the only PEFT method that outperforms full model fine-tuning on all the tasks and on the average score. Additionally, the improvement brought by AdaMix is more significant with BERT-base as the encoder, with $2.2\%$ and $1.2\%$ improvement over full model fine-tuning and the best-performing baseline UNIPELT, respectively. As with RoBERTa-large, the improvement is consistent on every task. The NLG results are included in Tables 4 and 5.
|
| 398 |
+
|
| 399 |
+
# E Hyper-parameter
|
| 400 |
+
|
| 401 |
+
Detailed hyper-parameter configurations for different tasks are presented in Table 15 and Table 16.
|
| 402 |
+
|
| 403 |
+
<table><tr><td>Model</td><td>#Param.</td><td>MNLI Acc</td><td>QNLI Acc</td><td>SST2 Acc</td><td>QQP Acc /F1</td><td>MRPC Acc/F1</td><td>CoLA Mcc</td><td>RTE Acc</td><td>STS-B Pearson</td><td>Avg.</td></tr><tr><td>Full Fine-tuning†</td><td>110M</td><td>83.2</td><td>90.0</td><td>91.6</td><td>-/87.4</td><td>-/90.9</td><td>62.1</td><td>66.4</td><td>89.8</td><td>82.7</td></tr><tr><td>Houlsby Adapter†</td><td>0.9M</td><td>83.1</td><td>90.6</td><td>91.9</td><td>-/86.8</td><td>-/89.9</td><td>61.5</td><td>71.8</td><td>88.6</td><td>83.0</td></tr><tr><td>BitFit°</td><td>0.1M</td><td>81.4</td><td>90.2</td><td>92.1</td><td>-/84.0</td><td>-/90.4</td><td>58.8</td><td>72.3</td><td>89.2</td><td>82.3</td></tr><tr><td>Prefix-tuning†</td><td>0.2M</td><td>81.2</td><td>90.4</td><td>90.9</td><td>-/83.3</td><td>-/91.3</td><td>55.4</td><td>76.9</td><td>87.2</td><td>82.1</td></tr><tr><td>LoRA†</td><td>0.3M</td><td>82.5</td><td>89.9</td><td>91.5</td><td>-/86.0</td><td>-/90.0</td><td>60.5</td><td>71.5</td><td>85.7</td><td>82.2</td></tr><tr><td>UNIPELT (AP)†</td><td>1.1M</td><td>83.4</td><td>90.8</td><td>91.9</td><td>-/86.7</td><td>-/90.3</td><td>61.2</td><td>71.8</td><td>88.9</td><td>83.1</td></tr><tr><td>UNIPELT (APL)†</td><td>1.4M</td><td>83.9</td><td>90.5</td><td>91.5</td><td>85.5</td><td>-/90.2</td><td>58.6</td><td>73.7</td><td>88.9</td><td>83.5</td></tr><tr><td>AdaMix Adapter</td><td>0.9M</td><td>84.7</td><td>91.5</td><td>92.4</td><td>90.7/ 87.6</td><td>89.5/ 92.4</td><td>62.9</td><td>74.7</td><td>89.9</td><td>84.5</td></tr></table>
|
| 404 |
+
|
| 405 |
+
Table 13: Main results on GLUE development set with BERT-base encoder. The best result on each task is in bold and “-” denotes the missing measure. $\dagger$ and $\diamond$ denote that the reported results are taken from (Mao et al., 2021; Zaken et al., 2021). The average performance is calculated based on F1 of QQP and MRPC. #Param. refers to the number of updated parameters in the inference stage.
|
| 406 |
+
|
| 407 |
+

|
| 408 |
+
(a) BERT-base
|
| 409 |
+
|
| 410 |
+

|
| 411 |
+
(b) RoBERTa-large
|
| 412 |
+
Figure 6: Violin plot of AdaMix-RandomRouting performance distribution with BERT-base and RoBERTa-large encoders. Red dot denotes the performance of AdaMix.
|
| 413 |
+
|
| 414 |
+

|
| 415 |
+
(a) MNLI
|
| 416 |
+
|
| 417 |
+

|
| 418 |
+
(b) QNLI
|
| 419 |
+
|
| 420 |
+

|
| 421 |
+
(c) SST2
|
| 422 |
+
Figure 7: Convergence analysis demonstrating the impact of adapter sharing design in AdaMix.
|
| 423 |
+
|
| 424 |
+
<table><tr><td>Model</td><td>#Param.</td><td>MNLI Acc</td><td>QNLI Acc</td><td>SST2 Acc</td><td>QQP Acc /F1</td><td>MRPC Acc/F1</td><td>CoLA Mcc</td><td>RTE Acc</td><td>STS-B Pearson</td><td>Avg.</td></tr><tr><td colspan="11">BERTBASE</td></tr><tr><td>Full Fine-tuning</td><td>110M</td><td>83.2</td><td>90.0</td><td>91.6</td><td>-/87.4</td><td>-/90.9</td><td>62.1</td><td>66.4</td><td>89.8</td><td>82.7</td></tr><tr><td>AdaMix</td><td>0.9M</td><td>84.7</td><td>91.5</td><td>92.4</td><td>90.7/87.6</td><td>89.5/92.4</td><td>62.9</td><td>74.7</td><td>89.9</td><td>84.5</td></tr><tr><td>AdaMix-RandomRouting</td><td>3.6M</td><td>84.3</td><td>91.1</td><td>91.8</td><td>90.6/87.4</td><td>85.6/89.1</td><td>60.5</td><td>72.1</td><td>89.8</td><td>83.3</td></tr><tr><td>AdaMix-FixedRouting</td><td>0.9M</td><td>84.5</td><td>91.1</td><td>91.6</td><td>90.5/87.3</td><td>87.5/90.8</td><td>61.4</td><td>73.3</td><td>89.8</td><td>83.7</td></tr><tr><td>AdaMix-Ensemble</td><td>3.6M</td><td>84.3</td><td>91.2</td><td>91.6</td><td>90.5/87.4</td><td>85.9/89.4</td><td>59.4</td><td>72.1</td><td>89.8</td><td>83.2</td></tr><tr><td colspan="11">RoBERTaLARGE</td></tr><tr><td>Full Fine-tuning</td><td>355.0M</td><td>90.2</td><td>94.7</td><td>96.4</td><td>92.2/-</td><td>90.9/-</td><td>68.0</td><td>86.6</td><td>92.4</td><td>88.9</td></tr><tr><td>AdaMix</td><td>0.8M</td><td>90.9</td><td>95.4</td><td>97.1</td><td>92.3/89.8</td><td>91.9/94.1</td><td>70.2</td><td>89.2</td><td>92.4</td><td>89.9</td></tr><tr><td>AdaMix-RandomRouting</td><td>3.2M</td><td>90.8</td><td>95.2</td><td>96.8</td><td>92.2/89.6</td><td>90.8/93.3</td><td>68.8</td><td>88.5</td><td>92.2</td><td>89.4</td></tr><tr><td>AdaMix-FixedRouting</td><td>0.8M</td><td>90.7</td><td>95.1</td><td>96.8</td><td>92.1/89.5</td><td>91.2/93.6</td><td>68.6</td><td>89.2</td><td>92.2</td><td>89.5</td></tr><tr><td>AdaMix-Ensemble</td><td>3.2M</td><td>90.9</td><td>95.3</td><td>97.0</td><td>92.2/89.7</td><td>91.0/93.5</td><td>69.3</td><td>89.1</td><td>92.4</td><td>89.7</td></tr></table>
|
| 425 |
+
|
| 426 |
+
Table 14: Comparing the impact of different routing and ensembling strategies with AdaMix. Results are presented on GLUE development set with BERT-base and RoBERTa-large encoders. Average results are calculated following Table 1 and Table 2 for consistency. The best result on each task is in **bold** and “-” denotes the missing measure.
|
| 427 |
+
|
| 428 |
+
<table><tr><td>Task</td><td>Learning rate</td><td>epoch</td><td>batch size</td><td>warmup</td><td>weight decay</td><td>adapter size</td><td>adapter num</td></tr><tr><td colspan="8">BERTBASE</td></tr><tr><td>MRPC</td><td>4e-4</td><td>100</td><td>16</td><td>0.06</td><td>0.1</td><td>48</td><td>4</td></tr><tr><td>CoLA</td><td>5e-4</td><td>100</td><td>16</td><td>0.06</td><td>0.1</td><td>48</td><td>4</td></tr><tr><td>SST</td><td>4e-4</td><td>40</td><td>64</td><td>0.06</td><td>0.1</td><td>48</td><td>4</td></tr><tr><td>STS-B</td><td>5e-4</td><td>80</td><td>32</td><td>0.06</td><td>0.1</td><td>48</td><td>4</td></tr><tr><td>QNLI</td><td>4e-4</td><td>20</td><td>64</td><td>0.06</td><td>0.1</td><td>48</td><td>4</td></tr><tr><td>MNLI</td><td>4e-4</td><td>40</td><td>64</td><td>0.06</td><td>0.1</td><td>48</td><td>4</td></tr><tr><td>QQP</td><td>5e-4</td><td>60</td><td>64</td><td>0.06</td><td>0.1</td><td>48</td><td>4</td></tr><tr><td>RTE</td><td>5e-4</td><td>80</td><td>64</td><td>0.06</td><td>0.1</td><td>48</td><td>4</td></tr><tr><td colspan="8">RoBERTaLARGE</td></tr><tr><td>MRPC</td><td>3e-4</td><td>60</td><td>64</td><td>0.6</td><td>0.1</td><td>16</td><td>4</td></tr><tr><td>CoLA</td><td>3e-4</td><td>80</td><td>64</td><td>0.6</td><td>0.1</td><td>16</td><td>4</td></tr><tr><td>SST</td><td>3e-4</td><td>20</td><td>64</td><td>0.6</td><td>0.1</td><td>16</td><td>4</td></tr><tr><td>STS-B</td><td>3e-4</td><td>80</td><td>64</td><td>0.6</td><td>0.1</td><td>16</td><td>4</td></tr><tr><td>QNLI</td><td>3e-4</td><td>20</td><td>64</td><td>0.6</td><td>0.1</td><td>16</td><td>4</td></tr><tr><td>MNLI</td><td>3e-4</td><td>20</td><td>64</td><td>0.6</td><td>0.1</td><td>16</td><td>4</td></tr><tr><td>QQP</td><td>5e-4</td><td>80</td><td>64</td><td>0.6</td><td>0.1</td><td>16</td><td>4</td></tr><tr><td>RTE</td><td>5e-4</td><td>60</td><td>64</td><td>0.6</td><td>0.1</td><td>16</td><td>4</td></tr></table>
|
| 429 |
+
|
| 430 |
+
Table 15: Hyperparameter configurations for GLUE tasks.
|
| 431 |
+
|
| 432 |
+
<table><tr><td>Task</td><td>epoch</td><td>warmup steps</td><td>adapter size</td><td>no. of experts</td></tr><tr><td colspan="5">Adapter with AdaMix</td></tr><tr><td>E2E NLG Challenge</td><td>20</td><td>2000</td><td>8</td><td>8</td></tr><tr><td>WebNLG</td><td>25</td><td>2500</td><td>8</td><td>8</td></tr><tr><td>DART</td><td>20</td><td>2000</td><td>8</td><td>8</td></tr><tr><td colspan="5">LoRA with AdaMix</td></tr><tr><td>E2E NLG Challenge</td><td>20</td><td>2000</td><td>-</td><td>8</td></tr><tr><td>WebNLG</td><td>25</td><td>2500</td><td>-</td><td>8</td></tr><tr><td>DART</td><td>20</td><td>2000</td><td>-</td><td>8</td></tr></table>
|
| 433 |
+
|
| 434 |
+
Table 16: Hyperparameter configurations for GPT-2 Medium on NLG tasks. We retain all other default training and generation-specific hyper-parameters from LoRA (Hu et al., 2021).
|
adamixmixtureofadaptationsforparameterefficientmodeltuning/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:5599f8fdb8566b31842aa2a686d61701cbb2a216084d6f893e355721f4cf17c8
|
| 3 |
+
size 919428
|
adamixmixtureofadaptationsforparameterefficientmodeltuning/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:556d326eaa4ba6d967b0523b9dd01cfa18608cf22855ada4b8bf9f15cc3a7f1d
|
| 3 |
+
size 512774
|
adaptersharetaskcorrelationmodelingwithadapterdifferentiation/3f8d551b-99cc-46c1-a866-b4c2c94b841d_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ef75e79c1cb8b1944d26e2cbf5b069be3180341ddc3fadec2e8ee2dd16b8ec06
|
| 3 |
+
size 44948
|
adaptersharetaskcorrelationmodelingwithadapterdifferentiation/3f8d551b-99cc-46c1-a866-b4c2c94b841d_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:884185f1c05bcc765fc478af566a0c6d4c279aa7ac2fa9d7740b84db2766eae8
|
| 3 |
+
size 53456
|
adaptersharetaskcorrelationmodelingwithadapterdifferentiation/3f8d551b-99cc-46c1-a866-b4c2c94b841d_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:8fa5426d204de829a2dd1ecbd396f51cc9d6d63825e18f21b3d6336ca9aa0a62
|
| 3 |
+
size 770097
|
adaptersharetaskcorrelationmodelingwithadapterdifferentiation/full.md
ADDED
|
@@ -0,0 +1,187 @@
| 1 |
+
# AdapterShare: Task Correlation Modeling with Adapter Differentiation
|
| 2 |
+
|
| 3 |
+
Zhi Chen $^{1*}$ , Bei Chen $^{2}$ , Lu Chen $^{1}$ , Kai Yu $^{1}$ , Jian-Guang Lou $^{2}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
$^{2}$ Microsoft Research Asia
{zhenchi713, chenlusz, kai.yu}@sjtu.edu.cn, {beichen, jlou}@microsoft.com
|
| 6 |
+
|
| 7 |
+
# Abstract
|
| 8 |
+
|
| 9 |
+
Thanks to the development of pre-trained language models, multitask learning (MTL) methods have achieved great success in natural language understanding. However, current MTL methods pay more attention to task selection or model design to fuse as much knowledge as possible, while the intrinsic task correlation is often neglected. It is important to learn sharing strategies among multiple tasks rather than sharing everything. In this paper, we propose AdapterShare, an adapter differentiation method to explicitly model task correlation among multiple tasks. AdapterShare is automatically learned based on the gradients on tiny held-out validation data. Compared to single-task learning and fully shared MTL methods, our proposed method obtains obvious performance improvements. Compared to the existing MTL method AdapterFusion, AdapterShare achieves an absolute average improvement of 1.90 points on five dialogue understanding tasks and 2.33 points on NLU tasks. Our implementation is available at https://github.com/microsoft/ContextualSP.
|
| 10 |
+
|
| 11 |
+
# 1 Introduction
|
| 12 |
+
|
| 13 |
+
With the development of transformer-based pretrained language models (PLMs), natural language understanding (NLU) has made great progress as a downstream task. There are two main ways to leverage PLMs in NLU tasks. One is the fine-tuning method, which updates the pre-trained language model directly on a target task. The other one is adapters (Rebuffi et al., 2017; Houlsby et al., 2019), which introduces a small number of task-specific parameters on a fixed PLM. When training on the target task, only the introduced parameters are updated. Compared to the fine-tuning method, adapter is memory-efficient, since the introduced parameters are much less than those of the PLM. In this paper, we focus on the approach using adapters.
|
| 14 |
+
|
| 15 |
+

|
| 16 |
+
Figure 1: The architecture of the adapters with task correlation modeling method.
|
| 17 |
+
|
| 18 |
+
To transfer the knowledge of different tasks, Stickland and Murray (2019) proposed a multitask learning (MTL) method to update the weights of a shared adapter using the weighting of the objective functions of all target tasks. The shared adapter captures the common structure underlying all the target tasks. This is a typical multitask learning method based on an implicit assumption that all tasks benefit from each other, where all parameters of the adapter are shared during multitask training. In other words, the task correlation has not been modeled in the traditional MTL method. In this paper, we propose a robust adapter differentiation method, called AdapterShare, to model the correlation of all target tasks explicitly. As shown in Figure 1, during the multitask learning process, the sharing strategy of adapter at each PLM layer is automatically learned according to the adapter gradients on small-scale held-out validation data. The learned sharing strategy can be regarded as a discrete task correlation map.
|
| 19 |
+
|
| 20 |
+
The closest work is AdapterFusion (Pfeiffer et al., 2021), which is a two-stage learning method. The first stage is to train task-wise adapters separately, and the second stage is to fuse all task-wise adapters with attention mechanism for each target task. The two-stage method is sensitive to the initialization of attention weights. Once there are two tasks that hurt each other, it is hard to assign zero to the corresponding adapter using soft attention mechanism. Compared to AdapterFusion, our proposed AdapterShare learns all the adapters and their task correlation simultaneously. We adopt a discrete format to represent task correlation, where at each PLM layer, every two tasks either share the adapter (1 in the task correlation map) or not (0 in the task correlation map).
|
| 21 |
+
|
| 22 |
+
# 2 Problem Statement
|
| 23 |
+
|
| 24 |
+
As discussed, the existing multitask learning methods tend to share all parameters. They assume that all target tasks benefit from each other. However, in practice, it can be detrimental to assume correlation in a set of tasks and simply put them together for learning (Bonilla et al., 2007). In this paper, we propose an approach to learn task correlation automatically. The task correlation indicates that all the target tasks are clustered into several task groups. The tasks in the same task group share the parameters. We maintain the task correlation map at the granularity of each transformer layer of pre-trained language models. With the adapter training strategy, the learning process can be formalized as:
|
| 25 |
+
|
| 26 |
+
$$
|
| 27 |
+
\Phi_i \leftarrow \operatorname{argmin}\left(L_{\Phi_i}\left(D_i; \Theta_0, \Phi_i\right)\right), \tag{1}
|
| 28 |
+
$$
|
| 29 |
+
|
| 30 |
+
where $\Theta_0$ denotes the initialized parameters of the PLM, $\Phi_i$ the adapter parameters of the $i$-th task $t_i$, $D_i$ the annotated training samples of the $i$-th task, and $L_{\Phi_i}(\cdot)$ the loss function of the target task. The adapters consist of adapter networks at all PLM layers:
|
| 31 |
+
|
| 32 |
+
$$
|
| 33 |
+
\Phi_i = \left\{\Phi_i^1, \Phi_i^2, \dots, \Phi_i^L\right\}, \tag{2}
|
| 34 |
+
$$
|
| 35 |
+
|
| 36 |
+
where $L$ is the number of PLM layers and $\Phi_i^l$ is the adapter parameters of the $l$-th PLM layer for the task group containing task $t_i$. As mentioned, the task correlation is at layer granularity. If task $t_j$ is in the same task group as task $t_i$ at the $l$-th layer, the adapter parameters are shared between these two tasks, i.e., $\Phi_i^l = \Phi_j^l$. The task group at the $l$-th PLM layer is defined by the layer-wise task correlation map $M^l$. For example, as shown in
|
| 37 |
+
|
| 38 |
+

|
| 39 |
+
Figure 2: Calculated inter-task and intra-task gradients on tiny task-wise held-out validation sets.
|
| 40 |
+
|
| 41 |
+
Figure 1, there are two task groups: $G_1^l = G_2^l = \{t_1, t_2\}$ and $G_3^l = G_4^l = G_5^l = \{t_3, t_4, t_5\}$ according to the task correlation map $M^l$, where $M^l(i, j) = 1$ means $t_i$ and $t_j$ are in the same group at the $l$-th layer. In the next section, we introduce how to learn the layer-wise task correlation map.
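As a concrete illustration (assumed data structures, not the authors' implementation), the sketch below turns one layer's correlation map $M^l$ into a task-to-adapter lookup so that tasks in the same group reuse one adapter object:

```python
# M_l[i][j] == 1 means tasks t_i and t_j share one adapter at this layer, so all
# tasks in a group point to the same adapter object (Phi_i^l == Phi_j^l).
from typing import Callable, Dict, List

def group_adapters(M_l: List[List[int]], make_adapter: Callable[[], object]) -> Dict[int, object]:
    task_to_adapter: Dict[int, object] = {}
    for i in range(len(M_l)):
        if i in task_to_adapter:
            continue                          # task already assigned via its group
        shared = make_adapter()               # one Phi^l per task group
        for j in range(len(M_l)):
            if M_l[i][j] == 1:
                task_to_adapter[j] = shared
    return task_to_adapter
```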
|
| 42 |
+
|
| 43 |
+
# 3 AdapterShare
|
| 44 |
+
|
| 45 |
+
In this section, we first introduce the adopted task correlation learning method in general. Then we reveal a problem with the existing neural differentiation algorithm and improve on it in our proposed task correlation learning algorithm, AdapterShare. Note that in the following, all learnable parameters are adapters, while the parameters of the PLM are fixed.
|
| 46 |
+
|
| 47 |
+
# 3.1 Adapter Differentiation
|
| 48 |
+
|
| 49 |
+
We model task correlation in a discrete format. The discrete task correlation map divides all the target tasks into several task groups. The tasks in the same task group benefit from each other. The main challenge is how to quantify the effects of two different tasks. Inspired by the parameter differentiation method (Wang and Zhang, 2021), we leverage interference degree as the effect metric. The interference degree of two tasks is the negative value of the inter-task gradient cosine similarity on the shared parameters. The inter-task gradient is calculated on tiny held-out validation data, which contains validation samples of all tasks. Formally, the interference degree of a task group is:
|
| 50 |
+
|
| 51 |
+
$$
|
| 52 |
+
\mathcal{I}\left(\Phi_i^l; G_i^l\right) = \max_{t_i, t_j \in G_i^l} - \frac{\overline{\mathbf{g}}_{t_i}^l \cdot \overline{\mathbf{g}}_{t_j}^l}{\|\overline{\mathbf{g}}_{t_i}^l\| \cdot \|\overline{\mathbf{g}}_{t_j}^l\|}, \tag{3}
|
| 53 |
+
$$
|
| 54 |
+
|
| 55 |
+
$$
|
| 56 |
+
\overline{\mathbf{g}}_{t_i}^l = \nabla L_{\Phi_i^l}\left(H_i; \Theta_0, \Phi_i^l\right), \tag{4}
|
| 57 |
+
$$
|
| 58 |
+
|
| 59 |
+
where $\bar{\mathbf{g}}_{t_i}^l$ is the inter-task gradient of the shared adapter in task group $G_i^l$, calculated on the held-out validation data $H_i$ of task $t_i$. The inter-task gradient $\bar{\mathbf{g}}_{t_i}^l$ is the gradient accumulated over all samples in the held-out validation data of task $t_i$. If the
|
| 60 |
+
|
| 61 |
+
Algorithm 1: Task Correlation Learning
|
| 62 |
+
Set all the elements of task correlation maps to one: $\{M^l\}_{l = 1}^L$
|
| 63 |
+
Initialize the adapter parameters: $\{\Phi_i^l\}_{l = 1}^L$ , where $\Phi_0^l = \dots = \Phi_N^l$
|
| 64 |
+
// Prepare the data for $N$ tasks
|
| 65 |
+
Training dataset: $\{D_i\}_{i = 1}^N$
|
| 66 |
+
Held-out validation dataset: $\{H_{i}\}_{i = 1}^{N}$
|
| 67 |
+
// Training process of each epoch
|
| 68 |
+
for $i$ in $1, 2, \dots, N$ do:
1. Sample a mini-batch $b_i$ from $D_i$.
2. Switch the adapters into the $i$-th task mode $\Phi_i$ according to $\{M^l\}_{l = 1}^L$.
3. Compute the loss as in Eq. 1 and update $\Phi_i$.
|
| 69 |
+
// Detect adapter differentiation
|
| 70 |
+
for $l$ in $1, 2, \dots, L$ do (task group set $\{G_i^l\}_{i = 1}^N$):
for $G_i$ in $\{G_i^l\}_{i = 1}^N$ do:
for $t_i$ in $G_i$ do (consistency of intra-task gradients):
4. Split $H_i$ into $H_{i,0}$ and $H_{i,1}$.
5. Calculate $\overline{\mathbf{g}}_{t_i,0}^l$ and $\overline{\mathbf{g}}_{t_i,1}^l$ as in Eq. 4.
6. Calculate $\mathcal{C}(\Phi_i^l)$ as in Eq. 5.
if all $\mathcal{C}(\Phi_i^l) > \alpha$ then:
7. Calculate $\overline{\mathbf{g}}_{t_i}^l$ as in Eq. 6.
8. Calculate $\mathcal{I}(\Phi_i^l; G_i^l)$ as in Eq. 3.
if any $\mathcal{I}(\Phi_i^l; G_i^l) > 0$ then:
9. Perform adapter differentiation.
10. Update $M^l$.
|
| 71 |
+
|
| 72 |
+
interference degree $\mathcal{I}(\Phi_i^l; G_i^l) > 0$ , it indicates that there are at least two tasks in this task group that have conflicting optimum directions. For example, as shown in Figure 2, $\overline{\mathbf{g}}_{t_1}^l$ and $\overline{\mathbf{g}}_{t_2}^l$ have similar global optimum directions, while $\overline{\mathbf{g}}_{t_3}^l$ has the opposite direction to the other two tasks. It suggests that $t_3$ may hinder the other two tasks $t_1$ and $t_2$ . These three tasks need to be divided into two different groups: $G_1^l = G_2^l = \{t_1, t_2\}$ and $G_3^l = \{t_3\}$ . The dividing process is named adapter differentiation, where one task group is split into two subgroups. In detail, adapter differentiation has three steps: 1) The two tasks with the highest interference degree are taken as representatives and put into two different subgroups; 2) Every other task in the current task group is compared with these two representatives and added to the subgroup with the lower interference degree; 3) The parameters of two differentiated adapters are copied from the original adapter. The elements in the task correlation map $M^l$ will change from 1 to 0, if two tasks belong to different task groups.
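The following sketch illustrates the interference test of Eq. 3 and the three-step differentiation described above; the flattened gradient tensors and helper names are assumptions for illustration, not the released code.

```python
# Interference degree (Eq. 3) over a task group and the split into two subgroups.
import torch
import torch.nn.functional as F

def neg_cos(g_a: torch.Tensor, g_b: torch.Tensor) -> float:
    return -F.cosine_similarity(g_a, g_b, dim=0).item()

def interference(grads: dict):
    """grads: task name -> accumulated validation gradient on the shared adapter."""
    tasks = list(grads)
    worst, pair = -float("inf"), None
    for a in range(len(tasks)):
        for b in range(a + 1, len(tasks)):
            score = neg_cos(grads[tasks[a]], grads[tasks[b]])
            if score > worst:
                worst, pair = score, (tasks[a], tasks[b])
    return worst, pair

def differentiate(grads: dict):
    """Split the group only when interference is positive (conflicting gradients)."""
    if len(grads) < 2:
        return [list(grads)]
    worst, (ta, tb) = interference(grads)
    if worst <= 0:
        return [list(grads)]                       # keep the group intact
    group_a, group_b = [ta], [tb]                  # step 1: two representatives
    for t in grads:                                # step 2: assign remaining tasks
        if t not in (ta, tb):
            closer_to_a = neg_cos(grads[t], grads[ta]) < neg_cos(grads[t], grads[tb])
            (group_a if closer_to_a else group_b).append(t)
    return [group_a, group_b]                      # step 3: copy the adapter per subgroup
```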
|
| 73 |
+
|
| 74 |
+
At the beginning of the training process, we set all elements of the task correlation map to 1, which means that all adapter parameters are shared among
|
| 75 |
+
|
| 76 |
+
<table><tr><td>Corpora</td><td>#Sample</td><td>I(Token)</td><td>I(Turn)</td><td>O(Token)</td><td>Task</td></tr><tr><td>SAMSUM (2019)</td><td>14732</td><td>104.95</td><td>11.2</td><td>20.31</td><td>DS</td></tr><tr><td>TASK (2019)</td><td>2205</td><td>34.92</td><td>2.8</td><td>10.84</td><td>DC</td></tr><tr><td>BANK77 (2020)</td><td>12081</td><td>21.64</td><td>1</td><td>3.14</td><td>ID</td></tr><tr><td>RES8K (2020)</td><td>15270</td><td>14.44</td><td>1</td><td>3.38</td><td>SF</td></tr><tr><td>WOZ2.0 (2017)</td><td>7608</td><td>78.96</td><td>4.6</td><td>1.30</td><td>DST</td></tr></table>
|
| 77 |
+
|
| 78 |
+
Table 1: Statistics of five dialogue understanding datasets. $\mathbf{I}_{(\mathrm{Token})}$ and $\mathbf{I}_{(\mathrm{Turn})}$ mean the average length of the split tokens and the average turns of the input dialogue content. $\mathbf{O}_{(\mathrm{Token})}$ means the average length of the split tokens of the task-specific output.
|
| 79 |
+
|
| 80 |
+
<table><tr><td>Corpora</td><td>#Train</td><td>#Dev.</td><td>#Test</td><td>#Label</td><td>Task</td></tr><tr><td>WNLI (2012)</td><td>634</td><td>71</td><td>146</td><td>2</td><td>NLI</td></tr><tr><td>RTE (2018)</td><td>2500</td><td>276</td><td>3000</td><td>2</td><td>NLI</td></tr><tr><td>CoLA (2019)</td><td>8500</td><td>1000</td><td>1000</td><td>2</td><td>ACC</td></tr><tr><td>SST-2 (2013)</td><td>67000</td><td>872</td><td>1800</td><td>2</td><td>SEN</td></tr><tr><td>STSB (2017)</td><td>7000</td><td>1500</td><td>1400</td><td>1</td><td>SIM</td></tr></table>
|
| 81 |
+
|
| 82 |
+
Table 2: Statistics of five natural language understanding datasets.
|
| 83 |
+
|
| 84 |
+
all tasks. Then, we periodically calculate the interference degree of the current task groups to activate the adapter differentiation operation when the interference degree is greater than 0. Once adapter differentiation starts, the task correlation map will be permanently changed.
|
| 85 |
+
|
| 86 |
+
# 3.2 Avoiding Over-Differentiation
|
| 87 |
+
|
| 88 |
+
So far, we have introduced the basic adapter differentiation method for learning task correlation. However, in practice, we find a problem called over-differentiation: the basic adapter differentiation method has an unstable training process, in which the update of the task correlation map is irreversible. At the beginning of the training process, the shared adapter parameters are fragile and the inter-task gradients have a large bias on the held-out validation data. Thus, the adapter differentiation operation needs to be applied cautiously. In our proposed AdapterShare, we add another line of defense before activating the differentiation: we have to make sure that the inter-task gradient can be trusted. As shown in Figure 2, each inter-task gradient is accumulated from intra-task gradients, while the intra-task gradients vary within a task.
|
| 89 |
+
|
| 90 |
+
To alleviate this issue, we randomly split all the intra-task gradients into two groups and calculate the accumulated intra-task gradients of these two groups: $\overline{\mathbf{g}}_{t_i,0}^l$ and $\overline{\mathbf{g}}_{t_i,1}^l$ . Then, we use their cosine
|
| 91 |
+
|
| 92 |
+
<table><tr><td rowspan="2">DU Tasks (T5)</td><td colspan="4">Methods</td></tr><tr><td>ST</td><td>MT</td><td>AdapterFusion</td><td>AdapterShare</td></tr><tr><td>SAMSUM (R-L)</td><td>48.80</td><td>47.78</td><td>47.36</td><td>49.12</td></tr><tr><td>TASK (BLEU)</td><td>88.45</td><td>89.54</td><td>89.92</td><td>90.20</td></tr><tr><td>BANK77 (ACC.)</td><td>91.58</td><td>89.25</td><td>91.10</td><td>93.15</td></tr><tr><td>REST8K (F1)</td><td>97.28</td><td>96.41</td><td>95.93</td><td>97.58</td></tr><tr><td>WOZ2.0 (JGA)</td><td>91.25</td><td>90.70</td><td>89.12</td><td>92.89</td></tr><tr><td>OVERALL</td><td>83.47</td><td>82.74</td><td>82.69</td><td>84.59</td></tr></table>
|
| 93 |
+
|
| 94 |
+
Table 3: Results on five dialogue understanding tasks with the backbone T5.
|
| 95 |
+
|
| 96 |
+
<table><tr><td rowspan="2">NU Tasks(BERT)</td><td colspan="4">Methods</td></tr><tr><td>ST</td><td>MT</td><td>AdapterFusion</td><td>AdapterShare</td></tr><tr><td>WNLI (ACC.)</td><td>56.34</td><td>61.97</td><td>56.33</td><td>61.97</td></tr><tr><td>RTE (ACC.)</td><td>66.06</td><td>77.61</td><td>70.75</td><td>77.62</td></tr><tr><td>CoLA (MCC.)</td><td>58.02</td><td>59.06</td><td>60.23</td><td>60.64</td></tr><tr><td>SST-2 (ACC.)</td><td>93.12</td><td>92.66</td><td>93.12</td><td>92.77</td></tr><tr><td>STSB (Spearman)</td><td>88.78</td><td>89.28</td><td>89.88</td><td>88.96</td></tr><tr><td>OVERALL</td><td>72.46</td><td>76.12</td><td>74.06</td><td>76.39</td></tr></table>
|
| 97 |
+
|
| 98 |
+
Table 4: Results on five natural language understanding tasks with the backbone BERT.
|
| 99 |
+
|
| 100 |
+
similarity as the consistency of inter-task gradient, calculated as:
|
| 101 |
+
|
| 102 |
+
$$
|
| 103 |
+
\mathcal{C}\left(\Phi_i^l\right) = \frac{\overline{\mathbf{g}}_{t_i,0}^l \cdot \overline{\mathbf{g}}_{t_i,1}^l}{\|\overline{\mathbf{g}}_{t_i,0}^l\| \cdot \|\overline{\mathbf{g}}_{t_i,1}^l\|}. \tag{5}
|
| 104 |
+
$$
|
| 105 |
+
|
| 106 |
+
The adapter differentiation on a task group can be activated only when all tasks in this task group have consistency values greater than the threshold $\alpha$ . The inter-task gradient of task $t_i$ is equal to the sum of two accumulated intra-task gradients, formalized as:
|
| 107 |
+
|
| 108 |
+
$$
|
| 109 |
+
\bar{\mathbf{g}}_{t_i}^l = \bar{\mathbf{g}}_{t_i,0}^l + \bar{\mathbf{g}}_{t_i,1}^l. \tag{6}
|
| 110 |
+
$$
|
| 111 |
+
|
| 112 |
+
To distinguish it from the basic adapter differentiation method, we name the improved method robust adapter differentiation. The details of task correlation learning are shown in Algorithm 1.
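A sketch of the consistency gate (Eq. 5) and the trusted inter-task gradient (Eq. 6) follows; the flat per-sample gradient tensors and the helper name are assumptions for illustration, and $\alpha = 0.707$ matches the value used later in Section 4.2.

```python
# Split the per-sample validation gradients into two random halves; trust the
# accumulated inter-task gradient only if the two halves agree (Eq. 5), and then
# return their sum (Eq. 6).
import torch
import torch.nn.functional as F

ALPHA = 0.707  # cos(pi / 4)

def trusted_inter_task_gradient(per_sample_grads, alpha: float = ALPHA):
    order = torch.randperm(len(per_sample_grads)).tolist()
    half = len(per_sample_grads) // 2
    g0 = torch.stack([per_sample_grads[i] for i in order[:half]]).sum(dim=0)
    g1 = torch.stack([per_sample_grads[i] for i in order[half:]]).sum(dim=0)
    if F.cosine_similarity(g0, g1, dim=0) <= alpha:   # Eq. 5: consistency check
        return None                                   # gradient not trusted
    return g0 + g1                                    # Eq. 6
```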
|
| 113 |
+
|
| 114 |
+
# 4 Experiments
|
| 115 |
+
|
| 116 |
+
# 4.1 Datasets
|
| 117 |
+
|
| 118 |
+
We evaluate our proposed AdapterShare on five dialog understanding (DU) datasets (shown in Table 1) and five natural language understanding (NLU) datasets (shown in Table 2). There are five different dialog understanding tasks in DU datasets. DS, DC, ID, SF and DST represent dialogue summary, dialogue completion, intent detection, slot filling and dialogue state tracking, respectively. Five NLU
|
| 119 |
+
|
| 120 |
+
datasets are chosen from the GLUE benchmark, spanning four different NLU tasks. NLI, ACC, SEN and SIM indicate natural language inference, acceptability, sentiment and similarity, respectively.
|
| 121 |
+
|
| 122 |
+
# 4.2 Experimental Setup
|
| 123 |
+
|
| 124 |
+
In order to investigate the proposed AdapterShare training method, we compare it with ST, MT and AdapterFusion. ST trains a separate adapter for each target task. MT trains the adapters on all the target tasks (Stickland and Murray, 2019). AdapterFusion fuses the separated ST adapters on the target task with attention mechanism.
|
| 125 |
+
|
| 126 |
+
As described in Su et al. (2022) and Chen et al. (2022), the dialogue understanding tasks can be formulated as a unified sequence-to-sequence generation task. For the five DU tasks, we leverage the T5-base model (Raffel et al., 2020) as the backbone of the generation model. For the five NLU tasks, we implement all the experiments based on the code released by Liu et al. (2019). The backbone for the NLU tasks is BERT-large (Kenton and Toutanova, 2019). The adapters are implemented based on AdapterHub (Pfeiffer et al., 2020), where the pre-trained language models are inherited from the HuggingFace library (Wolf et al., 2019). We set the threshold of intra-task consistency $\alpha$ to $0.707$ $\left(\cos\left(\pi/4\right)\right)$. The learning rate is 1e-5. We conduct all the experiments on a V100 GPU with 16GB memory. For all metrics, higher is better.
|
| 127 |
+
|
| 128 |
+

|
| 129 |
+
Figure 3: Differentiated adapters on 24 transformer layers of T5. X-axis represents the task name. Y-axis represents the number of shared tasks.
|
| 130 |
+
|
| 131 |
+
# 4.3 Results
|
| 132 |
+
|
| 133 |
+
The proposed AdapterShare adopts a robust adapter differentiation method to learn task correlation. As shown in Table 3, the proposed AdapterShare achieves the best performance among the compared methods. Compared with the single-task method, AdapterFusion does not obtain any performance gain in the encoder-decoder setup. In the encoder-only setting, AdapterFusion achieves the best performance on two of the five tasks, as shown in Table 4. Compared with the single-task method, it obtains clear improvements, which is consistent with the original conclusion (Pfeiffer et al., 2021). However, in the encoder-only setup, our proposed AdapterShare still obtains the best performance on three of the five tasks and the best overall score. The MT method shares all the parameters among all the tasks. On the dialogue understanding tasks, the overall score of ST is better than that of MT, which indicates that some tasks are hurt by other tasks. The final results on the DU tasks further indicate that our proposed AdapterShare, which learns the task correlation map, is more effective than independent training (ST) and complete sharing. The final differentiated architecture on T5 is shown in Figure 3. Four shared tasks means that all five tasks share the adapter with each other in the corresponding layer. We can see that adapter differentiation happens only on the T5 decoder side, and all the adapters on the encoder are shared. This phenomenon is interesting: the inputs of all DU tasks are the dialogue context, and the encoder module, as the representation function, is used to represent the dialogue context. Compared with the encoder, the decoder needs to
|
| 134 |
+
|
| 135 |
+
solve different DU tasks, whose outputs are very different. Various DU tasks need to pay attention to different areas of the dialogue context. For example, the DST task is more inclined to extract the entity information mentioned by the user, while intent detection pays more attention to user actions.
|
| 136 |
+
|
| 137 |
+
We also conduct an ablation study comparing the robust adapter differentiation method with the basic differentiation method on the dialogue understanding tasks. The performance curves on the development sets are shown in Appendix A. They show that the training process of the robust adapter differentiation method is more stable than that of the basic method. The metrics of the robust method on the DU tasks are also higher than those of the basic differentiation method.
|
| 138 |
+
|
| 139 |
+
# 5 Conclusion
|
| 140 |
+
|
| 141 |
+
In this paper, we propose a robust adapter differentiation method to automatically learn task correlation in the multitask learning setting. On both encoder-decoder and encoder-only PLMs, our proposed method achieves clear performance gains compared to the separate-training, complete-sharing and AdapterFusion methods. In future work, we will try our method in the domain transfer area, which is a more general scenario than multitask learning.
|
| 142 |
+
|
| 143 |
+
# Limitations
|
| 144 |
+
|
| 145 |
+
There are two main limitations in this paper. The first concerns the scale of multitask learning: the experiments involve five tasks each in the dialogue understanding and natural language understanding areas, and it is unclear whether the proposed method works in a large-scale task learning setup. The second is the implicit assumption in our proposed method that the effect between two tasks is mutual, i.e., if one task benefits/hurts the other, then the other also benefits/hurts the first. There is currently no evidence for the validity of this assumption. We leave these explorations for future work.
|
| 146 |
+
|
| 147 |
+
# Ethical Considerations
|
| 148 |
+
|
| 149 |
+
As our adapter differentiation methods are validated on the existing datasets, we follow the original copyright statements of 10 datasets. All claims in this paper are based on the experimental results. No demographic or identity characteristics information is used in this paper.
|
| 150 |
+
|
| 151 |
+
# References
|
| 152 |
+
|
| 153 |
+
Edwin V Bonilla, Kian Chai, and Christopher Williams. 2007. Multi-task gaussian process prediction. Advances in neural information processing systems, 20.
|
| 154 |
+
Inigo Casanueva, Tadas Temcinas, Daniela Gerz, Matthew Henderson, and Ivan Vulic. 2020. Efficient intent detection with dual sentence encoders. ACL 2020, page 38.
|
| 155 |
+
Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055.
|
| 156 |
+
Zhi Chen, Lu Chen, Bei Chen, Libo Qin, Yuncong Liu, Su Zhu, Jian-Guang Lou, and Kai Yu. 2022. UniDU: Towards a unified generative dialogue understanding framework. In Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 442-455, Edinburgh, UK. Association for Computational Linguistics.
|
| 157 |
+
Samuel Coope, Tyler Farghly, Daniela Gerz, Ivan Vulic, and Matthew Henderson. 2020. Span-convert: Few-shot span extraction for dialog with pretrained conversational representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 107-121.
|
| 158 |
+
Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. Samsum corpus: A human-annotated dialogue dataset for abstractive summarization. EMNLP-IJCNLP 2019, page 70.
|
| 159 |
+
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790-2799. PMLR.
|
| 160 |
+
Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171-4186.
|
| 161 |
+
Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Thirteenth international conference on the principles of knowledge representation and reasoning.
|
| 162 |
+
Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496.
|
| 163 |
+
Jonas Pfeiffer, Aishwarya Kamath, Andreas Rückle, Kyunghyun Cho, and Iryna Gurevych. 2021. Adapterfusion: Non-destructive task composition for transfer learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 487-503.
|
| 164 |
+
|
| 165 |
+
Jonas Pfeiffer, Andreas Rückle, Clifton Poth, Aishwarya Kamath, Ivan Vulic, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020. Adapterhub: A framework for adapting transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020): Systems Demonstrations, pages 46-54, Online. Association for Computational Linguistics.
|
| 166 |
+
Jun Quan, Deyi Xiong, Bonnie Webber, and Changjian Hu. 2019. Gecor: An end-to-end generative ellipsis and co-reference resolution model for task-oriented dialogue. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4547-4557.
|
| 167 |
+
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21:1-67.
|
| 168 |
+
Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. 2017. Learning multiple visual domains with residual adapters. Advances in neural information processing systems, 30.
|
| 169 |
+
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642.
|
| 170 |
+
Asa Cooper Stickland and Iain Murray. 2019. Bert and pals: Projected attention layers for efficient adaptation in multi-task learning. In International Conference on Machine Learning, pages 5986-5995. PMLR.
|
| 171 |
+
Yixuan Su, Lei Shu, Elman Mansimov, Arshit Gupta, Deng Cai, Yi-An Lai, and Yi Zhang. 2022. Multi-task pre-training for plug-and-play task-oriented dialogue system. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4661-4676.
|
| 172 |
+
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.
|
| 173 |
+
Qian Wang and Jiajun Zhang. 2021. Parameter differentiation based multilingual neural machine translation. arXiv preprint arXiv:2112.13619.
|
| 174 |
+
Alex Warstadt, Amanpreet Singh, and Samuel Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.
|
| 175 |
+
|
| 176 |
+
Tsung-Hsien Wen, David Vandyke, Nikola Mrkšić, Milica Gasic, Lina M Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 438-449.
|
| 177 |
+
|
| 178 |
+
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
|
| 179 |
+
|
| 180 |
+
# A Ablation Study on DU Tasks
|
| 181 |
+
|
| 182 |
+

|
| 183 |
+
|
| 184 |
+

|
| 185 |
+
(a) Basic adapter differentiation.
|
| 186 |
+
(b) Robust adapter differentiation.
|
| 187 |
+
Figure 4: The performance curves on five dialogue understanding tasks with (a) basic adapter differentiation and (b) robust adapter differentiation methods.
|
adaptersharetaskcorrelationmodelingwithadapterdifferentiation/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:1e90275414bde31c50b3a2ce1e64903dd6c418bf444f45d7ac999de67a4d7cb8
|
| 3 |
+
size 319746
|
adaptersharetaskcorrelationmodelingwithadapterdifferentiation/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:8a1962f6771113064b10b4f353d2ff172a7d2fc259c3a6557df0110dc2033c7c
|
| 3 |
+
size 241544
|
adaptingalanguagemodelwhilepreservingitsgeneralknowledge/3daf2795-b1f5-40a0-84ed-04f630dbdc4f_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b0bacc12071113ef6012bfeb660c887649330e3b8af0436f0cdf4da593a38a34
|
| 3 |
+
size 97744
|
adaptingalanguagemodelwhilepreservingitsgeneralknowledge/3daf2795-b1f5-40a0-84ed-04f630dbdc4f_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:0c2f753741902f1c91939ee4cef1269252a11526ecfca2ad5a98dc23ff429962
|
| 3 |
+
size 117837
|
adaptingalanguagemodelwhilepreservingitsgeneralknowledge/3daf2795-b1f5-40a0-84ed-04f630dbdc4f_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:13f1f1356ef11824b696daaeb37baf750a93ab275b366255cb8960c37c26a388
|
| 3 |
+
size 517148
|
adaptingalanguagemodelwhilepreservingitsgeneralknowledge/full.md
ADDED
|
@@ -0,0 +1,385 @@
| 1 |
+
# Adapting a Language Model While Preserving its General Knowledge
|
| 2 |
+
|
| 3 |
+
Zixuan Ke $^{1}$ , Yijia Shao $^{2}$ , Haowei Lin $^{2}$ , Hu Xu $^{3}$ , Lei Shu $^{1*}$ and Bing Liu $^{1}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ Department of Computer Science, University of Illinois at Chicago
|
| 6 |
+
|
| 7 |
+
$^{2}$ Wangxuan Institute of Computer Technology, Peking University
|
| 8 |
+
|
| 9 |
+
$^{3}$ Meta AI
|
| 10 |
+
|
| 11 |
+
$^{1}$ {zke4, liub}@uic.edu
|
| 12 |
+
|
| 13 |
+
$^{2}$ {shaoyj, linhaowei}@pku.edu.cn
|
| 14 |
+
|
| 15 |
+
$^{3}$ huxu@fb.com
|
| 16 |
+
|
| 17 |
+
# Abstract
|
| 18 |
+
|
| 19 |
+
Domain-adaptive pre-training (or DA-training for short), also known as post-training, aims to train a pre-trained general-purpose language model (LM) using an unlabeled corpus of a particular domain to adapt the LM so that end-tasks in the domain can achieve improved performance. However, existing DA-training methods are in some sense blind as they do not explicitly identify what knowledge in the LM should be preserved and what should be changed by the domain corpus. This paper shows that the existing methods are suboptimal and proposes a novel method to perform a more informed adaptation of the knowledge in the LM by (1) soft-masking the attention heads based on their importance to best preserve the general knowledge in the LM and (2) contrasting the representations of the general knowledge and the full knowledge (both general and domain-specific) to learn an integrated representation with both general and domain-specific knowledge. Experimental results demonstrate the effectiveness of the proposed approach.
|
| 20 |
+
|
| 21 |
+
# 1 Introduction
|
| 22 |
+
|
| 23 |
+
Pre-trained general-purpose language models (LMs) like BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and GPT-3 (Brown et al., 2020) have become a standard component in almost all NLP applications. Researchers have also found that domain-adaptive pre-training (or DA-training for short) using an unlabeled corpus in a specific domain to adapt an LM can further improve the end-task performance in the domain (Gururangan et al., 2020; Xu et al., 2019a,b; Sun et al., 2019; Alsentzer et al., 2019). Note that domain-adaptive pre-training is also called post-training (Xu et al., 2019a).
|
| 24 |
+
|
| 25 |
+
Existing DA-training methods simply apply the same pre-training objective, i.e., the mask language
|
| 26 |
+
|
| 27 |
+
model (MLM) loss, to further train an LM using a domain corpus. These methods are sub-optimal because they do not explicitly identify what should be preserved and what should be updated in the LM by the domain corpus.
|
| 28 |
+
|
| 29 |
+
This paper argues that a good DA-training method has two needs. On the one hand, the general language knowledge learned in the LM should be preserved as much as possible because the target domain data is typically not large enough to be sufficient to learn the general knowledge well. For example, some words and their contexts may appear infrequently in a particular domain. The knowledge about them cannot be learned accurately based on the domain data alone. When these words and contexts appear in an end-task, the system will have difficulties. Thus, we need to rely on the knowledge about them in the LM. Since existing DA-training updates the LM with little guidance, such useful general knowledge may be corrupted. On the other hand, due to polysemy (same word with different meanings in different domains) and the fact that different domains also have their special word usages and contexts, the LM should be specialized or adapted to the target domain. A good DA-training should balance these two needs to adapt the LM to the target domain with minimal corruption to the good general knowledge in the LM.
|
| 30 |
+
|
| 31 |
+
This paper proposes a novel technique to enable a more informed adaptation to (1) preserve the general knowledge in the LM as much as possible, and (2) update the LM to incorporate the domain-specific knowledge of the target domain as needed. The focus of the existing DA-training research has been on (2). As we argued above, (1) is also important as focusing only on (2) may destroy some useful general knowledge and produce sub-optimal results for end-tasks. To achieve (1), the system should constrain the gradient update of each attention head based on its importance to the general
|
| 32 |
+
|
| 33 |
+
knowledge so that the general knowledge in LM can be preserved as much as possible. With (1), (2) will be able to change the part of the general knowledge that needs to be updated to adapt the LM to suit the target domain.3
|
| 34 |
+
|
| 35 |
+
In this paper, we propose a novel model called DGA (DA-training - General knowledge preservation and LM Adaptation) for the purpose. The key idea of the proposed method is to preserve the general language knowledge in the LM while adapting the LM to a specific domain. However, it is not obvious how this can be done, i.e., how to find those parameters that are important for the general knowledge and how to protect them. This paper proposes a novel proxy-based method to achieve the objectives. It works as follows. DGA first estimates the importance of each attention head in the LM via the newly proposed proxy KL-divergence loss (Sec. 3.1). This importance score reflects how important each attention head is to the general knowledge. Based on the importance scores, it performs two key functions: The first function uses the scores to soft-mask (rather than binary-mask or completely block) the gradient update to prevent important general knowledge in LM from being unnecessarily corrupted. This is related to pruning of unimportant attention heads (Michel et al., 2019). However, pruning is not directly applicable to DA-training as we will show in Sec. 2. The proposed soft-masking constrains only the backward gradient flow in training. It is not necessary to soft-mask the forward pass in either training or inference. This is important because using the knowledge in the full network encourages maximal integration of pre-trained general knowledge and the target domain-specific knowledge. The second function contrasts the representation for the general knowledge in the LM and the full (including both the general and the domain-specific) knowledge to learn an integrated representation (Sec. 3.2).<sup>4</sup>
|
| 36 |
+
|
| 37 |
+
In summary, this paper makes two key contributions.
|
| 38 |
+
|
| 39 |
+
(1). It proposes the idea of informed adaptation to integrate the specialized knowledge in the target
|
| 40 |
+
|
| 41 |
+
domain into the LM with minimal corruption to the useful general knowledge in the original LM.
|
| 42 |
+
|
| 43 |
+
(2). It proposes a new model DGA with two novel functions to enable better DA-training. DGA estimates the attention head importance to protect the important general knowledge in the LM and integrates the specialized knowledge in the target domain into the LM through contrasting the general and the full knowledge.
|
| 44 |
+
|
| 45 |
+
To the best of our knowledge, none of these has been reported in the literature before.
|
| 46 |
+
|
| 47 |
+
Extensive experiments have been conducted in 6 different domains and on 10 baselines to demonstrate the effectiveness of the proposed DGA.
|
| 48 |
+
|
| 49 |
+
# 2 Related Work
|
| 50 |
+
|
| 51 |
+
Domain-adaptive pre-training (DA-training). Researchers have applied DA-training to many domains, e.g., reviews (Xu et al., 2019a,b), biomedical text (Lee et al., 2020), news and papers (Gururangan et al., 2020), and social media (Chakrabarty et al., 2019). However, they all use the same mask language model (MLM) loss. We argue that it is sub-optimal and it is also important to preserve the general knowledge in the LM as much as possible and integrate it with the target domain knowledge.
|
| 52 |
+
|
| 53 |
+
Network pruning as importance computation. It is known that many parameters in a neural network are redundant and can be pruned (Li et al., 2021; Lai et al., 2021). This has also been shown for pre-trained Transformer (Chen et al., 2020a; Lin et al., 2020; Gao et al., 2021b; Michel et al., 2019; Voita et al., 2019). A popular pruning method is to discard the parameters with small absolute values (Han et al., 2015; Guo et al., 2016). Other methods prune the network at a higher level. In a Transformer-based model, these include pruning the attention head (Michel et al., 2019; Voita et al., 2019; McCarley et al., 2019) and pruning sub-layers in a standard Transformer layer (Fan et al., 2020; Sajjad et al., 2020). However, the above methods are not directly applicable to us as we need to compute the head importance for the LM using unlabeled domain data, while the above approaches are all for supervised end-tasks. We propose to use a proxy KL-divergence loss for our purpose. Note that it is possible to prune other sub-layers in the Transformer. However, as shown in Sec. 4.3, estimating the importance for other layers does not improve the performance.
|
| 54 |
+
|
| 55 |
+
Contrastive learning. Contrastive learning (Chen
|
| 56 |
+
|
| 57 |
+
et al., 2020b; He et al., 2020) can learn good representations by maximizing the similarity of positive pairs and minimizes that of negative pairs:
|
| 58 |
+
|
| 59 |
+
$$
|
| 60 |
+
\mathcal{L}_{\text{contrast}} = -\log \frac{e^{\operatorname{sim}\left(q_i, q_i^{+}\right) / \tau}}{\sum_{j=1}^{N} e^{\operatorname{sim}\left(q_i, q_j^{+}\right) / \tau}}, \tag{1}
|
| 61 |
+
$$
|
| 62 |
+
|
| 63 |
+
where $N$ is the batch size, $\tau$ is a temperature parameter, $\mathrm{sim}(\cdot)$ is a similarity metric, and $q_{i}$ and $q_{i}^{+}$ are representations for positive pairs $x_{i}$ and $x_{i}^{+}$ (typically, $x_{i}^{+}$ is an augmented sample of $x_{i}$ , e.g., generated via cropping, deletion or synonym replacement (Gao et al., 2021a)). In the unsupervised contrastive loss, the negative samples are the other samples in the batch, indicated in the denominator.
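For reference, a minimal sketch of Eq. 1 with in-batch negatives follows; the tensor shapes, normalization choice and temperature value are assumptions for illustration, not the paper's settings.

```python
# q and q_pos are [N, d] representations of x_i and its augmented positive x_i^+;
# each row's positive is its own index, all other rows act as in-batch negatives.
import torch
import torch.nn.functional as F

def contrastive_loss(q: torch.Tensor, q_pos: torch.Tensor, tau: float = 0.05):
    q = F.normalize(q, dim=-1)                    # cosine similarity as sim(.)
    q_pos = F.normalize(q_pos, dim=-1)
    logits = q @ q_pos.t() / tau                  # sim(q_i, q_j^+) / tau
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)       # -log softmax over the batch
```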
|
| 64 |
+
|
| 65 |
+
We mainly use contrastive loss to contrast the representations of the important general knowledge in the original LM and the full knowledge (both the general and domain-specific knowledge) to achieve a good integration of the general knowledge and the domain specific knowledge.
|
| 66 |
+
|
| 67 |
+
# 3 Proposed DGA System
|
| 68 |
+
|
| 69 |
+
As discussed earlier, DGA goes beyond the MLM loss to perform two more functions: (1) preserving the important general knowledge in the LM by soft-masking the attention heads based on their importance. This helps avoid potential corruption of the general knowledge in the LM in DA-training (Sec. 3.1). However, the challenge is how to identify the general knowledge in the LM and how to protect it. We will propose a method to do that. (2) encouraging the model to learn integrated representations of the target domain and the general knowledge in the LM (Sec. 3.2). It is also not obvious how this can be done. We propose a contrastive learning based method to do it. Figure 1 gives an overview of DGA.
|
| 70 |
+
|
| 71 |
+
# 3.1 Preserving General Knowledge by Soft-Masking Attention Heads
|
| 72 |
+
|
| 73 |
+
Multi-head attention. Multi-head attention is arguably the most important component in the Transformer model (Vaswani et al., 2017). We omit details of other parts and refer the reader to the original paper. Formally, let $\boldsymbol{x} = x^{(1)},\dots,x^{(T)}$ be a sequence of $T$ real vectors where $x^{(t)}\in \mathbb{R}^d$ and let $q\in \mathbb{R}^d$ be a query vector. The attention mechanism is defined as
|
| 74 |
+
|
| 75 |
+
$$
\operatorname{att}(\boldsymbol{x}, q) = W_o \sum_{t=1}^{T} \alpha^{(t)}(q)\, W_v x^{(t)}, \tag{2}
$$
|
| 78 |
+
|
| 79 |
+

|
| 80 |
+
Figure 1: Illustration of DGA. (A) shows the importance computation. This is done by adding a gate vector $\pmb{g}_l$ that multiplies the multi-head attention (Eq. 5) and averaging its training gradients (Eq. 6). (B) shows DGA training. In the backward pass, attention heads are soft-masked based on their importance $\pmb{I}$ (Eqs. 9 and 10) to preserve the general knowledge in the LM as much as possible. In the forward pass, the added gate vector is removed except for feature learning in the contrastive loss. The contrastive loss is computed by contrasting the general knowledge with the importance applied ($\pmb{o}^{\mathrm{gen}}$ in Eq. 12) and the full knowledge without the importance applied ($\pmb{o}^{\mathrm{full}}$ in Eq. 14). The final objective of DGA consists of the MLM loss and the contrastive loss. Note that we omit the details of other parts of the Transformer and only focus on the multi-head attention mechanism.
|
| 81 |
+
|
| 82 |
+

|
| 83 |
+
|
| 84 |
+
where
|
| 85 |
+
|
| 86 |
+
$$
\alpha^{(t)}(q) = \operatorname{softmax}\left(\frac{q^{T} W_q^{T} W_k x^{(t)}}{\sqrt{d}}\right). \tag{3}
$$
|
| 89 |
+
|
| 90 |
+
The projection matrices $W_{o}, W_{v}, W_{q}, W_{k} \in \mathbb{R}^{d \times d}$ are learnable parameters. In self-attention, the query vector comes from the same sequence as $\pmb{x}$. A Transformer contains $L$ identical layers. In layer $l$, $H_{l}$ different attention heads are applied in parallel; multi-head attention (mhatt) is simply the sum of their outputs.[5]
|
| 91 |
+
|
| 92 |
+
$$
\operatorname{mhatt}_l(\boldsymbol{x}, q) = \sum_{h=1}^{H_l} \operatorname{att}_{lh}(\boldsymbol{x}, q), \tag{4}
$$
|
| 95 |
+
|
| 96 |
+
where $h$ indexes the $h^{th}$ attention head. Note that the input $\boldsymbol{x}$ is different in each layer since the input of a layer is the output of the previous layer; to ease notation, we write $\boldsymbol{x}$ for the input of every layer.
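For clarity, a compact sketch of Eqs. 2-4 for a single layer is given below (a single query vector and explicit per-head projection matrices; the shapes are assumptions for illustration, not the authors' implementation):

```python
import torch

def att(x, q, W_q, W_k, W_v, W_o):
    """One attention head (Eqs. 2-3): x is (T, d), q is (d,);
    the four projection matrices are (d, d)."""
    d = x.size(1)
    scores = (W_q @ q) @ (W_k @ x.t()) / d ** 0.5   # q^T W_q^T W_k x^(t) / sqrt(d), shape (T,)
    alpha = torch.softmax(scores, dim=0)            # attention weights alpha^(t)(q)
    return W_o @ (W_v @ x.t() @ alpha)              # W_o * sum_t alpha^(t) W_v x^(t), shape (d,)

def mhatt(x, q, heads):
    """Eq. 4: multi-head attention of one layer is the sum of its head outputs;
    `heads` is a list of (W_q, W_k, W_v, W_o) tuples, one per head."""
    return sum(att(x, q, *Ws) for Ws in heads)
```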
|
| 97 |
+
|
| 98 |
+
Head importance. Researchers have found that not all attention heads are important (Michel et al., 2019). To detect the importance of the attention heads, we introduce a gate vector $\pmb{g}_l$, each cell of which is a gate variable $g_{lh}$, into the attention-head summation. The resulting importance scores are used to soft-mask the heads in DA-training.
|
| 99 |
+
|
| 100 |
+
$$
\operatorname{gmhatt}_l(\boldsymbol{x}, q) = \sum_{h=1}^{H_l} g_{lh} \otimes \operatorname{att}_{lh}(\boldsymbol{x}, q), \tag{5}
$$
|
| 103 |
+
|
| 104 |
+
where $\otimes$ is the element-wise multiplication. A gradient-based head importance detection method is proposed in (Michel et al., 2019). Given a dataset $D = \{(\pmb{y}_m, \pmb{x}_m)\}_{m=1}^M$ of $M$ samples ( $\pmb{y}_m$ is the label of $\pmb{x}_m$ as Michel et al. (2019) worked on supervised learning), the importance of a head is estimated with a gradient-based proxy score
|
| 105 |
+
|
| 106 |
+
$$
I_{lh} = \frac{1}{M} \sum_{m=1}^{M} \left|\nabla_{g_{lh}}\right|, \tag{6}
$$
|
| 109 |
+
|
| 110 |
+
where $\nabla_{g_{lh}}$ is the gradient of the gate variable $g_{lh}$:
|
| 111 |
+
|
| 112 |
+
$$
\nabla_{g_{lh}} = \frac{\partial \mathcal{L}_{\text{impt}}\left(\boldsymbol{y}_m, \boldsymbol{x}_m\right)}{\partial g_{lh}}, \tag{7}
$$
|
| 115 |
+
|
| 116 |
+
where $\mathcal{L}_{\mathrm{impt}}$ is a task-specific/domain-specific loss function. The gradient can be used as the importance score because a high value of $I_{lh}$ means that changing $g_{lh}$ is liable to have a large effect on the model.
|
| 117 |
+
|
| 118 |
+
Although Eq. 6 offers a way to compute the importance of attention heads w.r.t. a given loss $\mathcal{L}_{\mathrm{impt}}$, we are unable to apply it directly: if we use the domain data at hand and the MLM loss as $\mathcal{L}_{\mathrm{impt}}$, $\nabla_{g_{lh}}$ only indicates the importance for domain-specific knowledge. However, our goal is to estimate the importance of the attention heads for the general knowledge in the LM, which requires the data used in training the LM to compute $\mathcal{L}_{\mathrm{impt}}$. In practice, such data is not accessible to users of the LM. Further, labels are needed in Eq. 6, but our domain corpus is unlabeled in DA-training. To address these issues, we propose to compute a proxy KL-divergence loss for $\mathcal{L}_{\mathrm{impt}}$.
|
| 119 |
+
|
| 120 |
+
Proxy KL-divergence loss. We need a proxy for $\mathcal{L}_{\mathrm{impt}}$ such that its gradient $(\nabla_{g_{lh}})$ can be used to compute the head importance without the LM's original pre-training data. We propose to use model robustness as the proxy, i.e., we try to detect the heads that are important for the LM's robustness. The gradient $\nabla_{g_{lh}}$ then indicates the robustness and thus the importance of the head to the LM. Our rationale is as follows: if $I_{lh}$ (the average of $|\nabla_{g_{lh}}|$, see Eq. 6) is high, the corresponding head is important to the LM's robustness because changing it causes the LM's output to change a great deal; it is thus an important head for the LM. In contrast, if $I_{lh}$ is small, the head is less important or unimportant to the LM.
|
| 123 |
+
|
| 124 |
+
To compute the robustness of the LM, we take a subset (its size is a hyper-parameter) of the target domain data $\{\pmb{x}_{m}^{\mathrm{sub}}\}$ (no labels are needed in DA-training), feed each $\pmb{x}_{m}^{\mathrm{sub}}$ to the LM twice, and compute the KL-divergence between the two resulting representations,
|
| 125 |
+
|
| 126 |
+
$$
\mathcal{L}_{\text{impt}} = \mathrm{KL}\left(f_1\left(\boldsymbol{x}_m^{\text{sub}}\right), f_2\left(\boldsymbol{x}_m^{\text{sub}}\right)\right), \tag{8}
$$
|
| 129 |
+
|
| 130 |
+
where $f_{1}$ and $f_{2}$ are the LM with different dropout masks. Note that we do not need to add any additional dropout to implement $f$ because the Transformer already applies independently sampled dropout masks in each forward pass. During training, dropout masks are placed on the fully-connected layers and the attention probabilities, so simply feeding the same input to the Transformer twice yields two representations under different dropout masks. Since dropout amounts to adding noise, the difference between the two representations can be regarded as a measure of the robustness of the Transformer model. Figure 1 (A) shows how we compute the importance of each attention head using the gradient of the gate vector $g_{l}$.
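A minimal sketch of this importance estimation (Eqs. 6-8) is given below. It assumes the model exposes per-layer head gate vectors `model.head_gates` (the $\pmb{g}_l$ of Eq. 5, initialized to 1 and requiring gradients) and returns token-level output distributions; these names are illustrative, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def estimate_head_importance(model, data_loader, device="cuda"):
    """Estimate I_lh (Eqs. 6-8) on a subset of the unlabeled domain data.
    Assumes model.head_gates is a list of per-layer gate vectors g_l (one entry
    per head, all ones, multiplied into the heads as in Eq. 5)."""
    importance = [torch.zeros_like(g) for g in model.head_gates]
    model.train()                                   # keep dropout active
    n_batches = 0
    for batch in data_loader:
        batch = batch.to(device)
        out1 = model(batch)                         # first forward pass
        out2 = model(batch)                         # second pass, different dropout mask
        # Proxy loss (Eq. 8): KL divergence between the two dropout-noised outputs.
        loss = F.kl_div(F.log_softmax(out1, dim=-1),
                        F.softmax(out2, dim=-1), reduction="batchmean")
        model.zero_grad()
        loss.backward()
        for acc, g in zip(importance, model.head_gates):
            acc += g.grad.abs()                     # |dL_impt / dg_lh|, Eqs. 6-7
        n_batches += 1
    return [acc / n_batches for acc in importance]
```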
|
| 131 |
+
|
| 132 |
+
Soft-masking attention heads in DA-training. Recall that we want to preserve the general knowledge in the LM during DA-training using the head importance $I_{lh}$. Given an attention head $\mathrm{att}_{lh}(\boldsymbol{x}, q)$ and the DA-training loss $\mathcal{L}_{\mathrm{DA\text{-}train}}$ (typically the MLM loss; we also propose an additional loss in Sec. 3.2), we "soft-mask" its corresponding gradient $(\nabla_{\mathrm{att}_{lh}})$<sup>6</sup> using the head importance value $I_{lh}$,
|
| 133 |
+
|
| 134 |
+
$$
\nabla_{\mathrm{att}_{lh}}^{\prime} = \left(1 - I_{lh}^{\text{norm}}\right) \otimes \nabla_{\mathrm{att}_{lh}}, \tag{9}
$$
|
| 137 |
+
|
| 138 |
+
where $I_{lh}^{\mathrm{norm}}$ is obtained from $I_{lh}$ via normalization:
|
| 139 |
+
|
| 140 |
+
$$
I_{lh}^{\text{norm}} = \left|\operatorname{Tanh}\left(\operatorname{Normalize}\left(I_{lh}\right)\right)\right|. \tag{10}
$$
|
| 143 |
+
|
| 144 |
+
Normalize standardizes $I_{lh}$ to have a mean of 0 and a standard deviation of 1, and the absolute value of Tanh ensures that $I_{lh}^{\mathrm{norm}}$ lies in the interval [0, 1]. Eq. 9 constrains the gradient of the corresponding head $\mathrm{att}_{lh}(\boldsymbol{x},q)$ by element-wise multiplying it with one minus the head importance. It is "soft-masking" because $I_{lh}^{\mathrm{norm}}$ is a real number in [0, 1] (instead of a binary value in {0, 1}), which gives the model the flexibility to adjust an attention head. This is useful because some heads, although important to the LM, may conflict with the knowledge in the target domain and thus need adjusting. Also note that the soft masks affect only the backward pass and are not used in the forward pass (so that the forward pass can use the full network and encourage maximal integration of the pre-trained general and domain-specific knowledge), except for feature learning with contrastive learning (see below). Figure 1 (B) shows that attention heads are soft-masked during training.
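A sketch of the normalization (Eq. 10) and of one way to realize the gradient soft-masking (Eq. 9) follows; using a backward hook on each head's output is an implementation assumption, not necessarily how the authors realized it.

```python
import torch

def normalize_importance(I):
    """Eq. 10: standardize the raw scores, then map them into [0, 1]."""
    I = (I - I.mean()) / (I.std() + 1e-8)
    return torch.tanh(I).abs()

def soft_mask_head_gradient(att_out, importance_norm):
    """Eq. 9: scale the gradient flowing back into one head's output att_lh
    by (1 - I_lh^norm); the forward value of att_out is left unchanged."""
    scale = float(1.0 - importance_norm)
    att_out.register_hook(lambda grad: grad * scale)  # applied only in the backward pass
    return att_out
```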
|
| 147 |
+
|
| 148 |
+
# 3.2 Contrasting General and Full Knowledge
|
| 149 |
+
|
| 150 |
+
We now present how to integrate the general knowledge in the LM and the domain-specific knowledge in the target domain by contrasting the general knowledge and the full knowledge (both general and domain-specific). We first introduce how we obtain such knowledge from the LM for the input $\mathbf{x}$ , and then discuss how we contrast them.
|
| 151 |
+
|
| 152 |
+
We obtain the general knowledge for an input sequence $\pmb{x}$ from the LM by extracting the representation produced when the attention heads are combined with their importance scores ($I_{lh}^{\mathrm{norm}}$ in Eq. 10) in the forward pass. The intuition is that, since the importance scores indicate how important each attention head is to the general knowledge, the resulting representation reflects the main general knowledge used by $\pmb{x}$. Formally, we plug $I_{lh}^{\mathrm{norm}}$ (the soft-masks) in as the gate variables $g_{lh}$ in Eq. 5,
|
| 153 |
+
|
| 154 |
+
$$
\operatorname{gmhatt}_l^{\text{gen}}(\boldsymbol{x}, q) = \sum_{h=1}^{H_l} I_{lh}^{\text{norm}} \otimes \operatorname{att}_{lh}(\boldsymbol{x}, q). \tag{11}
$$
|
| 157 |
+
|
| 158 |
+
Given the attention heads for the general knowledge, we can plug them into the whole Transformer to obtain the final general-knowledge representation (taking the average of each token's output over the input sequence):
|
| 159 |
+
|
| 160 |
+
$$
\boldsymbol{o}^{\text{gen}} = \operatorname{Transformer}\left(\operatorname{gmhatt}^{\text{gen}}(\boldsymbol{x}, q)\right). \tag{12}
$$
|
| 163 |
+
|
| 164 |
+
(See $o^{\mathrm{gen}}$ also in Figure 1 (B)).
|
| 165 |
+
|
| 166 |
+
Obtaining the full (both general and domain-specific) knowledge for $\boldsymbol{x}$ is similar. The only difference is that we extract the representation of $\boldsymbol{x}$ without applying the importance scores (soft-masks) to the attention heads in the forward pass,
|
| 169 |
+
|
| 170 |
+
$$
\operatorname{gmhatt}_l^{\text{full}}(\boldsymbol{x}, q) = \sum_{h=1}^{H_l} \operatorname{att}_{lh}(\boldsymbol{x}, q). \tag{13}
$$
|
| 173 |
+
|
| 174 |
+
Similarly, we can plug it into the Transformer,
|
| 175 |
+
|
| 176 |
+
$$
\boldsymbol{o}^{\text{full}} = \operatorname{Transformer}\left(\operatorname{gmhatt}^{\text{full}}(\boldsymbol{x}, q)\right). \tag{14}
$$
|
| 179 |
+
|
| 180 |
+
(See $o^{\mathrm{full}}$ also in Figure 1 (B)). Note that it is possible to use $(1 - I_{lh}^{\mathrm{norm}})$ as the importance of domain-specific knowledge and contrast it with the general knowledge. However, this produces poorer results (see Table 3) as explained in footnote 4.
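Both representations can be obtained by simply changing what is plugged in as the gate (a sketch; the `head_gates` keyword argument and the mean pooling over tokens are assumptions about the implementation, not the authors' code):

```python
import torch

def general_and_full_representations(model, x, importance_per_layer):
    """o_gen (Eqs. 11-12): forward pass with the head gates set to I_lh^norm.
    o_full (Eqs. 13-14): forward pass with all gates set to 1 (no soft-mask).
    Token outputs are averaged over the sequence to get one vector per input."""
    h_gen = model(x, head_gates=importance_per_layer)                         # (batch, T, d)
    h_full = model(x, head_gates=[torch.ones_like(I) for I in importance_per_layer])
    return h_gen.mean(dim=1), h_full.mean(dim=1)                              # o_gen, o_full
```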
|
| 181 |
+
|
| 182 |
+
Contrasting general and full knowledge. It is known that contrastive learning, with the help of positive and negative instances, produces good isotropic representations that benefit downstream tasks. We contrast the general $(\boldsymbol{o}^{\mathrm{gen}})$ and full $(\boldsymbol{o}^{\mathrm{full}})$ representations of the same input $\boldsymbol{x}$ to make them different, which encourages $\boldsymbol{o}^{\mathrm{full}}$ to learn domain-specific knowledge that is not already in the general knowledge and yet remains related to and integrated with the general knowledge $(\boldsymbol{o}^{\mathrm{gen}})$ of the input.
|
| 183 |
+
|
| 184 |
+
We construct contrastive instances as follows: for an input $\pmb{x}_m$, three contrastive instances are produced. The anchor $\pmb{o}_m$ and the positive instance $\pmb{o}_m^+$ are both full-knowledge representations from Eq. 14, obtained with two independently sampled dropout masks in the Transformer (recall that this can be achieved by feeding $\pmb{x}_m$ to the model twice; see Sec. 3.1). We regard $\pmb{o}_m$ and $\pmb{o}_m^+$ as a positive pair because dropout noise has been shown to provide good positive instances for improving alignment when training sentence embeddings (Gao et al., 2021a). The negative instance $\pmb{o}_m^-$ is the general knowledge for $\pmb{x}_m$ from the LM, obtained via Eq. 12. With $\pmb{o}_m$, $\pmb{o}_m^+$, and $\pmb{o}_m^-$, our contrastive loss is ($\mathrm{sim}(\cdot)$ is the cosine similarity),
|
| 185 |
+
|
| 186 |
+
$$
\mathcal{L}_{\text{contrast}} = -\log \frac{e^{\operatorname{sim}\left(\boldsymbol{o}_m, \boldsymbol{o}_m^{+}\right)/\tau}}{\sum_{j=1}^{N}\left(e^{\operatorname{sim}\left(\boldsymbol{o}_m, \boldsymbol{o}_j^{+}\right)/\tau} + e^{\operatorname{sim}\left(\boldsymbol{o}_m, \boldsymbol{o}_j^{-}\right)/\tau}\right)}. \tag{15}
$$
|
| 189 |
+
|
| 190 |
+
Compared to Eq. 1, a second term is added in the denominator, i.e., the general knowledge representations serve as additional negative samples/instances. Figure 1 (B) shows a red arrow pointing from $o^{\mathrm{full}}$ to itself, indicating that the positive instances come from feeding the input twice. The dashed red arrow pointing to $o^{\mathrm{gen}}$ indicates the negative instances, contrasting the specialized and general knowledge.
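A sketch of Eq. 15 under this construction follows; as before, the function name and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dga_contrastive_loss(o, o_pos, o_neg, tau=0.05):
    """Eq. 15: o and o_pos are full-knowledge representations of the same batch
    obtained under two dropout masks; o_neg are the general-knowledge
    representations (Eq. 12), used as additional negatives. All are (N, d)."""
    o, o_pos, o_neg = (F.normalize(t, dim=-1) for t in (o, o_pos, o_neg))
    sim_pos = o @ o_pos.t() / tau            # (N, N): diagonal entries are the positives
    sim_neg = o @ o_neg.t() / tau            # (N, N): every entry is a negative
    logits = torch.cat([sim_pos, sim_neg], dim=1)     # (N, 2N) denominator terms
    labels = torch.arange(o.size(0), device=o.device)
    return F.cross_entropy(logits, labels)
```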
|
| 191 |
+
|
| 192 |
+
<table><tr><td colspan="3">Unlabeled Domain Datasets</td><td colspan="5">End-Task Classification Datasets</td></tr><tr><td>Source</td><td>Dataset</td><td>Size</td><td>Dataset</td><td>Task</td><td>#Training</td><td>#Testing</td><td>#Classes</td></tr><tr><td rowspan="3">Reviews</td><td>Yelp Restaurant</td><td>758MB</td><td>Restaurant</td><td>Aspect Sentiment Classification (ASC)</td><td>3,452</td><td>1,120</td><td>3</td></tr><tr><td>Amazon Phone</td><td>724MB</td><td>Phone</td><td>Aspect Sentiment Classification (ASC)</td><td>239</td><td>553</td><td>2</td></tr><tr><td>Amazon Camera</td><td>319MB</td><td>Camera</td><td>Aspect Sentiment Classification (ASC)</td><td>230</td><td>626</td><td>2</td></tr><tr><td rowspan="3">Academic Papers</td><td>ACL Papers</td><td>867MB</td><td>ACL</td><td>Citation Intent Classification</td><td>1,520</td><td>421</td><td>6</td></tr><tr><td>AI Papers</td><td>507MB</td><td>AI</td><td>Relation Classification</td><td>2,260</td><td>2,388</td><td>7</td></tr><tr><td>PubMed Papers</td><td>989MB</td><td>PubMed</td><td>Chemical-protein Interaction Prediction</td><td>2,667</td><td>7,398</td><td>13</td></tr></table>
|
| 193 |
+
|
| 194 |
+
Table 1: Statistics for the domain post-training datasets and the end-task supervised classification datasets (more details of each task are given in Appendix A).
|
| 195 |
+
|
| 196 |
+
# 3.3 DGA Objectives
|
| 197 |
+
|
| 198 |
+
DGA is a pipelined model: first, a subset of the domain data is used to estimate the attention head importance ($I_{lh}$ in Sec. 3.1). Second, given the attention head importance, we compute the final domain-adaptive loss by combining the conventional Masked Language Model (MLM) loss (including the proposed soft-masking for general knowledge) and the proposed contrastive loss:
|
| 199 |
+
|
| 200 |
+
$$
\mathcal{L}_{\text{DA-train}} = \mathcal{L}_{\text{MLM}} + \lambda_{1} \mathcal{L}_{\text{contrast}}, \tag{16}
$$
|
| 203 |
+
|
| 204 |
+
where $\lambda_{1}$ is a hyper-parameter that adjusts the impact of the added term.
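Putting the pieces together, one DA-training step could look like the sketch below, reusing the hypothetical helpers from the earlier sketches; `model.mlm_loss` is likewise an assumed convenience wrapper, and $\lambda_1 = 1$, $\tau = 0.05$ follow Sec. 4.2.

```python
def dga_training_step(model, batch, importance_per_layer, lambda1=1.0, tau=0.05):
    """One DA-training step (Eq. 16): MLM loss plus the contrastive loss of Eq. 15.
    Gradient soft-masking (Eq. 9) is assumed to be installed via backward hooks on
    the attention heads, so it affects only the backward pass."""
    mlm_loss = model.mlm_loss(batch)        # standard masked LM loss (assumed helper)
    # Two full-knowledge passes under different dropout masks give anchor and
    # positive; the importance-gated pass gives the general-knowledge negative.
    o_gen, o_full = general_and_full_representations(model, batch, importance_per_layer)
    _, o_pos = general_and_full_representations(model, batch, importance_per_layer)
    contrast = dga_contrastive_loss(o_full, o_pos, o_gen, tau=tau)
    return mlm_loss + lambda1 * contrast
```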
|
| 205 |
+
|
| 206 |
+
# 4 Experiments
|
| 207 |
+
|
| 208 |
+
We follow the experiment setup in (Gururangan et al., 2020). RoBERTa (Liu et al., 2019)<sup>7</sup> is used as the LM. In each experiment, we first DA-train the LM and then fine-tune it on the end-task. The final evaluation is based on the end-task results.
|
| 209 |
+
|
| 210 |
+
# 4.1 Datasets and Baselines
|
| 211 |
+
|
| 212 |
+
Datasets: Table 1 shows the statistics of the unlabeled domain datasets for DA-training and their corresponding end-task classification datasets. We use 6 unlabeled domain datasets: 3 of them are about reviews: Yelp Restaurant (Xu et al., 2019a), Amazon Phone (Ni et al., 2019), and Amazon Camera (Ni et al., 2019); 3 of them are academic papers: ACL Papers (Lo et al., 2020), AI Papers (Lo et al., 2020), and PubMed Papers. Each unlabeled domain dataset has a corresponding end-task classification dataset<sup>10</sup>: Restaurant<sup>11</sup> (Xu et al., 2019a), Phone (Ding et al., 2008; Hu and Liu, 2004), Camera (Ding et al., 2008; Hu and Liu, 2004)<sup>12</sup>, ACL (ACL-ARC in Jurgens et al. (2018)), AI (SCIERC in Luan et al. (2018)), and PubMed (CHEMPROT in Kringelum et al. (2016)).
|
| 215 |
+
|
| 216 |
+
Baselines. We consider 10 baselines.
|
| 217 |
+
|
| 218 |
+
(1). Non-DA-training (RoBERTa) (Liu et al., 2019) uses the original RoBERTa for the end-task fine-tuning without any DA-training.
|
| 219 |
+
|
| 220 |
+
(2). DA-training using masked language model loss (MLM) is the existing DA-training method. To our knowledge, existing DA-training systems are all based on the MLM loss.
|
| 221 |
+
|
| 222 |
+
(3). DA-training using adapter-tuning (MLM (Adapter)) adds adapter layers between layers of Transformer for DA-training. An adapter (Houlsby et al., 2019) has two fully connected layers and a skip connection. During DA-training, the Transformer is fixed, only the adapters are trained. The bottleneck (adapter) size is set to 64 (Houlsby et al., 2019). During end-task fine-tuning, both RoBERTa and adapters are trainable for fair comparison.
|
| 223 |
+
|
| 224 |
+
(4). DA-training using prompt-tuning (MLM (Prompt)) (Lester et al., 2021) adds a sequence of prompt tokens to the end of the original sequence. In DA-training, RoBERTa (the LM) is fixed and only the prompt tokens are trained. In end-task fine-tuning, both LM and the trained prompt are trainable. We initialize 100 tokens and set the learning rate of the prompt token to 0.3 in DA-training, following the setting in Lester et al. (2021).
|
| 225 |
+
|
| 226 |
+
(5). Knowledge distillation (MLM+KD) (Hinton et al., 2015) minimizes the representational deviation between the general knowledge in the LM and the specialized knowledge in DA-training. We compute the KL divergence between the representations (the output before the masked language model prediction head) of each word of the two models (the LM and the DA-trained model) as the distillation loss.
|
| 229 |
+
|
| 230 |
+
(6). Adapted distillation through attention (MLM+AdaptedDeiT) is derived from DeiT (Touvron et al., 2021), a distillation method for the Vision Transformer (ViT) (Dosovitskiy et al., 2020). We adapt DeiT to a text-based and unsupervised model by distilling the LM representation<sup>13</sup> to the added distillation token and changing ViT to RoBERTa.
|
| 231 |
+
|
| 232 |
+
(7, 8). DA-training using sequence-level contrastive learning (MLM+SimCSE and MLM+InfoWord). SimCSE is a contrastive learning method for sentence embeddings (Gao et al., 2021a). We use its unsupervised version, where positive samples come from the same input with different dropout masks and negative samples are the other instances in the same batch. InfoWord (Kong et al., 2020) is another contrastive learning method that contrasts the span-level local representation and the sequence-level global representation.
|
| 233 |
+
|
| 234 |
+
(9, 10). DA-training using token-aware contrastive learning (MLM+TaCL and MLM+TaCO). TaCL (Su et al., 2021) and TaCO (Fu et al., 2022) are two recent methods to improve BERT pre-training with token-aware contrastive loss. We change the backbone to RoBERTa for fair comparison.
|
| 235 |
+
|
| 236 |
+
# 4.2 Implementation Detail
|
| 237 |
+
|
| 238 |
+
Architecture. We adopt RoBERTa<sub>BASE</sub> as our backbone LM (12 layers and 12 attention heads in each layer). A masked language model head is applied for DA-training. The end-task fine-tuning of RoBERTa follows the standard practice. For the three ASC tasks (see Table 1), we adopt the ASC formulation in (Xu et al., 2019a), where the aspect (e.g., "sound") and review sentence (e.g., "The sound is great") are concatenated via $\langle /s \rangle$ .
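As an illustration of this input format (assuming the Hugging Face tokenizer; passing the aspect and the sentence as a text pair is one way to insert the separator):

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
# The aspect and the review sentence are encoded as a sequence pair,
# which RoBERTa separates with </s> tokens.
enc = tokenizer("sound", "The sound is great")
print(tokenizer.decode(enc["input_ids"]))
# e.g. <s>sound</s></s>The sound is great</s>
```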
|
| 239 |
+
|
| 240 |
+
Hyperparameters. Unless otherwise stated, the same hyper-parameters are used in all experiments. The maximum input length is set to 164, which is long enough for all of our datasets and end-tasks and only needs moderate computational resources. The Adam optimizer is used for both DA-training and end-task fine-tuning.
|
| 243 |
+
|
| 244 |
+
Domain-adaptive pre-training (DA-training). The learning rate is set to 1e-4 and the batch size is 256. We train for 2.5K steps for each domain, roughly a full pass through the domain data, following (Gururangan et al., 2020; Xu et al., 2019a). The subset of data $\{\pmb{x}_m^{\mathrm{sub}}\}$ used to compute $\mathcal{L}_{\mathrm{impt}}$ for determining the head importance in Sec. 3.1 is set to 1.64 million tokens, which is sufficient in our experiments. $\lambda_{1}$ in Eq. 16 is set to 1 and $\tau$ in Eq. 15 is set to 0.05.
|
| 245 |
+
|
| 246 |
+
End-task fine-tuning. The learning rate is set to 1e-5 and the batch size to 16. We fine-tune on the end-task datasets for 5 epochs for Restaurant; 10 epochs for ACL, AI and PubMed; and 15 epochs for Phone and Camera. We simply take the results of the last epoch, as we empirically found that the above numbers of epochs give stable and convergent results.
|
| 247 |
+
|
| 248 |
+
# 4.3 Evaluation Results and Ablation Study
|
| 249 |
+
|
| 250 |
+
We report the end-task results of DGA and the 10 baselines on the 6 datasets in Table 2.
|
| 251 |
+
|
| 252 |
+
Superiority of DGA. Our DGA consistently outperforms all baselines. Thanks to the proposed more informed adaptation, DGA improves over the widely used traditional DA-training baseline MLM. We also see that MLM markedly outperforms RoBERTa (non-DA-training) on average (see the last column). We discuss more observations about the results below.
|
| 253 |
+
|
| 254 |
+
(1). Training the entire LM in DGA helps achieve much better results. Using adapters (MLM (Adapter)) and prompts (MLM (Prompt)) gives mixed results. This is because adapters and prompts do not have sufficient trainable parameters, which are also randomly initialized and can be difficult to train.

(2). DGA is also better than the distillation-based systems MLM+AdaptedDeiT and MLM+KD, which try to preserve the past knowledge. This is not surprising because the goal of DA-training is not simply to preserve the previous knowledge but also to adapt/change it as needed to suit the target domain. DGA is specifically designed for this with soft-masking and contrasting of knowledge.

(3). The contrastive learning in DGA is more effective than the other contrastive alternatives (MLM+SimCSE, MLM+TaCL, MLM+TaCO and MLM+InfoWord). This indicates that contrasting the general and full knowledge for knowledge integration is important.
|
| 257 |
+
|
| 258 |
+
<table><tr><td rowspan="2">Domain / Model</td><td colspan="2">Camera</td><td colspan="2">Phone</td><td colspan="2">Restaurant</td><td colspan="2">AI</td><td colspan="2">ACL</td><td rowspan="2">PubMed
Micro-F1</td><td rowspan="2">Avg</td></tr><tr><td>MF1</td><td>Acc.</td><td>MF1</td><td>Acc.</td><td>MF1</td><td>Acc.</td><td>MF1</td><td>Acc.</td><td>MF1</td><td>Acc.</td></tr><tr><td>RoBERTa</td><td>78.82</td><td>87.03</td><td>83.75</td><td>86.08</td><td>79.81</td><td>87.00</td><td>60.98</td><td>71.85</td><td>66.11</td><td>71.26</td><td>72.38</td><td>73.64</td></tr><tr><td>MLM</td><td>84.39</td><td>89.90</td><td>82.59</td><td>85.50</td><td>80.84</td><td>87.68</td><td>68.97</td><td>75.95</td><td>68.75</td><td>73.44</td><td>72.84</td><td>76.40</td></tr><tr><td>MLM (Adapter)</td><td>83.62</td><td>89.23</td><td>82.71</td><td>85.35</td><td>80.19</td><td>87.14</td><td>60.55</td><td>71.38</td><td>68.87</td><td>72.92</td><td>71.68</td><td>74.60</td></tr><tr><td>MLM (Prompt)</td><td>85.52</td><td>90.38</td><td>84.17</td><td>86.53</td><td>79.00</td><td>86.45</td><td>61.47</td><td>72.36</td><td>66.66</td><td>71.35</td><td>73.09</td><td>74.98</td></tr><tr><td>MLM+KD</td><td>82.79</td><td>89.30</td><td>80.08</td><td>83.33</td><td>80.40</td><td>87.25</td><td>67.76</td><td>75.46</td><td>68.19</td><td>72.73</td><td>72.35</td><td>75.26</td></tr><tr><td>MLM+AdaptedDeiT</td><td>86.86</td><td>91.37</td><td>83.08</td><td>85.64</td><td>79.70</td><td>86.84</td><td>69.72</td><td>76.83</td><td>69.11</td><td>73.35</td><td>72.69</td><td>76.86</td></tr><tr><td>MLM+SimCSE</td><td>84.91</td><td>90.35</td><td>83.46</td><td>86.08</td><td>80.88</td><td>87.59</td><td>69.10</td><td>76.25</td><td>69.89</td><td>74.30</td><td>72.77</td><td>76.84</td></tr><tr><td>MLM+TaCL</td><td>81.98</td><td>88.88</td><td>81.87</td><td>84.92</td><td>81.12</td><td>87.50</td><td>64.04</td><td>73.18</td><td>63.18</td><td>70.31</td><td>69.46</td><td>73.61</td></tr><tr><td>MLM+TaCO</td><td>84.50</td><td>90.22</td><td>82.63</td><td>85.32</td><td>79.27</td><td>86.68</td><td>59.73</td><td>71.22</td><td>63.66</td><td>70.36</td><td>72.38</td><td>73.69</td></tr><tr><td>MLM+InfoWord</td><td>87.95</td><td>91.92</td><td>84.58</td><td>86.84</td><td>81.24</td><td>87.82</td><td>68.29</td><td>75.92</td><td>68.58</td><td>73.68</td><td>73.21</td><td>77.31</td></tr><tr><td>DGA</td><td>88.52</td><td>92.49</td><td>85.47</td><td>87.45</td><td>81.83</td><td>88.20</td><td>71.99</td><td>78.06</td><td>71.01</td><td>74.73</td><td>73.65</td><td>78.74</td></tr></table>
|
| 261 |
+
|
| 262 |
+
Table 2: We report the macro-F1 (MF1) and accuracy results for all datasets, except for CHEMPROT in the PubMed domain, for which we use micro-F1 following Gururangan et al. (2020); Dery et al. (2021); Beltagy et al. (2019). The results are averages over 5 random seeds (the standard deviations are reported in Appendix B). The average column (Avg) is the average over the MF1 (or micro-F1 for PubMed) of all datasets.
|
| 263 |
+
|
| 264 |
+
|
| 265 |
+
|
| 266 |
+
Effectiveness of the proxy KL-divergence loss. We use the proposed proxy KL-divergence loss to compute the head importance to identify the general language knowledge in the LM without using the LM's original pre-training data (Sec. 3.1).
|
| 267 |
+
|
| 268 |
+
For evaluation, we are interested in how good the proxy is. Since we do not have the data that was used to pre-train RoBERTa, it is not obvious how to assess the quality of the proxy directly. Here, we provide some indirect evidence of the effectiveness of the proxy for computing the importance of units to the general knowledge in the LM.
|
| 269 |
+
|
| 270 |
+
We conduct a separate experiment to compare the attention heads' importance score vectors obtained by applying the proxy to the data of different domains. For each domain $i$, we compare its importance vector with the importance vector of every other domain and then average the cosine similarities to get the value for domain $i$. We get 0.92 for Restaurant, 0.91 for each of ACL, AI, and Phone, 0.89 for PubMed, and 0.92 for Camera. Different domains thus give similar importance values, which indirectly shows that our proxy can identify the common general knowledge.
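This comparison amounts to the computation sketched below (the importance scores are assumed to be flattened into one vector per domain; variable names are illustrative).

```python
import torch
import torch.nn.functional as F

def avg_cross_domain_similarity(importance_by_domain):
    """For each domain, average the cosine similarity between its flattened
    head-importance vector and the vectors of all other domains."""
    vecs = {name: I.flatten() for name, I in importance_by_domain.items()}
    result = {}
    for i, vi in vecs.items():
        sims = [F.cosine_similarity(vi, vj, dim=0) for j, vj in vecs.items() if j != i]
        result[i] = torch.stack(sims).mean().item()
    return result
```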
|
| 271 |
+
|
| 272 |
+
We also compute the importance score distributions of the proxy. For each of the 6 domains, after applying the proxy, around $20\%$ of the attention heads are heavily protected $(0.8 \leq I_{lh}^{\mathrm{norm}} \leq 1.0)$ and another $20\%$ are moderately protected $(0.6 \leq I_{lh}^{\mathrm{norm}} < 0.8)$; these heads indicate the general knowledge. While Phone, AI, Camera and Restaurant share a similar distribution, ACL and PubMed protect slightly fewer heads. This is understandable as PubMed and ACL (medical and NLP publications, respectively) are probably less common than the other domains, so the general knowledge in the LM covers them less.
|
| 275 |
+
|
| 276 |
+
Ablation study. To better understand DGA, we want to know (1) whether constraining the neurons in other layers is helpful (the proposed DGA only constrains the attention heads), and (2) where the gain of DGA comes from. To answer (1), we constrain the training of different layers in a standard Transformer. In Table 3 (rows 3-5), "H", "I", and "O" refer to the attention heads, the intermediate layer, and the output layer in a standard Transformer layer, respectively; "E" refers to the embedding layers. The bracketed combinations of "H", "I", "O", and "E" indicate where we apply the soft-masking (DGA applies soft-masking only to the attention heads). We can see that their results are similar to or worse than DGA's, implying that attention heads are more indicative of important knowledge. To answer (2), we conduct the following ablation experiments: (i) DGA (w/o contrast), without the contrastive loss, only soft-masking the backward pass according to the attention head importance; (ii) DGA (random masking), with randomly generated attention head importance scores used for soft-masking and contrastive learning; (iii) Ensemble (LM+MLM), which performs the end-task fine-tuning on both the MLM DA-trained RoBERTa (conventional DA-training) and the original RoBERTa (LM) by concatenating their outputs and taking the average; (iv) DGA (domain-specific), the variant that contrasts the domain-specific and general knowledge (see Sec. 3.2).<sup>15</sup>

<sup>15</sup> The contrastive learning relies on the soft-masking. If removed, the contrastive loss will not have the additional negative samples and our DGA becomes MLM+SimCSE.
|
| 277 |
+
|
| 278 |
+
<table><tr><td rowspan="2">Domain /
Model</td><td colspan="2">Camera</td><td colspan="2">Phone</td><td colspan="2">Restaurant</td><td colspan="2">AI</td><td colspan="2">ACL</td><td rowspan="2">PubMed Micro-F1</td><td rowspan="2">Avg</td></tr><tr><td>MF1</td><td>Acc.</td><td>MF1</td><td>Acc.</td><td>MF1</td><td>Acc.</td><td>MF1</td><td>Acc.</td><td>MF1</td><td>Acc.</td></tr><tr><td>RoBERTa</td><td>78.82</td><td>87.03</td><td>83.75</td><td>86.08</td><td>79.81</td><td>87.00</td><td>60.98</td><td>71.85</td><td>66.11</td><td>71.26</td><td>72.38</td><td>73.64</td></tr><tr><td>MLM</td><td>84.39</td><td>89.90</td><td>82.59</td><td>85.50</td><td>80.84</td><td>87.68</td><td>68.97</td><td>75.95</td><td>68.75</td><td>73.44</td><td>72.84</td><td>76.40</td></tr><tr><td>DGA (H, I)</td><td>86.79</td><td>91.60</td><td>84.21</td><td>86.40</td><td>81.32</td><td>87.91</td><td>71.07</td><td>77.36</td><td>69.50</td><td>73.82</td><td>73.34</td><td>77.71</td></tr><tr><td>DGA (H, I, O)</td><td>88.04</td><td>92.01</td><td>85.85</td><td>87.63</td><td>81.45</td><td>87.79</td><td>71.54</td><td>77.61</td><td>70.52</td><td>74.58</td><td>73.10</td><td>78.42</td></tr><tr><td>DGA (H, I, O, E)</td><td>87.05</td><td>91.60</td><td>83.74</td><td>86.11</td><td>80.64</td><td>87.61</td><td>72.64</td><td>78.17</td><td>71.24</td><td>74.96</td><td>73.54</td><td>78.14</td></tr><tr><td>DGA (w/o contrast)</td><td>86.19</td><td>90.89</td><td>84.48</td><td>86.65</td><td>81.70</td><td>87.93</td><td>68.25</td><td>75.49</td><td>69.31</td><td>73.73</td><td>72.72</td><td>77.11</td></tr><tr><td>DGA (random mask)</td><td>82.07</td><td>89.30</td><td>83.86</td><td>86.33</td><td>80.60</td><td>87.52</td><td>69.51</td><td>76.64</td><td>69.59</td><td>73.73</td><td>72.92</td><td>76.43</td></tr><tr><td>Ensemble (LM+MLM)</td><td>85.22</td><td>90.64</td><td>85.15</td><td>87.23</td><td>79.86</td><td>86.98</td><td>65.10</td><td>74.43</td><td>68.56</td><td>73.44</td><td>72.60</td><td>76.08</td></tr><tr><td>DGA (domain-specific)</td><td>88.06</td><td>92.04</td><td>83.45</td><td>85.82</td><td>81.72</td><td>87.90</td><td>68.00</td><td>75.57</td><td>70.91</td><td>75.06</td><td>73.17</td><td>77.55</td></tr><tr><td>DGA</td><td>88.52</td><td>92.49</td><td>85.47</td><td>87.45</td><td>81.83</td><td>88.20</td><td>71.99</td><td>78.06</td><td>71.01</td><td>74.73</td><td>73.65</td><td>78.74</td></tr></table>
|
| 280 |
+
|
| 281 |
+
Table 3: Ablation results - averages of 5 random seeds. The standard deviations are reported in Appendix B.
|
| 282 |
+
|
| 283 |
+
Table 3 shows that the full DGA always gives the best result, indicating every component contributes. Additional observations are as follows:
|
| 284 |
+
|
| 285 |
+
(1) DGA's gain comes partially from the novel soft-masking: on average, DGA (w/o contrast) outperforms conventional DA-training (MLM). Besides, our gradient-based mask is informative: DGA (random mask) is worse than DGA (w/o contrast) on all datasets. DGA (w/o contrast) is even better than Ensemble, which directly combines the information given by both the original LM and the traditionally DA-trained model during end-task fine-tuning.
|
| 286 |
+
|
| 287 |
+
(2) Besides soft-masking, contrasting the general and full knowledge also helps. We can see DGA outperforms DGA (w/o contrast) and DGA (domain-specific) in all datasets.
|
| 288 |
+
|
| 289 |
+
# 5 Conclusion
|
| 290 |
+
|
| 291 |
+
This paper argued that an effective DA-training method should integrate the target domain knowledge into the general knowledge in the LM. Existing approaches do not explicitly do this. This paper proposed a novel method, DGA, to achieve it (1) by estimating the importance of the attention heads in the LM and using the importance scores to soft-mask the attention heads in DA-training so as to preserve the important knowledge in the LM as much as possible, and (2) by contrasting the general and the full knowledge. Extensive experimental results demonstrated the effectiveness of the proposed approach DGA.
|
| 292 |
+
|
| 293 |
+
# 6 Limitations
|
| 294 |
+
|
| 295 |
+
While effective, DGA has some limitations. First, the main focus of DGA is to adapt an LM to a given target domain. It does not consider generalization to other domains. For example, it will be interesting to incrementally or continually adapt an LM to more and more domains to make the LM more useful. Second, the importance of parameters for the general knowledge in the LM is computed using a proxy method based on model robustness. Although it is quite effective, it is interesting to explore other approaches to further improve it. We will work on these issues in future work, as specializing and improving an LM is an important problem.
|
| 300 |
+
|
| 301 |
+
# Acknowledgments
|
| 302 |
+
|
| 303 |
+
The work of Zixuan Ke and Bing Liu was supported in part by three National Science Foundation (NSF) grants (IIS-1910424, IIS-1838770, and CNS-2225427).
|
| 304 |
+
|
| 305 |
+
# References
|
| 306 |
+
|
| 307 |
+
Emily Alsentzer, John R Murphy, Willie Boag, WeiHung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical bert embeddings. arXiv preprint arXiv:1904.03323.
|
| 308 |
+
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text.
|
| 309 |
+
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems.
|
| 310 |
+
Tuhin Chakrabarty, Christopher Hidey, and Kathleen McKeown. 2019. Imho fine-tuning improves claim detection. arXiv preprint arXiv:1905.07000.
|
| 311 |
+
Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, and Michael Carbin. 2020a. The lottery ticket hypothesis for pretrained bert networks. Advances in neural information processing systems, 33:15834-15846.
|
| 312 |
+
|
| 313 |
+
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020b. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597-1607. PMLR.
|
| 314 |
+
Zhiyuan Chen and Bing Liu. 2018. Lifelong machine learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 12(3):1-207.
|
| 315 |
+
Lucio M Dery, Paul Michel, Ameet Talwalkar, and Graham Neubig. 2021. Should we be pre-training? an argument for end-task aware training as an alternative. arXiv preprint arXiv:2109.07437.
|
| 316 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT.
|
| 317 |
+
Xiaowen Ding, Bing Liu, and Philip S Yu. 2008. A holistic lexicon-based approach to opinion mining. In Proceedings of the 2008 international conference on web search and data mining.
|
| 318 |
+
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
|
| 319 |
+
Angela Fan, Edouard Grave, and Armand Joulin. 2020. Reducing transformer depth on demand with structured dropout. In International Conference on Learning Representations.
|
| 320 |
+
Zhiyi Fu, Wangchunshu Zhou, Jingjing Xu, Hao Zhou, and Lei Li. 2022. Contextual representation learning beyond masked language modeling. In ACL.
|
| 321 |
+
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021a. Simcse: Simple contrastive learning of sentence embeddings. arXiv preprint arXiv:2104.08821.
|
| 322 |
+
Yang Gao, Nicolo Colombo, and Wei Wang. 2021b. Adapting by pruning: A case study on bert. arXiv preprint arXiv:2105.03343.
|
| 323 |
+
Yiwen Guo, Anbang Yao, and Yurong Chen. 2016. Dynamic network surgery for efficient dnns. Advances in neural information processing systems, 29.
|
| 324 |
+
Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In ACL.
|
| 325 |
+
Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for efficient neural network. Advances in neural information processing systems, 28.
|
| 326 |
+
|
| 327 |
+
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729-9738.
|
| 328 |
+
Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2(7).
|
| 329 |
+
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In ICML.
|
| 330 |
+
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of ACM SIGKDD.
|
| 331 |
+
David Jurgens, Srijan Kumar, Raine Hoover, Daniel A. McFarland, and Dan Jurafsky. 2018. Measuring the evolution of a scientific field through citation frames. TACL.
|
| 332 |
+
Lingpeng Kong, Cyprien de Masson d'Autume, Lei Yu, Wang Ling, Zihang Dai, and Dani Yogatama. 2020. A mutual information maximization perspective of language representation learning. In ICLR.
|
| 333 |
+
Jens Kringelum, Sonny Kim Kjaerulff, Søren Brunak, Ole Lund, Tudor I Oprea, and Olivier Taboureau. 2016. Chemprot-3.0: a global chemical biology diseases mapping. Database, 2016.
|
| 334 |
+
Cheng-I Jeff Lai, Yang Zhang, Alexander H Liu, Shiyu Chang, Yi-Lun Liao, Yung-Sung Chuang, Kaizhi Qian, Sameer Khurana, David Cox, and Jim Glass. 2021. Parp: Prune, adjust and re-prune for self-supervised speech recognition. NeurIPS, 34.
|
| 335 |
+
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.
|
| 336 |
+
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In EMNLP.
|
| 337 |
+
Jiaoda Li, Ryan Cotterell, and Mrinmaya Sachan. 2021. Differentiable subset pruning of transformer heads. Transactions of the Association for Computational Linguistics, 9:1442-1459.
|
| 338 |
+
Zi Lin, Jeremiah Zhe Liu, Zi Yang, Nan Hua, and Dan Roth. 2020. Pruning redundant mappings in transformer models via spectral-normalized identity prior. arXiv preprint arXiv:2010.01791.
|
| 339 |
+
Bing Liu. 2015. Sentiment analysis: Mining opinions, sentiments, and emotions. Cambridge University Press.
|
| 340 |
+
|
| 341 |
+
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR.
|
| 342 |
+
Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, and Daniel S. Weld. 2020. S2ORC: the semantic scholar open research corpus. In ACL.
|
| 343 |
+
Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In ACL.
|
| 344 |
+
JS McCarley, Rishav Chakravarti, and Avirup Sil. 2019. Structured pruning of a bert-based question answering model. arXiv preprint arXiv:1910.06360.
|
| 345 |
+
Michael McCloskey and Neal J Cohen. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation.
|
| 346 |
+
Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? Advances in neural information processing systems, 32.
|
| 347 |
+
Jianmo Ni, Jiacheng Li, and Julian J. McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In EMNLP, pages 188-197. Association for Computational Linguistics.
|
| 348 |
+
Hassan Sajjad, Fahim Dalvi, Nadir Durrani, and Preslav Nakov. 2020. On the effect of dropping layers of pre-trained transformer models. arXiv preprint arXiv:2004.03844.
|
| 349 |
+
Yixuan Su, Fangyu Liu, Zaiqiao Meng, Lei Shu, Ehsan Shareghi, and Nigel Collier. 2021. Tacl: Improving bert pre-training with token-aware contrastive learning. arXiv preprint arXiv:2111.04198.
|
| 350 |
+
Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune bert for text classification? In China national conference on Chinese computational linguistics, pages 194-206. Springer.
|
| 351 |
+
Duyu Tang, Bing Qin, and Ting Liu. 2016. Aspect level sentiment classification with deep memory network. In EMNLP.
|
| 352 |
+
Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. 2021. Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning, pages 10347-10357. PMLR.
|
| 353 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.
|
| 354 |
+
|
| 355 |
+
Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. arXiv preprint arXiv:1905.09418.
|
| 356 |
+
Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2019a. BERT post-training for review reading comprehension and aspect-based sentiment analysis. In NAACL-HLT.
|
| 357 |
+
Hu Xu, Bing Liu, Lei Shu, and Philip S Yu. 2019b. Review conversational reading comprehension. arXiv preprint arXiv:1902.00821.
|
| 358 |
+
|
| 359 |
+
<table><tr><td rowspan="2">Domain / Model</td><td colspan="2">Camera</td><td colspan="2">Phone</td><td colspan="2">Restaurant</td><td colspan="2">AI</td><td colspan="2">ACL</td><td rowspan="2">PubMed
Micro-F1</td></tr><tr><td>MF1</td><td>Acc.</td><td>MF1</td><td>Acc.</td><td>MF1</td><td>Acc.</td><td>MF1</td><td>Acc.</td><td>MF1</td><td>Acc.</td></tr><tr><td>RoBERTa</td><td>±0.0403</td><td>±0.0179</td><td>±0.0210</td><td>±0.0154</td><td>±0.0117</td><td>±0.0049</td><td>±0.0646</td><td>±0.0347</td><td>±0.0192</td><td>±0.0096</td><td>±0.0071</td></tr><tr><td>MLM</td><td>±0.0479</td><td>±0.0298</td><td>±0.0165</td><td>±0.0103</td><td>±0.0096</td><td>±0.0056</td><td>±0.0117</td><td>±0.0086</td><td>±0.0218</td><td>±0.0118</td><td>±0.0035</td></tr><tr><td>MLM (adapter)</td><td>±0.0165</td><td>±0.0110</td><td>±0.0265</td><td>±0.0181</td><td>±0.0102</td><td>±0.0068</td><td>±0.0551</td><td>±0.0288</td><td>±0.0142</td><td>±0.0099</td><td>±0.0055</td></tr><tr><td>MLM (prompt)</td><td>±0.0243</td><td>±0.0138</td><td>±0.0126</td><td>±0.0087</td><td>±0.0060</td><td>±0.0035</td><td>±0.0301</td><td>±0.0124</td><td>±0.0068</td><td>±0.0108</td><td>±0.0028</td></tr><tr><td>MLM+KD</td><td>±0.0295</td><td>±0.0158</td><td>±0.0320</td><td>±0.0230</td><td>±0.0099</td><td>±0.0070</td><td>±0.0345</td><td>±0.0224</td><td>±0.0292</td><td>±0.0155</td><td>±0.0093</td></tr><tr><td>MLM+AdaptedDeiT</td><td>±0.0187</td><td>±0.0122</td><td>±0.0160</td><td>±0.0101</td><td>±0.0048</td><td>±0.0022</td><td>±0.0250</td><td>±0.0179</td><td>±0.0065</td><td>±0.0079</td><td>±0.0086</td></tr><tr><td>MLM+SimCSE</td><td>±0.0114</td><td>±0.0077</td><td>±0.0098</td><td>±0.0065</td><td>±0.0029</td><td>±0.0016</td><td>±0.0086</td><td>±0.0056</td><td>±0.0054</td><td>±0.0071</td><td>±0.0027</td></tr><tr><td>MLM+TaCL</td><td>±0.0218</td><td>±0.0103</td><td>±0.0230</td><td>±0.0159</td><td>±0.0105</td><td>±0.0059</td><td>±0.0275</td><td>±0.0156</td><td>±0.0713</td><td>±0.0394</td><td>±0.0118</td></tr><tr><td>MLM+TaCO</td><td>±0.0456</td><td>±0.0232</td><td>±0.0166</td><td>±0.0134</td><td>±0.0077</td><td>±0.0052</td><td>±0.0675</td><td>±0.0380</td><td>±0.0207</td><td>±0.0128</td><td>±0.0099</td></tr><tr><td>MLM+InfoWord</td><td>±0.0267</td><td>±0.0139</td><td>±0.0272</td><td>±0.0191</td><td>±0.0170</td><td>±0.0089</td><td>±0.0344</td><td>±0.0219</td><td>±0.0070</td><td>±0.0079</td><td>±0.0072</td></tr><tr><td>DGA</td><td>±0.0095</td><td>±0.0047</td><td>±0.0127</td><td>±0.0094</td><td>±0.0052</td><td>±0.0040</td><td>±0.0127</td><td>±0.0081</td><td>±0.0079</td><td>±0.0080</td><td>±0.0034</td></tr></table>
|
| 362 |
+
|
| 363 |
+
Table 4: Standard deviations of the corresponding metrics of the proposed DGA model and the baselines on the six experiments.
|
| 364 |
+
|
| 365 |
+
<table><tr><td rowspan="2">Domain / Model</td><td colspan="2">Camera</td><td colspan="2">Phone</td><td colspan="2">Restaurant</td><td colspan="2">AI</td><td colspan="2">ACL</td><td rowspan="2">PubMed
Micro-F1</td></tr><tr><td>MF1</td><td>Acc.</td><td>MF1</td><td>Acc.</td><td>MF1</td><td>Acc.</td><td>MF1</td><td>Acc.</td><td>MF1</td><td>Acc.</td></tr><tr><td>RoBERTa</td><td>±0.0403</td><td>±0.0179</td><td>±0.0210</td><td>±0.0154</td><td>±0.0117</td><td>±0.0049</td><td>±0.0646</td><td>±0.0347</td><td>±0.0192</td><td>±0.0096</td><td>±0.0071</td></tr><tr><td>MLM</td><td>±0.0479</td><td>±0.0298</td><td>±0.0165</td><td>±0.0103</td><td>±0.0096</td><td>±0.0056</td><td>±0.0117</td><td>±0.0086</td><td>±0.0218</td><td>±0.0118</td><td>±0.0035</td></tr><tr><td>DGA (H, I)</td><td>±0.0373</td><td>±0.0210</td><td>±0.0032</td><td>±0.0039</td><td>±0.0054</td><td>±0.0045</td><td>±0.0095</td><td>±0.0048</td><td>±0.0094</td><td>±0.0073</td><td>±0.0049</td></tr><tr><td>DGA (H, I, O)</td><td>±0.0167</td><td>±0.0092</td><td>±0.0182</td><td>±0.0155</td><td>±0.0055</td><td>±0.0033</td><td>±0.0093</td><td>±0.0075</td><td>±0.0080</td><td>±0.0070</td><td>±0.0056</td></tr><tr><td>DGA (H, I, O, E)</td><td>±0.0237</td><td>±0.0123</td><td>±0.0270</td><td>±0.0187</td><td>±0.0099</td><td>±0.0050</td><td>±0.0109</td><td>±0.0089</td><td>±0.0067</td><td>±0.0057</td><td>±0.0079</td></tr><tr><td>DGA (w/o contrast)</td><td>±0.0433</td><td>±0.0251</td><td>±0.0135</td><td>±0.0106</td><td>±0.0060</td><td>±0.0040</td><td>±0.0197</td><td>±0.0119</td><td>±0.0132</td><td>±0.0093</td><td>±0.0050</td></tr><tr><td>DGA (random mask)</td><td>±0.0879</td><td>±0.0413</td><td>±0.0335</td><td>±0.0235</td><td>±0.0096</td><td>±0.0044</td><td>±0.0153</td><td>±0.0090</td><td>±0.0105</td><td>±0.0059</td><td>±0.0052</td></tr><tr><td>Ensemble</td><td>±0.0332</td><td>±0.0178</td><td>±0.0199</td><td>±0.0139</td><td>±0.0035</td><td>±0.0031</td><td>±0.0236</td><td>±0.0103</td><td>±0.0061</td><td>±0.0028</td><td>±0.0046</td></tr><tr><td>DGA (domain-specific)</td><td>±0.0137</td><td>±0.0070</td><td>±0.0259</td><td>±0.0200</td><td>±0.0031</td><td>±0.0018</td><td>±0.0128</td><td>±0.0071</td><td>±0.0108</td><td>±0.0067</td><td>±0.0043</td></tr><tr><td>DGA</td><td>±0.0095</td><td>±0.0047</td><td>±0.0127</td><td>±0.0094</td><td>±0.0052</td><td>±0.0040</td><td>±0.0127</td><td>±0.0081</td><td>±0.0079</td><td>±0.0080</td><td>±0.0034</td></tr></table>
|
| 368 |
+
|
| 369 |
+
Table 5: Standard deviations of the corresponding metrics of the proposed DGA model and the ablation on the six experiments.
|
| 370 |
+
|
| 371 |
+
# A Datasets Details
|
| 372 |
+
|
| 373 |
+
Table 1 in the main paper has given the number of examples in each dataset. Here we provide additional details about the 4 types of end-tasks.
|
| 374 |
+
|
| 375 |
+
(1) (Phone, Camera and Restaurant) Aspect Sentiment Classification (ASC) is defined as follows (Liu, 2015): given an aspect or product feature (e.g., picture quality in a camera review) and a review sentence containing the aspect in a domain or product category (e.g., camera), classify if the sentence expresses a positive, negative, or neutral (no opinion) sentiment or polarity about the aspect (for Phone and Camera, there are only negative and positive polarities in the data).
|
| 376 |
+
(2) (ACL) Citation Intent Classification is defined as follows: given a citing sentence (a sentence that contains a citation), classify whether the sentence expresses a citation function among "background", "motivation", "uses", "extension", "comparison or contrast", and "future".
|
| 377 |
+
(3) (AI) Relation Classification is defined as follows: given a within-sentence word span containing a pair of entities, classify whether the span expresses a relation among "feature of", "conjunction", "evaluate for", "hyponym of", "used for", "part of" and "compare".
|
| 378 |
+
|
| 379 |
+
(4) (PubMed) Chemical-protein Interaction Classification is defined as follows: given a span containing a chemical-protein pair, classify whether the span expresses a chemical-protein interaction among "downregulator", "substrate", "indirect-upregulator", "indirect-downregulator", "agonist", "activator", "product of", "agonist-activator", "inhibitor", "upregulator", "substrate product of", "agonist-inhibitor" and "antagonist".
|
| 380 |
+
|
| 381 |
+
# B Standard Deviations
|
| 382 |
+
|
| 383 |
+
Table 4 reports the standard deviations of the corresponding results in Table 2 (in the main paper) of DGA and the considered baselines over 5 runs with random seeds. We can see the results of DGA are stable. Some baselines (e.g., RoBERTa in AI, MLM in Camera and MLM+TaCL in ACL) can have quite large standard deviations.
|
| 384 |
+
|
| 385 |
+
Table 5 reports the standard deviations of the corresponding results in Table 3 (in the main paper) of DGA and the considered baselines over 5 runs with random seeds. We can see the results of DGA are stable. Some baselines (e.g., DGA (random mask) and DGA (w/o contrast) in Camera) can have quite large standard deviations.
|
adaptingalanguagemodelwhilepreservingitsgeneralknowledge/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d2693c2d657b2bed74c82ee1006b8ec80e4d86a582f75f0f304a6867aec345b3
|
| 3 |
+
size 635975
|
adaptingalanguagemodelwhilepreservingitsgeneralknowledge/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b2168ba7d47b3fa93ec5bb29fb7f002fb8106e807aeeb7f3f1fa324ace589d3b
|
| 3 |
+
size 451367
|
adaptivecontrastivelearningonmultimodaltransformerforreviewhelpfulnessprediction/ca62769c-92be-4c2a-9c9c-9cf26e3d6c1d_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:78de84aaac48d4408df9b598361b0ceff88657bacb8b7cd9c4a592ac2e772cf9
|
| 3 |
+
size 88855
|
adaptivecontrastivelearningonmultimodaltransformerforreviewhelpfulnessprediction/ca62769c-92be-4c2a-9c9c-9cf26e3d6c1d_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:5c394f9157b599b6173029ff6165fd186ac57b8b06fbbb068d235d4527b710bd
|
| 3 |
+
size 105788
|