{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:22:03.113118Z"
},
"title": "What Makes Good In-Context Examples for GPT-3?",
"authors": [
{
"first": "Jiachang",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {},
"email": "jiachang.liu@duke.edu"
},
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": "",
"affiliation": {},
"email": "dishen@microsoft.com"
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": "yizhe.zhang@hotmail.com"
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": "",
"affiliation": {},
"email": "lcarin@duke.edu"
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {},
"email": "wzchen@microsoft.com"
},
{
"first": "Duke",
"middle": [],
"last": "University",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Microsoft",
"middle": [],
"last": "Dynamics",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "",
"middle": [],
"last": "Ai",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Meta",
"middle": [],
"last": "Ai",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Microsoft",
"middle": [],
"last": "Research",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "GPT-3 has attracted much attention due to its superior performance across a wide range of NLP tasks, especially its in-context learning ability. Despite its success, we find that the empirical results of GPT-3 depend heavily on the choice of in-context examples. In this work, we investigate whether there are more effective strategies for judiciously selecting in-context examples (relative to random sampling) that better leverage GPT-3's in-context learning capabilities. Inspired by the recent success of leveraging a retrieval module to augment neural networks, we propose to retrieve examples that are semantically similar to a test query sample to formulate its corresponding prompt. Intuitively, the examples selected with such a strategy may serve as more informative inputs to unleash GPT-3's power of text generation. We evaluate the proposed approach on several natural language understanding and generation benchmarks, where the retrieval-based prompt selection approach consistently outperforms the random selection baseline. Moreover, it is observed that sentence encoders fine-tuned on task-related datasets yield even more helpful retrieval results. Notably, significant gains are observed on tasks such as table-to-text generation (44.3% on the ToTTo dataset) and open-domain question answering (45.5% on the NQ dataset). * Work was done when Jiachang (intern) and Yizhe were at Microsoft.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "GPT-3 has attracted much attention due to its superior performance across a wide range of NLP tasks, especially its in-context learning ability. Despite its success, we find that the empirical results of GPT-3 depend heavily on the choice of in-context examples. In this work, we investigate whether there are more effective strategies for judiciously selecting in-context examples (relative to random sampling) that better leverage GPT-3's in-context learning capabilities. Inspired by the recent success of leveraging a retrieval module to augment neural networks, we propose to retrieve examples that are semantically similar to a test query sample to formulate its corresponding prompt. Intuitively, the examples selected with such a strategy may serve as more informative inputs to unleash GPT-3's power of text generation. We evaluate the proposed approach on several natural language understanding and generation benchmarks, where the retrieval-based prompt selection approach consistently outperforms the random selection baseline. Moreover, it is observed that sentence encoders fine-tuned on task-related datasets yield even more helpful retrieval results. Notably, significant gains are observed on tasks such as table-to-text generation (44.3% on the ToTTo dataset) and open-domain question answering (45.5% on the NQ dataset). * Work was done when Jiachang (intern) and Yizhe were at Microsoft.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "1 Introduction GPT-3 (Brown et al., 2020 ) is a new breakthrough in NLP research. Previously, NLP models were first pre-trained and then fine-tuned on a specific task. What sets GPT-3 apart from other models is its impressive \"in-context\" learning ability. Provided with a few in-context examples, GPT-3 can generalize to unseen cases without further fine-tuning. This opens up many new technological possibilities that were previously considered unique to humans. Future NLP systems can be developed to expand emails, extract entities from text, or generate code from natural language instructions, given only a few demonstration examples.",
"cite_spans": [
{
"start": 15,
"end": 40,
"text": "GPT-3 (Brown et al., 2020",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Despite its powerful and versatile in-context learning ability, GPT-3 has some practical challenges. The original paper utilizes task-relevant examples that are randomly sampled from the training set. However, we observe that the performance of GPT-3 tends to fluctuate with different choices of in-context examples. As shown in Table 1 , the variance with distinct in-context examples can be significant. Our work aims to carefully examine this issue to gain a deeper understanding of how to better select in-context examples to improve GPT-3's performance without fine-tuning. Note that our approach requires a training set from which to select examples. With such a training dataset, it is possible to fine-tune GPT-3 to take full advantage of the model's strength. However, GPT-3 has not currently been released to the public for fine-tuning. Even if it were available, fine-tuning GPT-3 requires hundreds of GPUs to load the 175B-parameter model, which is prohibitively expensive and time-consuming for ordinary research labs. Another issue is that storing large fine-tuned model checkpoints requires huge storage space. Consequently, we resort to a prompt/example-engineering strategy. Nevertheless, fine-tuning results using T5 are provided for reference.",
"cite_spans": [],
"ref_spans": [
{
"start": 329,
"end": 336,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A brute-force approach for selecting the optimal in-context instances would be to perform a combinatorial search over the entire dataset. Unfortunately, this strategy is computationally impractical. (Figure 1: In-context example selection for GPT-3. White dots: unused training samples; grey dots: randomly sampled training samples; red dots: training samples selected by the k-nearest neighbors algorithm in the embedding space of a sentence encoder.) To this",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "end, we empirically investigate the influence of employing different in-context examples. Interestingly, we find that in-context examples that are closer to the test sample in the embedding space consistently give rise to stronger performance (relative to farther ones). Inspired by this observation and the recent success of retrieval-augmented models, we propose to utilize the nearest neighbors of a given test sample (among all the training instances available) as the in-context examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "To verify the effectiveness of the proposed method, we evaluate it on several natural language understanding and generation tasks, including sentiment analysis, table-to-text generation, and open-domain question answering. It is observed that retrieval-based in-context examples unleash the in-context learning capabilities of GPT-3 much more effectively than the random sampling baseline, even when the number of examples is small. Moreover, we find that the specific sentence encoders employed for the retrieval procedure play a critical role. Thus, we conduct an extensive exploration, which shows that encoders fine-tuned on natural language matching tasks serve as more effective in-context example selectors on the QA task. In summary, our contributions are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "i) to the best of our knowledge, we take a first step towards understanding the sensitivity of GPT-3's in-context learning ability with respect to the choice of in-context examples;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "ii) to alleviate the sensitivity issue, an additional retrieval module is introduced to find semantically similar in-context examples for a test instance, which greatly outperforms the baseline based on randomly sampled in-context examples;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "iii) empirically, the better-selected examples lead GPT-3 to achieve performance comparable to a fine-tuned T5 model on the table-to-text task and to outperform the T5 model on the QA tasks; iv) fine-tuning the retrieval model on task-related dataset(s) leads to stronger empirical results;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "v) the performance of GPT-3 improves as the number of examples for retrieval increases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The in-context learning scenario of GPT-3 can be regarded as a conditional text generation problem. Concretely, the probability of generating a target y is conditioned on the context C, which includes k examples, and the source x. Therefore, the probability can be expressed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-3 for In-Context Learning",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p_{LM}(y|C, x) = \\prod_{t=1}^{T} p(y_t | C, x, y_{<t})",
"eq_num": "(1)"
}
],
"section": "GPT-3 for In-Context Learning",
"sec_num": "2.1"
},
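Eq. (1) above factorizes the probability of the target autoregressively over its tokens. The sketch below is a toy illustration of that decomposition, not the paper's implementation; `cond_prob` is a hypothetical stand-in for the model's per-token distribution p(y_t | C, x, y_<t):

```python
import math

# Toy illustration of Eq. (1): p_LM(y | C, x) factorizes over target tokens.
# `cond_prob` is a hypothetical stand-in for the language model's per-token
# distribution p(y_t | C, x, y_<t); it is not part of the paper.

def sequence_log_prob(cond_prob, context, source, target):
    """Return log p(y | C, x) as a sum of per-token log-probabilities."""
    log_p = 0.0
    for t, token in enumerate(target):
        prefix = target[:t]  # y_<t, the already-generated tokens
        log_p += math.log(cond_prob(context, source, prefix, token))
    return log_p

# Uniform toy model over a 4-token vocabulary: every token has p = 0.25,
# so a 3-token target has probability 0.25^3.
uniform = lambda C, x, prefix, tok: 0.25
lp = sequence_log_prob(uniform, "C", "x", ["a", "b", "c"])
```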
{
"text": "where LM denotes the parameters of the language model, and C = {x_1, y_1, x_2, y_2, ..., x_k, y_k} is a context string concatenating k training instances with the special character \"\\n\". A concrete illustration can be found in the Appendix. For GPT-3, this generation process is implemented through a giant transformer-based architecture (Vaswani et al., 2017) . Due to the computational burden of fine-tuning, GPT-3 is leveraged in an in-context learning manner as described above. Unfortunately, as shown in Table 1 , the results of GPT-3 tend to fluctuate significantly with different in-context examples. We aim to alleviate this issue via judicious in-context example selection.",
"cite_spans": [
{
"start": 344,
"end": 366,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [
{
"start": 516,
"end": 523,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "GPT-3 for In-Context Learning",
"sec_num": "2.1"
},
{
"text": "We start the investigation by looking at the role of in-context examples from an empirical perspective. The previous retrieve-and-edit literature usually retrieves prototypes that are close to the test source x in some embedding space. These examples and the test source x often share semantic or lexical similarities. This hints at how we may select in-context examples for GPT-3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Impact of In-Context Examples",
"sec_num": "2.2"
},
{
"text": "To this end, we examine the impact of the distance between the in-context examples and the test sample on GPT-3's performance. Concretely, a comparison is made on the Natural Questions (NQ) dataset between two selection strategies. Given a test example, the first method utilizes the 10 farthest training instances as the in-context examples, while the second employs the 10 closest neighbors. We use the CLS embeddings of a pre-trained RoBERTa-large model as sentence representations to measure the proximity of two sentences (using the Euclidean distance).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Impact of In-Context Examples",
"sec_num": "2.2"
},
{
"text": "For evaluation, 100 test questions are randomly sampled and the average Exact Match (EM) scores of the two distinct strategies are reported in Table 2. It can be observed that the nearest neighbors, used as the in-context examples, give rise to much better results relative to the farthest ones. Moreover, the pre-trained RoBERTa model provides effective sentence embeddings for the retrieval procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Impact of In-Context Examples",
"sec_num": "2.2"
},
{
"text": "Based on the findings above, we propose KATE 1 , a strategy to select good examples for in-context learning. The process is visualized in Figure 1 . Specifically, we first use a sentence encoder to convert sources in both the training set and the test set to vector representations. For online prediction, we can encode the training set in advance and encode each test source on the fly. Then, for each test source x, we retrieve its k nearest neighbors x_1, x_2, ..., x_k from the training set (according to the distances in the embedding space). Given some pre-defined similarity measure s, such as the negative Euclidean distance or the cosine similarity, the neighbors are ordered so that s(",
"cite_spans": [],
"ref_spans": [
{
"start": 138,
"end": 146,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "kNN-augmented Example Selection",
"sec_num": "2.3"
},
{
"text": "x_i, x) \u2265 s(x_j, x) when i < j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN-augmented Example Selection",
"sec_num": "2.3"
},
{
"text": "The k sources are concatenated with their targets to form the context C = {x_1, y_1, x_2, y_2, ..., x_k, y_k}, which is sent to GPT-3 along with the test input. The algorithm is presented in Algorithm 1. Note that different",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN-augmented Example Selection",
"sec_num": "2.3"
},
{
"text": "D_T = {x_i, y_i}_{i=1}^{N}, sentence encoder \u00b5_\u03b8(\u2022)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN-augmented Example Selection",
"sec_num": "2.3"
},
{
"text": ", and number of in-context examples k (hyperparameter).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN-augmented Example Selection",
"sec_num": "2.3"
},
{
"text": "1: v_test = \u00b5_\u03b8(x_test) 2: for x_i \u2208 D_T do 3: v_i = \u00b5_\u03b8(x_i) 4: s_i = \u2212\u2225v_test \u2212 v_i\u2225_2 (or (v_test \u00b7 v_i) / (\u2225v_test\u2225_2 \u2225v_i\u2225_2)) 5: end for 6: Select the k largest similarities s_i (in descending order) with indices {\u03c3(1), ..., \u03c3(k)} 7: C = [x_\u03c3(1); y_\u03c3(1); ...; x_\u03c3(k); y_\u03c3(k)] 8: \u0177_test = GPT-3([C; x_test])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN-augmented Example Selection",
"sec_num": "2.3"
},
{
"text": "numbers of examples can be employed, and we study their impact in a later section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN-augmented Example Selection",
"sec_num": "2.3"
},
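The selection step in Algorithm 1 can be sketched as follows. This is a minimal illustration assuming the sentence encoder has already been applied; the array names and toy vectors are hypothetical, not from the paper:

```python
import numpy as np

# Minimal sketch of KATE's retrieval step (Algorithm 1), assuming the
# sentence encoder mu_theta has already been applied: `train_vecs` holds
# the encoded training sources and `test_vec` the encoded test source.
# All names and the toy vectors below are illustrative.

def select_in_context(train_vecs, test_vec, k, metric="euclidean"):
    """Return indices of the k most similar training sources,
    ordered from most to least similar (sigma(1), ..., sigma(k))."""
    if metric == "euclidean":
        # Negative Euclidean distance: larger is more similar.
        sims = -np.linalg.norm(train_vecs - test_vec, axis=1)
    else:  # cosine similarity
        norms = np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(test_vec)
        sims = (train_vecs @ test_vec) / norms
    return np.argsort(-sims)[:k]

train_vecs = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
test_vec = np.array([1.0, 0.0])
order = select_in_context(train_vecs, test_vec, k=2)
```

The selected indices are then used to build the context [x_\u03c3(1); y_\u03c3(1); ...; x_\u03c3(k); y_\u03c3(k)], which is prepended to the test source before calling GPT-3.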
{
"text": "Choices of Retrieval Module A core step of our context selection approach is mapping sentences into a latent semantic space, which raises the question of which sentence encoder to choose. We compare existing pre-trained text encoders and find them sufficient for retrieving semantically similar sentences. The sentence encoders can be divided into two categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN-augmented Example Selection",
"sec_num": "2.3"
},
{
"text": "The first category includes generally pre-trained sentence encoders such as the BERT, RoBERTa, and XLNet models. These models have been trained on large quantities of unlabeled text and achieve good performance on many natural language tasks. The corresponding embeddings contain rich semantic information about the original sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN-augmented Example Selection",
"sec_num": "2.3"
},
{
"text": "The second category includes sentence encoders fine-tuned on specific tasks or datasets. For example, a sentence encoder trained on the STS dataset should be able to assess similarities among different questions better than a generally pre-trained sentence encoder. Sentence-BERT (Wolf et al., 2019; Gurevych, 2019, 2020) shows that these fine-tuned encoders have achieved great performance on tasks such as sentence clustering, paraphrase mining, and information retrieval.",
"cite_spans": [
{
"start": 280,
"end": 299,
"text": "(Wolf et al., 2019;",
"ref_id": "BIBREF45"
},
{
"start": 300,
"end": 321,
"text": "Gurevych, 2019, 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "kNN-augmented Example Selection",
"sec_num": "2.3"
},
{
"text": "We apply our proposed method to the following three tasks: sentiment analysis, table-to-text generation, and question answering. Dataset split setups and prompt templates are shown in Table 9 and 11 in the Appendix. For the hyper-parameters in the GPT-3 API, we set the temperature to 0.",
"cite_spans": [],
"ref_spans": [
{
"start": 184,
"end": 191,
"text": "Table 9",
"ref_id": "TABREF14"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
{
"text": "To retrieve semantically-similar training instances, we consider two types of sentence embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Embeddings for Retrieval",
"sec_num": "3.1"
},
{
"text": "\u2022 The original RoBERTa-large model, which is abbreviated as KATE roberta ;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Embeddings for Retrieval",
"sec_num": "3.1"
},
{
"text": "\u2022 The RoBERTa-large models which are: i) finetuned on the SNLI and MultiNLI datasets (KATE nli ) (Bowman et al., 2015; Williams et al., 2017) ; ii) first fine-tuned on the SNLI and MultiNLI dataset and then on the STS-B datasets (KATE nli+sts-b ) (Cer et al., 2017) .",
"cite_spans": [
{
"start": 97,
"end": 118,
"text": "(Bowman et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 119,
"end": 141,
"text": "Williams et al., 2017)",
"ref_id": "BIBREF44"
},
{
"start": 247,
"end": 265,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Embeddings for Retrieval",
"sec_num": "3.1"
},
{
"text": "All sentence encoders share the same architecture. The only differences are the specific datasets used for fine-tuning. The negative Euclidean distance is used for KATE roberta , while the cosine similarity is employed for KATE nli and KATE nli+sts-b . Sentiment Analysis For this task, we conduct experiments under the dataset-transfer setting. In-context examples are selected from one dataset, and the evaluation is made on another dataset. This setting is designed to simulate a real-world scenario where we want to leverage an existing labeled dataset for an unlabeled one (of a similar task). Specifically, we select examples from the SST-2 training set (Socher et al., 2013; and ask GPT-3 to predict on the IMDB test set (Maas et al., 2011) . To explore whether a sentence encoder fine-tuned on a similar task would benefit KATE, we also employ a pre-trained RoBERTa-large model fine-tuned on the SST-2 training set (dubbed KATE sst-2 ). The number of examples is chosen to be 3 since adding more examples does not further improve the performance. Table-to-Text Generation Given a Wikipedia table and a set of highlighted cells, this task focuses on producing human-readable texts as descriptions. ToTTo (Parikh et al., 2020) 2 is utilized for evaluation due to its popularity. We use the BLEU (Papineni et al., 2002) and PARENT metrics for evaluation. Because the token length limit of GPT-3 is 2048, we add a preprocessing step that deletes closing tags such as </cell> and </table> to save space. The number of in-context examples is set to 2 so that the input length is within the token limit.",
"cite_spans": [
{
"start": 658,
"end": 679,
"text": "(Socher et al., 2013;",
"ref_id": "BIBREF38"
},
{
"start": 726,
"end": 745,
"text": "(Maas et al., 2011)",
"ref_id": "BIBREF25"
},
{
"start": 1299,
"end": 1322,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 1056,
"end": 1062,
"text": "Table-",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence Embeddings for Retrieval",
"sec_num": "3.1"
},
{
"text": "We conduct experiments on three QA benchmarks: Natural Questions (NQ) (Kwiatkowski et al., 2019) , Web Questions (WQ) (Berant et al., 2013) , and TriviaQA (Joshi et al., 2017) . For evaluation, we use the Exact Match (EM) score, which is defined as the proportion of predicted answers that exactly match one of the ground-truth answers. The matching is performed after string normalization, which includes article and punctuation removal. The number of examples is set to 64 for NQ and WQ and 10 for TriviaQA (retrieving 64 examples would exceed the token limit). We evaluate on the test sets of NQ and WQ and the dev set of TriviaQA.",
"cite_spans": [
{
"start": 70,
"end": 96,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 118,
"end": 139,
"text": "(Berant et al., 2013)",
"ref_id": "BIBREF0"
},
{
"start": 155,
"end": 175,
"text": "(Joshi et al., 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Answering",
"sec_num": null
},
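The EM computation described above can be sketched as follows, assuming a common normalization (lowercasing plus removal of the articles a/an/the, punctuation, and extra whitespace); the paper's exact normalization script may differ:

```python
import re
import string

# Sketch of the Exact Match (EM) scoring described above. The specific
# normalization steps here are a common convention and an assumption,
# not a verbatim copy of the paper's evaluation code.

def normalize(text):
    """Lowercase, drop articles and punctuation, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    return " ".join(text.split())

def exact_match(prediction, ground_truths):
    """True iff the normalized prediction equals any normalized answer."""
    return any(normalize(prediction) == normalize(gt) for gt in ground_truths)

em = exact_match("The Beatles!", ["Beatles", "The Rolling Stones"])
```

The dataset-level EM score is then the fraction of test questions for which this indicator is true.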
{
"text": "Random Sampling For each test sentence, we randomly select in-context examples from the training set. We refer to this method as Random in the experimental results. On the test set, the random baseline is repeated five times to obtain the average score and corresponding standard deviation. k-Nearest Neighbor Additionally, to investigate whether the retrieval module is complementary to GPT-3's in-context learning ability, we further consider a k-nearest neighbor baseline. Specifically, the target y_1 associated with the first retrieved example is considered as the predicted target for the test sample. For the sentiment analysis and QA tasks, the top k retrieved examples {y_1, ..., y_k} are utilized, where the final prediction is determined by majority voting over the k examples' targets. In the case of a tie, we use the target of the example most similar to the test sentence. To ensure a fair comparison, we compare the kNN baseline and KATE under the same embedding space of a pre-trained RoBERTa-large model. This baseline is abbreviated as kNN roberta .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Methods",
"sec_num": "3.2"
},
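The majority-vote step of the kNN baseline can be sketched as follows, assuming the retrieved targets are ordered from most to least similar so that ties resolve to the most similar example's target (the function name is illustrative):

```python
from collections import Counter

# Sketch of the kNN baseline's prediction rule described above: majority
# vote over the top-k retrieved targets, with ties broken in favor of the
# example most similar to the test sentence. Assumes `targets` is ordered
# from most to least similar; the function name is illustrative.

def knn_predict(targets):
    counts = Counter(targets)
    best = max(counts.values())
    tied = {t for t, c in counts.items() if c == best}
    # The first qualifying entry belongs to the most similar example.
    return next(t for t in targets if t in tied)

pred = knn_predict(["positive", "negative", "positive"])
```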
{
"text": "Fine-tuned T5 Although this work aims at improving the in-context learning abilities of GPT-3, we include a fine-tuned T5 (3B) model as a baseline. This comparison informs us where GPT-3 performs comparably to, or surpasses, a fine-tuned model. 4 Experimental Results",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Methods",
"sec_num": "3.2"
},
{
"text": "We first evaluate KATE on the sentiment analysis task. The results are in Table 3 . KATE consistently produces better performance relative to the random selection baseline. Notably, there is no variance in the obtained results since a fixed set of retrieved in-context examples is employed. For KATE, when the pre-trained sentence encoder is fine-tuned on the NLI or NLI+STS-B datasets, the performance slightly decreases. Since the objectives of the IMDB and the NLI+STS-B datasets are different, this suggests that fine-tuning on a dissimilar task hurts KATE's performance. In contrast, KATE sst-2 obtains the best accuracy, showing that fine-tuning on a similar task improves KATE's performance. To verify that the gains are not merely from the retrieval step, we further compare KATE roberta with kNN roberta . It turns out that the performance of kNN roberta is close to random guessing. This observation is consistent whether one neighbor or three neighbors are retrieved. Notably, with the sentence encoder fine-tuned on the SST-2 dataset, the accuracy of kNN sst-2 is 92.46, which is lower than that of KATE sst-2 . These results suggest that GPT-3 is critical to the final results, and that the retrieval module is complementary to GPT-3. The fine-tuned T5 model works better since its parameters have been adapted to this specific task. However, fine-tuning requires access to the model parameters, substantial memory, and time; the fine-tuning result here is provided only for reference. Through KATE, the performance of GPT-3 increases significantly without fine-tuning.",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 81,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Sentiment Analysis",
"sec_num": "4.1"
},
{
"text": "We next evaluate KATE on the ToTTo dataset and present results in Table 4 . KATE gives rise to considerable gains over the random baseline, according to both the BLEU and PARENT scores. Notably, KATE enables GPT-3 to achieve performance comparable to a fine-tuned T5 model. On a finer scale, the evaluation can be done on the overlap and nonoverlap subsets. The overlap dev subset shares a significant number of header names with the training set, while the nonoverlap one does not. KATE improves results on both subsets, meaning that the retrieval module is helpful even when the dev set is out of the training distribution. Similar to sentiment analysis, there is a slight drop in performance from KATE roberta to KATE nli and KATE nli+sts-b . This is due to the difference between the objectives of the ToTTo and NLI+STS-B datasets. The drop from KATE nli to KATE nli+sts-b further validates the idea that fine-tuning on a dissimilar task can hurt KATE's performance. The kNN baseline performs much worse than both the random selection method and KATE, suggesting that the retrieval process and GPT-3 work collaboratively to achieve better results.",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 73,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Table-to-text Generation",
"sec_num": "4.2"
},
{
"text": "To understand how the retrieval mechanism helps GPT-3, we conduct a case study on the retrieved examples (see Table 5 ). By retrieving relevant examples from the training set, KATE provides useful detailed information within the table, e.g., the number of points, rebounds, and assists, to GPT-3 for more accurate description. On the other hand, the random selection method has the issue of hallucination, where the generated sequences contain information (i.e., \"senior year\" and \"University of Texas\") not present in the table.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 117,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Table-to-text Generation",
"sec_num": "4.2"
},
{
"text": "Lastly, we evaluate KATE on the open-domain QA tasks, as shown in Table 6 . We compare with some state-of-the-art fine-tuned methods such as RAG (Lewis et al., 2020) and T5 (Raffel et al., 2019) . The T5 results were reported in (Brown et al., 2020) using the 11B model, which requires specialized TPUs for fine-tuning. KATE again improves GPT-3's performance substantially across various benchmarks. Moreover, KATE helps GPT-3 even outperform the fine-tuned T5 model. It is worth noting that this time both KATE nli and KATE nli+sts-b improve upon KATE roberta , because fine-tuning on the NLI or STS-B datasets is helpful for retrieving semantically similar questions from the QA datasets. Moreover, on the NQ and TriviaQA datasets, further fine-tuning on the STS-B dataset improves KATE's results. We evaluate the baseline kNN roberta using the top-1 nearest neighbor. The kNN baseline results again suggest that the retrieval module and GPT-3 work together to achieve better performance. We also explore using 64 nearest neighbors (10 for TriviaQA) to determine the answer (by the majority voting explained in Section 3.2). The EM scores are similar to those obtained by retrieving the top-1 nearest neighbor.",
"cite_spans": [
{
"start": 145,
"end": 165,
"text": "(Lewis et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 173,
"end": 194,
"text": "(Raffel et al., 2019)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 66,
"end": 73,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Question Answering",
"sec_num": "4.3"
},
{
"text": "To investigate why the retrieved examples are helpful, we present a case study. Concretely, the retrieved examples from the NQ dataset are shown in Table 7 . For the first and second cases, the random baseline provides wrong answers because GPT-3 is unable to recall the exact details. However, the in-context examples selected by KATE contain the correct details, which help GPT-3 answer the questions. For the third case, the random baseline leads GPT-3 to misinterpret the question as asking for a specific location. In contrast, KATE selects similar types of questions asking for the origins of objects. Using these in-context examples, GPT-3 is able to interpret and answer the question correctly.",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 155,
"text": "Table 7",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Question Answering",
"sec_num": "4.3"
},
{
"text": "We first investigate the impact of the number of examples on KATE's performance. Concretely, on the NQ dataset, we choose the number of examples to be 5, 10, 20, 35, and 64, and KATE nli+sts-b is compared with the random baseline and KATE roberta across different settings. As shown in the left plot of Figure 2 , both KATE and the random baseline benefit from utilizing more examples. However, KATE consistently outperforms the random selection method, even when the number of in-context examples is as few as 5. This result is practically relevant, since employing fewer examples leads to more efficient inference with GPT-3.",
"cite_spans": [],
"ref_spans": [
{
"start": 303,
"end": 311,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Number of In-context Examples",
"sec_num": "5.1"
},
{
"text": "We further examine how the size of the training set may influence the KATE method. On the NQ dataset, we create new subsets from the original training set, with sizes of 1k, 2k, 5k, 10k, 30k, and 70k, respectively. In-context examples are retrieved from these subsets instead of the original training set. The number of nearest neighbors is set to 64. We compare KATE nli+sts-b with the random selection method and KATE roberta , and the results are shown in the right plot of Figure 2 . For KATE roberta and KATE nli+sts-b , as the size of the training set increases, the EM scores also increase. In contrast, the result of the random sampling baseline does not change much. Intuitively, as the training size gets larger, it is more likely for KATE to retrieve relevant in-context examples to help GPT-3 answer a question correctly. As we have shown previously in Table 7 , the retrieved in-context examples could provide critical detailed information to GPT-3, thus helping GPT-3 to better answer the questions. Table 8 : Analysis on the effect of orders of in-context example on the NQ dataset using KATE nli+sts-b . The default order puts the most similar example in the front, and the reverse order does the opposite.",
"cite_spans": [],
"ref_spans": [
{
"start": 477,
"end": 485,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 865,
"end": 872,
"text": "Table 7",
"ref_id": "TABREF11"
},
{
"start": 1014,
"end": 1021,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Size of Training Set for Retrieval",
"sec_num": "5.2"
},
{
"text": "domly permute the order of in-context examples in the NQ dataset for the proposed KATE nli+sts-b method, and conduct the experiments for 3 different orders. Additionally, we explore the reverse order where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Order of In-context Examples",
"sec_num": "5.3"
},
{
"text": "s(x i , x) \u2264 s(x j , x) whenever i < j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Order of In-context Examples",
"sec_num": "5.3"
},
{
"text": "The results are presented in Table 8 . On this particular NQ dataset, the reverse order performs the best. However, we also did the experiments on the WQ and TriviaQA and find that the default order performs slightly better than the reverse order. Hence, the choice of orders is data-dependent. Additionally, it can be observed that the variation among the NQ results tends to be quite small (compared with the difference between the random baseline and KATE), indicating that the example order does not have a significant impact on KATE's performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 36,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Order of In-context Examples",
"sec_num": "5.3"
},
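The retrieval-and-ordering procedure studied in Sections 5.1–5.3 can be sketched as follows. This is an illustrative reimplementation under stated assumptions, not the authors' code: the toy 2-d vectors stand in for embeddings from the RoBERTa-based sentence encoders (KATE roberta, KATE nli+sts-b), and the similarity s(·,·) is taken to be cosine similarity.

```python
# Illustrative sketch of KATE-style in-context example selection (not the
# authors' code). Training examples are ranked by cosine similarity between
# their sentence embeddings and the test query's embedding; the top-k are
# kept as in-context examples. reverse_order=True places the most similar
# example last, matching the "reverse" ordering studied in Table 8.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def select_in_context(query_emb, train_embs, k, reverse_order=False):
    """Return indices of the k training examples most similar to the query."""
    ranked = sorted(range(len(train_embs)),
                    key=lambda i: cosine(query_emb, train_embs[i]),
                    reverse=True)[:k]
    # Default order: most similar example first; reverse: most similar last.
    return ranked[::-1] if reverse_order else ranked

# Toy embeddings; in the paper these would come from a sentence encoder.
train = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
query = [1.0, 0.05]
print(select_in_context(query, train, k=2))
print(select_in_context(query, train, k=2, reverse_order=True))
```

Swapping the toy vectors for embeddings from an encoder fine-tuned on NLI and STS-B data is what distinguishes KATE nli+sts-b from KATE roberta in the experiments above.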
{
"text": "Pre-trained Language Models NLP systems have made tremendous progress by pre-training models on unlabeled text (Devlin et al., 2018; Yang et al., 2019; Raffel et al., 2019; Xue et al., 2020; Lample and Conneau, 2019; Radford et al., 2018 Radford et al., , 2019 . These models can be fine-tuned for a wide range of downstream tasks. GPT-3 (Brown et al., 2020) , however, can perform in-context learning without fine-tuning. People have just started trying to understand GPT-3 from different perspectives. (Hendrycks et al., 2020) studies which categories of questions GPT-3 is more capable of answering. (Zhao et al., 2021) proposes to improve the model by contextual calibration. However, their method is limited to predicting very few tokens because for long sequence generation, the contextual calibration step needs to be repeatedly performed after each newly generated token. In contrast, our work, KATE, only calls the API once and is suitable for both text classification and generation tasks. Another related work is LM-BFF , which uses a smaller language model (RoBERTa-large) to demonstrate that prompt-based fine-tuning can outperform standard fine-tuning on text classification tasks. Our work differs by showing that, without fine-tuning, relevant examples can still substantially improve the performance of GPT-3 for both text classification and generation tasks. Finally, Au-toPrompt (Shin et al., 2020) explores adding some additional tokens to smaller language models to improve performance on classification tasks.",
"cite_spans": [
{
"start": 111,
"end": 132,
"text": "(Devlin et al., 2018;",
"ref_id": "BIBREF6"
},
{
"start": 133,
"end": 151,
"text": "Yang et al., 2019;",
"ref_id": "BIBREF49"
},
{
"start": 152,
"end": 172,
"text": "Raffel et al., 2019;",
"ref_id": "BIBREF33"
},
{
"start": 173,
"end": 190,
"text": "Xue et al., 2020;",
"ref_id": null
},
{
"start": 191,
"end": 216,
"text": "Lample and Conneau, 2019;",
"ref_id": "BIBREF20"
},
{
"start": 217,
"end": 237,
"text": "Radford et al., 2018",
"ref_id": "BIBREF31"
},
{
"start": 238,
"end": 260,
"text": "Radford et al., , 2019",
"ref_id": "BIBREF32"
},
{
"start": 332,
"end": 358,
"text": "GPT-3 (Brown et al., 2020)",
"ref_id": null
},
{
"start": 504,
"end": 528,
"text": "(Hendrycks et al., 2020)",
"ref_id": null
},
{
"start": 603,
"end": 622,
"text": "(Zhao et al., 2021)",
"ref_id": "BIBREF50"
},
{
"start": 1398,
"end": 1417,
"text": "(Shin et al., 2020)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Retrieval-based Text Generation There is a long history of applying information retrieval to text generation (Sumita and Hitoshi, 1991) . It is very related to the exemplar-based learning (J\u00e4kel et al., 2008; Ziyadi et al., 2020) . Some representative applications in the field of deep learning include machine translation (Gu et al., 2018) , sentiment transfer , QA (Karpukhin et al., 2020; Mao et al., 2020) , dialogue generation Cai et al., 2018; Pandey et al., 2018; We-ston et al., 2018; ), text summarization (Cao et al., 2017; Peng et al., 2019) , datato-text generation (Peng et al., 2019) , and text-tocode generation . All these retrieve-and-edit frameworks require their editors to be trained or fine-tuned on specific tasks. In contrast, our work uniquely examines how to better use GPT-3 as a universal editor without fine-tuning. We find that the more semantically similar context we provide to GPT-3, the better results the model can generate.",
"cite_spans": [
{
"start": 109,
"end": 135,
"text": "(Sumita and Hitoshi, 1991)",
"ref_id": "BIBREF40"
},
{
"start": 188,
"end": 208,
"text": "(J\u00e4kel et al., 2008;",
"ref_id": "BIBREF13"
},
{
"start": 209,
"end": 229,
"text": "Ziyadi et al., 2020)",
"ref_id": "BIBREF51"
},
{
"start": 323,
"end": 340,
"text": "(Gu et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 367,
"end": 391,
"text": "(Karpukhin et al., 2020;",
"ref_id": "BIBREF15"
},
{
"start": 392,
"end": 409,
"text": "Mao et al., 2020)",
"ref_id": "BIBREF26"
},
{
"start": 432,
"end": 449,
"text": "Cai et al., 2018;",
"ref_id": "BIBREF3"
},
{
"start": 450,
"end": 470,
"text": "Pandey et al., 2018;",
"ref_id": "BIBREF27"
},
{
"start": 471,
"end": 492,
"text": "We-ston et al., 2018;",
"ref_id": null
},
{
"start": 515,
"end": 533,
"text": "(Cao et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 534,
"end": 552,
"text": "Peng et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 578,
"end": 597,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Improve NLP Systems with kNN Some recent works try to incorporate non-parametric methods to improve a given model's performance. For example, the newly introduced kNN-LM (Khandelwal et al., 2019) , kNN-MT (Khandelwal et al., 2020) , and BERT-kNN (Kassner and Sch\u00fctze, 2020) generate the next token by retrieving the nearest k neighbors from the datastore. Another related work kNN classification model (Rajani et al., 2020) uses kNN as backoff when the confidence is low from the classification model. There are two key differences between our work and other approaches. First, we retrieve the nearest k neighbors to modify the conditional context instead of the prediction. Second, we do not have access to the parameters of GPT-3. Instead, we rely on some independently pre-trained models to get the sentence embeddings to retrieve the nearest k neighbors.",
"cite_spans": [
{
"start": 170,
"end": 195,
"text": "(Khandelwal et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 205,
"end": 230,
"text": "(Khandelwal et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 246,
"end": 273,
"text": "(Kassner and Sch\u00fctze, 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "This work presented a first step towards investigating the sensitivity of GPT-3 to in-context examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "To this end, we proposed KATE, a non-parametric selection approach that retrieves in-context examples according to their semantic similarity to the test samples. On several natural language understanding and generation tasks, the proposed method improves GPT-3's performance, over the random sampling baseline, by a significant margin. Particularly, KATE enables GPT-3 to achieve performance comparable to a fine-tuned T5 model on the tableto-text generation task and outperforms T5 on the QA task. Moreover, we found that fine-tuning the sentence embeddings for retrieval on task-related datasets gave rise to further empirical gains. Detailed analysis was conducted to explore the robustness of KATE to different hyperprameters, such as the number of in-context examples, examples' order, etc. One limitation we notice is that despite the improved performance on sentiment analysis, GPT-3 still lags behind the fine-tuned T5 model by a small margin. This suggests that our proposed method is more suitable and effective on long text generation tasks. We hope this work could provide insights for better understanding the behaviors of GPT-3 and represents a helpful step towards further improving its in-context learning capabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "8 Ethical and Broader Impacts Risk Our proposed KATE method significantly improves the in-context learning ability of GPT-3 and makes long-text generation more easily without fine-tuning the pre-trained model. However, one risk implication is that our proposed method will benefit the research groups which are financially capable of using such huge models. For individual or small-group researchers, they cannot apply our proposed method to their specific applications since they don't have access to the model. Our work has suggested researchers should focus more on investigating the in-context learning of pretrained models. One potential future direction is for researchers to scale-down the sizes of pre-trained models to find a balance between model performance and model size. Once a smaller model is obtained with comparable performance (enhanced by KATE), our proposed method can become more widely accessible to individual researchers. During the experiment on table- to-text generation, we have pointed out that large pre-trained language models could be susceptible to hallucination (case study in Table 5 ). This problem is more pronounced when we use randomly sampled examples. This happens because the language model is biased toward the training dataset. As shown in Table 5 , when random examples are used, the sentence generated by GPT-3 is grammatically correct, but some details never exist in the given table. In contrast, our proposed method, KATE, can significantly alleviate this problem by guiding GPT-3 to look for and generate the correct information. For similar reasons, large pretrained models could be potentially susceptible to gender and racial bias. Since our KATE method shows that in-context examples are crucial for highquality long-text generations, one way to alleviate the racial and gender bias is to incorporate an additional module to filter out offensive in-context examples. 
Since racial and gender bias are not our main research focus, a full investigation goes beyond the scope of our work. However, we believe this is an exciting opportunity for future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 947,
"end": 978,
"text": "During the experiment on table-",
"ref_id": null
},
{
"start": 1111,
"end": 1118,
"text": "Table 5",
"ref_id": "TABREF8"
},
{
"start": 1284,
"end": 1291,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Implementations of the proposed KATE method discussed in this paper are available at https: //github.com/jiachangliu/KATEGPT3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Code Availability",
"sec_num": null
},
{
"text": "As shown in the illustration of Figure 3 , GPT-3 is asked to translate \"mountain\" to its German version based on the three examples given as part of the input. ",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 40,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "A An Example of In-context Learning",
"sec_num": null
},
{
"text": "Due to the length limit of the main paper, we present in the appendix the full ToTTo case study comparing the random sampling baseline and our proposed KATE method. We present the case study in Table 10 . As we have discussed in the main paper, the in-context examples retrieved by KATE facilitates GPT-3 to effectively extract key information from the given table. Detailed numbers such as the number of points, rebounds, and assists have all been included in the sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 194,
"end": 202,
"text": "Table 10",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "C Complete ToTTo Case Study",
"sec_num": null
},
{
"text": "In contrast, the sentence generated by GPT-3 using randomly sampled in-context examples only extract partial information from the table. Only the number of points is included while the numbers of rebounds and assists are ignored. Moreover, the random sampling baseline could lead to the issue of hallucination. Both \"senior year\" and \"University of Texas\" are not present in the given table. One may wonder whether these wrong phrases were present in the randomly sampled in-context examples, which might have caused this issue. However, if we look at the randomly sampled in-context examples in the second block of the table, such information do not exist. This suggests such hallucinated phrases are generated by the language model itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Complete ToTTo Case Study",
"sec_num": null
},
{
"text": "This comparison provides some key insights on why KATE works better than the random sampling baseline. By retrieving semantically/syntactically similar in-context examples, KATE provides GPT-3 with a much more accurate template/structure to do text generation. Without such structure, GPT-3 can generate sentences that are fluent but do not meet the goal of a particular task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Complete ToTTo Case Study",
"sec_num": null
},
{
"text": "As we mentioned in the main paper, given a training dataset, we could take the full advantage of the GPT-3's model strength through fine-tuning. However, there are several advantages of prompt engineering over fine-tuning. First, fine-tuning requires access to the model parameters and gradients. It is impossible to access this information via the current GPT-3's API. Second, fine-tuning large models are time-consuming and costly. Ordinary research labs and individual developers do not have resources to accomplish such tasks. Third, storing large fine-tuned model checkpoints requires large storage space. Even if GPT-3 is fine-tuned and stored for many specific tasks/datasets, many finetuned checkpoints may not be frequently called. This is not energy efficient. Our proposed KATE method does not require costly fine-tuning and improves the random baseline on both text classification and generation tasks, sometimes by a significant margin. This makes it more practical to deploy the same GPT-3 model across all tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D On Prompt Engineering vs. Fine-tuning",
"sec_num": null
},
{
"text": "Although our primary goal is to improve GPT-3's in-context learning ability, we also include the finetuned T5 results as a reference (3B T5 on SST-2 and research, we didn't find a good publicly available fine-tuned model, so we fine-tune the pre-trained RoBERTa-large model on SST-2 by ourselves. The exact fine-tuning procedure, including the hyperparameters and learning rate, can be found at the Hug-gingFace website 6 . We fine-tune the RoBERTalarge model using a single V100 GPU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E T5 Baseline",
"sec_num": null
},
{
"text": "For reproducibility, we show the prompt templates used for all tasks in Tables 11 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "G Prompt Templates Used",
"sec_num": null
},
{
"text": "SST-2 & IMDB Sentence: comes from the brave , uninhibited performances. Label: Positive",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Prompt Template",
"sec_num": null
},
{
"text": "Sentence: This tearful movie about a sister and her battle to save as many souls as she can is very moving. The film does well in picking up the characters and showing how Sister Helen deals with each. A wonderful journey from life to death. Label:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Prompt Template",
"sec_num": null
},
{
"text": "ToTTo Table: <page_title>Dedric Lawson <section_title>College <table><cell>9.9 <col_header>RPG <cell>3.3 <col_header>APG <cell>19.2 <col_header>PPG Sentence: Dedric Lawson averaged 19.2 points, 9.9 rebounds and 3.3 assists per game. Table: <page_title>Trey Johnson <section_title>College <table><cell>32 <col_header>GP <cell>4.8 <col_header>RPG <cell>2.3 <col_header>APG <cell>23.5 <col_header>PPG Sentence:",
"cite_spans": [],
"ref_spans": [
{
"start": 6,
"end": 12,
"text": "Table:",
"ref_id": null
},
{
"start": 233,
"end": 239,
"text": "Table:",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task Prompt Template",
"sec_num": null
},
{
"text": "QA Q: The landscape design of the Gardens of Versailles is known as which style? A: The Persian style of architecture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Prompt Template",
"sec_num": null
},
{
"text": "Q: The Mughal Gardens of Rashtrapati Bhavan is modelled on which garden? A: Table 11 : The prompt templates used for all tasks discussed in the paper. We show only one in-context example per task for illustration purposes.",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 84,
"text": "Table 11",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Task Prompt Template",
"sec_num": null
},
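Given retrieved in-context examples, assembling the final GPT-3 prompt from a template like the QA one above is a simple concatenation. The sketch below is illustrative; `build_qa_prompt` is a hypothetical helper, not from the paper's code.

```python
# Hypothetical helper (not from the paper's code) that assembles a QA prompt
# in the "Q: ... A: ..." template shown in Table 11: retrieved in-context
# examples come first, followed by the test question with an empty answer
# slot for GPT-3 to complete.
def build_qa_prompt(examples, test_question):
    blocks = ["Q: {}\nA: {}".format(q, a) for q, a in examples]
    blocks.append("Q: {}\nA:".format(test_question))
    return "\n\n".join(blocks)

prompt = build_qa_prompt(
    [("The landscape design of the Gardens of Versailles is known as which style?",
      "The Persian style of architecture.")],
    "The Mughal Gardens of Rashtrapati Bhavan is modelled on which garden?",
)
print(prompt)
```

The same assembly pattern applies to the SST-2/IMDB and ToTTo templates, with "Sentence:/Label:" or "Table:/Sentence:" fields in place of "Q:/A:".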
{
"text": "The ToTTo code base and evaluation scripts can be found at https://github.com/google-research/ language/tree/master/language/totto",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The HuggingFace Model Zoo can be found at https: //huggingface.co/models.5 The Sentence-BERT Model Zoo can be found at https: //huggingface.co/sentence-transformers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The fine-tuning script we use can be found at https://huggingface.co/transformers/ v2.7.0/examples.html#glue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": " Table Table: <page_title >Trey Johnson <section_title >College <table ><cell >32 <col_header > GP <cell >4.8 <col_header >RPG <cell >2.3 <col_header >APG <cell >23.5 <col_header >PPG Table: <page_title >List of RAGBRAI overnight stops <section_title >By year <table ><cell > 1986 <col_header ><col_header >Year <cell >Audubon (1) <col_header >Route -start to finish (number indicates occurrence) <col_header >Monday <cell >2006 <col_header ><col_header > Year <cell >Audubon 2 : A sample of retrieved in-context examples from the ToTTo dataset. For the KATE method, GPT-3 pays more attention to detailed information such as the number of points, rebounds, and assists. In contrast, the random selection method leads GPT-3 to generate details which do not exist in the original table. Information such as \"senior year\" and \"University of Texas\" also do not exist in the randomly sampled in-context examples. This suggests that the wrong information was generated by the language model itself. Although the sentence by the random sampling baseline is fluent, it does meet the goal of the table-to-text task.ToTTo datasets, and 11B T5 on the QA datasets). The reason for reporting the 3B T5 results on the SST-2 and ToTTo datasets is that this is the largest T5 model we can use. For the 3B T5 model, Google Colab 3 provides a free V2-8 TPU to fine-tune the 3B model. We used the Colab tutorial notebook to fine-tune the 3B T5 model on the SST-2 and ToTTo training sets. We couldn't fine-tune the 11B T5 model because the model size is too large. Finetuning such a large model requires a V3-8 TPU, which is not free of charge. Fortunately, the original GPT-3 paper (Brown et al., 2020) has already reported the finet-tuned 11B T5 results on the three QA datasets, so we reuse these results in our main paper for the QA task. 
Our proposed KATE method significantly improves GPT-3, performing comparably to the fine-tuned T5 model on the table-to-text task and outperforming the fine-tuned T5 model on the QA task.",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 15,
"text": "Table Table:",
"ref_id": null
},
{
"start": 186,
"end": 192,
"text": "Table:",
"ref_id": null
}
],
"eq_spans": [],
"section": "Test",
"sec_num": null
},
{
"text": "As we mention in the main paper, we use the pretrained RoBERTa-large model 3 The Colab notebook on how to fine-tune the 3B T5 model can be found at https: //github.com/google-research/ text-to-text-transfer-transformer.as the first retrieval module, which has 355M parameters and is pre-trained with the MLM (masked language modeling) objective. The result given by this module is denoted as KATE roberta . We directly download this model from the HuggingFace Model Zoo (MIT license) 4 . All other retrieval modules share the same architecture as the RoBERTa-large module but are fine-tuned on specific datasets.For the fine-tuned retrieval modules, the first we use is the RoBERTa-large model fine-tuned on the SNLI and MultiNLI datasets (KATE nli ) (Bowman et al., 2015; Williams et al., 2017) ; the next we use is the RoBERTa-large model fine-tuned on the SNLI and MultiNLI dataset and then on the STS-B datasets (KATE nli+sts-b ) (Cer et al., 2017) . These fine-tuned models have already been accomplished and included by the Sentence-BERT family and are publicly available, so we directly download from the Sentence-BERT Model Zoo 5 .Lastly, specifically for the sentiment analysis task, we include a RoBERTa-large model finetuned on the SST-2 dataset (KATE sst-2 ) (Socher et al., 2013; . At the time of our",
"cite_spans": [
{
"start": 751,
"end": 772,
"text": "(Bowman et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 773,
"end": 795,
"text": "Williams et al., 2017)",
"ref_id": "BIBREF44"
},
{
"start": 934,
"end": 952,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF5"
},
{
"start": 1271,
"end": 1292,
"text": "(Socher et al., 2013;",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "F Details on Retrieval Modules",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semantic parsing on freebase from question-answer pairs",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Chou",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Frostig",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1533--1544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1533-1544.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Samuel R Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.05326"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Language models are few-shot learners",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Tom B Brown",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Ryder",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Subbiah",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Dhariwal",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Shyam",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Askell",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.14165"
]
},
"num": null,
"urls": [],
"raw_text": "Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Skeletonto-response: Dialogue generation guided by retrieval memory",
"authors": [
{
"first": "Deng",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Bi",
"suffix": ""
},
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Xiaojiang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wai",
"middle": [],
"last": "Lam",
"suffix": ""
},
{
"first": "Shuming",
"middle": [],
"last": "Shi",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.05296"
]
},
"num": null,
"urls": [],
"raw_text": "Deng Cai, Yan Wang, Wei Bi, Zhaopeng Tu, Xiaojiang Liu, Wai Lam, and Shuming Shi. 2018. Skeleton- to-response: Dialogue generation guided by retrieval memory. arXiv preprint arXiv:1809.05296.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Faithful to the original: Fact aware neural abstractive summarization",
"authors": [
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.04434"
]
},
"num": null,
"urls": [],
"raw_text": "Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2017. Faithful to the original: Fact aware neural abstractive summarization. arXiv preprint arXiv:1711.04434.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Inigo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1708.00055"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez- Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Handling divergent reference texts when evaluating table-to-text generation",
"authors": [
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.01081"
]
},
"num": null,
"urls": [],
"raw_text": "Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming- Wei Chang, Dipanjan Das, and William W Cohen. 2019. Handling divergent reference texts when evaluating table-to-text generation. arXiv preprint arXiv:1906.01081.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Making pre-trained language models better few-shot learners",
"authors": [
{
"first": "Tianyu",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Fisch",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2012.15723"
]
},
"num": null,
"urls": [],
"raw_text": "Tianyu Gao, Adam Fisch, and Danqi Chen. 2020. Making pre-trained language models better few-shot learners. arXiv preprint arXiv:2012.15723.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Search engine guided neural machine translation",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "5133--5140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor OK Li. 2018. Search engine guided neural machine trans- lation. In AAAI, pages 5133-5140.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Generating sentences by editing prototypes",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Guu",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Tatsunori",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Oren",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "437--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelvin Guu, Tatsunori B Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. Transactions of the Association for Computational Linguistics, 6:437-450.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A retrieve-and-edit framework for predicting structured outputs",
"authors": [
{
"first": "Tatsunori",
"middle": [
"B"
],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Kelvin",
"middle": [],
"last": "Guu",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Oren",
"suffix": ""
},
{
"first": "Percy",
"middle": [
"S"
],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "10052--10062",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tatsunori B Hashimoto, Kelvin Guu, Yonatan Oren, and Percy S Liang. 2018. A retrieve-and-edit frame- work for predicting structured outputs. In Advances in Neural Information Processing Systems, pages 10052-10062.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Measuring massive multitask language understanding",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Hendrycks",
"suffix": ""
},
{
"first": "Collin",
"middle": [],
"last": "Burns",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Basart",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Mantas",
"middle": [],
"last": "Mazeika",
"suffix": ""
},
{
"first": "Dawn",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Steinhardt",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.03300"
]
},
"num": null,
"urls": [],
"raw_text": "Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language under- standing. arXiv preprint arXiv:2009.03300.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Generalization and similarity in exemplar models of categorization: Insights from machine learning",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "J\u00e4kel",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Sch\u00f6lkopf",
"suffix": ""
},
{
"first": "Felix",
"middle": [
"A"
],
"last": "Wichmann",
"suffix": ""
}
],
"year": 2008,
"venue": "Psychonomic Bulletin & Review",
"volume": "15",
"issue": "2",
"pages": "256--271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank J\u00e4kel, Bernhard Sch\u00f6lkopf, and Felix A Wich- mann. 2008. Generalization and similarity in exem- plar models of categorization: Insights from machine learning. Psychonomic Bulletin & Review, 15(2):256- 271.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.03551"
]
},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. arXiv preprint arXiv:1705.03551.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Dense passage retrieval for open-domain question answering",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Oguz",
"suffix": ""
},
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.04906"
]
},
"num": null,
"urls": [],
"raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain ques- tion answering. arXiv preprint arXiv:2004.04906.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "BERT-kNN: Adding a kNN search component to pretrained language models for better QA",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Kassner",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.00766"
]
},
"num": null,
"urls": [],
"raw_text": "Nora Kassner and Hinrich Sch\u00fctze. 2020. Bert- knn: Adding a knn search component to pretrained language models for better qa. arXiv preprint arXiv:2005.00766.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Nearest neighbor machine translation",
"authors": [
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.00710"
]
},
"num": null,
"urls": [],
"raw_text": "Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Nearest neighbor machine translation. arXiv preprint arXiv:2010.00710.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Generalization through memorization: Nearest neighbor language models",
"authors": [
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.00172"
]
},
"num": null,
"urls": [],
"raw_text": "Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2019. Generalization through memorization: Nearest neighbor language models. arXiv preprint arXiv:1911.00172.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Natural questions: a benchmark for question answering research",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Redfield",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Epstein",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "453--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Ken- ton Lee, et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453- 466.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Crosslingual language model pretraining",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.07291"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. arXiv preprint arXiv:1901.07291.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Ves",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.13461"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: De- noising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Retrieval-augmented generation for knowledge-intensive NLP tasks",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Perez",
"suffix": ""
},
{
"first": "Aleksandara",
"middle": [],
"last": "Piktus",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Heinrich",
"middle": [],
"last": "K\u00fcttler",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.11401"
]
},
"num": null,
"urls": [],
"raw_text": "Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Hein- rich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim Rock- t\u00e4schel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. arXiv preprint arXiv:2005.11401.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Delete, retrieve, generate: A simple approach to sentiment and style transfer",
"authors": [
{
"first": "Juncen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.06437"
]
},
"num": null,
"urls": [],
"raw_text": "Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: A simple ap- proach to sentiment and style transfer. arXiv preprint arXiv:1804.06437.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Learning word vectors for sentiment analysis",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Maas",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"E"
],
"last": "Daly",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"T"
],
"last": "Pham",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies",
"volume": "",
"issue": "",
"pages": "142--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the associ- ation for computational linguistics: Human language technologies, pages 142-150.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Generation-augmented retrieval for open-domain question answering",
"authors": [
{
"first": "Yuning",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yelong",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.08553"
]
},
"num": null,
"urls": [],
"raw_text": "Yuning Mao, Pengcheng He, Xiaodong Liu, Ye- long Shen, Jianfeng Gao, Jiawei Han, and Weizhu Chen. 2020. Generation-augmented retrieval for open-domain question answering. arXiv preprint arXiv:2009.08553.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Exemplar encoder-decoder for neural conversation generation",
"authors": [
{
"first": "Gaurav",
"middle": [],
"last": "Pandey",
"suffix": ""
},
{
"first": "Danish",
"middle": [],
"last": "Contractor",
"suffix": ""
},
{
"first": "Vineet",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Sachindra",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1329--1338",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gaurav Pandey, Danish Contractor, Vineet Kumar, and Sachindra Joshi. 2018. Exemplar encoder-decoder for neural conversation generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1329-1338.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computa- tional Linguistics, pages 311-318.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "ToTTo: A controlled table-totext generation dataset",
"authors": [
{
"first": "Ankur",
"middle": [
"P"
],
"last": "Parikh",
"suffix": ""
},
{
"first": "Xuezhi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankur P Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, and Dipanjan Das. 2020. ToTTo: A controlled table-to- text generation dataset. In Proceedings of EMNLP.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Text generation with exemplar-based adaptive decoding",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Ankur",
"middle": [
"P"
],
"last": "Parikh",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.04428"
]
},
"num": null,
"urls": [],
"raw_text": "Hao Peng, Ankur P Parikh, Manaal Faruqui, Bhuwan Dhingra, and Dipanjan Das. 2019. Text genera- tion with exemplar-based adaptive decoding. arXiv preprint arXiv:1904.04428.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.10683"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv preprint arXiv:1910.10683.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Explaining and improving model behavior with k nearest neighbor representations",
"authors": [
{
"first": "Nazneen",
"middle": [
"Fatema"
],
"last": "Rajani",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.09030"
]
},
"num": null,
"urls": [],
"raw_text": "Nazneen Fatema Rajani, Ben Krause, Wengpeng Yin, Tong Niu, Richard Socher, and Caiming Xiong. 2020. Explaining and improving model behavior with k nearest neighbor representations. arXiv preprint arXiv:2010.09030.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Sentence-BERT: Sentence embeddings using siamese BERT-networks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.10084"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Making monolingual sentence embeddings multilingual using knowledge distillation",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.09813"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2020. Mak- ing monolingual sentence embeddings multilin- gual using knowledge distillation. arXiv preprint arXiv:2004.09813.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "AutoPrompt: Eliciting knowledge from language models with automatically generated prompts",
"authors": [
{
"first": "Taylor",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Yasaman",
"middle": [],
"last": "Razeghi",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Logan",
"suffix": "IV"
},
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.15980"
]
},
"num": null,
"urls": [],
"raw_text": "Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. 2020. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. arXiv preprint arXiv:2010.15980.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empiri- cal methods in natural language processing, pages 1631-1642.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Two are better than one: An ensemble of retrieval-and generation-based dialog systems",
"authors": [
{
"first": "Yiping",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1610.07149"
]
},
"num": null,
"urls": [],
"raw_text": "Yiping Song, Rui Yan, Xiang Li, Dongyan Zhao, and Ming Zhang. 2016. Two are better than one: An ensemble of retrieval-and generation-based dialog systems. arXiv preprint arXiv:1610.07149.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Experiments and prospects of example-based machine translation",
"authors": [
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Iida",
"suffix": ""
}
],
"year": 1991,
"venue": "29th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "185--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eiichiro Sumita and HDA Hitoshi. 1991. Experiments and prospects of example-based machine translation. In 29th Annual Meeting of the Association for Com- putational Linguistics, pages 185-192.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30:5998-6008.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel R",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.07461"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Retrieve and refine: Improved sequence generation models for dialogue",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Alexander H",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.04776"
]
},
"num": null,
"urls": [],
"raw_text": "Jason Weston, Emily Dinan, and Alexander H Miller. 2018. Retrieve and refine: Improved sequence generation models for dialogue. arXiv preprint arXiv:1808.04776.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel R",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.05426"
]
},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel R Bow- man. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "HuggingFace's Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.03771"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of- the-art natural language processing. arXiv preprint arXiv:1910.03771.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Response generation by context-aware prototype editing",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Shaohan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yunli",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhoujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "7281--7288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Wu, Furu Wei, Shaohan Huang, Yunli Wang, Zhou- jun Li, and Ming Zhou. 2019. Response generation by context-aware prototype editing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7281-7288.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "mt5: A massively multilingual pre-trained text-to-text transformer",
"authors": [
{
"first": "Linting",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Mihir",
"middle": [],
"last": "Kale",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Siddhant",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Barua",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.11934"
]
},
"num": null,
"urls": [],
"raw_text": "Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mt5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Learning to respond with deep neural networks for retrieval-based human-computer conversation system",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Yiping",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "55--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Yan, Yiping Song, and Hua Wu. 2016. Learning to respond with deep neural networks for retrieval-based human-computer conversation system. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, pages 55-64.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "XLNet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5753--5763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5753-5763.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Calibrate before use: Improving few-shot performance of language models",
"authors": [
{
"first": "Tony",
"middle": [
"Z"
],
"last": "Zhao",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Shi",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2102.09690"
]
},
"num": null,
"urls": [],
"raw_text": "Tony Z Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. arXiv preprint arXiv:2102.09690.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Example-based named entity recognition",
"authors": [
{
"first": "Morteza",
"middle": [],
"last": "Ziyadi",
"suffix": ""
},
{
"first": "Yuting",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Abhishek",
"middle": [],
"last": "Goswami",
"suffix": ""
},
{
"first": "Jade",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.10570"
]
},
"num": null,
"urls": [],
"raw_text": "Morteza Ziyadi, Yuting Sun, Abhishek Goswami, Jade Huang, and Weizhu Chen. 2020. Example-based named entity recognition. arXiv preprint arXiv:2008.10570.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Left: Effect of number of in-context examples for different selection methods. Right: Effect of the size of training set for retrieval on KATE. Two representative sentence encoders are used in these studies.",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "The figure above shows how to perform in-context learning with a language model. Three in-context examples and the test prompt are concatenated as a single string input for GPT-3, with a special character \"\\n\" inserted between two adjacent examples. GPT-3 keeps generating tokens until it produces the special character \"\\n\".",
"uris": null
},
"TABREF0": {
"type_str": "table",
"text": "Results of GPT-3 on the SST-2 sentiment analysis dataset. Five different examples are randomly selected from the training set for each trial. Different contexts induce different accuracies on the test set.",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF1": {
"type_str": "table",
"text": "What county is Frederick, MD in? What Olympic athlete has won the most medals? What county is Duluth Minnesota in? A: St. Louis County Q: What county is Frederick, MD in? A:",
"num": null,
"content": "<table><tr><td/><td>select nearest neighbors</td></tr><tr><td/><td>Test Prompt</td></tr><tr><td/><td>encode</td></tr><tr><td/><td>encode</td></tr><tr><td colspan=\"2\">Training Data</td></tr><tr><td>1</td><td>What county is Duluth Minnesota in?</td></tr><tr><td/><td>GPT-3</td></tr><tr><td/><td>Frederick County</td></tr></table>",
"html": null
},
"TABREF2": {
"type_str": "table",
"text": "Knn-Augmented in-conText Example selection",
"num": null,
"content": "<table><tr><td>Method Accuracy</td><td>Closest 46.0</td><td>Farthest 31.0</td></tr></table>",
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "Comparison of the EM score on the closest 10 neighbors and farthest 10 neighbors on a subset of 100 test samples of the NQ dataset.",
"num": null,
"content": "<table><tr><td>Algorithm 1 kNN In-context Example Selection</td></tr><tr><td>Given: test prompt x test , training set D</td></tr></table>",
"html": null
},
"TABREF5": {
"type_str": "table",
"text": "Results on the IMDB dataset. In-context examples are from the SST-2 dataset.",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF7": {
"type_str": "table",
"text": "Table-to-text generation results on the ToTTo dev dataset.",
"num": null,
"content": "<table><tr><td>Test Table</td><td>Table: &lt;page_title &gt;Trey Johnson &lt;section_title &gt;College &lt;table &gt;&lt;cell &gt;32 &lt;col_header &gt; GP &lt;cell &gt;4.8 &lt;col_header &gt;RPG &lt;cell &gt;2.3 &lt;col_header &gt;APG &lt;cell &gt;23.5 &lt;col_header &gt;PPG</td></tr><tr><td/><td>Table: &lt;page_title &gt;Dedric Lawson &lt;section_title &gt;College &lt;table &gt;&lt;cell &gt;9.9 &lt;col_header &gt;</td></tr><tr><td/><td>RPG &lt;cell &gt;3.3 &lt;col_header &gt;APG &lt;cell &gt;19.2 &lt;col_header &gt;PPG</td></tr><tr><td>Retrieved Examples</td><td>Sentence: Dedric Lawson averaged 19.2 points, 9.9 rebounds and 3.3 assists per game. Table: &lt;page_title &gt;Carsen Edwards &lt;section_title &gt;College &lt;table &gt;&lt;cell &gt;3.8 &lt;col_header &gt;</td></tr><tr><td/><td>RPG &lt;cell &gt;2.8 &lt;col_header &gt;APG &lt;cell &gt;18.5 &lt;col_header &gt;PPG</td></tr><tr><td/><td>Sentence: Edwards averaged 18.5 points, 3.8 rebounds and 2.8 assists per game.</td></tr><tr><td>Predictions</td><td>Ground-truth: Trey Johnson averaged 23.5 points, 4.8 rebounds, and 2.3 assists in 32 games.</td></tr></table>",
"html": null
},
"TABREF8": {
"type_str": "table",
"text": "",
"num": null,
"content": "<table><tr><td>Method RAG (Open-Domain) T5+SSM (Closed-Book) T5 (Closed-Book) GPT-3 (64 examples)</td><td>NQ 44.5 36.6 34.5 29.9</td><td>WQ 45.5 44.7 37.4 41.5</td><td>TriviaQA * 68.0 60.5 50.1 -</td></tr><tr><td/><td>Ours</td><td/><td/></tr><tr><td>Random kNNroberta KATEroberta KATEnli KATEnli+sts-b</td><td colspan=\"3\">28.6 \u00b1 0.3 41.0 \u00b1 0.5 59.2 \u00b1 0.4 24.0 23.9 26.2 40.0 47.7 57.5 40.8 50.6 60.9 41.6 50.2 62.4</td></tr></table>",
"html": null
},
"TABREF9": {
"type_str": "table",
"text": "",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF10": {
"type_str": "table",
"text": "The Mughal Gardens of Rashtrapati Bhavan is modelled on which garden? The Mughal Garden of Rashtrapati Bhavan is modelled on? The Persian style of architecture Ground-truth: Persian garden Who built the first Mughal Garden in India? Babur KATE: The Persian gardens The landscape design of the Gardens of Versailles is known as which style? French garden Random Baseline: Shalimar gardens Question: What city was Zeus the patron god of? What is the symbol of Zeus the Greek God? Bull Ground-truth: Olympia Where did Zeus spend most of his time? Mount Olympus KATE: Olympia Where was the statue of Zeus at Olympia located? In the Temple of Zeus",
"num": null,
"content": "<table><tr><td>In-Context Examples</td><td>Predictions</td></tr><tr><td colspan=\"2\">Question: Random Baseline Athens</td></tr><tr><td colspan=\"2\">Question: Where did the Dewey decimal system come from?</td></tr><tr><td>Where did the formula for area of a circle come from? Archimedes</td><td>Ground-truth: Melvil Dewey</td></tr><tr><td>Where did the name jack russell come from? Reverend John Russell</td><td>KATE: Melvil Dewey</td></tr><tr><td>Where did the letters of the alphabet come from? The Phoenician alphabet</td><td>Random Baseline: the library of Congress</td></tr></table>",
"html": null
},
"TABREF11": {
"type_str": "table",
"text": "Three samples of retrieved in-context examples from the NQ dataset. Three retrieved Q-A pairs are shown on the left. Predictions by the KATE method and useful details from in-context examples are shown in Green. Gold-standard references are shown in Blue. Predictions by the random baseline are shown in Red.",
"num": null,
"content": "<table><tr><td/><td/><td colspan=\"3\">EM Score vs. Number of In-context Examples</td><td/><td>EM Score vs. Size of Training Set</td></tr><tr><td/><td>37.5 40.0</td><td/><td/><td/><td/><td>38 40</td><td>Random KATE roberta KATE nli + sts b</td></tr><tr><td>EM Score</td><td>30.0 32.5 35.0</td><td/><td colspan=\"2\">Random KATE roberta KATE nli + sts b</td><td>EM Score</td><td>32 34 36</td></tr><tr><td/><td>27.5</td><td/><td/><td/><td/><td>30</td></tr><tr><td/><td>25.0</td><td>10</td><td>20 Number of In-context Examples 30 40 50</td><td>60</td><td/><td>28</td><td>Size of Training Set 0 10000 20000 30000 40000 50000 60000 70000</td></tr></table>",
"html": null
},
"TABREF14": {
"type_str": "table",
"text": "Data split for different datasets. In-context examples are selected from the training set. Because ToTTo and TriviaQA require submitting to their leaderboards, the evaluation is done on the dev sets. For all other datasets, the evaluation is done on the test sets.",
"num": null,
"content": "<table/>",
"html": null
}
}
}
}