ACL-OCL / Base_JSON /prefixD /json /deelio /2022.deelio-1.3.json
{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:21:43.618012Z"
},
"title": "Query Generation with External Knowledge for Dense Retrieval",
"authors": [
{
"first": "Sukmin",
"middle": [],
"last": "Cho",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Soyeong",
"middle": [],
"last": "Jeong",
"suffix": "",
"affiliation": {},
"email": "syjeong@nlp.kaist.ac.kr"
},
{
"first": "Wonsuk",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jong",
"middle": [
"C"
],
"last": "Park",
"suffix": "",
"affiliation": {},
"email": "park@nlp.kaist.ac.kr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Dense retrieval aims at searching for the most relevant documents to the given query by encoding texts in the embedding space, requiring a large amount of query-document pairs to train. Since manually constructing such training data is challenging, recent work has proposed to generate synthetic queries from documents and use them to train a dense retriever. However, compared to the human labeled queries, synthetic queries do not generally ask for hidden information, therefore leading to a degraded retrieval performance. In this work, we propose Query Generation with External Knowledge (QGEK), a novel method for generating queries with external knowledge related to the corresponding document. Specifically, we convert a query into a triplet-based template to accommodate external knowledge and transmit it to a pre-trained language model (PLM). We validate QGEK in both in-domain and out-domain dense retrieval settings. The dense retriever with the queries requiring external knowledge is found to make good performance improvement. Also, such queries are similar to the human labeled queries, confirmed by both human evaluation",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Dense retrieval aims at searching for the most relevant documents to the given query by encoding texts in the embedding space, requiring a large amount of query-document pairs to train. Since manually constructing such training data is challenging, recent work has proposed to generate synthetic queries from documents and use them to train a dense retriever. However, compared to the human labeled queries, synthetic queries do not generally ask for hidden information, therefore leading to degraded retrieval performance. In this work, we propose Query Generation with External Knowledge (QGEK), a novel method for generating queries with external knowledge related to the corresponding document. Specifically, we convert a query into a triplet-based template to accommodate external knowledge and transmit it to a pre-trained language model (PLM). We validate QGEK in both in-domain and out-domain dense retrieval settings. The dense retriever with the queries requiring external knowledge is found to make good performance improvement. Also, such queries are similar to the human labeled queries, confirmed by both human evaluation and the distribution of unique words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Information retrieval (IR) is the task of collecting relevant documents from a large corpus when given a query. IR not only plays an important role in the search system by itself, but is also crucially applied to various NLP tasks such as Open-Domain QA (Kwiatkowski et al., 2019) and Citation-Prediction with its ability to find grounding documents. As the simplest retrieval method, traditional term-based sparse models such as TF-IDF and BM25 (Robertson and Zaragoza, 2009) are widely used. However, these sparse retrieval models are unable to capture the semantic similarities without explicit lexical overlaps between the query and its relevant documents. [Figure: a document noting that zebra mussels have also had an effect on fish populations \u2026 they were first detected in Canada in the Great Lakes in 1988; human labeled query: \"when did zebra mussels come to north america\"; synthetic query: \"where are mussels located\".] As a solution, dense retrieval models have recently been proposed, where query and document representations are embedded into the latent space (Gillick et al., 2018; Karpukhin et al., 2020), though they require a large amount of paired query-document training samples for notable performance, which are challenging and expensive to collect. In response, a zero-shot setting is often adopted, but dense retrievers are known to show poor performance on a new target domain (Ma et al., 2021; Xin et al., 2021).",
"cite_spans": [
{
"start": 254,
"end": 280,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 446,
"end": 476,
"text": "(Robertson and Zaragoza, 2009)",
"ref_id": "BIBREF20"
},
{
"start": 1023,
"end": 1045,
"text": "(Gillick et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 1046,
"end": 1069,
"text": "Karpukhin et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 1345,
"end": 1362,
"text": "(Ma et al., 2021;",
"ref_id": "BIBREF16"
},
{
"start": 1363,
"end": 1380,
"text": "Xin et al., 2021)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One possible solution is to generate synthetic queries by fine-tuning a pre-trained language model (PLM) on a large IR benchmark dataset, and to use such queries for training dense retrievers (Ma et al., 2021; . However, this method does not yet provide synthetic queries whose quality is comparable to that of human labeled ones, thus hindering retrieval performance.",
"cite_spans": [
{
"start": 192,
"end": 209,
"text": "(Ma et al., 2021;",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In particular, we argue that, for the effective training of dense retrievers, query samples should be allowed to contain external knowledge that is not explicitly shown in documents. As the figure shows, the human labeled query contains the external knowledge that Canada and North America are related, which is easily grasped by humans but not by machines. Also, unique words in the query, often reflecting external knowledge, are more frequently included in the human labeled queries than in the synthetic queries. Dense retrievers would better capture semantic relations if they are trained with queries that share more characteristics with human labeled ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus on generating queries with external knowledge by employing a simple method of explicitly transmitting document-related information to a PLM. Even though PLMs can handle hidden information to some extent by learning from a large amount of data, we argue that transmitting additional pieces of external knowledge to a PLM contributes positively to generating queries requiring external knowledge. Specifically, we first interpret the given query into a triplet-based template to consider the given document and related external knowledge together. A PLM is then fine-tuned to generate queries from triplet-based templates, together with a processed KB-QA dataset. The dense retriever is trained with the synthetic queries from the template extracted from the given document and corresponding external knowledge. The proposed method, henceforth referred to as Query Generation with External Knowledge (QGEK), is schematically illustrated in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 960,
"end": 968,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We validate QGEK in both in-domain and out-domain (zero-shot) dense retrieval settings with diverse evaluation metrics. The experimental results show that queries that require external knowledge to answer are helpful for improving retrieval performance. Furthermore, we provide detailed qualitative analyses of synthetic queries and discuss which aspects of queries should be considered when training dense retrieval models. Our contributions in this work are threefold:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a method for generating queries that require hidden information from external sources that is not present in the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We experimentally show that the generated queries are similar to the gold queries that are labeled by human annotators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We evaluate the quality of generated queries with respect to dense retrieval performance and distribution of unique words so as to find optimal queries in training a dense retriever.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Related Work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The sparse retriever, a traditional IR system, retrieves the target documents based on lexical statistics such as term and document frequencies. BM25 (Robertson and Zaragoza, 2009) has arguably been the most frequently used method for such IR. However, as the retriever mainly handles exact matches of lexical entries, entries that are semantically similar but lexically different are not considered in the search for documents, affecting the user experience (Berger et al., 2000). The dense retriever (Karpukhin et al., 2020) has received much attention as a solution to this problem, triggered by the Transformer (Vaswani et al., 2017) network and PLMs. A dense retriever fetches the documents located closest to the query vector in the dense vector space, with document representations computed in advance for retrieval efficiency. The model maps queries and documents to the dense vector space using a bi-encoder structure initialized from a PLM such as BERT (Devlin et al., 2019a).",
"cite_spans": [
{
"start": 153,
"end": 183,
"text": "(Robertson and Zaragoza, 2009)",
"ref_id": "BIBREF20"
},
{
"start": 456,
"end": 477,
"text": "(Berger et al., 2000)",
"ref_id": "BIBREF0"
},
{
"start": 500,
"end": 524,
"text": "(Karpukhin et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 619,
"end": 641,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF25"
},
{
"start": 953,
"end": 975,
"text": "(Devlin et al., 2019a)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dense Retriever",
"sec_num": "2.1"
},
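The precomputed-index retrieval described above can be sketched in a few lines. This is an illustrative NumPy stand-in, with random vectors in place of real encoder outputs; it is not DPR's actual implementation.

```python
import numpy as np

def top_k(query_vec, doc_matrix, k=3):
    """Rank documents by dot-product similarity to the query vector.

    doc_matrix holds one precomputed document embedding per row,
    mirroring how a dense retriever indexes its corpus in advance.
    """
    scores = doc_matrix @ query_vec   # similarity of every document
    order = np.argsort(-scores)[:k]   # indices of the k best scores
    return order, scores[order]

rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 64))               # 100 fake document embeddings
query = docs[42] + 0.01 * rng.normal(size=64)   # query near document 42
idx, s = top_k(query, docs, k=3)
print(idx[0])  # document 42 should rank first, since it matches the query
```

In practice, the argsort over a score matrix is replaced by an approximate nearest-neighbor index, but the ranking principle is the same dot product.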
{
"text": "The dense retriever requires a large-scale dataset for model training, and curating such datasets is an arduous endeavor. Recent work proposed a zero-shot setting where dense retrievers are trained on a single large IR corpus, rather than on every dataset. Nonetheless, retrieval in such a setting is still quite challenging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dense Retriever",
"sec_num": "2.1"
},
{
"text": "Query generation is a simple method that addresses the shortage of training data for a dense retriever (Ma et al., 2021; . The most commonly used method has been to fine-tune the T5-base model (Raffel et al., 2020) on the MS MARCO dataset (Nguyen et al., 2016) and create synthetic queries in the target domain. Exploiting the size and domain of MS MARCO, we can obtain effective retrieval performance by fine-tuning the T5 model. Info-HCVAE (Lee et al., 2020) achieved good performance by modeling the relationship between document, query, and answer as a probability distribution and learning the latent vectors with an auto-encoder. Answers and documents are used as inputs when creating queries. In these two methods, however, the processing of hidden information in the document still depends only on PLMs.",
"cite_spans": [
{
"start": 103,
"end": 120,
"text": "(Ma et al., 2021;",
"ref_id": "BIBREF16"
},
{
"start": 193,
"end": 214,
"text": "(Raffel et al., 2020)",
"ref_id": "BIBREF19"
},
{
"start": 239,
"end": 260,
"text": "(Nguyen et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Query Generation",
"sec_num": "2.2"
},
{
"text": "The existing methods focus only on the given document when generating queries, without much consideration of hidden information. In contrast, QGEK includes not only the document but also the hidden information that can be inferred from the given document with external knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Generation",
"sec_num": "2.2"
},
{
"text": "External knowledge has been widely used along with PLMs for several NLP tasks. Prior work augmented PLMs using ConceptNet (Speer et al., 2017) for a commonsense question answering (QA) task and showed that a KB such as ConceptNet contributes to the explicit grounding of the output, resulting in better reasoning abilities.",
"cite_spans": [
{
"start": 111,
"end": 131,
"text": "(Speer et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting External Knowledge",
"sec_num": "2.3"
},
{
"text": "Furthermore, Zhou et al. (2018) proposed to generate knowledge-based dialogues for an Open-Domain Dialogue system. Dinan et al. (2019) confirmed that the additional external knowledge positively affects dialogue generation. In addition, Shuster et al. (2021) showed that the related external knowledge can be exploited to address critical issues, such as factual incorrectness and hallucination, in dialogue systems.",
"cite_spans": [
{
"start": 13,
"end": 31,
"text": "Zhou et al. (2018)",
"ref_id": "BIBREF34"
},
{
"start": 115,
"end": 134,
"text": "Dinan et al. (2019)",
"ref_id": "BIBREF7"
},
{
"start": 237,
"end": 258,
"text": "Shuster et al. (2021)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting External Knowledge",
"sec_num": "2.3"
},
{
"text": "While external knowledge from KB has proved helpful in Commonsense QA and Open-Domain Dialogue domains, it is relatively underexplored for generating synthetic queries for dense retrieval. In this work, we adopt KB into a PLM for query generation and show the effectiveness of training dense retrievers with the synthetic queries on IR benchmark datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting External Knowledge",
"sec_num": "2.3"
},
{
"text": "QGEK is designed to generate a new synthetic query that requires an implicit inference process for the answer by exploiting both the given document and external knowledge hidden in the document. First, we interpret the query as a triplet <S, R, O> that can utilize both sources, where the triplet is converted into a single-text template to simplify the transmission to a PLM. Then, we construct triplet-based template-query pairs as the training dataset for fine-tuning a PLM. To generate queries for target documents, the triplet-based template is extracted from a general document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "The dense retriever maps query q and document d into an n-dimensional vector space with query encoder",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3.1"
},
{
"text": "E_Q(\u2022, \u03b8_q) and document encoder E_D(\u2022, \u03b8_d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3.1"
},
{
"text": "where \u03b8 is the encoder's parameter. The similarity score f (q, d) between query q and document d is computed as a dot product:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3.1"
},
{
"text": "f(q, d) = E_Q(q, \u03b8_q)^T \u2022 E_D(d, \u03b8_d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3.1"
},
{
"text": "Training the dense retriever shapes the vector space so that relevant query-document pairs have a higher similarity score than irrelevant pairs. Given query q, let (D+_q, D-_q) be the pair of the sets of relevant and irrelevant documents. The objective function of the dense retriever is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3.1"
},
{
"text": "min_\u03b8 \u2211_{d+ \u2208 D+_q} \u2211_{d- \u2208 D-_q} L(f(q, d+), f(q, d-))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3.1"
},
{
"text": "The loss L is the negative log likelihood of the positive passage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3.1"
},
{
"text": "Queries can be simply mapped to the <S, R, O> triplet. The query <S, R, O> asks for the answer O, which has relationship R with subject S. For example, the query \"big little lies season 2 how many episodes\" and the answer \"7 episodes\" can be mapped to <\"big little lies season 2\", \"number of episodes\", \"7 episodes\">. The information of each item in a triplet can be largely divided into sentence or word units. We use two types of information to express each item of a triplet in more detail: let W_x = {w_x^1, . . . , w_x^n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Interpretation as Triplet Form",
"sec_num": "3.2"
},
{
"text": "and l_x be the set of word-unit information and the single-sentence-unit information of item x, respectively. Then, query Q can be interpreted as the triplet items with their own information:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Interpretation as Triplet Form",
"sec_num": "3.2"
},
{
"text": "Q = {(W_S, l_S), (W_R, l_R), (W_O, l_O)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Interpretation as Triplet Form",
"sec_num": "3.2"
},
{
"text": "For generating a query that requires an implicit inference, a form of query that can utilize both the document and external knowledge is required. The proposed triplet simply handles both document and external knowledge by arranging information into the appropriate positions in the triplet. When transmitting such triplets to a PLM, we use the simple form of a single text template. The triplet-based template consists of triplet items delimited by special tokens as shown in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 476,
"end": 484,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Query Interpretation as Triplet Form",
"sec_num": "3.2"
},
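Figure 3 itself is lost in this parse, so the exact special tokens are unknown; the sketch below uses hypothetical delimiter tokens (<S>, <R>, <O>, <w>, <l>) purely to illustrate how word-unit and sentence-unit information for each triplet item could be serialized into one template string. The example content (zebra mussels / location / Canada) is taken from the paper's own running example.

```python
def build_template(triplet):
    """Serialize {(W_x, l_x)} for x in S, R, O into one template string.

    The delimiter tokens here are illustrative stand-ins; the paper's
    actual special tokens are defined in its Figure 3.
    """
    parts = []
    for item in ("S", "R", "O"):
        words, sentence = triplet[item]
        parts.append(f"<{item}> <w> {' '.join(words)} <l> {sentence}")
    return " ".join(parts)

triplet = {
    "S": (["zebra", "mussels"], "Zebra mussels are a small freshwater mussel."),
    "R": (["location"], "The place where something can be found."),
    "O": (["Canada"], "Canada is a country in North America."),
}
template = build_template(triplet)
print(template.startswith("<S> <w> zebra mussels"))
```

The object slot carries the externally sourced sentence, which is exactly where knowledge absent from the document enters the generator's input.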
{
"text": "We construct a dataset consisting of triplet-query pairs for fine-tuning a PLM. A KB-based query can be converted into the proposed triplet. A canonical logical form of a KB-based query is a representation that expresses the same meaning as the relationship between entities in the KB. A simple interpretation of the proposed triplet is as a canonical form consisting of two entities and a relationship between them. For example, suppose that the entity 'Michael Dotson' is first selected as subject S and has the word unit information 'Actor' and the sentence unit information 'Michael Dotson is an actor'. Suppose also that there is an entity 'Fresno' linked by the 'place-of-birth' relationship with 'Michael Dotson'. The other entity and relationship may have their own information from the KB. The triplet-based template is created by combining all of them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Construction for Query Generator",
"sec_num": "3.3"
},
{
"text": "The fine-tuned PLM with the dataset constructed in Section 3.3 needs the triplet-based template to generate a query from a general document. We extract triplet items from the given document, and collect external knowledge to fill the template from the open web.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applying Template for General Document",
"sec_num": "3.4"
},
{
"text": "For example, suppose that there is a document about zebra mussels (cf. Figure 3) . The subject S, relation R and object O are selected as 'zebra mussels', 'location' and 'Canada', respectively. The document alone is not enough to fill the information of object O, 'Canada'. The external knowledge, 'Canada is a country in North America', is extracted from the open web. Both given document and external knowledge are arranged into the appropriate positions in the template.",
"cite_spans": [],
"ref_spans": [
{
"start": 71,
"end": 80,
"text": "Figure 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Applying Template for General Document",
"sec_num": "3.4"
},
{
"text": "We evaluate the performance of the dense retriever when trained with the synthetic queries, compared to the human labeled queries. The dense retriever used in our experiments is DPR (Karpukhin et al., 2020). The training dataset of the dense retriever consists of pairs of the documents of Natural Questions (NQ) (Kwiatkowski et al., 2019), also exploited as the source for the query generator, and the synthetic queries of the proposed method.",
"cite_spans": [
{
"start": 186,
"end": 210,
"text": "(Karpukhin et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 309,
"end": 335,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setups",
"sec_num": "4"
},
{
"text": "We evaluate the effectiveness of the generated queries when using external knowledge on IR benchmark datasets. We conduct experiments in two settings: in-domain and out-domain (zero-shot). We measure the in-domain performance on NQ and the out-domain performance on 13 representative IR datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "In-Domain Dataset NQ (Kwiatkowski et al., 2019) is a benchmark dataset for the open-domain question answering task, with queries collected from the Google search engine and documents drawn from Wikipedia. We use the preprocessed version of NQ following DPR (Karpukhin et al., 2020), which includes 58,880 training pairs and 7,405 test queries. The documents in NQ are used as input to the query generator.",
"cite_spans": [
{
"start": 21,
"end": 46,
"text": "(Kwiatkowski et al., 2019",
"ref_id": "BIBREF12"
},
{
"start": 225,
"end": 249,
"text": "(Karpukhin et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "Out-Domain Dataset To validate the quality of generated queries for training the dense retriever, it is necessary to show the retrieval performance on diverse tasks. The datasets used in the out-domain experiments cover diverse tasks and domains and require retrieval models for finding grounding documents. They are shown in Table 1 . Touche-2020 (Bondarenko et al., 2020) Entity-Retrieval Wikipedia DBPedia (Hasibi et al., 2017) Question Answering Wikipedia HotpotQA (Yang et al., ",
"cite_spans": [
{
"start": 343,
"end": 368,
"text": "(Bondarenko et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 404,
"end": 425,
"text": "(Hasibi et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 463,
"end": 476,
"text": "(Yang et al.,",
"ref_id": null
}
],
"ref_spans": [
{
"start": 321,
"end": 328,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "FiQA-2018 (Maia et al., 2018) Duplicate-Question Retrieval Quora Quora Fact Checking Wikipedia FEVER (Thorne et al., ",
"cite_spans": [
{
"start": 10,
"end": 29,
"text": "(Maia et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 101,
"end": 116,
"text": "(Thorne et al.,",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2018) Finance",
"sec_num": null
},
{
"text": "Climate-Fever (Leippold and Diggelmann, 2020) Scientific SciFact (Wadden et al., 2020) Passage-Retrieval Misc. MS MARCO (Nguyen et al., 2016) Citation-Prediction Scientific SCIDOCS Bio-Medical IR Bio-Medical TREC-COVID (Voorhees et al., 2021) Bio-Medical NFCorpus (Boteva et al., 2016) ",
"cite_spans": [
{
"start": 65,
"end": 86,
"text": "(Wadden et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 120,
"end": 141,
"text": "(Nguyen et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 219,
"end": 242,
"text": "(Voorhees et al., 2021)",
"ref_id": "BIBREF26"
},
{
"start": 264,
"end": 285,
"text": "(Boteva et al., 2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2018) Wikipedia",
"sec_num": null
},
{
"text": "We explain the metrics for evaluating the performance of a dense retriever. In the basic setting, the retriever searches for top k relevant documents on a given query. We employ 4 metrics for top k documents: ACC@k, MRR@k, MAP@k, and nDCG@k. The in-domain experiment is evaluated with these 4 metrics, and the out-domain performance is evaluated with only nDCG@10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.2"
},
{
"text": "ACC@k is the percentage of queries for which a correct document is included in the top-k hits. It ignores the ranks of the retrieved documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.2"
},
{
"text": "MRR@k (Mean Reciprocal Rank) computes the average, over queries, of the reciprocal rank of the first correct document within the top-k. The remaining correct documents are not included in computing MRR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.2"
},
{
"text": "MAP@k (Mean Average Precision) first computes the average precision score over the ranks of the correct documents in the top-k hits for a given query. The mean of these average precision scores over queries is the value of MAP@k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.2"
},
{
"text": "nDCG@k (Normalised Discounted Cumulative Gain) is similar to MAP@k, but additionally rewards rankings in which more relevant documents appear higher in the top-k list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.2"
},
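The four metrics can be sketched as follows for binary relevance. These are the textbook definitions applied to a single query (MRR and MAP average the per-query values over all queries), not the paper's evaluation code.

```python
import math

def acc_at_k(ranked, relevant, k):
    """1 if any relevant document appears in the top-k hits, else 0."""
    return int(any(d in relevant for d in ranked[:k]))

def mrr_at_k(ranked, relevant, k):
    """Reciprocal rank of the first relevant document in the top-k."""
    for i, d in enumerate(ranked[:k], start=1):
        if d in relevant:
            return 1.0 / i
    return 0.0

def ap_at_k(ranked, relevant, k):
    """Average precision over the relevant documents found in the top-k."""
    hits, precisions = 0, []
    for i, d in enumerate(ranked[:k], start=1):
        if d in relevant:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / min(len(relevant), k) if precisions else 0.0

def ndcg_at_k(ranked, relevant, k):
    """DCG of the ranking divided by the DCG of an ideal ranking."""
    dcg = sum(1.0 / math.log2(i + 1)
              for i, d in enumerate(ranked[:k], start=1) if d in relevant)
    ideal = sum(1.0 / math.log2(i + 1)
                for i in range(1, min(len(relevant), k) + 1))
    return dcg / ideal if ideal else 0.0

ranked = ["d3", "d1", "d7", "d2"]
relevant = {"d1", "d2"}
print(mrr_at_k(ranked, relevant, 4))  # first hit at rank 2 -> 0.5
```

With this ranking, ACC@4 is 1, and nDCG@4 falls below 1 because the relevant documents sit at ranks 2 and 4 instead of 1 and 2.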
{
"text": "Query Generator We used BART, one of the widely used PLMs, to generate the synthetic query from the proposed template. BART, based on the Transformer seq2seq architecture, is trained by reconstructing text from noised input. The de-noising ability of BART is suitable for generating queries from text with noise from the external source. SimpleQuestions (Bordes et al., 2015) (SQ), a question answering dataset based on KB, is",
"cite_spans": [
{
"start": 353,
"end": 373,
"text": "(Bordes et al., 2015",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.3"
},
{
"text": "selected to convert the query's logical form into the proposed template. A query in SQ is generated from a one-to-one correspondence of KB entities, which is very similar to the form of our proposed triplet. The conversion process proceeds in the same way as mentioned in Section 3.3. The BART-large model (d = 1024) is fine-tuned for 5 epochs with 47,180 template-query pairs. For training the model, the Adam optimizer (Kingma and Ba, 2015) is used with a batch size of 8, and the learning rate starts from 10^-5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.3"
},
{
"text": "We used the documents in the NQ train split (Kwiatkowski et al., 2019) , exploited as a training dataset in DPR (Karpukhin et al., 2020) , as the target dataset for query generation. The documents are converted into a template through the process described in Section 3.4. To obtain external knowledge of the subject and object, the first paragraph and category information of the Wikipedia documents are collected and inserted into the template. The generated queries and the corresponding NQ documents, from which the queries were generated, are used in the training step of DPR.",
"cite_spans": [
{
"start": 44,
"end": 70,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 112,
"end": 136,
"text": "(Karpukhin et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Query Generation",
"sec_num": null
},
{
"text": "Retriever model The dense retriever used in the training has the same structure proposed by DPR (Karpukhin et al., 2020) , which has a bi-encoder structure that calculates the dot product between query and document embeddings as the ranking score. The train dataset consists of the generated queries and the corresponding NQ documents, for comparison with the human labeled queries of NQ. The encoder is initialized from BERT (base, uncased) (Devlin et al., 2019b) . The retriever is trained with the Adam optimizer (Kingma and Ba, 2015) for 25 epochs. The negative samples for contrastive learning are sampled from a single batch. The size of the train batch is 8 and the learning rate is initialized to 2 \u00b7 10^-5.",
"cite_spans": [
{
"start": 96,
"end": 120,
"text": "(Karpukhin et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 439,
"end": 461,
"text": "(Devlin et al., 2019b)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Query Generation",
"sec_num": null
},
{
"text": "Our main results are shown in Table 2 . We evaluate the retrieval performance of the dense retriever trained with the synthetic queries from QGEK against the gold queries in the NQ train split. In the in-domain experiments, the dense retriever trained with the gold queries of NQ showed superior performance over the retriever trained with QGEK. QGEK shows better performance on all metrics than the ablation that does not include external knowledge in the proposed triplet. The average NDCG@10 in the out-domain experiments shows a small difference (-0.0108) between the gold queries and QGEK. In detail, the retriever trained with QGEK shows better performance on 4 datasets: DBpedia, HotpotQA, Fever, and Climate-Fever. On the remaining 9 datasets, the retriever trained with the gold queries performs better.",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Overall Result",
"sec_num": "5.1"
},
{
"text": "Using external knowledge yields more appropriate queries for most datasets than not using it, though human labeled queries are more appropriate for training the dense retriever in the in-domain experiments. On the other hand, we see that QGEK gives performance comparable to that with human labeled queries in the out-domain experiments, and even outperforms them on some datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Result",
"sec_num": "5.1"
},
{
"text": "Experiments are conducted to compare against query generator baselines. We selected GenQ and Info-HCVAE (Lee et al., 2020) models as the baselines. The models receive the documents in NQ train split as input. [Figure 4: NDCG@10 average of the dense retrieval trained with various queries for NQ and 13 out-domain datasets.] The size",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Synthetic Queries",
"sec_num": "5.2"
},
{
"text": "and documents of the dataset are the same as those of the NQ train split except for synthetic queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Synthetic Queries",
"sec_num": "5.2"
},
{
"text": "Baseline Comparison A comparison with other query creation methods is made, as shown in Figure 4 . The average NDCG@10 performance in the in-domain and out-domain experiments is calculated by training the dense retriever with the generated queries. Ranked in descending order, the models trained with synthetic queries are GenQ, QGEK, and Info-HCVAE. QGEK shows somewhat lower performance than the retriever trained with gold queries, but GenQ shows the best performance, indicating that many queries suitable for IR tasks are generated by training on the MS MARCO dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 96,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis of Synthetic Queries",
"sec_num": "5.2"
},
{
"text": "The MS MARCO dataset is most widely used for dense retriever training, and training a dense retriever with MS MARCO is known to give a higher performance than training it on other datasets such as NQ. Also, it has a huge amount of data, more than 500,000 pairs. This has the advantage of generating queries suitable for IR tasks based on abundant and task-appropriate data. However, the proposed method is trained on a relatively small amount of 47,180 data from SimpleQuestions, a KB-QA dataset. There is a possibility that the generated queries are largely incompatible with the IR task. However, the proposed method focuses on utilizing external knowledge, and it can be applied orthogonally to the MS MARCO dataset, which we leave for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Synthetic Queries",
"sec_num": "5.2"
},
{
"text": "Unique & Non-Unique Words in Query We analyze whether the words in a query are from the corresponding documents. The implicitly inferring query has a higher probability of including unique words not present in the document. So, the distribution of unique & non-unique words can indirectly tell the existence of such queries. The stop words, such as the interrogative word and articles, in a query are excluded from the analysis. The distribution of unique words in a query is shown in Figure 5 . The 27% of gold labels of NQ contain 3 unique words, and 80% of the cases contain 4 or fewer unique words. QGEK shows a similar pattern of non-unique words compared with the gold, and over 40% of queries contain more than 5 unique words. The distribution of GenQ shows a similar pattern to that of the gold queries in both unique and non-unique words. Unlike other models, the Info most frequently includes 2 nonunique words.",
"cite_spans": [],
"ref_spans": [
{
"start": 485,
"end": 493,
"text": "Figure 5",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Analysis of Synthetic Queries",
"sec_num": "5.2"
},
{
"text": "Note that QGEK generates queries with more unique words than other queries, together with a similar distribution of non-unique words to that of gold queries. This implies that QGEK can generate queries requesting hidden information not present in the document. Given the performance of the dense retriever (Figure 4 ) and the distribution of unique & non-unique words ( Figure 5 ), generating queries both close to the human labeled ones and appropriate to the IR tasks is an important factor for an optimal training of the dense retriever. Our future work includes generating queries not only close to human labeled ones but also optimized for IR tasks, such as exploiting the MS MARCO dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 306,
"end": 315,
"text": "(Figure 4",
"ref_id": null
},
{
"start": 370,
"end": 378,
"text": "Figure 5",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Analysis of Synthetic Queries",
"sec_num": "5.2"
},
{
"text": "We use human evaluation to check whether the synthetic queries are similar to human labeled ones. The randomly sampled 30 documents and corresponding queries are given to three annotators fluent in English. After reading the given documents, annotators evaluated each query on a scale of 0-5 against three points: 1) how relevant a given query is to the document (Relevancy), 2) how grammatically natural it is (Grammaticality), and 3) how much reasoning is needed to answer (Difficulty). As shown in Table 4 , QGEK shows statistically higher degrees of grammaticality and difficulty than the gold labels. These results indicate that queries from QGEK need more hidden information not present in the documents compared to other queries.",
"cite_spans": [],
"ref_spans": [
{
"start": 501,
"end": 508,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Manual Evaluation",
"sec_num": null
},
{
"text": "Case Study Examples of the documents and corresponding queries are shown in Table 3 . Document 1 is about the water treatment problem caused by mussels. In answering the gold label, external knowledge that Canada is in North America is needed for the inference from the document. However, other generated queries do not require much external information. In the case of Docu-ment 2, the introduction of the game \"Call of Duty\", the gold label does not require hidden information in the document. However, in the case of GenQ, the additional information that PlayStation 3, Xbox 360, and Wii are gaming consoles is required for a suitable answer. This gives evidence that there are cases in which queries requiring inference from external knowledge are generated through the proposed method. In the case of Document 3, introduction of Call the Midwife, the query from QGEK needs external information about the gender of actors to answer.",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 83,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Manual Evaluation",
"sec_num": null
},
{
"text": "Although QGEK generates the queries that need external knowledge to answer, they have a similar pattern that begins with an interrogative word. In the case of GenQ and Info-HCVAE, different patterns exist through the queries of Document 3. It can be inferred that the triplet-based template makes the logical structure simple, and that the syntactic diversity of the generated query tends to decrease. For future work, we plan to propose a template that can include more logical structures, developed from the current triplet-based template.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manual Evaluation",
"sec_num": null
},
{
"text": "We presented a novel query generation method, QGEK, that generates synthetic queries in a form more similar to human labeled queries by using external knowledge. In order to use unprocessed external knowledge, we convert a query into a tripletbased template, which can include information of subjects and answers. Remarkably, when dense retrieval models are trained with the queries generated from QGEK, the performance has improved much compared to using the queries without external knowledge. Also, we have shown that including external knowledge give rises to the distribution of the unique words similar to that of the human labeled queries. We believe that QGEK can also be applied to the other generation methods by orthogonally adding some external knowledge processing modules. For future work, we plan to generate queries both close to human labeled ones and optimized for IR tasks and to allow the template to accept more general logical forms for diverse highquality queries. The code and data will be made available for public access.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "This work was supported by Institute for Information and communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2018-0-00582, Prediction and augmentation of the credibility distribution via linguistic analysis and automated evidence document collection).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Bridging the lexical chasm: Statistical approaches to answer-finding",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '00",
"volume": "",
"issue": "",
"pages": "192--199",
"other_ids": {
"DOI": [
"10.1145/345508.345576"
]
},
"num": null,
"urls": [],
"raw_text": "Adam Berger, Rich Caruana, David Cohn, Dayne Fre- itag, and Vibhu Mittal. 2000. Bridging the lexical chasm: Statistical approaches to answer-finding. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '00, page 192-199, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Overview of touch\u00e9 2020: Argument retrieval -extended abstract",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Bondarenko",
"suffix": ""
},
{
"first": "Maik",
"middle": [],
"last": "Fr\u00f6be",
"suffix": ""
},
{
"first": "Meriem",
"middle": [],
"last": "Beloucif",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Gienapp",
"suffix": ""
},
{
"first": "Yamen",
"middle": [],
"last": "Ajjour",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Panchenko",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Hagen",
"suffix": ""
}
],
"year": 2020,
"venue": "CLEF",
"volume": "",
"issue": "",
"pages": "384--395",
"other_ids": {
"DOI": [
"10.1007/978-3-030-58219-7_26"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Bondarenko, Maik Fr\u00f6be, Meriem Be- loucif, Lukas Gienapp, Yamen Ajjour, Alexander Panchenko, Chris Biemann, Benno Stein, Henning Wachsmuth, Martin Potthast, and Matthias Hagen. 2020. Overview of touch\u00e9 2020: Argument retrieval -extended abstract. In CLEF, pages 384-395.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Large-scale simple question answering with memory networks",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.02075"
]
},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A full-text learning to rank dataset for medical information retrieval",
"authors": [
{
"first": "Vera",
"middle": [],
"last": "Boteva",
"suffix": ""
},
{
"first": "Demian",
"middle": [],
"last": "Gholipour",
"suffix": ""
},
{
"first": "Artem",
"middle": [],
"last": "Sokolov",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Information Retrieval",
"volume": "",
"issue": "",
"pages": "716--722",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vera Boteva, Demian Gholipour, Artem Sokolov, and Stefan Riezler. 2016. A full-text learning to rank dataset for medical information retrieval. In Ad- vances in Information Retrieval, pages 716-722, Cham. Springer International Publishing.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "SPECTER: Document-level representation learning using citation-informed transformers",
"authors": [
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Feldman",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Weld",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2270--2282",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.207"
]
},
"num": null,
"urls": [],
"raw_text": "Arman Cohan, Sergey Feldman, Iz Beltagy, Doug Downey, and Daniel Weld. 2020. SPECTER: Document-level representation learning using citation-informed transformers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2270-2282, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/n19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019a. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019b. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Wizard of wikipedia: Knowledge-powered conversational agents",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Kurt",
"middle": [],
"last": "Shuster",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In International Conference on Learning Representations.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "End-to-end retrieval in continuous space",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Presta",
"suffix": ""
},
{
"first": "Gaurav Singh",
"middle": [],
"last": "Tomar",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gillick, Alessandro Presta, and Gaurav Singh Tomar. 2018. End-to-end retrieval in continuous space.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Dbpedia-entity v2: A test collection for entity search",
"authors": [
{
"first": "Faegheh",
"middle": [],
"last": "Hasibi",
"suffix": ""
},
{
"first": "Fedor",
"middle": [],
"last": "Nikolaev",
"suffix": ""
},
{
"first": "Chenyan",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Krisztian",
"middle": [],
"last": "Balog",
"suffix": ""
},
{
"first": "Svein",
"middle": [
"Erik"
],
"last": "Bratsberg",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Kotov",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Callan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '17",
"volume": "",
"issue": "",
"pages": "1265--1268",
"other_ids": {
"DOI": [
"10.1145/3077136.3080751"
]
},
"num": null,
"urls": [],
"raw_text": "Faegheh Hasibi, Fedor Nikolaev, Chenyan Xiong, Krisz- tian Balog, Svein Erik Bratsberg, Alexander Kotov, and Jamie Callan. 2017. Dbpedia-entity v2: A test collection for entity search. In Proceedings of the 40th International ACM SIGIR Conference on Re- search and Development in Information Retrieval, SIGIR '17, page 1265-1268, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Dense passage retrieval for opendomain question answering",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Oguz",
"suffix": ""
},
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "6769--6781",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.550"
]
},
"num": null,
"urls": [],
"raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open- domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Natural Questions: A Benchmark for Question Answering Research. Transactions of the Association for Computational Linguistics",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Redfield",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Epstein",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "7",
"issue": "",
"pages": "453--466",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00276"
]
},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Ken- ton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: A Benchmark for Question Answering Research. Transactions of the Association for Com- putational Linguistics, 7:453-466.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Generating diverse and consistent QA pairs from contexts with information-maximizing hierarchical conditional VAEs",
"authors": [
{
"first": "Seanie",
"middle": [],
"last": "Dong Bok Lee",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Donghwan",
"middle": [],
"last": "Woo Tae Jeong",
"suffix": ""
},
{
"first": "Sung",
"middle": [
"Ju"
],
"last": "Kim",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hwang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "208--224",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.20"
]
},
"num": null,
"urls": [],
"raw_text": "Dong Bok Lee, Seanie Lee, Woo Tae Jeong, Dongh- wan Kim, and Sung Ju Hwang. 2020. Gener- ating diverse and consistent QA pairs from con- texts with information-maximizing hierarchical con- ditional VAEs. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 208-224, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Climate-fever: A dataset for verification of realworld climate claims",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Leippold",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Diggelmann",
"suffix": ""
}
],
"year": 2020,
"venue": "NeurIPS 2020 Workshop on Tackling Climate Change with Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Leippold and Thomas Diggelmann. 2020. Climate-fever: A dataset for verification of real- world climate claims. In NeurIPS 2020 Workshop on Tackling Climate Change with Machine Learning.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7871--7880",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.703"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and com- prehension. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Zero-shot neural passage retrieval via domain-targeted synthetic question generation",
"authors": [
{
"first": "Ji",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Korotkov",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "1075--1088",
"other_ids": {
"DOI": [
"10.18653/v1/2021.eacl-main.92"
]
},
"num": null,
"urls": [],
"raw_text": "Ji Ma, Ivan Korotkov, Yinfei Yang, Keith Hall, and Ryan McDonald. 2021. Zero-shot neural passage retrieval via domain-targeted synthetic question generation. In Proceedings of the 16th Conference of the Euro- pean Chapter of the Association for Computational Linguistics: Main Volume, pages 1075-1088, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Www'18 open challenge: Financial opinion mining and question answering",
"authors": [
{
"first": "Macedo",
"middle": [],
"last": "Maia",
"suffix": ""
},
{
"first": "Siegfried",
"middle": [],
"last": "Handschuh",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [],
"last": "Freitas",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Mcdermott",
"suffix": ""
},
{
"first": "Manel",
"middle": [],
"last": "Zarrouk",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Balahur",
"suffix": ""
}
],
"year": 2018,
"venue": "Companion Proceedings of the The Web Conference",
"volume": "18",
"issue": "",
"pages": "1941--1942",
"other_ids": {
"DOI": [
"10.1145/3184558.3192301"
]
},
"num": null,
"urls": [],
"raw_text": "Macedo Maia, Siegfried Handschuh, Andr\u00e9 Freitas, Brian Davis, Ross McDermott, Manel Zarrouk, and Alexandra Balahur. 2018. Www'18 open challenge: Financial opinion mining and question answering. In Companion Proceedings of the The Web Conference 2018, WWW '18, page 1941-1942, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Ms marco: A human generated machine reading comprehension dataset",
"authors": [
{
"first": "Tri",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Mir",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "Xia",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Tiwary",
"suffix": ""
},
{
"first": "Rangan",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine read- ing comprehension dataset.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Machine Learning Research",
"volume": "21",
"issue": "140",
"pages": "1--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The probabilistic relevance framework: Bm25 and beyond",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Robertson",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Zaragoza",
"suffix": ""
}
],
"year": 2009,
"venue": "Foundations and Trends\u00ae in Information Retrieval",
"volume": "3",
"issue": "4",
"pages": "333--389",
"other_ids": {
"DOI": [
"10.1561/1500000019"
]
},
"num": null,
"urls": [],
"raw_text": "Stephen Robertson and Hugo Zaragoza. 2009. The prob- abilistic relevance framework: Bm25 and beyond. Foundations and Trends\u00ae in Information Retrieval, 3(4):333-389.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Retrieval augmentation reduces hallucination in conversation",
"authors": [
{
"first": "Kurt",
"middle": [],
"last": "Shuster",
"suffix": ""
},
{
"first": "Spencer",
"middle": [],
"last": "Poff",
"suffix": ""
},
{
"first": "Moya",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2021",
"volume": "",
"issue": "",
"pages": "3784--3803",
"other_ids": {
"DOI": [
"10.18653/v1/2021.findings-emnlp.320"
]
},
"num": null,
"urls": [],
"raw_text": "Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval augmentation reduces hallucination in conversation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3784-3803, Punta Cana, Do- minican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Conceptnet 5.5: An open multilingual graph of general knowledge",
"authors": [
{
"first": "Robyn",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Chin",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17",
"volume": "",
"issue": "",
"pages": "4444--4451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of gen- eral knowledge. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17, page 4444-4451. AAAI Press.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models",
"authors": [
{
"first": "Nandan",
"middle": [],
"last": "Thakur",
"suffix": ""
},
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "R\u00fcckl\u00e9",
"suffix": ""
}
],
"year": 2021,
"venue": "Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nandan Thakur, Nils Reimers, Andreas R\u00fcckl\u00e9, Ab- hishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In Thirty-fifth Con- ference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "FEVER: a large-scale dataset for fact extraction and VERification",
"authors": [
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": ""
},
{
"first": "Arpit",
"middle": [],
"last": "Mittal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "809--819",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1074"
]
},
"num": null,
"urls": [],
"raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Trec-covid: Constructing a pandemic information retrieval test collection",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Voorhees",
"suffix": ""
},
{
"first": "Tasmeer",
"middle": [],
"last": "Alam",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bedrick",
"suffix": ""
},
{
"first": "Dina",
"middle": [],
"last": "Demner-Fushman",
"suffix": ""
},
{
"first": "William",
"middle": [
"R"
],
"last": "Hersh",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Kirk",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Soboroff",
"suffix": ""
},
{
"first": "Lucy",
"middle": [
"Lu"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2021,
"venue": "SIGIR Forum",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3451964.3451965"
]
},
"num": null,
"urls": [],
"raw_text": "Ellen Voorhees, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, William R. Hersh, Kyle Lo, Kirk Roberts, Ian Soboroff, and Lucy Lu Wang. 2021. Trec-covid: Constructing a pandemic information retrieval test collection. SIGIR Forum, 54(1).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Retrieval of the best counterargument without prior topic knowledge",
"authors": [
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Shahbaz",
"middle": [],
"last": "Syed",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1023"
]
},
"num": null,
"urls": [],
"raw_text": "Henning Wachsmuth, Shahbaz Syed, and Benno Stein. 2018. Retrieval of the best counterargument without prior topic knowledge. In Proceedings of the 56th",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "241--251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 241-251, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Fact or fiction: Verifying scientific claims",
"authors": [
{
"first": "David",
"middle": [],
"last": "Wadden",
"suffix": ""
},
{
"first": "Shanchuan",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Lucy",
"middle": [
"Lu"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Madeleine",
"middle": [],
"last": "Van Zuylen",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7534--7550",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.609"
]
},
"num": null,
"urls": [],
"raw_text": "David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, and Hannaneh Hajishirzi. 2020. Fact or fiction: Verifying scientific claims. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 7534-7550, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Gpl: Generative pseudo labeling for unsupervised domain adaptation of dense retrieval",
"authors": [
{
"first": "Kexin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nandan",
"middle": [],
"last": "Thakur",
"suffix": ""
},
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2112.07577"
]
},
"num": null,
"urls": [],
"raw_text": "Kexin Wang, Nandan Thakur, Nils Reimers, and Iryna Gurevych. 2021. Gpl: Generative pseudo labeling for unsupervised domain adaptation of dense retrieval. arXiv preprint arXiv:2112.07577.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Connecting the dots: A knowledgeable path generator for commonsense question answering",
"authors": [
{
"first": "Peifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ilievski",
"suffix": ""
},
{
"first": "Pedro",
"middle": [
"A"
],
"last": "Szekely",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2020,
"venue": "EMNLP (Findings)",
"volume": "",
"issue": "",
"pages": "4129--4140",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.369"
]
},
"num": null,
"urls": [],
"raw_text": "Peifeng Wang, Nanyun Peng, Filip Ilievski, Pedro A. Szekely, and Xiang Ren. 2020. Connecting the dots: A knowledgeable path generator for commonsense question answering. In EMNLP (Findings), pages 4129-4140.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Zero-shot dense retrieval with momentum adversarial domain invariant representations",
"authors": [
{
"first": "Ji",
"middle": [],
"last": "Xin",
"suffix": ""
},
{
"first": "Chenyan",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Ashwin",
"middle": [],
"last": "Srinivasan",
"suffix": ""
},
{
"first": "Ankita",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Damien",
"middle": [],
"last": "Jose",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Bennett",
"suffix": ""
}
],
"year": 2021,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ji Xin, Chenyan Xiong, Ashwin Srinivasan, Ankita Sharma, Damien Jose, and Paul Bennett. 2021. Zero-shot dense retrieval with momentum adver- sarial domain invariant representations. ArXiv, abs/2110.07581.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2369--2380",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1259"
]
},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christo- pher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 2369-2380, Brussels, Belgium. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A dataset for document grounded conversations",
"authors": [
{
"first": "Kangyan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Shrimai",
"middle": [],
"last": "Prabhumoye",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "708--713",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1076"
]
},
"num": null,
"urls": [],
"raw_text": "Kangyan Zhou, Shrimai Prabhumoye, and Alan W Black. 2018. A dataset for document grounded con- versations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 708-713, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "The analysis of human labeled query and synthetic query. (Left) Examples of the human labeled query and synthetic query. (Right) Average of unique words in human labeled query and synthetic query."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Figure 1"
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": ""
},
"FIGREF3": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Overall methods of query generation with external knowledge and dense retrieval training with synthetic queries."
},
"FIGREF4": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "michael dotson born\" Query S : ({Michael Dotson, Actor, \u2026}, 'Michael Dotson is an actor') R : ({Place of Birth, People, \u2026}, '') O : ({Frenso, sports team location}, 'Frenso is the center of the San Joaquin Valley \u2026') Triple Template Figure 3: Overview of the Methods for Query Generation based on Triplet-based Template."
},
"FIGREF7": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Distribution of unique & non-unique words in the queries."
},
"TABREF0": {
"content": "<table><tr><td colspan=\"2\">[subject][name]Call of Duty: World at War[type]\u2026[description]\u2026</td></tr><tr><td>[relation][name]\u2026[type]\u2026</td><td/></tr><tr><td colspan=\"2\">[object][name]PlayStation3[type]\u2026[description]\u2026Game Console\u2026</td></tr><tr><td/><td>PlayStation3</td></tr><tr><td>Call of Duty: World at War</td><td>\u2026 home video game console\u2026</td></tr><tr><td>Call of Duty: World at War is a first-</td><td/></tr><tr><td>person shooter video game \u2026 released</td><td/></tr><tr><td>for Microsoft Windows, the</td><td/></tr><tr><td>PlayStation3, Xbox 360, and Wii \u2026</td><td/></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": "What console is call of duty: world at war available on?"
},
"TABREF1": {
"content": "<table><tr><td colspan=\"5\">Dataset Construction for Query Generator</td><td>Applying Template for General Document</td></tr><tr><td/><td/><td/><td/><td/><td>Zebra Mussels</td></tr><tr><td>S</td><td>R</td><td>O</td><td>[subject] [relation]</td><td>S R</td><td>\u2026 that zebra mussels have also had an effect on fish populations \u2026 They were first detected in Canada in the Great Lakes in 1988, \u2026</td></tr><tr><td/><td/><td/><td>[object]</td><td>O</td><td/></tr><tr><td/><td>&amp;</td><td/><td>&amp;</td><td/><td/></tr><tr><td colspan=\"3\">Question</td><td>Question</td><td/><td/></tr><tr><td/><td/><td/><td/><td/><td>[subject][name]\u2026[type]\u2026[description]\u2026</td></tr><tr><td/><td/><td/><td/><td/><td>[relation][name]\u2026[type]\u2026</td></tr><tr><td colspan=\"3\">KB &amp; Question</td><td colspan=\"2\">Template &amp; Question</td><td>[object][name]\u2026[type]\u2026[description]\u2026</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": "[subject][name]Michael Dotson[type]Actor[description]Michael Dotson is an actor [relation][name]Place of Birth[type]People [object][name]Frenso[type]sports team location[description]Frensso is the center of the San Joaquin Valley"
},
"TABREF2": {
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null,
"text": ""
},
"TABREF4": {
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null,
"text": "In-domain and Out-domain performance of DPR. The scores for out-domain denote nDCG@10. The scores over the gold query are marked in bold, and the better scores between queries from QGEK are underlined."
},
"TABREF5": {
"content": "<table><tr><td>Document 1</td><td>Document 2</td><td>Document 3</td></tr><tr><td>Gold Label when did zebra mussels come to north america</td><td>Gold Label who made call of duty world at war</td><td>Gold Label where in london is call the midwife set</td></tr><tr><td>QGEK What is the date zebra mussel was first detected in Canada? (-) Ext. Knowledge what country is zebra mussel found</td><td>QGEK What console is call of duty: world at war avail-able on (-) Ext. Knowledge what is the setting of call of duty: world at war</td><td>QGEK who is the actress for call the midwife (-) Ext. Knowledge who produced call the midwife</td></tr><tr><td>Info-HCVAE where did the lake st. clairs originate?</td><td>Info-HCVAE what setting was the setting for the game of the \" world at war :\"?</td><td>Info-HCVAE in what time period did the bbc's the midcene series take place?</td></tr><tr><td>GenQ where are mussels located</td><td>GenQ what year did call of duty world at war come out</td><td>GenQ cast of call the midwife</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": "(...) reporting having problems with their water treatment plants with the mussels attaching themselves to pipeworks. (...) They were first detected in Canada in the Great Lakes in 1988, in Lake St. Clair, located east/northeast of Detroit and Windsor. (...) Call of Duty: World at War is a first-person shooter video game developed by Treyarch and published by Activision. It was released for Microsoft Windows, the PlayStation 3, Xbox 360, and Wii in November 2008. (...) \"World at War\" received ports featuring different storyline versions, while remaining in the World War II setting, for the and . (...) (...) Call the Midwife is a BBC period drama series about a group of nurse midwives working in the East End of London in the late 1950s and early 1960s. It stars Jessica Raine, Miranda Hart, Helen George, Bryony Hannah, Laura Main, Jenny Agutter, Pam Ferris, (...) and Leonie Elliott. The series is produced by Neal Street Productions, a production company founded (...)"
},
"TABREF6": {
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null,
"text": "Examples of documents and the corresponding queries. The non-unique words are underlined, and the unique words are marked in bold."
},
"TABREF8": {
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null,
"text": "The result of human evaluation. Statistically significant difference compared to gold via t-test (p < 0.05) is marked in bold."
}
}
}
}