{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:31:29.329369Z"
},
"title": "UGent-T2K at the 2nd DialDoc Shared Task: A Retrieval-Focused Dialog System Grounded in Multiple Documents",
"authors": [
{
"first": "Yiwei",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ghent University -imec",
"location": {
"settlement": "IDLab Ghent",
"country": "Belgium"
}
},
"email": ""
},
{
"first": "Amir",
"middle": [],
"last": "Hadifar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ghent University -imec",
"location": {
"settlement": "IDLab Ghent",
"country": "Belgium"
}
},
"email": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Deleu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ghent University -imec",
"location": {
"settlement": "IDLab Ghent",
"country": "Belgium"
}
},
"email": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Demeester",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ghent University -imec",
"location": {
"settlement": "IDLab Ghent",
"country": "Belgium"
}
},
"email": ""
},
{
"first": "Chris",
"middle": [],
"last": "Develder",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ghent University -imec",
"location": {
"settlement": "IDLab Ghent",
"country": "Belgium"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This work presents the contribution from the Text-to-Knowledge team of Ghent University (UGent-T2K) 1 to the MultiDoc2Dial shared task on modeling dialogs grounded in multiple documents. We propose a pipeline system, comprising (1) document retrieval, (2) passage retrieval, and (3) response generation. We engineered these individual components mainly by, for (1)-(2), combining multiple ranking models and adding a final LambdaMART reranker, and, for (3), by adopting a Fusion-in-Decoder (FiD) model. We thus significantly boost the baseline system's performance (over +10 points for both F1 and SacreBLEU). Further, error analysis reveals two major failure cases, to be addressed in future work: (i) in case of topic shift within the dialog, retrieval often fails to select the correct grounding document(s), and (ii) generation sometimes fails to use the correctly retrieved grounding passage. Our code is released at this link.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "This work presents the contribution from the Text-to-Knowledge team of Ghent University (UGent-T2K) 1 to the MultiDoc2Dial shared task on modeling dialogs grounded in multiple documents. We propose a pipeline system, comprising (1) document retrieval, (2) passage retrieval, and (3) response generation. We engineered these individual components mainly by, for (1)-(2), combining multiple ranking models and adding a final LambdaMART reranker, and, for (3), by adopting a Fusion-in-Decoder (FiD) model. We thus significantly boost the baseline system's performance (over +10 points for both F1 and SacreBLEU). Further, error analysis reveals two major failure cases, to be addressed in future work: (i) in case of topic shift within the dialog, retrieval often fails to select the correct grounding document(s), and (ii) generation sometimes fails to use the correctly retrieved grounding passage. Our code is released at this link.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Most prior research on document-grounded dialog systems assumes a single document for each dialog (Choi et al., 2018; Reddy et al., 2019; Feng et al., 2020). There are relatively few works on Multi-Document Grounded (MDG) dialog modeling, which requires a dialog system to (i) retrieve grounded passages (or documents) given the user question, and then (ii) generate responses based on the retrieval results and dialog context. Real-world MDG applications (e.g., administrative question answering, travel booking assistance, and procedural task guidance) are challenging because user behavior in such dialogs, drawing on diverse information sources, is more complex. In particular, for (i) retrieval of grounding text passage(s), the challenges pertain to keeping track of dialog state, topic shift (e.g., switching from driving license requirements to car insurance), vocabulary mismatch, vague question formulation, etc. Furthermore, (ii) response generation needs to appropriately phrase the answer to fit in a human(-like) dialog rather than simply copying a source document snippet.",
"cite_spans": [
{
"start": 98,
"end": 117,
"text": "(Choi et al., 2018;",
"ref_id": "BIBREF3"
},
{
"start": 118,
"end": 137,
"text": "Reddy et al., 2019;",
"ref_id": "BIBREF17"
},
{
"start": 138,
"end": 156,
"text": "Feng et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We leverage the recently released dialog dataset, MultiDoc2Dial (Feng et al., 2021), to tackle the aforementioned challenges. We build a pipeline system (Fig. 1) comprising (1) a document retriever, (2) a passage retriever, and (3) an answer generator fusing multiple grounding input passages. Given the dialog context (i.e., the dialog history and user question), a document retriever searches the given supporting documents to select the top-m related ones. Subsequently, these full documents are segmented into shorter passages ranked by a passage retriever. For these retrieval components (1)-(2), we use an ensemble approach - combining BM25, cosine similarity, etc.; for passage retrieval, we included Dense Passage Retrieval (DPR; Karpukhin et al., 2020) - followed by a reranking step using LambdaMART (Burges, 2010). The top-k passages are fused with the dialog context by a response generator to produce knowledge-grounded responses, based on Fusion-in-Decoder (FiD; Izacard and Grave, 2021). We contribute with: (i) a multi-stage pipeline system comprising first the grounding text retrieval stages, split further into document and subsequent passage retrieval components (both using a multi-feature ensemble system), and second an answer generation model fusing information from multiple passages; (ii) experiments demonstrating that our pipeline system outperforms the baseline method by a large margin (over +10 points for both F1 and SacreBLEU); (iii) insightful error analysis, suggesting that the main shortcomings of the current system include failures of (a) the retrieval stages in case of topic shifts by the user, and (b) the answer generation stage to identify the correct grounding passage among its inputs. Our code is released at https://github.com/YiweiJiang2015/ugent-t2k-dialdoc.",
"cite_spans": [
{
"start": 64,
"end": 83,
"text": "(Feng et al., 2021)",
"ref_id": "BIBREF6"
},
{
"start": 791,
"end": 816,
"text": "LambdaMART (Burges, 2010)",
"ref_id": null
},
{
"start": 970,
"end": 994,
"text": "Izacard and Grave, 2021)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 152,
"end": 159,
"text": "Fig. 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The MultiDoc2Dial shared task comprises two subtasks: in the seen-domain subtask (referenced by subscript S), the system can rely on training data comprising both exemplary dialogs and the corresponding document set from the domains it will be tested on, whereas in the unseen-domain subtask (referenced by subscript U) the system has seen neither related dialogs nor documents before.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2"
},
{
"text": "In general, for both subtasks, a system first retrieves relevant documents from a document pool (D_S or D_U) given the dialog context, i.e., a user's question Q_i (where i is the turn number) and the full conversation history Q_<i. Current state-of-the-art solutions split long documents into passages (P_S or P_U) to facilitate more fine-grained localization of the grounding information. Second, the grounding information G (span(s) or passage(s)) for Q_i has to be identified within the passages of retrieved documents. The MultiDoc2Dial dataset was curated such that G for each question can be found within exactly one document, while the full dialog's answers jointly may span multiple documents, thus requiring a model to decide when to switch to a different document. (Note that, depending on how exactly a document is split into shorter passages, G may extend over more than one passage.) Third, a generation model takes as input G and Q_\u2264i to generate responses whose meaningfulness and coherence are rated using automatic metrics (i.e., F1_U, SacreBLEU, Rouge-L and Meteor).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2"
},
{
"text": "The next subsections detail the aforementioned components (1)-(3) of our pipeline system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "The input to our document retrieval model is a user's dialog question Q and the output is a set of m documents {d_1, d_2, ..., d_m} selected from the document pool D. For each question, we rank all the documents by computing similarity scores. We use various scoring modules as input to the LambdaMART reranker (Burges, 2010). Our scoring modules include: (i) different BM25 (Trotman et al., 2014) configurations, (ii) cosine similarity between dense representations at both the word level and the passage level, and (iii) off-the-shelf term-matching techniques provided by Terrier (Macdonald et al., 2012).",
"cite_spans": [
{
"start": 389,
"end": 411,
"text": "(Trotman et al., 2014)",
"ref_id": "BIBREF18"
},
{
"start": 588,
"end": 612,
"text": "(Macdonald et al., 2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document Retrieval",
"sec_num": "3.1"
},
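The BM25 variants above form the backbone of the document scorer. As an illustration, here is a minimal from-scratch Okapi BM25 ranker; this is a sketch under our own assumptions (whitespace tokenization, illustrative k1/b defaults), not the paper's tuned configuration or the Terrier implementation it uses.

```python
import math
from collections import Counter

def bm25_rank(query, docs, k1=0.9, b=0.4):
    """Score each document for the query with Okapi BM25; return (ranking, scores).
    ranking is a list of document indices sorted best-first."""
    tokenized = [d.lower().split() for d in docs]
    q_terms = query.lower().split()
    n = len(docs)
    avgdl = sum(len(d) for d in tokenized) / n
    # document frequency of each term across the pool
    df = Counter()
    for d in tokenized:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in tokenized:
        tf = Counter(d)
        s = 0.0
        for t in q_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return sorted(range(n), key=lambda i: scores[i], reverse=True), scores
```

In the pipeline, scores like these would be one feature column among many fed to the LambdaMART reranker rather than the final ranking by themselves.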
{
"text": "Given the top-m documents returned by the document retriever, a passage retriever ranks the passages belonging to these documents. More specifically, we follow the baseline's segmentation of a document into passages, ensuring a fair performance comparison between our passage retriever and the baseline. The same scoring modules as for document retrieval are applied at the passage level, with additional similarity features computed by DPR (Karpukhin et al., 2020).",
"cite_spans": [
{
"start": 438,
"end": 462,
"text": "(Karpukhin et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Passage Retrieval",
"sec_num": "3.2"
},
{
"text": "We choose FiD (Izacard and Grave, 2021) as our generation model, which can be trained independently from the retrieval module. FiD was originally proposed for the open-book question answering problem (Kwiatkowski et al., 2019; Joshi et al., 2017) and proved highly effective at incorporating knowledge from multiple passages. It is built on top of a transformer-based seq2seq model. We employ BART (Lewis et al., 2020a) as the pretrained weights of FiD instead of T5 as in (Izacard and Grave, 2021), since fine-tuning BART is computationally more affordable in our case. FiD's encoder takes as input a question and a list of top-k ranked passages formatted as ((Q, P_1), (Q, P_2), ..., (Q, P_k)). Each pair (Q, P) is encoded individually. The concatenation of the k encodings is used as the memory accessed by the decoder for the cross-attention operation. The training objective is the cross-entropy loss between generated sequences and gold responses.",
"cite_spans": [
{
"start": 14,
"end": 38,
"text": "(Izacard and Grave, 2021",
"ref_id": "BIBREF9"
},
{
"start": 201,
"end": 227,
"text": "(Kwiatkowski et al., 2019;",
"ref_id": "BIBREF12"
},
{
"start": 228,
"end": 247,
"text": "Joshi et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 394,
"end": 415,
"text": "(Lewis et al., 2020a)",
"ref_id": "BIBREF13"
},
{
"start": 469,
"end": 494,
"text": "(Izacard and Grave, 2021)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Response Generation",
"sec_num": "3.3"
},
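The FiD input scheme described above (encode each (Q, P_i) pair separately, then concatenate the token-level encodings into one memory for the decoder) can be sketched as follows. This is a toy illustration, not the actual BART-based model: the `<USER>`/`<CONTEXT>` markers approximate the separators described in Section 4.3.2, and `toy_encoder` is a stand-in for a real transformer encoder.

```python
from typing import Callable, List

def fid_encode(question: str,
               passages: List[str],
               encoder: Callable[[str], List[List[float]]]) -> List[List[float]]:
    """Encode each (question, passage) pair independently and concatenate the
    per-token encodings into one 'memory' for the decoder's cross-attention."""
    memory = []
    for p in passages:
        pair = f"<USER> {question} <CONTEXT> {p}"  # hypothetical separator format
        memory.extend(encoder(pair))              # append this pair's token states
    return memory

# toy stand-in encoder: one 2-d vector per whitespace token
toy_encoder = lambda text: [[float(len(tok)), 1.0] for tok in text.split()]
```

The key property is that encoding cost grows linearly in the number of passages (each pair is encoded alone), while the decoder still attends jointly over all passages through the concatenated memory.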
{
"text": "We evaluate our pipeline system on the MultiDoc2Dial dataset, containing 4,796 conversations grounded in 488 documents. In the dialog data, each conversation covers at least one topic from four domains (see Appendix B.2). It is challenging to retrieve the grounding information when users shift their topic (i.e., implicitly referring to another document) during a dialog. In total, there are 61,078 turns, including 29,746 user questions, which are split into 21,451, 4,201 and 4,094 for the train, dev and test sets, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "The baseline system uses the Retrieval Augmented Generation (RAG; Lewis et al., 2020b) model composed of two neural modules: DPR for passage retrieval and BART for response generation. First, a pre-trained DPR is finetuned on the passage retrieval task built from the MultiDoc2Dial dataset. Second, RAG is finetuned to generate responses for MultiDoc2Dial dialogs by inserting the finetuned DPR weights and freezing DPR's context encoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline System",
"sec_num": "4.2"
},
{
"text": "We present experimental results for the document retriever, passage retriever and generator separately. Ablation studies of the document retriever focus on analysing the contributions of different features. We validate that first applying the document retriever boosts the passage retriever's performance. Results of the response generation experiments show that there is an optimal number of passages to input to the FiD model. We also discuss FiD's difficulty in recognizing the grounding knowledge among multiple passage inputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval and Generation Results",
"sec_num": "4.3"
},
{
"text": "Document retrieval - Table 1 presents our results for document retrieval. The first row shows a simple BM25 with the same configuration as the official baseline but at the document level. BM25_tuned denotes BM25 with additional preprocessing and postprocessing over its input features and output rankings (see Section B.1 for details). BM25_title is another BM25, trained solely on document titles and subtitles. The reason for this choice is to distinguish the importance of title words from other words, as the title provides a strong signal for document retrieval. In addition, to capture semantic relatedness and to address the word-mismatch problem between questions and documents, we compute word-level and passage-level embeddings to retrieve relevant documents. At the word level (denoted by 'Word emb.'), we simply average word vectors to obtain question and document representations, then apply TF-IDF weighting and principal component removal (Arora et al., 2017), followed by cosine similarity. At the passage level (denoted by 'Passage emb.'), we use a pre-trained model 2 to embed a document's passages and use the highest passage score to rank the document. Macdonald et al. (2012) offer various term-matching approaches for text retrieval. The best performing such model in our experiments is DPH (Amati, 2006). In the second block of Table 1, we combine various ranking methods in an ensemble using rank fusion, simply summing the various scores.",
"cite_spans": [
{
"start": 953,
"end": 973,
"text": "(Arora et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 1168,
"end": 1191,
"text": "Macdonald et al. (2012)",
"ref_id": "BIBREF16"
},
{
"start": 1302,
"end": 1315,
"text": "(Amati, 2006)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1341,
"end": 1348,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Retriever",
"sec_num": "4.3.1"
},
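The rank fusion described above is "simply summing the various scores". A minimal sketch is below; the min-max normalization step is our own assumption, added so that scores on heterogeneous scales (BM25 vs. cosine similarity) are comparable before summation, and is not stated in the paper.

```python
def minmax(scores):
    """Normalize a score list to [0, 1]; all-equal lists map to zeros."""
    lo, hi = min(scores), max(scores)
    return [0.0] * len(scores) if hi == lo else [(s - lo) / (hi - lo) for s in scores]

def fuse(score_lists):
    """Fuse several rankers' scores over the same candidate list by
    normalizing each ranker's scores and summing them per candidate."""
    normed = [minmax(s) for s in score_lists]
    return [sum(col) for col in zip(*normed)]
```

A learned combiner such as LambdaMART, as used in the paper's final system, replaces this naive sum with weights fitted on relevance labels.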
{
"text": "We first aggregate scores from BM25_tuned and BM25_title. The next row adds a combination of 13 term-matching techniques borrowed from the Terrier IR framework. 3 Next, we add the embedding scores into the ensemble model, which significantly boosts performance (increasing R@1 from 62.5 to 66.3), indicating the complementarity of the various ranking criteria. Finally, instead of naively summing all scores, we employ the LambdaMART algorithm, which yields the highest recall scores (except for R@1). Passage retrieval - (Feng et al., 2021). m denotes the number of top documents that are used for passage retrieval.",
"cite_spans": [
{
"start": 543,
"end": 562,
"text": "(Feng et al., 2021)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Retriever",
"sec_num": "4.3.1"
},
{
"text": "We compare our passage retrieval results to those of the baseline (Feng et al., 2021). To validate whether the document retrieval stage helps to limit the search space of passage retrieval, we perform a simple test that uses DPR to rank only the passages from the top-m documents.",
"cite_spans": [
{
"start": 42,
"end": 61,
"text": "(Feng et al., 2021)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Retriever",
"sec_num": "4.3.1"
},
{
"text": "Restricting DPR to retrieve passages only from the top-1 document increases R@1 from 49.0 to 55.6, while it hurts R@10 (dropping from 80.0 to 73.0). By increasing m, R@10 improves at the cost of lowering R@1, as we expose DPR to more passages that are similar to the dialog question. The maximum performance (R@15 = 91.4) is attained by LambdaMART on passages from the top-30 documents. 4 Error analysis - We noted that the document retriever fails on 42 cases out of 4,201 (i.e., R@30 = 99.0). We identified 4 major error types: (i) topic shift (22 cases), where the grounding information hops from one document to another; (ii) vague question formulation (12 cases), where user questions are unclear and require the agent to ask follow-up questions for clarification; (iii) annotation errors (4 cases) due to some meaningless utterances; (iv) hard examples (4 cases) where our retriever failed entirely.",
"cite_spans": [
{
"start": 393,
"end": 394,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Retriever",
"sec_num": "4.3.1"
},
{
"text": "Generation models are trained and evaluated on our LambdaMART retriever's output, ranking passages from the top-30 documents. The number of preceding dialog turns from the history (fed as input to the generator, see Fig. 1) is fixed at 5, which is the length leading to the best performance on the dev set in our preliminary experiments. Each turn is prefixed by a role indicator, i.e., \u2329AGENT\u232a and \u2329USER\u232a. A separator \u2329CONTEXT\u232a is inserted between the question and passage text. See Appendix C for hyperparameter details. The evaluation metrics are calculated by the official shared task script. Our experiments study the impact of the number of passages in the generator's input and establish upper bounds on its performance. In addition, we introduce the \"knowledge misrecognition rate\" to quantify limitations of our generation model (see below).",
"cite_spans": [],
"ref_spans": [
{
"start": 226,
"end": 232,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Response Generator",
"sec_num": "4.3.2"
},
{
"text": "Upper-bound Tests - We perform three types of upper-bound tests, as shown in Table 3: (i) only the grounding passage is provided to the FiD model (for both the train and dev sets) to generate a response (row 3); (ii) only the grounding span (phrases or sentences within the grounding passage) is input to the FiD model for generation (row 4); (iii) the grounding span is used directly as the response to be evaluated against the gold one (row 5). The scores in Table 3 demonstrate a notable gap (78.33 w.r.t. the total score) between the baseline (row 4) and an upper-bound model (row 1). It is noteworthy that directly using the grounding span as the response yields better performance (224.66) than inputting it to FiD (214.1), implying that a span-extraction model might score higher than the current generation one. However, while extracting correct spans provides users the needed information, it cannot satisfy the pragmatic requirements of a conversation (e.g., greetings at the beginning, a yes/no prefix before giving the details). Thus, we choose a generation model, as it has greater power to generate coherent phrases, at the potential cost of losing some information.",
"cite_spans": [],
"ref_spans": [
{
"start": 75,
"end": 82,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 442,
"end": 449,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Response Generator",
"sec_num": "4.3.2"
},
{
"text": "Impact of the Number of Passages N_p - Intuitively, the more passages are fed as input to FiD, the higher the chance for FiD to capture the grounding information. Yet, it then also becomes harder to recognize the correct passage. We thus hypothesize that there is an optimal number of passages N_p at which FiD attains its best performance, without being distracted by too much information. Figure 2 shows that all performance metrics slightly drop when N_p exceeds 15 (even though they mostly recover once N_p \u2265 30). The performance of the best model (N_p = 15) on the dev set is listed in row 5. An interesting observation is that the FiD model may behave poorly even when the grounding passage is retrieved among the top-15 results presented to the generator: FiD cannot always recognize the grounding passage among its multiple inputs. We propose to quantify this with the \"knowledge misrecognition rate\" \u00b5, calculated as the fraction of low-quality responses among all cases where the correct passage is included in the retrieved ones fed as input to the generator. Using SacreBLEU, a low value (e.g., <10) suggests that the model did not actually use elements from the ground-truth passage in generating the response. Thus, taking SacreBLEU < 10 as an indication of a \"low-quality\" response, we find that the misrecognition rate of our best system is \u00b5 = 50.3% on the dev set. This means that over half of the correct retrieval results are lost in the generation phase. The high rate also implies that the FiD model alone lacks the necessary inductive bias to identify the grounding information among multiple passages. We consider this a key element in designing future versions of the response generation component. Leaderboard Submission - Our submission results on the test sets (including test-dev and test-test) are listed in Table 4. For the unseen-domain task, inference was performed by the model trained on seen-domain data, as a test of our system's zero-shot ability. Besides the FiD-BART-base model, we also train a FiD-BART-large model, which achieves our best scores. For the seen-domain task, our best model outperforms the baseline by 11.05 and 10.07 points for F1_U and SacreBLEU. For the unseen-domain task, these two metrics are improved by 14.10 and 14.88 points. As a result, our UGent-T2K team ranked second and third for the seen-domain and unseen-domain tasks, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 404,
"end": 412,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1892,
"end": 1899,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Response Generator",
"sec_num": "4.3.2"
},
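The knowledge misrecognition rate defined above is straightforward to compute from per-turn results. A minimal sketch, assuming each case is a pair (gold passage retrieved?, SacreBLEU of the generated response); the function name and input shape are ours, not the shared-task script's.

```python
def misrecognition_rate(cases, threshold=10.0):
    """mu: the fraction of low-quality responses (SacreBLEU < threshold) among
    all cases where the gold grounding passage WAS in the generator's input.

    cases: iterable of (retrieved_correct: bool, sacrebleu: float) pairs."""
    hits = [bleu for retrieved, bleu in cases if retrieved]
    if not hits:
        return 0.0
    return sum(1 for b in hits if b < threshold) / len(hits)
```

Cases where retrieval already failed are excluded by construction, so mu isolates generation-side losses from retrieval-side ones.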
{
"text": "We propose a pipeline system for dialogs grounded in multiple documents. Our system consists of a document retriever, a passage retriever and a multi-passage-fusing generator. The retriever is designed to limit the passage search space by first ranking documents, which proves to enhance the passage retrieval performance considerably for the MultiDoc2Dial shared task. Compared to the baseline RAG model, our multi-passage-fusing generator achieves better knowledge-grounded answer generation. Based on the error analysis of our current system, future work will focus on the topic shift issue for conversational retrieval and investigate the knowledge misrecognition problem for dialog generation. structure-wise segmentation method. More specifically, in a document's HTML file, a header tagged by <h1> or <h2> and its child nodes are treated as a passage prefixed by its hierarchical titles. We note that some passages produced in this way are too short (424 passages are shorter than 20 tokens, e.g., headers with empty content below) or too long (24 passages are longer than 1,000 tokens), as shown in Fig. 3(a), not to mention the repetitive passages due to document duplicates. Given that common transformer-based generation models take inputs of up to 512 tokens, such a length distribution either wastes a generation model's capacity when short passages are padded or loses a significant portion of information when long passages are truncated. To eliminate these extreme cases, three measures are taken based on our cleaned document set: (i) We remove the 56 duplicate documents. (ii) For each of the remaining documents, we first split it using the structure-wise method, calling the results \"sections\" to differentiate them from the baseline's \"passages\". If a section has fewer than 150 tokens, it is directly added to the final passage list. If not, it is further split into passages using a flexible sliding window, which allows a passage to have fewer tokens than the window size in order not to break sentences. 6 (iii) Next, a passage with fewer than 60 tokens is merged with its following passage - except if it appears at the end of a section, in which case it is appended to its preceding one. Figure 3(b) depicts the passage length distribution using our segmentation method. The long-tail problem of the baseline is largely resolved. As Table 5 shows, our new segmentation method reduces the total number of passages from 4,110 to 3,734 while it increases the average passage length from 130.4 to 154.1. Table 5: Total number of passages and average passage length produced by the baseline method and ours. \"tokenizer\" and \"white space\" denote using the BART tokenizer and splitting words by white space, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 1099,
"end": 1108,
"text": "Fig. 3(a)",
"ref_id": "FIGREF3"
},
{
"start": 2209,
"end": 2217,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 2355,
"end": 2362,
"text": "Table 5",
"ref_id": null
},
{
"start": 2522,
"end": 2529,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
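The segmentation rules in the paragraph above (keep short sections whole, fill a sentence-respecting window, merge too-short passages into a neighbor) can be sketched as follows. This is a simplified illustration under our own assumptions: it omits the stride-50 window overlap and the hierarchical-title prefixing, and the parameter names are ours.

```python
def segment(sentences, section_max=150, window=150, min_len=60):
    """Split one section (a list of tokenized sentences) into passages:
    - a section shorter than section_max tokens is kept as one passage;
    - otherwise sentences fill a window of at most `window` tokens without
      being broken across passages;
    - a passage shorter than min_len is merged with the following passage,
      or appended to the preceding one if it ends the section."""
    total = sum(len(s) for s in sentences)
    if total < section_max:
        return [[t for s in sentences for t in s]]
    passages, cur = [], []
    for sent in sentences:
        if cur and len(cur) + len(sent) > window:
            passages.append(cur)
            cur = []
        cur.extend(sent)
    if cur:
        passages.append(cur)
    merged = []
    for i, p in enumerate(passages):
        if len(p) < min_len and i + 1 < len(passages):
            passages[i + 1] = p + passages[i + 1]   # merge into the next passage
        elif len(p) < min_len and merged:
            merged[-1] = merged[-1] + p             # end of section: append backwards
        else:
            merged.append(p)
    return merged
```

Note that a merged passage may slightly exceed the window size, which matches the flexible behavior described in the text.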
{
"text": "This section reports (i) the ablation study of BM25 for document retrieval revealing how different features affect the retrieval performance; (ii) domain classification that enhances document retrieval; (iii) passage retrieval experiments based on our new segmentation method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Experiments",
"sec_num": null
},
{
"text": "B.1 Ablation study of BM25 for document retrieval Table 6 presents our results for BM25_tuned on document retrieval. The first row shows the simple BM25 model without any preprocessing of the inputs (question and documents). The next four rows respectively represent: lowercasing inputs, removing stop-words, removing punctuation, and stemming, which together greatly improve performance (over +10 points for R@25). We obtained a slight improvement with a domain classifier that predicts the conversation domain (see Appendix B.2). We also observed that using n-gram (n = 1, 2, 3) features instead of unigrams brings a further improvement of 3.2 points in R@25.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 57,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "B Experiments",
"sec_num": null
},
{
"text": "In the training data of MultiDoc2Dial, the grounding documents were crawled from 4 U.S. government websites, 7 covering 4 domains: Social Security Administration, U.S. Department of Veterans Affairs, Department of Motor Vehicles (New York State) and Federal Student Aid, which are respectively denoted as ssa, va, dmv and student. We applied the idea proposed by Han et al. (2021) to further improve BM25 performance by training a domain classifier, i.e., finetuning the RoBERTa-large model (Liu et al., 2019) to predict a domain label for a given dialog. The domain scores are multiplied with the BM25 scores, after which a weighted combination of the initial BM25 scores and the new scores is used to create the final ranked list. In our experiments, we simply assume equal weights (0.5) for the two scores. Table 7 presents different classifiers' accuracy for seen-domain prediction.",
"cite_spans": [
{
"start": 362,
"end": 379,
"text": "Han et al. (2021)",
"ref_id": "BIBREF8"
},
{
"start": 489,
"end": 507,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 792,
"end": 799,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "B.2 Domain Classifier",
"sec_num": null
},
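The combination step in Appendix B.2 (multiply the classifier's domain probability into BM25, then average with the original BM25 score at equal weights) reduces to a one-line formula. A sketch with hypothetical names; the paper fixes alpha = 0.5.

```python
def domain_adjusted(bm25_scores, doc_domains, domain_probs, alpha=0.5):
    """final = alpha * bm25 + (1 - alpha) * (bm25 * p(domain of doc | dialog)).

    bm25_scores : BM25 score per candidate document
    doc_domains : the domain label of each candidate document
    domain_probs: classifier probability per domain for the current dialog"""
    return [alpha * s + (1 - alpha) * s * domain_probs[d]
            for s, d in zip(bm25_scores, doc_domains)]
```

The effect is a soft filter: documents from the predicted domain keep most of their BM25 score, while off-domain documents are down-weighted rather than discarded.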
{
"text": "Model Accuracy SVM (Cortes and Vapnik, 1995) 96.7 Bert-large 97.0 Roberta-large (Liu et al., 2019) 98.2 Table 7: Domain classifier accuracy on the dev set. Table 8 presents the passage retrieval results based on our passage segmentation. We experiment with three models: DPR ranking all the passages, DPR ranking only the passages within the top-m documents, and the LambdaMART model based on the top-30 documents. Restricting DPR's search space to the top-5 documents increases R@15 from 80.1 to 87.1, which further grows to 90.4 with the LambdaMART model.",
"cite_spans": [
{
"start": 19,
"end": 44,
"text": "(Cortes and Vapnik, 1995)",
"ref_id": "BIBREF4"
},
{
"start": 80,
"end": 98,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 104,
"end": 111,
"text": "Table 7",
"ref_id": null
},
{
"start": 153,
"end": 160,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "B.2 Domain Classifier",
"sec_num": null
},
{
"text": "FiD was finetuned from pretrained BART weights with the following hyperparameter settings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Hyperparameters",
"sec_num": null
},
{
"text": "batch_size=4 total_epochs=15 max_source_length=400 max_target_length=64",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Hyperparameters",
"sec_num": null
},
{
"text": "Model m R@1 R@5 R@10 R@15 Table 8: Recall scores for passage retrieval on the dev set. The passage set is produced by the method described in Appendix A.",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "C Hyperparameters",
"sec_num": null
},
{
"text": "label_smoothing=0.1 optimizer=AdamW weight_decay=0.1 adam_epsilon=1e-08 max_grad_norm=1.0 lr_scheduler=linear learning_rate=5e-05 warmup_steps=500 gradient_accumulation_steps=2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Hyperparameters",
"sec_num": null
},
{
"text": "msmarco-bert-base-dot-v5: available at https://bit.ly/3ID92fF3 http://terrier.org/ - Note that due to our limited time budget for the challenge, we did not properly analyze the contribution of the various Terrier features; therefore some of them may be unnecessary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We select 30 documents because, at the document level, we find R@30 = 99.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The full list of duplicates can be found at https://bit.ly/376TxPX",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Window size \u2264 150, stride = 50. Since we rely on spaCy to extract sentences, some of them may be broken depending on the spaCy model's decisions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "ssa.gov, va.gov, dmv.ny.gov, studentaid.gov",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research received funding from the Flemish Government under the \"Onderzoeksprogramma Artifici\u00eble Intelligentie (AI) Vlaanderen\" programme. The first author was supported by China Scholarship Council (201806020194). We thank the anonymous reviewers whose comments helped to improve our work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "The current version of the MultiDoc2Dial dataset provides 488 documents in which we found 56 duplicate documents 5 . The baseline relies on a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Passage segmentation",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Frequentist and bayesian approach to information retrieval",
"authors": [
{
"first": "Giambattista",
"middle": [],
"last": "Amati",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ECIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/11735106_3"
]
},
"num": null,
"urls": [],
"raw_text": "Giambattista Amati. 2006. Frequentist and bayesian approach to information retrieval. In Proceedings of ECIR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A simple but tough-to-beat baseline for sentence embeddings",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence em- beddings. In Proceedings of ICLR.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "From ranknet to lambdarank to lambdamart: An overview",
"authors": [
{
"first": "Christopher",
"middle": [
"J",
"C"
],
"last": "Burges",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher JC Burges. 2010. From ranknet to lamb- darank to lambdamart: An overview. Microsoft Re- search Technical Report.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "QuAC: Question answering in context",
"authors": [
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Wentau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1241"
]
},
"num": null,
"urls": [],
"raw_text": "Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen- tau Yih, Yejin Choi, Percy Liang, and Luke Zettle- moyer. 2018. QuAC: Question answering in context. In Proceedings of EMNLP.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Supportvector networks",
"authors": [
{
"first": "Corinna",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support- vector networks. Machine Learning.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of NAACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "MultiDoc2Dial: Modeling dialogues grounded in multiple documents",
"authors": [
{
"first": "Song",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Siva",
"middle": [
"Sankalp"
],
"last": "Patel",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Sachindra",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2021.emnlp-main.498"
]
},
"num": null,
"urls": [],
"raw_text": "Song Feng, Siva Sankalp Patel, Hui Wan, and Sachindra Joshi. 2021. MultiDoc2Dial: Modeling dialogues grounded in multiple documents. In Proceedings of EMNLP.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "2020. doc2dial: A goal-oriented document-grounded dialogue dataset",
"authors": [
{
"first": "Song",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Chulaka",
"middle": [],
"last": "Gunasekara",
"suffix": ""
},
{
"first": "Siva",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "Sachindra",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Lastras",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.652"
]
},
"num": null,
"urls": [],
"raw_text": "Song Feng, Hui Wan, Chulaka Gunasekara, Siva Patel, Sachindra Joshi, and Luis Lastras. 2020. doc2dial: A goal-oriented document-grounded dialogue dataset. In Proceedings of EMNLP.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The simplest thing that can possibly work: (pseudo-)relevance feedback via text classification",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Yuqi",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of SIGIR-ICTIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3471158.3472261"
]
},
"num": null,
"urls": [],
"raw_text": "Xiao Han, Yuqi Liu, and Jimmy Lin. 2021. The simplest thing that can possibly work: (pseudo-)relevance feedback via text classification. In Proceedings of SIGIR-ICTIR.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Leveraging passage retrieval with generative models for open domain question answering",
"authors": [
{
"first": "Gautier",
"middle": [],
"last": "Izacard",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2021.eacl-main.74"
]
},
"num": null,
"urls": [],
"raw_text": "Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open do- main question answering. In Proceedings of EACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Weld",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1147"
]
},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehen- sion. In Proceedings of ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Dense passage retrieval for open-domain question answering",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Oguz",
"suffix": ""
},
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.550"
]
},
"num": null,
"urls": [],
"raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of EMNLP.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Natural Questions: A Benchmark for Question Answering Research. Transactions of the Association for Computational Linguistics",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Redfield",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Epstein",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Kelcey",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Dai",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00276"
]
},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Ken- ton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: A Benchmark for Question Answering Research. Transactions of the Association for Com- putational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.703"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and com- prehension. In Proceedings of ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Retrieval-augmented generation for knowledge-intensive nlp tasks",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Perez",
"suffix": ""
},
{
"first": "Aleksandra",
"middle": [],
"last": "Piktus",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Heinrich",
"middle": [],
"last": "K\u00fcttler",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Hein- rich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim Rock- t\u00e4schel, et al. 2020b. Retrieval-augmented generation for knowledge-intensive nlp tasks. In Proceedings of NeurIPS.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.48550/ARXIV.1907.11692"
],
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "From puppy to maturity: Experiences in developing Terrier",
"authors": [
{
"first": "Craig",
"middle": [],
"last": "Macdonald",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Mccreadie",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of SIGIR-OSIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Craig Macdonald, Richard McCreadie, Rodrygo LT San- tos, and Iadh Ounis. 2012. From puppy to maturity: Experiences in developing Terrier. In Proceedings of SIGIR-OSIR.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics",
"authors": [
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00266"
]
},
"num": null,
"urls": [],
"raw_text": "Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Com- putational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Improvements to bm25 and language models examined",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Trotman",
"suffix": ""
},
{
"first": "Antti",
"middle": [],
"last": "Puurula",
"suffix": ""
},
{
"first": "Blake",
"middle": [],
"last": "Burgess",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ADCS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/2682862.2682863"
]
},
"num": null,
"urls": [],
"raw_text": "Andrew Trotman, Antti Puurula, and Blake Burgess. 2014. Improvements to bm25 and language models examined. In Proceedings of ADCS.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Our proposed pipeline dialog system.",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Impact of the number of passages (N p \u2265 1) on generation metrics. (seen-domain task; FiD-BARTbase model; on dev set)",
"uris": null,
"num": null
},
"FIGREF3": {
"type_str": "figure",
"text": "Passage length histograms of baseline and our passage segmentation. The length is the number of tokens processed by the BART tokenizer. (a) Baseline passages. The x-axis is truncated to 1,000 to make smaller value bins more clear. (b) Our passages after removing duplicate documents and merging short passages. No passage is omitted.",
"uris": null,
"num": null
},
"TABREF1": {
"text": "Recall scores for document retrieval on dev set.",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF2": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>compares our passage</td></tr></table>"
},
"TABREF3": {
"text": "Recall scores for passage retrieval on dev set.",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF4": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>, with</td></tr></table>"
},
"TABREF5": {
"text": "Generation performance of the baseline and our FiD-BART-base model (seen-domain task; on dev set). Row 1-3 list the upper-bound performance. A perfect-retriever assumes that the grounding passage is always ranked as the top 1. Row 4-5 use realistic retrievers. The baseline scores are our reproduction results.",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>40</td><td/><td/><td/><td/></tr><tr><td>35</td><td/><td/><td/><td/></tr><tr><td>30</td><td/><td/><td/><td/></tr><tr><td>15 20 25</td><td/><td/><td colspan=\"2\">F1_U SacreBLEU Meteor RougeL</td></tr><tr><td>0</td><td>10</td><td>20 Number of passages 30</td><td>40</td><td>50</td></tr></table>"
},
"TABREF7": {
"text": "Submission results on the leaderboard (on test-test set).",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF10": {
"text": "BM25 tuned recall scores for document retrieval on dev set.",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
}
}
}
}