| { |
| "paper_id": "2022", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:31:16.238355Z" |
| }, |
| "title": "Improving Multiple Documents Grounded Goal-Oriented Dialog Systems via Diverse Knowledge Enhanced Pretrained Language Model", |
| "authors": [ |
| { |
| "first": "Yunah", |
| "middle": [], |
| "last": "Jang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Seoul National University", |
| "location": { |
| "settlement": "Seoul", |
| "country": "Korea" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Dongryeol", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Seoul National University", |
| "location": { |
| "settlement": "Seoul", |
| "country": "Korea" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Hyungjoo", |
| "middle": [], |
| "last": "Park", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Seoul National University", |
| "location": { |
| "settlement": "Seoul", |
| "country": "Korea" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Taegwan", |
| "middle": [], |
| "last": "Kang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Seoul National University", |
| "location": { |
| "settlement": "Seoul", |
| "country": "Korea" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Hwanhee", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Hyunkyung", |
| "middle": [], |
| "last": "Bae", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Seoul National University", |
| "location": { |
| "settlement": "Seoul", |
| "country": "Korea" |
| } |
| }, |
| "email": "hkbae@snu.ac.kr" |
| }, |
| { |
| "first": "Kyomin", |
| "middle": [], |
| "last": "Jung", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Seoul National University", |
| "location": { |
| "settlement": "Seoul", |
| "country": "Korea" |
| } |
| }, |
| "email": "kjung@snu.ac.kr" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In this paper, we mainly discuss about our submission to MultiDoc2Dial task, which aims to model the goal-oriented dialogues grounded in multiple documents. The proposed task is split into grounding span prediction and agent response generation. The baseline for the task is the retrieval augmented generation model, which consists of a dense passage retrieval model for the retrieval part and the BART model for the generation part. The main challenge of this task is that the system requires a great amount of pre-trained knowledge to generate answers grounded in multiple documents. To overcome this challenge, we adopt multitask learning, data augmentation, model pretraining and contrastive learning to enhance our model's coverage of pretrained knowledge. We experiment with various settings of our method to show the effectiveness of our approaches. Our final model achieved 37.78 F1 score, 22.94 SacreBLEU, 36.97 Meteor, 35.46 RougeL, a total of 133.15 on DialDoc Shared Task at ACL 2022 released test set.", |
| "pdf_parse": { |
| "paper_id": "2022", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In this paper, we mainly discuss about our submission to MultiDoc2Dial task, which aims to model the goal-oriented dialogues grounded in multiple documents. The proposed task is split into grounding span prediction and agent response generation. The baseline for the task is the retrieval augmented generation model, which consists of a dense passage retrieval model for the retrieval part and the BART model for the generation part. The main challenge of this task is that the system requires a great amount of pre-trained knowledge to generate answers grounded in multiple documents. To overcome this challenge, we adopt multitask learning, data augmentation, model pretraining and contrastive learning to enhance our model's coverage of pretrained knowledge. We experiment with various settings of our method to show the effectiveness of our approaches. Our final model achieved 37.78 F1 score, 22.94 SacreBLEU, 36.97 Meteor, 35.46 RougeL, a total of 133.15 on DialDoc Shared Task at ACL 2022 released test set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Recently, deep learning-based dialog systems have attracted much attention from academia and the industry. The main challenge of dialog systems is to generate fluent responses consistent with the users' text input. As Pre-trained Language Models (PLMs) (e.g., BART (Lewis et al., 2019) and GPT2 (Radford et al., 2019) ) have emerged, dialog systems have taken advantage of PLMs (Zhao et al., 2020; Budzianowski and Vulic, 2019) , which can enhance the quality of dialog response by applying implicit language knowledge.", |
| "cite_spans": [ |
| { |
| "start": 265, |
| "end": 285, |
| "text": "(Lewis et al., 2019)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 295, |
| "end": 317, |
| "text": "(Radford et al., 2019)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 378, |
| "end": 397, |
| "text": "(Zhao et al., 2020;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 398, |
| "end": 427, |
| "text": "Budzianowski and Vulic, 2019)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "However, these systems lack knowledge of specific topics and thus show weakness in conducting an in-depth conversation with humans. There have been various works for knowledge-grounded dialogue systems to address this problem. (Kim et al., 2020; Zhan et al., 2021) Knowledge grounded dialogue models are capable of generating precise responses based on both the dialogue context and external sources. Therefore, researchers have usually constructed dialogue flows grounded in related documents (Dinan et al., 2018; Zhou et al., 2018b) or knowledge graphs (Moon et al., 2019; Zhou et al., 2018a; Tuan et al., 2019) . In particular, Feng et al. (2020) have introduced the Doc2Dial tasks for goal-oriented document-grounded dialog systems. Compared to previous works, Doc2dial has introduced a more challenging setting with multiturn queries and aims to generate natural language responses from relevant grounding document. On top of that, they also propose the MultiDoc2Dial dataset (Feng et al., 2021) ,which is built upon the Doc2Dial dataset. MultiDoc2Dial dataset is more closely related to real-life scenarios than the prior work since the agent generates responses based on multiple documents as grounding knowledge. Due to its multi-document setting, utilizing knowledge has become more complex.", |
| "cite_spans": [ |
| { |
| "start": 227, |
| "end": 245, |
| "text": "(Kim et al., 2020;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 246, |
| "end": 264, |
| "text": "Zhan et al., 2021)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 494, |
| "end": 514, |
| "text": "(Dinan et al., 2018;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 515, |
| "end": 534, |
| "text": "Zhou et al., 2018b)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 555, |
| "end": 574, |
| "text": "(Moon et al., 2019;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 575, |
| "end": 594, |
| "text": "Zhou et al., 2018a;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 595, |
| "end": 613, |
| "text": "Tuan et al., 2019)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 631, |
| "end": 649, |
| "text": "Feng et al. (2020)", |
| "ref_id": null |
| }, |
| { |
| "start": 981, |
| "end": 1000, |
| "text": "(Feng et al., 2021)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To utilize external knowledge in dialogue, knowledge grounded models generally consist of a retrieval model and a generative model. Recently, the Retrieval Augmented Generation (RAG) model (Lewis et al., 2020a) has been proposed to leverage both parametric (Raffel et al., 2019; Lewis et al., 2019) and non-parametric memory (Lewis et al., 2020b; Xiao et al., 2020) methods by combining pre-trained seq2seq models and the dense vector index of grounding documents. However, the RAG model lacks knowledge related to question answering and dialogue generation.", |
| "cite_spans": [ |
| { |
| "start": 189, |
| "end": 210, |
| "text": "(Lewis et al., 2020a)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 257, |
| "end": 278, |
| "text": "(Raffel et al., 2019;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 279, |
| "end": 298, |
| "text": "Lewis et al., 2019)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 325, |
| "end": 346, |
| "text": "(Lewis et al., 2020b;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 347, |
| "end": 365, |
| "text": "Xiao et al., 2020)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, our team JPL proposes four approaches to enhance RAG's diverse knowledge: multi-task learning, data augmentation, pretraining and contrastive learning. Multi-task learning, extra pretraining on conversational question answering datasets, and data augmentation enhance the model's task-oriented knowledge. Contrastive ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Figure 1: Our training pipeline We utilize four methods to cultivate the RAG model's diverse knowledge. To enhance model's task-agnositic knowledge, we add a hard negative sample for contrastive learning on the DPR retriever module. Pretraining BART with conversational QA datasets, data augmentation on grounding task, and multi-task learning improves task-specific knowledge for the final RAG model. learning for the DPR retriever module strengthen task-agnostic knowledge. We participate in the second DialDoc shared task held by ACL, Multi-Doc2Dial: Modeling Dialogues Grounded in Multiple Documents (Feng et al., 2021) . These methods cultivate the dialogue model's capability to use complex external knowledge on top of PLM's inherent power. In this shared task, we focus on the MultiDoc2Dial dataset (Feng et al., 2021) , which contains conversations that are grounded in multiple documents. The dataset is constructed based on the Doc2Dial dataset, the dataset for the prior shared task at the DialDoc 2021 workshop. Unlike its predecessor, each dialogue in the MultiDoc2Dial dataset has multiple segments with different grounding documents for adjacent segments. The dataset consists of 4800 dialogues with an average of 14 turns that are grounded in 488 documents from four different domains (dmv, ssa, studentaid, va). Details of the MultiDoc2Dial dataset are given in Table 1 .", |
| "cite_spans": [ |
| { |
| "start": 604, |
| "end": 623, |
| "text": "(Feng et al., 2021)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 807, |
| "end": 826, |
| "text": "(Feng et al., 2021)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1380, |
| "end": 1387, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Generation", |
| "sec_num": null |
| }, |
| { |
| "text": "For the evaluations on MultiDoc2Dial dataset, two sub-tasks are proposed. Task 1 aims to predict the grounding span for the next agent response. For task 1, we get (1) current user turn, (2) dialogue history, (3) the entire set of documents from all domains as input. For the output, we aim to figure out the most relevant grounding text span from one document for the next agent response. Task 2 aims to generate agent response in natural language. For task 2, we get (1) current user turn, (2) dialogue history, (3) the entire set of documents from all domain as an input.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multidoc2dial", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "In this shared task, the author proposed a baseline model based on the HuggingFace RAG. 1 For the retriever part, DPR (Karpukhin et al., 2020) was given in the form of both finetuned DPR encoders by author 2 and the original Facebook DPR. 3 The generator module of the baseline is BARTlarge from the HuggingFace. 4 Our final submission model is composed of our own fine-tuned DPR and Bart-large pretrained with conversational QA datasets.", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 142, |
| "text": "(Karpukhin et al., 2020)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 239, |
| "end": 240, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baseline Model", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "We use four methods to enhance the model's ability to efficiently utilize external grounding knowledge especially on dialogue modeling.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Multi-task learning improves the model's performance when different tasks share information or semantics. If the tasks have a higher correlation, it is likely for the model to benefit more from multitask learning. The final goal of the proposed task is to generate natural language responses, which corresponds to the generation task. Figure 2 presents the similarity between the ground truth of each task. From this statistic, it is clear that two tasks share much semantic information.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 335, |
| "end": 343, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Multi-task Learning", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In order to implement multi-task learning, we first train the model on the grounding task with prefix \"TASK1: \" added to the input string for the generator. Then, using the last checkpoint, we continue training the model on the generation task with prefix \"TASK2: \" concatenated to each input string. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-task Learning", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "To enhance the adaptability of the RAG model to the dataset, we attempt to increase the amount of data for finetuning. For each dialogue query in the original dataset, we apply the synonym augmenter from nlpaug 5 . The synonym augmenter randomly changes some words in the input to similar words based on WordNet 6 . We exclude '[SEP]', 'agent:' 'user:' since these words are special tokens for the task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Augmentation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "To enhance the generative performance of the model, we pretrain the RAG generator on two datasets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pretraining on Conversational QA Datasets", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "CoQA The first dataset is the CoQA dataset (Reddy et al., 2018) , a conversational QA dataset grounded in a diverse range of documents. Because MultiDoc2Dial is not a large dataset, there is always a possibility of underfitting. CoQA, with its 127k questions, can provide us with much-needed extra data for our generator. As the format of the CoQA dataset (grounding document, then questions) is different from the input format of our BART model (query and dialogue context, followed by the grounding document), we reformat the dataset to fit our needs before training.", |
| "cite_spans": [ |
| { |
| "start": 43, |
| "end": 63, |
| "text": "(Reddy et al., 2018)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pretraining on Conversational QA Datasets", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Doc2Dial The second dataset is the Doc2Dial dataset (Feng et al., 2020) , a goal-oriented document-grounded dialogue dataset which is extremely similar to the MultiDoc2Dial dataset. As mentioned above, most of the instances in the MultiDoc2Dial dataset are formed by modifying Doc2Dial instances to fit a multi-document setting. Along with the existence of a single grounding document, this extreme similarity of content makes it an ideal candidate to train our generator without relying on the proper functioning of the retriever. Therefore, we can expect pretraining the generator on the Doc2Dial dataset to boost the generative capabilities of our model. As with CoQA, we reformat the dataset to fit the input of our BART model before training.", |
| "cite_spans": [ |
| { |
| "start": 52, |
| "end": 71, |
| "text": "(Feng et al., 2020)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pretraining on Conversational QA Datasets", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "For both datasets, we do not cut down the grounding document to fit the maximum input length of our model. This may have resulted in truncation of the relevant span in some instances, and remains an area of possible improvement.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pretraining on Conversational QA Datasets", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "To enhance the retrieval performance of the model, we adopt data augmentation to increase the number of hard negative contexts in the DPR training data. We apply the antonym augmenter from nlpaug 7 . The antonym augmenter takes positive contexts, which is the correct grounding document for the dialogue, as input. Based on WordNet antonym, the augmenter switches some words in the inputs to their respective antonyms and outputs the augmented sentences. We consider these outputs as the hard negative contexts and added them to the original dataset. We use the augmented dataset to finetune DPR.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Contrastive Learning", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "We fine-tune RAG by following the default hyperparameter settings from the baseline code. 8 Due to hardware shortage, there are minor modifications; we set the gradient accumulation step as 2 and reduce the training and evaluation batch size to 4 and 1, respectively. We only report results of utilizing document structural information for segmentation since it shows better results in our experimental settings. The retrieved documents are not re-ranked since this method doesn't benefit the model performance. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Details", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In this section, the Facebook released version of DPR and BART-large in the HuggingFace constitute the baseline model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RAG Fine-tuning Methods", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "Multi-task Learning We sequentially fine-tune the model on the grounding and generation tasks. Table 2 shows the results for multi-task learning. There are improvements in the F1 and EM score using multi-task learning, even though considering the fact that the model was trained on the generation task for a much shorter time. We expect the model to show better results with more extended training.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 95, |
| "end": 102, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "RAG Fine-tuning Methods", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "Data Augmentation For data augmentation, we apply synonym transformation to the original dataset, attaining twice the baseline size. Table 2 presents the result for data augmentation on generation task. We have observed that applying data augmentation to the generation task degraded the performance. However, by utilizing augmented data on the grounding task, the model achieves a 40.55 F1 score and a 23.49 exact match score. Compared to our baseline model implementation trained with the original grounding task data, training with augmented data improved +0.5 F1 score and +0.64 exact match score. These results demonstrate that synonym data augmentation on the generation task's gold answers does not provide the model with any informative knowledge for the generation task. Therefore, we include augmented data only on grounding task during multi-task learning. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 133, |
| "end": 140, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "RAG Fine-tuning Methods", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "This section mainly discusses results for modulespecific training methods. We fine-tune RAG's retriever and pretrain generator, DPR and BART, with contrastive learning and conversational QA datasets. We set the baseline model as the same configuration with section 4.2.1. Pretraining We pretrain BART-large on CoQA and Doc2Dial before integrating it into RAG. We train 10 epochs for each dataset using hyperparameters suggested by the DialDoc2021 baseline code on subtask2. 10 Table 3 shows the result for pretraining. We report two results; pretrained on CoQA only and pretrained on both CoQA and Doc2Dial. Both datasets enhanced the model performance in terms of F1 and EM scores. There is extra room for improvement since we pretrain BART only for a few epochs due to long training time and limited resources.", |
| "cite_spans": [ |
| { |
| "start": 474, |
| "end": 476, |
| "text": "10", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 477, |
| "end": 484, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Module Specific Methods", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "Contrastive Learning We fine-tune DPR using the settings implemented by the shared task. We fine-tune the recently released version of DPR, checkpoint.retriever.single-advhn.nq.bert-base-encoder, for 50 epochs on our new DPR dataset with one extra hard negative sample generated by antonym augmentation. Table 3 reports the results for contrastive learning. Despite using the same hyperparameters for DPR, there is degradation in the score for fine-tuning on our setting. However, after adding another hard negative sample, the model shows better performance on the shared task.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 304, |
| "end": 311, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Module Specific Methods", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "Our final model for DialDoc shared task at ACL 2022 utilizes all four suggested methods in this paper. We only participate in MultiDoc2Dialseen-domain task which training data and test data share the same domains for the grounding documents. Our best performing model achieves 37.78 F1 score, 22.94 SacreBLEU, 36.97 Meteor, 35.46 RougeL, a total of 133.15 on the officially released test set (MDD-SEEN).", |
| "cite_spans": [ |
| { |
| "start": 277, |
| "end": 292, |
| "text": "37.78 F1 score,", |
| "ref_id": null |
| }, |
| { |
| "start": 293, |
| "end": 309, |
| "text": "22.94 SacreBLEU,", |
| "ref_id": null |
| }, |
| { |
| "start": 310, |
| "end": 323, |
| "text": "36.97 Meteor,", |
| "ref_id": null |
| }, |
| { |
| "start": 324, |
| "end": 329, |
| "text": "35.46", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Leaderboard Submission", |
| "sec_num": "4.2.3" |
| }, |
| { |
| "text": "In this paper, we explain our submissions to the MultiDoc2Dial shared task. We utilize various conversational QA datasets and methods to improve the given baseline model. Our RAG model is composed of DPR for the retriever and BART for the generator. We train DPR with contrastive learning with an extra hard negative sample. BART is pretrained on conversational QA datasets, CoQA and Doc2Dial. On the end-to-end level, we implement multi-task learning to utilize model knowledge obtained from the previous grounding task that is trained on augmented data. All of the mentioned techniques enhance the model performance compared to the suggested baseline model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "https://huggingface.co/docs/transformers/master/model_doc /rag 2 https://huggingface.co/sivasankalpp 3 https://github.com/facebookresearch/DPR 4 https://huggingface.co/facebook/bart-large", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/makcedward/nlpaug 6 https://wordnet.princeton.edu/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/makcedward/nlpaug", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/IBM/multidoc2dial", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/doc2dial/sharedtask-dialdoc2021", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "K. Jung is with ASRI, Seoul National University, Korea. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (No. 2021R1A2C2008855)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Hello, it's GPT-2 -how can I help you? towards the use of pretrained language models for task-oriented dialogue systems", |
| "authors": [ |
| { |
| "first": "Pawel", |
| "middle": [], |
| "last": "Budzianowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Vulic", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "CoRR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pawel Budzianowski and Ivan Vulic. 2019. Hello, it's GPT-2 -how can I help you? towards the use of pre- trained language models for task-oriented dialogue systems. CoRR, abs/1907.05774.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Wizard of wikipedia: Knowledge-powered conversational agents", |
| "authors": [ |
| { |
| "first": "Emily", |
| "middle": [], |
| "last": "Dinan", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Roller", |
| "suffix": "" |
| }, |
| { |
| "first": "Kurt", |
| "middle": [], |
| "last": "Shuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Angela", |
| "middle": [], |
| "last": "Fan", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Auli", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1811.01241" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. arXiv preprint arXiv:1811.01241.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Multidoc2dial: Modeling dialogues grounded in multiple documents", |
| "authors": [ |
| { |
| "first": "Song", |
| "middle": [], |
| "last": "Feng", |
| "suffix": "" |
| }, |
| { |
| "first": "Siva", |
| "middle": [ |
| "Sankalp" |
| ], |
| "last": "Patel", |
| "suffix": "" |
| }, |
| { |
| "first": "Hui", |
| "middle": [], |
| "last": "Wan", |
| "suffix": "" |
| }, |
| { |
| "first": "Sachindra", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "CoRR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Song Feng, Siva Sankalp Patel, Hui Wan, and Sachin- dra Joshi. 2021. Multidoc2dial: Modeling dia- logues grounded in multiple documents. CoRR, abs/2109.12595.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "2020. doc2dial: A goal-oriented documentgrounded dialogue dataset", |
| "authors": [ |
| { |
| "first": "Song", |
| "middle": [], |
| "last": "Feng", |
| "suffix": "" |
| }, |
| { |
| "first": "Hui", |
| "middle": [], |
| "last": "Wan", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "Chulaka" |
| ], |
| "last": "Gunasekara", |
| "suffix": "" |
| }, |
| { |
| "first": "Siva", |
| "middle": [ |
| "Sankalp" |
| ], |
| "last": "Patel", |
| "suffix": "" |
| }, |
| { |
| "first": "Sachindra", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| }, |
| { |
| "first": "Luis", |
| "middle": [ |
| "A" |
| ], |
| "last": "Lastras", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Song Feng, Hui Wan, R. Chulaka Gunasekara, Siva Sankalp Patel, Sachindra Joshi, and Luis A. Lastras. 2020. doc2dial: A goal-oriented document- grounded dialogue dataset. CoRR, abs/2011.06623.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Dense passage retrieval for opendomain question answering", |
| "authors": [ |
| { |
| "first": "Vladimir", |
| "middle": [], |
| "last": "Karpukhin", |
| "suffix": "" |
| }, |
| { |
| "first": "Barlas", |
| "middle": [], |
| "last": "Oguz", |
| "suffix": "" |
| }, |
| { |
| "first": "Sewon", |
| "middle": [], |
| "last": "Min", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Ledell", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Sergey", |
| "middle": [], |
| "last": "Edunov", |
| "suffix": "" |
| }, |
| { |
| "first": "Danqi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Wen-Tau", |
| "middle": [], |
| "last": "Yih", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "6769--6781", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.emnlp-main.550" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open- domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Sequential latent knowledge selection for knowledge-grounded dialogue", |
| "authors": [ |
| { |
| "first": "Byeongchang", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaewoo", |
| "middle": [], |
| "last": "Ahn", |
| "suffix": "" |
| }, |
| { |
| "first": "Gunhee", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.48550/ARXIV.2002.07510" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Byeongchang Kim, Jaewoo Ahn, and Gunhee Kim. 2020. Sequential latent knowledge selection for knowledge-grounded dialogue.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Yinhan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Naman", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "Marjan", |
| "middle": [], |
| "last": "Ghazvininejad", |
| "suffix": "" |
| }, |
| { |
| "first": "Abdelrahman", |
| "middle": [], |
| "last": "Mohamed", |
| "suffix": "" |
| }, |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Veselin", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2019. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and com- prehension. CoRR, abs/1910.13461.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Retrieval-augmented generation for knowledge-intensive NLP tasks", |
| "authors": [ |
| {
| "first": "Patrick",
| "middle": [
| "S",
| "H"
| ],
| "last": "Lewis",
| "suffix": ""
| },
| {
| "first": "Ethan",
| "middle": [],
| "last": "Perez",
| "suffix": ""
| },
| {
| "first": "Aleksandra",
| "middle": [],
| "last": "Piktus",
| "suffix": ""
| },
| {
| "first": "Fabio",
| "middle": [],
| "last": "Petroni",
| "suffix": ""
| },
| {
| "first": "Vladimir",
| "middle": [],
| "last": "Karpukhin",
| "suffix": ""
| },
| {
| "first": "Naman",
| "middle": [],
| "last": "Goyal",
| "suffix": ""
| },
| {
| "first": "Heinrich",
| "middle": [],
| "last": "K\u00fcttler",
| "suffix": ""
| },
| {
| "first": "Mike",
| "middle": [],
| "last": "Lewis",
| "suffix": ""
| },
| {
| "first": "Wen-Tau",
| "middle": [],
| "last": "Yih",
| "suffix": ""
| },
| {
| "first": "Tim",
| "middle": [],
| "last": "Rockt\u00e4schel",
| "suffix": ""
| },
| {
| "first": "Sebastian",
| "middle": [],
| "last": "Riedel",
| "suffix": ""
| },
| {
| "first": "Douwe",
| "middle": [],
| "last": "Kiela",
| "suffix": ""
| }
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Patrick S. H. Lewis, Ethan Perez, Aleksandra Pik- tus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim Rockt\u00e4schel, Sebastian Riedel, and Douwe Kiela. 2020a. Retrieval-augmented gener- ation for knowledge-intensive NLP tasks. CoRR, abs/2005.11401.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Question and answer test-train overlap in open-domain question answering datasets. CoRR, abs", |
| "authors": [ |
| {
| "first": "Patrick",
| "middle": [
| "S",
| "H"
| ],
| "last": "Lewis",
| "suffix": ""
| },
| {
| "first": "Pontus",
| "middle": [],
| "last": "Stenetorp",
| "suffix": ""
| },
| {
| "first": "Sebastian",
| "middle": [],
| "last": "Riedel",
| "suffix": ""
| }
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Patrick S. H. Lewis, Pontus Stenetorp, and Sebastian Riedel. 2020b. Question and answer test-train over- lap in open-domain question answering datasets. CoRR, abs/2008.02637.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "OpenDialKG: Explainable conversational reasoning with attention-based walks over knowledge graphs", |
| "authors": [ |
| { |
| "first": "Seungwhan", |
| "middle": [], |
| "last": "Moon", |
| "suffix": "" |
| }, |
| { |
| "first": "Pararth", |
| "middle": [], |
| "last": "Shah", |
| "suffix": "" |
| }, |
| { |
| "first": "Anuj", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| }, |
| { |
| "first": "Rajen", |
| "middle": [], |
| "last": "Subba", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "845--854", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P19-1081" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Seungwhan Moon, Pararth Shah, Anuj Kumar, and Ra- jen Subba. 2019. OpenDialKG: Explainable conver- sational reasoning with attention-based walks over knowledge graphs. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 845-854, Florence, Italy. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Language models are unsupervised multitask learners", |
| "authors": [ |
| { |
| "first": "Alec", |
| "middle": [], |
| "last": "Radford", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Rewon", |
| "middle": [], |
| "last": "Child", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Luan", |
| "suffix": "" |
| }, |
| { |
| "first": "Dario", |
| "middle": [], |
| "last": "Amodei", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "OpenAI blog", |
| "volume": "1", |
| "issue": "8", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", |
| "authors": [ |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Raffel", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Roberts", |
| "suffix": "" |
| }, |
| { |
| "first": "Katherine", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharan", |
| "middle": [], |
| "last": "Narang", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Matena", |
| "suffix": "" |
| }, |
| { |
| "first": "Yanqi", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [ |
| "J" |
| ], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. CoRR, abs/1910.10683.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Coqa: A conversational question answering challenge", |
| "authors": [ |
| { |
| "first": "Siva", |
| "middle": [], |
| "last": "Reddy", |
| "suffix": "" |
| }, |
| { |
| "first": "Danqi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Siva Reddy, Danqi Chen, and Christopher D. Manning. 2018. Coqa: A conversational question answering challenge. CoRR, abs/1808.07042.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Dykgchat: Benchmarking dialogue generation grounding on dynamic knowledge graphs", |
| "authors": [ |
| { |
| "first": "Yi-Lin", |
| "middle": [], |
| "last": "Tuan", |
| "suffix": "" |
| }, |
| { |
| "first": "Yun-Nung", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Hung-Yi", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1910.00610" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yi-Lin Tuan, Yun-Nung Chen, and Hung-yi Lee. 2019. Dykgchat: Benchmarking dialogue genera- tion grounding on dynamic knowledge graphs. arXiv preprint arXiv:1910.00610.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Alternating recurrent dialog model with large-scale pre-trained language models", |
| "authors": [ |
| { |
| "first": "Qingyang", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yichi", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhou", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Qingyang Wu, Yichi Zhang, Yu Li, and Zhou Yu. 2019. Alternating recurrent dialog model with large-scale pre-trained language models. CoRR, abs/1910.03756.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Open-domain question answering with pre-constructed question spaces. CoRR, abs", |
| "authors": [ |
| { |
| "first": "Jinfeng", |
| "middle": [], |
| "last": "Xiao", |
| "suffix": "" |
| }, |
| { |
| "first": "Lidan", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Franck", |
| "middle": [], |
| "last": "Dernoncourt", |
| "suffix": "" |
| }, |
| { |
| "first": "Trung", |
| "middle": [], |
| "last": "Bui", |
| "suffix": "" |
| }, |
| { |
| "first": "Tong", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiawei", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jinfeng Xiao, Lidan Wang, Franck Dernoncourt, Trung Bui, Tong Sun, and Jiawei Han. 2020. Open-domain question answering with pre-constructed question spaces. CoRR, abs/2006.08337.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "CoLV: A collaborative latent variable model for knowledge-grounded dialogue generation", |
| "authors": [ |
| {
| "first": "Haolan",
| "middle": [],
| "last": "Zhan",
| "suffix": ""
| },
| {
| "first": "Lei",
| "middle": [],
| "last": "Shen",
| "suffix": ""
| },
| {
| "first": "Hongshen",
| "middle": [],
| "last": "Chen",
| "suffix": ""
| },
| {
| "first": "Hainan",
| "middle": [],
| "last": "Zhang",
| "suffix": ""
| }
| ], |
| "year": 2021, |
| "venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2250--2261", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2021.emnlp-main.172" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Haolan Zhan, Lei Shen, Hongshen Chen, and Hainan Zhang. 2021. CoLV: A collaborative latent variable model for knowledge-grounded dialogue generation. In Proceedings of the 2021 Conference on Empiri- cal Methods in Natural Language Processing, pages 2250-2261, Online and Punta Cana, Dominican Re- public. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Knowledgegrounded dialogue generation with pre-trained language models", |
| "authors": [ |
| { |
| "first": "Xueliang", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Can", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Chongyang", |
| "middle": [], |
| "last": "Tao", |
| "suffix": "" |
| }, |
| { |
| "first": "Dongyan", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Rui", |
| "middle": [], |
| "last": "Yan", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xueliang Zhao, Wei Wu, Can Xu, Chongyang Tao, Dongyan Zhao, and Rui Yan. 2020. Knowledge- grounded dialogue generation with pre-trained lan- guage models. CoRR, abs/2010.08824.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Commonsense knowledge aware conversation generation with graph attention", |
| "authors": [ |
| { |
| "first": "Hao", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Tom", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| }, |
| { |
| "first": "Minlie", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Haizhou", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Jingfang", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaoyan", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "IJCAI", |
| "volume": "", |
| "issue": "", |
| "pages": "4623--4629", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018a. Common- sense knowledge aware conversation generation with graph attention. In IJCAI, pages 4623-4629.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "A dataset for document grounded conversations", |
| "authors": [ |
| { |
| "first": "Kangyan", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Shrimai", |
| "middle": [], |
| "last": "Prabhumoye", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [ |
| "W" |
| ], |
| "last": "Black", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1809.07358" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kangyan Zhou, Shrimai Prabhumoye, and Alan W Black. 2018b. A dataset for document grounded conversations. arXiv preprint arXiv:1809.07358.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF2": { |
| "uris": null, |
| "text": "Similarity score of ground truth answer on grounding and generation task", |
| "type_str": "figure", |
| "num": null |
| }, |
| "TABREF1": { |
| "html": null, |
| "type_str": "table", |
| "content": "<table><tr><td>2 Shared Task</td></tr><tr><td>2.1 Dataset</td></tr></table>", |
| "num": null, |
| "text": "Dataset Statistics We split documents by using structural information from markup tags integrated in HTML files." |
| }, |
| "TABREF3": { |
| "html": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "num": null, |
| "text": "RAG Fine-tuning Methods Results Models are evaluated with F1, Exact Match, and sacreBLEU scores. The baseline model is composed of the released version of finetuned DPR 9 and BART-large on the Hug-gingFace." |
| }, |
| "TABREF5": { |
| "html": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Specific Methods Results We evaluate models with F1, Exact Match, and sacre-BLEU scores. +CoQA&Doc2Dial reports results for BART-large pretrained on CoQA and Doc2Dial dataset. DPR(adv_nq) is the RAG model composed of our own fine-tuned DPR using shared task configuration. +DPR(+hard_negative) corresponds to results for RAG with our fine-tuned DPR version with an extra hard neg-ative sample.</td></tr></table>", |
| "num": null, |
| "text": "Module" |
| } |
| } |
| } |
| } |