ACL-OCL / Base_JSON /prefixN /json /nlpmc /2021.nlpmc-1.3.json
{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:06:19.934983Z"
},
"title": "Gathering Information and Engaging the User ComBot : A Task-Based, Serendipitous Dialog Model for Patient-Doctor Interactions",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Liednikova",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CNRS/LORIA",
"location": {}
},
"email": "anna.liednikova@loria.fr"
},
{
"first": "Philippe",
"middle": [],
"last": "Jolivet",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CNRS/LORIA",
"location": {}
},
"email": "philippe.jolivet@aliae.io"
},
{
"first": "Alexandre",
"middle": [],
"last": "Durand-Salmon",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CNRS/LORIA",
"location": {}
},
"email": "alexandre.durand-salmon@aliae.io"
},
{
"first": "Claire",
"middle": [],
"last": "Gardent",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CNRS/LORIA",
"location": {}
},
"email": "claire.gardent@loria.fr"
},
{
"first": "&",
"middle": [],
"last": "Aliae",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CNRS/LORIA",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We focus on dialog models in the context of clinical studies where the goal is to help gather, in addition to the closed set of information collected based on a questionnaire, serendipitous information that is medically relevant. To promote user engagement and address this dual goal (collecting both a predefined set of data points and more informal information about the state of the patients), we introduce an ensemble model made of three bots: a task-based, a follow-up and a social bot. We introduce a generic method for developing follow-up bots. We compare different ensemble configurations and we show that the combination of the three bots (i) provides a better basis for collecting information than just the information seeking bot and (ii) collects information in a more efficient manner that an ensemble model combining the information seeking and the social bot.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We focus on dialog models in the context of clinical studies where the goal is to help gather, in addition to the closed set of information collected based on a questionnaire, serendipitous information that is medically relevant. To promote user engagement and address this dual goal (collecting both a predefined set of data points and more informal information about the state of the patients), we introduce an ensemble model made of three bots: a task-based, a follow-up and a social bot. We introduce a generic method for developing follow-up bots. We compare different ensemble configurations and we show that the combination of the three bots (i) provides a better basis for collecting information than just the information seeking bot and (ii) collects information in a more efficient manner that an ensemble model combining the information seeking and the social bot.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Current work on Human-Machine interaction focuses on three main types of dialogs: task-based, open domain and question answering conversational dialogs. The goal of task-based models is to gather the information needed for a given task e.g., gathering the price, location and type of a restaurant needed to recommend this restaurant. Usually trained on social media data (Roller et al., 2020) (Adiwardana et al.) , open domain conversational models aim to mimick open domain conversation between two humans. Finally, question answering conversational models seek to model dialogs where a series of inter-connected questions is asked about a text passage.",
"cite_spans": [
{
"start": 371,
"end": 392,
"text": "(Roller et al., 2020)",
"ref_id": null
},
{
"start": 393,
"end": 412,
"text": "(Adiwardana et al.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we consider dialog models in the context of clinical studies i.e., dialog models which are used to collect the information needed by the medical body to assess the impact of the clinical trial on a cohort of patients (e.g., information about their mood, their activity, their sleeping patterns). In the context of these clinical studies, the goal of the dialog model is two-fold. A first goal is to collect a set of pre-defined data points i.e., answers to a set of pre-defined questions specified in a questionnaire. A second goal is to gather relevant serendipitous information i.e., health related information that is not addressed by the questionnaire but that is provided by the user during the interaction and which may be relevant to understand the impact of the therapy investigated by the clinical study. This requires keeping the user engaged and prompting him/her with relevant follow-up questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To model these three goals (collecting a predefined set of data points, keeping the user engaged and gathering more informal information about the state of the patient), we introduce an ensemble model which combines three bots: a task-based bot (MEDBOT) whose goal is to collect information about the mood, the daily life, the sleeping pattern, the anxiety level and the leisure activities of the patients; a follow-up bot (FOLLOWUPBOT) designed to extend the task-based exchanges with health-related, follow-up questions based on the user input; and an empathy bot (EMPATHYBOT) whose task is to reinforce the patient engagement by providing empathetic and socially driven feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work makes the following contributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We introduce a model where interactions are driven by three main goals: maintaining user engagement, gathering a predefined set of information units and encouraging domain related user input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We provide a generic method to create training data for a bot that can follow-up on the user response while remaining in a given domain (in this case the health domain).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We show that such a follow-up bot is crucial to support both information gathering and user engagement and we provide a detailed analysis of how the three bots interact.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several approaches have explored the use of ensemble models for dialog. While Song et al. (2016) proposed an ensemble model for human-machine dialog which combines a generative and a retrieval model, further ensemble models for dialog have focused on combining agents/bots designed to model different conversation strategies. Yu et al. (2016) focus on open domain conversation and combines three agents, two to improve dialog coherence (ensuring that pronouns can be resolved and maximising semantic similarity with the current context) and one to handle topic switch (moving to a new topic when the retrieval confidence score is low). The ALANA ensemble model (Papaioannou et al., 2017b,a) , developed for the Amazon Alexa Challenge i.e., for open domain chitchat, combines domain specific bots used to provide information from different sources with social bots to smooth the interactions (by asking for clarification, expressing personal views or handling profanities). Similarly, introduces a dialog model which interleaves a social and a task-based bot. Conversely, Gunson et al. (2020) showed that success of interleaving depends on the context and that in a public setting, users either prefer purely task-based systems or fail to see a difference between task-based and a richer ensemble model combining task-based and social bots. Our work differs from these previous approaches in that we combine a standard, task-based model with both a social bot and a domain specific, followup bot. This allows both for more natural dialogs (by following up on the user input rather than systematically asking about an item in the predefined set of topics) and for additional relevant, health related information to be gathered.",
"cite_spans": [
{
"start": 78,
"end": 96,
"text": "Song et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 326,
"end": 342,
"text": "Yu et al. (2016)",
"ref_id": "BIBREF21"
},
{
"start": 661,
"end": 690,
"text": "(Papaioannou et al., 2017b,a)",
"ref_id": null
},
{
"start": 1071,
"end": 1091,
"text": "Gunson et al. (2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We introduce the three bots making up our ensemble model and the ensemble model combining them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ComBot, an ensemble Model for Repeated Task-Based Interactions",
"sec_num": "3"
},
{
"text": "MEDBOT is a retrieval model which uses the pretrained ConveRT dialog response selection model (Henderson et al., 2019) to retrieve a query from the MedTree Corpus (Liednikova et al., 2020) . It is designed to collect information from the user based on a predefined set of questions contained in a questionaire.",
"cite_spans": [
{
"start": 94,
"end": 118,
"text": "(Henderson et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 163,
"end": 188,
"text": "(Liednikova et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Medical Bot",
"sec_num": "3.1"
},
{
"text": "The MedTree Dataset. The MedTree corpus (Liednikova et al., 2020) was developed to train a task-based, information seeking, health bot on five domains: sleep, mood, anxiety, daily tasks and leisure activities. It was derived from a dialog tree provided by a domain expert (i.e., a physician) and designed to formalise typical patient-doctor interactions occurring in the context of a clinical study. In that tree, each branch captures a sequence of (Doctor Question, Patient Answer) pairs and each domain is modeled by a separate tree with the root introducing the conversation (initial question) and the leaves providing a closing statement. The MedTree corpus is then derived from this tree by extracting from each branch of the tree, all contextquestion pairs, where the context consists of a sequence of patient-doctor-patient turns present on that branch and the question is the following doctor question. A fragment of the decision tree created for the sleep domain and an example dialog are shown in Figure 1 . There are two versions of the MedTree corpus: one consisting of only the context/question pairs derived from the dialog tree (INIT) and the other including variants of these pairs based on paraphrases extracted from forum data (ALL). In (Liednikova et al., 2020) , the ALL corpus is used to train a generative and a classification model. In our work, we use (a slightly modified version 1 of) the INIT corpus instead, as its small size facilitates retrieval (the number of candidates is small) and preliminary experimentations showed better results when using the INIT corpus.",
"cite_spans": [
{
"start": 40,
"end": 65,
"text": "(Liednikova et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 1255,
"end": 1280,
"text": "(Liednikova et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 1007,
"end": 1015,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Medical Bot",
"sec_num": "3.1"
},
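The branch-to-pairs derivation described above can be sketched as follows. The list-of-turns encoding of a tree branch and the function name `branch_to_pairs` are hypothetical illustrations, not the authors' MedTree code: each branch is flattened to alternating (speaker, text) turns, and every doctor turn after the root becomes a question whose context is the sequence of preceding turns.

```python
# Sketch: derive (context, question) training pairs from one dialog-tree branch.
# Hypothetical data layout -- not the original MedTree extraction code.

def branch_to_pairs(branch):
    """branch: list of (speaker, text) tuples, alternating doctor/patient.
    Returns (context, question) pairs where the context is the sequence of
    turns seen so far and the question is the next doctor turn."""
    pairs = []
    for i, (speaker, text) in enumerate(branch):
        if speaker == "doctor" and i > 0:
            context = [t for _, t in branch[:i]]
            pairs.append((context, text))
    return pairs

branch = [
    ("doctor", "How did you sleep last night?"),
    ("patient", "Not great, I woke up several times."),
    ("doctor", "How many times did you wake up?"),
    ("patient", "Three or four."),
    ("doctor", "Do you know what woke you up?"),
]
pairs = branch_to_pairs(branch)
```

For this toy branch, the initial question has no context, so the branch yields two (context, question) pairs, one per subsequent doctor turn.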
{
"text": "Encoder-Decoder which is trained on Reddit (727M input-response pairs) to identify the dialog context most similar to the current context and to retrieve the dialog turn following this context. In order to retrieve from the MedTree corpus, the question that best fits the current dialog context, the MEDBOT model compares the last three turns of the current dialog with contexts from the MedTree Corpus. The model identifies the MedTree corpus context with the highest similarity score 2 and outputs the question following that context. If the selected question has already been asked in the dialog generated so far and provided it is not a question such as \"What other things would you like to share with me ?\", we retrieve the next best question that is not a repetition. No fine-tuning is done due to the small amount of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model. ConverT is a Transformer-based",
"sec_num": null
},
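The retrieval step with repetition filtering can be sketched as below. This is a minimal sketch under assumptions: identity vectors stand in for ConveRT embeddings, and `retrieve_question` and `ALWAYS_OK` are illustrative names, not the authors' implementation.

```python
import numpy as np

# Sketch: score the current context against stored corpus contexts by inner
# product and return the question following the best match, skipping questions
# already asked (except repeatable catch-all questions).

ALWAYS_OK = {"What other things would you like to share with me?"}

def retrieve_question(ctx_emb, corpus_embs, corpus_questions, asked):
    scores = corpus_embs @ ctx_emb          # inner-product similarity
    for idx in np.argsort(scores)[::-1]:    # best candidate first
        q = corpus_questions[idx]
        if q not in asked or q in ALWAYS_OK:
            return q
    return None

# Toy corpus: one-hot rows stand in for context embeddings.
corpus_embs = np.eye(3, 8)
questions = ["How is your sleep?", "How is your mood?", "Any pain today?"]
ctx = corpus_embs[1]  # current context matches corpus entry 1
q = retrieve_question(ctx, corpus_embs, questions, asked={"How is your mood?"})
```

Here the best-matching question was already asked, so the next-best non-repeated question is returned instead.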
{
"text": "One main motivation behind the use of a health-bot in clinical studies is to complement the information traditionally gathered through a fixed questionnaire filled in each week by the patients with serendipitous information i.e., information that is not actively queried by the questionnaire but that is useful to analyse the cohort results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Follow-Up Bot",
"sec_num": "3.2"
},
{
"text": "The MEDBOT model introduced in the previous section is constrained to address only those topics which are present in the dialog tree, in effect, modeling a closed questionnaire. To allow for the collection of serendipitous health information, we develop the FOLLOWUPBOT whose function is to generate health-related questions which are not predicted by the dialog tree but which naturally follow from the user input. The main difference of FOL-LOWUPBOT from MEDBOT is the way it retrieves questions that are not in the sequence, but the ones that occurs in the same context even if the question itself doesn't share the lexions with the previous turns. Rather than artificially restricting the dialog to the limited set of topics pre-defined by the dialog tree, the combined model (MEDBOT + FOL-LOWUPBOT) allows for transitions based either on the dialog tree or on health-related, follow-up questions. In that sense, FOLLOWUPBOT allows not only for the collection of health-related serendipitous information but also for smoother dialog transitions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Follow-Up Bot",
"sec_num": "3.2"
},
{
"text": "Like MEDBOT, FOLLOWUPBOT used the pretrained ConveRT model to retrieve context appropriate queries from a dialog dataset. In this case however, the queries are retrieved from the Health-Board dataset, a new dataset we created to support follow-up questions in the health domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Follow-Up Bot",
"sec_num": "3.2"
},
{
"text": "The Healthboard Dataset. This dataset consists of (s, q) pairs where s is a (health related) state-ment and q is a follow-up question for that statement. We extract this dataset from the Healthboard forum 3 as follows. We first select 16 forum categories (listed in Table 1 ) that are relevant to our five domains. In the forum, each category includes multiple conversational threads, each thread consists of multiple posts and each post is a text of several paragraphs that can be split into sentences. In total, we collect 175,789 posts from 31,042 threads with 5.68 posts in average per thread. We then segment each post into sentences using the default NLTK sentence segmenter. We label each sentence with a dialogue act classifier in order to distinguish statements (\"sd\" label) from questions (\"qo\" label). For this labelling, we fine-tune the Distilbert Transformerbased classification model 4 on the Switchboard Corpus Stolcke et al. (2000) using 6 classes \"qo\" (Open-Question), \"sd\" (Statement-non-opinion), \"ft\" (Thanking), \"aa\" (Agree/Accept), \"%\" (Uninterpretable) and \"ba\" (Appreciation). For each question q (i.e., sentence labelled \"qo\") in each thread T , we gather all statements (i.e., all sentences labeled as \"sd\") which precede q in T into a pool of candidate statements 5 . As dialogue turns in bots should remain short, we filter sentences that have more than 100 tokens. For each candidate statement, we calculate its similarity with the question using the dot product on their ConveRT embeddings. We filter out all candidate statements whose score with the question is less than 0.6. If after filtering the resulting pool contains at least one candidate, we select the top-ranked statement and add the statement-question pair pair to the dataset. The resulting dataset contains 3,181 (statement, question) pairs.",
"cite_spans": [],
"ref_spans": [
{
"start": 266,
"end": 273,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Follow-Up Bot",
"sec_num": "3.2"
},
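The pair-extraction step above can be sketched as follows. The `embed` function is a toy bag-of-words stand-in for ConveRT, and `extract_pairs` is an illustrative name; only the logic (preceding statements as candidates, dot-product scoring, 0.6 threshold, 100-token cap) follows the text.

```python
import numpy as np

# Sketch: for each "qo" sentence in a thread, pick the best-matching
# preceding "sd" sentence; keep the pair only if similarity >= threshold.

def extract_pairs(sentences, embed, threshold=0.6, max_tokens=100):
    """sentences: list of (dialogue_act, text); returns (statement, question) pairs."""
    pairs = []
    for i, (act, text) in enumerate(sentences):
        if act != "qo":
            continue
        candidates = [t for a, t in sentences[:i]
                      if a == "sd" and len(t.split()) <= max_tokens]
        if not candidates:
            continue
        q_emb = embed(text)
        scores = [float(embed(c) @ q_emb) for c in candidates]
        best = int(np.argmax(scores))
        if scores[best] >= threshold:
            pairs.append((candidates[best], text))
    return pairs

# Toy normalized bag-of-words embedding over a tiny vocabulary.
VOCAB = ["sleep", "pain", "mood"]
def embed(text):
    v = np.array([text.lower().count(w) for w in VOCAB], dtype=float)
    n = np.linalg.norm(v)
    return v / n if n else v

sentences = [
    ("sd", "I have trouble with sleep lately."),
    ("sd", "My mood is fine."),
    ("qo", "How long has your sleep been bad?"),
]
pairs = extract_pairs(sentences, embed)
```

With these toy sentences, only the sleep statement clears the 0.6 threshold against the sleep question.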
{
"text": "Model. Similar to the MEDBOT model, the FOL-LOWUPBOT model used the pre-trained ConveRT model to compare the current dialog context (the preceding three turns) with the statements contained in the HealthBoard dataset using the inner product. The top-20 candidates are then retrieved and filtered using Maximal Marginal Rel- (Carbonell and Goldstein, 1998) with \u03bb = 0.5 to control for repetitions 6 . Next, we compute the similarity between the remaining selected questions and the questions included in the current dialog context (all preceding dialog turns) and we exclude candidates with similarity score 0.8 or higher. After filtering, the top ranking candidate is selected and the associated follow-up question is output.",
"cite_spans": [
{
"start": 324,
"end": 355,
"text": "(Carbonell and Goldstein, 1998)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Follow-Up Bot",
"sec_num": "3.2"
},
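The MMR re-ranking step can be sketched as follows. This is a generic Maximal Marginal Relevance implementation (Carbonell and Goldstein, 1998) with toy vectors standing in for ConveRT embeddings; the candidate-selection details around it (top-20 pool, 0.8 context filter) are omitted.

```python
import numpy as np

# Sketch of Maximal Marginal Relevance: iteratively pick the candidate that
# balances similarity to the query against similarity to already-selected
# items. lambda_=0.5 weights relevance and diversity equally.

def mmr(query, candidates, k, lambda_=0.5):
    selected = []
    remaining = list(range(len(candidates)))
    while remaining and len(selected) < k:
        def score(i):
            rel = float(candidates[i] @ query)                 # relevance
            red = max((float(candidates[i] @ candidates[j])    # redundancy
                       for j in selected), default=0.0)
            return lambda_ * rel - (1 - lambda_) * red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

query = np.array([0.8, 0.6])
cands = np.array([[1.0, 0.0],    # relevant
                  [1.0, 0.0],    # exact duplicate of candidate 0
                  [0.0, 1.0]])   # less relevant but diverse
order = mmr(query, cands, k=2)
```

After the most relevant candidate is taken, its duplicate is penalised heavily, so the diverse third candidate is selected next.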
{
"text": "As the name suggests, the role of the EMPATHY-BOT is to engage the user by showing empathy. For this bot, we use Roller et al. (2020) generative model which was pre-trained on a variant of Reddit discussion (Baumgartner et al., 2020) and finetuned on the ConvAI2 (Zhang et al., 2018) , Wizard of Wikipedia (Dinan et al., 2019) , Empathetic Dialogues (Rashkin et al., 2019), and Blended Skill Talk datasets (BST) to opti- 6 MMR is a measure for quantifying the extent to which a new item is both dissimilar to those already selected and similar to the target (here a selected question). A \u03bb value of 0.5 favors similarity and diversity equally, both matter equally. mize engagigness and humanness in open-domain conversation.",
"cite_spans": [
{
"start": 113,
"end": 133,
"text": "Roller et al. (2020)",
"ref_id": null
},
{
"start": 207,
"end": 233,
"text": "(Baumgartner et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 263,
"end": 283,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF23"
},
{
"start": 306,
"end": 326,
"text": "(Dinan et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 421,
"end": 422,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Empathy Bot",
"sec_num": "3.3"
},
{
"text": "Each bot provides a single candidate. To rank them, we encode the whole current dialog context and each candidate response using the ConveRT encoder, we calculate similarity (dot product) for each candidate/context pair and we select the candidate with highest similarity score. In case all candidates scores are less than 0.1, we consider that there is no good response and we end the conversation. Table 2 shows some statistics for the corpora used for pretraining (ConveRT, Blender) and for retrieval (INIT, HealthBoard). For MEDBOT and FOLLOWUPBOT, we use the ConveRT model from PolyAI 7 . For EMPATHYBOT, we use the Blender model with 90M parameters from the ParlAI library 8 .",
"cite_spans": [],
"ref_spans": [
{
"start": 400,
"end": 407,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Ensemble Model (ComBot)",
"sec_num": "3.4"
},
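The ensemble ranking step can be sketched as below. Toy vectors stand in for ConveRT encodings, and `select_response` and the bot names are illustrative, not the authors' code; only the max-similarity selection and the 0.1 termination threshold follow the text.

```python
import numpy as np

# Sketch: each bot proposes one candidate; the candidate whose embedding best
# matches the dialog-context embedding wins, and the conversation ends when
# every score falls below the threshold.

END_THRESHOLD = 0.1

def select_response(ctx_emb, candidates):
    """candidates: list of (bot_name, response, embedding)."""
    scored = [(float(emb @ ctx_emb), bot, resp) for bot, resp, emb in candidates]
    best_score, bot, resp = max(scored)
    if best_score < END_THRESHOLD:
        return None  # no good response -> end the conversation
    return bot, resp

ctx = np.array([1.0, 0.0])
cands = [("medbot", "How is your sleep?", np.array([0.9, 0.1])),
         ("followupbot", "What keeps you awake?", np.array([0.7, 0.3])),
         ("empathybot", "That sounds hard.", np.array([0.2, 0.8]))]
choice = select_response(ctx, cands)
```

With a zero context vector, every candidate scores below 0.1 and the function signals the end of the conversation.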
{
"text": "One benefit of the ensemble approach is that several models can be combined, each modelling different types of dialog requirements. We compare different configurations of our three bots: COMBOT (which combines the three bots), MEDBOT (using only the task-based bot), MED+EMPATHYBOT an ensemble model which combines the task-based ( MEDBOT) and the social bot (EMPATHYBOT) and MEDBOT+ FOL-LOWUPBOT, a bot combining the task-based and the follow-up question bot.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "We first use automatic metrics and global satisfaction scores to compare the four models. We restrict the Acute-Eval, human-based model comparison to the two best performing systems namely, COMBOT and MEDBOT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "As there does not exist a dataset of well-formed health-related dialogs whose aim is both to answer a clinical study questionaire and to allow for serendipitous interactions, we have no test set on which to compare the output of our dialog models. Moreover, as has been repeatedly argued, referencebased, automatic metrics such as BLEU or ME-TEOR, fail to do justice to the fact that a dialog context usually has many possible continuations. We therefore use reference-free automatic metrics and human assessment for evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "Human evaluation. We use the MTurk platform to collect human-bot dialogs for our four models (COMBOT, MEDBOT and MED+EMPATHYBOT) and ask the crowdworkers to provide a satisfaction rate at the end of their interaction with the bot. We then run a second MTurk crowdsourcing task to grade and compare dialogs pairs produced by different models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "To collect dialogs, we ask participants to interact with the bot for as long as they want. The conversation starts randomly with one of the initial questions of MEDBOT. The interaction stops either when all candidates scores are less than 0.1 (cf. Section 3.4) or when the user ends the conversation. For each model, we collect 50 dialogs. Each annotator interacts at most once with a bot.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "At the end of each human-bot conversation, the annotator is asked to rate satisfaction on a 1-5 Likert scale (a higher score indicates more satisfaction).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "Assigning a satisfaction score to a single dialog is a highly subjective task however with scores suffering from different bias and variance per annotators (Kulikov et al., 2019) . As argued by , comparing two dialogs, each produced by different models, and deciding on which dialog is best with respect to a predefined set of questions, helps support a more objective evaluation. We therefore use the Acute-Eval human evaluation framework to compare the dialogs collected using different bots. Since the automatic evaluation (cf. Section 5.1) shows that COMBOT and MEDBOT are the best systems, we compare only these two systems asking annotators to read pairs of dialogs created by these two bots and to then answer the pre-defined set of questions recommended by 's evaluation protocol namely:",
"cite_spans": [
{
"start": 156,
"end": 178,
"text": "(Kulikov et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "\u2022 Who would you prefer to talk to for a long conversation?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "\u2022 If you had to say one of the speakers is interesting and one is boring, who would you say is more interesting?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "\u2022 Which speaker sounds more human?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "\u2022 Which speaker has more coherent responses in the conversation?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "We report the percentage of time one model was chosen over the other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "For this comparison, we consider 50 dialog pairs (one dialog produced by COMBOT, the other by MEDBOT) and for each Acute-Eval question, collected 50 judgments, one per dialog pair. We had ten annotators, each annotating at most 5 dialog pairs. To maximise similarity between the dialogs being compared, we create the dialog pairs by computing euclidean distance between context embeddings of MEDBOT and COMBOT dialogue sets. Then we composed a pair of two closest items and excluded them from the choice in the next iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
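The greedy closest-pair matching described above can be sketched as follows. Toy 2-d vectors stand in for the dialog context embeddings, and `greedy_pairs` is an illustrative name; the sketch pairs the globally closest remaining dialogs first, removing each matched dialog from further consideration.

```python
import numpy as np

# Sketch: greedily match each COMBOT dialog with the closest remaining
# MEDBOT dialog by Euclidean distance between context embeddings.

def greedy_pairs(a_embs, b_embs):
    # all cross distances as (distance, i, j) triples
    dists = [(float(np.linalg.norm(a_embs[i] - b_embs[j])), i, j)
             for i in range(len(a_embs)) for j in range(len(b_embs))]
    pairs, taken_a, taken_b = [], set(), set()
    for d, i, j in sorted(dists):           # closest pairs first
        if i not in taken_a and j not in taken_b:
            pairs.append((i, j))
            taken_a.add(i)
            taken_b.add(j)
    return pairs

a = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]  # e.g. COMBOT embeddings
b = [np.array([4.0, 5.0]), np.array([0.0, 1.0])]  # e.g. MEDBOT embeddings
pairs = greedy_pairs(a, b)
```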
{
"text": "Automatic Metrics. After collecting dialogues we perform their automatic evaluation. All scores are computed on the 50 bot-human dialogs collected for a given model. Table 3 shows the result scores averaged over 50 dialogs.",
"cite_spans": [],
"ref_spans": [
{
"start": 166,
"end": 173,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "To measure coherence, we exploit the unsupervised model CoSim introduced by Mesgar et al. (2019); Xu et al. (2018) ; Zhang et al. (2017) . This model measures the coherence of a dialog as the average of the cosine similarities between ConveRT embedding vectors of its adjacent turns.",
"cite_spans": [
{
"start": 98,
"end": 114,
"text": "Xu et al. (2018)",
"ref_id": "BIBREF19"
},
{
"start": 117,
"end": 136,
"text": "Zhang et al. (2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
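The CoSim score can be sketched directly from its definition: the average cosine similarity between embeddings of adjacent turns. Toy vectors stand in for the ConveRT embeddings.

```python
import numpy as np

# Sketch of the CoSim coherence metric: average cosine similarity between
# adjacent turn embeddings of a dialog.

def cosim(turn_embs):
    sims = []
    for a, b in zip(turn_embs, turn_embs[1:]):
        sims.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
    return sum(sims) / len(sims)

# Three toy turns: the first two are identical (cos = 1), the third is
# orthogonal to the second (cos = 0), so CoSim = (1 + 0) / 2 = 0.5.
turns = [np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 2.0])]
score = cosim(turns)
```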
{
"text": "To assess task success, we count the number of unique medical entities (Slots) mentioned. We do this using the clinical NER-model from gthe Stanza library (Zhang et al., 2020) 9 , a model trained on the 2010 i2b2/VA dataset (Uzuner et al., 2011) to extract named entities denoting a medical problem, test or treatment. We report the average number of medical entities both per dialog and in the user turns (to assess how much medical information comes from the user).",
"cite_spans": [
{
"start": 155,
"end": 175,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF24"
},
{
"start": 224,
"end": 245,
"text": "(Uzuner et al., 2011)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "Following , we also calculate Information gain (InfoGain), the average number of unique tokens per dialog and Conversation Length (ConvLen), the average number of turns in the overall dialog. Finally, we compute the number of questions asked by the user (UserQ) as an indication of the user trust and engagement. We compute both the total number of questions present in the 50 dialog collected for a given model and the average number of question per dialog.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
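The reference-free metrics above can be sketched on a single dialog as below. The whitespace tokenisation and the question-mark heuristic for UserQ are simplifications for illustration; `dialog_metrics` is a hypothetical helper, not the authors' code.

```python
# Sketch: per-dialog InfoGain (unique tokens), ConvLen (number of turns)
# and UserQ (questions appearing in user turns).

def dialog_metrics(turns):
    """turns: list of (speaker, text) with speaker in {'user', 'bot'}."""
    tokens = {tok for _, text in turns for tok in text.lower().split()}
    user_q = sum(text.count("?") for spk, text in turns if spk == "user")
    return {"InfoGain": len(tokens), "ConvLen": len(turns), "UserQ": user_q}

dialog = [("bot", "How did you sleep ?"),
          ("user", "Badly . Why do you ask ?"),
          ("bot", "I ask because sleep matters .")]
m = dialog_metrics(dialog)
```

In the paper, these per-dialog counts are then averaged over the 50 collected dialogs per model.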
{
"text": "We compare four models using automatic metric and absolute satisfaction scores. Based on this first evaluation, we compare two of these models using the Acute-Eval human evaluation framework. We display an example dialog and discuss the respec-tive use of each bot in the COMBOT model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "Satisfaction Scores Table 3 shows the absolute satisfaction scores (i.e., scores provided on the basis of a single dialog rather than by comparing dialogs produced by different models) and the results of the automatic evaluation for the four models mentioned above.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 27,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Automatic Evaluation and Absolute",
"sec_num": "5.1"
},
{
"text": "ComBot provides a better basis for collecting information than MedBot. The automatic scores show that COMBOT consistently outperforms MEDBOT on informativity (Slots, InfoGrain) while allowing for shorter dialogs (ConvLen). In other words, COMBOT allows for a larger range of informational units (words and medical named entities) to be discussed in fewer turns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation and Absolute",
"sec_num": "5.1"
},
{
"text": "ComBot collects information in a more user-friendly, more efficient manner than Med+EmpathyBot. While the InfoGain scores are higher for MED+EMPATHYBOT and MEDBOT+FOLLOWUPBOT than for COMBOT (InfoGain: 140.19 and 153.23 vs. 124.82) , this is achieved at the cost of much longer dialogs (ConvLen: 30.29 and 36.06 vs. 21.96 ; cf. also Figure 2b ) In fact, when normalising InfoGain by the number of dialog turns (ConvLen), we see that in average, a turn in COMBOT dialogs contains a much higher number of unique tokens (i.e., is more informative) than for MEDBOT (3.82), MEDBOT+EMPATHYBOT (4.63) or MEDBOT+FOLLOWUPBOT (4.25).",
"cite_spans": [
{
"start": 191,
"end": 231,
"text": "(InfoGain: 140.19 and 153.23 vs. 124.82)",
"ref_id": null
},
{
"start": 286,
"end": 321,
"text": "(ConvLen: 30.29 and 36.06 vs. 21.96",
"ref_id": null
}
],
"ref_spans": [
{
"start": 333,
"end": 342,
"text": "Figure 2b",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Automatic Evaluation and Absolute",
"sec_num": "5.1"
},
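The per-turn normalisation can be checked directly from the InfoGain and ConvLen averages quoted in the text (MEDBOT's raw averages are not given there, so its 3.82 figure is omitted here):

```python
# Per-turn informativity: InfoGain / ConvLen, using the averages reported
# in the text for the three ensembles.
per_turn = {name: round(info / conv, 2)
            for name, (info, conv) in {
                "ComBot": (124.82, 21.96),
                "Med+EmpathyBot": (140.19, 30.29),
                "MedBot+FollowUpBot": (153.23, 36.06),
            }.items()}
```

The quotients reproduce the per-turn values cited in the text (4.63 and 4.25 for the two larger ensembles) and yield 5.68 unique tokens per turn for COMBOT.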
{
"text": "ComBot allows for more coherent dialogs. In terms of quality, the differences in satisfaction scores between the three models is not statistically significant (p < 0.05, T-test). For dialog coherence (Measured by CoSim) however, COMBOT scores highest (0.36) and the difference with MED-BOT is statistically significant (p < 0.05, T-test). This suggests that follow up questions help support smoother transitions between dialog turns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation and Absolute",
"sec_num": "5.1"
},
{
"text": "The results of the comparative human evaluation are presented in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 73,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Comparative Human Evaluation",
"sec_num": "5.2"
},
{
"text": "ComBot is judged more knowledgeable, more interesting, more human and better for long conversations. COMBOT outperforms MEDBOT on all Acute-Eval questions (Figure 2c ).",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 165,
"text": "(Figure 2c",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Comparative Human Evaluation",
"sec_num": "5.2"
},
{
"text": "In particular, users find COMBOT more knowledgeable by a large margin. This is in line with the automatic metrics results (higher COMBOT values for Slots and InfoGain) and is likely due to the fact that the COMBOT model supports the use of healthrelated, follow-up questions which in turn allows for a wider range of medical issues to be discussed than just those present in the MEDBOT corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparative Human Evaluation",
"sec_num": "5.2"
},
{
"text": "Users also show a clear preference for COMBOT in long conversations (Figure 2a ). While this seems to contradict the fact that both models have similar satisfaction score, we conjecture that the high MEDBOT satisfaction score is an artefact of the MEDBOT model. Since the MEDBOT coverage is restricted, the users have low expectations and correspondingly give high satisfaction scores (they are easily satisfied because their expectations are low). An indication of these low user expectations is given by the number of questions asked : when users feel that the system they interact with is unrestricted, they will feel comfortable asking questions and will start to do so. Conversely, if they feel the model is restricted, they will refrain from asking questions. The results show a much higher number of questions for users interacting with COMBOT (Table 3) 5.3 Component analysis Figure 3 displays an example Human-Bot dialog using the COMBOT model which illustrates the interactions between the three composing bots: the EMPATHYBOT closes the conversation with social chit-chat, the FOLLOWUPBOT responds to the user turn and MEDBOT asks questions from the dialog tree whenever suitable.",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 78,
"text": "(Figure 2a",
"ref_id": "FIGREF0"
},
{
"start": 851,
"end": 860,
"text": "(Table 3)",
"ref_id": "TABREF4"
},
{
"start": 884,
"end": 892,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparative Human Evaluation",
"sec_num": "5.2"
},
{
"text": "The proportion of turns generated by each bot (cf. Figure 2d ) varies from one dialog to another, illustrating the capacity of the ensemble model to adapt to various dialog users and contexts. We find that in 55% of the collected dialogs, a majority of turns (i.e., more than 33% of the turns) is generated by the EMPATHYBOT model; in 29% of the cases by the FOLLOWUPBOT and in 16% of the cases by the MEDBOT 10 We also observe interesting dependencies and correlations. MEDBOT is triggered twice more of- 10 Since a COMBOT dialog has an average of 21 turns and only half of those are generated by the bot, this means that for 55% of the collected dialogs, the dialog contains more than 3 \"social\" dialog turns (turns generated by EMPATHYBOT). Similarly, 29% of the collected dialogs contain more than 3 follow-up turns (FOLLOWUPBOT) and 16% more than 3 task-based turns (MEDBOT).",
"cite_spans": [
{
"start": 409,
"end": 411,
"text": "10",
"ref_id": null
},
{
"start": 506,
"end": 508,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 51,
"end": 60,
"text": "Figure 2d",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Comparative Human Evaluation",
"sec_num": "5.2"
},
{
"text": "The modifications consists in shortening the questions, changing all leaves to statements and adding meta-statements about the dialog to account for cases where the user indicates misunderstanding or agreement",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Both contexts are encoded using ConveRT as average of embeddings of the last turn and concatenation of preceding ones. The inner product is used to compute similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.healthboards.com/ 4 https://huggingface.co/distilbert-base-uncased5 We do not restrict the set of candidates at that stage i.e., we consider all posts that precede the question within the question thread and all statements in these posts no matter how far away the statement is from the question. In practice, the set of such statements has limited size and distance does not seem to matter too much although an investigation of that factor would be interesting. We leave this question open for further research as it is not central to our paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/connorbrinton/polyaimodels/releases/tag/v1.0 8 https://parl.ai/projects/recipes/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://stanza.run/bio",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their feedback. We would like to acknowledge Farnaz Ghassemi for her help in developing the FOLLOWUP-BOT. We gratefully acknowledge the support of the ALIAE company, the French National Center for Scientific Research, and the ANALGESIA Institute Foundation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": " Figure 3 : Example Human-ComBot dialog ten after FOLLOWUPBOT (30 cases) than after EM-PATHYBOT (12 cases) -this indicates that followup questions help bringing the user back to the questions contained in the dialog tree.",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 9,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "A qualitative analysis of the collected dialogs indicates several directions for further research.Negation is often not recognised leading to interactions in which the model continues discussing a topic which was declared as irrelevant by the user. Another difficulty is knowing when to end the conversation. Long ones are good to complete the task, but bad for people who are ready to finish conversation but feel forced to continue. To improve user engagement, a possibility would be to explore whether the information provided by sentiment analysers could be exploited to help maintain a positive interaction. By detecting polarity, it could also help improve negation handling.Another key issue concerns the emotional impact of the dialog on the user. An interaction with the bot might highlight a health issue the user was not aware of resulting in increased user stress. In such a situation, a good policy would be to provide the user with some notion of solution, some piece of information or advice which can help her face the situation and if possible, incite her to act to improve her health. Indeed some of the dialogs collected with COMBOT show that users sometimes ask for help. Here a knowledge-based agent could be useful either to provide facts that are related to the topic at hand or to highlight the connections between facts that have been mentioned in the dialog.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Towards a Human-like Open-Domain Chatbot",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Adiwardana",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "David",
"middle": [
"R"
],
"last": "So",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Fiedel",
"suffix": ""
},
{
"first": "Romal",
"middle": [],
"last": "Thoppilan",
"suffix": ""
},
{
"first": "Zi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Apoorv",
"middle": [],
"last": "Kulshreshtha",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Nemade",
"suffix": ""
},
{
"first": "Yifeng",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Adiwardana, Minh-Thang Luong, David R So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu Quoc, and V Le. Towards a Human-like Open- Domain Chatbot. Technical report.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The pushshift reddit dataset",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Baumgartner",
"suffix": ""
},
{
"first": "Savvas",
"middle": [],
"last": "Zannettou",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Keegan",
"suffix": ""
},
{
"first": "Megan",
"middle": [],
"last": "Squire",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Blackburn",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Baumgartner, Savvas Zannettou, Brian Keegan, Megan Squire, and Jeremy Blackburn. 2020. The pushshift reddit dataset.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Use of MMR, diversity-based reranking for reordering documents and producing summaries",
"authors": [
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Jade",
"middle": [],
"last": "Goldstein",
"suffix": ""
}
],
"year": 1998,
"venue": "SIGIR Forum",
"volume": "",
"issue": "",
"pages": "335--336",
"other_ids": {
"DOI": [
"10.1145/3130348.3130369"
]
},
"num": null,
"urls": [],
"raw_text": "Jaime Carbonell and Jade Goldstein. 1998. Use of MMR, diversity-based reranking for reordering doc- uments and producing summaries. In SIGIR Fo- rum (ACM Special Interest Group on Information Retrieval), pages 335-336, New York, New York, USA. ACM Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Wizard of wikipedia: Knowledge-powered conversational agents",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Kurt",
"middle": [],
"last": "Shuster",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "It's good to chat? evaluation and design guidelines for combining open-domain social conversation with task-based dialogue in intelligent buildings",
"authors": [
{
"first": "Nancie",
"middle": [],
"last": "Gunson",
"suffix": ""
},
{
"first": "Weronika",
"middle": [],
"last": "Siei\u0144ska",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Walsh",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Dondrup",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Lemon",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents, IVA '20",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3383652.3423889"
]
},
"num": null,
"urls": [],
"raw_text": "Nancie Gunson, Weronika Siei\u0144ska, Christopher Walsh, Christian Dondrup, and Oliver Lemon. 2020. It's good to chat? evaluation and design guide- lines for combining open-domain social conversa- tion with task-based dialogue in intelligent buildings. In Proceedings of the 20th ACM International Con- ference on Intelligent Virtual Agents, IVA '20, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Convert: Efficient and accurate conversational representations from transformers",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Casanueva",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Henderson, I\u00f1igo Casanueva, Nikola Mrk\u0161i\u0107, Pei-Hao Su, Tsung-Hsien Wen, and Ivan Vuli\u0107. 2019. Convert: Efficient and accurate conversa- tional representations from transformers.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Importance of search and evaluation strategies in neural dialogue modeling",
"authors": [
{
"first": "Ilia",
"middle": [],
"last": "Kulikov",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"H"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilia Kulikov, Alexander H. Miller, Kyunghyun Cho, and Jason Weston. 2019. Importance of search and evaluation strategies in neural dialogue modeling.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Acute-eval: Improved dialogue evaluation with optimized questions and multi-turn comparisons",
"authors": [
{
"first": "Margaret",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.03087"
]
},
"num": null,
"urls": [],
"raw_text": "Margaret Li, Jason Weston, and Stephen Roller. 2019. Acute-eval: Improved dialogue evaluation with opti- mized questions and multi-turn comparisons. arXiv preprint arXiv:1909.03087.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning healthbots from training data that was automatically created using paraphrase detection and expert knowledge",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Liednikova",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Jolivet",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Durand-Salmon",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Gardent",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Liednikova, Philippe Jolivet, Alexandre Durand- Salmon, and Claire Gardent. 2020. Learning health- bots from training data that was automatically cre- ated using paraphrase detection and expert knowl- edge. In Proceedings of the International Confer- ence on Computational Linguistics (COLING).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Neural Model for Dialogue Coherence Assessment",
"authors": [
{
"first": "Mohsen",
"middle": [],
"last": "Mesgar",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "B\u00a8ucker",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohsen Mesgar, Sebastian B\u00a8Ucker, and Iryna Gurevych. 2019. A Neural Model for Dialogue Co- herence Assessment. Technical report.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Alana: Social Dialogue using an Ensemble Model and a Ranker trained on User Feedback",
"authors": [
{
"first": "Ioannis",
"middle": [],
"last": "Papaioannou",
"suffix": ""
},
{
"first": "Amanda",
"middle": [
"Cercas"
],
"last": "Curry",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Part",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Shalyminov",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Xinnuo",
"suffix": ""
},
{
"first": "Yanchao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Dusek",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Lemon",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ioannis Papaioannou, Amanda Cercas Curry, Jose Part, Igor Shalyminov, Xu Xinnuo, Yanchao Yu, Ondrej Dusek, Verena Rieser, and Oliver Lemon. 2017a. Alana: Social Dialogue using an Ensemble Model and a Ranker trained on User Feedback. In 2017",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Alexa Prize Proceedings",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexa Prize Proceedings.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "An ensemble model with ranking for social dialogue",
"authors": [
{
"first": "Ioannis",
"middle": [],
"last": "Papaioannou",
"suffix": ""
},
{
"first": "Amanda",
"middle": [
"Cercas"
],
"last": "Curry",
"suffix": ""
},
{
"first": "Jose",
"middle": [
"L"
],
"last": "Part",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Shalyminov",
"suffix": ""
},
{
"first": "Xinnuo",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yanchao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Lemon",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1712.07558"
]
},
"num": null,
"urls": [],
"raw_text": "Ioannis Papaioannou, Amanda Cercas Curry, Jose L Part, Igor Shalyminov, Xinnuo Xu, Yanchao Yu, Ond\u0159ej Du\u0161ek, Verena Rieser, and Oliver Lemon. 2017b. An ensemble model with ranking for social dialogue. arXiv preprint arXiv:1712.07558.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Towards empathetic opendomain conversation models: A new benchmark and dataset",
"authors": [
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"Michael"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Y-Lan",
"middle": [],
"last": "Boureau",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5370--5381",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1534"
]
},
"num": null,
"urls": [],
"raw_text": "Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic open- domain conversation models: A new benchmark and dataset. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 5370-5381, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Can you put it all together: Evaluating conversational agents' ability to blend skills",
"authors": [
{
"first": "Eric",
"middle": [
"Michael"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Williamson",
"suffix": ""
},
{
"first": "Kurt",
"middle": [],
"last": "Shuster",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Y-Lan",
"middle": [],
"last": "Boureau",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Michael Smith, Mary Williamson, Kurt Shuster, Jason Weston, and Y-Lan Boureau. 2020. Can you put it all together: Evaluating conversational agents' ability to blend skills.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Two are Better than One: An Ensemble of Retrieval-and Generation-Based Dialog Systems",
"authors": [
{
"first": "Yiping",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiping Song, Rui Yan, Xiang Li, Dongyan Zhao, and Ming Zhang. 2016. Two are Better than One: An Ensemble of Retrieval-and Generation-Based Dia- log Systems.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Dialogue act modeling for automatic tagging and recognition of conversational speech",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Ries",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Coccaro",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Carol",
"middle": [],
"last": "Van Ess-Dykema",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Meteer",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "3",
"pages": "339--374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke, Klaus Ries, Noah Coccaro, Eliza- beth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue act modeling for au- tomatic tagging and recognition of conversational speech. Computational Linguistics, 26(3):339-374.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "i2b2/va challenge on concepts, assertions, and relations in clinical text",
"authors": [
{
"first": "\u00d6",
"middle": [],
"last": "Uzuner",
"suffix": ""
},
{
"first": "B",
"middle": [
"R"
],
"last": "South",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "S",
"middle": [
"L"
],
"last": "Duvall",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of the American Medical Informatics Association",
"volume": "18",
"issue": "5",
"pages": "552--556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u00d6. Uzuner, B.R. South, S. Shen, and S.L. DuVall. 2011. 2010 i2b2/va challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Association, 18(5):552-556.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Better conversations by modeling, filtering, and optimizing for coherence and diversity",
"authors": [
{
"first": "Xinnuo",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Konstas",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/d18-1432"
]
},
"num": null,
"urls": [],
"raw_text": "Xinnuo Xu, Ond\u0159ej Du\u0161ek, Ioannis Konstas, and Ver- ena Rieser. 2018. Better conversations by modeling, filtering, and optimizing for coherence and diversity. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning conversational systems that interleave task and non-task content",
"authors": [
{
"first": "Zhou",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"I"
],
"last": "Rudnicky",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI'17",
"volume": "",
"issue": "",
"pages": "4214--4220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou Yu, Alan W. Black, and Alexander I. Rudnicky. 2017. Learning conversational systems that inter- leave task and non-task content. In Proceedings of the 26th International Joint Conference on Artifi- cial Intelligence, IJCAI'17, page 4214-4220. AAAI Press.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Strategy and policy learning for nontask-oriented conversational systems",
"authors": [
{
"first": "Zhou",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ziyu",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rudnicky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "404--412",
"other_ids": {
"DOI": [
"10.18653/v1/W16-3649"
]
},
"num": null,
"urls": [],
"raw_text": "Zhou Yu, Ziyu Xu, Alan W Black, and Alexander Rud- nicky. 2016. Strategy and policy learning for non- task-oriented conversational systems. In Proceed- ings of the 17th Annual Meeting of the Special Inter- est Group on Discourse and Dialogue, pages 404- 412, Los Angeles. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Reinforcing Coherence for Sequence to Sequence Model in Dialogue Generation",
"authors": [
{
"first": "Hainan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yanyan",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Jiafeng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xueqi",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hainan Zhang, Yanyan Lan, Jiafeng Guo, Jun Xu, and Xueqi Cheng. 2017. Reinforcing Coherence for Se- quence to Sequence Model in Dialogue Generation. Technical report.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Personalizing dialogue agents: I have a dog, do you have pets too?",
"authors": [
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Urbanek",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2204--2213",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1205"
]
},
"num": null,
"urls": [],
"raw_text": "Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Per- sonalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204- 2213, Melbourne, Australia. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Biomedical and Clinical English Model Packages in the Stanza Python NLP Library",
"authors": [
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuhui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Curtis",
"middle": [
"P"
],
"last": "Langlotz",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuhao Zhang, Yuhui Zhang, Peng Qi, Christopher D. Manning, and Curtis P. Langlotz. 2020. Biomedical and Clinical English Model Packages in the Stanza Python NLP Library.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "(a) Distribution of the Satisfaction Scores for each configuration, (b) Conversation length distribution for MedBot and ComBot, (c) Acute-Eval results for both systems, (d) Majority bot ratio in COMBOT",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"text": "Fragment of decision tree for the sleep domain and a corresponding dialog",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Category</td><td>Threads</td><td colspan=\"2\">Posts Avg</td></tr><tr><td>anxiety</td><td colspan=\"3\">6852 38523 5.63</td></tr><tr><td>anxiety tips</td><td>42</td><td colspan=\"2\">71 1.69</td></tr><tr><td>chronic fatigue</td><td>670</td><td colspan=\"2\">3856 5.77</td></tr><tr><td>chronic pain</td><td>646</td><td colspan=\"2\">4893 7.59</td></tr><tr><td>depression</td><td colspan=\"3\">5327 32998 6.21</td></tr><tr><td>depression tips</td><td>27</td><td colspan=\"2\">51 1.89</td></tr><tr><td>exercise fitness</td><td>1583</td><td colspan=\"2\">8142 5.16</td></tr><tr><td>general health</td><td colspan=\"3\">7279 29858 4.11</td></tr><tr><td>healthy lifestyle</td><td>104</td><td colspan=\"2\">621 5.97</td></tr><tr><td>pain management</td><td colspan=\"3\">4985 38738 7.79</td></tr><tr><td>panic disorders</td><td>1314</td><td colspan=\"2\">8376 6.39</td></tr><tr><td>share your anxiety story</td><td>42</td><td>42</td><td>1</td></tr><tr><td>share your depression story</td><td>55</td><td colspan=\"2\">71 1.29</td></tr><tr><td>share your pain story</td><td>28</td><td colspan=\"2\">42 1.50</td></tr><tr><td>sleep disorders</td><td>1671</td><td colspan=\"2\">7656 4.59</td></tr><tr><td>stress</td><td>415</td><td colspan=\"2\">1973 4.76</td></tr></table>",
"num": null
},
"TABREF1": {
"text": "",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF3": {
"text": "Corpus statistics (Reddit: pre-training corpus for ConveRT and the Empathy bot. ConvAI2, WoW, Empa-Dial and BSD: Datasets used to fine-tune the Empathy Bot. INIT: used for the MedBot retrieval step. HealthBoard: for FollowUp Bot Fine-Tuning and Retrieval .)",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Model</td><td colspan=\"2\">Satisf. CoSim</td><td colspan=\"2\">Slots ConvLen</td><td>InfoGain</td><td>UserQ</td></tr><tr><td>MEDBOT</td><td>3.94</td><td>0.26</td><td>6.24 (1.68)</td><td colspan=\"2\">28.46 108.82 (3.82)</td><td>0.08 (4)</td></tr><tr><td>MEDBOT+ FOLLOWUPBOT</td><td>3.18</td><td colspan=\"2\">0.34 11.65 (3.22)</td><td colspan=\"3\">36.06 153.23 (4.25) 0.47 (23)</td></tr><tr><td>MEDBOT+ EMPATHYBOT</td><td>3.77</td><td>0.34</td><td>3.87 (1.46)</td><td colspan=\"3\">30.29 140.19 (4.63) 0.68 (33)</td></tr><tr><td>COMBOT</td><td>3.72</td><td>0.36</td><td>7.12 (2.82)</td><td colspan=\"3\">21.96 124.82 (5.68) 0.48 (24)</td></tr></table>",
"num": null
},
"TABREF4": {
"text": "",
"type_str": "table",
"html": null,
"content": "<table><tr><td>: Satisfaction Scores (Satisf.) and Results of the Automatic Evaluation. CoSim: Average Cosine Similarity</td></tr><tr><td>between adjacent turns. Slots: Average Number of Medical Entities per dialogue (in brackets: average number in</td></tr><tr><td>the user turns). ConvLen: Average Number of turns per dialog. InfoGain: Average number of unique tokens per</td></tr><tr><td>dialog (in brackets: normalised by dialog length). UserQ: number of questions asked by Human (in bracket: total</td></tr><tr><td>number for 50 dialogs). All metrics are averaged over the 50 Human-Bot dialogs collected for each model.</td></tr></table>",
"num": null
}
}
}
}