{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:31:34.511307Z"
},
"title": "Construction of Hierarchical Structured Knowledge-based Recommendation Dialogue Dataset and Dialogue System",
"authors": [
{
"first": "Takashi",
"middle": [],
"last": "Kodama",
"suffix": "",
"affiliation": {},
"email": "kodama@nlp.ist.i.kyoto-u.ac.jp"
},
{
"first": "Ribeka",
"middle": [],
"last": "Tanaka",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ochanomizu University",
"location": {
"addrLine": "2-1-1 Otsuka, Bunkyo-ku",
"postCode": "112-8610",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": "tanaka.ribeka@is.ocha.ac.jp"
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We work on a recommendation dialogue system that helps a user understand the appealing points of a target item (e.g., a movie). In such dialogues, the recommendation system needs to utilize structured external knowledge to make informative and detailed recommendations. However, there is no dialogue dataset with structured external knowledge designed for making detailed recommendations about a single target. Therefore, we construct a dialogue dataset, Japanese Movie Recommendation Dialogue (JMRD), in which the recommender recommends one movie over a long dialogue (23 turns on average). The external knowledge used in this dataset is hierarchically structured and includes the title, cast, reviews, and plot. Every recommender utterance is associated with the external knowledge related to it. We then create a movie recommendation dialogue system that considers the structure of the external knowledge and the history of the knowledge used. Experimental results show that the proposed model is superior in knowledge selection to the baseline models.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "We work on a recommendation dialogue system that helps a user understand the appealing points of a target item (e.g., a movie). In such dialogues, the recommendation system needs to utilize structured external knowledge to make informative and detailed recommendations. However, there is no dialogue dataset with structured external knowledge designed for making detailed recommendations about a single target. Therefore, we construct a dialogue dataset, Japanese Movie Recommendation Dialogue (JMRD), in which the recommender recommends one movie over a long dialogue (23 turns on average). The external knowledge used in this dataset is hierarchically structured and includes the title, cast, reviews, and plot. Every recommender utterance is associated with the external knowledge related to it. We then create a movie recommendation dialogue system that considers the structure of the external knowledge and the history of the knowledge used. Experimental results show that the proposed model is superior in knowledge selection to the baseline models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, research on recommendation dialogue systems, in which a system recommends something to users through dialogue, has attracted much attention. Here, we focus on movie recommendations. A recommendation dialogue consists of two phases: (1) eliciting the user's preferences and selecting a movie from several candidates, and (2) providing in-depth information about the selected movie. We focus on the latter phase in this study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To provide in-depth information, the use of external knowledge is crucial. There has been much research on incorporating external knowledge in dialogue, and many kinds of knowledge-grounded dialogue datasets have been proposed (Dinan et al., 2019; Liu et al., 2020) . These datasets often use plain texts or knowledge graphs as external knowledge. If the hierarchically structured knowledge is available in recommendation dialogues, it allows for more appropriate knowledge selection and informative response generation. However, there is no dialogue dataset with hierarchically structured knowledge to provide rich information for a single target (e.g., a movie).",
"cite_spans": [
{
"start": 227,
"end": 247,
"text": "(Dinan et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 248,
"end": 265,
"text": "Liu et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address the aforementioned problem, we propose a dialogue dataset, Japanese Movie Recommendation Dialogue (JMRD), in which recommendation dialogues are paired with the corresponding external knowledge. The dataset consists of about 5,200 dialogues between crowd workers, each with 23 turns on average. Our dataset thus provides in-depth movie recommendations that utilize various knowledge about a movie over a relatively large number of dialogue turns. Specifically, as shown in Figure 1 , one speaker (recommender) recommends a movie to the other speaker (seeker). Only the recommender has access to the knowledge about the movie, and they are asked to use the external knowledge as much as possible in their utterances. The recommenders annotate the knowledge they used when sending each utterance. This procedure enables us to associate every recommender's utterance with the corresponding external knowledge. The external knowledge is hierarchically structured into knowledge types common to all movies (e.g., \"Title\", \"Released Year\") and knowledge contents specific to each movie (e.g., \"Rise of Planet of the Apes\", \"August 5, 2011\").",
"cite_spans": [],
"ref_spans": [
{
"start": 510,
"end": 518,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
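The two-level structure described above, with knowledge types shared across movies and knowledge contents specific to each movie, can be sketched as a simple mapping. The field names and example values below are illustrative assumptions, not the actual JMRD schema.

```python
# Illustrative two-level knowledge structure for one movie.
# Keys are the shared knowledge types; values are movie-specific contents.
# Field names and example values are hypothetical, not the JMRD schema.
movie_knowledge = {
    "Title": ["Rise of Planet of the Apes"],
    "Released Year": ["August 5, 2011"],
    "Director": ["Rupert Wyatt"],
    "Cast": ["James Franco", "Andy Serkis"],
    "Genre": ["Science fiction"],
    "Review": ["The ape CG is stunning.", "A moving origin story."],
    "Plot": ["A scientist tests a new drug on a chimpanzee named Caesar."],
}

# Knowledge types are common to all movies; contents vary per movie.
knowledge_types = list(movie_knowledge)
knowledge_contents = [c for contents in movie_knowledge.values() for c in contents]
```

Each recommender utterance can then be annotated with (type, content) pairs drawn from this pool.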
{
"text": "We also propose a strong baseline model for the constructed dataset. This model considers the history of knowledge types and contents, noting that the order in which each piece of knowledge is used is essential in recommendation dialogues. The experimental results show that our proposed model can select appropriate knowledge with higher accuracy than the baseline models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions are three-fold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We construct a movie recommendation dialogue dataset associated with hierarchically structured external knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a strong baseline model, which selects knowledge based on hierarchically structured knowledge, for our dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 To the best of our knowledge, we are the first to construct a human-to-human dialogue dataset based on external knowledge in Japanese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recommendation dialogue has long attracted attention. However, most existing work targets goal-oriented dialogues in which the user's preferences are elicited from multiple recommendation candidates and a recommendation target is decided according to those preferences (Li et al., 2018). Li et al. (2018) propose REDIAL, a human-to-human movie recommendation dialogue dataset. The recommender presents several movies in one dialogue while inquiring about the seeker's preferences. Kang et al. (2019) collect the GoRecDial dataset in a gamified setting, where experts decide on a movie similar to the seeker's preference from a small set of movies (five movies) in a minimal number of turns. OpenDialKG (Moon et al., 2019) is a recommendation and chit-chat dataset linking open-ended dialogues to knowledge graphs. In this study, we focus on recommendation dialogue that provides in-depth information about a movie rather than deciding which movie to recommend. Research on knowledge-grounded dialogue has also been growing in the last few years. Zhou et al. (2018) collect a human-to-human chit-chat dialogue dataset by utilizing the Wikipedia articles of 30 famous movies. This dataset is unique in that it has two dialogue settings: either only one of the participants can see the knowledge, or both of them can. Moghe et al. (2018) also collect chit-chat dialogues about movies based on multiple types of knowledge: plot, review, Reddit comments, and fact table. Wizard of Wikipedia (Dinan et al., 2019) is an open-domain chit-chat dialogue dataset based on Wikipedia articles on 1,365 topics; it has become a standard benchmark in this research field. Su et al. (2020) collect a large Chinese chit-chat dialogue dataset (246,141 dialogues with 3,010,650 turns) about movies. Other dialogue datasets with external knowledge in Chinese are DuConv (Wu et al., 2019), KdConv, and DuRecDial (Liu et al., 2020). DuConv (Wu et al., 2019) combines dialogues with knowledge graphs to track the progress of the dialogue topic. KdConv is also a chit-chat dialogue corpus that consists of relatively long dialogues to allow deep discussions in multiple domains (movies, music, and travel). Liu et al. (2020) focus on multiple dialogue types (e.g., QA, chit-chat, recommendation) and collect a multi-domain dialogue dataset associated with a knowledge graph. Compared to these studies, our work differs in that it uses hierarchically structured knowledge that contains both factoid (e.g., title) and non-factoid (e.g., review) information to make recommendations.",
"cite_spans": [
{
"start": 259,
"end": 275,
"text": "Li et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 278,
"end": 294,
"text": "Li et al. (2018)",
"ref_id": "BIBREF7"
},
{
"start": 470,
"end": 488,
"text": "Kang et al. (2019)",
"ref_id": "BIBREF5"
},
{
"start": 689,
"end": 708,
"text": "(Moon et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 1043,
"end": 1061,
"text": "Zhou et al. (2018)",
"ref_id": "BIBREF24"
},
{
"start": 1315,
"end": 1334,
"text": "Moghe et al. (2018)",
"ref_id": "BIBREF12"
},
{
"start": 1486,
"end": 1506,
"text": "(Dinan et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 1656,
"end": 1672,
"text": "Su et al. (2020)",
"ref_id": "BIBREF19"
},
{
"start": 1849,
"end": 1866,
"text": "(Wu et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 1892,
"end": 1910,
"text": "(Liu et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 1920,
"end": 1937,
"text": "(Wu et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 2185,
"end": 2202,
"text": "Liu et al. (2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We choose movies as the domain for the recommendation dialogue because movies interest a broad audience and facilitate smooth dialogue. In addition, movie recommendation dialogue is open-domain in nature owing to the variety of movie topics, which is a desirable property for NLP research. In this section, we explain how JMRD was constructed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Japanese Movie Recommendation Dialogue",
"sec_num": "3"
},
{
"text": "The external knowledge is mainly collected from web texts such as Wikipedia. First, we select 261 movies based on the box-office revenue ranking. 1 For each of these movies, we collect movie information as external knowledge. The external knowledge consists of seven knowledge types: title, released year, director, cast, genre, review, and plot, as shown in Figure 1 . The title, released year, director, cast, and plot are extracted from the Wikipedia article of each movie (we allow at most one director and two cast members). For the director and the cast members, a brief description is also extracted from the first paragraph of each person's Wikipedia article. For the genre, we use the genre classification of Yahoo! Movies. 2 Reviews are collected by crowdsourcing using Yahoo! Crowdsourcing. 3 Each worker selects a movie that he or she has seen from the list of 261 movies and writes down three recommendations for the selected movie. As a result, we collected an average of 16.5 reviews per movie.",
"cite_spans": [
{
"start": 788,
"end": 789,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 359,
"end": 367,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "External Knowledge Collection",
"sec_num": "3.1"
},
{
"text": "We split the plot into sentences and present only the first ten sentences (or all sentences if there are fewer than ten) to reduce the burden on the recommender. In contrast, we use the reviews written by the workers as they are, without splitting them into sentences. We randomly selected five reviews of 15 to 80 characters for each movie from the collected reviews; those five reviews are used as the reviews for that movie.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "External Knowledge Collection",
"sec_num": "3.1"
},
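The review filtering and sampling step described above can be sketched as follows; `pick_reviews` and its seeded RNG are illustrative assumptions, not the authors' actual preprocessing code.

```python
import random

def pick_reviews(reviews, k=5, lo=15, hi=80, seed=0):
    """Keep reviews of lo-hi characters and randomly sample up to k of them,
    mirroring the 15-80 character / 5-review selection described above."""
    eligible = [r for r in reviews if lo <= len(r) <= hi]
    rng = random.Random(seed)  # fixed seed only for reproducibility here
    return rng.sample(eligible, k=min(k, len(eligible)))

reviews = ["short", "x" * 50, "y" * 20, "z" * 100, "w" * 80]
chosen = pick_reviews(reviews)  # too-short and too-long reviews are filtered out
```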
{
"text": "The two workers engaging in the movie recommendation dialogue have different roles: one is the recommender, and the other is the seeker. The flow of the dialogue takes place as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "3.2.1"
},
{
"text": "1. Either the recommender or the seeker can initiate the conversation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "3.2.1"
},
{
"text": "2. The recommender decides which movie to recommend from the movie list. The recommender can choose a movie he or she wants to recommend or a movie that matches the seeker's preference obtained from a few message exchanges. The recommender can access the movie knowledge after deciding the movie to recommend. On the other hand, the seeker is only shown the chat screen and cannot access knowledge about the movie. (Footnote 3: https://crowdsourcing.yahoo.co.jp/)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "3.2.1"
},
{
"text": "3. The recommender is instructed to use the presented knowledge as much as possible to recommend the movie. When the recommenders send an utterance, they must select the knowledge referred to in the utterance (multiple selections are allowed). For utterances that do not use any knowledge, such as greetings, the recommender can select the \"no knowledge\" option.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "3.2.1"
},
{
"text": "The seeker is only instructed to enjoy the dialogue and learn more about the recommended movie, and they can talk freely. This instruction follows that of Wizard of Wikipedia (Dinan et al., 2019) .",
"cite_spans": [
{
"start": 164,
"end": 184,
"text": "(Dinan et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "5. The dialogue continues for at least 20 turns after the movie is selected and can be terminated at any point after that.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "ParlAI (Miller et al., 2017) is a framework for collecting real-time chats via crowdsourcing. However, it is not easy to run Japanese tasks on the Amazon Mechanical Turk platform used by ParlAI. Therefore, we build a new framework for dialogue collection that incorporates crowdsourcing services through which more native Japanese speakers can be recruited. In our framework, when workers access the specified URL for dialogue collection, pair matching is performed, and a chat room is created for the workers to interact in real time.",
"cite_spans": [
{
"start": 7,
"end": 27,
"text": "(Miller et al., 2017",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Collection System",
"sec_num": "3.2.2"
},
{
"text": "The statistics are shown in Table 1 . Our dataset consists of 5,166 dialogues with 116,874 turns. The average number of words per utterance of the recommender is more than three times that of the seeker, probably because the recommender needs to talk more than the seeker to provide the information needed to recommend a movie. The average number of knowledge items per utterance is 1.3, and the recommender tends to mention each knowledge item separately. On average, 10.8 different types of knowledge were used per dialogue, indicating that we could collect dialogues with various types of external knowledge. Figure 2 shows the distribution of the knowledge types used. The number of utterances that did not use any knowledge was only about 20% of the total, indicating that most utterances use some kind of external knowledge. In addition, non-factoid texts such as reviews and plots tend to be used more frequently.",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "Table 1",
"ref_id": null
},
{
"start": 626,
"end": 634,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Statistics",
"sec_num": "3.2.3"
},
{
"text": "Furthermore, Figure 3 shows the distribution of the knowledge used in each dialogue turn of the recommender. In the early part of the dialogue, there are many utterances without knowledge, such as greetings, or utterances that mention the title. The recommenders often use factoid information such as released year, director, and cast in the middle of the dialogue. In the later part, non-factoid information such as reviews and plots is often used to convey specific content. In addition, after ten turns, the percentage of \"No knowledge\" increases again, as more generic recommendations such as \"please check it out\" are used. As this analysis shows, our dataset enables the study of human recommendation strategies.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 21,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Statistics",
"sec_num": "3.2.3"
},
{
"text": "We ask the dialogue participants to answer a post-task questionnaire in some of the collected dialogues (4,410 dialogues). For example, Q3 asks how familiar each participant is with the recommended movie, with the options [have seen the movie and remember the contents well / have seen the movie and remember some of the contents / have never seen the movie but know the plot / have never seen the movie but know only the title / do not know at all]. Q4 is for recommenders only, and Q5 is for seekers only. Table 2 shows the results of the questionnaire. We found that most of the workers were highly interested in the topic of movies (Q1), and both recommenders and seekers enjoyed the dialogue, although it was relatively long at more than 20 turns (Q2). In addition, from Q3, we can see that the recommenders recommended movies they knew, while the seekers were often recommended movies they did not know. Finally, from Q4 and Q5, it was confirmed that the collected dialogues sufficiently achieved the purpose of movie recommendation.",
"cite_spans": [],
"ref_spans": [
{
"start": 401,
"end": 408,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Post-task Questionnaire",
"sec_num": "3.2.4"
},
{
"text": "Each dialogue $D = \\{(x_l, y_l)\\}_{l=1}^{L}$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Outline",
"sec_num": "4.1"
},
{
"text": "in the dataset is paired with a knowledge pool $K = (k_t, k_c)$ about the movie recommended in that dialogue, where $x_l$ and $y_l$ are the utterances of the seeker and the recommender at turn $l$, and $L$ is the number of turns in $D$. In addition, $k_t = \\{k_{t,1}, \\ldots, k_{t,m}, \\ldots, k_{t,M}\\}$ are the knowledge types, $k_c = \\{k_{c,1}, \\ldots, k_{c,n}, \\ldots, k_{c,N}\\}$ are the knowledge contents, and $M$ and $N$ are the numbers of knowledge types and knowledge contents contained in $K$, respectively. At turn $l$, given the dialogue context (the current seeker's utterance $x_l$ and the last recommender's utterance $y_{l-1}$), the previously selected knowledge types $\\{\\hat{k}_t^1, \\ldots, \\hat{k}_t^{l-1}\\}$, and the previously selected knowledge contents $\\{\\hat{k}_c^1, \\ldots, \\hat{k}_c^{l-1}\\}$, our target is to select a piece of knowledge $\\hat{k}_c^l$ from $k_c$ and generate the response $y_l$ utilizing $\\hat{k}_c^l$. We call the previously selected knowledge types the \"knowledge type history\" and the previously selected knowledge contents the \"knowledge content history\" in this paper. Figure 4 shows the overview of the proposed model, which mainly consists of the Encoding Layer, the Knowledge Selection Layer, and the Decoding Layer. We describe each component in the following sections.",
"cite_spans": [],
"ref_spans": [
{
"start": 1007,
"end": 1015,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Outline",
"sec_num": "4.1"
},
{
"text": "The encoding layer is used to obtain the following representations: dialogue context, knowledge types, knowledge contents, knowledge type history, and knowledge content history. We use BERT (Devlin et al., 2019) as the encoder. For encoding the dialogue context, we obtain the hidden state $H_{x_l y_{l-1}}$ via BERT and then perform average pooling to obtain $h_{x_l y_{l-1}}$ (Cer et al., 2018):",
"cite_spans": [
{
"start": 190,
"end": 212,
"text": "(Devlin et al., 2019)",
"ref_id": null
},
{
"start": 368,
"end": 386,
"text": "(Cer et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "4.2"
},
{
"text": "$H_{x_l y_{l-1}} = \\mathrm{BERT}(x_l, y_{l-1})$ (1); $h_{x_l y_{l-1}} = \\mathrm{avgpool}(H_{x_l y_{l-1}}) \\in \\mathbb{R}^d$ (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "4.2"
},
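The average pooling of Eq. (2) simply averages the token-level hidden states over the sequence axis; a minimal numpy sketch with toy values standing in for real BERT outputs:

```python
import numpy as np

# Toy token-level hidden states H of shape (seq_len, d), standing in for
# the BERT output H_{x_l y_{l-1}}; a real hidden size would be d = 768.
H = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Average pooling over the sequence axis yields one fixed-size vector h.
h = H.mean(axis=0)
```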
{
"text": "where $d$ is the hidden size. We insert [SEP] between $x_l$ and $y_{l-1}$, and insert [CLS] and [SEP] at the beginning and the end of the entire input string, respectively. In the case of knowledge types, we insert [CLS] and [SEP] at the beginning and the end of the input string, respectively, and obtain $\\{h_{k_{t,m}}\\}_{m=1}^{M}$ by feeding the result to BERT in the same way. For the knowledge contents, we input the knowledge type in addition to the knowledge content, following the method of Dinan et al. (2019) . We insert a new special token [KNOW SEP] between the knowledge type and the knowledge content, and further insert [CLS] and [SEP] at the beginning and the end of the input string, respectively. The resulting string is input to BERT to obtain $\\{h_{k_{c,n}}\\}_{n=1}^{N}$ likewise. We also compute the representations of the knowledge type history $\\{h_{\\hat{k}_t^i}\\}_{i=1}^{l-1}$ and of the knowledge content history $\\{h_{\\hat{k}_c^i}\\}_{i=1}^{l-1}$.",
"cite_spans": [
{
"start": 481,
"end": 500,
"text": "Dinan et al. (2019)",
"ref_id": "BIBREF3"
},
{
"start": 616,
"end": 621,
"text": "[CLS]",
"ref_id": null
},
{
"start": 626,
"end": 631,
"text": "[SEP]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": "4.2"
},
{
"text": "We encode the knowledge type history via a transformer encoder (Vaswani et al., 2017) . This transformer encoder (we call it the \"knowledge type encoder\") adds a positional embedding for each turn (turn embedding) to the input so that the model reflects in which turn each knowledge type was used (Meng et al., 2021) . We concatenate the last output of this encoder $h_{\\hat{k}_t^{l-1}}^{\\mathrm{trans}}$ with the hidden state of the dialogue context $h_{x_l y_{l-1}}$ as the query, and regard $\\{h_{k_{t,m}}\\}_{m=1}^{M}$ as the key. The attention over knowledge types $a_t \\in \\mathbb{R}^M$ is calculated as follows:",
"cite_spans": [
{
"start": 65,
"end": 87,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF21"
},
{
"start": 299,
"end": 318,
"text": "(Meng et al., 2021)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Selection Layer",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a_t = [a_{t,1}, \\ldots, a_{t,m}, \\ldots, a_{t,M}] = \\mathrm{softmax}(Q_t K_t); Q_t = \\mathrm{MLP}([h_{\\hat{k}_t^{l-1}}^{\\mathrm{trans}}; h_{x_l y_{l-1}}]); K_t = \\mathrm{MLP}([h_{k_{t,1}}, \\ldots, h_{k_{t,M}}]); [h_{\\hat{k}_t^{1}}^{\\mathrm{trans}}, \\ldots, h_{\\hat{k}_t^{l-1}}^{\\mathrm{trans}}] = \\mathrm{KTE}([h_{\\hat{k}_t^{1}}, \\ldots, h_{\\hat{k}_t^{l-1}}])",
"eq_num": "(3)"
}
],
"section": "Knowledge Selection Layer",
"sec_num": "4.3"
},
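The attention over knowledge types can be sketched with toy numpy tensors; the dimensions, random values, and the single linear layers standing in for the MLPs of Eq. (3) are assumptions for illustration only.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a vector
    e = np.exp(z - z.max())
    return e / e.sum()

d, M = 4, 3  # toy hidden size and number of knowledge types
rng = np.random.default_rng(0)

h_type_hist = rng.normal(size=d)   # last output of the knowledge type encoder
h_context = rng.normal(size=d)     # pooled dialogue-context representation
h_types = rng.normal(size=(M, d))  # one representation per knowledge type

W_q = rng.normal(size=(2 * d, d))  # stand-in for the query MLP
W_k = rng.normal(size=(d, d))      # stand-in for the key MLP

query = np.concatenate([h_type_hist, h_context]) @ W_q  # query from [history; context]
keys = h_types @ W_k                                    # keys from type representations

a_t = softmax(keys @ query)  # attention distribution over the M knowledge types
```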
{
"text": "where $\\mathrm{MLP}(\\cdot)$ is a multilayer perceptron, KTE is the knowledge type encoder, and $[\\cdot\\,; \\cdot]$ is the vector concatenation operation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Selection Layer",
"sec_num": "4.3"
},
{
"text": "We compute the weighted hidden states of the knowledge contents $\\{h_{k_{c,n}}^{w}\\}_{n=1}^{N}$ based on the calculated attention $a_t$. These weighted hidden states are used to calculate the attention over the knowledge contents. Suppose the number of knowledge contents belonging to the $m$-th knowledge type is $N_m$, and the same weight $a_{t,m} \\in a_t$ is given to all of them. In that case, the $M$-dimensional $a_t$ can be extended to the $N$-dimensional $a_t \\in \\mathbb{R}^N$ as follows, because $N_m$ satisfies $\\sum_{m=1}^{M} N_m = N$:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Selection Layer",
"sec_num": "4.3"
},
{
"text": "$a_t = [a_{t,1}, \\ldots, \\underbrace{a_{t,m}, \\ldots, a_{t,m}}_{N_m}, \\ldots, a_{t,M}]$ (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Selection Layer",
"sec_num": "4.3"
},
{
"text": "Using $a_t$, the weighted hidden states of the knowledge contents $\\{h_{k_{c,n}}^{w}\\}_{n=1}^{N}$ can be obtained as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Selection Layer",
"sec_num": "4.3"
},
{
"text": "$[h_{k_{c,1}}^{w}, \\ldots, h_{k_{c,N}}^{w}] = a_t [h_{k_{c,1}}, \\ldots, h_{k_{c,N}}]$ (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Selection Layer",
"sec_num": "4.3"
},
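Extending the type-level attention to the content level (Eqs. (4) and (5)) amounts to repeating each type weight over that type's contents and scaling the content representations; a small numpy sketch with assumed sizes:

```python
import numpy as np

a_t = np.array([0.5, 0.3, 0.2])  # attention over M = 3 knowledge types
N_m = [1, 2, 3]                  # contents per type; N = sum(N_m) = 6

# Eq. (4): repeat each type weight N_m times to get an N-dimensional vector.
a_t_ext = np.repeat(a_t, N_m)

# Eq. (5): scale each content representation by its type's weight.
h_contents = np.ones((6, 4))                # toy (N x d) content representations
h_weighted = a_t_ext[:, None] * h_contents  # weighted hidden states
```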
{
"text": "The knowledge content history is encoded by a transformer encoder as well. This transformer encoder, which we call the \"knowledge content encoder\", has the same settings as the knowledge type encoder, but the two encoders do not share any parameters. We concatenate the last output of the encoder $h_{\\hat{k}_c^{l-1}}^{\\mathrm{trans}}$ with $h_{x_l y_{l-1}}$ as the query, and regard the weighted hidden states of the knowledge contents $\\{h_{k_{c,n}}^{w}\\}_{n=1}^{N}$ as the key.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Selection Layer",
"sec_num": "4.3"
},
{
"text": "We can then calculate the attention over the knowledge contents $a_c \\in \\mathbb{R}^N$ as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Selection Layer",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a_c = \\mathrm{softmax}(Q_c K_c); Q_c = \\mathrm{MLP}([h_{\\hat{k}_c^{l-1}}^{\\mathrm{trans}}; h_{x_l y_{l-1}}]); K_c = \\mathrm{MLP}([h_{k_{c,1}}^{w}, \\ldots, h_{k_{c,N}}^{w}]); [h_{\\hat{k}_c^{1}}^{\\mathrm{trans}}, \\ldots, h_{\\hat{k}_c^{l-1}}^{\\mathrm{trans}}] = \\mathrm{KCE}([h_{\\hat{k}_c^{1}}, \\ldots, h_{\\hat{k}_c^{l-1}}])",
"eq_num": "(6)"
}
],
"section": "Knowledge Selection Layer",
"sec_num": "4.3"
},
{
"text": "where KCE is the knowledge content encoder. Finally, we select a knowledge content $\\hat{k}_c^l$ at turn $l$ from the probability distribution $a_c$.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Selection Layer",
"sec_num": "4.3"
},
{
"text": "At turn $l$, the dialogue context $x_l, y_{l-1}$ and the knowledge content $\\hat{k}_c^l$ selected by the knowledge selection layer are input to the transformer decoder to generate the response $y_l$. Specifically, we feed the concatenated embedding $H_{x_l y_{l-1} \\hat{k}_c^l} = [H_{x_l y_{l-1}}; H_{\\hat{k}_c^l}]$ to the decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Layer",
"sec_num": "4.4"
},
{
"text": "The word generation probability $p(y_j^l)$ over the vocabulary $V$ when the decoder generates the $j$-th word can be written as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Layer",
"sec_num": "4.4"
},
{
"text": "$p(y_j^l) = \\mathrm{softmax}(\\mathrm{MLP}(h_{\\mathrm{dec}}^{l,j})) \\in \\mathbb{R}^{1 \\times |V|}$; $h_{\\mathrm{dec}}^{l,j} = \\mathrm{TD}(H_{x_l y_{l-1} \\hat{k}_c^l}, \\mathrm{emb}(y_{<j}^l)) \\in \\mathbb{R}^{1 \\times d}$ (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Layer",
"sec_num": "4.4"
},
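The decoder input is the concatenation of the encoded dialogue context and the encoded selected knowledge. The sketch below uses toy tensors and a mean-pooled stand-in for the transformer decoder state, so the shapes, values, and the greedy pick are illustrative assumptions (the paper itself uses beam search).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

d, V = 4, 5  # toy hidden size and vocabulary size
rng = np.random.default_rng(1)

H_context = rng.normal(size=(3, d))    # encoded dialogue context tokens
H_knowledge = rng.normal(size=(2, d))  # encoded selected knowledge tokens

# Concatenate along the sequence axis to form the decoder input.
H_input = np.concatenate([H_context, H_knowledge], axis=0)

# Stand-in for the transformer decoder state at step j: mean over inputs.
h_dec = H_input.mean(axis=0)

# Eq. (7): project onto the vocabulary and normalize into a distribution.
W_out = rng.normal(size=(d, V))
p = softmax(h_dec @ W_out)
next_word = int(p.argmax())  # greedy choice for illustration only
```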
{
"text": "where TD is the transformer decoder, $y_{<j}^l$ are the words generated up to the $j$-th word, and $\\mathrm{emb}(y_{<j}^l)$ are the word embeddings of $y_{<j}^l$, which are initialized with the word embeddings of BERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Layer",
"sec_num": "4.4"
},
{
"text": "We use a copy mechanism (Gu et al., 2016; See et al., 2017) to make it easier to generate knowledge words, following the method used in Meng et al. (2021) .",
"cite_spans": [
{
"start": 22,
"end": 39,
"text": "(Gu et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 40,
"end": 57,
"text": "See et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 134,
"end": 152,
"text": "Meng et al. (2021)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Layer",
"sec_num": "4.4"
},
{
"text": "Similar to Dinan et al. (2019) , we combine the negative log-likelihood loss for the generated response, $L_{\\mathrm{nll}}$, with the cross-entropy loss for knowledge selection, $L_{\\mathrm{knowledge}}$, modulated by a weight $\\lambda$, which is a hyperparameter. The final loss function $L$ is:",
"cite_spans": [
{
"start": 11,
"end": 30,
"text": "Dinan et al. (2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Objective",
"sec_num": "4.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = (1 - \\lambda) L_{\\mathrm{nll}} + \\lambda L_{\\mathrm{knowledge}}",
"eq_num": "(8)"
}
],
"section": "Learning Objective",
"sec_num": "4.5"
},
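The interpolated objective of Eq. (8) is a one-liner; λ = 0.95 follows the hyperparameter reported later in the implementation details, and the toy loss values are purely illustrative.

```python
def combined_loss(l_nll, l_knowledge, lam=0.95):
    """Eq. (8): interpolate the generation (NLL) loss and the
    knowledge-selection cross-entropy loss with weight lam."""
    return (1 - lam) * l_nll + lam * l_knowledge

# With lam = 0.95, the knowledge-selection term dominates the objective:
loss = combined_loss(l_nll=2.0, l_knowledge=1.0)  # 0.05 * 2.0 + 0.95 * 1.0
```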
{
"text": "5 Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Objective",
"sec_num": "4.5"
},
{
"text": "We randomly split the dialogues into train (90%), validation (5%), and test (5%) sets. Input texts are truncated to a maximum length of 64 tokens for dialogue contexts and knowledge contents, and 5 tokens for knowledge types. In addition, a maximum of 20 turns of knowledge history can be input for both knowledge types and knowledge contents. Our dataset may associate multiple pieces of knowledge with a recommender's utterance, but we use only one of them in this study for simplicity. For an utterance with multiple knowledge items, we select as the correct knowledge the item whose word set has the highest Jaccard coefficient with the word set of the recommender's utterance. To input \"No knowledge,\" we use the special token [NO KNOW] in place of the knowledge type and content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "5.1"
},
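The Jaccard-based selection of a single gold knowledge item can be sketched as follows; the whitespace tokenization is a simplifying assumption (the actual data is Japanese and would require a morphological analyzer).

```python
def jaccard(a, b):
    """Jaccard coefficient between two word collections."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def select_gold_knowledge(utterance, knowledge_items):
    """Pick the knowledge item whose word set overlaps most with the
    utterance's word set (ties broken by list order)."""
    words = utterance.split()  # naive tokenization for illustration
    return max(knowledge_items, key=lambda k: jaccard(words, k.split()))

best = select_gold_knowledge(
    "the ape cg is stunning",
    ["released in 2011", "the ape cg is amazing"],
)
```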
{
"text": "We use an end-to-end Transformer Memory Network (TMN) (Dinan et al., 2019) as the baseline. This model encodes the dialogue context and each knowledge candidate separately and selects knowledge by calculating the dot-product attention between them. It also performs end-to-end response generation using the selected knowledge. For a fair comparison with our proposed model, we replace the original transformer encoder with a BERT encoder; we call this model TMN BERT. As a baseline that considers knowledge history, we add the knowledge content encoder to TMN BERT and concatenate its output with the hidden states of the dialogue context; we call this model TMN BERT+KH. Knowledge selection is made by calculating the attention between the knowledge candidates and the concatenated hidden states. Other conditions are the same as in TMN BERT.",
"cite_spans": [
{
"start": 54,
"end": 74,
"text": "(Dinan et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5.2"
},
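The dot-product knowledge selection in TMN can be sketched as follows. A minimal NumPy sketch, assuming the dialogue context and the knowledge candidates have already been encoded into fixed-size vectors (the encoders themselves are out of scope here):

```python
import numpy as np

def select_knowledge(context_vec, knowledge_vecs):
    """Score each knowledge candidate by its dot product with the dialogue
    context vector and return the index of the best-scoring candidate."""
    scores = knowledge_vecs @ context_vec  # shape: (num_candidates,)
    return int(np.argmax(scores))
```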
{
"text": "In addition, we use Random baseline that selects knowledge randomly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5.2"
},
{
"text": "We use the NICT BERT Japanese pre-trained model (with BPE) 4 as the encoder. This BERT is also used to initialize the word embedding in the transformer decoder. The transformer encoders for knowledge type and knowledge content, and the transformer decoder have the same architecture, consisting of 2 attention heads, 5 layers, and the size of the hidden layer is 768 and the filter size is 3072. We train the models for 100 epochs with a batch size of 512 and 0.1 gradient clipping. We do early stopping if no improvement of the validation loss is observed for five consecutive epochs. All models are learned with Adam optimizer (Kingma and Ba, 2015) with \u03b2 1 = 0.9, \u03b2 2 = 0.999 and an initial learning rate = 0.00005. We use an inverse square root learning rate scheduler with the first 1,000 steps allocated for warmup. In addition, we set the hyperparameter \u03bb to 0.95. At decoding, we use beam search with a beam of size 3. We add a restriction to prevent the same bigram from being generated multiple times.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "5.3"
},
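The learning-rate schedule described above can be sketched as follows. A minimal sketch of linear warmup followed by inverse-square-root decay, using the reported values (initial rate 0.00005, 1,000 warmup steps); the function name is illustrative:

```python
def inv_sqrt_lr(step, base_lr=5e-5, warmup_steps=1000):
    """Linear warmup to base_lr over warmup_steps, then decay the rate
    proportionally to 1/sqrt(step)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (warmup_steps ** 0.5) / (step ** 0.5)
```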
{
"text": "We evaluate the models with automatic evaluation metrics. For knowledge selection, we use accuracy (Acc). For response reproducibility, we measure BLEU tgt -4 (Papineni et al., 2002) , which is the 4-gram overlap between a generated response and a target response. We also use unigram F1 (F1) following the evaluation setting in Dinan et al. (2019) . Additionally, we use Jaccard and BLEU know -4 to evaluate whether the knowledge is reflected in the generated response. Jaccard is the Jaccard coefficient of the set of words in the generated response and the set of words in the selected knowledge content. BLEU know -4 is the BLEU-4 computed between the generated response and the selected knowledge content.",
"cite_spans": [
{
"start": 159,
"end": 182,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF14"
},
{
"start": 329,
"end": 348,
"text": "Dinan et al. (2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.4"
},
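The unigram F1 between a generated response and a target response can be sketched as follows. A minimal sketch over pre-tokenized word lists; the exact tokenization used in the paper (Juman++ segmentation) may differ:

```python
from collections import Counter

def unigram_f1(pred_tokens, gold_tokens):
    """Harmonic mean of unigram precision and recall, with token counts
    clipped by the multiset intersection of the two token lists."""
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```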
{
"text": "The results of knowledge selection are shown in Table 3. The results show that our proposed method outperformed the baselines. TMN BERT+KH, which adds a mechanism to consider knowledge history to the baseline TMN BERT, is almost the same as TMN BERT in Acc. our proposed method improves Acc, suggesting the importance of considering knowledge structurally. The results of response generation are also shown in Table 3 . The proposed method did not perform well in terms of reproducibility for target responses. However, this should not be a major problem because it is known that it is inappropriate to measure reproducibility in dialogue evaluation (Liu et al., 2016) . On the other hand, the proposed model performed the best for knowledge reflection. We believe this improvement is due to selecting knowledge more correctly according to the dialogue context and knowledge history. Table 4 shows an example of knowledge selection and response generation. TMN BERT, which does not consider knowledge history, selects the plot even though it is at the beginning of the dialogue. Moreover, the generated utterance does not reflect the selected knowledge. On the other hand, our proposed model introduces the movie title that has not yet been mentioned in this dialogue by considering the knowledge history.",
"cite_spans": [
{
"start": 650,
"end": 668,
"text": "(Liu et al., 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 410,
"end": 417,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 884,
"end": 891,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5.5"
},
{
"text": "As illustrated by the generated response of TMN BERT, the generated utterances may not reflect the selected knowledge or may contain words inconsis-tent with the selected knowledge. This problem is known as the hallucination problem (Roller et al., 2020; Shuster et al., 2021) , and we leave the solution to this problem as future work.",
"cite_spans": [
{
"start": 233,
"end": 254,
"text": "(Roller et al., 2020;",
"ref_id": null
},
{
"start": 255,
"end": 276,
"text": "Shuster et al., 2021)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "5.6"
},
{
"text": "We proposed JMRD, a hierarchically structured knowledge-based movie recommendation dialogue dataset. We also proposed an end-to-end dialogue system that utilizes the hierarchically structured knowledge of knowledge types and contents to perform knowledge selection and generate responses as a strong baseline for our dataset. The experimental results show that our model can select more appropriate knowledge than baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "As far as we know, this is the first Japanese dialogue dataset associated with external knowledge. We hope our dataset facilitates further research on movie recommendation dialogue based on structured external knowledge (especially in Japanese dialogue research).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In response generation, we can observe that the utterances do not reflect the knowledge in some cases, even when the knowledge is selected correctly. There is still much room for improvement in knowledge reflection, and we leave this as future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "http://www.eiren.org/toukei/index. html 2 https://movies.yahoo.co.jp/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://alaginrc.nict.go.jp/ nict-bert/index.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by NII CRIS collaborative research program operated by NII CRIS and LINE Corporation. This work was also supported by JST, CREST Grant Number JPMJCR20D2, Japan.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "7"
},
{
"text": ": Overview of the proposed model. In this figure, the model generates the response y 4 at time l = 4. Knowledge Cont Enc, Knowledge Type Enc, and Transformer Dec denote the knowledge content encoder, the knowledge type encoder, and the transformer decoder, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Layer",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning end-to-end goal-oriented dialog",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Y-Lan",
"middle": [],
"last": "Boureau",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2017,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In ICLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Universal sentence encoder for English",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sheng-Yi",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "Rhomni",
"middle": [],
"last": "St",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Guajardo-Cespedes",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Tar",
"suffix": ""
},
{
"first": "Ray",
"middle": [],
"last": "Strope",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kurzweil",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "169--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing: System Demonstrations, pages 169-174, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Wizard of wikipedia: Knowledge-powered conversational agents",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Kurt",
"middle": [],
"last": "Shuster",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In International Conference on Learning Representations.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Incorporating copying mechanism in sequence-to-sequence learning",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Victor",
"middle": [
"O",
"K"
],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1631--1640",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 1631-1640, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Recommendation as a communication game: Self-supervised bot-play for goal-oriented dialogue",
"authors": [
{
"first": "Dongyeop",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Anusha",
"middle": [],
"last": "Balakrishnan",
"suffix": ""
},
{
"first": "Pararth",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Crook",
"suffix": ""
},
{
"first": "Y-Lan",
"middle": [],
"last": "Boureau",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1951--1961",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dongyeop Kang, Anusha Balakrishnan, Pararth Shah, Paul Crook, Y-Lan Boureau, and Jason Weston. 2019. Recommendation as a communication game: Self-supervised bot-play for goal-oriented dialogue. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 1951- 1961, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederick",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederick P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Vincent Michalski, Laurent Charlin, and Chris Pal",
"authors": [
{
"first": "Raymond",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Samira",
"middle": [],
"last": "Kahou",
"suffix": ""
},
{
"first": "Hannes",
"middle": [],
"last": "Schulz",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18",
"volume": "",
"issue": "",
"pages": "9748--9758",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raymond Li, Samira Kahou, Hannes Schulz, Vin- cent Michalski, Laurent Charlin, and Chris Pal. 2018. Towards deep conversational recommenda- tions. In Proceedings of the 32nd International Con- ference on Neural Information Processing Systems, NIPS'18, pages 9748-9758, Red Hook, NY, USA. Curran Associates Inc.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation",
"authors": [
{
"first": "Chia-Wei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Iulian",
"middle": [],
"last": "Serban",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Noseworthy",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Charlin",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2122--2132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Nose- worthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2122-2132, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Towards conversational recommendation over multi-type dialogs",
"authors": [
{
"first": "Zeming",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zheng-Yu",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1036--1049",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeming Liu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, and Ting Liu. 2020. Towards con- versational recommendation over multi-type dialogs. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1036-1049, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Initiative-aware self-supervised learning for knowledge-grounded conversations",
"authors": [
{
"first": "Chuan",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Pengjie",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Zhumin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhaochun",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Tengxiao",
"middle": [],
"last": "Xi",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "De Rijke",
"suffix": ""
}
],
"year": 2021,
"venue": "SIGIR",
"volume": "",
"issue": "",
"pages": "522--532",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuan Meng, Pengjie Ren, Zhumin Chen, Zhaochun Ren, Tengxiao Xi, and Maarten de Rijke. 2021. Initiative-aware self-supervised learning for knowledge-grounded conversations. In SIGIR, pages 522-532.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "ParlAI: A dialog research software platform",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Fisch",
"suffix": ""
},
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "79--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A dialog research soft- ware platform. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 79-84, Copenhagen, Denmark. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Towards exploiting background knowledge for building conversation systems",
"authors": [
{
"first": "Nikita",
"middle": [],
"last": "Moghe",
"suffix": ""
},
{
"first": "Siddhartha",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Suman",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Mitesh",
"middle": [
"M"
],
"last": "Khapra",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2322--2332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikita Moghe, Siddhartha Arora, Suman Banerjee, and Mitesh M. Khapra. 2018. Towards exploiting back- ground knowledge for building conversation sys- tems. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2322-2332, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "OpenDialKG: Explainable conversational reasoning with attention-based walks over knowledge graphs",
"authors": [
{
"first": "Seungwhan",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "Pararth",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Anuj",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Subba",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "845--854",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seungwhan Moon, Pararth Shah, Anuj Kumar, and Ra- jen Subba. 2019. OpenDialKG: Explainable conver- sational reasoning with attention-based walks over knowledge graphs. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 845-854, Florence, Italy. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Recipes for building an open-domain chatbot",
"authors": [
{
"first": "Kurt",
"middle": [],
"last": "Shuster",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"Michael"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Y-Lan",
"middle": [],
"last": "Boureau",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kurt Shuster, Eric Michael Smith, Y-Lan Boureau, and Jason Weston. 2020. Recipes for building an open-domain chatbot. abs/2004.13637.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1073--1083",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073- 1083, Vancouver, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Retrieval augmentation reduces hallucination in conversation",
"authors": [
{
"first": "Kurt",
"middle": [],
"last": "Shuster",
"suffix": ""
},
{
"first": "Spencer",
"middle": [],
"last": "Poff",
"suffix": ""
},
{
"first": "Moya",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval aug- mentation reduces hallucination in conversation. abs/2104.07567.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "MovieChats: Chat like humans in a closed domain",
"authors": [
{
"first": "Hui",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Xiaoyu",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Zhou",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ernie",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "6605--6619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hui Su, Xiaoyu Shen, Zhou Xiao, Zheng Zhang, Ernie Chang, Cheng Zhang, Cheng Niu, and Jie Zhou. 2020. MovieChats: Chat like humans in a closed domain. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 6605-6619, Online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Design and structure of the Ju-man++ morphological analyzer toolkit",
"authors": [
{
"first": "Arseny",
"middle": [],
"last": "Tolmachev",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Natural Language Processing",
"volume": "27",
"issue": "1",
"pages": "89--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arseny Tolmachev, Daisuke Kawahara, and Sadao Kurohashi. 2020. Design and structure of the Ju- man++ morphological analyzer toolkit. Journal of Natural Language Processing, 27(1):89-132.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "NeurIPS",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS, pages 5998-6008. Curran As- sociates Inc.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Proactive human-machine conversation with explicit conversation goal",
"authors": [
{
"first": "Wenquan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhen",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Xiangyang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Xiyuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Rongzhong",
"middle": [],
"last": "Lian",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3794--3804",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenquan Wu, Zhen Guo, Xiangyang Zhou, Hua Wu, Xiyuan Zhang, Rongzhong Lian, and Haifeng Wang. 2019. Proactive human-machine conversation with explicit conversation goal. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 3794-3804, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "KdConv: A Chinese multi-domain dialogue dataset towards multi-turn knowledge-driven conversation",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Chujie",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Kaili",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7098--7108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhou, Chujie Zheng, Kaili Huang, Minlie Huang, and Xiaoyan Zhu. 2020. KdConv: A Chinese multi-domain dialogue dataset towards multi-turn knowledge-driven conversation. In Proceedings of the 58th Annual Meeting of the Association for Com- putational Linguistics, pages 7098-7108, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A dataset for document grounded conversations",
"authors": [
{
"first": "Kangyan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Shrimai",
"middle": [],
"last": "Prabhumoye",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "708--713",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kangyan Zhou, Shrimai Prabhumoye, and Alan W Black. 2018. A dataset for document grounded con- versations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 708-713, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "An example of JMRD dataset. The underlined parts of the external knowledge indicate the knowledge items used in the dialogue."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Distribution of external knowledge used in each dialogue turn of the recommender. The information up to turn 12 is shown here."
},
"TABREF0": {
"content": "<table><tr><td colspan=\"3\"># dialogues</td><td/><td/><td/><td/><td/><td>5,166</td></tr><tr><td colspan=\"4\"># utterances (R)</td><td/><td/><td/><td/><td>57,714</td></tr><tr><td colspan=\"4\"># utterances (S)</td><td/><td/><td/><td/><td>59,160</td></tr><tr><td colspan=\"2\"># movies</td><td/><td/><td/><td/><td/><td/><td>261</td></tr><tr><td colspan=\"2\"># workers</td><td/><td/><td/><td/><td/><td/><td>322</td></tr><tr><td colspan=\"6\">Avg. # turns per dialogue</td><td/><td/><td>22.6</td></tr><tr><td colspan=\"7\">Avg. # words per utterance (R)</td><td/><td>23.8</td></tr><tr><td colspan=\"7\">Avg. # words per utterance (S)</td><td/><td>6.9</td></tr><tr><td colspan=\"8\">Avg. # knowledge used per utterance</td><td>1.3</td></tr><tr><td colspan=\"8\">Avg. # knowledge used per dialogue</td><td>10.8</td></tr><tr><td colspan=\"9\">Table 1: Statistics of JMRD. R and S denote recom-mender and seeker respectively. We use Juman++ (Tol-machev et al., 2020) for word segmentation.</td></tr><tr><td colspan=\"2\"># utterances</td><td/><td/><td/><td/><td/><td/></tr><tr><td>30000</td><td/><td/><td/><td/><td/><td/><td/><td>25627</td></tr><tr><td>25000</td><td/><td/><td/><td/><td/><td/><td/><td>(43.0%)</td></tr><tr><td>5000 10000 15000 20000</td><td>6674 (11.2%)</td><td>4047 (6.8%)</td><td colspan=\"2\">2762 (4.6%)</td><td>(9.7%) 5806</td><td>(6.4%) 3835</td><td>(18.2%) 10841</td><td>(20.8%) 12418</td></tr><tr><td>0</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>Title</td><td>Released</td><td colspan=\"2\">Director</td><td>Cast</td><td>Genre</td><td>Review</td><td>Plot</td><td>No</td></tr><tr><td/><td/><td>year</td><td/><td/><td/><td/><td/><td>knowledge</td></tr><tr><td colspan=\"4\">10% Figure 2: 0% 1 2 3 4 5 6 7 8 9 10 11 12 # dialogue turns of the recommender 20% 30%</td><td>40%</td><td>50%</td><td>60%</td><td>70%</td><td>80%</td><td>90%</td><td>100%</td></tr><tr><td/><td colspan=\"2\">No 
knowledge</td><td>Title</td><td>Year</td><td>Director</td><td>Cast</td><td>Genre</td><td>Review</td><td>Plot</td></tr></table>",
"text": "Distribution of external knowledge used.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF2": {
"content": "<table/>",
"text": "Results of the questionnaire.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF4": {
"content": "<table><tr><td/><td>knowledge selection</td><td colspan=\"2\">response reproducibility</td><td colspan=\"2\">knowledge reflection</td></tr><tr><td/><td>Acc</td><td>F1</td><td>BLEU tgt -4</td><td colspan=\"2\">Jaccard BLEU know -4</td></tr><tr><td>Random</td><td colspan=\"2\">4.18 (0.15) 24.05 (0.26)</td><td>4.63 (0.16)</td><td>5.87 (0.18)</td><td>0.47 (0.07)</td></tr><tr><td>TMN BERT</td><td>48.81 (0</td><td/><td/><td/></tr><tr><td/><td/><td/><td/><td colspan=\"2\">On the other hand,</td></tr></table>",
"text": ".25) 42.97 (0.16) 21.03 (0.70) 38.36 (0.81) 24.94 (1.36) TMN BERT+KH 48.66 (0.06) 42.74 (0.46) 20.68 (0.56) 38.23 (0.94) 25.08 (1.29) Ours 49.72 (0.44) 42.92 (0.71) 20.78 (0.69) 39.35 (1.41) 25.88 (1.35)",
"type_str": "table",
"num": null,
"html": null
},
"TABREF5": {
"content": "<table><tr><td/><td>Dialogue</td><td>Knowledge</td></tr><tr><td colspan=\"2\">Recommender 1 : Nice to meet you. Seeker 1 : Hello. Recommender 2 : I am pleased to meet you. Seeker 2 : What movies do you recommend?</td><td>No knowledge -No knowledge -</td></tr><tr><td>TMN BERT</td><td>I will introduce a movie called Do You Like Disney Movies?</td><td>Danny Ocean immediately breaks his parole rules (no interstate movement) and reunites with his partner Rusty Ryan in Los Ange-les. He confides in Ryan about a new theft scheme he had hatched while in prison. (Plot)</td></tr><tr><td colspan=\"2\">Ours: Today I will introduce Ocean's Eleven. Gold: How about Ocean's Eleven?</td><td>Ocean's Eleven (Title) Ocean's Eleven (Title)</td></tr></table>",
"text": "The evaluation results. Scores are the mean of three runs of the experiment with different random seeds, and standard deviations are shown in parentheses. The bold scores indicate the best ones over models.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF6": {
"content": "<table/>",
"text": "Examples of generated responses by our model and the baseline model. Subscript numbers indicate the number of turns in the dialogue. The knowledge type is indicated in parentheses in the knowledge column.",
"type_str": "table",
"num": null,
"html": null
}
}
}
}