Title: NaturalConv: A Chinese Dialogue Dataset Towards Multi-turn Topic-driven Conversation

URL Source: https://arxiv.org/html/2103.02548

Published Time: Fri, 08 Nov 2024 01:13:45 GMT

Xiaoyang Wang¹, Chen Li¹, Jianqiao Zhao, Dong Yu

¹ Xiaoyang Wang and Chen Li contributed equally to this work.

###### Abstract

In this paper, we propose a Chinese multi-turn topic-driven conversation dataset, NaturalConv, which allows participants to chat about anything they want as long as some element of the topic is mentioned and topic shifts are smooth. Our corpus contains 19.9K conversations from six domains and 400K utterances, with an average of 20.1 turns per conversation. These conversations contain in-depth discussions of related topics or natural transitions between multiple topics; we believe either is normal in human conversation. To facilitate research on this corpus, we provide results for several benchmark models. Comparative results show that on this dataset our current models do not gain significantly from the introduction of background knowledge or the topic. The proposed dataset should therefore be a good benchmark for further research evaluating the validity and naturalness of multi-turn conversation systems. Our dataset is available at https://ailab.tencent.com/ailab/nlp/dialogue/#datasets (also at https://huggingface.co/datasets/xywang1/NaturalConv).

Introduction
------------

There is resurgent interest in developing open-domain dialogue systems, owing to the availability of large amounts of conversational data and recent progress on neural approaches (Huang, Zhu, and Gao [2019](https://arxiv.org/html/2103.02548v3#bib.bib9)). However, building open-domain dialogue systems that can converse on various topics like humans remains extremely challenging, and most current open-domain dialogue systems are only good at generating generic responses that carry little meaningful information (Gao et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib6)).

Therefore, increasing research effort is being devoted to incorporating various kinds of information to improve the quality of open-domain conversation. This information includes, but is not limited to, personality (Qian et al. [2018](https://arxiv.org/html/2103.02548v3#bib.bib14)), common sense (Zhou et al. [2018b](https://arxiv.org/html/2103.02548v3#bib.bib23)), reasoning (Zhou, Huang, and Zhu [2018](https://arxiv.org/html/2103.02548v3#bib.bib26)), emotion (Zhou et al. [2018a](https://arxiv.org/html/2103.02548v3#bib.bib22)), and extra knowledge in the form of knowledge graphs (Moon et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib13); Wu et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib20); Zhou et al. [2020](https://arxiv.org/html/2103.02548v3#bib.bib24)). In particular, a variety of knowledge-grounded dialogue corpora (Zhu et al. [2017](https://arxiv.org/html/2103.02548v3#bib.bib27); Dinan et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib5); Liu et al. [2018](https://arxiv.org/html/2103.02548v3#bib.bib11); Moghe et al. [2018](https://arxiv.org/html/2103.02548v3#bib.bib12); Zhou, Prabhumoye, and Black [2018](https://arxiv.org/html/2103.02548v3#bib.bib25); Moon et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib13); Qin et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib15); Tuan, Chen, and Lee [2019](https://arxiv.org/html/2103.02548v3#bib.bib18); Wu et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib20); Zhou et al. [2020](https://arxiv.org/html/2103.02548v3#bib.bib24)) have been released as attempts to generate informative responses in topic-related dialogue. Knowledge/topic-grounded conversation is a new type of conversation. On one hand, unlike open-domain dialogue, it involves specific topics that require extra knowledge during response generation. On the other hand, it also contains various content only indirectly related to the topic, such as chitchat, jokes, and personal experiences.

Therefore, we believe topic-grounded conversations of this kind are closer to human conversation than open-domain dialogue in terms of naturalness and popularity. However, we find two common drawbacks in currently available grounded conversation corpora. First, almost all of the mentioned work requires that participants chat only within the given topic, and assumes participants are familiar with the topic from reading the provided document or knowledge graph. In real life, however, people can easily extend a topic they are very familiar with, and can just as easily shift to other topics when their partner starts an unfamiliar one. Second, most of the mentioned work encourages annotators to talk about the topic directly from the start of the conversation; (Moghe et al. [2018](https://arxiv.org/html/2103.02548v3#bib.bib12)) even explicitly forbids chitchat during annotation. The reality is quite the opposite: few daily conversations step directly into a topic from the start, and people usually exchange greetings before any formal or informal talk.

To address these problems, we propose NaturalConv, a Chinese dialogue dataset for multi-turn topic-driven conversation with scenarios. It is well suited to modeling topic interactions in multi-turn, natural, human-like dialogues. Like previous corpora, the proposed dataset is topic-grounded and collected from annotators who converse about a given topic presented as a news article. The key differences are the following. First, the conversation does not have to be only about the content of the article if one or both participants are not interested in that topic; participants can talk about anything they want, as long as some information from the news article is mentioned and the topic transitions are natural. Second, the two participants must assume a scenario for their conversation; annotators conduct the dialogue task like role play and may set the conversation in any scenario, as long as it follows normal logic. Third, we allow chitchat and greetings in our conversations.

Table [1](https://arxiv.org/html/2103.02548v3#Sx1.T1) shows an example. At the top is the news article, which both participants can access. From the dialogue, we can infer that it happens between two students before the first class in the morning. Only 2 of the 20 utterances (A-4, A-5) explicitly mention specific information from the news. The other utterances include chitchat (A-1, B-1), queries/responses about the scenario (A-9, B-9, A-10, B-10), and personal experiences related to the topic (A-7, A-8, B-8). Yet the whole dialogue is more natural and human-like than most previous multi-turn dialogues, which read more like QA-style information exchange.

ๅŒ—ไบฌๆ—ถ้—ดไปŠๅคฉๅ‡Œๆ™จ๏ผŒ่ท็”ฒ็ฌฌ4่ฝฎ่ฟ›่กŒไบ†ไธคๅœบ่กฅ่ต›ใ€‚้˜ฟ่ดพๅ…‹ๆ–ฏๅ’ŒๅŸƒๅ› ้œๆธฉๅ‡ๅœจไธปๅœบๅ–ๅพ—ไบ†่ƒœๅˆฉ๏ผŒไธค้˜Ÿ7่ฝฎๅŽๅŒ็งฏ17ๅˆ†๏ผŒ้˜ฟ่ดพๅ…‹ๆ–ฏไปฅ6ไธชๅ‡€่ƒœ็ƒ็š„ไผ˜ๅŠฟ้ข†่ท‘็งฏๅˆ†ๆฆœใ€‚ 0็‚น30ๅˆ†๏ผŒๅŸƒๅ› ้œๆธฉไธŽๆ ผ็ฝ—ๅฎๆ น็š„ๆฏ”่ต›ๅผ€ๆˆ˜๏ผŒๅŸƒๅ› ้œๆธฉๆœ€็ปˆ3-1ไธปๅœบ่Žท่ƒœใ€‚2็‚น45ๅˆ†๏ผŒ้˜ฟ่ดพๅ…‹ๆ–ฏไธปๅœบไธŽ็ฆๅ›พ็บณ้”กๅก”ๅพทไน‹ๆˆ˜ๅผ€็ƒใ€‚็”ฑไบŽๅŸƒๅ› ้œๆธฉๅทฒ็ปๅ…ˆ่Žท่ƒœไบ†๏ผŒ้˜ฟ่ดพๅ…‹ๆ–ฏๅฟ…้กป่Žท่ƒœๆ‰่ƒฝๅœจ็งฏๅˆ†ๆฆœไธŠๅ’ฌไฝๅฏนๆ–นใ€‚ ๅœจๆ•ดไธชไธŠๅŠๅœบ๏ผŒ้˜ฟ่ดพๅ…‹ๆ–ฏๅพ—ๅŠฟไธๅพ—ๅˆ†๏ผŒๅŒๆ–น0-0ไบ’ไบค็™ฝๅทใ€‚ๅœจไธ‹ๅŠๅœบไธญ๏ผŒ้˜ฟ่ดพๅ…‹ๆ–ฏ็ช็„ถ่ฟŽๆฅไบ†ๅคง็ˆ†ๅ‘ใ€‚ๅœจ็Ÿญ็Ÿญ33ๅˆ†้’Ÿๅ†…๏ผŒ้˜ฟ่ดพๅ…‹ๆ–ฏ็–ฏ็‹‚ๆ‰“่ฟ›5็ƒ๏ผŒๅนณๅ‡ๆฏ6ๅˆ†้’Ÿๅฐฑ่ƒฝๅ–ๅพ—1ไธช่ฟ›็ƒใ€‚ ๅœจ็ฌฌ50ๅˆ†้’Ÿๆ—ถ๏ผŒๆ–ฐๆดๆ™ฎ็ฝ—ๆข…ๆ–ฏไธบ้˜ฟ่ดพๅ…‹ๆ–ฏๆ‰“็ ดๅƒตๅฑ€ใ€‚ๅก”่ฟชๅฅ‡ๅทฆไพง้€ๅ‡บๆจชไผ ๏ผŒๆ™ฎ็ฝ—ๆข…ๆ–ฏๅŽ็‚นๆŽจๅฐ„็ ด้—จใ€‚53ๅˆ†้’Ÿไบจ็‰นๆ‹‰ๅฐ”ๅคด็ƒ่กฅๅฐ„๏ผŒๅ†…้›ทๆ–ฏๅœจ้—จ็บฟๅ‰ๅคด็ƒๆŽฅๅŠ›็ ด้—จใ€‚ 68ๅˆ†้’Ÿๆ—ถ๏ผŒๆ™ฎ็ฝ—ๆข…ๆ–ฏ่ฟ‘่ท็ฆป่กฅๅฐ„ๆข…ๅผ€ไบŒๅบฆใ€‚่ฟ™ๅ27ๅฒ็š„ๅ‰ๅœบๅคš้ขๆ‰‹๏ผŒ่ท‘ๅˆฐๅœบ่พนๆฅไบ†ไธ€็•ชๅฐฌ่ˆžใ€‚77ๅˆ†้’Ÿๆ—ถ้˜ฟ่ดพๅ…‹ๆ–ฏๆ”ถ่Žท็ฌฌ4็ƒ๏ผŒๅฎข้˜ŸๅŽๅซๅ“ˆ้‡Œๆ–ฏๅœจ้˜ฒไผ ไธญๆ—ถไผธ่…ฟๅฐ†็ƒไธ€ๆ…๏ผŒ็ป“ๆžœ็šฎ็ƒๆฐๅฅฝ่ถŠ่ฟ‡้—จๅฐ†ๆปšๅ…ฅ็ฝ‘็ชใ€‚ ๅœจ็ฌฌ83ๅˆ†้’Ÿๆ—ถ๏ผŒๆ™ฎ็ฝ—ๆข…ๆ–ฏไธŠๆผ”ไบ†ๅธฝๅญๆˆๆณ•๏ผŒๆฏ”ๅˆ†ไนŸๆœ€็ปˆ่ขซๅฎšๆ ผไธบ5-0ใ€‚ๅœจๆŽฅๅˆฐๅก”่ฟชๅฅ‡็›ดไผ ๅŽ๏ผŒๆ™ฎ็ฝ—ๆข…ๆ–ฏ็ฆๅŒบๅทฆไพงๅ่ถŠไฝๆˆๅŠŸ๏ผŒไป–็š„ๅ•ๅˆ€ไฝŽๅฐ„ไปŽ้—จๅฐ†่ฃ†ไธ‹ๅ…ฅ็ฝ‘ใ€‚ๆ™ฎ็ฝ—ๆข…ๆ–ฏ่ฟ™ๆฌก็š„ๅบ†็ฅๅŠจไฝœๆ˜ฏ็ง€ๅ‡บไธ‰ๆ นๆ‰‹ๆŒ‡๏ผŒไธ่ฟ‡ไป–ๆ‰‹ๆŒ‡ไปŽไธŠๅˆฐไธ‹ๆŠน่ฟ‡้ข้ƒจๆ—ถ็š„ๅŠจไฝœ๏ผŒๅพˆๆœ‰็‚นๅƒๆ˜ฏๅœจๆ“ฆ้ผปๆถ•ใ€‚

(In the early morning, Beijing time, the Dutch league played two rescheduled matches of the fourth round. Ajax and Eindhoven both won at home; the two teams are level on 17 points after 7 rounds, with Ajax leading the table on a goal-difference advantage of six. At 0:30, Eindhoven kicked off against Groningen and won 3-1 at home. At 2:45, Ajax kicked off against Fortuna Sittard at home. Since Eindhoven had already won, Ajax had to win to keep pace at the top of the table. The first half ended goalless at 0-0. In the second half, Ajax suddenly erupted: in a span of just 33 minutes, Ajax scored five goals, averaging one goal every 6 minutes. In the 50th minute, Promes broke the deadlock for Ajax: Tadic sent a cross from the left, and Promes scored from the far post. In the 53rd minute, Huntelaar's header was followed up by Neres, who headed the ball into the net in front of the goal line. In the 68th minute, Promes scored his second with a close-range rebound. The 27-year-old versatile forward ran to the sideline and performed an awkward dance. In the 77th minute, Ajax got their fourth goal: away-team defender Harris stuck out a leg to poke the ball while defending a cross, and it slipped past the goalkeeper and rolled into the net. In the 83rd minute, Promes completed a hat-trick, fixing the final score at 5-0. After receiving Tadic's through pass, Promes beat the offside trap on the left side of the penalty area, and his one-on-one low shot went into the net between the goalkeeper's legs. Promes celebrated by showing three fingers, but as his fingers brushed down his face the gesture looked a bit like wiping his nose.)

| Turn | Content of Dialogue | Description |
| --- | --- | --- |
| A-1 | ๅ—จ๏ผŒไฝ ๆฅ็š„ๆŒบๆ—ฉๅ•Šใ€‚ (Hi, you come so early.) | chitchat |
| B-1 | ๆ˜ฏๅ•Š๏ผŒไฝ ๆ€Žไนˆๆฅๅพ—่ฟ™ไนˆๆ™š๏ผŸ (Yes, why do you come so late?) | chitchat |
| A-2 | ๆ˜จๆ™šๆˆ‘็œ‹ไบ†็ƒ่ต›๏ผŒๆ‰€ไปฅไปŠๆ—ฉ่ตทๆ™šไบ†๏ผŒไนŸๆฒกๅƒ้ฅญใ€‚ (I watched a sports game last night, so I woke up late this morning, and I have not had my breakfast.) | chitchat; starts to introduce the soccer topic |
| B-2 | ็Žฐๅœจ่ฟ™ไธช็‚น้ฃŸๅ ‚ๅบ”่ฏฅๆœ‰้ฅญ๏ผŒไฝ ็œ‹ๅฏไป€ไนˆ็ƒ่ต›ๅ•Š๏ผŸ็ฏฎ็ƒๅ—๏ผŸ (Oh, the cafeteria should still be open now. Which game did you watch? Basketball?) | |
| A-3 | ไธๆ˜ฏ๏ผŒ่ถณ็ƒใ€‚ (No, a soccer game.) | introduces general information from the news |
| B-3 | ๆ€ชไธๅพ—๏ผŒ่ถณ็ƒๆ—ถ้—ด้•ฟใ€‚ (I see. Soccer games usually take longer than basketball games.) | |
| A-4 | ไฝ ็Ÿฅ้“ไนˆ๏ผŒๆฏๆฌก้ƒฝๆ˜ฏๆ™ฎ็ฝ—ๆข…ๆ–ฏ่ฟ›็ƒใ€‚ (You know, Promes scores every time.) | provides specific information |
| B-4 | ่ฟ™ไธชๆˆ‘ๅˆšๆ‰ไนŸ็œ‹ไบ†ๆ–ฐ้—ปไบ†๏ผŒไป–ๅฅฝๆœ‰ๅฎžๅŠ›ๅ•Šใ€‚ (Oh, I also read the news of that game. Yes, he is very strong.) | |
| A-5 | ๆ˜ฏๅ•Š๏ผŒๅฐคๅ…ถๆ˜ฏไป–้‚ฃไธชๅธฝๅญๆˆๆณ•๏ผŒ่ฎฉๆˆ‘็œ‹็š„ๅคชๆƒŠๅฟƒๅŠจ้ญ„ไบ†ใ€‚ (Yes, especially his hat-trick last night. I was so excited.) | provides specific information |
| B-5 | ๆˆ‘ไธ€ๅŒๅญฆๅœจ็พค้‡Œ่ฏดไบ†๏ผŒๆฏๆฌก่Šๅคฉ้ƒฝ็ฆปไธๅผ€ไป–๏ผŒๅฏ่งไป–็š„ๅฎžๅŠ›ๆœ‰ๅคšๅผบๅคงใ€‚ (One of my classmates talks about Promes all the time as well, which shows how strong he is.) | the key point of this utterance is not related to the news |
| A-6 | ๆ˜ฏๅ•Š๏ผŒ็œ‹ๆฅไฝ ้‚ฃไธชๅŒๅญฆๅ’Œๆˆ‘ๆ˜ฏไธ€ๆ ท็š„ๆƒณๆณ•ใ€‚ (Yes, your classmate thinks the same way I do.) | from this utterance to the end, nothing relates to the news, but the dialogue stays on the soccer topic |
| B-6 | ๆˆ‘ๅฅฝไธๅฎนๆ˜“ๆ‘†่„ฑไป–็š„่ฏ้ข˜๏ผŒไฝ ๅˆๆฅไธ€ไธช่ฏดๅ‡บไป–็š„ๅๅญ—ใ€‚ (I had a hard time getting off his topic, and now you mention his name again.) | |
| A-7 | ๅ“ˆๅ“ˆ๏ผŒไฝ ไธๆ‡‚ๆˆ‘ไปฌๅฏน่ถณ็ƒๆœ‰ๅคš็ƒญ็ˆฑใ€‚ (Haha, you don't understand how much we love soccer.) | |
| B-7 | ๆˆ‘็Ÿฅ้“ไฝ ็ƒญ็ˆฑ๏ผŒๆˆ‘่ฟ˜่ฎฐๅพ—ไฝ ๅ‚ๅŠ ๅˆไธญๆฏ”่ต›่ฟ˜ๆ‹ฟๅˆฐๅ† ๅ†›ๅ‘ขใ€‚ไฝ ๅŠŸไธๅฏๆฒกๅ•Šใ€‚ (I know you love it; I still remember you won the junior high school competition. You played a very important role.) | |
| A-8 | ๅ“ˆๅ“ˆ๏ผŒ่ฟ˜ๆ˜ฏไฝ ่ƒฝ่ฎฐๅพ—ๆˆ‘ๅฝ“ๆ—ถ็š„่พ‰็…Œใ€‚ (Haha, you still remember my glory days.) | |
| B-8 | ๆฒกๅŠžๆณ•๏ผŒๅ’ฑไฟฉไปŽๅฐไธ€่ตท้•ฟๅคง็š„๏ผŒๅฝผๆญคๅคชไบ†่งฃๅฝผๆญคไบ†ใ€‚ (We grew up together, and we know each other too well.) | |
| A-9 | ๅ—ฏ๏ผŒ่€ๅธˆๆฅไบ†ใ€‚ (Sure. The teacher is coming.) | |
| B-9 | ๅฟซๆ‰“ๅผ€่ฏพๆœฌ๏ผŒ่€ๅธˆ่ฆๆฃ€ๆŸฅใ€‚ (Open your book. He said he would check our work this time.) | |
| A-10 | ๅ—ฏๅ—ฏ๏ผŒไธ‹่ฏพๅ†่Šใ€‚ (OK. Let's talk after class.) | |
| B-10 | ๅ—ฏใ€‚ (Sure.) | |

Table 1: One example from our NaturalConv dataset.

Given the distinctive properties described above, we also find it is not trivial to incorporate the document/knowledge into dialogue generation. The methods of incorporating the document/knowledge discussed in the Methods section do not bring significant performance gains on our dataset. In summary, this paper makes the following contributions:

* We collect a new dataset, NaturalConv, for topic-driven conversation generation in Chinese. It is much closer to human-like conversation, with a full and natural setting that includes scenario assumption, free topic extension, greetings, etc. It contains about 400K utterances and 19.9K dialogues in multiple domains (including but not limited to sports, entertainment, and technology). The average number of turns is 20, remarkably higher than in other corpora.
* NaturalConv provides a benchmark for evaluating the ability to generate conversations in a natural setting. The corpus can empower future research not only on document-grounded conversation generation, but also on learning conversation styles and strategies from different scenarios.
* We conduct extensive experiments on this corpus to facilitate future research. Results show that it remains very challenging to exploit document knowledge for dialogue generation on our dataset, and that further research is needed before systems can handle such natural and vivid conversation.

Related Work
------------

As more dialogue data and larger computational resources have become available, neural open-domain conversation generation has advanced considerably (Adiwardana et al. [2020](https://arxiv.org/html/2103.02548v3#bib.bib1); Roller et al. [2020](https://arxiv.org/html/2103.02548v3#bib.bib16)). However, most neural response generation models developed for open-domain dialogue systems are not grounded in the real world, which prevents them from conversing effectively about anything meaningful. Knowledge grounding is crucial for a system to provide substantive responses; without it, systems tend toward bland and repetitive replies.

To accelerate research on knowledge-grounded conversation, several knowledge-grounded corpora have been proposed. Some (Ghazvininejad et al. [2018](https://arxiv.org/html/2103.02548v3#bib.bib7); Liu et al. [2018](https://arxiv.org/html/2103.02548v3#bib.bib11); Tuan, Chen, and Lee [2019](https://arxiv.org/html/2103.02548v3#bib.bib18); Qin et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib15)) obtain the knowledge by automatic methods such as NER and string matching, but more (Zhou, Prabhumoye, and Black [2018](https://arxiv.org/html/2103.02548v3#bib.bib25); Dinan et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib5); Gopalakrishnan et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib8); Moon et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib13); Wu et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib20); Zhou et al. [2020](https://arxiv.org/html/2103.02548v3#bib.bib24)) collect knowledge from annotators during annotation.

There are also differences among these corpora. In terms of whether both participants or only one can access the knowledge, (Dinan et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib5)) assumes one annotator is an expert with access to Wikipedia, while the other is an apprentice who is seeking information and knows nothing about the topic. In contrast, (Moon et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib13); Wu et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib20); Zhou et al. [2020](https://arxiv.org/html/2103.02548v3#bib.bib24)) allow all annotators to access the knowledge.

The knowledge in (Zhou, Prabhumoye, and Black [2018](https://arxiv.org/html/2103.02548v3#bib.bib25); Dinan et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib5); Gopalakrishnan et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib8)) is unstructured plain text, while (Moon et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib13); Wu et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib20); Zhou et al. [2020](https://arxiv.org/html/2103.02548v3#bib.bib24)) provide structured knowledge graphs; (Moon et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib13)) uses Freebase (Bast et al. [2014](https://arxiv.org/html/2103.02548v3#bib.bib2)) as background knowledge. To the best of our knowledge, DuConv (Wu et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib20)) and KdConv (Zhou et al. [2020](https://arxiv.org/html/2103.02548v3#bib.bib24)) are the only two existing Chinese human-labeled knowledge-grounded dialogue datasets. DuConv combines unstructured text, such as short comments, with structured knowledge graphs as knowledge resources. One limitation of DuConv is its strong assumption that the conversation must transfer from one entity to another inside the knowledge graph, which is not always true of human conversation. KdConv constructs its knowledge graph from multiple resources. One defect of KdConv is the high overlap between dialogue and the provided knowledge: annotators heavily duplicate content from the knowledge graph, so the dialogues lack variability. Table [2](https://arxiv.org/html/2103.02548v3#Sx2.T2) shows statistics of corpora that share similar settings with ours.

| Dataset | Language | Document Type | Annotation Level | Topic | Avg. # turns | # uttrs |
| --- | --- | --- | --- | --- | --- | --- |
| CMU DoG | English | Text | Sentence | Film | 22.6 | 130k |
| Wizard of Wiki | English | Text | Sentence | Multiple | 9.0 | 202k |
| DuConv | Chinese | Text & KG | Dialogue | Film | 5.8 | 91k |
| KdConv | Chinese | Text & KG | Sentence | Film, Music, Travel | 19.0 | 86k |
| NaturalConv | Chinese | Text | Dialogue | Sports, Ent, Tech, Games, Edu, Health | 20.1 | 400k |

Table 2: Comparison of our NaturalConv corpus with other human-labeled document/knowledge-grounded dialogue corpora.

Dataset
-------

In this section, we describe the creation of NaturalConv in detail. NaturalConv is designed as a multi-turn document-grounded dialogue dataset with scenarios and natural conversational properties. The created dialogues are expected to exhibit three key points: meaningful content, a scenario, and naturalness. In the following, we describe how we design the data collection.

### Dialogue Collection

Collect and filter documents: First, we believe a dialogue can have meaningful content only when grounded in a common topic. We therefore collect news articles as the grounding documents for dialogue. At the same time, we avoid professional materials or topics, because most daily conversations are leisurely talk about everyday events. In total, we collect 6,500 news articles across six categories, published between September 2019 and December 2019. Table [3](https://arxiv.org/html/2103.02548v3#Sx3.T3) shows the distribution of each category. The uneven distribution reflects the popularity of the different categories and the suitability of articles as grounding documents. For example, we filter out politics and economics news due to their sensitivity, remove short news for poor informativeness, and drop overly long news so that annotators do not spend too much time reading.

Create dialogues with the grounded document: Second, we recruit annotators to generate multi-turn conversations related to a news article. The significant difference between our work and others is that we place few restrictions or assumptions on participants. For example, we do not impose an explicit conversation goal, as in (Wu et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib20)), which requires the conversation to transfer to a different entity within the given topic. In addition, both participants in our data collection have access to the news article, rather than only one participant acting as an expert with access to the material while the other acts as an apprentice without it, as in (Dinan et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib5)). We impose only the following three requirements:

* The dialogue must be no shorter than 10 turns per participant, and the content of the news article must be mentioned. However, we do not specify how much of the content must be covered or how it should be mentioned. Once anything in the news is touched upon, participants may shift the topic immediately, as long as the shift is smooth. The shift can be to anything, such as content related to the topic but not to the article, or even unrelated to the topic. Table 1 and Table 2 in the supplemental material give a complete example: the news from Table 1 is the common topic for the dialogue in Table 2, and interestingly the topic shifts from news about the German F1 racing driver Michael Schumacher to the Chinese TV show "Where Are We Going? Dad".
* Every conversation must happen in a scenario. The participants can choose a scenario as they prefer, or one that easily triggers the initial topic. Table 3 and Table 4 in the supplemental material give such an example: the news is from the technology category, about a newly released wireless headset from Google, and the dialogue happens on a playground where many people are exercising.
* Beyond these two instructions, the participants can talk about anything they want, as long as the conversation goes as naturally as possible and follows human logic. One example is in Table 5 and Table 6 of the supplemental material: the news is about a newly released electronic game, yet few utterances are about the game itself. The two participants talk at length about playing the game in childhood and plan to play it together in the near future; at the end of the dialogue, they even exchange account IDs for another game.

We employ a data supplier in China to carry out the dialogue collection project. We cooperate closely with our partner, monitoring every detail of the process. We enforce strict quality-control guidelines: checking for repetition between utterances and the news passage, ensuring participant combinations are as diverse as possible, monitoring utterance length to eliminate perfunctory behavior, etc. We also sample dialogues and read them manually as one of our quality-management methods. Any dialogue that fails our test is returned and rewritten until it passes our examination. We pay our data supplier roughly $50,000 for the task.

In our data collection, we do not ask our supplier to provide fine-grained sentence-level annotation of linguistic features, for two reasons: 1) the dialogue in our corpus is largely oral and too flexible for accurate and efficient annotation; 2) there is no obvious correspondence between sentences in the document and utterances in the dialogue. However, extra annotations from other parties in the community are always welcome.

### Corpus Statistics

Table [3](https://arxiv.org/html/2103.02548v3#Sx3.T3) summarizes the basic statistics of NaturalConv. In the following, we describe two specific metrics, computed on related corpora and ours, to highlight our corpus's features.

| | Sports | Ent | Tech | Games | Edu | Health | Total |
| --- | --- | --- | --- | --- | --- | --- | --- |
| # documents | 3,124 | 1,331 | 1,476 | 103 | 414 | 52 | 6,500 |
| # dialogues | 9,740 | 4,403 | 4,061 | 308 | 1,265 | 142 | 19,919 |
| # dialogues per document | 3.1 | 3.3 | 2.8 | 3.0 | 3.1 | 2.7 | 3.0 |
| # utterances | 195,643 | 88,457 | 81,587 | 6,180 | 25,376 | 2,852 | 400,095 |
| Avg. # utterances per dialogue | 20.1 | 20.1 | 20.1 | 20.1 | 20.1 | 20.1 | 20.1 |
| Avg. # tokens per utterance | 12.0 | 12.4 | 12.3 | 12.1 | 12.6 | 12.5 | 12.2 |
| Avg. # characters per utterance | 17.8 | 18.1 | 18.6 | 17.8 | 18.1 | 18.3 | 18.1 |
| Avg. # tokens per dialogue | 241.1 | 248.2 | 247.5 | 242.9 | 248.3 | 251.1 | 244.8 |
| Avg. # characters per dialogue | 357.5 | 363.2 | 372.8 | 356.5 | 356.5 | 368.0 | 363.1 |

Table 3: Statistics of NaturalConv.

Similarity between document and dialogues: As mentioned above, our conversations include content only indirectly related to the document. We use the BLEU similarity between a document and its dialogues to measure, in our dataset and existing datasets, how much of what people say comes directly from the background document. A lower similarity indicates that the dialogue repeats less of the document and is potentially more natural and informative. In this evaluation, we compare our dataset with CMU DoG (Zhou, Prabhumoye, and Black [2018](https://arxiv.org/html/2103.02548v3#bib.bib25)), DuConv (Wu et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib20)), and KdConv (Zhou et al. [2020](https://arxiv.org/html/2103.02548v3#bib.bib24)). In these datasets, each dialogue has dialogue-level grounding information: CMU DoG and ours use plain text, while the other two use structured KGs. As Table [4](https://arxiv.org/html/2103.02548v3#Sx3.T4) shows, our dataset has the lowest BLEU1 score and a significantly lower BLEU2 score than the other datasets.

| | BLEU1 | BLEU2 |
| --- | --- | --- |
| CMU DoG | 23.7 | 9.64 |
| DuConv | 16.53 | 10.58 |
| KdConv | 35.69 | 26.27 |
| NaturalConv | 16.17 | 6.13 |

Table 4: Similarity between the grounding document and dialogues across datasets.
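
The document–dialogue similarity above can be illustrated with a minimal sketch. Note the whitespace tokenizer, the toy sentences, and the omission of BLEU's brevity penalty are simplifications for illustration, not the paper's exact evaluation setup.

```python
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu_n(reference, hypothesis, n):
    """Clipped n-gram precision (BLEU-n without the brevity penalty)."""
    hyp = ngrams(hypothesis, n)
    if not hyp:
        return 0.0
    ref_counts = Counter(ngrams(reference, n))
    hyp_counts = Counter(hyp)
    # Each hypothesis n-gram counts only up to its frequency in the reference.
    clipped = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
    return clipped / len(hyp)

# Toy example: the document is the reference, the dialogue the hypothesis.
doc = "Promes scored a hat-trick and Ajax won five nil".split()
dialogue = "did you see Promes scored a hat-trick last night".split()
print(round(bleu_n(doc, dialogue, 1), 2))
print(round(bleu_n(doc, dialogue, 2), 2))
```

A lower score on real data would indicate, as in Table 4, that the dialogue copies less of the grounding document.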

Variability between dialogues: Since we place few restrictions on annotators, they are allowed not only to talk about different aspects of the document, but also to shift topics easily. We believe this leads to better variability among dialogues given the same or similar documents. To verify this, we measure the variability between dialogues produced by different pairs of annotators given the same background document/knowledge grounding. Specifically, for each document/knowledge item in CMU DoG, DuConv, KdConv, and NaturalConv, we randomly choose 3 corresponding dialogues, compute the BLEU score for each possible pair of the 3 dialogues, and average them. Finally, we average the scores across all documents/knowledge items to represent the overall variability of the corpus. A higher score means lower variability between dialogues. Results in Table [5](https://arxiv.org/html/2103.02548v3#Sx3.T5) indicate our dataset has the best variability.
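
The pairwise averaging step can be sketched as follows; clipped unigram precision stands in for full BLEU, and the three toy dialogues are hypothetical, so this is an illustration of the procedure rather than the paper's exact computation.

```python
from collections import Counter
from itertools import permutations

def bleu1(ref, hyp):
    # Clipped unigram precision as a stand-in for sentence-level BLEU1.
    ref_counts = Counter(ref)
    clipped = sum(min(c, ref_counts[t]) for t, c in Counter(hyp).items())
    return clipped / len(hyp)

def avg_pairwise_bleu(dialogues):
    # Average BLEU over every ordered pair of dialogues sharing one document.
    pairs = list(permutations(dialogues, 2))
    return sum(bleu1(r, h) for r, h in pairs) / len(pairs)

# Three hypothetical dialogues grounded on the same news article.
d1 = "promes scored a hat-trick ajax won".split()
d2 = "i watched ajax last night great game".split()
d3 = "did you finish the homework for class".split()
print(round(avg_pairwise_bleu([d1, d2, d3]), 3))
```

Averaging this per-document score over the whole corpus gives the Avg-BLEU figures of Table 5, where a lower value means more varied dialogues.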

Human evaluation on naturalness: We perform a manual evaluation of dialogue naturalness for DuConv, KdConv, and NaturalConv. We randomly select 100 dialogue sessions from each of the three corpora. Two annotators rate each dialogue on three grades: natural (3), fair (2), unnatural (1). Our corpus achieves the best average naturalness score of 2.8, while DuConv and KdConv score 2.4 and 2.0, respectively.

| | Avg-BLEU1 | Avg-BLEU2 |
| --- | --- | --- |
| CMU DoG | 33.15 | 14.62 |
| NaturalConv | 32.36 | 12.56 |

Table 5: Similarity between dialogues grounded on the same topic across datasets.

Methods
-------

In this section, we discuss the methods we use for conversation modeling and response generation on the collected NaturalConv corpus. Both retrieval-based and generation-based methods are evaluated. To further explore the role of document grounding in dialogue, we extend the generation models to integrate the retrieved document content most related to the current dialogue context through an attention mechanism.

### Retrieval-based Model

Given a dialogue context $\mathbf{X}$, the retrieval-based dialogue system responds to the context by searching for the best response $\mathbf{y}$ from the NaturalConv corpus. We adopt an IR model that finds the most similar query in the retrieval corpus and then uses its response as the result. Similarity is measured by the BM25 score between bags of words.

Recently, BERT-based retrieval dialogue models (Whang et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib19)) have shown promising performance in dialogue systems. We further incorporate a BERT model to re-rank the outputs of the BM25 retrieval model. This Retrieval-BERT model is fine-tuned from the "bert-base-chinese" backbone for sequence classification on the training data, with the ground-truth response labeled "1" and the top $K-1$ responses from the BM25 retrieval method labeled "0". During inference, it re-ranks the top $K$ responses from the BM25 retrieval method, given the dialogue context as the query, according to the sequence classification scores of the fine-tuned model.
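
The BM25 retrieval step can be sketched with a self-contained Okapi BM25 index over tokenized queries; the toy (query, response) pairs and the whitespace tokenizer are assumptions for illustration, not the paper's actual index or corpus.

```python
import math
from collections import Counter

class BM25:
    """Minimal Okapi BM25 index over pre-tokenized documents."""
    def __init__(self, docs, k1=1.5, b=0.75):
        self.docs = docs
        self.k1, self.b = k1, b
        self.avgdl = sum(len(d) for d in docs) / len(docs)
        self.tf = [Counter(d) for d in docs]
        df = Counter(t for d in docs for t in set(d))
        n = len(docs)
        # Smoothed idf, always positive.
        self.idf = {t: math.log((n - c + 0.5) / (c + 0.5) + 1) for t, c in df.items()}

    def score(self, query, i):
        s = 0.0
        for t in query:
            if t not in self.idf:
                continue
            f = self.tf[i][t]
            denom = f + self.k1 * (1 - self.b + self.b * len(self.docs[i]) / self.avgdl)
            s += self.idf[t] * f * (self.k1 + 1) / denom
        return s

    def top_k(self, query, k=1):
        ranked = sorted(range(len(self.docs)),
                        key=lambda i: self.score(query, i), reverse=True)
        return ranked[:k]

# Hypothetical (query, response) pairs standing in for the retrieval corpus.
pairs = [
    ("which game did you watch last night".split(), "I watched the Ajax match"),
    ("have you had breakfast yet".split(), "not yet, I woke up late"),
    ("who scored the hat-trick".split(), "Promes scored three goals"),
]
index = BM25([q for q, _ in pairs])
best = index.top_k("did you watch the game last night".split(), k=1)[0]
print(pairs[best][1])
```

In the full pipeline, `top_k` with a larger `k` would supply the candidate list that Retrieval-BERT re-ranks.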

### Generation-based Model

In our multi-turn conversation setting, the generation-based dialogue models take the concatenation of the past $k$ dialogue utterances as input $\mathbf{X}=\{\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_k\}$, and output a natural-language response consisting of a sequence of words $\mathbf{y}=\{y_1,y_2,\ldots,y_n\}$, where $n$ is the maximum possible number of words in the response sequence. Training the generation-based dialogue models requires a dataset $\mathcal{D}=\{(\mathbf{X}^i,\mathbf{y}^i)\}_{i=1}^{N}$ containing $N$ gold input-output dialogue pairs $(\mathbf{X}^i,\mathbf{y}^i)$. To train the parameters $\theta$ of the generative model, we use Maximum Likelihood Estimation (MLE) to minimize the loss $\mathcal{L}=\sum_{i=1}^{N}\mathcal{L}^i(\mathbf{X}^i,\mathbf{y}^i;\theta)$, where

$$\mathcal{L}^i(\mathbf{X}^i,\mathbf{y}^i;\theta)=-\sum_{t=1}^{|\mathbf{y}^i|}\log P(y_t^i\mid\mathbf{X}^i,y_{<t}^i;\theta).$$


We implement the generation-based model as a Seq2Seq model consisting of an encoder and a decoder. Different encoder-decoder structures are used, including GRU and LSTM with the attention mechanism, as well as the Transformer encoder-decoder model.

### Model with Document Grounding

To further incorporate document grounding information in the generation-based models, we split the document $\mathbf{S}$ into a sequence of sentences $\mathbf{S}=\{\mathbf{s}_1,\mathbf{s}_2,\dots,\mathbf{s}_m\}$. Given the dialogue context input $\mathbf{X}=\{\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_k\}$ consisting of $k$ contextual dialogue utterances, we retrieve from $\mathbf{S}$ the sentence $\mathbf{s}_*$ that is most similar to the most recent dialogue utterance $\mathbf{x}_k$. The generation model then takes the concatenation $(\mathbf{s}_*,\mathbf{X})$ as input to generate the response $\mathbf{y}$.
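The retrieval of $\mathbf{s}_*$ can be sketched as below. Note that the text does not pin down the similarity function, so this illustration uses plain token overlap as a hypothetical choice:

```python
def most_similar_sentence(doc_sentences, last_utterance):
    """Pick s_* from the document sentences S: the sentence most similar
    (here: largest token overlap, an assumed measure) to the latest
    dialogue utterance x_k."""
    last_tokens = set(last_utterance.split())
    return max(doc_sentences, key=lambda s: len(set(s.split()) & last_tokens))

doc = ["the team won the final", "fans celebrated downtown"]
s_star = most_similar_sentence(doc, "did the team win the final ?")
# The generator then consumes the concatenation (s_*, X).
```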

To train the model with document grounding, we minimize the loss $\mathcal{L}=\sum_{i=1}^{N}\mathcal{L}^i(\mathbf{X}^i,\mathbf{y}^i,\mathbf{s}_*^i;\theta)$, where

$$\mathcal{L}^i(\mathbf{X}^i,\mathbf{y}^i,\mathbf{s}_*^i;\theta)=-\sum_{t=1}^{|\mathbf{y}^i|}\log P(y_t^i\mid\mathbf{X}^i,\mathbf{s}_*^i,y_{<t}^i;\theta).$$


We also implement the model with document grounding as a Seq2Seq model with the encoder-decoder structure. We use the attention mechanism for both the GRU and LSTM models to ensure the document information $\mathbf{s}_*$ can be incorporated. The Transformer encoder incorporates $\mathbf{s}_*$ through its self-attention mechanism. We denote these generation models incorporating documents as "GRU with Doc", "LSTM with Doc", and "Transformer with Doc".

Experiments
-----------

We conduct experiments to provide benchmark results for the NaturalConv dataset. Results of both retrieval-based and generation-based models are evaluated. Furthermore, we evaluate the performance of models with document grounding and discuss the results.

### Implementation Details

We implement the LSTM, GRU, BERT, and Transformer models with PyTorch. The experiments are performed on Nvidia Tesla P40 GPUs. The LTP (Che, Li, and Liu [2010](https://arxiv.org/html/2103.02548v3#bib.bib3)) Chinese word segmentation tool is used for tokenization.

Our Retrieval model uses a BM25 index to retrieve the most related response in the corpus. The Retrieval-BERT model re-ranks the top $K=10$ retrieved responses. Our GRU network consists of a one-layer bi-directional GRU encoder and a one-layer GRU decoder. Its embedding size is set to 300, and the hidden state size is set to 800. The LSTM network consists of a two-layer bi-directional LSTM encoder and a two-layer LSTM decoder. Both the embedding size and the hidden state size of the LSTM model are set to 500. The Transformer model contains a six-layer encoder and a six-layer decoder, with the embedding size, hidden unit size, and attention head number set to 1024, 4096, and 16, respectively.

Adam is used to optimize the GRU, LSTM, and Transformer models, with the initial learning rate set to $5\times10^{-5}$ for GRU, $1\times10^{-3}$ for LSTM, and $5\times10^{-4}$ for Transformer, respectively.

### Metrics

Our proposed models are tested with the following automatic evaluation metrics: 1) BLEU-1 and BLEU-2 scores (Tomeh et al. [2009](https://arxiv.org/html/2103.02548v3#bib.bib17)), 2) F1 score (Wu et al. [2019](https://arxiv.org/html/2103.02548v3#bib.bib20)), 3) DISTINCT-1 and DISTINCT-2 scores (Li et al. [2015](https://arxiv.org/html/2103.02548v3#bib.bib10)), 4) BERTScore (Zhang et al. [2020](https://arxiv.org/html/2103.02548v3#bib.bib21)).

The BLEU-1/2 scores evaluate the token (word) level similarity between the output response and the reference response. The F1 score, comparatively, evaluates the Chinese character level similarity between the output response and the reference response. We further use the DISTINCT-1/2 scores to evaluate the diversity of the generated sentences. Finally, the recently proposed BERTScore is used to obtain a similarity measure that does not require an exact match of tokens or characters. We denote the mean values of the BERTScore precision, recall, and F1 measure over all test pairs as $P_{\textrm{BERT}}$, $R_{\textrm{BERT}}$, and $F_{\textrm{BERT}}$, respectively. Our backbone model for BERTScore evaluation is the "bert-base-chinese" BERT model released by Devlin et al. ([2018](https://arxiv.org/html/2103.02548v3#bib.bib4)).
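Of these metrics, DISTINCT-n is the simplest to state precisely: the ratio of distinct n-grams to total n-grams, pooled over all generated responses. A minimal sketch:

```python
def distinct_n(responses, n):
    """DISTINCT-n (Li et al. 2015): number of unique n-grams divided by
    the total number of n-grams across all generated responses."""
    ngrams = []
    for resp in responses:
        tokens = resp.split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

d1 = distinct_n(["i like it", "i like tea"], 1)  # 4 unique unigrams / 6 total
```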

### Data Split

We split the documents and their corresponding dialogues from the NaturalConv corpus into train, dev, and test sets. As a result, dialogues grounded in the same document can only appear together in one of the train, dev, or test sets. The total number of documents per topic, as well as the total number of dialogue pairs for each set, are presented in Table [6](https://arxiv.org/html/2103.02548v3#Sx5.T6). The data split will be released together with the corpus.
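The document-level split described above can be sketched as follows. This is a hypothetical helper mirroring the stated constraint, not the released code:

```python
import random

def split_by_document(doc_to_dialogues, n_dev_docs, n_test_docs, seed=0):
    """Split at the document level so that every dialogue grounded in the
    same document lands in exactly one of train/dev/test.
    `doc_to_dialogues` maps doc_id -> list of dialogues for that document."""
    doc_ids = sorted(doc_to_dialogues)
    random.Random(seed).shuffle(doc_ids)
    test_ids = set(doc_ids[:n_test_docs])
    dev_ids = set(doc_ids[n_test_docs:n_test_docs + n_dev_docs])
    splits = {"train": [], "dev": [], "test": []}
    for doc_id, dialogues in doc_to_dialogues.items():
        if doc_id in test_ids:
            splits["test"].extend(dialogues)
        elif doc_id in dev_ids:
            splits["dev"].extend(dialogues)
        else:
            splits["train"].extend(dialogues)
    return splits
```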

|  | Train | Dev | Test |
| --- | --- | --- | --- |
| # Doc - Sports | 9500 | 120 | 120 |
| # Doc - Ent | 4283 | 60 | 60 |
| # Doc - Tech | 3941 | 60 | 60 |
| # Doc - Games | 292 | 8 | 8 |
| # Doc - Edu | 1233 | 16 | 16 |
| # Doc - Health | 126 | 8 | 8 |
| # Dialogue Pairs | 369802 | 5183 | 5191 |

Table 6: Statistics of our train, dev, and test sets.

### Results

The results for the generation-based conversation models are given in Table [7](https://arxiv.org/html/2103.02548v3#Sx5.T7). We further provide the performance of models incorporating document grounding in Table [8](https://arxiv.org/html/2103.02548v3#Sx5.T8).

|  | BLEU-1/2 | DISTINCT-1/2 | F1 | $P_{\textrm{BERT}}$ / $R_{\textrm{BERT}}$ / $F_{\textrm{BERT}}$ |
| --- | --- | --- | --- | --- |
| Retrieval | 23.30 / 13.12 | 8.48 / 43.21 | 23.39 | 63.78 / 64.22 / 63.90 |
| Retrieval-BERT | 24.96 / 13.82 | 8.27 / 42.31 | 24.87 | 65.35 / 64.87 / 65.01 |
| GRU | 27.89 / 14.23 | 1.80 / 8.17 | 26.61 | 67.49 / 65.35 / 66.32 |
| LSTM | 26.09 / 13.35 | 0.98 / 4.30 | 26.65 | 67.97 / 64.49 / 66.09 |
| Transformer | 25.17 / 12.39 | 2.91 / 15.32 | 25.73 | 65.37 / 64.55 / 64.84 |

Table 7: Performances of the retrieval-based and generation-based dialogue models without incorporating information from the document on the NaturalConv corpus.

|  | BLEU-1/2 | DISTINCT-1/2 | F1 | $P_{\textrm{BERT}}$ / $R_{\textrm{BERT}}$ / $F_{\textrm{BERT}}$ | Manual (1-3) |
| --- | --- | --- | --- | --- | --- |
| GRU | 27.89 / 14.23 | 1.80 / 8.17 | 26.61 | 67.49 / 65.35 / 66.32 | 1.97 |
| GRU with Doc | 27.86 / 14.24 | 1.87 / 8.73 | 26.70 | 67.39 / 65.32 / 66.25 | 2.03 |
| LSTM | 26.09 / 13.35 | 0.98 / 4.30 | 26.65 | 67.97 / 64.49 / 66.09 | 2.10 |
| LSTM with Doc | 26.79 / 14.54 | 2.13 / 9.49 | 28.08 | 68.50 / 65.60 / 66.92 | 2.16 |
| Transformer | 25.17 / 12.39 | 2.91 / 15.32 | 25.73 | 65.37 / 64.55 / 64.84 | 2.04 |
| Transformer with Doc | 24.47 / 13.12 | 2.77 / 14.35 | 27.01 | 67.39 / 65.06 / 66.08 | 2.09 |

Table 8: Performances of the generation-based dialogue models with or without incorporating information from the document on the NaturalConv corpus.

Comparison between dialogue models. From Table [7](https://arxiv.org/html/2103.02548v3#Sx5.T7), we can see that the GRU and LSTM models outperform the retrieval-based models in many cases in terms of the similarity metrics, including BLEU-1/2, F1, and BERTScore. On the other hand, the retrieval-based models significantly outperform the generation-based models on the DISTINCT-1/2 metrics. This indicates that the GRU and LSTM models can generate dialogue responses that are more similar to the gold responses, but these generation-based Seq2Seq models are still not capable of generating responses as diverse as the human responses retrieved by the retrieval model.

Performances of generation-based models. We can further compare the performances of the different generation-based dialogue models in Table [7](https://arxiv.org/html/2103.02548v3#Sx5.T7). In our experiments, GRU, LSTM, and Transformer are all trained only on the NaturalConv corpus. GRU and LSTM have broadly similar performance in terms of the similarity between their generated responses and the ground truth responses. Comparatively, the Transformer model, which is significantly larger than GRU and LSTM in model size, performs slightly worse than both GRU and LSTM on the similarity measures, including BLEU-1/2, F1, and BERTScore, on our NaturalConv corpus with 369,802 dialogue pairs for training. On the other hand, the Transformer model clearly outperforms both GRU and LSTM on the DISTINCT-1/2 measures, indicating that its responses are more diverse than those of GRU and LSTM.

Performances with document grounding. In Table [8](https://arxiv.org/html/2103.02548v3#Sx5.T8), we compare the performances of the GRU, LSTM, and Transformer models with and without incorporating information from the document through our model with document grounding. In this experiment, we observe that the performances of GRU and GRU with Doc are similar on all metrics. The LSTM with Doc model improves over the LSTM model on the similarity measures, including BLEU-1/2, F1, and BERTScore, as well as on the diversity measures DISTINCT-1/2. Similar improvements can be found for the Transformer with Doc model compared to the baseline Transformer.

Human evaluation of generation-based models. We perform a manual evaluation of generation quality on 100 randomly selected queries from the test set for each model in Table [8](https://arxiv.org/html/2103.02548v3#Sx5.T8). The responses are evaluated by two annotators with an overall quality score in three grades: good (3), fair (2), bad (1). The averaged scores in Table [8](https://arxiv.org/html/2103.02548v3#Sx5.T8) show slight performance gains for the models with the document compared to their corresponding models without it.

In summary, the improvement from incorporating document information into dialogue response generation is still not significant. This indicates that the current methodology could still be limited in discovering and exploiting from the document the information that humans are most likely to use in the multi-turn topic-driven conversation setting. Moreover, considering that NaturalConv dialogues include information outside of the document, utilizing knowledge beyond the document or corpus could also be beneficial for further improving performance.

Conclusion
----------

In this paper, we propose a Chinese multi-turn topic-driven conversation corpus, NaturalConv. It contains 400K utterances and 19.9K dialogues, with an average of 20.1 turns per dialogue. Each dialogue is based on a shared topic, and the two participants are free to talk about anything as long as any specific aspect of the topic is mentioned. The participants are also required to assume a scenario for the conversation. Therefore, the dialogues contain various conversational elements such as chitchat, discussions about the topic, and possible extensions of the topic. We believe this dataset provides a good benchmark for evaluating the ability to model topic-driven free-style conversations. In addition, we provide results of several benchmark models to facilitate further research. Experiments demonstrate that our current models cannot provide significant improvement by introducing document knowledge; therefore, there is much room for future work in topic-grounded conversation modeling.

Acknowledgements
----------------

We thank all the annotators for annotating the dialogues. The views and opinions expressed in the dataset, including the documents and the dialogues, do not necessarily reflect those of Tencent or the authors of this paper.

References
----------

* Adiwardana et al. (2020) Adiwardana, D.; Luong, M.-T.; So, D.R.; Hall, J.; Fiedel, N.; Thoppilan, R.; Yang, Z.; Kulshreshtha, A.; Nemade, G.; Lu, Y.; and Le, Q.V. 2020. Towards a Human-like Open-Domain Chatbot.
* Bast et al. (2014) Bast, H.; Bäurle, F.; Buchhold, B.; and Haußmann, E. 2014. Easy access to the freebase dataset. In _Proceedings of the 23rd International Conference on World Wide Web_, 95–98.
* Che, Li, and Liu (2010) Che, W.; Li, Z.; and Liu, T. 2010. LTP: A Chinese language technology platform. In _Proceedings of the 23rd International Conference on Computational Linguistics: Demonstrations_, 13–16. Association for Computational Linguistics.
* Devlin et al. (2018) Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. _arXiv preprint arXiv:1810.04805_.
* Dinan et al. (2019) Dinan, E.; Roller, S.; Shuster, K.; Fan, A.; Auli, M.; and Weston, J. 2019. Wizard of Wikipedia: Knowledge-powered Conversational Agents. In _Proceedings of the International Conference on Learning Representations (ICLR)_.
* Gao et al. (2019) Gao, J.; Galley, M.; Li, L.; et al. 2019. Neural approaches to conversational AI. _Foundations and Trends® in Information Retrieval_ 13(2-3): 127–298.
* Ghazvininejad et al. (2018) Ghazvininejad, M.; Brockett, C.; Chang, M.-W.; Dolan, B.; Gao, J.; Yih, W.-t.; and Galley, M. 2018. A knowledge-grounded neural conversation model. In _Thirty-Second AAAI Conference on Artificial Intelligence_.
* Gopalakrishnan et al. (2019) Gopalakrishnan, K.; Hedayatnia, B.; Chen, Q.; Gottardi, A.; Kwatra, S.; Venkatesh, A.; Gabriel, R.; Hakkani-Tür, D.; and AI, A.A. 2019. Topical-Chat: Towards Knowledge-Grounded Open-Domain Conversations. _Proc. Interspeech 2019_ 1891–1895.
* Huang, Zhu, and Gao (2019) Huang, M.; Zhu, X.; and Gao, J. 2019. Challenges in building intelligent open-domain dialog systems. _arXiv preprint arXiv:1905.05709_.
* Li et al. (2015) Li, J.; Galley, M.; Brockett, C.; Gao, J.; and Dolan, B. 2015. A diversity-promoting objective function for neural conversation models. _arXiv preprint arXiv:1510.03055_.
* Liu et al. (2018) Liu, S.; Chen, H.; Ren, Z.; Feng, Y.; Liu, Q.; and Yin, D. 2018. Knowledge Diffusion for Neural Dialogue Generation. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, 1489–1498. Melbourne, Australia: Association for Computational Linguistics. doi:10.18653/v1/P18-1138. URL https://www.aclweb.org/anthology/P18-1138.
* Moghe et al. (2018) Moghe, N.; Arora, S.; Banerjee, S.; and Khapra, M.M. 2018. Towards Exploiting Background Knowledge for Building Conversation Systems. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_, 2322–2332. Brussels, Belgium: Association for Computational Linguistics. doi:10.18653/v1/D18-1255. URL https://www.aclweb.org/anthology/D18-1255.
* Moon et al. (2019) Moon, S.; Shah, P.; Kumar, A.; and Subba, R. 2019. OpenDialKG: Explainable Conversational Reasoning with Attention-based Walks over Knowledge Graphs. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, 845–854. Florence, Italy: Association for Computational Linguistics. doi:10.18653/v1/P19-1081. URL https://www.aclweb.org/anthology/P19-1081.
* Qian et al. (2018) Qian, Q.; Huang, M.; Zhao, H.; Xu, J.; and Zhu, X. 2018. Assigning Personality/Profile to a Chatting Machine for Coherent Conversation Generation. In _IJCAI_, 4279–4285.
* Qin et al. (2019) Qin, L.; Galley, M.; Brockett, C.; Liu, X.; Gao, X.; Dolan, B.; Choi, Y.; and Gao, J. 2019. Conversing by reading: Contentful neural conversation with on-demand machine reading. _arXiv preprint arXiv:1906.02738_.
* Roller et al. (2020) Roller, S.; Dinan, E.; Goyal, N.; Ju, D.; Williamson, M.; Liu, Y.; Xu, J.; Ott, M.; Shuster, K.; Smith, E.M.; Boureau, Y.-L.; and Weston, J. 2020. Recipes for building an open-domain chatbot.
* Tomeh et al. (2009) Tomeh, N.; et al. 2009. Complexity-based phrase-table filtering for statistical machine translation. In _Summit XII_. Citeseer.
* Tuan, Chen, and Lee (2019) Tuan, Y.-L.; Chen, Y.-N.; and Lee, H.-y. 2019. DyKgChat: Benchmarking Dialogue Generation Grounding on Dynamic Knowledge Graphs. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, 1855–1865. Hong Kong, China: Association for Computational Linguistics. doi:10.18653/v1/D19-1194. URL https://www.aclweb.org/anthology/D19-1194.
* Whang et al. (2019) Whang, T.; Lee, D.; Lee, C.; Yang, K.; Oh, D.; and Lim, H. 2019. Domain adaptive training BERT for response selection. _arXiv preprint arXiv:1908.04812_.
* Wu et al. (2019) Wu, W.; Guo, Z.; Zhou, X.; Wu, H.; Zhang, X.; Lian, R.; and Wang, H. 2019. Proactive Human-Machine Conversation with Explicit Conversation Goal. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, 3794–3804. Florence, Italy: Association for Computational Linguistics. doi:10.18653/v1/P19-1369. URL https://www.aclweb.org/anthology/P19-1369.
* Zhang et al. (2020) Zhang, T.; Kishore, V.; Wu, F.; Weinberger, K.Q.; and Artzi, Y. 2020. BERTScore: Evaluating Text Generation with BERT. In _International Conference on Learning Representations_. URL https://openreview.net/forum?id=SkeHuCVFDr.
* Zhou et al. (2018a) Zhou, H.; Huang, M.; Zhang, T.; Zhu, X.; and Liu, B. 2018a. Emotional chatting machine: Emotional conversation generation with internal and external memory. In _Thirty-Second AAAI Conference on Artificial Intelligence_.
* Zhou et al. (2018b) Zhou, H.; Young, T.; Huang, M.; Zhao, H.; Xu, J.; and Zhu, X. 2018b. Commonsense Knowledge Aware Conversation Generation with Graph Attention. In _IJCAI_, 4623–4629.
* Zhou et al. (2020) Zhou, H.; Zheng, C.; Huang, K.; Huang, M.; and Zhu, X. 2020. KdConv: A Chinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. Association for Computational Linguistics.
* Zhou, Prabhumoye, and Black (2018) Zhou, K.; Prabhumoye, S.; and Black, A.W. 2018. A Dataset for Document Grounded Conversations. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_, 708–713. Brussels, Belgium: Association for Computational Linguistics. doi:10.18653/v1/D18-1076. URL https://www.aclweb.org/anthology/D18-1076.
* Zhou, Huang, and Zhu (2018) Zhou, M.; Huang, M.; and Zhu, X. 2018. An interpretable reasoning network for multi-relation question answering. _arXiv preprint arXiv:1801.04726_.
* Zhu et al. (2017) Zhu, W.; Mo, K.; Zhang, Y.; Zhu, Z.; Peng, X.; and Yang, Q. 2017. Flexible end-to-end dialogue system for knowledge grounded conversation. _arXiv preprint arXiv:1709.04264_.